\section{Introduction}
Einstein's gravity has produced an enormous advance in our understanding of the universe, supported by both theoretical and experimental evidence. The observation of gravitational waves from a binary black hole merger, as reported in Ref. \cite{Abbott:2016blz}, was one of the crucial tests of general relativity (GR). Despite this huge success, the last three decades have brought some questions that cannot be answered by GR, related both to theoretical aspects and to observational results. The very interesting review \cite{Capozziello:2011et} points out basically two classes of ``shortcomings in GR'', on the UV and IR scales: in the UV region one has the quantum gravity problem, and in the IR regime the dark energy and dark matter issues. In order to address these questions, new approaches known as extended theories of gravity (ETG) were proposed. Such theories start with the inclusion of higher-order terms in curvature invariants in the effective Lagrangian as, for instance, $R^2$ and $R^{\alpha \beta \gamma \delta} R_{\alpha \beta \gamma \delta}$ \cite{Gottlober:1989ww, Adams:1990pn, Amendola:1993bg}, or through a minimal or non-minimal coupling of scalar fields with the geometry as, for example, $\phi^2 R$ \cite{Maeda:1988ab, Wands:1993uu, Capozziello:1998dq}.
The approach which takes into account a single scalar field in general relativity is known as Horndeski’s gravity \cite{Horndeski, Charmousis:2011bf, Charmousis:2011ea, Starobinsky:2016kua, Bruneton:2012zk, Cisterna:2014nua, Maselli:2016gxk,
Heisenberg:2018vsk, Hajian:2020dcq}.
This model is quite interesting because it is the most general scalar-tensor theory with second-order field equations in four dimensions.
Besides Horndeski's gravity, in this work we will consider two other essential components. The first one is the AdS/CFT correspondence or duality, whose fundamental concepts can be found in Refs. \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj, Aharony:1999ti}.
Over the two decades since its emergence, the many investigations around AdS/CFT still bring great insights into the study of strongly coupled systems. Among the many interesting features of this correspondence, one should notice the possibility of building models on the gravity side which are dual to phases of a nonconformal plasma at finite temperature or density. It is worthwhile to mention that in the recent Refs. \cite{Jiang:2017imk, Baggioli:2017ojd, Liu:2018hzo, Li:2018kqp, Li:2018rgn}, the authors present some applications of AdS/CFT in the Horndeski scenario.
The inclusion of another boundary in the original AdS/CFT duality leads to the AdS/BCFT correspondence, which has attracted a lot of attention in recent years. This proposal was presented by Takayanagi \cite{Takayanagi:2011zk} and soon after by Fujita, Takayanagi, and Tonni \cite{Fujita:2011fp} as an extension of the standard AdS/CFT correspondence.
The main point of AdS/CFT is that the AdS$_{d+1}$ space is dual to a conformal field theory in $d$ dimensions. In this case the AdS$_{d+1}$ symmetry group, $SO(2,d)$, is the same as the conformal symmetry group of the CFT$_d$. However, when one adds a new $(d-1)$-dimensional boundary to the CFT$_d$, one notices the breaking of $SO(2,d)$ into an $SO(2,d-1)$ group. Due to the insertion of this new boundary, the theory is known as a boundary conformal field theory (BCFT), and one can then construct a correspondence called AdS/BCFT \cite{Takayanagi:2011zk, Fujita:2011fp, Nozaki:2012qd, Fujita:2012fp, Melnikov:2012tb, Magan:2014dwa}.\footnote{As pointed out in Ref. \cite{Fujita:2011fp}, the relation between holography and BCFT was presented in the early 2000's, as shown in Refs. \cite{Karch:2000ct, Karch:2000gx}. } In particular, in this work we are going to deal with an AdS$_3$/BCFT$_2$ correspondence.
In the standard AdS/CFT correspondence, we have an asymptotically AdS spacetime N, which has a boundary M with a Dirichlet boundary condition on it. Following the AdS/BCFT prescription, we introduce an additional boundary\footnote{Note that the boundary Q in general is not asymptotically AdS.} Q wrapping N, whose intersection with M is the manifold P, as shown in Fig. \ref{BCFT}. On the hypersurface Q, the bulk metric of N should satisfy a Neumann boundary condition. Also, by looking at Fig. \ref{BCFT}, one should notice that the $d$-dimensional spacetime M is bounded by P, which also bounds Q. Within this construction, the $(d+1)$-dimensional spacetime N is limited by the region defined by $M \cup Q$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.30]{BCFT.pdf}
\caption{The figure depicts the holographic description of BCFT, where we have the asymptotically AdS bulk spacetime N with conformal boundary M and additional boundary Q. Here P is the intersection of M and Q.}\label{BCFT}
\end{center}
\end{figure}
The second essential component of this work is to deal with a finite temperature theory within the AdS/CFT correspondence. Following the standard procedure we include a black hole in the bulk geometry and interpret the Hawking temperature as the temperature of the CFT side.
In the past, $(2+1)$-dimensional gravity was considered a toy model since, as pointed out in Ref. \cite{Carlip:1995qv}, it has neither a Newtonian limit nor any propagating degrees of freedom. However, after the work of Bañados, Teitelboim, and Zanelli \cite{Banados:1992wn, Banados:1992gq}, it was realized that such a $(2+1)$-dimensional theory has a solution, known as the BTZ black hole, with some interesting features: an event horizon (and, if one includes rotation, an additional inner horizon) with thermodynamic properties similar to those of black holes in $(3+1)$ dimensions, and an asymptotically anti-de Sitter geometry.\footnote{Usually black holes are asymptotically flat.}
For our purposes in this paper we will choose to work with a planar BTZ black hole with a non-trivial axis profile.
\section{Methodological route and achievements}
Motivated by the recent application of the AdS/CFT duality to Horndeski gravity, together with the emergence of AdS/BCFT, and taking into account the importance of $(2+1)$-dimensional black holes, in this work we establish the AdS/BCFT correspondence in Horndeski gravity and study the thermodynamics of the corresponding AdS-BTZ black hole. We now present a summary of the main results achieved in this work:
\begin{itemize}
\item First, we studied the influence of the Horndeski parameters on the BCFT. Apart from a complete numerical solution, we derived an approximate analytical solution useful to determine the role of the Q profile and to perform the analysis of all quantities in this work;
\item We constructed a holographic renormalization for this setup and computed the free energy for both AdS-BTZ black hole and thermal AdS;
\item From the free energy, we computed the total and boundary entropies. In the case of the boundary entropy, one can see it as an extension of the results found in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp};
\item Assuming that the total entropy and the total area of the AdS-BTZ black hole are related by the Bekenstein-Hawking formula, we could see that the Horndeski gravity leads to an increase of the black hole area as we increase the absolute value of the Horndeski parameter. This feature of our model is not present in the usual BCFT setup as discussed, for instance, in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp, Magan:2014dwa}.
\item At zero temperature our setup exhibits a non-zero, or residual, boundary entropy, at least under certain conditions which depend on the tension of the Q profile. Besides, zero entropy seems to imply a minimum non-zero temperature.
\item From the free energy we also computed thermodynamic observables such as the heat capacity, the sound speed, and the trace anomaly, and plotted their behavior against the temperature. In particular, the trace anomaly goes to zero at high temperatures, indicating a restoration of the conformal symmetry or a non-trivial BCFT.
\item We studied the Hawking-Page phase transition (HPPT) in this setup. The presence of the Horndeski term allows us to analyse this transition through the free energy as a function of the temperature, as in other higher-dimensional theories. This differs from the results presented in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}, where the authors plot the free energy as a function of the tension of the Q profile.
\end{itemize}
This work is organized as follows. In Section \ref{v1}, we present our gravitational setup and how to combine it with the BCFT. In Section \ref{v2}, we consider a BTZ black hole in Horndeski gravity and study the influence of the Horndeski parameter on the Q profile. In Section \ref{v3}, by performing a holographic renormalization, we compute the euclidean on-shell actions associated with the BTZ black hole and the thermal AdS space. In Section \ref{BTZentro}, from the euclidean on-shell action, we derive the BTZ black hole entropy, and in Section \ref{v4} we present a systematic study of its thermodynamic quantities. In Section \ref{v5}, we present the Hawking-Page phase transition between the BTZ black hole and the thermal AdS space. Finally, in Section \ref{v6}, we present our conclusions and final comments.
\section{The Setup}\label{v1}
\subsection{Horndeski’s Lagrangian}
In this section, we outline Horndeski's gravity. The complete Horndeski Lagrangian density can be written in a general form as:
\begin{eqnarray}\label{LH}
{\cal L}_H = {\cal L}_{EH} + {\cal L}_2 + {\cal L}_3 + {\cal L}_4 + {\cal L}_5\,,
\end{eqnarray}
\noindent where ${\cal L}_{EH}=\kappa(R-2\Lambda)$ is the Einstein-Hilbert Lagrangian density with $\kappa=(16\pi G_N)^{-1}$, $G_N$ the Newton's gravitational constant, $R$ the Ricci scalar, $\Lambda$ the cosmological constant,
and\footnote{Since the publication of Ref. \cite{Charmousis:2011bf} one usually refers to ${\cal L}_2, {\cal L}_3, {\cal L}_4$ and ${\cal L}_5$ in Eq. \eqref{4L} as the {\it Fab Four} Lagrangians.}
\begin{eqnarray}
{\cal L}_2 &=& G_2(X, \phi)\,, \nonumber \\
{\cal L}_3 &=& -G_3(X, \phi) \Box \phi\,, \nonumber \\
{\cal L}_4 &=& -G_4(X, \phi) R + \partial_X G_4(X, \phi) \delta^{\mu \nu}_{\alpha \beta} \nabla^{\alpha}_{\mu} \phi \nabla^{\beta}_{\nu} \phi\,, \nonumber \\
{\cal L}_5 &=& -G_5(X, \phi) G_{\mu \nu} \nabla^{\mu} \nabla^{\nu}\phi - \frac{1}{6} \partial_X G_5(X, \phi) \delta^{\mu \nu \rho}_{\alpha \beta \gamma} \nabla^{\alpha}_{\mu} \phi \nabla^{\beta}_{\nu} \phi \nabla^{\gamma}_{\rho} \phi \,, \label{4L}
\end{eqnarray}
\noindent
with $G_2$, $G_3$, $G_4$, and $G_5$ being arbitrary functions of the scalar field $\phi$ and of $X$, defined by $X \equiv - \frac{1}{2} \nabla_{\mu} \phi \nabla^{\mu} \phi$, while $G_{\mu \nu}=R_{\mu\nu} -\frac 12 g_{\mu\nu}R$ is the Einstein tensor and $g_{\mu\nu}$ is the spacetime metric. For a detailed review of Horndeski's gravity, one can see Ref. \cite{Kobayashi:2019hrl}.
In particular, we are interested in a special subclass of Horndeski’s gravity which has a non-minimal coupling between the standard scalar term and the Einstein tensor
\cite{Charmousis:2011bf,Charmousis:2011ea,Starobinsky:2016kua,Bruneton:2012zk,Brito:2019ose,Santos:2020xox}. In this sense, Eq. \eqref{LH} becomes:
\begin{equation}\label{HFE}
{\cal L}_H \equiv {\cal L}_{EH} + {\cal L}_2 =(R-2\Lambda)-\frac{1}{2}(\alpha g_{\mu\nu}-\gamma G_{\mu\nu})\nabla^{\mu}\phi\nabla^{\nu}\phi\,,
\end{equation}
\noindent where the parameters $\alpha$ and $\gamma$, which control the strength of the kinetic couplings, have mass dimensions zero and $-2$, respectively. Note that the Lagrangian density in Eq. \eqref{HFE} is invariant under the shift symmetry $\phi\to\phi\, +$ constant and under the parity transformation $\phi\to-\phi$.
\subsection{AdS$_3$/BCFT$_2$ correspondence with Horndeski $\gamma$-dependence}
In this section, we discuss the AdS/BCFT correspondence within Horndeski gravity.
As discussed in Refs. \cite{Takayanagi:2011zk,Fujita:2011fp}, for the construction of boundary systems we need to take into account a Gibbons-Hawking surface term. Such a surface term for the $\gamma$-dependent Horndeski gravity was proposed in Ref. \cite{Li:2018rgn}. Motivated by these works, we propose the total action including the contributions coming from the surfaces N, Q, and P, besides the matter terms from N and Q and the counterterms from P:\footnote{One can recall the AdS/BCFT geometry from Fig. \ref{BCFT}.}
%
\begin{eqnarray}
S&=&S^{N}+S^{N}_{mat}+S^{Q}+S^{Q}_{mat}+S^{P}_{ct}\,, \label{S}
\end{eqnarray}
where $S^{N}_{mat}$ describes ordinary matter, assumed to be a perfect fluid, and
\begin{eqnarray}
&&S^{N}=\kappa\int_{N}{d^{3}x\sqrt{-g}\mathcal{L}_{H}}\\
&&S^{Q}=2\kappa\int_{bdry}{d^{2}x\sqrt{-h}\mathcal{L}_{bdry}}\\
&&S^{Q}_{mat}=2\int_{Q}{d^{2}x\sqrt{-h}\mathcal{L}_{mat}}\\
&&S^{P}_{ct}=2\kappa\int_{ct}{d^{2}x\sqrt{-h}\mathcal{L}_{ct}}\,,
\end{eqnarray}
where $\mathcal{L}_{H}$ was defined in Eq. \eqref{HFE} and
\begin{eqnarray}
\mathcal{L}_{bdry}&=&(K-\Sigma)+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K+\frac{\gamma}{4}\nabla_{\mu}\phi\nabla_{\nu}\phi K^{\mu\nu}\,, \label{3}\\
\mathcal{L}_{ct}&=&c_{0}+c_{1}R+c_{2}R^{ij}R_{ij}+c_{3}R^{2}+b_{1}(\partial_{i}\phi\partial^{i}\phi)^{2}+...\label{4}
\end{eqnarray}
Note that $\mathcal{L}_{mat}$ is a Lagrangian of possible matter fields on Q and $\mathcal{L}_{bdry}$ corresponds to the Gibbons-Hawking $\gamma$-dependent terms associated with the Horndeski gravity. In the boundary Lagrangian, Eq. \eqref{3}, $K_{\mu\nu}=h^{\beta}_{\mu}\nabla_{\beta}n_{\nu}$ is the extrinsic curvature,
$h_{\mu\nu}$ is the induced metric, and $n^\mu$ is the normal vector, all on the hypersurface Q. The trace of $K_{\mu\nu}$ is $K=h^{\mu\nu}K_{\mu\nu}$, and $\Sigma$ is the boundary tension on Q. Furthermore, ${\cal L}_{ct}$ collects boundary counterterms localized on P, which is required to be an asymptotically AdS spacetime. By imposing a Neumann boundary condition in Eq. \eqref{3}, we obtain\footnote{For more details on the geometry one can see \cite{Takayanagi:2011zk,Fujita:2011fp,Melnikov:2012tb,Magan:2014dwa}. Regarding the choice of boundary condition, see Ref. \cite{Compere:2008us}, where the authors discussed the Neumann boundary condition, among others.}
\begin{eqnarray}
K_{\alpha\beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}=\kappa {\cal S}^{Q}_{\alpha\beta}\,,\label{5}
\end{eqnarray}
where we defined
\begin{eqnarray}
&&H_{\alpha\beta}\equiv(\nabla_{\alpha}\phi\nabla_{\beta}\phi n^{\alpha}n^{\beta}-(\nabla\phi)^{2})(K_{\alpha\beta}-h_{\alpha\beta}K)-(\nabla_{\alpha}\phi\nabla_{\beta}\phi)h_{\alpha\beta}K\,,\label{6}\\
&&{\cal S}^{Q}_{\alpha\beta}=-\frac{2}{\sqrt{-h}}\frac{\delta S^{Q}_{mat}}{\delta h^{\alpha\beta}}\,.\label{7}
\end{eqnarray}
Considering $S^{Q}_{mat}$ to be constant, one has ${\cal S}^{Q}_{\alpha\beta}=0$. Then, we can write
\begin{eqnarray}
K_{\alpha\beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}=0\,.\label{8}
\end{eqnarray}
On the gravitational side, for Einstein-Horndeski gravity, assuming $S^{N}_{mat}$ to be constant and varying $S^N$ with respect to $g_{\alpha\beta}$ and $\phi$, and $S^Q$ with respect to $\phi$, we define, respectively:
\begin{eqnarray}
{\cal E}_{\alpha\beta}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-g}}\frac{\delta S^{N}}{\delta g^{\alpha\beta}}\,,\quad {\cal E}_{\phi}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-g}}\frac{\delta S^{N}}{\delta\phi} \,,\quad {\cal F}_{\phi}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-h}}\frac{\delta S^{Q}}{\delta\phi} \,.\nonumber\\
\end{eqnarray}
%
Then, one finds:
\begin{eqnarray}
{\cal E}_{\mu\nu}[g_{\mu\nu},\phi]&=&G_{\mu\nu}+\Lambda g_{\mu\nu}-\frac{\alpha}{2}\left(\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi\right)\label{11}\nonumber\\
&-&\frac{\gamma}{2}\left(\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\right)\nonumber\\
&-&\frac{\gamma}{2}\left(-(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}\right)\nonumber\\
&+&\frac{\gamma g_{\mu\nu}}{2}\left(-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right),\\
{\cal E}_{\phi}[g_{\mu\nu},\phi]&=&\nabla_{\mu}[(\alpha g^{\mu\nu}-\gamma G^{\mu\nu})\nabla_{\nu}\phi]\,,\label{12}\\
{\cal F}_{\phi}[g_{\mu\nu},\phi]&=&\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla^{2}\phi))K+\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi)K^{\mu\nu}\,.\label{12.1}
\end{eqnarray}
Note that, from the Euler-Lagrange equation, ${\cal E}_{\phi}[g_{\mu\nu},\phi]={\cal F}_{\phi}[g_{\mu\nu},\phi]$.
\section{Q-profile within BTZ black hole in Horndeski gravity}\label{v2}
In this section, we describe our BTZ black hole and construct the profile of the hypersurface Q, taking into account the influence of the Horndeski gravity.
The BTZ black hole in three dimensions is defined by \cite{Banados:1992wn,Banados:1992gq}:
\begin{eqnarray}
ds^{2}=\frac{L^{2}}{r^{2}}\left(-f(r)dt^{2}+dy^{2}+\frac{dr^{2}}{f(r)}\right)\,.\label{13}
\end{eqnarray}
A condition that deals with static configurations of black holes, which can be spherically symmetric for certain Galileons, was presented in Ref. \cite{Bravo-Gaete:2013dca} to discuss the no-hair theorem. To evade this no-hair theorem, we have to make the radial component of the conserved current vanish identically, without restricting the radial dependence of the scalar field:
\begin{equation}
\alpha g_{rr}-\gamma G_{rr}=0\label{14}.
\end{equation}
From this condition we have ${\cal E}_{\phi}[g_{rr},\phi]=0$. Thus, we consider just $\phi=\phi(r)$ and define $\phi^{'}(r)\equiv\psi(r)$. It can be shown that the equations ${\cal E}_{\phi}[g_{rr},\phi]={\cal E}_{rr}[g_{rr},\phi]=0$ are satisfied; they will be used to calculate the horizon function $f(r)$ and $\psi(r)$, so that:
\begin{eqnarray}
f(r)&=&\frac{\alpha L^{2}}{3\gamma}-\left(\frac{r}{r_{h}}\right)^{2},\label{15}\\
\psi^{2}(r)&=&-\frac{2L^{2}(\alpha+\gamma\Lambda)}{\alpha\gamma r^{2}f(r)}.\label{16}
\end{eqnarray}
In addition, in Eq. \eqref{15}, we choose the effective AdS radius such that $L^{-2}=\alpha/(3\gamma)$ \cite{Anabalon:2013oea,Santos:2020xox}. One can note that these solutions are asymptotically dS or AdS under the conditions $\alpha/\gamma>0$ or $\alpha/\gamma<0$, respectively. The scalar field given by
Eq.~(\ref{16}) should be real, and we therefore impose the constraints $\alpha>0$ and $\gamma<0$.
The Hawking temperature is given by %
\begin{equation}\label{hawk}
T_{H}=
\dfrac{1}{4\pi} |f'(r_h)|
=\frac{1}{2\pi r_{h}}\,,
\end{equation}
which is identified with the temperature of the dual BCFT, $T_{BCFT}=T_{H}$.
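As a quick numerical cross-check (a minimal sketch of ours, not part of the derivation; the value of $r_h$ is illustrative), one can evaluate $f(r)$ after the choice $\alpha L^{2}/(3\gamma)=1$ made above and recover Eq. \eqref{hawk} by a central finite difference:
\begin{verbatim}
import numpy as np

def f(r, r_h):
    # horizon function after fixing alpha*L^2/(3*gamma) = 1: f = 1 - (r/r_h)^2
    return 1.0 - (r / r_h)**2

def T_hawking(r_h, eps=1.0e-6):
    # T_H = |f'(r_h)| / (4*pi), with f'(r_h) from a central difference
    fprime = (f(r_h + eps, r_h) - f(r_h - eps, r_h)) / (2.0 * eps)
    return abs(fprime) / (4.0 * np.pi)

r_h = 1.0
print(T_hawking(r_h), 1.0 / (2.0 * np.pi * r_h))  # both ~ 0.159155
\end{verbatim}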
Now, in order to construct the Q boundary profile, one writes the induced metric on the BCFT side as
\begin{eqnarray}
ds^{2}_{\rm ind}=\frac{L^{2}}{r^{2}}\left(-f(r)dt^{2}+\frac{g^{2}(r)dr^{2}}{f(r)}\right)\,,
\end{eqnarray}
where $g^{2}(r)=1+{y'}^{2}(r)f(r)$ and $y{'}(r)=dy/dr$. Then, the normal vector on Q can be written as
\begin{eqnarray}
n^{\mu}=\frac{r}{Lg(r)}\, \left(0,\, 1, \, -{f(r)y{'}(r)}\right)\,.\label{17}
\end{eqnarray}
Fulfilling the no-hair theorem, which means
${\cal F}_{\phi}[h_{rr},\phi]=0$, one can solve Eq. \eqref{8}, so that
\begin{eqnarray}
y{'}(r)&=&\frac{(\Sigma L)}{\sqrt{1+\dfrac{\gamma\psi^{2}(r)}{4}-(\Sigma L)^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}}\,, \label{19}
\end{eqnarray}
\noindent with $\psi(r)$ given by Eq. \eqref{16}, and
\begin{eqnarray}y{'}(r)&=&\frac{(\Sigma L)}{\sqrt{1-\dfrac{\xi}{r^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}-(\Sigma L)^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}}\,,
\end{eqnarray}
where
\begin{eqnarray}\label{xi}
\xi&=&\frac{6\gamma}{\alpha}\left(1+\frac{\gamma\Lambda}{\alpha}\right)\,.
\end{eqnarray}
Note that $\xi$ is negative since $\alpha>0$ and $\gamma<0$. Besides, we can introduce $\Sigma L=\cos(\theta{'}) $ with $\theta{'}$ the angle between the positive direction of the $y$ axis and the hypersurface Q.
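Before turning to the figures, we note that the equation for $y'(r)$ can be integrated directly. The sketch below (our illustration, using the parameter choices of the figure captions, $\theta'=2\pi/3$, $\alpha=8/3$, $\Lambda=-1$, and an illustrative $r_h=1$) integrates $y'(r)$ with SciPy and produces curves of the type shown in the left panel of Fig. \ref{p0}:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

alpha, Lam, r_h, y0 = 8.0/3.0, -1.0, 1.0, 0.0
theta = 2.0 * np.pi / 3.0
SigmaL = np.cos(theta)                     # Sigma*L = cos(theta')

def xi(gamma):
    # Horndeski combination of Eq. (xi)
    return (6.0 * gamma / alpha) * (1.0 + gamma * Lam / alpha)

def yprime(r, y, gamma):
    # right-hand side of the profile equation for y(r)
    f = 1.0 - (r / r_h)**2
    root = 1.0 - xi(gamma) / (r**2 * f) - SigmaL**2 * f
    return [SigmaL / np.sqrt(root)]

for g in (-0.1, -0.2, -0.3):
    sol = solve_ivp(yprime, (1e-3, 0.999 * r_h), [y0], args=(g,))
    print(f"gamma = {g}: y(r -> r_h) ~ {sol.y[0, -1]:+.4f}")
\end{verbatim}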
\begin{figure}[!ht]
\vskip 1cm
\begin{center}
\includegraphics[scale=0.48]{f01.pdf}
\includegraphics[scale=0.48]{f02.pdf}
\caption{The figures show the Q boundary profile for the BTZ black hole within Horndeski gravity, considering the values $\theta=\theta'=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=0$ ({\sl solid}), $\gamma=-0.1$ ({\sl dashed}), $\gamma=-0.2$ ({\sl dot dashed}), and $\gamma=-0.3$ ({\sl thick}). The dashed parallel vertical lines represent the UV solution, Eq. \eqref{19.2}. The region between the Q curves represents the bulk N.
{\sl Left panel:} the complete numerical solution of Eq. (\ref{19}).
{\sl Right panel:} the approximate solution for small values of $\xi$, from Eq. (\ref{19.3}). }\label{p0}
\label{ylinhaz}
\end{center}
\end{figure}
The equation for $y{'}(r)$ can be solved numerically, and we obtain the Q-profile for the $\gamma$-dependent Horndeski terms as shown in the left panel of Fig. \ref{p0}.
Beyond the numerical solutions, we can analyze some particular cases, namely the UV and IR regimes. For the UV case, performing an expansion as $r\to 0$, Eq. (\ref{19}) becomes
\begin{eqnarray}
y_{_{UV}}(r)=y_{0}+\frac{r\cos(\theta{'})}{\sqrt{-\xi}}.\label{19.1}
\end{eqnarray}
In the above equation, considering $\xi\to-\infty$, we have
\begin{eqnarray}
y_{_{UV}}(r)=y_{0}={\rm constant}.\label{19.2}
\end{eqnarray}
This is equivalent to keeping $\xi$ finite and taking the zero-tension limit $\Sigma\to 0$.
Now, for the IR case, we take $r\to\infty$, so that Eq. \eqref{16} implies $\psi(r\to\infty)=0$, and then $\phi=$ constant, which ensures a genuine vacuum solution. Plugging this result into Eq. (\ref{19}), in the limit $r\to\infty$, we have
\begin{eqnarray}
y_{_{IR}}(r)=y_{0}+r_{h}\ln(r).\label{19.22}
\end{eqnarray}
Another approximate analytical solution for $y(r)$ can be obtained by performing an expansion for small $\xi$ in Eq. (\ref{19}). Considering this expansion up to first order, we obtain
\begin{eqnarray}
y_{_Q}\equiv y(r)&=&y_{0}+r_{h}\sinh^{-1}\left[\frac{r}{r_{h}}\cot(\theta{'})\right]
+\frac{\xi\cos(\theta{'})}{2r_{h}}\tan^{-1}\left[\frac{r}{r_{h}\sqrt{1-\cos^{2}(\theta{'})f(r)}}\right]
\cr
&+&\frac{\xi\cos(\theta{'})}{2}
\frac{\sqrt{1-\cos^{2}(\theta{'})f(r)}}{{r(-1+\cos^{2}(\theta{'}))^{2}}}
\left[{1+\frac{r^{2}\cos^{4}(\theta{'})}{r^{2}_{h}-r^{2}_{h}\cos^{2}(\theta{'})f(r)}}\right]
+\mathcal{O}(\xi^{2})\,. \label{19.3}
\end{eqnarray}
In the right panel of Fig. \ref{p0}, we plot the $y_{_Q}=y(r)$ profile from Eq. \eqref{19.3}, which represents our holographic description of BCFT within Horndeski's theory. Note that the bulk spacetime N is asymptotically AdS with two boundaries, M and Q. The intersection of M and Q is represented by P in Fig. \ref{BCFT}. It is worthwhile to mention that the Q profile is obtained from the solution $y_{_Q}=y(r)$.
Note that the UV solution $y_{_{UV}}(r)=$ constant, Eq. \eqref{19.2}, is similar to a lower-dimensional Randall-Sundrum (RS) brane, which is perpendicular to the boundary M.\footnote{A gravity theory containing solutions with non-zero tension of the RS branes was presented in Ref. \cite{Nozaki:2012qd}.} These RS-like branes are represented in Fig. \ref{p0} by the dashed parallel vertical lines. Further, as one increases the absolute value of the Horndeski parameter $\gamma$, one can see that the surface Q gets closer to the RS-like branes.
\section{Holographic renormalization}\label{v3}
In this section we present the holographic renormalization scheme used to compute the euclidean on-shell action, which is related to the free energy of the corresponding thermodynamic system.\footnote{One should notice that the free energy can also be calculated {\it via} the canonical thermodynamic potential by using the black hole entropy and the first law of thermodynamics. This approach can be seen, for instance, in Ref. \cite{Gursoy:2017wzz}.}
Holographic renormalization, as it is called within the AdS/CFT program, is a well-established approach to remove the divergences of infinite quantities on the gravitational side of the correspondence \cite{Henningson:1998gx,deBoer:1999tgo}. Such a renormalization on the gravity side works similarly to the usual renormalization of the gauge field theory on the boundary.
Our holographic scheme takes into account the contributions of the AdS/BCFT correspondence within Horndeski gravity. Let us start with the euclidean action given by $I_{E}=I_{bulk}+2I_{bdry}$, i.e.,
\begin{eqnarray}
&&I_{bulk}=
-\frac{1}{16\pi G_{N}}\int_{N}{d^{3}x\sqrt{g}\left[(R-2\Lambda)-\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\right]}\cr
&&-\frac{1}{8\pi G_{N}}\int_{M}{d^{2}x\sqrt{\bar{\gamma}}\left[(K^{(\bar{\gamma})}-\Sigma^{(\bar{\gamma})})+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K^{(\bar{\gamma})}\right.
\left.+\frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K^{(\bar{\gamma})}_{\mu\nu}\right]},\cr
&&\label{BT}
\end{eqnarray}
where $g$ is the determinant of the metric $g_{\mu\nu}$ on the bulk N, the induced metric and the surface tension on M are $\bar{\gamma}$ and $\Sigma^{(\bar{\gamma})}$, respectively, and the trace of the extrinsic curvature on the surface M is $K^{(\bar{\gamma})}$.
On the other hand, for the boundary, one has
\begin{eqnarray}
I_{bdry}&=&-\frac{1}{8\pi G_{N}}\int_{Q}{d^{2}x\sqrt{h}\left[(K-\Sigma)+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K+\frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K_{\mu\nu}\right]}\nonumber\\
&&-\frac{1}{16\pi G_{N}}\int_{N}{d^{3}x\sqrt{g}\left[(R-2\Lambda)-\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\right]}.\label{BT1}
\end{eqnarray}
Through the AdS/CFT correspondence, we know that IR divergences in AdS correspond to UV divergences in the CFT. This relation is known as the IR-UV connection.
Thus, for the AdS-BTZ black hole, we can remove this IR divergence by introducing a cutoff $\epsilon$:
\begin{eqnarray}
I_{bulk}&=&\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L}{r^{3}}d\tau dydr}+\frac{1}{32\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L^{3}}{r^{3}}\gamma G^{rr}\psi^{2}d\tau dydr}\nonumber\\
&&-\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}{\frac{L\sqrt{f(\epsilon)}}{\epsilon^{2}}d\tau dy}\,.\label{BT2}
\end{eqnarray}
Note that the coordinate $y$ in this equation, associated with the AdS-BTZ black hole, is not the same as $y_{_Q}=y(r)$, which is related to the Q-profile discussed in Section \ref{v2}.
Then, we have for the bulk term:
\begin{eqnarray}
&&I_{bulk}=-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\mathcal{O}(\epsilon)\,, \label{BT3}
\end{eqnarray}
where $\Delta y\equiv y-y_0$.
Analogously, for the boundary term we have
\begin{eqnarray}
I_{bdry}&=&\frac{1}{4\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y_{_Q}}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L}{r^{3}}d\tau dy\, dr}\cr &+&\frac{1}{32\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y_{_Q}}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L^{3}}{r^{3}}\gamma G^{rr}\psi^{2}d\tau dy\, dr}\label{BT4}\nonumber \\
&-&\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{r_{h}}_{\epsilon}{\frac{\Sigma L^{2}d\tau dr}{r^{2}\sqrt{1-(\Sigma L)^{2}f(r)}}}\cr
&+&\kappa\Sigma^{3}L^{2}\left(1+\frac{\gamma\Lambda}{\alpha}\right)\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{r_{h}}_{\epsilon}{\frac{\Sigma L^{2}d\tau dr}{r^{2}\sqrt{1-(\Sigma L)^{2}f(r)}}}\,.
\end{eqnarray}
This boundary action can be written as
\begin{eqnarray}
I_{bdry}&=&\frac{r_{h}L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\int^{r_{h}}_{\epsilon}{\frac{\Delta y_{_Q}(r)}{r^{3}}dr}\cr &+&\left(1-\frac{\xi\cos^{3}(\theta{'})}{2}\right)\frac{L\cot(\theta{'})\csc(\theta{'})}{4G_{N}}+\mathcal{O}(\epsilon),\label{BT5}
\end{eqnarray}
where $\Delta y_{_Q}(r)\equiv y(r)-y_0$, with $y(r)$ given by Eq. (\ref{19.3}), and the euclidean action $I_{E}=I_{bulk}+2I_{bdry}$ is given by:
\begin{eqnarray}
I_{E}&=&-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L}{G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)w(\xi,r_{h})\nonumber\\
&+&\left(1-\frac{\xi\cos^{3}(\theta{'})}{2}\right)\frac{L\cot(\theta{'})\csc(\theta{'})}{2G_{N}}\,,\label{BT6}
\end{eqnarray}
\noindent with
\begin{eqnarray}
&&w(\xi,r_{h})=\int^{r_{h}}_{\epsilon}{\frac{r_{h}\Delta y_{_Q}(r)}{r^{3}}dr}\,.\nonumber
\end{eqnarray}
Using the Q-profile for $\Delta y_{_Q}(r)$ from Eq. \eqref{19.3} in $w(\xi,r_{h})$,
we can extract an approximate analytical expression for the euclidean action $I_{E}$:
\begin{eqnarray}\label{freeEBH}
I_{E}&=&-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)-\frac{L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\sinh^{-1}(\cot(\theta{'}))\nonumber\\
&+&\frac{\xi q(\theta{'})L}{2G_{N}}+\frac{\xi h(\theta{'})\cot(\theta{'})}{2G_{N}r^{2}_{h}}\label{BT6.1}\,,
\end{eqnarray}
\noindent where
\begin{eqnarray}
h(\theta{'})&=&-\frac{(1+\pi/2)}{2\sin(\theta{'})}+\frac{\cot^{3}(\theta{'})\cos^{2}(\theta{'})}{(1+\cos^{2}(\theta{'}))}\tanh^{-1}\left(\frac{\sqrt{2}\cos(\theta{'})}{\sqrt{1+\cos^{2}(\theta{'})}}\right)\nonumber\\
&-&\frac{(1+\cos^{2}(\theta{'})+3\cos^{4}(\theta{'})-3\cos^{6}(\theta{'}))}{3\sin^{5}(\theta{'})(1+\cos^{2}(\theta^{'}))}\,,
\nonumber\\
q(\theta{'})&=&\left(\frac{1}{4}-\cos^{3}(\theta{'})\right)\cot(\theta{'})\csc(\theta{'})\,. \nonumber
\end{eqnarray}
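For the numerical evaluations used below, a direct transcription of Eq. \eqref{freeEBH} can be useful. The following sketch (ours; $L$, $G_N$, and $\Delta y$ are free inputs set to unity for illustration) implements $h(\theta')$, $q(\theta')$, and $I_E$:
\begin{verbatim}
import numpy as np

def h(t):
    # auxiliary function h(theta') entering Eq. (freeEBH)
    c, s = np.cos(t), np.sin(t)
    cot = c / s
    return (-(1.0 + np.pi / 2.0) / (2.0 * s)
            + (cot**3 * c**2 / (1.0 + c**2))
              * np.arctanh(np.sqrt(2.0) * c / np.sqrt(1.0 + c**2))
            - (1.0 + c**2 + 3.0 * c**4 - 3.0 * c**6)
              / (3.0 * s**5 * (1.0 + c**2)))

def q(t):
    # auxiliary function q(theta') = (1/4 - cos^3) * cot * csc
    c, s = np.cos(t), np.sin(t)
    return (0.25 - c**3) * c / s**2

def I_E(r_h, t, xi, L=1.0, G=1.0, dy=1.0):
    # renormalized euclidean action, Eq. (freeEBH)
    cot = np.cos(t) / np.sin(t)
    return (-(L * dy) / (8.0 * r_h * G) * (1.0 - xi / (4.0 * L**2))
            - (L / (2.0 * G)) * (1.0 - xi / (8.0 * L**2)) * np.arcsinh(cot)
            + xi * q(t) * L / (2.0 * G)
            + xi * h(t) * cot / (2.0 * G * r_h**2))
\end{verbatim}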
Beyond the AdS-BTZ black hole, we can compute the euclidean action for the thermal AdS solution by considering $f(r)\to 1$. From Eqs. (\ref{BT}) and (\ref{BT1}), it is straightforward to obtain, in this limit,
\begin{eqnarray}
I_{E}(0)=-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)\,.\label{BT6.2}
\end{eqnarray}
\section{BTZ black hole entropy in Horndeski gravity}\label{BTZentro}
In this section we compute the entropy of the BTZ black hole, considering the contributions of the AdS/BCFT correspondence within Horndeski gravity. From the free energy, defined as
\begin{equation}\label{FE}
\Omega=T_H\, I_E \,,
\end{equation}
one can obtain the corresponding entropy as:
\begin{eqnarray}
S=-\frac{\partial\Omega}{\partial T_{H}}\,.\label{BT7}
\end{eqnarray}
By plugging the euclidean on-shell action $I_E$, Eq. \eqref{freeEBH}, in the above equation, one gets
\begin{eqnarray}
S_{\rm total}&=&\frac{L\Delta y}{4r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\sinh^{-1}(\cot(\theta{'}))\cr&-&\frac{3\xi}{2r^{2}_{h}G_{N}} \cot(\theta{'}) h(\theta{'}) +\frac{\xi L}{2G_{N}}q(\theta{'}).\label{BT8}
\end{eqnarray}
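The entropy can also be obtained numerically from Eq. \eqref{BT7}: the sketch below (reusing the $I_E$ of the previous snippet, with $\Delta y$ held fixed and an illustrative value of $\xi$ roughly corresponding to $\gamma=-0.1$) differentiates $\Omega=T_{H}I_{E}$ by a central finite difference, with $r_h=1/(2\pi T_H)$:
\begin{verbatim}
def Omega(T, t, xi):
    # free energy Omega = T_H * I_E, with r_h = 1/(2*pi*T_H)
    r_h = 1.0 / (2.0 * np.pi * T)
    return T * I_E(r_h, t, xi)

def S_total(T, t, xi, eps=1.0e-6):
    # total entropy S = -dOmega/dT by a central finite difference
    return -(Omega(T + eps, t, xi) - Omega(T - eps, t, xi)) / (2.0 * eps)

print(S_total(0.2, 2.0 * np.pi / 3.0, -0.23))
\end{verbatim}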
Recalling that the Hawking temperature, Eq. \eqref{hawk}, is a function of $r_h$, we should evaluate the profile from Eq. \eqref{19.3} at the horizon $r=r_h$. Then, one gets
\begin{eqnarray}\label{seno}
\frac{1}{2}\sinh^{-1}(\cot(\theta{'}))=\frac{\Delta y_{_Q}}{r_{h}}-\frac{\xi }{2r^{2}_{h}}\, b(\theta{'})
\end{eqnarray}
where
\begin{eqnarray}\label{btheta}
b(\theta{'})=\cos(\theta{'})\tan^{-1}\left(\frac{1}{\sin(\theta{'})}\right)+\cot(\theta{'})\left(\frac{1+\cos^{2}(\theta{'})\cot^{2}(\theta{'})}{\sin^{2}(\theta{'})}\right)\,.
\end{eqnarray}
Replacing Eq. \eqref{seno} in Eq. \eqref{BT8} one gets the total entropy with the bulk and boundary contributions with Horndeski terms:
\begin{equation}
S_{\rm total}= S_{\rm bulk + Horndeski} + S_{\rm boundary + Horndeski}\,, \label{St}
\end{equation}
where
\begin{eqnarray}
S_{\rm bulk + Horndeski}&=&\frac{L\Delta y}{4r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right) \\
S_{\rm boundary + Horndeski}&=&\frac{L\Delta y_{_Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber \\&-&\frac{3\xi h(\theta{'})\cot(\theta^{'})}{2r^{2}_{h}G_{N}}+\frac{\xi q(\theta{'})L}{2G_{N}}\,.
\end{eqnarray}
One interpretation for this total entropy is to identify it with the Bekenstein-Hawking formula for the black hole:
\begin{eqnarray}
S_{BH}=\frac{A}{4G_{N}}\label{BT9}\,.
\end{eqnarray}
Thus, in this case, from Eq. \eqref{St}, one has
\begin{eqnarray}
A&=&\frac{L\Delta y}{r_{h}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{4L\Delta y_{_Q}}{r_{h}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{2\xi b(\theta{'})L}{r^{2}_{h}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber \\&-&\frac{6\xi h(\theta{'})\cot(\theta^{'})}{r^{2}_{h}}+2\xi q(\theta{'})L\,, \label{BT10}
\end{eqnarray}
where $A$ would be the total area of the AdS-BTZ black hole with the Horndeski contributions from the bulk and the boundary Q. Since the information is bounded by the black hole area, Eq. \eqref{BT10} suggests that the information storage increases with increasing $|\xi|$, as long as $\xi<0$.
Note that the Bekenstein-Hawking equation \eqref{BT9} is a semi-classical result \cite{Das:2010su, Almheiri:2020cfm}. In this sense our total entropy ($S_{\rm total}$), Eq. \eqref{St}, can be interpreted as a correction to the original Bekenstein-Hawking formula:
\begin{equation}
S_{total} = S_{\rm Bekenstein-Hawking} + S_{\rm Horndeski\,\, contributions} \,.
\end{equation}
It is worthwhile to mention that corrections to the entropy were studied, for instance, in Refs. \cite{Hendi:2010xr, Solodukhin:2011gn, Bamba:2012rv, Feng:2015oea}. In particular, our results are compatible with those of Ref. \cite{Feng:2015oea}, where Horndeski gravity was considered in $n$-dimensional spacetimes $(n\ge 4)$ within the Wald formalism or via the regularized euclidean action.
Considering now the boundary entropy for the AdS-BTZ black hole in Horndeski gravity, from Eq. \eqref{St} one has:
\begin{eqnarray}
S_{bdry}=\frac{L\Delta y_{_Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{3\xi h(\theta{'})\cot(\theta^{'})}{2r^{2}_{h}G_{N}}+\frac{\xi q(\theta{'})L}{2G_{N}}, \;\;\quad \label{BT11}
\end{eqnarray}
which is identified with the entropy of the BCFT corrected by the Horndeski terms parametrized by $\xi$. In the limit $\xi\to 0$ we recover the results presented in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}. In addition, still analyzing Eq. \eqref{BT11}, due to the effects of the Horndeski gravity there is a non-zero boundary entropy even in the zero-temperature scenario, similar to an extreme black hole. This can be seen by taking the limit $T\to 0$ ($r_h \to \infty$) in Eq. \eqref{BT11}, which yields what we call the residual boundary entropy
\begin{equation}
S_{bdry}^{res}=\frac{\xi q(\theta{'})L}{2G_{N}}\,. \label{BT11ext}
\end{equation}
Note that, since the entropy should be non-negative, this zero-temperature limit is only meaningful if $q(\theta')<0$, since $\xi<0$.
In particular, considering our approximate analytical solution, Eq. \eqref{19.3}, this is fulfilled for small or large $\theta'$, namely $0< \theta'< \sqrt{6/13}$ or $\pi / 2 < \theta' < \pi$, respectively. On the other hand, in the region $\sqrt{6/13} < \theta' < \pi/2$ one has $q(\theta')>0$, and then the limit $T\to 0$ cannot be reached. In this case there should be a minimum non-zero temperature corresponding to zero entropy.
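A quick pointwise test of this admissibility condition (a sketch reusing the $q(\theta')$ implemented above) reads:
\begin{verbatim}
# with xi < 0, the residual entropy xi*q(theta')*L/(2*G_N) is
# non-negative only where q(theta') < 0
for t in (np.pi / 6.0, np.pi / 3.0, 2.0 * np.pi / 3.0):
    print(f"theta' = {t:.3f}: q = {q(t):+.4f} -> T -> 0 allowed: {q(t) < 0.0}")
\end{verbatim}
For these sample angles, the signs agree with the ranges quoted above.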
\section{Thermodynamic quantities and results}\label{v4}
The thermodynamics of black holes was established in Refs. \cite{Hawking:1971tu, Bardeen:1973gs, Bekenstein:1973ur}. In this section we present our numerical results for the thermodynamic observables of the BTZ black hole, taking into account the contribution of the AdS/BCFT correspondence within Horndeski gravity. All of these thermodynamic observables will be derived from the renormalized free energy.
Motivated by the thermodynamics of black holes, AdS/CFT and AdS/QCD have benefited from the possibility of constructing effective gauge theories at finite temperature, which opened a myriad of applications. In particular, the holographic study of charged black holes was presented in Refs. \cite{Chamblin:1999tk, Chamblin:1999hg}. These ideas were then applied to high-energy phenomenology at finite temperature; for an incomplete list, see Refs. \cite{Kubiznak:2016qmn, Bravo-Gaete:2014haa, Zeng:2016aly, Gubser:2008ny, Gubser:2008yx, Li:2011hp, Cai:2012xh, He:2013qq, Zhao:2013oza, Li:2014hja, Li:2017ple, Rodrigues:2018pep, Chen:2018vty, Chen:2019rez, Rodrigues:2018chh, Arefeva:2020vae, Ballon-Bayona:2020xls, Arefeva:2020bjk, Caldeira:2020sot, Rodrigues:2020ndy, Caldeira:2020rir}.
After this brief outlook, let us start our calculation from the differential form of the first law of thermodynamics within the canonical ensemble. It can be written as:
\begin{equation}\label{1lei}
d \Omega = -p dV - S dT\,,
\end{equation}
\noindent leading to
\begin{equation}\label{omega}
\Omega = \epsilon - TS \,,
\end{equation}
\noindent where $p$ is the pressure and $\Omega$ is the canonical potential, or free energy, with $\Omega = T_{H}I_E$. The energy density is represented by $\epsilon$, $S$ is the entropy, and $T$ is the temperature.
Besides, for a fixed volume ($V \equiv 1$), one has:
\begin{equation}\label{1leientro}
d \Omega = - S dT\,.
\end{equation}
Here, we present the behavior of the canonical potential, or free energy, from Eq. \eqref{FE}. By analyzing Fig. \ref{freeenergy}, one can see that the canonical potential $\Omega$ has a minimum for each value of the Horndeski parameter $\gamma$, which assures a global condition of thermodynamic stability \cite{DeWolfe:2010he}. This figure also shows that there are critical temperatures where $\Omega =0$, depending on $\gamma$; for $\Omega >0$ these solutions become unstable. The increase of the absolute value of $\gamma$ induces a decrease of these critical temperatures.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.55]{f05.pdf}
\caption{Canonical potential or free energy as a function of the temperature and considering the influence of Horndeski gravity for the following values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).}
\label{freeenergy}
\end{center}
\end{figure}
The next thermodynamic quantity that we analyze is the heat capacity $C_V$, defined as:
\begin{equation}
C_V = T \left( \frac{\partial S}{\partial T}\right)_V = - T \left( \frac{\partial^2 \Omega}{\partial T^2} \right)\,.
\end{equation}
In Refs. \cite{Ganai:2019lgc, Myung:2015pua, Ma:2013eaa, Hendi:2015wxa, Hendi:2016pvx} the authors discussed the positivity of the heat capacity and related it to the local thermodynamic stability condition of black holes. This means that black holes are thermodynamically stable if $C_V >0$. From Fig. \ref{heatcapacity} one can see that the black hole can switch between stable ($C_V>0$) and unstable ($C_V<0$) phases depending on the sign of the heat capacity. Also in Fig. \ref{heatcapacity} one can see the influence of Horndeski gravity on the temperature at which the phase transition occurs.
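A sketch of the corresponding numerical evaluation (reusing $\Omega(T)$ from the entropy snippet above; the temperatures are illustrative) is:
\begin{verbatim}
def C_V(T, t, xi, eps=1.0e-4):
    # C_V = -T * d^2 Omega / dT^2, by a second central difference
    d2 = (Omega(T + eps, t, xi) - 2.0 * Omega(T, t, xi)
          + Omega(T - eps, t, xi)) / eps**2
    return -T * d2

for T in (0.05, 0.1, 0.3):
    print(f"T = {T}: C_V = {C_V(T, 2.0 * np.pi / 3.0, -0.23):+.5f}")
\end{verbatim}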
\begin{figure}[!ht]
\vskip 1cm
\begin{center}
\includegraphics[scale=0.55]{f07.pdf}
\caption{Heat capacity as a function of the temperature and considering the influence of Horndeski gravity for the following values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).}
\label{heatcapacity}
\end{center}
\end{figure}
The sound speed is defined as:
\begin{eqnarray}
c_s^2 \equiv \frac{\partial p}{\partial \epsilon}
= \frac{\partial T}{\partial \epsilon} \frac{\partial p}{\partial T} \,.
\end{eqnarray}
Identifying
\begin{eqnarray}
\frac{\partial T}{\partial \epsilon} = \left(\frac{\partial \epsilon}{\partial T}\right)^{-1} = C_V^{-1} \,;\qquad
\frac{\partial p}{\partial T} = S\,,
\end{eqnarray}
one gets:\footnote{It is also very common to write the sound speed as $c^2_s = \frac{\partial \ln{T}}{\partial \ln{S}}$.}
\begin{eqnarray}
c_s^2 &=& \frac{S}{C_V}\,.
\end{eqnarray}
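Numerically, this ratio follows at once from the entropy and heat-capacity sketches above, e.g.:
\begin{verbatim}
def cs2(T, t, xi):
    # squared sound speed as the ratio S/C_V
    return S_total(T, t, xi) / C_V(T, t, xi)

print(cs2(0.3, 2.0 * np.pi / 3.0, -0.23))
\end{verbatim}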
In Fig. \ref{entrovs}, we present the behavior of the entropy $S$ and of the sound speed $c^2_s$ against the temperature in our model. The entropy comes directly from Eq. \eqref{BT8}. In the left panel one can see the behavior of the entropy $S$ under the influence of the Horndeski gravity. In the right panel, we show the sound speed and the effects of Horndeski gravity, which are most intense for $\gamma = -0.4$; in this case it deviates from the value 1/3 associated with the conformal system.
\begin{figure}[!ht]
\vskip 1cm
\begin{center}
\includegraphics[scale=0.45]{f06.pdf}
\includegraphics[scale=0.45]{f08.pdf}
\caption{Entropy ({\sl left panel}) and sound speed ({\sl right panel}) as functions of the temperature, considering the influence of Horndeski gravity for the values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).}
\label{entrovs}
\end{center}
\end{figure}
The last thermodynamic quantity that we present in this section is the trace of the energy-momentum tensor, defined as:
\begin{equation}
\langle T^a_{\ \ a}\rangle = \epsilon - 3p = 4 \Omega + TS\,.
\end{equation}
In Fig. \ref{trace}, one can see the behavior of the scaled trace of the energy-momentum tensor, $\langle T^a_{\ \ a}\rangle/T^4$, as a function of the temperature. It has a quite interesting behavior: in the low-temperature regime, $\langle T^a_{\ \ a}\rangle \neq 0$; in the high-temperature regime, despite the influence of the Horndeski gravity, $\langle T^a_{\ \ a}\rangle \to 0$, which is an indication of a restoration of the conformal symmetry and therefore of the emergence of a non-trivial BCFT.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.55]{f09.pdf}
\caption{Scaled trace of the energy-momentum tensor as a function of the temperature, for the values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).}
\label{trace}
\end{center}
\end{figure}
\section{Hawking-Page phase transition}\label{v5}
In this section, we analyze the Hawking-Page phase transition (HPPT) for a BTZ black hole, considering the contributions of the AdS/BCFT correspondence within Horndeski gravity. The HPPT was proposed originally in Ref. \cite{hawpage}, in the context of general relativity, to discuss the stability and instability of black holes in AdS space. The transition between the stable and unstable configurations characterizes a first-order phase transition with an associated critical temperature.
In the context of the AdS/CFT program, the pioneering work of Ref. \cite{Witten:1998zw} showed how to relate the temperature of the gravitational theory to the one associated with the gauge theory on the boundary.\footnote{Note that in Ref. \cite{Witten:1998zw} the Hawking temperature, as well as the Hawking-Page phase transition, were associated with the deconfinement temperature in QCD and the confinement/deconfinement phase transition. In this work we do not use such an interpretation.} For an incomplete list of works dealing with the HPPT in the AdS/QCD context, see for instance Refs. \cite{Cho:2002hq,Herzog:2006ra, Kajantie:2006hv, BallonBayona:2007vp, Rodrigues:2017cha, Rodrigues:2017iqi, Chen:2020ath, Li:2020khm, Wang:2020pmb}. In particular, the HPPT in the BTZ black hole scenario can be seen, also in an incomplete list, in Refs. \cite{Myung:2006sq, Eune:2013qs, Detournay:2015ysa, Myung:2015pua, Tang:2016vmu, Ganai:2019lgc}.\footnote{It is worthwhile to mention that only in Ref. \cite{Eune:2013qs} have the authors used holographic renormalization to compute the free energy. In all other listed references, the authors derived the free energy from the Bekenstein-Hawking entropy.}
The partition function for the AdS-black hole ($V_{E}$) is identified with minus the renormalized euclidean action, Eq. \eqref{freeEBH}, $V_{E}=-I_E$, so that:
\begin{eqnarray}
V_{E}&=&\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L \Delta y_{Q}}{2r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber\\
&-&\frac{\xi b(\theta^{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi q(\theta^{'})L}{2G_{N}}-\frac{\xi h(\theta^{'})\cot(\theta^{'})}{2r^{2}_{h}}\,.\label{HP}
\end{eqnarray}
\noindent Analogously, the partition function for the thermal AdS is defined as $V_{E}(0)=-I_E(0)$, where $I_E(0)$ is given by Eq. \eqref{BT6.2}:
\begin{eqnarray}
&&V_{E}(0)=\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)\,.\label{HP1}
\end{eqnarray}
Now, we can compute $\Delta V_{E}$, so that:
\begin{eqnarray}
\Delta V_{E}=\frac{L\Delta y_{Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta^{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi h(\theta^{'})\cot(\theta^{'})}{2r^{2}_{h}}-\frac{\xi q(\theta^{'})L}{2G_{N}}\,.\label{HP2}
\end{eqnarray}
According to the HPPT prescription, the difference $\Delta V_E$ vanishes at the phase transition; $\Delta V_E < 0$ indicates the stability of the black hole, while $\Delta V_E > 0$ points to the stability of the thermal AdS space.
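A sketch for locating $T_c$ numerically (ours, with $b(\theta')$ from Eq. \eqref{btheta} and $h$, $q$ from the earlier snippets; $\Delta y_Q$, $L$, and $G_N$ are illustrative inputs on which the location, and even the existence, of the crossing depends) is:
\begin{verbatim}
from scipy.optimize import brentq

def b(t):
    # auxiliary function b(theta'), Eq. (btheta)
    c, s = np.cos(t), np.sin(t)
    cot = c / s
    return c * np.arctan(1.0 / s) + cot * (1.0 + c**2 * cot**2) / s**2

def dV(T, t, xi, L=1.0, G=1.0, dyQ=1.0):
    # Delta V_E of Eq. (HP2), with r_h = 1/(2*pi*T)
    r_h = 1.0 / (2.0 * np.pi * T)
    cot = np.cos(t) / np.sin(t)
    return (L * dyQ / (r_h * G) * (1.0 - xi / (8.0 * L**2))
            - xi * b(t) * L / (2.0 * r_h**2 * G) * (1.0 - xi / (8.0 * L**2))
            - xi * h(t) * cot / (2.0 * r_h**2)
            - xi * q(t) * L / (2.0 * G))

Ts = np.linspace(1.0e-3, 1.0, 2000)
vals = np.array([dV(T, 2.0 * np.pi / 3.0, -0.23) for T in Ts])
flips = np.where(vals[:-1] * vals[1:] < 0.0)[0]
if flips.size:
    Tc = brentq(dV, Ts[flips[0]], Ts[flips[0] + 1],
                args=(2.0 * np.pi / 3.0, -0.23))
    print("T_c ~", Tc)
\end{verbatim}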
\begin{figure}[!ht]
\vskip 1cm
\begin{center}
\includegraphics[scale=0.55]{f03.pdf}
\caption{This figure shows the Hawking-Page phase transition from Eq. \eqref{HP2} considering the values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line). See the text for discussions.}\label{p01}
\label{planohwkhz}
\end{center}
\end{figure}
In Fig. \ref{planohwkhz}, we show the difference between the partition functions as a function of the temperature of the BTZ black hole in the AdS/BCFT correspondence, taking into account the contributions coming from the Horndeski gravity. We see that the Horndeski terms decrease the HPPT critical temperature $T_c$, where $\Delta V_E=0$. Besides, the thermal AdS space is stable at low temperatures ($T<T_c$), while the AdS black hole is stable in the high-temperature regime ($T>T_c$).
\section{Conclusion}\label{v6}
In this section we present our conclusions on the AdS/BCFT correspondence and on the BTZ black hole thermodynamics within Horndeski gravity. We established our setup by considering the non-minimal coupling between the standard scalar kinetic term and the Einstein tensor. Besides the three-dimensional bulk, we introduced a Gibbons-Hawking surface term and obtained the corresponding field equations. Then, using the no-hair theorem, we found a consistent solution for the BTZ black hole. From this solution we constructed the Q profile on the two-dimensional boundary, which characterizes the AdS$_{3}$/BCFT$_{2}$ correspondence. In particular, we found an exact numerical solution and an approximate analytical one.\footnote{Note that these solutions for the boundary Q seem to describe a Randall-Sundrum brane in the limit of a large Horndeski parameter.}
These two solutions are shown in Fig. \ref{p0}, where one can see that the approximate solution describes qualitatively well the influence of the Horndeski term. Thus, from Sec. \ref{v3} onward, we considered only the approximate analytical solution.
Using this solution, we performed a holographic renormalization procedure in order to obtain the euclidean on-shell actions for the thermal AdS and the AdS-BTZ black hole. The identification of the euclidean on-shell action with the free energy allowed us to compute the total entropy, which is the sum of the bulk and boundary contributions, both with Horndeski terms. From this total entropy, and assuming the Bekenstein-Hawking formula, we derived the corresponding total area of the AdS-BTZ black hole with Horndeski terms. We found that the total area grows with the absolute value of $\xi$, which suggests that the information encoded on the black hole horizon also grows with $|\xi|$. Another interpretation of the total entropy found in this work is that it represents a correction to the Bekenstein-Hawking formula. For the boundary entropy, it is remarkable that the influence of the Horndeski gravity implies a non-zero, or residual, entropy in the zero-temperature limit $(r_h\to\infty)$, for a certain range of the angle $\theta'$. For another range of $\theta'$ the limit $T\to 0$ cannot be reached; in this case it seems that there should be a minimum non-zero temperature corresponding to zero entropy.
The free energy of the AdS-BTZ black hole with Horndeski gravity is depicted in Fig. \ref{freeenergy}. This figure shows the stability of these solutions for $\Omega <0$, up to a certain critical temperature depending on the Horndeski parameter $\xi$.
From this free energy we extracted the other relevant thermodynamic quantities, such as the heat capacity, the sound speed, and the trace anomaly. These results seem to be compatible with the ones expected from usual black hole thermodynamics.
In Sec. \ref{v5}, we studied the Hawking-Page phase transition in the AdS/BCFT correspondence with Horndeski gravity. The modification coming from the Horndeski contribution allows us to obtain this phase transition as a function of the temperature, as is usual in higher-dimensional contexts. This contrasts with the description of the HPPT given in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}, where the authors plotted the free energy as a function of the Q profile tension.
Finally, we would like to comment that extended theories of gravity such as Horndeski's, which go beyond Einstein's original proposal by taking into account scalar fields and their couplings with gravity or by accommodating higher-order terms in curvature invariants, may provide new insights that contribute to deepening our knowledge of the gravity duals of conformal field theories, such as the AdS/BCFT correspondence.
\begin{acknowledgments}
We would like to thank Konstantinos Pallikaris, Vasilis Oikonomou, Adolfo Cisterna, and Diego M. Rodrigues for discussions. H.B.-F. is partially supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grant No. 311079/2019-9.
\end{acknowledgments}
\section{Introduction}
The word aquaculture refers to farming, including breeding, raising, and harvesting fish, aquatic plants, crustaceans, mollusks, and other aquatic organisms. It involves the cultivation of both freshwater and saltwater creatures under controlled conditions and is used to produce food and commercial products, as shown in Figure \ref{fig:Aquaculture}.
There are mainly two types of aquaculture. The first one is \textbf{Mariculture}, which is the farming of marine organisms for food and other products such as pharmaceuticals, food additives, jewelry (e.g., cultured pearls), nutraceuticals, and cosmetics. Marine organisms are farmed either in the natural marine environment or in land- or sea-based enclosures, such as cages, ponds, or raceways. Seaweeds, mollusks, shrimps, marine fish, and a wide range of other minor species such as sea cucumbers and sea horses are among the organisms presently farmed around the world's coastlines. Mariculture contributes to sustainable food production and the economic development of local communities. However, large-scale marine farming can become a threat to marine and coastal environments through the degradation of natural habitats, nutrient and waste discharge, accidental release of alien organisms, transmission of diseases to wild stocks, and displacement of local and indigenous communities \cite{MariCulture}.
The second one is \textbf{Fish farming}, which is the cultivation of fish for commercial purposes in human-made tanks and other enclosures. Usually, common fish species like catfish, tilapia, salmon, carp, cod, and trout are farmed in these enclosures. Nowadays, the fish-farming industry has grown to meet the demand for fish products \cite{FishFarm}. This form of aquaculture has been widespread for a long time, as it is said to produce a cheap source of protein.
Global aquaculture is one of the fastest-growing food production sectors, accounting for almost 53\% of all fish and invertebrate production and 97\% of total seaweed production as of 2020. Estimated global production of farmed salmon grew by 7 percent in 2019, to just over 2.6 million tonnes \cite{AquacultureIntroduction}. Global salmon aquaculture faces the threat of various diseases that can devastate conventional salmon production.
Diseases have a dangerous impact on fish both in the natural environment and in aquaculture, and they are globally recognized as one of the most severe threats to the economic success of aquaculture. Fish diseases are caused by a wide range of infectious organisms such as bacteria, viruses, and protozoan and metazoan parasites. Bacteria are responsible for the majority of infectious diseases in farmed fish \cite{FishDiseaseIntroduction}. Infectious diseases constitute one of the foremost threats to successful aquaculture: the massive number of fish gathered in a small region creates an ecosystem favorable to the development and rapid spread of contagious diseases. In this crowded, comparatively artificial environment, fish are stressed and more susceptible to disease. Furthermore, the aquatic ecosystem and insufficient water flow make it easier for pathogens to spread within the gathered populations \cite{FishDiseaseIntroduction2}. Disease detection supported by image processing can help to extract good features.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{Images/Visuals/aquaculture.jpg}
\caption{Aquaculture~\protect\cite{FigAquaculture}}
\label{fig:Aquaculture}
\end{figure}
Image segmentation has become indispensable in various research fields such as computer vision and artificial intelligence. The \textit{k}-means segmentation is a popular image processing technique that partitions an image into different regions without loss of information. In \cite{kailasanathan2001image}, the authors applied \textit{k}-means segmentation to the authentication of images. Another application of \textit{k}-means segmentation is shown in \cite{gaur2015handwritten}, where the technique is used to recognize handwritten Hindi characters.
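As a minimal illustration of this technique (our sketch, using the standard OpenCV \texttt{cv2.kmeans} call; file names are hypothetical), a colour-based \textit{k}-means segmentation of a fish image can be written as:
\begin{verbatim}
import numpy as np
import cv2

img = cv2.imread("fish.jpg")                    # hypothetical input image
pixels = img.reshape(-1, 3).astype(np.float32)  # one row per pixel (BGR)

# cluster the pixel colours into k groups
k = 3
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)

# paint every pixel with its cluster centre to obtain the segmented image
segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("fish_segmented.jpg", segmented)
\end{verbatim}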
The support vector machine (SVM), one of the most popular supervised machine learning techniques, has brought convenient solutions to many classification problems in various fields. It is a powerful classification tool that produces quality predictions for unlabeled data. In \cite{khan2016analysis}, the authors built an SVM model based on three kernel functions to differentiate dengue-infected human blood sera from healthy sera. For image classification, another SVM architecture was proposed in \cite{agarap2017architecture}, where the authors combine a convolutional neural network (CNN) with an SVM. SVM provides remarkable accuracy in many contexts.
In this paper, we conduct our research on salmon fish disease classification, i.e., whether a fish has an infection or not, with a machine-vision-based technique. The feature set is decisive for the classification of the disease. Image processing techniques are used to extract the features from the images, and then a support vector machine (SVM) is employed for the classification of infectious disease (a minimal sketch of such a pipeline is given after the list below). Here, we summarize the contributions of this work:
\begin{itemize}
\item Propose a novel framework for fish disease detection based on a machine learning model (SVM).
\item Appraise and analyze the performance of our proposed model both with and without image augmentation.
\item Compare our proposed model with a well-performing model via several evaluation metrics.
\end{itemize}
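As referenced above, a minimal sketch of the classification stage follows (our illustration with scikit-learn; the random feature matrix merely stands in for the features extracted by the image processing steps, and all names are ours):
\begin{verbatim}
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# placeholder features: one row per image, binary labels
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # 200 images, 64 features each
y = rng.integers(0, 2, size=200)    # 0 = healthy, 1 = infected

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
\end{verbatim}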
\section{Related Work}
Some works focused only on basic image processing techniques for the identification of fish disease. Shaveta et al. \cite{LRShaveta} proposed an image-based detection technique that first applies image segmentation via edge detection with the Canny, Prewitt, and Sobel operators. However, they did not specify the exact technique used for feature extraction. For feature extraction and classification, they applied the Histogram of Oriented Gradients (HOG) and Features from Accelerated Segment Test (FAST) in combination, trying to achieve a better classification with a combination instead of applying a single, less exact method. In another approach, Lyubchenko et al. \cite{LRLyubchenko} proposed a structure based on the clustering of objects in the image, which requires diverse image segmentation steps over a range of cluster numbers. There, they chose markers for individual objects, and objects were identified by their specific markers. Finally, they calculated the proportion of an object in the image and the proportion of the infected area relative to the fish body in order to identify fish disease. However, individual marking of objects is time-consuming and not effective.
Some approaches combine image processing and machine learning. Malik et al. \cite{LRMalik} proposed a detection approach for a specific fish disease called Epizootic Ulcerative Syndrome (EUS), which is caused by the fungal pathogen Aphanomyces invadans. They combined Principal Component Analysis (PCA) and the Histogram of Oriented Gradients (HOG) with the Features from Accelerated Segment Test (FAST) feature detector, and then classified with a machine learning algorithm (a neural network). The FAST-PCA-NN pipeline achieves 86\% accuracy, whereas HOG-PCA-NN achieves 65.8\% accuracy, which is lower than the former combination.
Verma et al. \cite{verma2017analysis} addressed the sensitive problem of kidney stone detection. The authors apply morphological operations and segmentation to determine the region of interest (ROI) for SVM classification; after applying this technique, they encountered difficulties such as the visual similarity among kidney stone images and low image resolution. Zhou et al. \cite{zhou2017device} introduced device-free presence detection and localization with the aid of SVM: their detection algorithm detects human presence through an SVM classifier using channel state information (CSI) fingerprints. Hardware trojan detection \cite{inoue2017designing} has also relied on an SVM-based approach, where the authors evaluated a trojan detection method on hardware of their own design, with netlists containing three types of hardware trojans exhibiting normal and abnormal behavior.
We conclude that, with respect to the research requirements described above, no in-depth work has been performed on salmon fish disease classification. Furthermore, most existing work addresses generic fish disease classification rather than aquaculture settings. The techniques described above rely solely on image processing, or on a combination of image processing and machine learning, and their performance is not yet up to the mark.
\section{Preliminary and Proposed Framework}
This section consists of several stages, presented in Figure \ref{img:proposedFramework}. Here we present the relevant technologies and a solution framework for salmon fish disease classification.
\begin{figure*}
\begin{center}
\centering
\includegraphics[scale=.90]{Images/ProposedFrameworkV2.pdf}
\caption{Proposed Framework (The overall anatomy of our proposed work gradually from input to result).}
\label{img:proposedFramework}
\end{center}
\end{figure*}
\subsection{Cubic Spline Interpolation}
Raw images appear in the dataset in various sizes. If these images are not resized before training, the classifier's efficiency may decrease. Since we collected the images from different sources, we resize them before feeding them to the classifier.
For image magnification and conversion to a fixed size, we use an improved interpolation method called extended \textbf{\textit{cubic spline interpolation}} \cite{BSPline}. For a finite interval $[a,b]$, let $\left \{ x_{i} \right \}_{i = 0}^{n}$ be a partition of the interval with uniform step size $h$. We extend the partition using Equation \ref{eqn:cubicB1}.
\begin{equation}\label{eqn:cubicB1}
h = \frac{b - a}{n}, \quad x_{0} = a, \quad x_{i} = x_0 + ih, \quad i = \pm 1, \, \pm 2, \, \pm 3, ...
\end{equation}
Given $\left \{ x_{i} \right \}$, the extended cubic B-spline function $S\left ( x \right )$ is a linear combination of the extended cubic B-spline basis functions, as in Equation \ref{eqn:cubicB2},
\begin{equation}\label{eqn:cubicB2}
S\left ( x \right ) = \sum_{i = -3}^{n-1} C_{i}EB_{3,i}\left ( x \right ), \quad x \in \left [ x_{0}, x_{n} \right ]
\end{equation}
where $C_{i}$ are unknown real coefficients. Since $EB_{3,i}\left ( x \right )$ has support on $\left [ x_{i}, x_{i+4} \right ]$, there are three nonzero basis functions at each $x_{i}$: $EB_{3,i-3}\left ( x_{i} \right )$, $EB_{3,i-2}\left ( x_{i} \right )$, and $EB_{3,i-1}\left ( x_{i} \right )$.
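As a minimal illustration of this resizing step, the fixed-size conversion can be sketched in Python with OpenCV. Note that \texttt{cv2.INTER\_CUBIC} implements standard bicubic interpolation rather than the extended cubic B-spline scheme above, so this is only an approximation of our MATLAB pipeline; the target size of $600 \times 250$ pixels is the one used later in our experiments.
\begin{verbatim}
import cv2

def resize_cubic(image_path, size=(600, 250)):
    """Resize an image to a fixed size with cubic interpolation.

    cv2.INTER_CUBIC interpolates over a 4x4 neighborhood with a
    cubic kernel; `size` is given as (width, height).
    """
    img = cv2.imread(image_path)  # BGR image as a numpy array
    return cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)
\end{verbatim}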
\subsection{Adaptive Histogram Equalization}
Contrast enhancement is an essential technique for improving image quality, as it helps recover lost information. Due to magnification and resizing, some images may lose information. To mitigate this problem, we use adaptive histogram equalization to enhance the contrast of each image.
Adaptive histogram equalization (AHE) is an image processing approach used to enhance contrast. Here, we use an extension of AHE called contrast limited adaptive histogram equalization (CLAHE) \cite{liu2019adaptive}. CLAHE differs from conventional AHE in its contrast limiting: it limits the amplification by clipping the histogram at a user-defined value called the clip limit. The amount of noise in the histogram, as well as the smoothness and degree of contrast enhancement, depends on this clipping level. A modification of the contrast-limited technique called adaptive histogram clip (AHC) can also be applied; AHC dynamically calibrates the clipping level and balances over-enhancement of background areas \cite{hitam2013mixture}. Here, we use the Rayleigh distribution variant of AHC, which shapes the output histogram toward a bell-shaped distribution. Equation \ref{eqn:clahe} represents this function:
\begin{equation}\label{eqn:clahe}
Rayleigh \;\;\; p = p_{\min} + \left [ \, 2\alpha^{2} \ln \left ( \frac{1}{1-Q(f)} \right ) \right ]^{1/2}
\end{equation}
where $p_{\min}$ and $Q(f)$ represent the minimum pixel value and the cumulative probability distribution, respectively, and $\alpha$ is a non-negative real scalar acting as a distribution parameter. In this experiment, we set the clip limit to 0.01 and the $\alpha$ value of the Rayleigh distribution to 0.04.
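Our MATLAB pipeline uses \texttt{adapthisteq} with the Rayleigh distribution; a rough Python counterpart with OpenCV is sketched below. Note that OpenCV's CLAHE redistributes the clipped histogram uniformly (no Rayleigh option) and scales its clip limit differently from MATLAB, so the parameter values here are illustrative only.
\begin{verbatim}
import cv2

def enhance_contrast(img_bgr, clip_limit=2.0, grid=(8, 8)):
    """Apply CLAHE to the luminance channel of a BGR image."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid)
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
\end{verbatim}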
\subsection{RGB Color Space to L*a*b Color Space}
We now convert the adaptive-histogram-equalized image from RGB to L*a*b. Since we use \textit{k}-means clustering to segment the image, and \textit{k}-means segments images more efficiently in L*a*b color space than in RGB color space \cite{burney2014k}, this conversion is beneficial. In L*a*b color space, L expresses the lightness of an image, while the a and b channels encode the color information \cite{rahman2016non}. For this transformation, we first convert from RGB color space to XYZ color space \cite{ColorCoversion}, \cite{bianco2007new} according to Equation \ref{eqn:rgbToXyz}.
\begin{equation}\label{eqn:rgbToXyz}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix} = \begin{bmatrix}
0.412453 & 0.357580 & 0.180423\\
0.212671 & 0.715160 & 0.072169 \\
0.019334 & 0.119193 & 0.950227
\end{bmatrix}
*
\begin{bmatrix}
R\\
G\\
B
\end{bmatrix}
\end{equation}
Next, the XYZ color space is transformed to the L*a*b color space \cite{acharya2002median} according to Equations \ref{eqn:xyzToLab}--\ref{eqn:lab_b}, where the tristimulus values of the reference white are $X_n$, $Y_n$, $Z_n$.
\begin{equation}\label{eqn:xyzToLab}
L^* = \left\{\begin{matrix}
116(\frac{Y}{Y_n})^\frac{1}{3} - 16 & if \frac{Y}{Y_n} > 0.008856 \\ \\
903.3 \frac{Y}{Y_n} & if \frac{Y}{Y_n} \leq 0.008856
\end{matrix}\right.
\end{equation}
\begin{equation}\label{eqn:lab_a}
a^* = 500 (f(\frac{X}{X_n})- f(\frac{Y}{Y_n}))
\end{equation}
\begin{equation}\label{eqn:lab_b}
b^* = 200 (f(\frac{Y}{Y_n})- f(\frac{Z}{Z_n}))
\end{equation}
Where,
\begin{equation}\label{eqn:lab_f}
f(t) = \left\{\begin{matrix}
t^\frac{1}{3} & if \; t > 0.008856 \\ \\
7.787t + \frac{16}{116} & if \; t \leq 0.008856
\end{matrix}\right.
\end{equation}
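The conversion in Equations \ref{eqn:rgbToXyz}--\ref{eqn:lab_b} can be transcribed directly into numpy. The sketch below assumes RGB values normalized to $[0,1]$ and a D65 reference white for $(X_n, Y_n, Z_n)$; the white point is our assumption, not a value fixed by the equations.
\begin{verbatim}
import numpy as np

M = np.array([[0.412453, 0.357580, 0.180423],
              [0.212671, 0.715160, 0.072169],
              [0.019334, 0.119193, 0.950227]])

def f(t):
    # Piecewise helper from the L*a*b definition above.
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def rgb_to_lab(rgb, white=(0.950456, 1.0, 1.088754)):
    """Convert RGB (values in [0, 1], shape (..., 3)) to L*a*b."""
    xyz = rgb @ M.T                         # RGB -> XYZ
    x, y, z = (xyz[..., i] / white[i] for i in range(3))
    L = np.where(y > 0.008856, 116.0 * np.cbrt(y) - 16.0, 903.3 * y)
    a = 500.0 * (f(x) - f(y))
    b = 200.0 * (f(y) - f(z))
    return np.stack([L, a, b], axis=-1)
\end{verbatim}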
\subsection{\textit{k}-means Clustering Segmentation}
Segmenting the infected part of a fish image helps the classifier learn to identify infected fish accurately. The converted image is therefore segmented into several regions using the \textit{k}-means clustering technique, which separates the infected areas from the rest of the fish image. The techniques in \cite{gaur2015handwritten} and \cite{hartigan1979algorithm} adhere to the conventional steps to achieve the primary goal of clustering the image objects into $k$ distinct groups. The steps of the \textit{k}-means clustering technique are as follows.
\begin{enumerate}
\item Determine the total number of clusters $k$.
\item Choose $k$ points as initial centroids.
\item Assign each data point to the nearest centroid, forming $k$ clusters.
\item Recompute the centroid of each cluster.
\item If any data point changed its cluster assignment, reassign each data point to the nearest centroid and return to step 4; otherwise, the model is ready.
\end{enumerate}
The objective function of the \textit{k}-means clustering technique \cite{de2009detection} is a minimum squared-error criterion, measured by
\begin{equation}\label{eqn:KMeans}
J = \sum_{j = 1}^{k}\sum_{i=1}^{n}\left \| {x_i}^{(j)} - c_j \right \|^2
\end{equation}
where $J$ is the objective function measuring how well the $n$ objects fit their assigned groups, $k$ and $n$ are the number of clusters and the number of cases, respectively, and $ \left \| {x_i}^{(j)} - c_j \right \| $ is the distance from a point ${x_i}^{(j)}$ to the centroid $c_j$ of its group.
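For illustration, the segmentation step can be sketched with scikit-learn's \texttt{KMeans} applied to the chromaticity channels of the L*a*b image; the choice of $k=3$ and of clustering only the a* and b* channels is illustrative rather than prescribed by our pipeline.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def segment_lab_image(lab, k=3, seed=0):
    """Cluster the pixels of an L*a*b image into k regions.

    Clustering on the a* and b* channels groups pixels by color,
    separating discolored (infected) areas from healthy tissue.
    """
    h, w, _ = lab.shape
    ab = lab[..., 1:].reshape(-1, 2)   # one (a*, b*) pair per pixel
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=seed).fit_predict(ab)
    return labels.reshape(h, w)        # per-pixel cluster index
\end{verbatim}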
Two types of feature vectors are then extracted from the infected area of a fish, namely co-occurrence and statistical features. We explain these features in detail in the experimental evaluation section.
\subsection{Support Vector Machine}
We feed the feature vectors discussed in the previous subsection to an SVM. The support vector machine (SVM) is a supervised machine learning algorithm used in many classification problems for its high accuracy. It constructs a hyperplane between different classes with a margin that separates the objects; the hyperplane can be constructed in a multidimensional space to partition the data points \cite{meyer2003support, noble2006support}. Figure \ref{img:svm} shows the basic diagram of the support vector machine. Some common terms related to SVM are mentioned below:
\textbf{Optimal Hyperplane:} The boundary that separates two classes with the maximum margin is the optimal hyperplane. It is an $(N-1)$-dimensional subspace of the $N$-dimensional feature space that distinguishes the classes; in two dimensions, the hyperplane is a line, and its dimension grows with the number of feature dimensions. The optimal hyperplane is determined by $wx_i + b = 0$, where $w$ is the weight vector, $x_i$ is the input feature vector, and $b$ is the bias. For all samples of the training set, $w$ and $b$ satisfy the following inequalities \cite{suthaharan2016support}:
\begin{center}
$wx_i + b \geq +1 \; \;if \; y_i = 1$\\
$wx_i + b \leq -1 \; \;if \; y_i = -1$
\end{center}
\textbf{Support Vectors:} Data points that lie closest to the hyperplane and influence its position are known as support vectors; these are the points of the two classes that are most similar to each other, and they determine the construction of the SVM. Suppose a labeled training dataset is represented as ${\{(x_i,y_i) \;|\; i = 1,2,...,k\}}$, where $x_i$ is a feature vector (input) and $y_i$ is the class label (output). To maximize the margin between the two kinds of points, the Lagrange technique is used to recast the original problem as maximizing the function in Equation \ref{eqn:SVM_1}.
\begin{equation}\label{eqn:SVM_1}
Q(\alpha ) = \sum_{i=1}^{k}\alpha _i - \frac{1}{2}\sum_{i,j=1}^{k}\alpha _i \alpha_j y_i y_j(x_i \cdot x_j)
\end{equation}
where $\alpha _i$ is the Lagrange multiplier of each sample. The problem is then mapped to a higher-dimensional space via a kernel function $K(x_i,x_j)$, as shown in Equation \ref{eqn:SVM_2}.
\begin{equation}\label{eqn:SVM_2}
Q(\alpha ) = \sum_{i=1}^{k}\alpha _i - \frac{1}{2}\sum_{i,j=1}^{k}\alpha _i \alpha_j y_i y_jK(x_i, x_j)
\end{equation}
\textbf{Margin:} The margin is the gap between two non-overlapping classes separated by the hyperplane, i.e., the distance between the data points and the dividing boundary. The optimal hyperplane is the one with the maximum margin.
\textbf{Kernel:} The functions used by the SVM algorithm to compute similarities between objects are known as kernel functions. They transform the inputs into a form in which the hyperplane can be constructed more easily. Many kernels \cite{zhang2015complete} are used with SVMs, such as linear, polynomial (homogeneous and heterogeneous), Gaussian, Fisher, graph, string, and tree kernels.
The \textbf{linear kernel} is one of the most widely used and straightforward kernel functions, suited to linearly separable data points \cite{ben2010user}.
There are many application areas where SVM outperforms other classifiers with high accuracy \cite{chandra2018survey}. It is designed primarily for binary classification problems, such as the one addressed here. The SVM is trained on the training feature set to ensure robust performance on the test set. For performance analysis, we evaluate the metrics presented in the experimental evaluation section.
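A minimal training sketch with scikit-learn is given below; the feature matrix here is randomly generated as a stand-in for the 10-dimensional vectors extracted by our pipeline, so the printed accuracy is not meaningful.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in features: in our pipeline each row would hold the vector
# (mu, sigma, sigma^2, kappa, gamma, C, chi, zeta, Delta, xi).
rng = np.random.default_rng(0)
X = rng.normal(size=(266, 10))
y = np.where(rng.random(266) < 0.7, 1, -1)  # dummy labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, stratify=y, random_state=0)

clf = SVC(kernel='linear', C=1.0)  # linear kernel, penalty parameter C
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))
\end{verbatim}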
\begin{figure*}
\begin{center}
\centering
\includegraphics[scale=.85]{Images/SVM.pdf}
\caption{Support Vector Machine (Discovering the optimal hyperplane and the separation of classes for optimal hyperplane).}
\label{img:svm}
\end{center}
\end{figure*}
\subsection{System Architecture}
We design a system architecture, shown in Figure \ref{Fig:SystemDiagram}. It contains two phases: the building phase and the deployment phase. In the building phase, we process the labeled images as training data.
\begin{itemize}
\item Each image is refined through the sequence of the aforementioned image processing techniques, namely cubic spline interpolation, adaptive histogram equalization, and conversion from RGB to L*a*b color space.
\item The \textit{k}-means clustering technique is applied for image segmentation, and two types of feature vectors are extracted, namely co-occurrence matrix features and statistical features.
\item These feature vectors are fed to the SVM for further processing.
\end{itemize}
\begin{figure*}
\centering
\includegraphics[scale=.65]{Images/Visuals/SystemDiagram.pdf}
\caption{System Architecture (A well-regulated diagram demonstrating the entire process from data acquisition to model training and prediction of classes).}
\label{Fig:SystemDiagram}
\end{figure*}
In the building phase, the model is trained on the feature vectors and the corresponding labels.
The outcome of the building phase is a trained SVM model, which is applied to classify any incoming fish in the deployment phase. The deployment phase performs the following steps.
\begin{itemize}
\item Any fish image supplied as input to the system is refined through the same sequence of image processing techniques.
\item Two types of feature vectors are obtained via \textit{k}-means clustering based image segmentation.
\item The feature vectors from the feature extractor are fed to the trained SVM model.
\item Finally, the output is a label for the input image, classifying it as either fresh or infected fish.
\end{itemize}
This system architecture covers the entire process, from data acquisition to model training and class prediction.
\section{Evaluation}
This section describes the experimental setup in detail to evaluate our proposed approach. In this evaluation, we extract statistical and grey-level co-occurrence matrix (GLCM) features, with the relevant terms defined for our fish image dataset. For the classification results, we employ several performance evaluation metrics to assess the model's ability to predict on new data.
\subsection{Environment Specifications}
Here, we use a combination of MATLAB\footnote{https://www.mathworks.com/solutions/image-video-processing.html} and Python. For image processing tasks such as cubic spline interpolation, adaptive histogram equalization, and image conversion from RGB to L*a*b, we use MATLAB. Feature extraction and the training of our SVM model are implemented in Python on the Google Colab\footnote{https://colab.research.google.com/} platform. Image interpretation and classification require considerable computing power, and installing powerful computing tools with additional hardware support is expensive. We therefore use the Google Colab platform, which provides high-end CPUs and GPUs in the cloud and allows us to train our model efficiently in less time. There is no extra burden of installing packages, because the platform ships with all the packages required for the training process \cite{bisong2019google}. Google Colab provides an NVIDIA K80 with 12 GB of GPU memory and 358 GB of disk space, giving ample computational power for training machine learning models.
\subsection{Experimental Dataset}
Since no dataset of fresh and infected salmon fish is publicly accessible, we prepared a novel dataset: some images come from the internet, and most come from aquaculture firms. Example images of fresh and infected salmon fish are shown in Figure \ref{FIG:salmonFishExample}. We collected a total of 266 images to train and validate our model. The train/test split is reported in Table \ref{tab:datasetSplitting}; the total numbers of training and testing images are 231 and 35, respectively.
Since our data acquisition is a complicated process, we apply image augmentation to expand the dataset. We use the \textit{image\textunderscore augmentor}\begin{NoHyper}\footnote{https://github.com/codebox/image\textunderscore augmentor}\end{NoHyper} tool with augmentation operations such as horizontal flip (fliph), vertical flip (flipv), rotation (rot), pixel shifting (trans), and zoom (zoom). After augmentation, we obtain 1,105 training images, as depicted in Table \ref{tab:datasetSplittingWithAugmentaion}; the total numbers of training and testing images are 1,105 and 221, respectively.
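The named operations can be approximated with Pillow (version 9.1 or later for the \texttt{Transpose} enum); the parameter values below are illustrative and may differ from those of the \textit{image\textunderscore augmentor} tool.
\begin{verbatim}
from PIL import Image, ImageChops

def augment(path):
    """Yield flipped, rotated, shifted and zoomed variants of an
    image, mirroring the fliph/flipv/rot/trans/zoom operations."""
    img = Image.open(path)
    w, h = img.size
    yield img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)   # fliph
    yield img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)   # flipv
    yield img.rotate(90)                                   # rot
    yield ImageChops.offset(img, int(0.1 * w), 0)          # trans
    crop = img.crop((w // 10, h // 10, w - w // 10, h - h // 10))
    yield crop.resize((w, h))                              # zoom
\end{verbatim}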
\begin{figure*}
\centering
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.17]{Images/SalmonFish/SalmonF3.png}
\end{subfigure}
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.17]{Images/SalmonFish/salmonD1.png}
\end{subfigure}
\medskip
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.17]{Images/SalmonFish/SalmonF2.png}
\caption{Fresh Fish}
\end{subfigure}
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.17]{Images/SalmonFish/salmonD2.png}
\caption{Infected Fish}
\end{subfigure}
\medskip
\caption{Salmon fish (two samples each of fresh and infected fish from our dataset).}
\label{FIG:salmonFishExample}
\end{figure*}
\begin{table}[]
\centering
\caption{Overall dataset splitting (The total number of fresh and infected fish images without augmentation).}
\label{tab:datasetSplitting}
\begin{tabular}{ccc}
\hline
\textbf{Fish} & \textbf{Training images} & \textbf{Testing images} \\ \hline
Fresh fish & 68 & 15 \\ \hline
Infected fish & 163 & 20 \\ \hline
\textbf{Total} & \textbf{231} & \textbf{35} \\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Overall dataset splitting (The total number of fresh and infected fish images with augmentation). }\label{tab:datasetSplittingWithAugmentaion}
\begin{tabular}{ccc}
\hline
\textbf{Fish} & \textbf{Training images} & \textbf{Testing images} \\ \hline
Fresh fish & 320 & 64 \\ \hline
Infected fish & 785 & 157 \\ \hline
\textbf{Total} & \textbf{1,105} & \textbf{221} \\ \hline
\end{tabular}
\end{table}
\subsection{Feature Extraction}
We consider two types of features for interpreting fish disease: statistical features and grey-level co-occurrence matrix (GLCM) features. The statistical features are described as follows; a computational sketch follows the list.
\begin{itemize}
\item Mean ($\mu$): Presume there are $P$ pixels in the infected regions, and let $\psi_i$ denote the gray-scale intensity of the $i$-th pixel. The mean $\mu$ is then given by Equation \ref{eqn:Fea_1}.
\begin{equation}\label{eqn:Fea_1}
\mu = \frac{\sum_{i=1}^{P}\psi_i}{P}
\end{equation}
\item Standard deviation ($\sigma$): With $P$ pixels in the infected regions, $\psi_i$ the gray-scale intensity of the $i$-th pixel, and $\mu$ the mean intensity of all pixels, the standard deviation $\sigma$ is defined in Equation \ref{eqn:Fea_2}.
\begin{equation}\label{eqn:Fea_2}
\sigma = \sqrt{\frac{\sum_{i=1}^{P}(\psi_i-\mu)^2}{P}}
\end{equation}
\item Variance (${\sigma}^2$): If there are $P$ pixels in the infected regions, where $\psi_i$ is the gray-scale intensity of the $i$-th pixel and $\mu$ is the mean intensity of all pixels, then the variance ${\sigma}^2$ is defined in Equation \ref{eqn:Fea_3}.
\begin{equation}\label{eqn:Fea_3}
{\sigma}^2 = {\frac{\sum_{i=1}^{P}(\psi_i-\mu)^2}{P}}
\end{equation}
\item Kurtosis ($\kappa$): Presume there are $P$ pixels in the infected regions, where $\psi_i$ and $\mu$ represent the gray-scale intensity of the $i$-th pixel and the mean intensity of all pixels, respectively. The kurtosis $\kappa$ is then defined in Equation \ref{eqn:Fea_4}.
\begin{equation}\label{eqn:Fea_4}
\kappa = \frac{\frac{1}{P}\sum_{i=1}^{P}(\psi_i-\mu)^4}{\left ( \frac{1}{P}\sum_{i=1}^{P}(\psi_i-\mu)^2 \right )^2}-3
\end{equation}
\item Skewness ($\gamma$): Here, $\mu$ is the mean, $\sigma$ is the standard deviation, and $\rho_m$ is the mode of the gray-scale intensities of all pixels in the infected areas. The skewness $\gamma$ is defined in Equation \ref{eqn:Fea_5}.
\begin{equation}\label{eqn:Fea_5}
\gamma = \frac{\mu - \rho_m}{\sigma}
\end{equation}
\end{itemize}
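The sketch below computes all five statistics from the pixel intensities of the segmented infected region; taking the mode $\rho_m$ as the most frequent integer intensity level is our reading of Equation \ref{eqn:Fea_5}.
\begin{verbatim}
import numpy as np

def statistical_features(psi):
    """(mean, std, variance, kurtosis, skewness) of a 1-D array
    of gray-scale intensities from the infected region."""
    mu = psi.mean()
    sigma = psi.std()                 # population standard deviation
    var = psi.var()
    kurt = ((psi - mu) ** 4).mean() / ((psi - mu) ** 2).mean() ** 2 - 3
    mode = np.bincount(psi.astype(np.int64)).argmax()  # frequent level
    skew = (mu - mode) / sigma        # Pearson mode skewness
    return mu, sigma, var, kurt, skew
\end{verbatim}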
Along with these statistical features, a number of GLCM features are used; they are convenient for extracting textural features from images. By examining the relationship between two pixels at a time, the intensity variation at each pixel can be assessed.
Let $f(a,b)$ be a two-dimensional digital image with $X \times Y$ pixels and $G_L$ gray levels, and let $(a_1,b_1)$ and $(a_2,b_2)$ be two pixels of $f(a,b)$ separated by distance $D$ at orientation $\theta$ with respect to the ordinate. The GLCM $M(i, j, D, \theta)$ is then defined by Equation \ref{eqn:glcm_1}:
\begin{equation}\label{eqn:glcm_1}
M(i, j, D, \theta) = \left | \left \{ (a_1,b_1),(a_2,b_2) \in X \times Y : f(a_1,b_1) = i, \; f(a_2,b_2) = j \right \} \right |
\end{equation}
In this experiment, we use five GLCM features, namely contrast ($C$), correlation ($\chi$), energy ($\zeta$), entropy ($\Delta$), and homogeneity ($\xi$), represented in Equations \ref{eqn:glcm_2}--\ref{eqn:glcm_6}.
\begin{equation}\label{eqn:glcm_2}
Contrast \; \; C = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}(i-j)^2M(i,j)
\end{equation}
\begin{equation}\label{eqn:glcm_3}
Correlation \; \; \chi = \frac{\sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}i.j.M(i,j)-\mu_a.\mu_b}{\sigma_a.\sigma_b}
\end{equation}
\begin{equation}\label{eqn:glcm_4}
Energy \; \; \zeta = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}M(i,j)^2
\end{equation}
\begin{equation}\label{eqn:glcm_5}
Entropy \; \; \Delta = -\sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}M(i,j)\log M(i,j)
\end{equation}
\begin{equation}\label{eqn:glcm_6}
Homogeneity \; \; \xi = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}\frac{M(i,j)}{1+(i-j)^2}
\end{equation}
Here, $\mu_a$, $\mu_b$, $\sigma_a$, and $\sigma_b$ are the means and standard deviations of the row and column entries, respectively.
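These GLCM features can be computed with scikit-image (version 0.19 or later for the \texttt{gray*} spellings); note that scikit-image's \texttt{ASM} property matches the energy definition in Equation \ref{eqn:glcm_4}, while entropy is not built in and is computed manually here.
\begin{verbatim}
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, distance=1, angle=0.0):
    """Contrast, correlation, energy, entropy and homogeneity
    from a normalized GLCM of an 8-bit grayscale image."""
    glcm = graycomatrix(gray, distances=[distance], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    M = glcm[:, :, 0, 0]              # joint probabilities M(i, j)
    entropy = -np.sum(M[M > 0] * np.log(M[M > 0]))
    return {
        'contrast': graycoprops(glcm, 'contrast')[0, 0],
        'correlation': graycoprops(glcm, 'correlation')[0, 0],
        'energy': graycoprops(glcm, 'ASM')[0, 0],  # sum of M(i,j)^2
        'entropy': entropy,
        'homogeneity': graycoprops(glcm, 'homogeneity')[0, 0],
    }
\end{verbatim}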
\subsection{Proposed Classifier}
Here, we use a linear SVM with soft margins (for nonseparable cases) as our classifier. Since the numbers of training and testing images (without augmentation) are 231 and 35, respectively, the training dataset is $\{(x_1,y_1), (x_2, y_2), ...,(x_{231}, y_{231})\}$, where $x_i = (\mu, \sigma, {\sigma}^2,\kappa, \gamma, C, \chi, \zeta, \Delta, \xi)$ is the input vector and $y_i = \pm 1$. The Lagrange multipliers $\{\alpha_1, \alpha_2,...,\alpha_{231}\}$ are obtained by maximizing Equation \ref{eqn:SVM_Evaluation}:
\begin{equation}\label{eqn:SVM_Evaluation}
Q(\alpha ) = \sum_{i=1}^{231}\alpha _i - \frac{1}{2} \sum_{i,j=1}^{231}\alpha _i \alpha_j y_i y_j (x_i \cdot x_j)
\end{equation}
subject to the constraints\\
1. $\sum_{i=1}^{231}\alpha_iy_i = 0$\\
2. $0 \leq \alpha_i \leq C \; \; for \; i = 1,2,...,231$
\vspace{.5cm}
Where $C$ is a non-negative parameter functioning as an upper-bound value of $\alpha_i$.
The $C$ parameter is known as the penalty parameter; it assigns a penalty to every misclassified data point. A low value of $C$ penalizes misclassifications lightly, so a large-margin decision boundary is chosen at the expense of a higher number of misclassifications. With a large value of $C$, the SVM seeks to reduce the number of misclassifications, resulting in a smaller-margin decision boundary.
We set all SVM parameters through the training process and use a large numeric value for $C$. Among the four kernels we tried, namely linear, sigmoid, polynomial, and Gaussian, the accuracy varied only negligibly, and the linear kernel performed satisfactorily with a short processing time, so we adopted it in our work.
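The kernel comparison can be reproduced with a cross-validated loop such as the one below, again with placeholder data standing in for the extracted features.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(266, 10))              # placeholder features
y = np.where(rng.random(266) < 0.7, 1, -1)  # placeholder labels

for kernel in ['linear', 'sigmoid', 'poly', 'rbf']:  # rbf = Gaussian
    scores = cross_val_score(SVC(kernel=kernel, C=10.0), X, y, cv=5)
    print(f'{kernel}: {scores.mean():.3f} +/- {scores.std():.3f}')
\end{verbatim}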
\subsection{Performance Evaluation Metrics}
We evaluate the performance of our trained SVM model using several metrics. To visualize the model's performance in predicting new data or images, we use a confusion matrix, which comprises four building blocks: True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). TP and TN are the cases correctly predicted as positive and negative, respectively; FP are negative cases incorrectly predicted as positive, and FN are positive cases incorrectly predicted as negative \cite{marom2010using}. From the confusion matrix we compute further metrics to evaluate our model: Accuracy, Precision, Recall or Sensitivity, Specificity, and F1 score, calculated with the formulas below.
\textbf{Accuracy:} Accuracy, informally interpreted in Equation \ref{equation:accuracy}, is the proportion of correctly classified fish out of the total number of fish in the test set.
\begin{equation}
Accuracy = \frac{\sum_{i}^{N} P_i}{\sum_{i}^{N} \left | Q_i \right |} \times 100\%
\label{equation:accuracy}
\end{equation}
where ${\sum_{i}^{N} P_i}$ is the number of correct predictions and ${\sum_{i}^{N} \left | Q_i \right |}$ is the total number of predictions.
For binary classification, Accuracy can also be calculated as follows with Equation \ref{equation:accuracy2}.
\begin{equation}
Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100\%
\label{equation:accuracy2}
\end{equation}
where $TP$, $TN$, $FP$, and $FN$ denote the true positives, true negatives, false positives, and false negatives, respectively.
\textbf{Precision:} Precision is the ratio of correctly classified fish (TP) to all fish classified as positive (the sum of TP and FP). It gives the percentage of accurately classified fish, as in Equation \ref{equation:precision}.
\begin{equation}
Precision = \frac{TP}{TP + FP} \times 100\%
\label{equation:precision}
\end{equation}
\textbf{Recall or Sensitivity:} Recall is the ratio of correctly classified fish (TP) to all ground-truth positive fish (the total of TP and FN), as defined in Equation \ref{equation:recall}.
\begin{equation}
Recall \; or \; Sensitivity = \frac{TP}{TP + FN} \times 100\%
\label{equation:recall}
\end{equation}
\textbf{Specificity:} Specificity is the ratio of TN to the sum of FP and TN, as in Equation \ref{equation:specificity}.
\begin{equation}
Specificity = \frac{TN}{FP + TN} \times 100\%
\label{equation:specificity}
\end{equation}
\textbf{F1 score (F-measure):} This metric is the harmonic mean of precision and recall \cite{minh2017deep}, as in Equation \ref{equation:F1}.
\begin{equation}
F1 \, score = \frac{2 * Precision * Recall}{Precision + Recall}
\label{equation:F1}
\end{equation}
We cannot rely only on the F1 score and accuracy, since a very high cutoff can exaggerate a model's accuracy. We therefore also measure the false positive rate (FPR), false negative rate (FNR), and true positive rate (TPR) with Equations \ref{equation:fpr}--\ref{equation:tpr}.
\begin{equation}
FPR = \frac{FP}{FP + TN} \times 100\%
\label{equation:fpr}
\end{equation}
\begin{equation}
FNR = \frac{FN}{FN + TP} \times 100\%
\label{equation:fnr}
\end{equation}
\begin{equation}
TPR = \frac{TP}{TP + FN} \times 100\%
\label{equation:tpr}
\end{equation}
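All of these quantities follow from the four confusion matrix counts; a compact sketch:
\begin{verbatim}
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    """Compute the metrics above from a binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # recall = sensitivity = TPR
    return {
        'accuracy': (tp + tn) / (tp + tn + fp + fn),
        'precision': precision,
        'recall': recall,
        'specificity': tn / (fp + tn),
        'f1': 2 * precision * recall / (precision + recall),
        'fpr': fp / (fp + tn),
        'fnr': fn / (fn + tp),
    }
\end{verbatim}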
We use the receiver operating characteristic (ROC) curve for additional evaluation and estimate the area under the ROC curve (AUC) from it \cite{bradley1997use}. The ROC curve plots the TPR against the FPR from Equations \ref{equation:tpr} and \ref{equation:fpr}, and shows how well a model or classifier differentiates between classes: the higher the AUC, the better the classifier's predictions.
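Given continuous decision scores from the classifier, the ROC curve and AUC can be obtained as follows; the placeholder data and the 231/35 split mirror the earlier sketches.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(266, 10))              # placeholder features
y = np.where(rng.random(266) < 0.7, 1, -1)  # placeholder labels
clf = SVC(kernel='linear', C=1.0).fit(X[:231], y[:231])

scores = clf.decision_function(X[231:])  # signed margins from the SVM
fpr, tpr, thresholds = roc_curve(y[231:], scores)
print('AUC:', roc_auc_score(y[231:], scores))
\end{verbatim}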
\section{Experimental Results}
This section examines the results of our SVM model to inspect its robustness and to assess the outcomes of our techniques on both the regular and augmented datasets. We present the results and comparisons with graphical representations and tables.
First, an input image of any dimension is converted and magnified to a fixed size of 600 $\times$ 250 pixels according to our proposed framework. The image is then segmented into various regions using the \textit{k}-means clustering technique, which makes the infected and fresh areas of a fish image easily identifiable; after segmentation, the infected areas are more observable. These stages are shown in Figure \ref{FIG:salmonStages}.
\begin{figure*}
\centering
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_31.png}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_31.png}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/Clache1.jpg}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.15]{Images/SalmonFish/ImageProcessing/Kmeans1.png}
\end{subfigure}
\medskip
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_32.png}
\caption{Input image}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_32.png}
\caption{Resized image}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/Clache2.jpg}
\caption{Contrast enhanced image}
\end{subfigure}
\begin{subfigure}{.24\linewidth}
\centering\includegraphics[scale=.15]{Images/SalmonFish/ImageProcessing/Kmeans2.png}
\caption{k-means segmented image}
\end{subfigure}
\medskip
\caption{Various appearances of image processing (Exhibit the four stages of image processing before features extraction).}
\label{FIG:salmonStages}
\end{figure*}
\subsection{Classification Performance of Proposed SVM}
The classification assessment of our proposed SVM classifier is described in Table \ref{tab:ClassificationResult} (without augmentation) and Table \ref{tab:ClassificationResultWithAugmentation} (with augmentation). Both tables cover the two classes handled by the SVM classifier: fresh fish and infected fish. In Table \ref{tab:ClassificationResult}, the fresh fish class reaches a high sensitivity of 98.46\% with an accuracy of 92.0\%, along with a precision, F1 score, and specificity of 92.75\%, 95.52\%, and 50.00\%, respectively. For the infected fish class, the highest value is the F1 score of 96.02\%, with an accuracy of 93.50\% and a recall of 98.13\%. Table \ref{tab:ClassificationResultWithAugmentation} shows accuracies of 93.75\% for the fresh fish class and 94.90\% for the infected fish class, with good F1 scores of 96.23\% and 97.08\%, respectively.
Comparing Table \ref{tab:ClassificationResult} and Table \ref{tab:ClassificationResultWithAugmentation}, the infected fish class accuracy is 93.50\% and 94.90\%, respectively, higher than that of the fresh fish class. Looking at the FPR and FNR of both classes, the infected class shows a slightly higher FNR and a lower FPR than the fresh fish class. The low FPR and FNR percentages indicate that our model is neither underfitting nor overfitting. Thus, as an individual class prediction, the infected fish class performs satisfactorily.
\begin{table*}[]
\caption{Class-wise classification results of SVM (Without augmentation).}
\label{tab:ClassificationResult}
\begin{tabular}{ccccccccc}
\hline
\textbf{Classifier} & \textbf{Class} & \textbf{\begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Precision\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recall/Sensitivity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Specificity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}F1-score\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False positive\\ Rate (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False negative\\ Rate (\%)\end{tabular}} \\ \hline
\multirow{2}{*}{SVM} & Fresh fish & 92.0 & 92.75 & 98.46 & 50.0 & 95.52 & 50.0 & 1.54 \\ \cline{2-9}
& Infected fish & 93.50 & 94.01 & 98.13 & 75.0 & 96.02 & 25.0 & 1.875 \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\caption{Class-wise classification results of SVM (With augmentation).}
\label{tab:ClassificationResultWithAugmentation}
\begin{tabular}{ccccccccc}
\hline
\textbf{Classifier} & \textbf{Class} & \textbf{\begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Precision\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recall/Sensitivity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Specificity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}F1-score\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False positive\\ Rate (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False negative\\ Rate (\%)\end{tabular}} \\ \hline
\multirow{2}{*}{SVM} & Fresh fish & 93.75 & 96.23 & 96.23 & 81.82 & 96.23 & 18.19 & 3.77 \\ \cline{2-9}
& Infected fish & 94.90 & 98.52 & 95.68 & 88.89 & 97.08 & 11.11 & 4.31 \\ \hline
\end{tabular}
\end{table*}
We present the two confusion matrices as heat maps for better graphical representation in Figure \ref{FIG:heatmapSVM}, both with and without augmentation. These heat maps conveniently display the correct classifications and misclassifications for our binary classes. From the confusion matrix in Figure \ref{FIG:heatmapSVM} (a), fresh fish are misclassified as infected only twice, and infected fish are misclassified as fresh once. Figure \ref{FIG:heatmapSVM} (b) shows seven fresh fish misclassified as infected and six infected fish misclassified as fresh.
\begin{figure*}[!h]
\centering
\begin{subfigure}{.48\linewidth}
\centering\includegraphics[width=\linewidth]{Images/Results/heatMapSVM.pdf}
\caption{Without augmentation}
\end{subfigure}
\begin{subfigure}{.48\linewidth}
\centering\includegraphics[width=\linewidth]{Images/Results/heatMapSVMWithAugmentation.pdf}
\caption{With augmentation}
\end{subfigure}
\caption{Confusion matrix for SVM classifier.}
\label{FIG:heatmapSVM}
\end{figure*}
We report the overall performance of our proposed SVM classifier in Table \ref{tab:matricEvaluation}, which gives the metric-wise performance both with and without augmentation. The accuracy is 91.42\% without augmentation and 94.12\% with augmentation, which is reliable for detecting infected fish.
\begin{table*}[]
\caption{Metric evaluation of SVM classifier.}
\label{tab:matricEvaluation}
\begin{tabular}{|c|c|c|}
\hline
\multirow{2}{*}{\textbf{Evaluation metric}} & \multicolumn{2}{c|}{\textbf{Value (\% )}} \\ \cline{2-3}
& \multicolumn{1}{l|}{\textbf{Without augmentation}} & \multicolumn{1}{l|}{\textbf{With augmentation}} \\ \hline
Accuracy & 91.42 & 94.12 \\ \hline
Precision & 86.67 & 89.06 \\ \hline
Recall or Sensitivity & 92.86 & 90.48 \\ \hline
Specificity & 90.48 & 95.57 \\ \hline
F1-score & 89.66 & 89.76 \\ \hline
False positive rate & 4.43 & 9.52 \\ \hline
False negative rate & 9.52 & 7.14 \\ \hline
\end{tabular}
\end{table*}
In Figure \ref{FIG:rovSVM}, we show ROC curves that reflect the comprehensive classification performance of the SVM with and without augmentation by plotting the true positive rate (TPR) against the false positive rate (FPR).
Figure \ref{FIG:rovSVM} (a) shows a micro-average AUC score of 96.20\% and a macro-average AUC score of 95.93\% without augmentation, and Figure \ref{FIG:rovSVM} (b) shows a micro-average AUC score of 98.12\% and a macro-average AUC score of 96.71\% with augmentation.
\begin{figure*}[!h]
\centering
\begin{subfigure}{.48\linewidth}
\centering\includegraphics[width=\linewidth]{Images/Results/ROCSVM.pdf}
\caption{Without augmentation}
\end{subfigure}
\begin{subfigure}{.48\linewidth}
\centering\includegraphics[width=\linewidth]{Images/Results/ROCSVMWithAugmentaion.pdf}
\caption{With augmentation}
\end{subfigure}
\caption{ROC curve for SVM classifier.}
\label{FIG:rovSVM}
\end{figure*}
So far we have analyzed the SVM, but we also investigated three other classifiers for comparison: decision tree, logistic regression, and naïve Bayes. Figure \ref{FIG:comparisonClassifier} shows a bar diagram of the evaluation metrics for all four classifiers. The SVM with augmentation outperforms the other three classifiers on every metric. The decision tree performs more reliably than logistic regression, with an accuracy of 81.54\% versus 80.0\%, and logistic regression in turn outperforms naïve Bayes. The decision tree's remaining metrics, precision, sensitivity, specificity, and F1-score, are 84.84\%, 80.0\%, 83.33\%, and 82.35\%, respectively: lower than the SVM, but higher than logistic regression and naïve Bayes.
\begin{figure*}
\centering
\centering\includegraphics[scale=.46]{Images/Results/ClassificationComparision.pdf}
\caption{Comparison of classifiers evaluation metrics with image augmentation (Value of Accuracy, Precision, Sensitivity, Specificity and F1-score for SVM, Decision Tree, Logistic Regression and Naive Bayes).}
\label{FIG:comparisonClassifier}
\end{figure*}
Finally, Figure \ref{FIG:predictedFish} shows predictions from our SVM classifier for the two classes, fresh fish and infected fish. The classifier correctly predicts the class of each input image.
\begin{figure*}
\centering
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.23]{Images/Results/PredictFresh.png}
\caption{Original: Fresh Fish}
\end{subfigure}
\begin{subfigure}{.45\linewidth}
\centering\includegraphics[scale=.23]{Images/Results/PredictInfected.png}
\caption{Original: Infected Fish}
\end{subfigure}
\medskip
\caption{Fish prediction according to SVM.}
\label{FIG:predictedFish}
\end{figure*}
\subsection{Comparative Analysis}
Research on machine learning based fish disease detection has not yet reached a satisfactory level, and the related works are comparatively fewer than in other detection domains such as fruit and crop disease. To put our proposed SVM's evaluation metrics for identifying infected fish in context, we study some relevant published research. In Table \ref{tab:comprisonTable}, we list research related to fish disease identification: some works concentrate only on image processing, while others use machine learning based classification models. Shaveta et al. \cite{LRShaveta} used the \textit{k}-means segmentation algorithm with a feature set of size two and applied a neural network classifier, achieving an accuracy of 86\%. Lyubchenko et al. \cite{LRLyubchenko} applied segmentation combining \textit{k}-means clustering and mathematical morphology; this work used three features in image processing but did not apply any classifier, so accuracy is not applicable. Malik et al. \cite{LRMalik} used edge detection and morphological operations for segmentation, took three features, and applied multiple classification models for comparison: a neural network and K-NN (nearest neighbour), with 86.0\% and 63.32\% accuracy, respectively.
\begin{table*}[]
\caption{Comparison analysis between this work and related works.}
\label{tab:comprisonTable}
\begin{tabular}{llllll}
\hline
\textbf{Work} & \textbf{\begin{tabular}[c]{@{}l@{}}Segmentation \\ Algorithm\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Feature \\ Set\\ Size\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Classification\\ Performed\end{tabular}} & \textbf{Classifier} & \textbf{\begin{tabular}[c]{@{}l@{}}Accuracy\\ (\%)\end{tabular}} \\ \hline
This work (with augmentation) & k-means clustering & 10 & Yes & SVM & 94.12 \\
This work (without augmentation) & --- & --- & --- & --- & 91.42 \\ \hline
Shaveta et al. \cite{LRShaveta} & k-means clustering & 2 & Yes & Neural network & 86.0 \\ \hline
Lyubchenko et al. \cite{LRLyubchenko} & \begin{tabular}[c]{@{}l@{}}Combination of k-means \\ clustering and \\ mathematical morphology\end{tabular} & 3 & No & Not applicable & Not applicable \\ \hline
Malik et al. \cite{LRMalik} & \begin{tabular}[c]{@{}l@{}}Edge detection and \\ morphological operation\end{tabular} & 3 & Yes & \begin{tabular}[c]{@{}l@{}}Neural network \\ and K-NN (Nearest \\ Neighbour)\end{tabular} & \begin{tabular}[c]{@{}l@{}}86.0 (NN) and \\ 63.32 (K-NN)\end{tabular} \\ \hline
\end{tabular}
\end{table*}
\section{Discussion}
Salmon fish disease detection is an important research area that deserves more attention in automated research, yet intelligent solutions for it have rarely appeared, and no existing dataset is available for this research purpose. In this work, we therefore create a novel dataset for salmon fish disease detection and conduct our research on it. Table \ref{tab:datasetSplitting} and Table \ref{tab:datasetSplittingWithAugmentaion} summarize our dataset and how we divide it for our experiments. Figure \ref{FIG:salmonFishExample} shows a small portion of the dataset with images of fresh and infected fish; these are the input images that we processed and fed to our classifier.
The main goal of this research is to classify infected and fresh salmon fish. We conduct the experiment on a real-world image dataset to build a reliable system. To ensure high accuracy, we choose an efficient machine learning algorithm, the support vector machine, which is one of the leading supervised learning algorithms for classification. We justify selecting the SVM classifier by comparing our results with other algorithms: the graph in Figure \ref{FIG:comparisonClassifier} supports our decision, as the SVM outperforms the other techniques in distinguishing infected from fresh fish on every performance evaluation metric we considered. Compared with logistic regression, decision tree, and naïve Bayes, the SVM scores higher on accuracy, precision, sensitivity, specificity, and F1 score. The metric values in Table \ref{tab:ClassificationResult} and Table \ref{tab:ClassificationResultWithAugmentation} for our proposed classifier confirm the effectiveness of this work.
We apply image processing techniques, namely cubic spline interpolation, adaptive histogram equalization, and \textit{k}-means segmentation, before the classification process. Figure \ref{FIG:salmonStages} shows how these techniques normalize the raw input image for the classifier. Figure \ref{FIG:salmonStages}(b) shows the resized output of the image in \ref{FIG:salmonStages}(a), obtained with cubic spline interpolation. Figure \ref{FIG:salmonStages}(c) shows the contrast-enhanced image resulting from adaptive histogram equalization, a step that makes the image dataset clearer for the classifier. We then apply \textit{k}-means clustering segmentation to separate the infected and fresh parts of an image; segmented examples from our experiment are displayed in \ref{FIG:salmonStages}(d).
We conduct our experiment by feeding the processed images to the proposed SVM classifier and assess its performance with several evaluation metrics, which are good indicators of the efficiency of a developed model. Table \ref{tab:matricEvaluation} reports the values of these metrics. We also plot a heat map of the confusion matrix for our model; a confusion matrix is a visualization of the performance of a machine learning algorithm. Figures \ref{FIG:heatmapSVM} (a) and \ref{FIG:heatmapSVM} (b) present our confusion matrices, in which the number of misclassifications by our classifier is very low.
We present another result justifying our classifier: the receiver operating characteristic (ROC) curve in Figure \ref{FIG:rovSVM}. This graph conveys the classifier's performance at every possible classification threshold by plotting the true positive rate against the false positive rate.
We mentioned previously that little research has been conducted on fish diseases and that existing work is not up to the mark. Nevertheless, we identified some related works and compared them with ours to position our research. Table \ref{tab:comprisonTable} differentiates our work from the others. The most noticeable difference is that none of these works focuses explicitly on salmon fish. One of them uses only image processing techniques and is thus not an intelligent system; the other two use neural networks but achieve lower accuracy than our classifier, and the number of features they consider is lower than ours.
\vspace{-5pt}
\section{Conclusion and Future Work}
In this work, we introduce a machine learning based classification model (SVM) to identify infected fish. The novel real-world dataset used to train our model comprises a non-augmented set (163 infected and 68 fresh training images) and an augmented set (785 infected and 320 fresh training images). We classify fish into two classes: fresh fish and infected fish. We evaluate our model with various metrics and visualize the classification results. Besides developing the classifier, we applied modern image processing techniques, namely \textit{k}-means segmentation, cubic spline interpolation, and adaptive histogram equalization, to make the input images more suitable for the classifier. We also compared our results against three other classification models and observed that our proposed classifier is the best solution in this case.
This work contributes an automated fish disease detection system superior to existing systems that rely on image processing alone or achieve lower accuracy. We depend not only on modern image processing techniques but also incorporate well-suited supervised learning techniques, and we successfully develop a classifier that predicts infected fish with higher accuracy than other systems on our novel real-world dataset.
In the future, we plan to utilize various convolutional neural network (CNN) architectures to identify fish disease more precisely. Moreover, we will focus on implementing a real-life IoT device based on the proposed system, which could offer farmers in aquaculture a concrete way to identify infected salmon and take proper steps before facing unexpected losses. We will work with different fish datasets to make our system usable in other sectors of aquaculture, and we will also concentrate on enlarging our existing dataset, as salmon is in high demand worldwide.
\vspace{-5pt}
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{H}{umans} have the amazing ability to learn new concepts from only a few examples, and then effortlessly generalize this knowledge to new samples. In contrast, despite considerable progress, existing image classification models based on deep neural networks e.g.\@,~\cite{krizhevsky2012imagenet,resnet}, are still highly dependent on large amounts of annotated training data~\cite{imagenet_cvpr09} to achieve satisfactory performance. This learnability gap between human intelligence and existing neural networks has motivated many to study learning from a few samples, e.g.\@,~\cite{fei2006one,lake2015human,ravi2017optimization,finn2017model}. Meta-learning, \textit{a.k.a.} learning to learn~\cite{Schmidhuber1992,thrun2012learning}, emerged as a promising direction for few-shot learning~\cite{andrychowicz2016learning,ravi2017optimization,finn2017model,zhen2020learning}.
The working mechanism of meta-learning involves a meta-learner that exploits the common knowledge from various tasks to improve the performance of each individual task. Remarkable success has been achieved in learning good parameter initializations~\cite{finn2017model,rusu2018meta}, efficient optimization update rules~\cite{andrychowicz2016learning, ravi2017optimization}, and powerful common metrics~\cite{vinyals2016matching, snell2017prototypical} from related tasks, which enables fast adaptation to new tasks with few training samples. Meta-learning has also proven to be effective in learning amortized networks shared by related tasks, which generate specific parameters \cite{gordon2018meta} or normalization statistics \cite{du2020metanorm} for individual few-shot learning tasks. However, how to properly define and exploit the prior knowledge from experienced tasks remains an open problem for few-shot learning, and is the one we address in this paper.
An effective base-learner should be powerful enough to solve individual tasks, while being able to absorb the information provided by the meta-learner for overall benefit. Kernels \cite{smola1998learning,scholkopf2018learning,hofmann2008kernel} have proven to be a powerful technique in the machine learning toolbox, e.g.\@,~\cite{cristianini2000introduction,smola2004tutorial,rahimi2007random,sinha2016learning,bach2004multiple}, as they are able to produce strong performance without relying on a large amount of labelled data.
Moreover, task-adaptive kernels with random features, leveraging data-driven sampling strategies~\cite{sinha2016learning}, achieve improved performance over universal ones, at low sampling rates~\cite{hensman2017variational,carratino2018learning,bullins2018not,li2019implicit}. This makes kernels with data-driven random features well-suited tools for learning tasks with limited data. Hence, we introduce kernels as base-learners into the meta-learning framework for few-shot learning.
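As a brief illustration of the random Fourier feature technique underlying such data-driven kernels (a generic sketch, not the MetaKernel implementation itself), an RBF kernel $k(x,y)=\exp(-\gamma\|x-y\|^2)$ can be approximated by an inner product of random features whose bases are drawn from the kernel's spectral distribution:
\begin{verbatim}
import numpy as np

def random_fourier_features(X, n_features=256, gamma=1.0, seed=0):
    """z(x) = sqrt(2/D) cos(Wx + b) with W ~ N(0, 2*gamma*I), the
    spectral distribution of the RBF kernel, and b ~ U[0, 2*pi];
    then z(x) @ z(y) approximates k(x, y)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(2.0 * gamma),
                   size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=10000)
exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
print(np.abs(Z @ Z.T - exact).max())  # small approximation error
\end{verbatim}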
However, due to the limited availability of samples, it is challenging to learn informative random features for few-shot tasks by relying solely on a task's own data. Therefore, exploring the prior knowledge shared across different but related tasks is essential for obtaining richer random features for few-shot learning.
\begin{figure*}[t]
\centering
\includegraphics[width=.9\linewidth]{framework.pdf}
\vspace{-2mm}
\caption{MetaKernel learning framework. The meta-learner employs an LSTM-based context inference network $\phi(\cdot)$ to infer the spectral distribution over the kernel bases $\bm{\omega}_0^{t}$ from the support set $\mathcal{S}^t$ of the current task $t$ and the outputs $\mathbf{h}^{t-1}$ and $\mathbf{c}^{t-1}$ of the previous task. The enriched random bases $\bm{\omega}_k^{t}$ are obtained via conditional normalizing flows with a flow of length $k$. During the learning process, the cell state in the LSTM is deployed to accumulate shared knowledge by experiencing a set of prior tasks. The \textit{remember} and \textit{forget} gates in the LSTM episodically refine the cell state by absorbing information from each experienced task. For each individual task, the task-specific information extracted from the support set is combined with distilled information from the previous tasks to infer the adaptive spectral distribution of the kernels.}
\label{fig:MeteKernel}
\end{figure*}
We propose learning task-specific kernels in a data-driven way with variational random features by leveraging the shared knowledge provided by related tasks. To do so, we develop a latent variable model that treats the random Fourier basis of translation-invariant kernels as the latent variable. The posterior over the random feature basis corresponds to the spectral distribution associated with the kernel. The optimization of the model is formulated as a variational inference problem. Kernel learning with random Fourier features for few-shot learning allows us to leverage the universal approximation property of kernels to capture shared knowledge from related tasks. This probabilistic modelling framework provides a principled way of learning data-driven kernels with random Fourier features and, more importantly, fits well into the meta-learning framework for few-shot learning, providing us the flexibility to customize the variational posterior and leverage meta-knowledge to enhance individual tasks.
To incorporate the prior knowledge from experienced tasks, we further propose a context inference scheme to integrate the inference of random feature bases of the current task into the context of previous related tasks. The context inference provides a generalized way to integrate shared knowledge from the related tasks with task-specific information for the inference of random feature bases. To do so, we adopt a long short-term memory (LSTM) based inference network~\cite{hochreiter1997long}, leveraging its capability of learning long-term dependencies to collect and refine the shared meta-knowledge from a set of previously experienced tasks.
A preliminary conference version of this work, which also covers variational random features and task context inference, was published previously~\cite{zhen2020learning}. In this extended work, we further propose conditional normalizing flows to infer richer posteriors over the random bases, which allows us to obtain more informative random features. Normalizing flows (NFs)~\cite{dinh2014nice, dinh2016density, rezende2015variational, kingma2018glow, winkler2019learning} model complicated high-dimensional distributions by transforming a simple base distribution (e.g.\@, a standard normal prior) through a learnable, invertible mapping and then applying the change-of-variables formula. Normalizing flows, which have not yet been explored in few-shot learning, provide a well-suited technique for learning more expressive random features by transforming a random basis into a richer distribution. The overall learning framework of our MetaKernel is illustrated in Figure~\ref{fig:MeteKernel}.
To validate our method, we conduct extensive experiments on fourteen benchmark datasets for a variety of few-shot learning tasks including image classification and regression. Unlike our prior work~\cite{zhen2020learning}, we also experiment on the large-scale Meta-Dataset by Triantafillou et al.\@~\cite{triantafillou2019meta} and the challenging few-shot domain generalization setting suggested by Du et al.\@~\cite{du2020metanorm}. MetaKernel consistently delivers at least comparable and often better performance than state-of-the-art alternatives on all datasets, and the ablative analysis demonstrates the effectiveness of each MetaKernel component for few-shot learning.
The rest of this paper is organized as follows:
Section~\ref{sec:related} summarizes related work. Section~\ref{sec:method} presents the proposed MetaKernel framework. Section~\ref{sec:experiments} summarizes experimental details, state-of-the-art comparisons and detailed ablation studies. Section~\ref{sec:conclusion} closes with concluding remarks.
\section{Related Work}
\label{sec:related}
\subsection{Meta-Learning}
Meta-learning, or learning to learn, endows machine learning models with the ability to improve their performance by leveraging knowledge extracted from a number of prior tasks. It has received increasing research interest with breakthroughs in many directions, e.g.\@,~\cite{finn2017model,rusu2018meta,gordon2018meta,rajeswaran2019meta, hospedales2020meta}. Existing methods can be roughly categorized into four groups.
Models in the first group are based on distance metrics and generally learn a shared or adaptive embedding space in which query images are accurately matched to support images for classification. They rely on the assumption that a common metric space is shared across related tasks and usually do not employ an explicit base-learner for each task. By extending the matching network~\cite{vinyals2016matching} to few-shot scenarios, Snell et al.\@~\cite{snell2017prototypical} constructed a prototype for each class by averaging the feature representations of samples from the class in the metric space. Classification is conducted by matching query samples to prototypes according to their distances. To enhance the prototype representation, Allen et al.\@~\cite{allen2019infinite} proposed an infinite mixture of prototypes (IMP) to adaptively represent data distributions for each class, using multiple clusters instead of a single vector. Oreshkin et al.\@~\cite{oreshkin2018tadam} proposed a task-dependent adaptive metric for few-shot learning and established prototypes of classes conditioned on a task representation encoded by a task embedding network. Yoon et al.\@~\cite{yoon2019tapnet} proposed a few-shot learning algorithm aided by a linear transformer that performs task-specific null-space projection of the network output. Graph neural network based models generalize the matching methods by learning the message propagation from the support set and transferring it to the query set~\cite{garcia2018few}. Prototype based methods have recently been improved in a variety of ways \cite{cao2019theoretical,triantafillou2019meta,zhen2020memory}.
In this work, we design an explicit base-learner based on kernels for each individual task.
Algorithms in the second group learn an optimization that is shared across tasks, while being adaptable to new tasks. Finn et al.\@~\cite{finn2017model} proposed model-agnostic meta-learning (MAML) to learn an appropriate initialization of model parameters and adapt it to new tasks with only a few gradient steps. To make MAML less prone to meta-overfitting, easier to parallelize and more interpretable, Zintgraf et al.\@~\cite{zintgraf2019fast} proposed fast context adaptation via meta-learning (CAVIA), a single model that adapts to a new task via gradient descent by updating only a set of input parameters at test time, instead of the entire network. Ravi and Larochelle~\cite{ravi2017optimization} proposed an LSTM-based meta-learner that is trained to optimize a neural network classifier. It captures both the short-term knowledge in individual tasks and the long-term knowledge common to all tasks. Learning a shared optimization algorithm has also been explored to quickly learn new tasks~\cite{andrychowicz2016learning,chen2017learning}. Bayesian meta-learning methods~\cite{edwards2016towards,finn2018probabilistic, gordon2018meta,saemundsson2018meta} usually rely on hierarchical Bayesian models to learn the shared statistical information from different tasks and to infer the uncertainty of the models. Rusu et al.\@~\cite{rusu2018meta} proposed to learn a low-dimensional latent embedding of model parameters and perform optimization-based meta-learning in this space, which allows for a task-specific parameter initialization and achieves adaptation more effectively. Our method is orthogonal to optimization based methods and learns a specific base-learner for each task.
The third group explicitly learns base-learners that incorporate what the meta-learner has learned and effectively address individual tasks~\cite{gordon2018meta, bertinetto2018meta, zhen2020learning}. Gordon et al.\@~\cite{gordon2018meta} avoided the need for gradient based optimization at test time by amortizing the posterior inference of task-specific parameters in their VERSA. It amortizes the cost of inference and alleviates the need for second derivatives during training by replacing test-time optimization with a forward pass through the inference network. To enable efficient adaptation to unseen learning problems, Bertinetto et al.\@~\cite{bertinetto2018meta} incorporated fast solvers with closed-form solutions as the base learning component of their meta-learning framework. This teaches the deep network to use ridge regression as part of its own internal model, enabling it to quickly adapt to novel data. In our method, we also deploy an explicit base-learner but, differently, we leverage a memory mechanism based on an LSTM to collect shared knowledge from related tasks and enhance the base-learners for individual tasks.
In the fourth group, a memory mechanism is part of the solution, where an external memory module is deployed to store and leverage key knowledge for quick adaptation~\cite{santoro2016meta,munkhdalai2017meta, munkhdalai2017rapid}. Santoro et al.\@~\cite{santoro2016meta} introduced neural Turing machines into meta-learning by augmenting their neural network with an external memory module, which is used to rapidly assimilate new data to help make accurate predictions with only a few samples. Munkhdalai et al.\@~\cite{munkhdalai2017meta} proposed a Meta Network (MetaNet) to learn meta-level knowledge across tasks and shift the inductive biases via fast parameterization for rapid generalization. Munkhdalai et al.\@~\cite{munkhdalai2017rapid} designed conditionally shifted neurons within the framework of meta-learning, which modify their activation values with task-specific shifts retrieved from a memory module. In this work, we also leverage a memory mechanism, but, differently, we deploy an LSTM module to collect shared knowledge from related tasks experienced previously to help solve individual tasks.
\subsection{Kernel Learning}
Kernel learning with random Fourier features is a versatile and powerful tool in machine learning~\cite{bishop2006pattern, hofmann2008kernel, shervashidze2011weisfeiler}. Pioneering works~\cite{bach2004multiple,gonen2011multiple, duvenaud2013structure} learn to combine predefined kernels in a multi-kernel learning manner. Kernel approximation by random Fourier features (RFFs)~\cite{rahimi2008random} is an effective technique for efficient kernel learning~\cite{gartner2002multi}, which has recently become increasingly popular~\cite{sinha2016learning,carratino2018learning}. RFFs~\cite{rahimi2008random} are derived from Bochner's theorem~\cite{rudin1962fourier}.
\begin{theorem}[Bochner's theorem~\cite{rudin1962fourier}] A continuous, real valued, symmetric and shift-invariant function $\mathtt{k}(\mathbf{x},\mathbf{x}') = \mathtt{k}(\mathbf{x}-\mathbf{x}')$ on $\mathbb{R}^d$ is a positive definite kernel if and only if it is the Fourier transform of a positive finite measure $p(\bm{\omega})$ such that
\begin{align}
\mathtt{k}(\mathbf{x},\mathbf{x}') =& \int_{\mathbb{R}^d} e^{i\bm{\omega}^\top(\mathbf{x}-\mathbf{x}')}dp(\bm{\omega}) = \mathbb{E}_{\bm{\omega}}[\zeta_{\bm{\omega}}(\mathbf{x})\zeta_{\bm{\omega}}(\mathbf{x}')^*]
\end{align}
where $\zeta_{\bm{\omega}}(\mathbf{x}) = e^{i\bm{\omega}^\top \mathbf{x}}$.
\end{theorem}
It is guaranteed that $\zeta_{\bm{\omega}}(\mathbf{x})\zeta_{\bm{\omega}}(\mathbf{x}')^*$ is an unbiased estimate of $\mathtt{k}(\mathbf{x}, \mathbf{x}')$, and the approximation concentrates with sufficiently many RFF bases $\{\bm{\omega}\}$ drawn from $p(\bm{\omega})$~\cite{rahimi2008random}.
For a predefined kernel, e.g.\@, radial basis function (RBF), we sample from its spectral distribution using the Monte Carlo method, and obtain the explicit feature map:
\begin{equation}
\mathbf{z}(\mathbf{x}) = \frac{1}{\sqrt{D}} [\cos(\bm{\omega}_1^{\top} \mathbf{x} + b_1), \cdots, \cos(\bm{\omega}_D^{\top} \mathbf{x} + b_D)],
\label{rfs}
\end{equation}
where $\{\bm{\omega}_1, \cdots, \bm{\omega}_D\}$ are the random bases sampled from $p(\bm{\omega})$, and $[b_1, \cdots, b_D]$ are $D$ biases sampled from a uniform distribution with a range of $[0, 2\pi]$.
Finally, the kernel value $\mathtt{k}(\mathbf{x}, \mathbf{x}')=\mathbf{z}(\mathbf{x})\mathbf{z}(\mathbf{x}')^{\top}$ in $K$ is computed as the dot product of their random feature maps with the same bases.
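For illustration, the following minimal NumPy sketch (our illustration, not code from any released implementation) samples bases from the spectral distribution of an RBF kernel and compares the resulting Monte Carlo estimate with the exact kernel; note that we adopt the common $\sqrt{2/D}$ scaling, under which $\mathbf{z}(\mathbf{x})\mathbf{z}(\mathbf{x}')^{\top}$ is an unbiased estimate of the kernel value.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 16, 2048, 1.0      # input dim, number of bases, bandwidth

# The spectral distribution of the RBF kernel
# exp(-||x - x'||^2 / (2 sigma^2)) is a Gaussian with std. 1/sigma.
omega = rng.normal(0.0, 1.0 / sigma, size=(d, D))   # random bases
b = rng.uniform(0.0, 2.0 * np.pi, size=D)           # random biases

def z(X):
    """Random Fourier feature map, cf. Eq. (\ref{rfs})."""
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

X = rng.normal(size=(5, d))
K_approx = z(X) @ z(X).T
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dist / sigma ** 2)
print(np.abs(K_approx - K_exact).max())   # small for large D
\end{verbatim}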
Wilson and Adams~\cite{wilson2013gaussian} learn kernels in the frequency domain by modelling the spectral distribution as a mixture of Gaussians and computing its optimal linear combination. Instead of modelling the spectral distribution with explicit density functions, other works focus on optimizing the random base sampling strategy~\cite{yang2015carte, sinha2016learning}. Nonetheless, it has been shown that accurate approximation of kernels does not necessarily result in high classification performance \cite{avron2016quasi,chang2017data}. This suggests that learning adaptive kernels with random features by data-driven sampling strategies \cite{sinha2016learning} can improve the performance, even with a low sampling rate, compared to using universal random features \cite{avron2016quasi,chang2017data}.
Our work introduces kernels into few-shot meta-learning. We propose to learn kernels with random features in a data-driven way by formulating it as a variational inference problem. This allows us to generate task-specific kernels as well as to leverage shared knowledge from related tasks.
\subsection{Normalizing Flows}
Normalizing flows (NFs)~\cite{papamakarios2021normalizing,dinh2014nice,rezende2015variational} are promising methods for expressive probability density estimation with tractable distributions. Unlike variational methods, sampling and density evaluation can be efficient and exact for NFs with neat architectures. Generally, NFs are categorized into five types based on how they construct a flow: 1) Autoregressive flows were one of the first classes of flows with invertible autoregressive functions. Examples of such flows include inverse autoregressive flow~\cite{kingma2016improved} and masked autoregressive flow~\cite{papamakarios2017masked}. 2) Linear flows generalize the idea of permuting input variables via an invertible linear transformation~\cite{kingma2018glow}. 3) Residual flows~\cite{chen2019residual} are designed as residual networks, whose invertibility can be preserved under appropriate constraints. 4) Volume-preserving flows with effective invertible architectures, such as coupling layers~\cite{dinh2016density}, are typically used in generative tasks. 5) Infinitesimal flows provide an alternative strategy for constructing flows in continuous time by parameterizing their infinitesimal dynamics~\cite{rezende2015variational}. Normalizing flows have proven effective in applications with probabilistic models,
including probabilistic modelling~\cite{kingma2018glow, ho2019flow++,esling2019universal, prenger2019waveglow}, inference \cite{rezende2015variational,kingma2016improved} and representation learning~\cite{jacobsen2018revnet}.
In this work, we introduce conditional normalizing flows into our kernel learning framework to infer richer posteriors over the random bases, which yields more informative random features. To our knowledge, this is the first work that introduces conditional normalizing flows into the meta-learning framework for few-shot learning.
\section{Methodology}
\label{sec:method}
In this section, we present our methodology for learning kernels with random Fourier features under the meta-learning framework with limited labels. In Section~\ref{MLK}, we describe the base-learner based on kernel ridge regression. We introduce kernel learning with random features by formulating it as a variational inference problem in Section~\ref{metavrf}. We describe the context inference to leverage the shared knowledge provided by related tasks in Section~\ref{contextinference}. We further enrich the variational random features by conditional normalizing flows in Section~\ref{MetaVRF-CNF}.
\subsection{Meta-Learning with Kernels}
\label{MLK}
We adopt the episodic training strategy~\cite{ravi2017optimization} commonly used for few-shot meta-learning, which involves \textit{meta-training} and \textit{meta-test} stages. In the \textit{meta-training} stage, a meta-learner is trained to enhance the performance of a base-learner on a \textit{meta-training} set with a batch of few-shot learning tasks, where a task is usually referred to as an episode \cite{ravi2017optimization}. In the \textit{meta-test} stage, the base-learner is evaluated on a \textit{meta-test} set with different classes of data samples from the \textit{meta-training} set.
For the few-shot classification problem, we sample $N$-way $k$-shot classification tasks from the \textit{meta-training} set, where $k$ is the number of labelled examples for each of the $N$ classes. Given the $t$-th task with a support set $\mathcal{S}^{t}=\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{N\mathord\times k}$ and query set $\mathcal{Q}^{t}=\{(\tilde{\mathbf{x}}_i, \tilde{\mathbf{y}}_i)\}_{i=1}^m$ ($\mathcal{S}^{t}, \mathcal{Q}^{t} \subseteq \mathcal{X}$), we learn the parameters $\alpha^{t}$ of the predictor $f_{\alpha^{t}}$ using a standard learning algorithm with a kernel trick $\alpha^{t} = \Lambda(\Phi(X), Y)$, where $\mathcal{S}^{t} = \{X, Y\}$.\ Here, $\Lambda$ is the base-learner and $\Phi: \mathcal{X} \rightarrow \mathcal{H}$ is a mapping function from $\mathcal{X}$ to a dot product space $\mathcal{H}$. The similarity measure $\mathtt{k}(\mathbf{x}, \mathbf{x}')=\langle\Phi(\mathbf{x}),\Phi(\mathbf{x}')\rangle$ is called a
kernel~\cite{hofmann2008kernel}.
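To make the episodic setup concrete, here is a minimal sketch of $N$-way $k$-shot episode construction (our illustration; the data layout and the choice of $15$ query examples per class are assumptions consistent with the sampling protocol described in the experiments).
\begin{verbatim}
import numpy as np

def sample_episode(rng, data_by_class, N, k, m_per_class=15):
    """Sample one N-way k-shot episode: a support set of N*k
    examples and a query set of N*m_per_class examples."""
    classes = rng.choice(len(data_by_class), size=N, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        idx = rng.permutation(len(data_by_class[c]))[:k + m_per_class]
        examples = [data_by_class[c][i] for i in idx]
        support += [(x, label) for x in examples[:k]]
        query += [(x, label) for x in examples[k:]]
    return support, query
\end{verbatim}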
In traditional supervised learning, the base-learner for the $t$-th single task usually relies on a universal kernel to map the input into a dot product space for efficient learning. Once the base-learner is trained on the support set, its performance is evaluated on the query set using the following loss function:
\begin{equation}
\sum_{(\tilde{\mathbf{x}}, \tilde{\mathbf{y}}) \in \mathcal{Q}^{t}}
L \left(f_{\alpha^t} \big(\Phi(\tilde{\mathbf{x}} )\big), \tilde{\mathbf{y}}\right),
\end{equation}
where $L(\cdot)$ can be any differentiable function, e.g.\@,~cross-entropy loss. In the meta-learning setting for few-shot learning, we usually consider a batch of tasks.\ Thus, the meta-learner is trained by optimizing the following objective function \textsl{w.r.t.} the empirical loss on $T$ tasks:
\begin{equation}
\begin{aligned}
\vspace{-3mm}
\sum^T_{t} \sum_{(\tilde{\mathbf{x}}, \tilde{\mathbf{y}} ) \in \mathcal{Q}^{t}} L\left(f_{\alpha^{t}}\big(\Phi^{t}(\tilde{\mathbf{x}})\big), \tilde{\mathbf{y}}\right), \text{s.t.} \,\ \alpha^{t} = \Lambda\left(\Phi^{t}(X), Y\right),
\label{obj}
\vspace{-2mm}
\end{aligned}
\end{equation}
where $\Phi^t$ is the feature mapping function which can be obtained by learning a task-specific kernel $\mathtt{k}^t$ for each task $t$ with data-driven random Fourier features.
In this work, we employ kernel ridge regression, which has an efficient closed-form solution, as the base-learner $\Lambda$ for few-shot learning.\ The kernel value in the Gram matrix $K \in \mathbb{R}^{Nk\times Nk}$ is computed as $\mathtt{k}(\mathbf{x}, \mathbf{x}') = \Phi(\mathbf{x}) \Phi(\mathbf{x}')^{\top}$, where ``${\top}$'' is the transpose operation. The base-learner $\Lambda$ for a single task is obtained by solving the following objective \textsl{w.r.t.} the support set of this task,
\begin{equation}
\Lambda = \argmin_{\alpha} \Tr[(Y-\alpha K) (Y-\alpha K)^{\top}] + \lambda \Tr[\alpha K \alpha^{\top}],
\label{krg}
\end{equation}
which admits a closed-form solution
\begin{equation}
\alpha = Y(\lambda \mathrm{I} + K)^{-1}.
\label{closed}
\end{equation}
The learned predictor is then applied to samples in the query set $\tilde{X}$:
\begin{equation}
\hat{Y}=f_{\alpha}(\tilde{X})=\alpha \tilde{K}.
\end{equation}
Here, $\tilde{K} = \Phi(X)\Phi(\tilde{X})^\top\in \mathbb{R}^{Ck\times m}$, with each element as $\mathtt{k}(\mathbf{x}, \tilde{\mathbf{x}})$ between the samples from the support and query sets. Note that we also treat $\lambda$ in (\ref{krg}) as a trainable parameter by leveraging the meta-learning setting, and all these parameters are learned by the meta-learner.
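The closed-form base-learner can be sketched in a few lines (our illustration; the shapes follow the notation above, with $Y$ stored as an $N{\times}Nk$ one-hot matrix).
\begin{verbatim}
import numpy as np

def krr_base_learner(Z_s, Y, Z_q, lam):
    """Kernel ridge regression base-learner, cf. Eq. (\ref{closed}).
    Z_s: (n_s, D) support feature maps, Y: (N, n_s) one-hot labels,
    Z_q: (n_q, D) query feature maps, lam: ridge parameter."""
    K = Z_s @ Z_s.T                       # Gram matrix on the support set
    n_s = K.shape[0]
    # alpha = Y (lam I + K)^{-1}; solve is used since (lam I + K) is
    # symmetric positive definite.
    alpha = np.linalg.solve(lam * np.eye(n_s) + K, Y.T).T
    K_tilde = Z_s @ Z_q.T                 # cross Gram matrix, (n_s, n_q)
    return alpha @ K_tilde                # predictions, (N, n_q)
\end{verbatim}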
In order to obtain task-specific kernels, we consider learning adaptive kernels with random Fourier features in a data-driven way. This also enables shared knowledge of different tasks to be captured by exploring their dependencies in the meta-learning framework.
\subsection{Variational Random Features}
\label{metavrf}
From a probabilistic perspective, under the meta-learning setting for few-shot learning, the random feature basis is obtained by maximizing the conditional predictive log-likelihood of samples from the query set $\mathcal{Q}$:
\begin{align}
&\max_{p} \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}} \log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \\ &= \max_{p} \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}} \log \int p(\mathbf{y} |\mathbf{x}, \mathcal{S}, \bm{\omega}) p(\bm{\omega} | \mathbf{x}, \mathcal{S}) d\bm{\omega}.
\label{likeli}
\end{align}
We adopt a conditional prior distribution $p(\bm{\omega} | \mathbf{x}, \mathcal{S})$ over the base $\bm{\omega}$, as in the conditional variational autoencoder~\cite{sohn2015learning}, rather than an uninformative prior \cite{kingma2013auto,rezende2014stochastic}. By depending on the input $\mathbf{x}$, we infer the bases that can specifically represent the data, while leveraging the context of the current task by conditioning on the support set $\mathcal{S}$.
In order to infer the posterior $p(\bm{\omega} | \mathbf{y},\mathbf{x}, \mathcal{S})$ over $\bm{\omega}$, which is generally intractable, we use a variational distribution $q_{\phi}(\bm{\omega}| \mathcal{S})$ to approximate it, where the base is conditioned on the support set $\mathcal{S}$ by leveraging meta-learning. We obtain the variational distribution by minimizing the Kullback-Leibler (KL) divergence:
\begin{equation}
D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}| \mathcal{S}) || p(\bm{\omega} | \mathbf{y}, \mathbf{x}, \mathcal{S})].
\label{kl}
\end{equation}
By applying Bayes' rule to the posterior $p(\bm{\omega}|\mathbf{y},\mathbf{x}, \mathcal{S})$, we derive the evidence lower bound (ELBO) as
\begin{align}
\log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \geq \,\,\, &\mathbb{E}_{q_{\phi}(\bm{\omega}| \mathcal{S})} \log \, p(\mathbf{y} | \mathbf{x}, \mathcal{S}, \bm{\omega} ) \nonumber\\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}|\mathcal{S}) || p(\bm{\omega} | \mathbf{x}, \mathcal{S})].
\label{eq:elbo}
\end{align}
The first term of the ELBO is the predictive log-likelihood conditioned on the observation $\mathbf{x}$, $ \mathcal{S}$ and the inferred RFF bases $\bm{\omega}$. Maximizing it enables us to make an accurate prediction for the query set by utilizing the inferred bases from the support set. The second term in the ELBO minimizes the discrepancy between the meta variational distribution $q_{\phi}(\bm{\omega}|\mathcal{S})$ and the meta prior $p(\bm{\omega} | \mathbf{x}, \mathcal{S})$, which encourages samples from the support and query sets to share the same random Fourier bases. The full derivation of the ELBO is provided in the supplementary material.
We now obtain the objective by maximizing the ELBO with respect to a batch of $T$ tasks:
\begin{align}
\vspace{-4mm}
\mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[ \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\! \mathbb{E}_{q_{\phi}(\bm{\omega}^t| \mathcal{S}^t)} \log \, p(\mathbf{y} | \mathbf{x},\mathcal{S}^t, \bm{\omega}^t ) \nonumber\\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}^t|\mathcal{S}^t) || p(\bm{\omega}^t | \mathbf{x}, \mathcal{S}^t)] \Big],
\label{vi-obj-base}
\end{align}
where $\mathcal{S}^t$ is the support set of the $t$-th task associated with its specific bases $\{\bm{\omega}^t_{d}\}_{d=1}^{D}$ and $(\mathbf{x}, \mathbf{y}) \in \mathcal{Q}^t$ is the sample from the query set of the $t$-th task.
\subsection{Task Context Inference}
\label{contextinference}
We propose a context inference which puts the inference of random feature bases for the current task in the context of related tasks. We replace the variational distribution in (\ref{kl}) with a conditional distribution $q_{\phi}(\bm{\omega}^t| \mathcal{S}^t,\mathcal{C})$, where we use $\mathcal{C}$ to contain the shared knowledge provided by related tasks. This makes the bases $\{\bm{\omega}^t_{d}\}_{d=1}^{D}$ of the current $t$-th task conditioned also on the context $\mathcal{C}$ of related tasks, which gives rise to a new ELBO, as follows:
\begin{equation}
\begin{aligned}
\log p(\mathbf{y} | \mathbf{x}, \mathcal{S}^t) &\geq \,\,\, \mathbb{E}_{q_{\phi}(\bm{\omega}| \mathcal{S}^t,\mathcal{C})} \log \, p(\mathbf{y} | \mathbf{x}, \mathcal{S}^t, \bm{\omega} ) \\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}|\mathcal{S}^t,\mathcal{C}) || p(\bm{\omega} | \mathbf{x}, \mathcal{S}^t)].
\label{metaelbo}
\end{aligned}
\end{equation}
This can be represented in a directed graphical model, as shown in Figure~\ref{graph}. In a practical sense, the KL term in (\ref{metaelbo}) encourages the model to extract useful information from previous tasks for inferring the spectral distribution associated with each individual sample $\mathbf{x}$ of the query set in the current task.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{tci.pdf}
\caption{Graphical illustration of variational inference of the random Fourier basis under the meta-learning framework for few-shot learning, where $(\mathbf{x}, \mathbf{y})$ is a sample in the query set $\mathcal{Q}^t$. The base $\bm{\omega}^t$ of the $t$-th task is dependent on the support set $\mathcal{S}^t$ of the current task and the context $\mathcal{C}$ of related tasks. The dashed lines indicate variational inference.}
\label{graph}
\end{figure}
The context inference integrates the knowledge shared across tasks with the task-specific knowledge to build up adaptive kernels for individual tasks. The inferred random features are highly informative due to the information absorbed from experienced tasks. The base-learner built on the inferred kernel with the informative random features effectively solves the current task.
However, since there is usually a large number of related tasks, it is non-trivial to model them all simultaneously. We consider using recurrent neural networks to gradually accumulate information episodically along with the learning process by organizing tasks in a sequence. We propose an LSTM-based inference network, leveraging its innate capability of remembering long-term information~\cite{gers2000recurrent}. The LSTM offers a well-suited structure to implement the context inference. The cell state $\mathbf{c}$ stores and accrues the meta knowledge shared among related tasks. It can also be updated when experiencing a new task in each episode over the course of learning, where the output $\mathbf{h}$ is used to adapt the model to each specific task.
To be more specific, we model the variational posterior $q_{\phi}(\bm{\omega}^t| \mathcal{S}^t,\mathcal{C})$ through $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$, which is parameterized as a multi-layer perceptron (MLP) $\phi(\mathbf{h}^t)$. Note that $\mathbf{h}^t$ is the output from an LSTM that takes $\mathcal{S}^t$ and $\mathcal{C}$ as inputs. For the LSTM, we have
\begin{equation}
[\mathbf{h}^t, \mathbf{c}^t] = g_{\mathrm{LSTM}}(\mathcal{\bar{S}}^t,\mathbf{h}^{t-1},\mathbf{c}^{t-1}),
\label{vlstm}
\end{equation}
where $g_{\mathrm{LSTM}}(\cdot)$ is an LSTM network that takes the current support set, the output $\mathbf{h}^{t-1}$ and the cell state $\mathbf{c}^{t-1}$ as input. $\mathcal{\bar{S}}^t$ is the average over the feature representation vectors of samples in the support set~\cite{zaheer2017deep}. The feature representation is obtained by a shared convolutional network $\psi(\cdot)$. To incorporate more context information, we implement the inference network with bidirectional LSTMs \cite{schuster1997bidirectional,graves2005framewise}. We thus have $\mathbf{h}^t = [\stackrel{\rightarrow}{\mathbf{h}^t}, \stackrel{\leftarrow}{\mathbf{h}^t}]$,
where $\stackrel{\rightarrow}{\mathbf{h}^t}$ and $\stackrel{\leftarrow}{\mathbf{h}^t}$ are the outputs from the forward and backward LSTMs, respectively, and $[\cdot,\cdot]$ indicates a concatenation operation.
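A minimal PyTorch sketch of the update in (\ref{vlstm}) follows (our illustration; the feature and hidden sizes are assumptions).
\begin{verbatim}
import torch
import torch.nn as nn

feat_dim, hidden = 256, 256
cell = nn.LSTMCell(input_size=feat_dim, hidden_size=hidden)

def context_step(support_feats, h_prev, c_prev):
    """One step of Eq. (\ref{vlstm}): support_feats is (n_s, feat_dim),
    the output of the shared CNN psi on the support set; h_prev and
    c_prev are (1, hidden) states carried over from the previous task."""
    s_bar = support_feats.mean(dim=0, keepdim=True)  # average over support
    h_t, c_t = cell(s_bar, (h_prev, c_prev))         # refine shared state
    return h_t, c_t
\end{verbatim}
The bidirectional variant runs a second cell over the task sequence in reverse order and concatenates the two outputs to form $\mathbf{h}^t$.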
Therefore, the optimization objective with the context inference is:
\begin{equation}
\begin{aligned}
\mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[\sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\! \mathbb{E}_{q_{\phi}(\bm{\omega}^t| \mathbf{h}^t)} \log \, p(\mathbf{y} | \mathbf{x},\mathcal{S}^t, \bm{\omega}^t) \\ -& D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}^t|\mathbf{h}^t) || p(\bm{\omega}^t | \mathbf{x},\mathcal{S}^t)] \Big],
\label{vi-obj}
\end{aligned}
\end{equation}
where the variational approximate posterior $q_{\phi}(\bm{\omega}^t| \mathbf{h}^t)$ is taken as a multivariate Gaussian with a diagonal covariance. Given the support set as input, the mean $\bm{\omega}_{\mu}$ and standard deviation $\bm{\omega}_{\sigma}$ are output from the inference network $\phi(\cdot)$. The conditional prior $p(\bm{\omega}^t | \mathbf{x},\mathcal{S}^t)$ is implemented with a prior network which takes an aggregated representation using the cross attention \cite{kim2019attentive} between $\mathbf{x}$ and $\mathcal{S}^t$. The details of the prior network are provided in the supplementary material. To enable back propagation with the sampling operation during training, we adopt the reparametrization trick \cite{rezende2014stochastic,kingma2013auto} as
$\bm{\omega}= \bm{\omega}_{\mu} + \bm{\omega}_{\sigma} \odot \boldsymbol\epsilon$, where $\bm\epsilon \sim \mathcal{N}(0, \mathrm{I} ).$
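In code, the sampling step can be sketched as follows (our illustration; we assume the inference network outputs the mean and log-variance stacked along the last dimension).
\begin{verbatim}
import torch

def sample_bases(phi, h_t):
    """Reparameterized sample from q(omega | h^t):
    omega = mu + sigma * eps, with eps ~ N(0, I)."""
    stats = phi(h_t)                      # inference network output
    mu, log_var = stats.chunk(2, dim=-1)  # mean and log-variance
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
\end{verbatim}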
During the course of learning, the LSTMs accumulate knowledge in the cell state by updating their cells using information extracted from each task. For the current task $t$, the knowledge stored in the cell is combined with the task-specific information from the support set to infer the spectral distribution for this task. To accrue information across all the tasks in the meta-training set, the output and the cell state of the LSTMs are passed down across batches. As a result, the final cell state contains the distilled prior knowledge from all the tasks experienced in the meta-training set.
\subsection{Enriching Random Features by Normalizing Flows}
\label{MetaVRF-CNF}
The posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ is assumed to be a fully factorized Gaussian, resulting in limited expressive ability to approximate the true posterior over random Fourier bases. Motivated by the empirical success of normalizing flows~\cite{rezende2015variational} and conditional normalizing flows~\cite{winkler2019learning}, we propose conditional normalizing flows, which provide a principled way to learn richer posteriors.
Normalizing flows map a complex distribution $p_{X}(\mathbf{x})$ to a simpler distribution $p_{Z}(\mathbf{z})$ through a chain of transformations.
Let $\mathbf{x} \in X$ denote data sampled from an unknown distribution $\mathbf{x} \sim p_{X}(\mathbf{x})$.
The key idea in normalizing flows is to represent $p_{X}(\mathbf{x})$ as a transformation $\mathbf{x}=g(\mathbf{z})$ of a single Gaussian distribution $\mathbf{z} \sim p_{Z} = \mathcal{N}(0, I)$. Moreover, we assume that the mapping is bijective: $\mathbf{x} = g(\mathbf{z}) = f^{-1}(\mathbf{z})$. Therefore, the log-likelihood of the data is given by the change of variable formula:
\begin{equation}
\begin{aligned}
\label{eq:likelihood}
\log\left(p_X(\mathbf{x})\right) =& \log\left(p_Z\left(f(\mathbf{x})\right)\right)+\log\left( \left|\det\left(\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}^T}\right)\right|\right),
\end{aligned}
\end{equation}
where $\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}^T}$ is the Jacobian of the map $f(\mathbf{x})$ at $\mathbf{x}$.
The function $f$ can be learned by maximum likelihood~(\ref{eq:likelihood}), where the bijectivity assumption allows expressive mappings to be trained by gradient backpropagation.
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{nf.pdf}
\vspace{-5mm}
\caption{Effect of conditional normalizing flows on the random bases. They transform the single Gaussian distribution of the random bases into a more complex distribution, which yields more informative random features.}
\vspace{-3mm}
\label{fig: nf_distribution}
\end{figure}
To make the Jacobian tractable for the map $f(\mathbf{x})$, NICE~\cite{dinh2014nice} and RealNVP~\cite{dinh2016density} proposed to stack a sequence of simple bijective transformations, such that their Jacobian is a triangular matrix. In this way, the log-determinant reduces to the sum of the logarithms of its diagonal elements. Dinh et al.\@~\cite{dinh2014nice, dinh2016density} proposed the additive and affine coupling layers, respectively, for each transformation. In each affine coupling transformation, the input vector $\mathbf{x}\in \mathbb{R}^d$ is split into upper and lower halves, $\mathbf{x}_{I_1},\mathbf{x}_{I_2} \in \mathbb{R}^{d/2}$. These are plugged into the following transformation, referred to as a single flow-block $f_i$:
\begin{equation}
\begin{aligned}\label{eq:3}
\mathbf{z}_1 = \mathbf{x}_{I_1},~~~ \mathbf{z}_2 = \mathbf{x}_{I_2} \circ \exp(s_i(\mathbf{x}_{I_1})) + t_i(\mathbf{x}_{I_1}),
\end{aligned}
\end{equation}
where $\circ$ denotes element-wise multiplication. It is important to note that the mappings $s_i$ and $t_i$ can be arbitrarily complicated functions of $\mathbf{x}_{I_1}$ and need not be invertible themselves. In practice, $s_i$ and $t_i$ are implemented as neural networks.
Given the outputs $\mathbf{z}_1$ and $\mathbf{z}_2$, this affine transformation is invertible by:
\begin{equation}
\begin{aligned}\label{eq:4}
\mathbf{x}_{I_1} = \mathbf{z}_1,~~~ \mathbf{x}_{I_2} = (\mathbf{z}_2 - t_i(\mathbf{z}_1)) \circ \exp(-s_i(\mathbf{z}_1)).
\end{aligned}
\end{equation}
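The following NumPy sketch (ours) verifies that (\ref{eq:3}) and (\ref{eq:4}) invert each other, with $s_i$ and $t_i$ chosen as arbitrary non-invertible maps.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 8
W_s = rng.normal(size=(d // 2, d // 2))
W_t = rng.normal(size=(d // 2, d // 2))
s = lambda u: np.tanh(W_s @ u)    # need not be invertible
t = lambda u: W_t @ u

def forward(x):                   # Eq. (\ref{eq:3})
    x1, x2 = np.split(x, 2)
    return np.concatenate([x1, x2 * np.exp(s(x1)) + t(x1)])

def inverse(z):                   # Eq. (\ref{eq:4})
    z1, z2 = np.split(z, 2)
    return np.concatenate([z1, (z2 - t(z1)) * np.exp(-s(z1))])

x = rng.normal(size=d)
assert np.allclose(inverse(forward(x)), x)
\end{verbatim}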
The RealNVP~\cite{dinh2016density} flow comprises $k$ reversible flow-blocks interleaved with switch-permutations,
\begin{equation}
f_{\textit{RealNVP}} = f_k \circ r \circ \cdots \circ f_2 \circ r \circ f_1,
\end{equation}
where $r$ denotes a switch-permutation, which swaps the order of $\mathbf{x}_{I_1}$ and $\mathbf{x}_{I_2}$. According to the chain rule, the log-determinant of the Jacobian of the whole transformation $f$ is computed by summing the log-determinants of the Jacobians of the individual $f_i$, making the likelihood calculation tractable.
Conditional normalizing flows (CNFs)~\cite{winkler2019learning} learn conditional likelihoods for complicated target distributions in multivariate prediction tasks.
Take an input $\mathbf{x} \in \mathcal{X}$ and a regression target $\mathbf{y} \in \mathcal{Y}$. CNFs learn a complicated distribution $p_{Y|X}(\mathbf{y} | \mathbf{x})$ using a conditional prior $p_{Z|X}(\mathbf{z} | \mathbf{x})$ and a mapping $f_\phi: {\mathcal{Y}} \times {\mathcal{X}} \to {\mathcal{Z}}$, which is bijective in ${\mathcal{Y}}$ and ${\mathcal{Z}}$. The log-likelihood of CNFs is:
\begin{equation}
\begin{aligned}
\log (p_{Y|X}(\mathbf{y} | \mathbf{x})) = & \log(p_{Z|X}(\mathbf{z} | \mathbf{x})) + \log\left(\left\lvert \det\frac{\partial \mathbf{z}}{\partial \mathbf{y}} \right\rvert\right) \\ = & \log(p_{Z|X}(f_{\phi}(\mathbf{y} , \mathbf{x}) | \mathbf{x})) + \log\left(\left\lvert \det\frac{\partial f_{\phi}(\mathbf{y} , \mathbf{x})}{\partial \mathbf{y}} \right\rvert\right). \label{eq:cnf}
\end{aligned}
\end{equation}
Different from NFs, in the log-likelihood of CNFs, all distributions are conditional and the flow has a conditioning argument for $\mathbf{x}$.
We parameterize the approximate posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ with a flow of length $K$, $q_{\phi}(\bm{\omega}|\mathbf{h}^t) := q_{K}(\bm{\omega}_K)$.
The ELBO~(\ref{eq:elbo}) is thus written as an expectation over the initial distribution $q_0(\bm{\omega})$:
\begin{equation}
\begin{aligned}
\log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \geq & -\mathbb{E}_{q_{\phi} ( \bm{\omega} |\mathbf{h}^t )} [ \log q_{\phi}(\bm{\omega}|\mathbf{h}^t) - \log p ( \mathbf{y}, \bm{\omega} | \mathcal{S}, \mathbf{x} ) ] \\
= &-\mathbb{E}_{q_{0} ( \bm{\omega}_0 )} \left [ \ln q_{K} ( \bm{\omega}_K ) - \log p ( \mathbf{y} ,\bm{\omega}_K|\mathcal{S}, \mathbf{x}) \right ] \\
= &-\mathbb{E}_{q_{0} (\bm{\omega}_0)} [ \ln q_{0} ( \bm{\omega}_0 ) - \sum_{k=1}^{K} \ln |\det \frac{\partial f}{\partial \bm{\omega}_k}| ] \\
& + \mathbb{E}_{q_{0} ( \bm{\omega}_0 )} [\log p ( \mathbf{y} ,\bm{\omega}_K|\mathcal{S}, \mathbf{x}) ],
\label{eq:elbo-nf}
\end{aligned}
\end{equation}
where $q_0(\bm{\omega}_0)$ is obtained from the approximate posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ without transformation.
We then obtain the objective by maximizing the log-likelihood $ \log p(\mathbf{y} | \mathbf{x}, \mathcal{S})$ with respect to a batch of $T$ tasks:
\begin{equation}
\begin{aligned}
\mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[ \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\! \mathbb{E}_{q_{0} (\bm{\omega}_0^t)} [ - \ln q_{0} ( \bm{\omega}_0^t ) + \sum_{k=1}^{K} \ln |\det \frac{\partial f}{\partial \bm{\omega}_k^t}| ] \\
+& \mathbb{E}_{q_{0} ( \bm{\omega}_0^t )} \left[\log p ( \mathbf{y} ,\bm{\omega}^t_K|\mathcal{S}^t, \mathbf{x}) \right] \Big],
\label{eq:obj-cnf}
\end{aligned}
\end{equation}
where $\bm{\omega}_k^t$ is the random base after $k$ transformations.
We rely on the conditional coupling layer from~\cite{winkler2019learning} to transform the random base distribution. This layer extends the affine coupling layer from RealNVP~\cite{dinh2016density}, keeping the computation of the Jacobian of the map $f$ tractable. The input $\bm{\omega}_{k-1} = [\bm{\omega}_{k-1}^{I_0}, \bm{\omega}_{k-1}^{I_1}]$ of an affine coupling layer is split into two parts, which are transformed individually:
\begin{equation}
\begin{aligned}
\bm{\omega}_{k}^{I_i} = \bm{\omega}_{k-1}^{I_i} \odot \exp\big(s_{i+1}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t)\big)& \\
+ t_{i+1}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t)&,
\end{aligned}
\end{equation}
where $i \in \{0, 1\}$. Note that the transformations $s_{i+1}, t_{i+1}$ do not need to be invertible and are modelled as convolutional neural networks. The inverse of an affine coupling layer is:
\begin{equation}
\begin{aligned}
\bm{\omega}_{k-1}^{I_i} = (\bm{\omega}_{k}^{I_i} - t_{i+1}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t))&\\ \odot \exp(-s_{i+1}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t))&.
\end{aligned}
\end{equation}
The log-determinant of the Jacobian for one affine coupling layer is calculated as the sum over $s_i$, i.e.\@, $\sum_j s_1(\bm{\omega}_{k-1}^{I_1}, \mathbf{h}^t)_j + \sum_j s_2(\bm{\omega}_{k}^{I_0}, \mathbf{h}^t)_j$.
A deep invertible network is built as a sequence of multiple such layers, with a permutation of the dimensions after each layer.
The conditional input $\mathbf{h}^t$ is added as an extra input to each transformation in the coupling layer. We refer to the kernel constructed based on the random bases by conditional normalizing flows as MetaKernel.
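For concreteness, here is a minimal PyTorch sketch of one conditional affine coupling layer (our illustration, following the standard construction that transforms one half per layer, with permutations between layers alternating the roles of the halves; the network width is an assumption).
\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One conditional affine coupling layer: transforms one half of
    the bases conditioned on the other half and the task context h^t."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim))       # outputs scale s and shift t

    def forward(self, w, h):
        w0, w1 = w.chunk(2, dim=-1)
        s, t = self.net(torch.cat([w0, h], dim=-1)).chunk(2, dim=-1)
        z1 = w1 * torch.exp(s) + t        # affine transform of one half
        log_det = s.sum(dim=-1)           # log|det J| is the sum of s
        return torch.cat([w0, z1], dim=-1), log_det
\end{verbatim}
Stacking several such layers, permuting the halves in between and accumulating the log-determinants, gives the flow used in (\ref{eq:obj-cnf}).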
We visualize the distribution of the random bases produced by the CNFs in Figure~\ref{fig: nf_distribution}. $\bm{\omega}_k$ indicates the distribution of the random bases after $k$ transformations. This visualization shows that we can transform a single Gaussian distribution of random bases into a more complex distribution, which achieves more informative random features, resulting in improved performance, as we will demonstrate in our experiments.
\section{Experiments}
\label{sec:experiments}
In this section, we report our experiments to demonstrate the effectiveness of the proposed MetaKernel for both regression and classification with limited labels. We also provide thorough ablation studies to gain insight into our method by showing the efficacy of each introduced component.
\subsection{Few-Shot Classification}
The few-shot classification experiments are conducted on four commonly used benchmarks, i.e.\@, Omniglot \cite{lake2015human}, \textit{mini}ImageNet{} \cite{vinyals2016matching}, CIFAR-FS \cite{bertinetto2018meta} and Meta-Dataset~\cite{triantafillou2019meta}.
We also perform experiments on DomainNet~\cite{peng2019moment} for few-shot domain generalization. Sample images from each dataset are provided in Figure~\ref{fig:Dataset}.
\begin{figure*}[t]
\centering
\includegraphics[width=1.\linewidth]{Dataset_4.pdf}
\caption{Examples from each dataset. Orange and green boxes indicate the meta-training and meta-test tasks for each dataset. $\mathcal{S}$ and $\mathcal{Q}$ indicate the support and query sets for each task. For Meta-Dataset, we only show examples from \textit{ImageNet}~\cite{russakovsky2015imagenet}, \textit{Aircraft}~\cite{maji13finegrained}, \textit{Quick Draw}~\cite{Quick}, \textit{Fungi}~\cite{Fungi}, \textit{Traffic Signs}~\cite{Houben-IJCNN-2013} and
\textit{MS-COCO}~\cite{lin2014microsoft}. For the few-shot domain generalization, we only show the examples from DomainNet using \textit{Quick Draw} as the target domain during the meta-test stage.}
\label{fig:Dataset}
\end{figure*}
\subsubsection{Datasets}
\textbf{Omniglot}~\cite{lake2015human} is a few-shot classification benchmark that contains $1623$ handwritten characters (each with $20$ examples). All characters are grouped into one of $50$ alphabets. For fair comparison against the state of the art, we follow the same data split and pre-processing used by Vinyals et al.\@~\cite{vinyals2016matching}. Specifically, the training, validation, and test sets are composed of a random split of $[1100, 200, 423]$. The dataset is augmented with rotations of $90$ degrees, which results in $4000$ classes for training, $400$ for validation, and $1292$ for testing. The number of examples is fixed to $20$. All images are resized to $28\mathord\times 28$. For an $N$-way, $k$-shot task at training time, we randomly sample $N$ classes from the $4000$ classes, each with $(k+15)$ examples. Thus, there are $N\mathord\times k$ examples in the support set and $N\mathord\times 15$ examples in the query set. The same sampling strategy is followed for validation and testing.
\textbf{\textit{mini}ImageNet{}}~\cite{vinyals2016matching} is a challenging dataset constructed from ImageNet~\cite{russakovsky2015imagenet}, which comprises a total of $100$ different classes (each with $600$ instances). All images are downsampled to $84\mathord\times 84$.
We use the same splits as Ravi and Larochelle~\cite{ravi2017optimization}, with $[64, 16, 20]$ classes for training, validation and testing. We use the same episodic sampling strategy as for Omniglot.
\textbf{\textsc{cifar-fs}{}}~\cite{bertinetto2018meta} is adapted from CIFAR-100~\cite{krizhevsky2009learning} for few-shot learning. In the many-shot image classification benchmark CIFAR-100, there are $100$ classes grouped into $20$ superclasses (each with $600$ instances). \textsc{cifar-fs}{} uses the same split criteria ($64, 16, 20$) with which \textit{mini}ImageNet{} has been generated. The resolution of all images is $32\mathord\times 32$.
\textbf{Meta-Dataset} \cite{triantafillou2019meta} is composed of ten existing image classification datasets (eight for training, two for testing). These are: \textit{ILSVRC-2012} (ImageNet, \cite{russakovsky2015imagenet}), \textit{Omniglot}~\cite{lake2015human}, \textit{Aircraft}~\cite{maji13finegrained}, \textit{CUB-200-2011}
(Birds, \cite{WahCUB_200_2011}), \textit{Describable Textures}~\cite{cimpoi14describing},
\textit{Quick Draw}~\cite{Quick}, \textit{Fungi}~\cite{Fungi}, \textit{VGG Flower}~\cite{Nilsback08}, \textit{Traffic Signs}~\cite{Houben-IJCNN-2013} and
\textit{MS-COCO}~\cite{lin2014microsoft}. Each episode generated in Meta-Dataset uses classes from a single dataset.
Two of these datasets, \textit{Traffic Signs} and \textit{MS-COCO}, are fully reserved for evaluation, which means that no classes from these datasets appear in the training set.
Apart from \textit{Traffic Signs} and \textit{MS-COCO},
the remaining datasets contribute some classes to the training, validation and test splits. There are about 14 million images in total in Meta-Dataset.
\textbf{DomainNet}~\cite{peng2019moment}. Du et al.\@~\cite{du2020metanorm} introduced the setting of few-shot domain generalization, which combines the challenges of both few-shot classification and domain generalization. It is based on the DomainNet dataset by Peng et al.\@~\cite{peng2019moment}, which contains six distinct domains, i.e.\@, \textit{clipart}, \textit{infograph}, \textit{painting}, \textit{quickdraw}, \textit{real}, and \textit{sketch}, for 345 categories. The categories are from 24 divisions.
\input{table_min_omn}
\subsubsection{Implementation Details}
We extract image features using a shallow convolutional neural network with the same architecture as~\cite{gordon2018meta} for \textit{mini}ImageNet{} and \textsc{cifar-fs}{}. We do not use any fully connected layers in this CNN.
For the Meta-Dataset experiments, we use a ResNet-18~\cite{resnet} as our base learner to be consistent with~\cite{triantafillou2019meta}. The dimension of all feature vectors is $256$. We also evaluate the random Fourier features (RFFs) and the radial basis function (RBF) kernel, where we take the bandwidth $\sigma$ as the mean of the pair-wise distances between samples in the support set of each task. The inference network $\phi(\cdot)$ is a three-layer MLP with $256$ units in the hidden layers and rectifier non-linearity,
where the input size is $512$ for the bidirectional LSTMs.
We use an SGD optimizer with a momentum of $0.9$ in all experiments.
\input{table_meta-dataset}
The key hyperparameter, the number of bases $D$ in (\ref{rfs}), is set to $D{=}780$ for MetaKernel in all experiments, while we use RFFs with $D{=}2048$ as this produces the best performance. The sampling rate in MetaKernel is much lower than in previous works using RFFs, in which $D$ is usually set to be $5$ to $10$ times the dimension of the input features~\cite{yu2016orthogonal, rahimi2008random}. We adopt a meta-testing protocol similar to~\cite{gordon2018meta, finn2017model}, but we test on $3000$ episodes rather than $600$ and present the results with $95\%$ confidence intervals. All reported results are produced by models trained from scratch. We compare with previous methods that use the same training procedures and similar shallow CNN architectures to ours. Our code will be publicly released.
\input{table_few-shotdg}
\begin{figure*}[t]
\centerline{\includegraphics[width=1\linewidth]{new_regression.pdf}}
\sbox1{\raisebox{2pt}{\tikz{\draw[blue,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox2{\raisebox{2pt}{\tikz{\draw[-,green,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox3{\raisebox{2pt}{\tikz{\draw[-,red,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox4{\raisebox{2pt}{\tikz{\draw[-, black, dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox5{\raisebox{2pt}{\tikz{\draw[-,black!40!gray,solid,line width = 0.9pt](0,0) -- (6mm,0);}}} \sbox6{\raisebox{1pt}{\tikz{\filldraw (0,0) circle (2pt);}}}
\caption{
Few-shot regression
performance comparison (MSE). MetaKernel fits the target function well, even with variational random features only using three shots, and consistently outperforms MAML for all settings.
Legend: \usebox1 MAML; \usebox2 MetaKernel (variational RFFs only); \usebox3 MetaKernel~(variational RFFs \& task context); \usebox4 MetaKernel (full model); \usebox5 Ground Truth;
\usebox6 Support Samples.
}
\label{fig:reg}
\end{figure*}
\input{table_kernel_mini}
\subsubsection{Comparison to the State of the art}
\textbf{Few-shot image classification.}
We first evaluate MetaKernel on the \textit{mini}ImageNet{}, \textsc{cifar-fs}{} and Omniglot datasets under various way (the number of classes used in each task) and shot (the number of support set examples used per class) configurations.
The results are reported in Table~\ref{tab:miniandcifar}.
We report the results of two experiments using MAML~\cite{finn2017model}.
To keep MAML~\cite{finn2017model} consistent with our backbone for \textit{mini}ImageNet{} and \textsc{cifar-fs}{}, in addition to its original results, we also implement MAML ($64C$) with $64$ channels in each convolutional layer for fair comparison.
While it obtains modest performance, we believe the increased model size leads to overfitting.
As the original SNAIL uses a very deep ResNet-12 network for embedding, we cite the results of SNAIL reported in \cite{bertinetto2018meta} using a similar shallow network as ours. For fair comparison, we also cite the original results of R2-D2~\cite{bertinetto2018meta} using $64$ channels.
On all benchmark datasets, MetaKernel delivers the best performance.
It is worth noting that MetaKernel achieves an accuracy of $55.5\%$ under the $5$-way $1$-shot setting on the \textit{mini}ImageNet~dataset,
surpassing the second-best model by $1.3\%$. This is a good improvement considering the challenge of this setting.
On \textsc{cifar-fs}{}, our model surpasses the second-best method, i.e.\@, VERSA~\cite{gordon2018meta}, and has a smaller error bar under the $5$-way $1$-shot setting using the same backbone.
On Omniglot, performance of all methods saturates. Nonetheless, MetaKernel achieves the best performance under most settings, including $5$-way $1$-shot, $5$-way $5$-shot, and $20$-way $1$-shot. It is also competitive under the $20$-way $5$-shot setting, falling within the error bars of the state of the art.
\textbf{Few-shot meta-dataset classification.}
Next, we evaluate MetaKernel on the most challenging few-shot classification benchmark, i.e.\@, Meta-Dataset~\cite{triantafillou2019meta}, which is composed of 10 image classification datasets.
For Meta-Dataset, we train our model on the ILSVRC~\cite{russakovsky2015imagenet} training split and test on the 10 diverse datasets. As shown in Table~\ref{tab:meta_dataset}, MetaKernel outperforms fo-Proto-MAML~\cite{triantafillou2019meta} across all 10 datasets. MetaKernel also surpasses the second-best method, RFS~\cite{tian2020rethinking}, on 7 out of 10 datasets. Overall, we perform well against previous methods, achieving new state-of-the-art results on the challenging Meta-Dataset.
\textbf{Few-shot domain generalization.}
We also evaluate our method on few-shot domain generalization~\cite{du2020metanorm}, which combines the challenges of both few-shot classification and domain generalization. For few-shot domain generalization, each task has only a few samples in the support set for training and we test the model on tasks in a query set, which come from a different domain than the support set.
The results are reported in Table~\ref{tab:fewdg}. MetaKernel obtains the best performance, surpassing MetaNorm~\cite{du2020metanorm} by a margin of $2.0\%$ on the $5$-way $1$-shot setting and $1.8\%$ on the $5$-way $5$-shot setting. Its performance on the few-shot domain generalization task demonstrates that MetaKernel is not only able to handle the problem of few-shot learning, but also thrives under domain-shifts.
\subsection{Few-Shot Regression}
We also consider regression tasks with a varying number of shots $k$, and compare MetaKernel with MAML~\cite{finn2017model}, a representative meta-learning algorithm. We follow MAML \cite{finn2017model} and fit a target sine function $y{=}A \sin{(wx + b)}$, with only a few annotated samples. $A \in [0.1, 5]$, $w \in [0.8, 1.2]$, and $ b\in [0, \pi ]$ denote the amplitude, frequency, and phase, which follow a uniform distribution within the corresponding interval. The goal is to estimate the target sine function given only $k$ randomly sampled data points. Here, we consider inputs within the range of $x\in [-5, 5]$, and conduct three tests under the conditions of $k {=} 3, 5, 10$. For fair comparison, we compute the feature embedding using a small MLP with two hidden layers of size $40$, following the same settings used in MAML.
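A minimal sketch of this task generator (our illustration of the stated sampling ranges):
\begin{verbatim}
import numpy as np

def sample_sine_task(rng, k):
    """Sample one few-shot regression task y = A sin(w x + b) with
    k support points, following the ranges used in MAML."""
    A = rng.uniform(0.1, 5.0)       # amplitude
    w = rng.uniform(0.8, 1.2)       # frequency
    b = rng.uniform(0.0, np.pi)     # phase
    x = rng.uniform(-5.0, 5.0, size=(k, 1))
    y = A * np.sin(w * x + b)
    return x, y
\end{verbatim}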
The results in Figure~\ref{fig:reg} show that MetaKernel fits the function well with only three shots, even when we do not use the full model. It performs better with an increasing number of shots, almost entirely fitting the target function with ten shots. We observe all MetaKernel variants perform better than MAML~\cite{finn2017model} for all three settings with varying numbers of shots, both visually and in terms of MSE. Best results are obtained with our full model.
\begin{figure*}[t]
\centering
\begin{minipage}{.48\textwidth}
\begin{subfigure
\centering
\includegraphics[width=0.482\columnwidth]{dim_5w5s_mini.png}
\label{fig:eff1}
\end{subfigure}%
\begin{subfigure
\centering
\includegraphics[width=0.482\columnwidth]{dim_5w5s_cifar.png}
\label{fig:eff2}
\end{subfigure}
\vspace{-4mm}
\caption{Efficiency with varying numbers $D$ of bases. MetaKernel consistently achieves better performance than regular RFFs, especially with relatively low sampling rates.}
\label{fig:eff}
\end{minipage}
\hspace{4mm}
\begin{minipage}{.48\textwidth}
\centering
\vspace{-6mm}
\begin{subfigure
\centering
\includegraphics[width=0.482\columnwidth]{flex_way.pdf}
\end{subfigure}%
\begin{subfigure
\centering
\includegraphics[width=0.482\columnwidth]{flex_shot.pdf}
\end{subfigure}
\caption{Versatility of MetaKernel with varied ways and shots on Omniglot.}
\label{fig:flex}
\end{minipage}
\end{figure*}
\subsection{Ablation Studies}
To study how our proposed components bring performance gains to MetaKernel on few-shot learning, our ablations consider: (1) the benefit of random Fourier features; (2) the benefit of task context inference; (3) the benefit of enriching random features by normalizing flows; (4) the effect of deeper embeddings; (5) the efficiency of the model; (6) the versatility of the model.
\textbf{Benefit of random Fourier features.} We first show the benefit of random Fourier features (RFFs) by comparing them with the regular RBF kernel. As can be seen from the first two rows in Table~\ref{tab:mini_kernel}, RFFs perform 10.7\% better than an RBF kernel on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}, and 14.9\% better on the $5$-way $5$-shot setting of \textsc{cifar-fs}{}. The considerable performance gain over RBF kernels on both datasets indicates the benefit of adaptive kernels based on random Fourier features for few-shot image classification. The modest performance obtained by RBF kernels is due to the mean of pair-wise distances of support samples being unable to provide a proper estimate of the kernel bandwidth. Note that the performance of RFFs is better than the variational RFFs on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}. This may be due to the fact that there are too few support samples, so the random bases generated from them cannot accurately represent the current task, while the parameters in the random bases of RFFs are sampled from a standard Gaussian distribution. Therefore, the context information among previous related tasks should be integrated into the variational RFFs. In addition, RFFs cannot use the context information directly since their random bases are sampled from a fixed distribution.
\textbf{Benefit of task context inference.}
We investigate the benefit of adding task context inference to the MetaKernel. Specifically, we leverage a bi-\textsc{lstm}~cell state $\textbf{c}$ to store and accrue the meta-knowledge shared among related tasks. The experimental results are reported in Table~\ref{tab:mini_kernel}. Adding task context inference on top of the MetaKernel with variational random features leads to a consistent gain under all settings, for both datasets. This demonstrates the effectiveness of using an \textsc{lstm}~to explore task dependency.
\textbf{Benefit of enriching features by normalizing flows.}
We show the benefit of enriching the variational random features by conditional normalizing flows in the last row of Table~\ref{tab:mini_kernel}.
We find that MetaKernel performs better than MetaVRF, improving accuracy by $1.3\%$ to $55.5\%$ under the $5$-way $1$-shot setting on \textit{mini}ImageNet{} and by $1.2\%$ to $64.3\%$ under the $5$-way $1$-shot setting on \textsc{cifar-fs}{}.
These results indicate that the CNFs provide more informative kernels for the new task, which allows the learned distribution of random bases to more closely approximate the real random bases distribution and therefore improves few-shot classification performance.
\input{table_tiered}
\textbf{Deep embeddings.} MetaKernel is independent of the convolutional architecture for feature extraction and works with deeper embeddings, either pre-trained or trained from scratch. In general, the performance improves with more powerful feature extraction architectures. We evaluate our method using pre-trained embeddings in order to compare with existing methods using deep embedding architectures.
Specifically, we adopt the pre-trained embeddings from a 28-layer wide residual network (WRN-28-10) \cite{zagoruyko2016wide}, in a similar fashion to \cite{rusu2018meta, bauer2017discriminative, qiao2018few}. We choose activations in the 21st layer, with average pooling over spatial dimensions, as feature embeddings. The dimension of the pre-trained embeddings is $640$. We show the comparison results on the \textit{mini}ImageNet~dataset for 5-way 1-shot and 5-shot settings in Table~\ref{tab:mini}. MetaKernel achieves the best performance under both settings and surpasses LEO~\cite{rusu2018meta}, a recently proposed meta-learning method, especially on the challenging 5-way 1-shot setting. Compared with our conference paper, MetaVRF~\cite{zhen2020learning}, MetaKernel performs 1.23\% better on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}, which also validates the effectiveness of the CNFs. The consistent state-of-the-art results on all benchmarks using both shallow and deep feature extraction networks validate the effectiveness of MetaKernel for few-shot learning.
\textbf{Efficiency.} Regular RFFs usually require high sampling rates to achieve satisfactory performance. However, our MetaKernel achieves high performance with a relatively low sampling rate, which guarantees its high efficiency. In Figure~\ref{fig:eff}, we compare with regular RFFs using different sampling rates. We provide the performance change of fully trained models using RFFs and MetaKernel under a varying number of bases $D$. We show the comparison results for the $5$-way $5$-shot setting on \textit{mini}ImageNet~ and CIFAR-FS in Figure~\ref{fig:eff}. MetaKernel consistently yields higher performance than regular RFFs with the same number of sampled bases. The results verify the efficiency of MetaKernel in learning adaptive kernels and its effectiveness in improving performance by exploring the dependencies of related tasks.
\textbf{Versatility.} In contrast to most existing meta-learning methods, MetaKernel is applicable to versatile settings.
We evaluate the performance of MetaKernel on more challenging scenarios where the number of ways $N$ and shots $k$ between training and testing are inconsistent. Specifically, we test the performance of MetaKernel on Omniglot tasks with varied $N$ and $k$, when it is trained on one particular $N$-way $k$-shot task. As shown in Figure~\ref{fig:flex}, the results demonstrate the trained model still produces good performance, even under the challenging conditions with a far higher number of ways. In particular, the model trained on the $20$-way $5$-shot task retains a high accuracy of $94\%$ on the $100$-way setting, as shown in Figure~\ref{fig:flex}(a). The results also indicate that our model exhibits considerable robustness and flexibility to a variety of testing conditions.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduce kernel approximation based on random Fourier features into the meta-learning framework for few-shot learning. We propose to learn random features for each few-shot task in a data-driven way by formulating it as a variational inference problem, where the random Fourier basis is defined as the latent variable. We introduce an inference network based on an LSTM module, which enables the shared knowledge from related tasks to be incorporated into each individual task. To further enhance the kernels, we introduce conditional normalizing flows to generate richer posteriors over random bases, resulting in more informative random features. Experimental results on both regression and classification tasks demonstrate the effectiveness of MetaKernel for few-shot learning. The extensive ablation study demonstrates the efficacy of each component in our MetaKernel.
\bibliographystyle{IEEEtran}
\section{Introduction}
All groups considered in this paper are finite, and all graphs considered are finite, undirected and simple.
Let $\Gamma$ be a graph with vertex set $V$, and let $a, b$ be nonnegative integers. An \emph{$(a,b)$-regular set} \cite{Cardoso2019} in $\Gamma$ (or simply a \emph{regular set} in $\Gamma$ if the parameters $a, b$ are not important in the context) is a nonempty proper subset $C$ of $V$ such that $|\Gamma(v)\cap C|=a$ for each $v\in C$ and $|\Gamma(v)\cap C|=b$ for each $v \in V\setminus C$, where $\Gamma(v)$, the \emph{neighborhood} of $v$ in $\Gamma$, is the set of neighbors of $v$ in $\Gamma$. (Two vertices are neighbors of each other if they are adjacent in the graph.) In particular, a $(0,1)$-regular set is called a \emph{perfect code}, and a $(1,1)$-regular set is called a \emph{total perfect code}.
In other words, a perfect code \cite{Big, Kratochvil1986} in $\Gamma$ is an independent set $C$ of $\Gamma$ such that every vertex in $V \setminus C$ has exactly one neighbor in $C$, and a total perfect code \cite{Zhou2016} in $\Gamma$ is a subset $C$ of $V$ such that every vertex of $\Gamma$ has exactly one neighbor in $C$.
We study regular sets in Cayley graphs in this paper. Given a group $G$ and an inverse-closed subset $S$ of $G\setminus\{e\}$, the \emph{Cayley graph} $\Cay(G,S)$ of $G$ with \emph{connection set} $S$ is the graph with vertex set $G$ such that $x,y\in G$ are adjacent if and only if $yx^{-1}\in S$. Herein and in the sequel we use $e$ to denote the identity element of the group under consideration. The main results in this paper affirmatively answer a question in \cite{WXZ2020} in the case when the group involved is a generalized dihedral group or a group of order $4p$ or $pq$, where $p$ and $q$ are primes. We prove that, for such a group $G$ and any nontrivial subgroup $H$ of $G$, if there exists a Cayley graph of $G$ which admits $H$ as a perfect code, then for any $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$, with $a$ even when $|H|$ is odd, there exists a Cayley graph of $G$ which admits $H$ as an $(a,b)$-regular set.
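Both definitions are easy to check by brute force on small examples. The following minimal sketch (purely illustrative; the group, connection sets and subgroup are toy choices of ours) verifies that $H=\{0,3\}$ is a $(0,1)$-regular set, i.e.\ a perfect code, in $\Cay(\mathbb{Z}_6,\{1,5\})$, and a $(1,2)$-regular set in $\Cay(\mathbb{Z}_6,\mathbb{Z}_6\setminus\{0\})$:
\begin{verbatim}
# Hedged sketch: checking (a,b)-regular sets in Cayley graphs of Z_n.
# In additive notation, x and y are adjacent in Cay(Z_n, S) iff y - x is in S.
def is_regular_set(n, S, C, a, b):
    assert 0 not in S and all((-s) % n in S for s in S)  # S inverse-closed, no e
    nbrs = lambda v: {(v + s) % n for s in S}
    return (all(len(nbrs(v) & C) == a for v in C) and
            all(len(nbrs(v) & C) == b for v in range(n) if v not in C))

H = {0, 3}                                            # a subgroup of Z_6
print(is_regular_set(6, {1, 5}, H, 0, 1))             # True: a perfect code
print(is_regular_set(6, {1, 2, 3, 4, 5}, H, 1, 2))    # True: a (1,2)-regular set
\end{verbatim}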
Before formally stating our results, let us briefly discuss our motivation and related background. First, the study of perfect codes and total perfect codes is a significant part of the theory of domination in graphs \cite{HHS1998}, because a perfect code is exactly an efficient dominating set \cite{DS2003} (also known as an independent perfect dominating set~\cite{Lee2001}) and a total perfect code is exactly an efficient open dominating set \cite{HHS1998}. Second, as mentioned in \cite{HXZ2018, MWWZ2019}, perfect codes in Cayley graphs are especially interesting objects of study due to their connections with perfect codes in coding theory \cite{Va73}. In fact, the Hamming graph $H(n, q)$ and the Cartesian product $C_q^{\Box n}$ of $n$ copies of the cycle $C_q$ of length $q$ are both Cayley graphs of $\mathbb{Z}_q^n$. It is well known that the Hamming and Lee metrics over $\mathbb{Z}_q^n$ are exactly the graph distances in $H(n, q)$ and $C_q^{\Box n}$, respectively. Therefore, perfect codes under these metrics \cite{Heden1, Va75} in classical coding theory are exactly perfect codes in the Cayley graphs $H(n, q)$ and $C_q^{\Box n}$, respectively. So perfect codes in Cayley graphs can be considered as a generalization of perfect codes in the classical setting \cite{Va73}. For this reason, perfect codes in Cayley graphs have attracted considerable attention; see \cite[Section 1]{HXZ2018} for a brief account of results and \cite{CWX2020, HXZ2018, MWWZ2019, ZZ2021, Z15} for several recent studies in this line of research.
Third, perfect codes in Cayley graphs are closely related to factorizations and tilings of groups. In general, a \emph{factorization} \cite{SS2009} of a group $G$ (into two factors) is a pair of subsets $(A, B)$ of $G$ such that every element of $G$ can be written uniquely as $ab$ with $a \in A$ and $b \in B$. If in addition $e \in A \cap B$, then such a factorization is called a \emph{tiling} \cite{Dinitz2006} of $G$. Beginning with G. Haj\'{o}s \cite{Hajos1942} in his proof of a well-known conjecture of Minkowski, the study of factorizations and tilings of groups is a classical topic which lies in the intersection of group theory and combinatorics. See, for example, \cite{Dinitz2006,RT1966,Szab2006} for several related results and \cite{SS2009} for a monograph on this topic. One can easily verify that $(A, B)$ is a tiling of $G$ such that $A$ is inverse-closed if and only if $B$ is a perfect code in $\Cay(G, A \setminus \{e\})$ with $e \in B$. Thus, results on perfect codes in Cayley graphs can be regarded as results on tilings of the underlying groups, and the converse is also true if the first factor is required to be inverse-closed. For example, the main result in \cite{RT1966} can be restated as follows: If $1 + (n(n-1)/2)$ is divisible by a prime exceeding $2 + \sqrt{n}$, then the complete transposition graph $T_n$ does not admit any perfect code, where $T_n$ is defined (see, for example, \cite{CLWZ2021}) as the Cayley graph of the symmetric group $S_n$ with connection set consisting of all transpositions in $S_n$.
Finally, a regular set in a regular graph is precisely one part of an equitable partition into two parts. More specifically, for a $k$-regular graph $\Gamma$ with vertex set $V$, a subset $C$ of $V$ is an $(a, b)$-regular set in $\Gamma$ if and only if $\{C, V\setminus C\}$ is an equitable partition of $\Gamma$ with quotient matrix $\pmat{a & k-a\\b &k-b}$. In general, given a graph $\Gamma$ with vertex set $V$, a partition $\mathcal{V} = \{V_1, V_2, \dots, V_r\}$ of $V$ is called an \emph{equitable partition} \cite[\S 9.3]{GR2001} (or a \emph{perfect colouring} \cite{F2007}) of $\Gamma$ if there exists an $r\times r$ matrix $M=(m_{ij})$, called the \emph{quotient matrix} of $\mathcal{V}$, such that for any $i, j$ with $1 \le i, j \le r$, every vertex in $V_i$ has exactly $m_{ij}$ neighbors in $V_j$. It is known that all eigenvalues of $M$ are eigenvalues of $\Gamma$ (see \cite[Theorem 9.3.3]{GR2001}). In particular, if a regular graph admits an $(a, b)$-regular set, then it has $a-b$ as an eigenvalue. As one can find in \cite{G1993,GR2001}, equitable partitions play an important role in the study of many combinatorial structures, including distance-regular graphs and association schemes. Now assume that $\Gamma$ is a connected $k$-regular graph with vertex set $V$. Then $k$ is a simple eigenvalue of $M$~\cite[Theorem 9.3.3]{GR2001}, and the equitable partition $\mathcal{V}$ of $\Gamma$ is said to be \emph{$\mu$-equitable}~\cite{BCGG2019} if all eigenvalues of $M$ other than $k$ are equal to $\mu$. In particular, if an equitable partition $\{C, V\setminus C\}$ is $\mu$-equitable, then the nonempty proper subset $C$ of $V$ is called a \emph{$\mu$-perfect set} \cite{BCGG2019}. It was proved in \cite[Proposition 2.1]{BCGG2019} that, for a partition $\mathcal{V} = \{V_1, V_2, \dots, V_r\}$ of $V$, if $\mathcal{V}$ is $\mu$-equitable, then each $V_i$ is $\mu$-perfect, and conversely if $V_1, V_2, \dots, V_{r-1}$ are all $\mu$-perfect, then $\mathcal{V}$ is $\mu$-equitable. Thus it is particularly important to study equitable partitions with exactly two parts. Such two-part equitable partitions are called \emph{perfect $2$-colourings} \cite{F2007}, and they are essentially regular sets as seen above. Perfect $2$-colourings are closely related to coding theory and as such have been studied extensively over many years. See, for example, \cite{GG2013} for a study of perfect $2$-colorings of Johnson graphs $J(v,3)$, and \cite{BKMV2021, MV2020} for some recent results on perfect $2$-colourings of Hamming graphs. In \cite{BCGG2019}, equitable partitions of Latin square graphs are studied and those whose quotient matrix does not have an eigenvalue $-3$ are classified. In \cite{RCZ2018}, a few results on equitable partitions and regular sets of Cayley graphs involving irreducible characters of the underlying groups are obtained.
A subset $C$ of a group $G$ is called \cite{HXZ2018} a \emph{(total) perfect code of $G$} if it is a (total) perfect code in some Cayley graph of $G$. In general, a subset $C$ of $G$ is called an \emph{$(a,b)$-regular set of $G$} if $C$ is an $(a,b)$-regular set in some Cayley graph of $G$. As noted in \cite{HXZ2018}, subgroup perfect codes (that is, subgroups which are perfect codes of the group under consideration) are particularly interesting since they are an analogue of perfect linear codes \cite{Heden1, Va75} in coding theory. The study of subgroup perfect codes was initiated in \cite{HXZ2018}, and further results on them were obtained in \cite{ZZ2021}. Recently, a characterization of those groups whose subgroups are all perfect codes of the group was given in \cite{MWWZ2019}.
In \cite[Theorem~2.2]{HXZ2018}, it was proved that, for a normal subgroup $H$ of a group $G$, $H$ is a perfect code of $G$ if and only if
\begin{equation}\label{equ9}
\text{for any $g\in G$ with $g^2\in H$, there exists $h\in H$ such that $(gh)^2=e$},
\end{equation}
and $H$ is a total perfect code of $G$ if and only if~\eqref{equ9} holds and $|H|$ is even. In \cite{WXZ2020}, the authors of the present paper improved this result as follows.
\begin{theorem}\label{thm6}
\emph{(\cite[Theorem~1.2]{WXZ2020})}
Let $G$ be a group and let $H$ be a nontrivial normal subgroup of $G$. Then the following statements are equivalent:
\begin{enumerate}
\item[{\rm(a1)}] $G$ and $H$ satisfy condition \eqref{equ9};
\item[{\rm(a2)}] $H$ is a perfect code of $G$;
\item[{\rm(a3)}] $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a$ and $b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$ such that $\gcd(2,|H|-1)$ divides $a$.
\end{enumerate}
And the following statements are also equivalent:
\begin{enumerate}
\item[{\rm(b1)}] $G$ and $H$ satisfy condition \eqref{equ9}, and $|H|$ is even;
\item[{\rm(b2)}] $H$ is a total perfect code of $G$;
\item[{\rm(b3)}] $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a$ and $b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$.
\end{enumerate}
\end{theorem}
Since (a3) implies (a2) and (b3) implies (b2), Theorem \ref{thm6} essentially says that (a2) implies (a3) and (b2) implies (b3). In \cite[Question~1.3]{WXZ2020}, we asked whether these implications are still true if the subgroup $H$ of $G$ is not normal. In this paper we give an affirmative answer to this question in the case when $G$ is a generalized dihedral group or a group of order $4p$ or $pq$ for some primes $p$ and $q$. A group $G$ is called a \emph{generalized dihedral group} with respect to $A$ if $A$ is an abelian subgroup of $G$ and there exists an element $y\in G$ such that
\[
G=\langle A,y\mid y^2=e,x^y=x^{-1},\forall x\in A\rangle.
\]
For instance, every dihedral group is generalized dihedral (with $A$ cyclic), and so is every elementary abelian $2$-group (taking $A$ of index $2$, since inversion is trivial there).
The main results in this paper are the following two theorems.
\begin{theorem}\label{thm4}
Let $G$ be a generalized dihedral group and let $H$ be a nontrivial subgroup of $G$. Then the following hold:
\begin{enumerate}[\rm (a)]
\item $H$ is a perfect code of $G$ if and only if $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a, b$ with $0\leqslant a\leqslant|H|-1$, $0\leqslant b\leqslant |H|$, and $a$ even when $|H|$ is odd;
\item $H$ is a total perfect code of $G$ if and only if $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a, b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{thm1}
Let $G$ be a group of order $4p$ or $pq$ for some primes $p$ and $q$, and let $H$ be a nontrivial subgroup of $G$. Then the following hold:
\begin{enumerate}[\rm (a)]
\item $H$ is a perfect code of $G$ if and only if $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a, b$ with $0\leqslant a\leqslant|H|-1$, $0\leqslant b\leqslant |H|$, and $a$ even when $|H|$ is odd;
\item $H$ is a total perfect code of $G$ if and only if $H$ is an $(a,b)$-regular set of $G$ for every pair of integers $a, b$ with $0\leqslant a\leqslant|H|-1$ and $0\leqslant b\leqslant |H|$.
\end{enumerate}
\end{theorem}
As mentioned earlier, every regular graph admitting an $(a,b)$-regular set has $a-b$ as one of its eigenvalues. Obviously, $\{a-b: 0\leqslant a\leqslant|H|-1, 0\leqslant b\leqslant |H|\} = \{-|H|, -|H|+1, \dots, |H|-1\}$, and also $\{a-b: 0\leqslant a\leqslant|H|-1, 0\leqslant b\leqslant |H|, \mbox{$a$ is even}\} = \{-|H|, -|H|+1, \dots, |H|-1\}$ when $|H|$ is odd. So part (a) in Theorems \ref{thm4} and \ref{thm1} implies the following result.
\begin{corollary}
Let $G$ be a generalized dihedral group or a group of order $4p$ or $pq$ for some primes $p$ and $q$. Let $H$ be a nontrivial subgroup of $G$. If $H$ is a perfect code of $G$, then for every integer $\ell$ between $-|H|$ and $|H|-1$ there exists a Cayley graph of $G$ which has $\ell$ as one of its eigenvalues.
\end{corollary}
The rest of the paper is structured as follows. In Section~\ref{sec1}, we present characterizations of perfect codes and regular sets in Cayley graphs. Based on these characterizations we establish several technical lemmas in Section~\ref{sec2}. Using these preparatory results, proofs of Theorem~\ref{thm4} and Theorem~\ref{thm1} will be given in Section~\ref{sec3}.
\section{Characterizations of perfect codes and regular sets}\label{sec1}
Let $G$ be a group and $H$ a subgroup of $G$. Denote by $[G{:}H]$ the set of right cosets of $H$ in $G$, and define a binary relation $\sim$ on $[G{:}H]$ such that
\[
Hx\sim Hy\Leftrightarrow y\in Hx^{-1}H.
\]
It is readily seen that the relation $\sim$ is well defined and symmetric. Define $\Gamma$ to be the graph with vertex set $[G{:}H]$ such that $\{Hx,Hy\}$ is an edge if and only if $Hx\neq Hy$ and $Hx\sim Hy$.
Then for any $x\in G$ and $h\in H$,
\begin{equation}\label{eq1}
Hxh\sim Hy\text{ for all $y\in x^{-1}H$}\quad\text{and}\quad Hx^{-1}h\sim Hz\text{ for all $z\in xH$}.
\end{equation}
Since each double coset of $H$ is a union of right cosets of $H$ in $G$, we may view $HxH$ and $Hx^{-1}H$ as sets of vertices of $\Gamma$. Then~\eqref{eq1} shows that the subgraph of $\Gamma$ induced by $HxH\cup Hx^{-1}H$ is a connected component of $\Gamma$, which we denote by $\Gamma_x$.
Note that in $\Gamma_x$ each edge induces a pair of inverse elements. Note also that $\Gamma$ and $\Gamma_x$ depend on $G$ and $H$, but in our subsequent discussion the groups underlying $\Gamma$ and $\Gamma_x$ should be clear from the context.
The following lemma gives the structure of $\Gamma_x$.
\begin{lemma}\label{lem5}\cite{CWX2020}
Let $G$ be a group and let $H$ be a subgroup of $G$. Let $\Gamma_x$ be as above and let $m=|H|/|H\cap H^x|$. If $HxH=Hx^{-1}H$, then $\Gamma_x$ is the complete graph $K_m$. If $HxH\neq Hx^{-1}H$, then $\Gamma_x$ is the complete bipartite graph $K_{m,m}$.
\end{lemma}
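As a toy illustration of Lemma \ref{lem5} (not needed for the proofs; the group and the choices of $H$ and $x$ are ours), the sketch below realizes $\Gamma_x$ for the dihedral group $G=D_6$ of order $12$ with $H=\langle s\rangle$ and $x=r$, and recovers $HxH=Hx^{-1}H$, $m=2$ and $\Gamma_x=K_2$:
\begin{verbatim}
# Elements of D_6 are pairs (i, j) <-> r^i s^j, with r^6 = s^2 = e, s r s = r^-1.
from itertools import product

N = 6
mul = lambda a, b: ((a[0] + b[0] * (-1) ** a[1]) % N, (a[1] + b[1]) % 2)
inv = lambda a: ((-a[0]) % N, 0) if a[1] == 0 else a   # (r^i s)^2 = e

H = [(0, 0), (0, 1)]                                   # the subgroup <s>
coset = lambda x: frozenset(mul(h, x) for h in H)      # right coset Hx
dbl = lambda x: frozenset(mul(mul(h, x), k) for h, k in product(H, H))

x = (1, 0)                                             # x = r
Hconj = {mul(mul(inv(x), h), x) for h in H}            # H^x
print("HxH == Hx^-1H:", dbl(x) == dbl(inv(x)),         # -> True
      " m =", len(H) // len(set(H) & Hconj))           # -> m = 2
# vertices of Gamma_x; Hu ~ Hv iff some element of Hu has its inverse in Hv
cosets = {coset(g) for g in dbl(x) | dbl(inv(x))}
for Cu in cosets:                                      # K_2: both degrees equal 1
    print(sorted(Cu), sum(any(inv(u) in Cv for u in Cu) for Cv in cosets - {Cu}))
\end{verbatim}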
\begin{theorem}\label{thm2}\cite{CWX2020}
Let $G$ be a group and let $H$ be a subgroup of $G$.
Then the following statements are equivalent:
\begin{enumerate}[{\rm(a)}]
\item $H$ is a perfect code of $G$;
\item there exists an inverse-closed right transversal of $H$ in $G$;
\item for each $x\in G$ such that $x^2\in H$ and $|H|/|H\cap H^x|$ is odd, there exists $y\in Hx$ such that $y^2=e$;
\item for each $x\in G$ such that $HxH=Hx^{-1}H$ and $|H|/|H\cap H^x|$ is odd, there exists $y\in Hx$ such that $y^2=e$;
\item for each $x\in G\setminus H$ such that $x^2\in H$ and $|H|/|H\cap H^x|$ is odd, there exists an involution in $Hx$;
\item for each $x\in G\setminus H$ such that $HxH=Hx^{-1}H$ and $|H|/|H\cap H^x|$ is odd, there exists an involution in $Hx$.
\end{enumerate}
\end{theorem}
As usual, for a group $G$, denote by $\mathbb{Z}[G]$ the group ring of $G$ over $\mathbb{Z}$. For a subset $A$ of group $G$, denote
\[
\overline{A}=\sum_{g\in G}\mu_A(g)g\in \mathbb{Z}[G],
\]
where
\[
\mu_A(g)=\begin{cases}
1, & g\in A;\\
0, & g\in G\setminus A.
\end{cases}
\]
\begin{lemma}\label{lem4}~\cite{WXZ2020}
Let $G$ be a group, $H$ a subgroup of $G$, and $S$ an inverse-closed subset of $G\setminus\{e\}$. Let $a$ and $b$ be nonnegative integers. Suppose that $H$ is a perfect code in some Cayley graph $\Cay(G,S_0)$ of $G$. Then $H$ is an $(a,b)$-regular set in $\Cay(G,S)$ if and only if $|S\cap H|=a$ and $\overline{S\setminus H}\cdot \overline{H}=b\,\overline{S_0}\cdot \overline{H}$.
\end{lemma}
Let $G$ be a group, let $H$ be a subgroup of $G$, and let $K$ be a set of right cosets of $H$ in $G$. A \emph{transversal of $H$ in $K$} is a subset of $G$ which is formed by taking exactly one element from each right coset of $H$ in $K$. By Lemma~\ref{lem4}, to prove that $H$ is an $(a,b)$-regular set of $G$ it suffices to prove the existence of an inverse-closed subset $S_a$ of $H\setminus\{e\}$ with size $a$ and an inverse-closed subset $S_b$ of $G\setminus H$ which is the union of $b$ pairwise disjoint right transversals of $H$ in $G\setminus H$. In fact, Lemma~\ref{lem4} ensures that $H$ is an $(a,b)$-regular set in $\Cay(G, S)$, where $S = S_a \cup S_b$.
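As a toy illustration of this strategy, take $G=\mathbb{Z}_6$ and $H=\{0,3\}$, whose right cosets in $G\setminus H$ are $\{1,4\}$ and $\{2,5\}$. For $(a,b)=(1,2)$ one may choose $S_a=\{3\}$ and $S_b=\{1,5\}\cup\{2,4\}$, the union of the two disjoint right transversals $\{1,5\}$ and $\{2,4\}$, which is inverse-closed; then $S=S_a\cup S_b=\mathbb{Z}_6\setminus\{0\}$, and $H$ is indeed a $(1,2)$-regular set in $\Cay(\mathbb{Z}_6,S)=K_6$.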
\section{Technical lemmas}\label{sec2}
\begin{lemma}\label{lem2}
Let $G$ be a group, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. If $HxH\neq Hx^{-1}H$, then there is no involution in $HxH\cup Hx^{-1}H$. If there is an involution in $HxH$, then there is an involution in each right coset in $HxH$.
\end{lemma}
\begin{proof}
First assume that $HxH\neq Hx^{-1}H$. Then $HxH\cap Hx^{-1}H=\emptyset$. Suppose that $y$ is an involution in $HxH$. Then $y^{-1}\in (HxH)^{-1}=Hx^{-1}H$, which is a contradiction.
Next assume that there is an involution in $HxH$, say, $y$. Then for each $h\in H$, the element $h^{-1}yh$ is an involution in $Hxh$. This shows that each right coset in $HxH$ contains an involution.
\end{proof}
To obtain transversals of $H$ in $G\setminus H$ it suffices to find those of $H$ in $HxH\cup Hx^{-1}H$ for each connected component $\Gamma_x$ with $x\in G\setminus H$. For this purpose, we will analyze the cases $HxH=Hx^{-1}H$ and $HxH\neq Hx^{-1}H$ separately. First, we consider the case $HxH\neq Hx^{-1}H$.
\begin{lemma}\label{lem9}
Let $G$ be a group, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. Suppose that $HxH\neq Hx^{-1}H$ and $|HxH|/|H|=1$ or $2$. Then there exist $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH\cup Hx^{-1}H$. In particular, for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
Since $HxH\neq Hx^{-1}H$, we have $HxH\cap Hx^{-1}H=\emptyset$.
First suppose that $|HxH|/|H|=1$. Then each of $HxH$ and $Hx^{-1}H$ consists of a single right coset of $H$. For each $r\in HxH$, since $r^{-1}\in Hx^{-1}H$, we see that $\{r,r^{-1}\}$ is a right transversal of $H$ in $HxH\cup Hx^{-1}H$. Write $HxH=\{r_1,\dots,r_{|H|}\}$. Then
\[
\{r_1,r_1^{-1}\},\dots,\{r_{|H|},r_{|H|}^{-1}\}
\]
are $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH\cup Hx^{-1}H$.
Next suppose that $|HxH|/|H|=2$. Then each of $HxH$ and $Hx^{-1}H$ consists of two right cosets of $H$. Let the right cosets of $H$ in $HxH$ be $Hx_1$ and $Hx_2$, and let the right cosets of $H$ in $Hx^{-1}H$ be $Hy_1$ and $Hy_2$. Take
\begin{align*}
S=\{z\mid z\in Hx_1,\,z^{-1}\in Hy_1\},\quad &T=\{z\mid z\in Hx_2,\,z^{-1}\in Hy_2\},\\
U=\{z\mid z\in Hx_1,\,z^{-1}\in Hy_2\},\quad &V=\{z\mid z\in Hx_2,\,z^{-1}\in Hy_1\}.
\end{align*}
Then $S$, $T$, $U$, $V$, $S^{-1}$, $T^{-1}$, $U^{-1}$, $V^{-1}$ are pairwise disjoint, and we have
\[
Hx_1=S\cup U,\ \ Hx_2=T\cup V,\ \ Hy_1=S^{-1}\cup V^{-1},\ \ Hy_2=T^{-1}\cup U^{-1}.
\]
Hence $|H|=|S|+|U|=|T|+|V|=|S|+|V|=|T|+|U|$, which implies that $|S|=|T|$ and $|U|=|V|$. Write $S=\{s_1,\dots,s_c\}$, $T=\{t_1,\dots,t_c\}$, $U=\{u_1,\dots,u_d\}$ and $V=\{v_1,\dots,v_d\}$, where $c+d = |H|$. Then for each $i\in\{1,\dots,c\}$ and $j\in\{1,\dots,d\}$, the sets $\{s_i,t_i,s_i^{-1},t_i^{-1}\}$ and $\{u_j,v_j,u_j^{-1},v_j^{-1}\}$ are both right transversals of $H$ in $HxH\cup Hx^{-1}H=Hx_1\cup Hx_2\cup Hy_1\cup Hy_2$. Therefore,
$$\{s_1,t_1,s_1^{-1},t_1^{-1}\}, \dots, \{s_c,t_c,s_c^{-1},t_c^{-1}\},
\{u_1,v_1,u_1^{-1},v_1^{-1}\}, \dots, \{u_d,v_d,u_d^{-1},v_d^{-1}\}$$
are $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH\cup Hx^{-1}H$.
\end{proof}
\begin{lemma}\label{lem10}
Let $G$ be a group, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. Suppose that $HxH\neq Hx^{-1}H$ and $|H|$ is a prime. Then there exist $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH\cup Hx^{-1}H$. In particular, for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
Since $|HxH|/|H|=|H|/|H\cap H^x|$ and $|H|$ is a prime, we have $|HxH|/|H|=1$ or $|H|$. If $|HxH|/|H|=1$, then the conclusion follows from Lemma~\ref{lem9}. Now assume that $|HxH|/|H|=|H|$. Then Lemma~\ref{lem5} asserts that $\Gamma_x$ is a complete bipartite graph of valency $|H|$. By K\"{o}nig's $1$-factorization theorem \cite{Konig1976} (see also \cite[Corollary 16.6]{BM}), the $|H|$-regular bipartite graph $\Gamma_x$ can be decomposed into $|H|$ edge-disjoint perfect matchings. These $|H|$ perfect matchings induce $|H|$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$. Moreover, by the definition of $\Gamma_x$, each of these right transversals is inverse-closed. This completes the proof.
\end{proof}
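For the complete bipartite case, the $1$-factorization can be written down explicitly: indexing the two parts of $K_{n,n}$ by $\mathbb{Z}_n$, the sets $M_j=\{\{i,i+j\}:i\in\mathbb{Z}_n\}$ for $j=0,\dots,n-1$ are pairwise edge-disjoint perfect matchings covering all edges. A minimal sketch (purely illustrative, with $n=5$ chosen by us):
\begin{verbatim}
# The n matchings M_j = {(i, i+j mod n)} decompose K_{n,n} (parts indexed by Z_n).
n = 5
matchings = [{(i, (i + j) % n) for i in range(n)} for j in range(n)]
all_edges = {(i, k) for i in range(n) for k in range(n)}
assert set().union(*matchings) == all_edges               # together they cover K_{n,n}
assert sum(len(M) for M in matchings) == len(all_edges)   # and they are disjoint
print(n, "pairwise disjoint perfect matchings of K_{n,n}")
\end{verbatim}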
Next we consider the case $HxH=Hx^{-1}H$.
\begin{lemma}\label{lem11}
Let $G$ be a group, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. Suppose that $HxH=Hx^{-1}H=Hx$ and there is an involution in $Hx$. Then for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
In this case, every single element in $Hx$ gives a right transversal of $H$ in $HxH\cup Hx^{-1}H=Hx$. Since $Hx$ is inverse-closed and contains at least one involution, we can write
\[
Hx=\{r_1,\dots,r_c,s_1,s^{-1}_1,\dots,s_d,s^{-1}_d\}
\]
for some integers $c, d$ with $1\leqslant c\leqslant|H|$ and $c+2d=|H|$, where $r_i$ is an involution for $i=1,\dots,c$ and $s_j$ has order greater than $2$ for $j=1,\dots,d$. If $b>2d$, then $0<b-2d\leqslant|H|-2d=c$, and we take
\[
R=\{r_1,\dots,r_{b-2d},s_1,s^{-1}_1,\dots,s_d,s^{-1}_d\}.
\]
If $b\leqslant2d$ and $b$ is odd, then we take
\[
R=\{r_1,s_1,s^{-1}_1,\dots,s_{(b-1)/2},s^{-1}_{(b-1)/2}\}.
\]
If $b\leqslant2d$ and $b$ is even, then we take
\[
R=\{s_1,s^{-1}_1,\dots,s_{b/2},s^{-1}_{b/2}\}.
\]
In each case $R$ consists of $b$ pairwise disjoint transversals of $H$ in $HxH\cup Hx^{-1}H$, and moreover $R$ is inverse-closed.
\end{proof}
\begin{lemma}\label{lem12}
Let $G$ be a group, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. Suppose that $HxH=Hx^{-1}H$ and $|HxH|/|H|=2$. Then there exist $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH$. In particular, for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
Since $|HxH|/|H|=2$, we have $HxH=Hx\cup Hy$ for some $y\in G$. Take
\begin{align*}
S=\{z\mid z\in Hx,\,z^{-1}\in Hy\},\quad &T=\{z\mid z\in Hy,\,z^{-1}\in Hx\},\\
U=\{z\mid z\in Hx,\,z^2=e\},\quad &V=\{z\mid z\in Hy,\,z^2=e\},\\
X=\{z\mid z\in Hx,\,z^{-1}\in Hx,\,z^2\ne e\},\quad &Y=\{z\mid z\in Hy,\,z^{-1}\in Hy,\,z^2\ne e\}.
\end{align*}
By Lemma~\ref{lem5}, we have $|S|\geqslant 1$. It is clear that $S\cup U\cup X=Hx$ and $T\cup V\cup Y=Hy$. Since $S^{-1}=T$, we have $|S|=|T|$ and so $|U|+|X|=|V|+|Y|$. Hence we can write
\begin{align*}
X=\{x_1,x_1^{-1},\dots,x_k,x_k^{-1}\},\quad &Y=\{y_1,y_1^{-1},\dots,y_\ell,y_\ell^{-1}\},\\
U=\{x_{k+1},x_{k+2},\dots,x_{k+u}\},\quad &V=\{y_{\ell+1},y_{\ell+2},\dots,y_{\ell+v}\},\\
S=\{x_{k+u+1},x_{k+u+2},\dots,x_{k+u+s}\},\quad &T=\{x_{k+u+1}^{-1},x_{k+u+2}^{-1},\dots,x_{k+u+s}^{-1}\},
\end{align*}
where $2k+u+s=2\ell+v+s=|H|$.
Without loss of generality we may assume that $k\leqslant\ell$.
If $b\leqslant 2k+1$ and $b$ is odd, then we take
\[
R=\{x_{1},x_{1}^{-1},\dots,x_{(b-1)/2},x_{(b-1)/2}^{-1}\}\cup \{y_{1},y_{1}^{-1},\dots,y_{(b-1)/2},y_{(b-1)/2}^{-1}\}\cup \{x_{k+u+1},x_{k+u+1}^{-1}\}.
\]
If $b\leqslant 2k+1$ and $b$ is even, then we take
\[
R=\{x_{1},x_{1}^{-1},\dots,x_{b/2},x_{b/2}^{-1}\}\cup \{y_{1},y_{1}^{-1},\dots,y_{b/2},y_{b/2}^{-1}\}.
\]
If $2k+1<b\leqslant2\ell+1$ and $b$ is odd, then we take
\[
R=X\cup\{x_{k+1},x_{k+2},\dots,x_{k+(b-1-2k)}\}\cup\{y_1,y_1^{-1},\dots,y_{(b-1)/2},y_{(b-1)/2}^{-1}\}\cup\{x_{k+u+1},x_{k+u+1}^{-1}\}.
\]
If $2k+1<b\leqslant2\ell+1$ and $b$ is even, then we take
\[
R=X\cup\{x_{k+1},x_{k+2},\dots,x_{k+(b-2k)}\}\cup\{y_1,y_1^{-1},\dots,y_{b/2},y_{b/2}^{-1}\}.
\]
If $2\ell+1<b\leqslant2\ell+v$, then we take
\[
R=X\cup Y\cup\{x_{k+1},x_{k+2},\dots,x_{k+(b-2k)}\}\cup\{y_{\ell+1},y_{\ell+2},\dots,y_{\ell+(b-2\ell)}\}.
\]
If $2\ell+v<b\leqslant|H|$, then we take
\[
R=X\cup Y\cup U\cup V\cup \{x_{k+u+1},x_{k+u+2},\dots,x_{b-2k-u}\}\cup \{x_{k+u+1}^{-1},x_{k+u+2}^{-1},\dots,x_{b-2k-u}^{-1}\}.
\]
In each case $R$ consists of $b$ pairwise disjoint transversals of $H$ in $HxH\cup Hx^{-1}H$, and moreover $R$ is inverse-closed.
\end{proof}
\begin{lemma}\label{lem13}
Let $G$ be a group, let $H$ be a subgroup of $G$ with $|H|=m$ odd, and let $x\in G\setminus H$. Suppose that $HxH=Hx^{-1}H$, $|HxH|/|H|=m$, and $HxH$ contains an involution. Then there exist $|H|$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH$. In particular, for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
Since $|HxH|/|H|=m$, Lemma~\ref{lem5} implies that $\Gamma_x=K_m$. Write
\[
HxH=Hx_0\cup Hx_1\cup\dots\cup Hx_{m-1}.
\]
Consider $Hx_i$ for a fixed $i\in\{0,\dots,m-1\}$. Since $\Gamma_x=K_m$, for each $j\in\{0,\dots,m-1\}\setminus\{i\}$, there exists an element in $Hx_i$ whose inverse is in $Hx_j$.
Hence there is at most one involution in $Hx_i$ as $|Hx_i|=m$.
Moreover, since there is an involution in $HxH$, Lemma~\ref{lem2} implies that there is at least one involution in $Hx_i$.
Thus there is exactly one involution in $Hx_i$, which we denote by $x_{i,0}$.
Let $n=(m-1)/2$.
For $j=1,\dots,n$, since $\Gamma_x=K_m$, there exists $x_{i,j}\in Hx_{i+j}$ such that $x_{i,j}^{-1}\in Hx_{i-j}$, where the subscripts of $x_{i+j}$ and $x_{i-j}$ are taken modulo $m$.
Then
\[
R_i:=\{x_{i,0},x_{i,1},x_{i,1}^{-1},\dots,x_{i,n},x_{i,n}^{-1}\}
\]
is an inverse-closed right transversal of $H$ in $HxH$.
Clearly, $R_0,\dots,R_{m-1}$ are pairwise disjoint, and they form $|H|$ pairwise disjoint inverse-closed right transversals as desired.
\end{proof}
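The combinatorial core of the construction above is the classical near-$1$-factorization of $K_m$ for odd $m$: the near-matchings $F_i=\{\{i+j,i-j\}:1\leqslant j\leqslant(m-1)/2\}$ (indices modulo $m$) miss only the vertex $i$, and every edge $\{u,v\}$ lies in exactly one $F_i$, namely the one with $2i\equiv u+v \pmod m$. A minimal sketch (purely illustrative, with $m=7$ chosen by us):
\begin{verbatim}
# Near-1-factorization of K_m for odd m: F_i misses only vertex i, and the
# F_i partition the edge set since 2 is invertible modulo m.
m = 7
F = [{frozenset(((i + j) % m, (i - j) % m)) for j in range(1, (m - 1) // 2 + 1)}
     for i in range(m)]
edges = {frozenset((u, v)) for u in range(m) for v in range(u + 1, m)}
assert set().union(*F) == edges and sum(map(len, F)) == len(edges)
print(m, "near-perfect matchings of size", (m - 1) // 2, "partition the edges")
\end{verbatim}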
\section{Proofs of Theorems \ref{thm4} and \ref{thm1}}
\label{sec3}
\begin{proof}[Proof of Theorem~$\ref{thm4}$]
We only prove (a) as the proof of (b) is similar. Let
\[
G=A\rtimes \mathbb{Z}_2=\langle A,y\mid y^2=e,\,x^y=x^{-1}\text{ for all }x\in A\rangle,
\]
where $A$ is abelian. Let $H$ be a nontrivial subgroup of $G$. The ``if'' part of the statement in (a) is clearly true as a perfect code of $G$ is simply a $(0, 1)$-regular set of $G$. Now we prove the ``only if'' part of the statement. Assume that $H$ is a perfect code of $G$. Let $a$ and $b$ be integers with $0\leqslant a\leqslant|H|-1$, $0\leqslant b\leqslant |H|$, and $a$ even when $|H|$ is odd. By Lemma~\ref{lem4}, to prove that $H$ is an $(a,b)$-regular set of $G$, we only need to take an inverse-closed subset of $H\setminus\{e\}$ with size $a$ and $b$ pairwise disjoint right transversals of $H$ in $G \setminus H$ which are inverse-closed in each $HxH\cup Hx^{-1}H$ for $x\in G$.
Case~1: $H\leq A$. Then $H^y=H^{-1}=H$. It follows that $H$ is normal in $G$, and so $|HzH|/|H|=|H|/|H^z\cap H|=1$ for any $z\in G$.
Since $H$ is a perfect code of $G$, by Theorem~\ref{thm2}, for each $x \in G\setminus H$ such that $HxH=Hx^{-1}H$ and $|H|/|H\cap H^{x}|$ is odd, there exists an involution $y'\in Hx$.
Thus, by Lemmas~\ref{lem9} and \ref{lem11}, $H$ is an $(a,b)$-regular set of $G$.
Case~2: $H\nleq A$. Then $H$ contains an element $g\notin A $. Hence we can suppose $g=x'y$ with $x'\in A$. It follows that $g^2=x'(yx'y)=x'x'^{-1}=e$. That is, $g$ is an involution. Thus $G=A\rtimes \langle g\rangle$, and so $H=(H\cap A)\rtimes \langle g\rangle$. In particular, $H=(H\cap A)\cup (H\cap A)g$ and $|H|/|H\cap A|=2$.
It is clear that $H\cap A\leq H\cap H^{z}$ for any $z\in G$.
So we have $|HzH|/|H|=|H|/|H\cap H^{z}|=1$ or $2$. Since $H$ is a perfect code of $G$, by Theorem~\ref{thm2}, for each $x\in G\setminus H$ such that $HxH=Hx^{-1}H$ and $|H|/|H\cap H^x|$ is odd, there exists an involution in $Hx$. Hence, by Lemmas~\ref{lem9}, \ref{lem11} and \ref{lem12}, $H$ is an $(a,b)$-regular set of $G$.
\end{proof}
\begin{lemma}\label{lem8}
Let $G$ be a group of order $4p$, where $p$ is a prime, let $H$ be a nontrivial subgroup of $G$, and let $x\in G$. If $HxH\neq Hx^{-1}H$, then for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
Since $|G|=4p$ with $p$ prime, we have $|H|=r$ or $2r$ with $r\in\{2,p\}$. If $|H|=r$, then Lemma~\ref{lem10} yields the desired result.
Thus assume that $|H|=2r$ in the rest of the proof. Then $|HxH|/|H|=|H|/|H\cap H^x|=1,2,r$ or $2r$. If $|HxH|/|H|=1$ or $2$, then Lemma~\ref{lem9} yields the desired result.
Next assume that $|HxH|/|H|=r$. Then by Lemma~\ref{lem5} we have $\Gamma_x=K_{r,r}$. By K\"{o}nig's 1-factorization theorem \cite{Konig1976}, $\Gamma_x$ can be decomposed into $r$ edge-disjoint perfect matchings. These matchings induce $r$ pairwise disjoint inverse-closed right transversals of $H$ in $HxH\cup Hx^{-1}H$, denoted by $R_1,R_2,\dots,R_r$, where $R_i^{-1}=R_i$ and $R_i\cap R_j=\emptyset$ for all distinct $i,j\in\{1,2,\dots,r\}$. Write $HxH=Hx_1\cup Hx_2\cup\dots\cup Hx_r$, $Hx^{-1}H=Hx_{r+1}\cup Hx_{r+2}\cup\dots\cup Hx_{2r}$ and
\[M=(HxH\cup Hx^{-1}H)\setminus (R_1\cup R_2\cup\dots\cup R_r).
\]
Then for each $i\in \{1,2,\dots,2r\}$, since $|Hx_i|=|H|=2r$ and
\begin{align*}
M\cap Hx_i&=\big((Hx_1\cup Hx_2\cup\dots\cup Hx_{2r})\setminus (R_1\cup R_2\cup\dots\cup R_r)\big)\cap (Hx_i)\\
&=\big((Hx_1\cup Hx_2\cup\dots\cup Hx_{2r})\cap (Hx_i)\big)\setminus \big((R_1\cup R_2\cup\dots\cup R_r)\cap Hx_i\big)\\
&=(Hx_i)\setminus \big((R_1\cup R_2\cup\dots\cup R_r)\cap Hx_i\big),
\end{align*}
we have
\[
|M\cap Hx_i|=|Hx_i|-|(R_1\cup R_2\cup\dots\cup R_r)\cap Hx_i|=2r-r=r.
\]
Hence $M$ is the union of $r$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$. Moreover, since both $HxH\cup Hx^{-1}H$ and $R_1\cup R_2\cup\dots\cup R_r$ are inverse-closed, $M$ is inverse-closed as well. If $0\leqslant b\leqslant r$, then we take
\[
R=R_1\cup R_2\cup\dots\cup R_b.
\]
If $r< b\leqslant 2r$, then we take
\[
R=M\cup R_1\cup R_2\cup\dots\cup R_{b-r}.
\]
Hence, for $0\leqslant b\leqslant 2r$, $R$ consists of $b$ pairwise disjoint transversals of $H$ in $HxH\cup Hx^{-1}H$, and moreover $R$ is inverse-closed.
Now assume that $|HxH|/|H|=2r$. Then $\Gamma_x=K_{2r,2r}$ by Lemma~\ref{lem5}. Again, by K\"{o}nig's 1-factorization theorem \cite{Konig1976}, $\Gamma_x$ can be decomposed into $2r$ edge-disjoint perfect matchings. These $2r$ perfect matchings give rise to $2r$ pairwise disjoint right transversals of $H$ in $HxH\cup Hx^{-1}H$, and each of these right transversals is inverse-closed.
\end{proof}
\begin{lemma}\label{lem14}
Let $G$ be a group of order $4p$, where $p$ is a prime, let $H$ be a nontrivial subgroup of $G$, and let $x\in G\setminus H$ be such that $HxH=Hx^{-1}H$. Suppose that $HxH$ contains an involution when $|HxH|/|H|$ is odd. Then for any integer $0\leqslant b\leqslant |H|$, there exist $b$ pairwise disjoint right transversals of $H$ in $HxH$ whose union is inverse-closed.
\end{lemma}
\begin{proof}
If $|HxH|/|H|=1$ or $2$, then the result follows from Lemmas~\ref{lem11} and \ref{lem12}. Thus assume that $|HxH|/|H|\geqslant3$ in the rest of the proof. Then $|G|\neq8$, and so $p$ is odd. Hence $|H|=2$, $4$, $p$ or $2p$. If $|H|=2$, then $|HxH|/|H|=|H|/|H\cap H^x|\leqslant2$, a contradiction. If $|H|=2p$, then $|HxH|/|H|\leqslant|G|/|H|=2$, again a contradiction. If $|H|=p$, then the result follows from Lemmas~\ref{lem11} and \ref{lem13}.
It remains to consider the case $|H|=4$. Since $|HxH|/|H|\geqslant3$, we have $|HxH|/|H|=4$. So $\Gamma_x$ is the complete graph $K_4$. Consequently, there are three pairwise disjoint perfect matchings in $\Gamma_x$, which induce three pairwise disjoint inverse-closed transversals of $H$ in $HxH\cup Hx^{-1}H$. For $b=4$, note that the complement of their union in $HxH$ meets each right coset of $H$ in exactly one element and is inverse-closed, and hence provides a fourth transversal.
\end{proof}
\begin{proof}[Proof of Theorem~$\ref{thm1}$]
We only prove (a) as the proof of (b) is similar. Let $G$ be a group of order $4p$ or $pq$ for some primes $p$ and $q$, and let $H$ be a nontrivial subgroup of $G$. Since a perfect code of $G$ is exactly a $(0, 1)$-regular set of $G$, the ``if'' part of the statement in (a) is clearly true.
Now we prove the ``only if'' part of the statement in (a). Assume that $H$ is a perfect code of $G$. Let $a$ and $b$ be integers with $0\leqslant a\leqslant|H|-1$, $0\leqslant b\leqslant |H|$, and $a$ even when $|H|$ is odd. By Lemma~\ref{lem4}, to prove that $H$ is an $(a,b)$-regular set of $G$, it suffices to take an inverse-closed subset of $H\setminus\{e\}$ with size $a$ and $b$ pairwise disjoint right transversals of $H$ in $G \setminus H$ which are inverse-closed in each $HxH\cup Hx^{-1}H$ for $x\in G$.
Since $H$ is a perfect code of $G$, by Theorem~\ref{thm2}, for each $x\in G\setminus H$ such that $HxH=Hx^{-1}H$ and $|H|/|H\cap H^x|$ is odd, there exists an involution $y\in Hx$. Hence, if $|G|=4p$ for some prime $p$, then $H$ is an $(a,b)$-regular set of $G$ by Lemmas~\ref{lem8} and \ref{lem14}. The rest of the proof handles the case $|G|=pq$.
Case~1: Both $p$ and $q$ are odd primes. Suppose that there exists some $x\in G\setminus H$ such that $HxH=Hx^{-1}H$. Since $H$ is a perfect code of $G$, Theorem~\ref{thm2} implies that there exists some $z\in G\setminus H$ with $z^2\in H$ such that $Hz$ contains an involution, which is impossible since $|G|=pq$ is odd and so $G$ contains no involutions. Hence $HxH\neq Hx^{-1}H$ for any $x\in G\setminus H$. Since $|G|=pq$ with both $p$ and $q$ odd primes, we have $|H|=r$ for some $r \in \{p, q\}$. Since $|HxH|/|H|=|H|/|H\cap H^x|$, it follows that $|HxH|/|H|=1$ or $r$. Thus, by Lemmas~\ref{lem9} and \ref{lem10}, $H$ is an $(a,b)$-regular set of $G$.
Case~2: One of $p$ and $q$ is even, say, $p=2$. Then $|H|=2$ or $q$. Let $x\in G\setminus H$. First assume that $HxH\neq Hx^{-1}H$.
If $|H|=2$, then $|HxH|/|H|=|H|/|H\cap H^x|\leqslant2$, and by Lemma~\ref{lem9}, $H$ is an $(a,b)$-regular set of $G$. If $|H|=q$, then by Lemma~\ref{lem10}, $H$ is an $(a,b)$-regular set of $G$. Next assume that $HxH=Hx^{-1}H$. If $|H|=2$, then $|HxH|/|H|=|H|/|H\cap H^x|\leqslant2$, and by Lemmas~\ref{lem11} and \ref{lem12}, $H$ is an $(a,b)$-regular set of $G$. If $|H|=q$, then $|HxH|/|H|=|H|/|H\cap H^x|$ is $1$ or $q$, and hence by Lemmas~\ref{lem11} and \ref{lem13}, $H$ is an $(a,b)$-regular set of $G$.
\end{proof}
\section{Subvarieties of ${\mathcal A}_g(n)$ }\label{sec:Ag}
\subsection{Siegel upper half-space, ${\mathcal A}_g$ and level structures}
We denote by ${\mathbb{H}}_g=\{\tau\in\operatorname{Mat}_{g\times g}({\mathbb{C}})\mid \tau=\tau^t, \operatorname{Im}\tau>0\}$ the {\it{Siegel upper half-space}} of symmetric matrices with positive definite imaginary part. It is a homogeneous space for the action of $\op{Sp}(2g,{\mathbb{R}})$, where an element
$$
\gamma=\left(\begin{matrix} A & B \\ C & D\end{matrix}\right) \in \op{Sp}(2g,{\mathbb{R}})
$$
acts via
$$
\gamma\cdot \tau:=(A\tau+B)(C\tau+D)^{-1}.
$$
We denote by $\Gamma_g:=\op{Sp}(2g,{\mathbb{Z}})$ the {\it{Siegel modular group}}, and let $\Gamma_g(n):=\{\sigma\in\Gamma_g:\sigma\equiv I_{2g}\mod n\}$ denote the
{\it{$n$-th principal congruence subgroup}} of $\Gamma_g$. The quotient ${\mathcal A}_g={\mathbb{H}}_g/\Gamma_g$ is the moduli space of complex principally polarized abelian varieties (ppav), and ${\mathcal A}_g(n)={\mathbb{H}}_g/\Gamma_g(n)$ is the moduli space of ppav with a choice of a full symplectic level~$n$ structure.
We denote by $p:{\mathbb{H}}_g\to{\mathcal A}_g$ and $p_n:{\mathbb{H}}_g\to{\mathcal A}_g(n)$ the quotient maps,
so that ${\mathbb{H}}_g$ identifies to the universal cover of ${\mathcal A}_g$
(in the orbifold sense). We recall incidentally that, for $n\geq 3$,
the moduli space ${\mathcal A}_g(n)$ is a smooth variety and $p_n$ is its universal cover in the standard sense.
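The fact that this action preserves ${\mathbb{H}}_g$ (symmetry and positive definiteness of the imaginary part) is classical, and can be checked numerically on examples. The following minimal sketch (ours, purely illustrative) does so for $g=2$ and the symplectic element $\left(\begin{smallmatrix} 0 & -I \\ I & 0\end{smallmatrix}\right)$, which acts by $\tau\mapsto-\tau^{-1}$:
\begin{verbatim}
# Numerical check that tau -> (A tau + B)(C tau + D)^{-1} stays in H_g.
import numpy as np

g = 2
rng = np.random.default_rng(0)
M = rng.normal(size=(g, g)); P = M @ M.T + g * np.eye(g)   # Im(tau) positive definite
S = rng.normal(size=(g, g)); S = (S + S.T) / 2             # Re(tau) symmetric
tau = S + 1j * P

A, B = np.zeros((g, g)), -np.eye(g)
C, D = np.eye(g), np.zeros((g, g))                         # gamma = [[0,-I],[I,0]]
tau2 = (A @ tau + B) @ np.linalg.inv(C @ tau + D)          # = -tau^{-1}
print(np.allclose(tau2, tau2.T))                           # True: symmetric
print(np.linalg.eigvalsh(tau2.imag).min() > 0)             # True: Im part > 0
\end{verbatim}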
Given a subvariety ${\mathcal Y}\subset{\mathcal A}_g$, we will also write $\wti{{\mathcal Y}}$ to denote
its preimage $p^{-1}({\mathcal Y})$ inside ${\mathbb{H}}_g$, and
${\mathcal Y}(n)$ to denote its preimage in ${\mathcal A}_g(n)$, which equals $p_n(\wti{{\mathcal Y}})$.\\
We assume $g\geq 2$, and let $k$ be half-integral. A function $F:{\mathbb{H}}_g\to{\mathbb{C}}$ is called a {\em modular form of weight $k$ and multiplier $\chi$
with respect to a subgroup $\Gamma\subset\Gamma_g$} if
$$
F(\gamma\cdot\tau)=\chi(\gamma)\det(C\tau+D)^kF(\tau),\quad \forall \gamma
\in\Gamma,\ \forall \tau\in{\mathbb{H}}_g.
$$
We shall write $[\Gamma, k,\chi]$ for this space. We omit the {multiplier} if it is trivial.
We shall define the ring of modular forms as
\begin{equation}\label{modforms}
A(\Gamma, \chi)=\bigoplus_{j=0}^{\infty}[\Gamma, j/2, \chi ^j].
\end{equation}
\subsection{Some topological properties of ${\mathcal A}_g$}
Let $g\geq 3$ and let ${\mathcal A}={\mathbb{H}}_g/\Gamma$
for a certain
subgroup $\Gamma\subset\Gamma_g$ of finite index.
We can regard the map ${\mathcal A}\rightarrow{\mathcal A}_g$
as a finite \'etale cover of orbifolds, or as a
finite, possibly ramified, cover of ${\mathcal A}_g$.
Since $\Gamma_g$ is finitely presented, so is $\Gamma$.
Moreover, by \cite{bms} such $\Gamma$ contains a principal congruence
subgroup $\Gamma_g(n)$ for some $n\geq 2$.
We recall that
$\mathrm{Pic}({\mathcal A})/\mathrm{tors}$ is generated by
$\lambda$ or by $\lambda/2$,
where $(w/2)\lambda$ is the class of a divisor described by a modular form of weight $w/2$. The same holds for the Satake
compactification $\ol{{\mathcal A}}$ of ${\mathcal A}$, whose boundary has codimension $g$.
Thus every effective divisor $D$ in ${\mathcal A}$ is in fact the zero locus of a modular form, and so its closure $\ol{D}$ in $\ol{{\mathcal A}}$ is Poincar\'e dual
to a positive half-integer multiple of $\lambda$ (up to torsion). In particular, $\ol{D}$ is ample and $\partial D:=\ol{D}\setminus D$ has codimension $g-1$ or $g$ inside $\ol{D}$.
We begin by stating a consequence of the Lefschetz hyperplane theorem,
as formulated in Proposition \ref{prop:divisor}.
\begin{thm}[Connectedness of zero loci of modular forms]\label{thm:connectedness}
Let $g\geq 3$ and let ${\mathcal A}={\mathbb{H}}_g/\Gamma$ for some finite-index subgroup $\Gamma$ of $\Gamma_g$. Then every divisor $D$ in ${\mathcal A}$ pulls back to a connected divisor $\wti{D}$ in ${\mathbb{H}}_g$.
Moreover, if $D$ is locally irreducible, then $\wti{D}$ is irreducible.\\
For $g=2$ the same conclusions hold if $\ol{D}$ does not contain an irreducible component of $\partial{\mathcal A}$.
\end{thm}
\begin{proof}
Up to replacing $\Gamma$ by $\Gamma_g(n)\subset\Gamma$ for a suitable $n\geq 3$, and $D$ by its pull-back
to ${\mathbb{H}}_g/\Gamma_g(n)$, we can assume that ${\mathcal A}$ is a smooth variety
and we let $\ol{{\mathcal A}}$ be its Satake compactification and $\partial{\mathcal A}=\ol{{\mathcal A}}\setminus{\mathcal A}$.
By the above discussion, $\ol{D}$ is an ample Cartier divisor in $\ol{{\mathcal A}}$.
We recall that, for $g\geq 3$, the pair $(\ol{{\mathcal A}},\partial{\mathcal A})$ satisfies properties
(I)-(II)-(III$'_2$) and so $\pi_1(D)\twoheadrightarrow\Gamma$ by Proposition \ref{prop:divisor}.
It follows that $\wti{D}$ is connected by Lemma \ref{lm:lift}(iii).
The second claim is a consequence of Proposition \ref{prop:irred}(i).\\
For $g=2$ it is enough to note that the required condition ensures that $\ol{D}$
satisfies (III$_2$), so that Proposition \ref{prop:divisor} still applies.
\end{proof}
Note that, for $g\geq 4$, the same argument as above
shows that $\pi_1(D)\cong\Gamma$ and so its lift $\wti{D}$ is simply connected.
\begin{example}[Eisenstein series]
For every $g$ we write $\Gamma_{g,0}\subset \Gamma_{g}$ for the subgroup defined by $C=0$.
For every $k\geq 1$ the Eisenstein series
\[
E_{2k}(\tau):=\sum_{\gamma\in\Gamma_{g,0}\backslash\Gamma_g} \det(C\tau+D)^{-2k},\qquad
\text{where}\quad\gamma=
\left(\begin{array}{cc}
A & B\\
C & D
\end{array}\right)
\]
is a modular form and determines a divisor $\ol{D}$ in $\ol{{\mathcal A}}_g$ that does not contain $\partial{\mathcal A}_g$. It follows that $\wti{D}\subset{\mathbb{H}}_g$ is connected for $g\geq 2$.
\end{example}
In some cases the generic element $\ol{D}$ in its linear system
has smooth internal part $D$, which occurs by Bertini if
$|\ol{D}|$ is very ample. One can thus
apply Theorem \ref{thm:connectedness}
to conclude that the preimage of $D$ in ${\mathbb{H}}_g$ is irreducible.
Since the Satake compactification of ${\mathbb{H}}_g/\Gamma$ is
${\rm Proj} (A(\Gamma))$, the linear system of modular forms
of large weight is certainly very ample.
On the other hand, if we are allowed to go up to high levels (namely,
to pick $\Gamma$ of arbitrarily large finite index),
such a phenomenon can occur at any positive weight.
In fact, as a consequence of the results of Igusa, cf. \cite{Igbook}, we have that for high enough levels, modular forms of weight $1/2$ give an embedding of the modular variety. We also recall that for $g\geq 2$ the minimal weight is exactly $1/2$, {\cite[page 88]{FR}}.\\
Our aim now is to investigate the irreducibility of the preimage of
{\it{any}} divisor of ${\mathcal A}$.
In order to do that, we need to recall, again from {\cite[page 88]{FR}}, the following
result, which can also be obtained as a consequence of Lemma \ref{lemma:prime}.
We will use the term {\it{absolutely prime form}} (resp. {\it{totally prime form}}) to mean
a reduced absolutely prime modular form (resp. a reduced totally prime modular form).
\begin{lm}[Factorization into absolutely prime forms]\label{lm:factorization}
For $g\geq 3$ the following hold.
\begin{itemize}
\item[(i)]
Each non-vanishing modular form of weight $1/2$ is an absolutely prime form.
\item[(ii)]
Let $F$ be a non-vanishing modular form with respect to a finite-index subgroup $\Gamma$
of $\Gamma_g$.
There is a factorization
$$F= f_1^{r_1}\cdot\dots\cdot f_u^{r_u}$$
where $r_1,\dots,r_u$ are positive integers and $f_1,\dots,f_u$ are absolutely prime forms
with respect to some congruence subgroup $\Gamma'\subset\Gamma$.
Moreover, such a factorization is unique up to reordering the $f_i$'s.
\end{itemize}
\end{lm}
As an immediate consequence of Theorem \ref{thm:prime} we have the following
\begin{thm}[Absolutely prime modular forms are totally prime] \label{thm:ag}
For $g\geq 2$,
every absolutely irreducible divisor $D\subset {\mathbb{H}}_g/\Gamma$
is totally irreducible.
\end{thm}
Note that Lemma \ref{lm:factorization} does not hold for $g=2$: an example
is discussed in Section \ref{sec:tn2}.
For a deeper investigation of subvarieties of ${\mathbb{H}}_g/\Gamma$ it will be useful to discuss the case of theta constants, and to consider the known characterizations of the locus of hyperelliptic Jacobians in ${\mathcal A}_g(2)$ in terms of the vanishing of certain sets of theta constants, as well as the Jacobian locus.
\subsection{Theta functions}\label{sec:theta}
The {\it{(first-order) theta function $\vartheta:{\mathbb{H}}_g\times{\mathbb{C}}^g\rightarrow{\mathbb{C}}$}}
is defined as
\[
\vartheta(\tau,z):=\sum\limits_{n\in{\mathbb{Z}}^g} \exp \pi i \left(
n^t\tau n+2n^t z\right)
\]
and it is even in $z$.
The {\it{theta constant}}
$\theta:=\vartheta(\cdot,0):{\mathbb{H}}_g\rightarrow{\mathbb{C}}$ associated to $\vartheta$
is a modular form of weight $1/2$ relative to a suitable subgroup of $\Gamma_g$.
For $\varepsilon,\delta\in {\mathbb{Z}}_2^g$
we define the {\it{(first-order) theta function
$\vartheta\chars\varepsilon\delta:{\mathbb{H}}_g\times{\mathbb{C}}^g\rightarrow{\mathbb{C}}$
with characteristic $[\varepsilon,\delta]$}} as
$$
\tt\varepsilon\delta(\tau,z):=\sum\limits_{n\in{\mathbb{Z}}^g} \exp \pi i \left[\left(
n+\frac{\varepsilon}{2}\right)^t\tau \left(n+\frac{\varepsilon}{2}\right)+2\left(n+\frac{\varepsilon}{2}\right)^t\left(z+
\frac{\delta}{2}\right)\right]
$$
and note that $\vartheta\chars{0}{0}=\vartheta$.
We will usually write $\varepsilon,\delta$ as rows (or sometimes columns, if notationally more convenient) of $g$ zeroes and ones, and operate with them over ${\mathbb{Z}}_2$ unless stated otherwise.
The {\it{theta characteristic}} $m=\chars{\varepsilon}{\delta}\in{\mathbb{Z}}_2^{2g}$ is called {\it{even}} or {\it{odd}} depending on whether the scalar product $\varepsilon\cdot\delta$ is zero or one as an element of ${\mathbb{Z}}_2$. It turns out that there are $2^{g-1}(2^g+1)$ even characteristics and $2^{g-1}(2^g-1)$ odd ones.
As a function of~$z$, the theta function $\vartheta_m(\tau,z)$ is even (resp.~odd)
if its characteristic $m$ is even (resp.~odd).
The {\it{theta constant $\theta_m$}} is the value of theta function $\vartheta_m$ at $z=0$, namely
the function $\theta_m:{\mathbb{H}}_g\rightarrow{\mathbb{C}}$ defined as
$$ \theta_m(\tau):=\vartheta_m(\tau,0).$$
Note that theta constants vanish identically for $m$ odd.
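Both the vanishing for odd $m$ and the parity count above are easy to check numerically. The sketch below (purely illustrative; the truncation bound and the sample point $\tau=i$ are our choices) evaluates truncated theta series for $g=1$ and enumerates characteristics by parity for $g=3$:
\begin{verbatim}
import numpy as np
from itertools import product

def theta_const(eps, dlt, tau, N=30):            # g = 1, truncated at |n| <= N
    n = np.arange(-N, N + 1)
    return np.exp(1j * np.pi * ((n + eps / 2) ** 2 * tau
                                + (n + eps / 2) * dlt)).sum()

print(abs(theta_const(0, 0, 1j)))                # even characteristic: ~1.0864
print(abs(theta_const(1, 1, 1j)))                # odd characteristic: ~0

g = 3                                            # parity count: 2^(g-1)(2^g+1) even
evens = sum(1 for c in product((0, 1), repeat=2 * g)
            if sum(e * d for e, d in zip(c[:g], c[g:])) % 2 == 0)
print(evens, 2 ** (g - 1) * (2 ** g + 1))        # 36 36
\end{verbatim}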
Similarly to $\theta$, the theta constants $\theta_m$ are examples of modular forms, i.e.~of sections of a suitable line bundle on a suitable cover of ${\mathcal A}_g$. In fact, these are seen to transform as follows (cf. \cite{Igbook} and \cite{FR} for the general formula):
\[
\tt\varepsilon\delta(\gamma \cdot \tau,0) = \left(\kappa(\gamma) \chi_{\varepsilon,\delta}(\gamma) \det{(C \tau + D)}^{\frac{1}{2}}\right)\cdot \tt\varepsilon\delta(\tau,0) \quad \quad \forall \gamma \in \Gamma_g(2)
\]
where $\kappa(\gamma)$ is an eighth root of unity for any $\gamma=\left(\begin{array}{cc} A & B \\ C & D\end{array}\right)$ and $ \chi_{\varepsilon,\delta}$ is a character.
In particular, $\theta\chars\varepsilon\delta$ is a modular form of weight $\frac{1}{2}$ and
can be seen as a well-defined section on a line bundle
over ${\mathcal A}_g(2)$, whose zero locus in ${\mathcal A}_g(2)$ is denoted by $\tn\chars\varepsilon\delta$.
The group $\Gamma_g$ acts on theta characteristics, considered as elements of ${\mathbb{Z}}_2^{2g}$, via an affine-linear action of its quotient $\op{Sp}(2g,{\mathbb{Z}}_2)=\Gamma_g/\Gamma_g(2)$. In particular, this is to say that $\Gamma_g(2)$ is precisely the subgroup of~$\Gamma_g$ that fixes each characteristic. Moreover, $\Gamma_g/\Gamma_g(2)$ acts transitively on the subset
of even characteristics.
We consider the union $\tn(2)$ of all $\tn\chars\varepsilon\delta\subset {\mathcal A}_g(2)$, as $\chars\varepsilon\delta$ ranges among the $2^{g-1}(2^g+1)$ even theta characteristics. It descends to a divisor $\tn\subset {\mathcal A}_g$, which is thus the image of $\tn\chars\varepsilon\delta$
via the cover ${\mathcal A}_g(2)\rightarrow{\mathcal A}_g$ for any even $\chars\varepsilon\delta$.
Geometrically,~$\tn$ is the locus of ppav whose theta divisor has a singularity at an even two-torsion point.
Thus we have
\begin{prop}[Irreducible components of $\tn(2)$] \label{prop:tn-irred}
For any $g\geq 3$ and for any $\varepsilon,\delta$
the divisor $\tn\chars\varepsilon\delta$ is irreducible, and so are its preimages
in ${\mathcal A}_g(2n)$ for every $n>1$. It follows that $\tn(2n)$
is the union of $2^{g-1}(2^g+1)$ irreducible components and it is connected,
thus $\tn(2n)$ is not locally irreducible.
As a consequence, $\tn\subset{\mathcal A}_g$ is irreducible but it is not locally irreducible.
\end{prop}
\begin{proof}
Fix an even $\chars\varepsilon\delta$.
The divisor $\tn\chars\varepsilon\delta$ is the zero locus inside ${\mathcal A}_g(2)$
of the modular form $\theta\chars\varepsilon\delta(\tau)$ of minimal weight $\frac{1}{2}$, which then
is an absolutely prime form.
Observe now that, for $g\geq 3$, every pair of irreducible components of $\tn(2)$
intersects. In fact, the closures of two distinct components $\tn_m$ and $\tn_{m'}$
inside the Satake compactification $\ol{{\mathcal A}}_g(2)$ must intersect each other, because theta-null divisors are ample.
Since $\partial{\mathcal A}_g(2)$ has codimension $g\geq 3$ inside $\ol{{\mathcal A}}_g(2)$,
it follows that $\tn_m\cap\tn_{m'}$ is not contained
inside $\partial{\mathcal A}_g(2)$, and so the two components must intersect inside ${\mathcal A}_g(2)$.
As a consequence, $\tn(2)$ is connected and not locally irreducible, and so $\tn(2n)$ is too for all $n$.
Since $\tn$ is the image of any $\tn\chars\varepsilon\delta$, it follows that $\tn$ is irreducible.
Moreover, $\tn(2)\rightarrow\tn$ is an \'etale cover (in the orbifold sense), and so $\tn$ cannot be locally irreducible either.
\end{proof}
As a consequence of our Theorem \ref{thm:ag}, we then have the following
\begin{cor}[$\tc\varepsilon\delta$ is totally prime for $g\geq 3$]\label{cor:tn}
For $g\geq 3$ and for any even characteristics $\chars\varepsilon\delta$,
the divisor $\wti{\tn}\chars\varepsilon\delta\subset{\mathbb{H}}_g$ is normal and irreducible.
\end{cor}
\begin{proof}
We have seen that, since $\theta\chars\varepsilon\delta$ is a modular form of weight $1/2$, it is absolutely prime.
The irreducibility of $\wti{\tn}\chars\varepsilon\delta$ follows from Theorem \ref{thm:ag}.
Moreover, $\tn\chars\varepsilon\delta$ is a divisor inside
${\mathcal A}_g(2)$ which is regular in codimension $1$, and so it is normal (see \cite{cvg}). Since normality is a local property, it follows that
$\wti{\tn}\chars\varepsilon\delta$ is normal too.
\end{proof}
\subsection{Lifting subvarieties that contain ${\mathcal J}_g$ or ${\mathcal H}_g$}\label{sec:tn}
We use ${\mathcal J}_g\subset{\mathcal A}_g$ to denote the locus of Jacobians of smooth genus $g$ curves, and denote by ${\mathcal H}_g\subset{\mathcal J}_g$ the locus of hyperelliptic Jacobians.
We recall that ${\mathcal J}_g$ is the image of the moduli space of curves ${\mathcal M}_g$
via the Torelli morphism $t:{\mathcal M}_g\rightarrow{\mathcal A}_g$
and that ${\mathcal H}_g$ is the image of the hyperelliptic locus $\mathcal{HM}_g\subset{\mathcal M}_g$
via $t$.
Moreover ${\mathcal J}_g\setminus{\mathcal H}_g$ and ${\mathcal H}_g$ are smooth and
${\mathcal J}_g$ has unibranched singular locus ${\mathcal H}_g$.
We recall the following fact.
\begin{prop}[Irreducible components of ${\mathcal H}_g(2)$ \cite{tsu}]
For $g\geq 2$, the hyperelliptic locus ${\mathcal H}_g(2)$ has
$$ 2^{g^2} \prod_{k=1}^{g}( 2^{2k}-1)/(2g+2)! = |\op{Sp}(2g,{\mathbb{Z}}_2)|/ |\mathfrak{S}_{2g+2}|$$ irreducible components.
Moreover, if ${\mathcal H}_g(2)_{\mathrm{irr}}$ is such a component,
then the preimage of ${\mathcal H}_g(2)_{\mathrm{irr}}$ in ${\mathcal A}_g(2n)$ is irreducible.
\end{prop}
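For small $g$ the count is quickly evaluated (a minimal sketch of ours): for $g=2$ one finds a single component, consistently with the fact that every genus-$2$ curve is hyperelliptic, while for $g=3$ one finds $36$ components.
\begin{verbatim}
from math import factorial, prod

def components(g):                       # |Sp(2g, Z_2)| / |S_{2g+2}|
    return (2 ** (g * g)
            * prod(2 ** (2 * k) - 1 for k in range(1, g + 1))
            // factorial(2 * g + 2))

for g in (2, 3, 4):
    print(g, components(g))              # 1, 36, 13056
\end{verbatim}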
Since ${\mathcal H}_g$ is irreducible, the group
$\Gamma_g$ acts transitively on the set of irreducible components of ${\mathcal H}_g(2)$.
Hence the above proposition is a consequence of the following result, cf.~\cite{mutheta2} or \cite{ac}.
\begin{prop}[Fundamental group of the hyperelliptic locus]\label{prop:pi1-hyp}
If $\iota:{\mathcal H}_g\rightarrow{\mathcal A}_g$ is the inclusion of the locus of hyperelliptic Jacobians
with $g\geq 2$,
then the image of $\pi_1(\iota):\pi_1({\mathcal H}_g)\rightarrow\pi_1({\mathcal A}_g)=\Gamma_g$ fits into the following exact sequence
\[
1\rightarrow \Gamma_g(2)\rightarrow \mathrm{Im}(\pi_1(\iota))\rightarrow \mathfrak{S}_{2g+2}\rightarrow 1.
\]
In particular, any irreducible component ${\mathcal H}_g(2n)_{\mathrm{irr}}$ of ${\mathcal H}_g(2n)$
satisfies $$\pi_1({\mathcal H}_g(2n)_{\mathrm{irr}})\twoheadrightarrow \Gamma_g(2n).$$
\end{prop}
In fact, using Proposition \ref{prop:pi1-hyp}
and Lemma \ref{lm:lift}(iii) we can draw the following conclusion.
\begin{cor}[Irreducible components of $\wti{{\mathcal H}}_g$]
Let ${\mathcal H}_g(2)_{\mathrm{irr}}$ be a connected component of ${\mathcal H}_g(2)$.
\begin{itemize}
\item[(a)]
The preimage of ${\mathcal H}_g(2)_{\mathrm{irr}}$ in ${\mathbb{H}}_g$ is smooth and connected.
\item[(b)]
The preimage of ${\mathcal H}_g$ in ${\mathbb{H}}_g$ is formed by
$2^{g^2} \prod_{k=1}^{g}( 2^{2k}-1)/(2g+2)!$ smooth irreducible components.
\end{itemize}
\end{cor}
Similarly, for the jacobian locus we have:
\begin{lm}[Irreducibility of the Jacobian locus $\wti{{\mathcal J}}_g$]
The preimage $\wti{{\mathcal J}}_g$ in ${\mathbb{H}}_g$ is irreducible, and so is
${\mathcal J}_g(n)$ for all $n$.
\end{lm}
\begin{proof}
Since ${\mathcal M}_g(n)$ is smooth and connected, ${\mathcal M}_g$ is a connected orbifold.
Hence, by Lemma \ref{lm:lift}(iii), it is enough to show
that $\pi_1(t):\pi_1({\mathcal M}_g)\rightarrow\pi_1({\mathcal A}_g)$ is surjective
as a map of orbifold fundamental groups.
Such a homomorphism can be identified with the symplectic representation
$\mathrm{MCG}_g\rightarrow \op{Sp}(2g,{\mathbb{Z}})$, which is classically known to be surjective
(see, for instance, Chapter 6 of \cite{farb-margalit}).
\end{proof}
Here we specialize some of the results obtained in Section \ref{sec:topological} to loci in ${\mathcal A}_g$ that contain the Jacobian
or the hyperelliptic locus.\smallskip
Since the inclusions ${\mathcal J}_g(n)\hookrightarrow{\mathcal A}_g(n)$ and ${\mathcal H}_g(2k)\hookrightarrow{\mathcal A}_g(2k)$
induce surjections at the level of orbifold fundamental groups,
the following is an immediate consequence of Corollary \ref{cor:criterion}(i).
\begin{prop}[Subvarieties containing hyperelliptic or jacobian locus]\label{prop:irr-Mg}
Let $Z\subset{\mathcal A}_g (n)$ be an irreducible subvariety.
\begin{itemize}
\item[(i)]
If a Zariski-open subset of ${\mathcal J}_g (n)$
(respectively, of ${\mathcal H}_g(n)_{\mathrm{irr}}$ with $n$ even) is contained in $Z$, then
$\wti{Z}\subset{\mathbb{H}}_g$ is connected, with finitely many irreducible components.
\item[(ii)]
If a Zariski-open subset of ${\mathcal J}_g (n)$ (respectively, of ${\mathcal H}_g(n)_{\mathrm{irr}}$ with $n$ even)
is contained in $Z$, but it does not entirely sit
inside the singular locus of $Z$, then $\wti{Z}\subset{\mathbb{H}}_g$ is irreducible.
\end{itemize}
\end{prop}
As a first example, one can apply the above Proposition \ref{prop:irr-Mg}(ii) to the case of $Z=\tn\chars\varepsilon\delta$ to deduce again the irreducibility of $\wti{\tn}\chars\varepsilon\delta$. In fact, in Theorem 9.1 on page 137 of \cite{mutheta2},
Mumford gives a characterization of a component $\wti{{\mathcal H}}_g(2)_{\mathrm{irr}}$ in terms of the vanishing of theta constants. Moreover, in \cite{sm} it is proved that these equations cut $\wti{{\mathcal H}}_g(2)_{\mathrm{irr}}$ out smoothly, and so $\wti{{\mathcal H}}_g(2)_{\mathrm{irr}}$ is contained in the smooth locus of $\tn\chars\varepsilon\delta$.\\ %
As a second example, we mention the connected component
of $\mathcal{N}_k$ that contains the jacobian locus.
It was proven by Andreotti-Mayer \cite{anma} that the theta divisor of a Jacobian of dimension $g$ has singular locus of dimension at least $g-4$.
If $\mathcal{N}_k\subset{\mathcal A}_g$ is the locus of Abelian varieties whose theta divisor has singular locus of dimension at least $k$,
then
for all $k\leq g-4$ there exists an irreducible component
$\mathcal{N}^J_k$ of $\mathcal{N}_k$ that contains the jacobian locus.
Then $\wti{\mathcal{N}}^J_k$ is connected with finitely many irreducible components
by Proposition \ref{prop:irr-Mg}(i).
In a subsequent investigation we will check that ${\mathcal J}_g$ sits
inside the singular locus of $\mathcal{N}^J_k$ for all $k\leq g-5$;
hence we cannot immediately conclude that $\wti{\mathcal{N}}^J_k\subset{\mathbb{H}}_g$ is irreducible.
\subsection{Examples}
In this last section we treat three distinguished examples.
\subsubsection{Theta-nulls in genus two}\label{sec:tn2}
First of all we observe that
the situation for $\tn\chars\varepsilon\delta$ in genus $2$ is completely different, as
$\tc\varepsilon\delta$ is not absolutely prime for $g=2$.
In fact, the divisor $\tn\subset {\mathcal A}_2$ is irreducible
and it coincides with the locus ${\mathcal A}_2^{\mathrm{dec}}$ of decomposable Abelian varieties. Hence, its closure contains $\partial{\mathcal A}_2$, and so $\partial\tn$ has codimension $1$ inside $\ol{\tn}$.
It follows that each of the $10$ smooth irreducible components $\tn\chars\varepsilon\delta$ of $\tn(2)$ consists of decomposable Abelian varieties and $\partial\tn\chars\varepsilon\delta$ has codimension $1$ inside $\ol{\tn}\chars\varepsilon\delta$. Hence, in this case the conditions of Proposition \ref{prop:divisor} are not satisfied.
In particular, $\pi_1(\tn)$ can be identified with
$(\Gamma_1\times\Gamma_1)\rtimes \mathfrak{S}_2$, namely
\[
\pi_1(\tn)\cong
\left\{
\left(\begin{array}{c|c}A & 0\\ \hline 0 & D\end{array}\right),
\ \left(\begin{array}{c|c}A\cdot\sigma & 0\\ \hline 0 & D\cdot\sigma\end{array}\right)
\ \Big|\ A,D\in\mathrm{SL}(2,{\mathbb{Z}})\right\}
\]
where $\sigma=\left(\begin{array}{cc}0 & 1\\ 1 & 0\end{array}\right)
\in\mathrm{GL}(2,{\mathbb{Z}})$.
It follows that $\pi_1(\tn)$ is isomorphic to a subgroup of infinite index in $\Gamma_2$, and so $\wti{\tn} \subset {\mathbb{H}}_2$ has infinitely many smooth connected components.
Note furthermore that the index
of $\mathrm{SL}(2,{\mathbb{Z}}_{2n})\times\mathrm{SL}(2,{\mathbb{Z}}_{2n})$
inside $\op{Sp}(4,{\mathbb{Z}}_{2n})$ increases with $n$.
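For instance, for a prime $p$ a standard computation with the orders of these groups gives
\[
\frac{|\op{Sp}(4,{\mathbb{Z}}_{p})|}{|\mathrm{SL}(2,{\mathbb{Z}}_{p})\times\mathrm{SL}(2,{\mathbb{Z}}_{p})|}
=\frac{p^{4}(p^{2}-1)(p^{4}-1)}{p^{2}(p^{2}-1)^{2}}=p^{2}(p^{2}+1),
\]
which is unbounded as $p$ grows.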
We condense the above considerations into the following.
\begin{prop}[Theta-nulls are not absolutely prime for $g=2$]\label{prop:th2}
The divisor $\tn\chars\varepsilon\delta$ in ${\mathcal A}_2(2)$ is
not absolutely irreducible, and $\wti{\tn}\chars\varepsilon\delta$ has infinitely many smooth connected components.
\end{prop}
\begin{proof}
In view of Lemma \ref{lm:finite-lift}, the number of connected components
of the preimage of $\tn\chars\varepsilon\delta$ inside ${\mathcal A}_2(2n)$
increases with $n$, and so
$\tc\varepsilon\delta$ is not absolutely prime.
\end{proof}
In fact, for $g=2$
a factorization
of $\tc\varepsilon\delta$ into absolutely prime forms does not exist (note
that Lemma \ref{lm:factorization} does not apply).
\subsubsection{Intermediate jacobians of cubic threefolds}
We recall that for an odd characteristic
$\chars\varepsilon\delta$ we can define $\gn\chars\varepsilon\delta\subset{\mathcal A}_g(2)$ as the locus of all ppav's at which the gradient $d_z\tc\varepsilon\delta(\tau,0)$ vanishes.
Such a locus has expected codimension $g$ in ${\mathcal A}_g(2)$.
For $g\geq 5$, it is known that each $\gn \chars\varepsilon\delta$ has an irreducible component, which we denote by ${\mathcal C}\chars\varepsilon\delta$, that contains some ${\mathcal H}_g(2)_{\mathrm{irr}}$; however, such $\gn \chars\varepsilon\delta$
has other irreducible components besides ${\mathcal C}\chars\varepsilon\delta$
(for example, inside the decomposable locus ${\mathcal A}_1(2)\times{\mathcal A}_{g-1}(2)$
we will find some components of type ${\mathcal A}_1(2)\times\tn\chars{\varepsilon'}{\delta'}$).
In genus $g=5$ we have that, at level $1$, this component is the closure of the moduli space
$\mathcal C$ of the intermediate Jacobians of cubic threefolds. In \cite{ACT}, Sect.~4, it is proved that ${\mathcal C}$ is smooth (in an orbifold sense) along ${\mathcal H}_5$ and that the period map
$$ \pi: {\mathcal C} \longrightarrow {\mathcal A}_5$$
extends to a regular map at the generic point of ${\mathcal H}_5$ in ${\mathcal C}$. Hence we have
\begin{cor}[Irreducibility of $\wti{{\mathcal C}}\chars\varepsilon\delta$]\label{cor:grad}
For $g=5$ the locus $\wti{{\mathcal C}}\chars\varepsilon\delta$ is irreducible in ${\mathbb{H}}_5$.
For $g\geq 6$
the locus ${\mathcal C}\chars\varepsilon\delta$ lifts to the union $\wti{{\mathcal C}}\chars\varepsilon\delta$ of finitely many irreducible subvarieties in ${\mathbb{H}}_g$.
\end{cor}
\begin{proof}
The proof is an immediate consequence of the above discussion.
Moreover, Julia Bernatska kindly communicated to us that
${\mathcal H}_5(2)_{\mathrm{irr}}$ is included in the smooth locus of ${\mathcal C}\chars\varepsilon\delta$ as a consequence of \cite[Remark 9]{ber}.
\end{proof}
\subsubsection{The Schottky form}
We are going to exhibit an absolutely prime form of weight greater than $1/2$.
This will be the so-called Schottky form.
Let $S$ be an integral, positive-definite quadratic form on ${\mathbb{Z}}^m$,
and consider the theta series
$$\theta_S (\tau)=\sum_{G\in \mathrm{Mat}_{m,g}({\mathbb{Z}})}\mathrm{exp}(\pi i\cdot\mathrm{tr}( G^{t}SG\tau )). $$
Such $\theta_S$ is a modular form of weight $m/2$ with respect to a subgroup of finite index $\Gamma \subset \Gamma_g$.
If $S$ is even and unimodular, then $\Gamma= \Gamma_g$. If the rank is $m=16$, then there are two equivalence classes of such even unimodular quadratic forms: those associated to the lattices $E_8\oplus E_8$ and $D_{16}^+$. For any $g$ we will consider the form
$$f_g=\theta_{E_8\oplus E_8}- \theta_{D_{16}^+}.$$
It is a well-known fact that $f_g$ vanishes identically for $g=1,2,3$.
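For instance, in genus $1$ the vanishing follows from a classical dimension count: $\theta_{E_8\oplus E_8}$ and $\theta_{D_{16}^+}$ are both modular forms of weight $8$ for $\Gamma_1=\mathrm{SL}(2,{\mathbb{Z}})$ with constant term $1$, and the space of such forms is one-dimensional, spanned by $\theta_{E_8}^{2}$; hence
\[
\theta_{E_8\oplus E_8}=\theta_{D_{16}^+}=\theta_{E_8}^{2},
\qquad\text{and so}\quad f_1=0.
\]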
It is
Schottky's form when $g=4$, cf.~\cite{sch} or \cite{csb}.
For $g\geq 4$ we set $F_g$ for the divisor defined by $f_g$. We know that $F_4$ in ${\mathcal A}_4$ is irreducible, since
it is the closure of the jacobian locus ${\mathcal J}_4$. Since the preimage of ${\mathcal J}_4$ inside
${\mathbb{H}}_4$ is irreducible, we have that $f_4$ is a totally prime form.
\smallskip
We want to study the divisor $F_g$ for $g\geq 5$.
We know that in these cases $F_g$ does not contain the jacobian locus, cf. \cite{sch} or \cite{csb}. However it contains the hyperelliptic locus ${\mathcal H}_g$, cf. \cite{poor1}. Thus the preimage $\wti{F}_g \subset {\mathbb{H}}_g$ of $F_g$ is connected.
In fact, we have
\begin{prop}[The Schottky form is totally prime]\label{prop:sch}
The divisor $\wti{F}_g \subset {\mathbb{H}}_g$ is irreducible
for all $g\geq 4$.
\end{prop}
\begin{proof}
The statement is trivial if $g=4$, since $f_4$ is the form defining (the closure of) ${\mathcal J}_4$.
We proceed by induction on $g\geq 5$ and we show that $f_g$ is an absolutely prime form.
In order to do so, we recall that the Siegel operator
$$\Phi(f)(\tau') := \lim_{t\to +\infty} f \left(\begin{matrix} \tau' & 0 \\ 0 & it\end{matrix}\right) $$
sends a modular form $f$ of genus $g$ at level $n$ to a form $\Phi(f)$ of genus $g-1$ at level $n$,
in such a way that $\Phi(f_a f_b)=\Phi(f_a)\Phi(f_b)$.
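We also record, for completeness, why theta series are stable under $\Phi$: writing $G=(G'\,|\,v)\in\mathrm{Mat}_{m,g}({\mathbb{Z}})$ with $v\in{\mathbb{Z}}^m$ the last column, one has
$$\mathrm{tr}\left( G^{t}SG\left(\begin{matrix}\tau' & 0\\ 0 & it\end{matrix}\right)\right)
=\mathrm{tr}\left( (G')^{t}SG'\,\tau'\right)+it\cdot v^{t}Sv,$$
so the term of $\theta_S$ indexed by $G$ carries the factor $\exp(-\pi t\, v^{t}Sv)$, which tends to $0$ as $t\to+\infty$ unless $v=0$, since $S$ is positive-definite; hence $\Phi(\theta_S)$ is the genus-$(g-1)$ theta series attached to the same $S$.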
By contradiction, suppose that, at level $n$, the form $f_g$ breaks as
$f_g=f_{g, a}f_{g, b}$.
Since $f_g$ is a linear combination of theta series we have
$$ \Phi(f_g)= f_{g-1}.$$
This would imply that $f_{g-1}$ is not irreducible at level $n$, contradicting the inductive hypothesis. This proves that
$f_g$ is an absolutely prime form.
As a consequence, $f_g$ is also totally prime by Theorem \ref{thm:ag}, namely
$\wti{F}_g$ is irreducible.
\end{proof}
Note that, even without knowing that ${\mathcal H}_g\subset F_g$, the fact that $F_g$ is absolutely
irreducible implies that it is totally irreducible by Theorem \ref{thm:ag}.
\section{Introduction}
\subsection{Motivation and main results}\label{sec:main}
Several years ago E. Freitag and the second-named author had long discussions about singular modular forms and theta series. In particular, the discussion focused on a problem involving singular theta series of weight $1/2$ and $g\geq 3$. A basic example is given by the theta constants.
\begin{qu}[Irreducible factorization of theta constants]
Fix $\varepsilon,\delta\in{\mathbb{Z}}_2^g$. Determine the irreducible factorization
of the function
\[
\theta\chars\varepsilon\delta(\tau)=\sum_{n\in{\mathbb{Z}}^g}
\exp\,\pi i\left[ \left(n+\varepsilon/2\right)^t \tau\left(n+\varepsilon/2\right)+\left(n+\varepsilon/2\right)^t\delta\right]
\]
defined on the Siegel upper half-space \[
{\mathbb{H}}_g:=\left\{
\tau\in\mathrm{Mat}_{g\times g}({\mathbb{C}})
\ \text{symmetric, with $\Im(\tau)>0$}\right\}
\]
into irreducible analytic functions.
\end{qu}
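For comparison, in genus $1$ the question is degenerate: by the classical Jacobi triple product,
\[
\theta\chars{0}{0}(\tau)=\sum_{n\in{\mathbb{Z}}}q^{n^{2}}
=\prod_{m\geq 1}(1-q^{2m})(1+q^{2m-1})^{2},
\qquad q=e^{\pi i\tau},
\]
and each factor is nonvanishing for $|q|<1$; in fact, all even theta constants are nowhere vanishing on ${\mathbb{H}}_1$, so the factorization question becomes interesting only in higher genus.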
In \cite{FR}, page 88, Freitag proved that these forms are {\it{absolutely prime forms}}, i.e.~their divisors are irreducible in ${\mathbb{H}}_g/\Gamma$ with respect to arbitrary small congruence subgroups $\Gamma\subset \Gamma_g=\op{Sp}(2g,{\mathbb{Z}})$.
Hence the problem of considering the preimage of these divisors and other absolutely prime forms on the universal covering ${\mathbb{H}}_g$
is rather natural. This will have the advantage of working, at least analytically, on an
open subset ${\mathbb{H}}_g$ of ${\mathbb{C}}^{g(g+1)/2}$ that is biholomorphic to a bounded homogeneous domain.
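Explicitly, the generalized Cayley transform
\[
\tau\longmapsto (\tau-iI_g)(\tau+iI_g)^{-1}
\]
maps ${\mathbb{H}}_g$ biholomorphically onto the bounded domain of symmetric matrices $Z\in\mathrm{Mat}_{g\times g}({\mathbb{C}})$ with $I_g-\bar{Z}Z>0$.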
Another explicit example of a modular form of higher weight is the so-called Schottky form $f_g$ with $g\geq 4$.
\begin{qu}[Irreducible factorization of Schottky forms]
Determine the factorization of
\[
f_g(\tau):=\theta_{E_8\oplus E_8}(\tau)-\theta_{D_{16}^+}(\tau)
\]
into irreducible analytic functions, where
\[
\theta_S (\tau):=\sum_{G\in \mathrm{Mat}_{m,g}({\mathbb{Z}})}\mathrm{exp}(\pi i\cdot\mathrm{tr}( G^{t}SG\tau ))
\]
for an integral, positive-definite quadratic form $S$ on ${\mathbb{Z}}^m$.
\end{qu}
If the zero locus of a modular form in ${\mathbb{H}}_g$ is irreducible, we say that the form is {\it{totally prime}}.
The divisor associated to an absolutely prime (resp. totally prime)
form will be called {\it{absolutely irreducible}} (resp. {\it{totally irreducible}}).
Our first main result is the following.
\begin{mthm*}
Let $g\geq 3$ and let $D\subset{\mathbb{H}}_g/\Gamma$ be a hypersurface.
Then the preimage $\wti{D}$ in ${\mathbb{H}}_g$ of $D$ is connected.
If furthermore
\begin{itemize}
\item[(i)]
$D$ is locally irreducible (for example, normal), or
\item[(ii)]
$D$ is absolutely irreducible,
\end{itemize}
then $D$ is totally irreducible.
\end{mthm*}
The connectedness claim and case (i) are proven in
Theorem \ref{thm:connectedness} and case (ii)
is proven in Theorem \ref{thm:ag}.
We mention that, for $g=2$, connectedness of $\wti{D}$ holds for
divisors $D$ whose closures intersect the boundary of the Satake compactification of ${\mathbb{H}}_2/\Gamma$
in a finite set of points, i.e. divisors defined by Eisenstein series.
\smallskip
The connectedness of $\wti{D}$ relies on
a generalization (Proposition \ref{prop:divisor}) of the homotopical Lefschetz hyperplane section theorem for the fundamental group of smooth quasi-projective varieties that have a projective model with small boundary. The proof of case (i) is a consequence of the connectedness
of $\wti{D}$.
\smallskip
The proof of case (ii)
mainly relies on certain properties
of $\Gamma$ (see Theorem \ref{thm:prime}): in particular,
such group is residually finite and
its normal subgroups are either finite or of finite index.
\smallskip
As noted by Freitag, for $g\geq 3$
all modular forms can be factorized
into absolutely prime ones (Lemma \ref{lm:factorization}).
Such a result relies heavily on the facts that
$\mathrm{Pic}({\mathbb{H}}_g/\Gamma) \otimes {\mathbb{Q}} = {\mathbb{Q}}\cdot \lambda$
and that the divisibility of the integral class $\lambda$
is uniformly bounded from above for every finite-index
subgroup $\Gamma$ of $\Gamma_g$.\\
\noindent
Our second main result is a criterion for connectedness and irreducibility
of the preimage $\wti{Z}\subset{\mathbb{H}}_g$ of a subvariety $Z\subset{\mathbb{H}}_g/\Gamma$. We stress that this method
applies to subvarieties that are not necessarily divisors.
\begin{mprop*}
Let $Z\subset{\mathbb{H}}_g/\Gamma$ be an irreducible subvariety
and let $\wti{Z}$ be its preimage in ${\mathbb{H}}_g$.
\begin{itemize}
\item[(i)]
If a Zariski-open subset of the jacobian locus
(respectively, of a component of the hyperelliptic locus with $\Gamma\subset\Gamma(2)$)
is contained in $Z$, then $\wti{Z}$ is connected, with finitely many irreducible components.
\item[(ii)]
If a Zariski-open subset of the jacobian locus
(respectively, of a component of the hyperelliptic locus with $\Gamma\subset\Gamma(2)$)
is contained in $Z$, but it does not entirely sit
inside the singular locus of $Z$, then $\wti{Z}$ is irreducible.
\end{itemize}
\end{mprop*}
The above result (Proposition \ref{prop:irr-Mg} in the body of the article)
essentially relies on the following facts:
\begin{itemize}
\item
$\wti{Z}$ is connected if and only if $\pi_1(Z)\rightarrow\Gamma$ is surjective
(follows from Lemma \ref{lm:lift})
\item
$\wti{Z}$ is irreducible if and only if $\pi_1(Z_{\mathrm{sm}})\rightarrow\Gamma$ is surjective (Corollary \ref{cor:criterion})
\item
the fundamental group of the jacobian locus surjects onto $\Gamma$
(see \cite{farb-margalit}, Chapter 6)
\item
the fundamental group of a hyperelliptic component surjects
onto $\Gamma$, if $\Gamma\subset\Gamma(2)$
(see \cite{mutheta2} or \cite{ac}).
\end{itemize}
As applications of the above technique,
we show that even theta-nulls are totally prime for $g\geq 3$
(Corollary \ref{cor:tn}),
that each component of the moduli space of intermediate jacobians of cubic threefolds in ${\mathcal A}_5 (2)$ has irreducible preimage in ${\mathbb{H}}_5$
(Corollary \ref{cor:grad}),
and that the Schottky form is totally prime for $g\geq 4$
(Proposition \ref{prop:sch}).
Finally, we also describe how different
the situation with even theta-nulls is in genus two (Proposition \ref{prop:th2}).
\subsection{Structure of the paper}
Besides the present introduction, the paper has two sections.
In Section \ref{sec:topological} we collect some standard
facts about the topology of complex analytic spaces
(Sections \ref{sec:stratification}-\ref{sec:cpx-irred})
and we prove some criteria for connectedness
and irreducibility of liftings (Sections \ref{sec:connected}-\ref{sec:lift}).
The above mentioned version of the Lefschetz hyperplane theorem
is proven in Section \ref{sec:LHT}.
In Section \ref{sec:absol} we distil some topological properties
needed, in a more general setting, to obtain
the existence of a factorization into absolutely irreducible divisors (Lemma \ref{lemma:prime}) and the fact that absolutely irreducible subvarieties
are totally irreducible (Theorem \ref{thm:prime}).
Finally, in Section \ref{sec:orbispaces}
we describe the modifications needed to make the above
results work for orbispaces that are global quotients.
In Section \ref{sec:Ag} we prove the results
about subvarieties of ${\mathcal A}_g$ or of its finite \'etale covers ${\mathbb{H}}_g/\Gamma$ mentioned in the above Section \ref{sec:main},
using the tools developed in Section \ref{sec:topological}.
\subsection{Acknowledgements}
We are grateful to Eberhard Freitag for stimulating discussions, suggestions and many useful comments on an earlier version of the manuscript. We thank Mark Goresky for stimulating discussions on the Lefschetz hyperplane section theorem and Julia Bernatska for illustrating a result on gradients of theta functions.\\
The first-named author was partially supported by the GNSAGA research group. Both authors were partially supported by the PRIN 2017 grant ``Moduli and Lie theory''.
\section{Covering spaces, connectedness and irreducibility}\label{sec:topological}
In this section we collect some
sufficient conditions that ensure that, via a covering map of a smooth variety $X$,
the preimage of an irreducible subvariety $Z\subset X$ is connected or irreducible.
In the former case we have to estimate the fundamental group
of the subvariety $Z$ and, in particular, the image of $\pi_1(Z)\rightarrow\pi_1(X)$;
in the latter case we have to deal with the fundamental group of
the smooth locus of $Z$, which can be subtler if $Z$ is not normal
and in particular is not locally irreducible.
In order to gain some control on such fundamental groups, we introduce two techniques.
The first technique consists in finding an irreducible, locally irreducible
subvariety $Y\subset Z$ for which we have better control of the image of $\pi_1(Y)\rightarrow\pi_1(X)$.
The second technique can be employed when $Z$ is a hyperplane section of $X$
(or a complete intersection of such) and $X$ satisfies suitable properties.
We also show that, under certain assumptions on $\pi_1(X)$
(which include all finite-index subgroups of $\op{Sp}(2g,{\mathbb{Z}})$),
absolutely irreducible hypersurfaces are totally irreducible.
We state such results for complex-analytic spaces; see also Section \ref{sec:orbispaces}.
\subsection{Complex varieties and links}\label{sec:stratification}
Here we collect some classical and basic facts about the topology
of complex analytic spaces.
Unless differently specified, we work with the classical topology.
Let $X$ be a reduced analytic space, which we assume to be connected, and
let $\{X_i\}_{i\in I}$ be a stratification of $X$,
namely $I$ is a partially ordered set,
$\bigsqcup_i X_i=X$ and $\ol{X}_i=\bigcup_{j\leq i}X_j$, where each $\ol{X}_i$ is an analytic subspace of $X$
and $X_i$ is Zariski open inside $\ol{X}_i$.
Such stratification can be refined so that
\begin{itemize}
\item
each $X_i$ is connected and non-singular
\end{itemize}
and we will always assume that this is the case.
A stratification $\{X_\bullet\}$ with the above properties will be called a {\it{good stratification}}.
Given a collection $\{Y_\alpha\}$ of reduced analytic subspaces, we will say that the above
stratification $\{X_\bullet\}$ is {\it{compatible with $\{Y_\alpha\}$}} if
every finite intersection $Y_{\alpha_1}\cap\dots\cap Y_{\alpha_k}$ is a union of strata.
We incidentally remark that the above $X$ admits a locally finite
triangulation such that every open simplex is contained
in a unique $X_i$: in particular, $X$ is locally contractible. \\
Endow $X$ with a complete metric and denote by $B(x,r)$ the ball
of radius $r$ centered at $x$.
Given a stratum $X_i$ and a smooth function $r_i:X_i\rightarrow{\mathbb{R}}_+$,
define $U_i:=\bigcup_{x\in X_i} \ol{B}(x,r_i(x))$ and
$L_i:=(\partial U_i)\setminus (\partial X_i)\subset \bigcup_{j>i}X_j$.
For every $i\in I$ there exists a smooth function $r_i: X_i\rightarrow{\mathbb{R}}_+$ such that
\begin{itemize}
\item
for every diverging sequence $(x_n)\subset X_i$, we have
$r_i(x_n)\rightarrow 0$
\item
the closest-point projection determines maps
$U_i\rightarrow X_i$ and $L_i\rightarrow X_i$, which are in fact fiber bundles,
and the fiber $(U_i)_x$ over $x$ is the cone over the fiber $(L_i)_x$
for every $x\in X_i$
\item
$U_j\cap U_i\neq\emptyset$ if and only if $i\leq j$ or $j\leq i$.
\end{itemize}
Let now $Y$ be a reduced, irreducible subvariety of $X$.
There exists a stratification of $X$ such that a stratum $X_j$ consists of
a Zariski-open (dense) subset $\mathring{Y}$ of $Y$.
It follows that the neighbourhood $U_{\mathring{Y}/X}:=U_j$ and the link $L_{\mathring{Y}/X}:=L_j$
are fiber bundles over $\mathring{Y}$.
\subsection{Irreducibility and fundamental group}\label{sec:cpx-irred}
Here we collect some remarks about fundamental groups
and local irreducibility of analytic varieties.
We underline that ``locally irreducible'' is to be understood with respect
to the classical topology, and that such a condition is equivalent to being ``unibranch''
(i.e.~locally irreducible with respect to the \'etale topology).
The results collected in the following lemma are classical.
\begin{lm}[Properties equivalent to irreducibility]\label{lm:irred}
Let $X$ be an analytic space.
\begin{itemize}
\item[(i)]
$X$ is irreducible if and only if its smooth locus $X_{\mathrm{sm}}$ is connected.
\item[(ii)]
If $X$ is locally irreducible and connected, then it is irreducible.
\item[(iii)]
If $X$ is normal, then it is locally irreducible.
\item[(iv)]
If $X$ is locally irreducible along a connected smooth analytic subspace $W\subset X$
and $L_{W/X}$ is the link of $W$ inside $X$,
then the bundle $L_{W/X}\rightarrow W$ has connected fibers.
\end{itemize}
\end{lm}
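To fix ideas, consider the classical example of the nodal curve $X=\{xy=0\}\subset{\mathbb{C}}^2$: it is connected and reducible, and it has two branches at the origin, so it is not locally irreducible there, consistently with (ii). Correspondingly, taking $W=\{0\}$ and the sphere $S^3_\epsilon=\{|x|^2+|y|^2=\epsilon^2\}$, the link of $W$ inside $X$ is
\[
L_{W/X}=X\cap S^3_\epsilon=\{\,|x|=\epsilon,\ y=0\,\}\sqcup\{\,x=0,\ |y|=\epsilon\,\},
\]
a disjoint union of two circles: the fibers of $L_{W/X}\rightarrow W$ are disconnected, illustrating the contrapositive of (iv).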
The following is then an immediate consequence.
\begin{cor}[Irreducibility of universal cover]
If an analytic space $X$ is normal and irreducible, then its universal cover $\wti{X}$ is normal and irreducible.
\end{cor}
\begin{proof}
Normality is a local property, so it is inherited by $\wti{X}$, which is thus locally irreducible.
Since $\wti{X}$ is connected, it is irreducible.
\end{proof}
The following is also classical, but we include a proof for completeness.
\begin{lm}[Fundamental groups of locally irreducible varieties]\label{lm:normal}
Let $X$ be an irreducible, locally irreducible analytic variety of positive dimension.
\begin{itemize}
\item[(i)]
If $W\subsetneq X$ is a closed analytic subspace, then
$\mathring{X}=X\setminus W$ is connected and
the inclusion $\mathring{X} \hookrightarrow X$ induces a surjection
$\pi_1(\mathring{X})\twoheadrightarrow\pi_1(X)$.
\item[(ii)]
$X_{\mathrm{sm}}$ is connected and
$\pi_1(X_{\mathrm{sm}})\twoheadrightarrow\pi_1(X)$ is surjective.
\end{itemize}
\end{lm}
\begin{proof}
(i) Since $X$ is irreducible, $\mathring{X}$ is irreducible too and so it is connected.
Thus, it is enough to show that, given $x'\in \mathring{X}$, every loop $\gamma\subset X$ based at $x'$
can be deformed so to avoid $W$.
Given such $\gamma$, we can restrict $X$ to a relatively compact open neighbourhood of $\gamma$.
Thus, $W$ admits a stratification as in Section \ref{sec:stratification} with finitely many (connected) strata.
We can thus proceed by induction, removing from $X$ one stratum of $W$ at a time,
starting from the deepest strata of $W$.
Hence, it is enough to treat the case of $W$ consisting of one closed (connected) stratum
and, in particular, it is enough to show that an arc $\alpha:[0,1]\rightarrow U_{W/X}$
with $\alpha(0),\alpha(1)\in L_{W/X}$ can be deformed to an arc entirely contained in $L_{W/X}$.
Since the fibers of $U_{W/X}\rightarrow W$ are contractible, it is enough to show
that the composition
$\ol{\alpha}:[0,1]\arr{\alpha} U_{W/X}\rightarrow W$ can be lifted to some $\alpha':[0,1]\rightarrow L_{W/X}$
in such a way that $\alpha'(0)=\alpha(0)$ and $\alpha'(1)=\alpha(1)$.
This is indeed the case, since the local irreducibility of $X$
implies that $L_{W/X}\rightarrow W$ has connected fibers
by Lemma \ref{lm:irred}(iv).
(ii) By Lemma \ref{lm:irred}(i) the locus $X_{\mathrm{sm}}$ is connected.
The second claim follows from (i) by taking $W=X_{\mathrm{sing}}$.
\end{proof}
In the following lemma we estimate the fundamental group of the smooth locus
of a variety $Z$ in terms of the fundamental group of a subvariety $Y\subset Z$,
thus employing the first technique mentioned at the beginning of the section.
\begin{lm}[Estimating the fundamental group of $Z_{\mathrm{sm}}$]\label{surjects}
Let $Z$ be an irreducible analytic variety and $Y\subset Z$ an irreducible, locally irreducible subvariety with $\mathrm{dim}(Y)<\mathrm{dim}(Z)$,
and let $G$ be the image of $\pi_1(Y)\rightarrow\pi_1(Z)$.
\begin{itemize}
\item[(i)]
Suppose that $Z$ is locally irreducible at the general point of $Y$.
Then
the image of $\pi_1(Z_{\mathrm{sm}})\rightarrow \pi_1(Z)$ contains $G$.
\item[(ii)]
Suppose that $Z$ has $k$ branches at the general point of $Y$.
Then the image of $\pi_1(Z_{\mathrm{sm}})\rightarrow \pi_1(Z)$
contains a subgroup of $G$ of index at most $k$.
\end{itemize}
\end{lm}
\begin{proof}
Note preliminarily that $Z_{\mathrm{sm}}$ is smooth and connected
by Lemma \ref{lm:irred}(i), and so $Z_{\mathrm{sm}}\setminus \ol{Y}$ is connected
by Lemma \ref{lm:normal}(i).
According to Section \ref{sec:stratification},
there exists a smooth, Zariski open subset $\mathring{Y}$ of $Y$ whose link inside $Z$ is a fibre bundle $L_{\mathring{Y}/Z}$ with fiber $F$.
Observe that $\pi_1(\mathring{Y})\twoheadrightarrow\pi_1(Y)$ by Lemma \ref{lm:normal}(i).
(i) By hypothesis, up to restricting $\mathring{Y}$ to a smaller Zariski-open subset, we can assume that $Z$ is locally irreducible at all points of $\mathring{Y}$.
Since $\mathring{Y}$ is locally irreducible
and $Z$ is locally irreducible at points of $\mathring{Y}$, the fiber $F$ is connected.
Now, $L_{\mathring{Y}/Z}$ can be embedded in $V\setminus Y$,
where $V$ is a small classical neighbourhood of $Y$ in $Z$.
We denote by $L_{\mathring{Y}/Z_{\mathrm{sm}}}:=L_{\mathring{Y}/Z}\setminus Z_{\mathrm{sing}}$ the locus of $L_{\mathring{Y}/Z}$ that corresponds to
smooth points in $Z$. Since $Z$ is locally irreducible at points of $\mathring{Y}$,
the map $L_{\mathring{Y}/Z_{\mathrm{sm}}}\rightarrow\mathring{Y}$ is a submersion of manifolds of finite type
with connected fibers and so the composition $\pi_1(L_{\mathring{Y}/Z_{\mathrm{sm}}})\rightarrow\pi_1(L_{\mathring{Y}/Z})\rightarrow \pi_1(\mathring{Y})$ is surjective.
The conclusion follows noting that $\pi_1(L_{\mathring{Y}/Z_{\mathrm{sm}}})\rightarrow \pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(Z)$
factors through $\pi_1(L_{\mathring{Y}/Z_{\mathrm{sm}}})\rightarrow \pi_1(\mathring{Y})\rightarrow\pi_1(Z)$.
(ii) The argument is similar to case (i). By hypothesis, up to restricting $\mathring{Y}$ to a smaller Zariski-open subset, we can assume that $Z$ has exactly $k$ branches at each point of $\mathring{Y}$.
Pick a connected component $L'_{\mathring{Y}/Z}$ of $L_{\mathring{Y}/Z}$.
The fiber $F'$ of $L'_{\mathring{Y}/Z}\rightarrow \mathring{Y}$ has $c\leq k$ connected components and so
the image of $\pi_1(L'_{\mathring{Y}/Z})\rightarrow \pi_1(\mathring{Y})$ has index $c$. Moreover $L'_{\mathring{Y}/Z}$
corresponds to the choice of $c$ branches $Z_1,\dots,Z_c$ of $Z$ at $\mathring{Y}$.
Let $V_i$ be a small classical neighbourhood of $\mathring{Y}$ inside $Z_i$ and let $V'=\bigcup_i V_i$.
Then $L'_{\mathring{Y}/Z}$ can be embedded inside $V'\setminus Y$.
Similarly to (i), we call $L'_{\mathring{Y}/Z_{\mathrm{sm}}}$ the locus of $L'_{\mathring{Y}/Z}$ that corresponds to smooth points of $V'$. Since $Z_i$ is unibranched at $\mathring{Y}$, the fibers of
the submersion $L'_{\mathring{Y}/Z_{\mathrm{sm}}}\rightarrow\mathring{Y}$ have $c$ connected components
(one for each connected component of the fibers of $L'_{\mathring{Y}/Z}\rightarrow\mathring{Y}$)
and so $\pi_1(L'_{\mathring{Y}/Z_{\mathrm{sm}}})\twoheadrightarrow \pi_1(\mathring{Y})$ has the same image as
$\pi_1(L'_{\mathring{Y}/Z})\twoheadrightarrow \pi_1(\mathring{Y})$.
Again, the factorization of $\pi_1(L'_{\mathring{Y}/Z_{\mathrm{sm}}})\rightarrow \pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(Z)$
as $\pi_1(L'_{\mathring{Y}/Z_{\mathrm{sm}}})\rightarrow \pi_1(\mathring{Y})\rightarrow\pi_1(Z)$ gives the wished conclusion.
\end{proof}
\subsection{A hyperplane section theorem}\label{sec:LHT}
In this section we present a generalization of the
homotopical Lefschetz hyperplane section theorem (LHT) for $\pi_0$ and $\pi_1$
to smooth quasi-projective varieties that admit a projective model with small boundary.
\begin{terminology}
Let $\ol{X}$ be a projective variety, endowed with a good stratification.
We say that an algebraic subset $\ol{Z}\subset\ol{X}$ is a complete intersection of ample divisors transverse to certain
strata of $\ol{X}$ if there exist an embedding $\ol{X}\hookrightarrow{\mathbb{P}}$ inside a projective space
and a linear subspace $H\subset{\mathbb{P}}$
transverse to the strata of $\ol{X}$ (as smooth submanifolds of ${\mathbb{P}}$)
such that $\ol{Z}=\ol{X}\cap H$.
\end{terminology}
We begin by recalling the following version of LHT for smooth quasi-projective varieties (Theorem \ref{thm:hyperplane}).
The proof of this result can be found in \cite{GM}, Part II, Section 5.1 (note, in particular, the ``proof of the furthermore'' at the end of that section).
\begin{thm}[LHT for smooth quasi-projective varieties]\label{thm:hyperplane}
Let $\ol{X}$ be a connected,
projective variety of dimension at least $2$
endowed with a good stratification of $\ol{X}$,
and let $\partial X$ be a closed union of non-open strata of $\ol{X}$ such that
$X:=\ol{X}\setminus\partial X$ is smooth.
Consider a complete intersection $\ol{Z}\subset\ol{X}$ of ample divisors
in the same linear system such that either of the following holds
\begin{itemize}
\item[(a)]
$Z=\ol{Z}\cap X$ is compact, or
\item[(b)]
$\ol{Z}$ is transverse to all strata of $\ol{X}$.
\end{itemize}
If $\mathrm{dim}(Z)\geq 1$, then $Z$ is connected and $\pi_1(Z)\rightarrow\pi_1(X)$ is surjective.
Moreover, if $\mathrm{dim}(Z)\geq 2$, then $\pi_1(Z)\rightarrow\pi_1(X)$ is an isomorphism.
\end{thm}
The goal now is to weaken the transversality hypothesis
in Theorem \ref{thm:hyperplane}, which can be done if $\partial X$ is small.
In the typical application that we have in mind
we pick as $X$ the moduli space ${\mathcal A}_g(n)$ for some level $n$, as $\ol{X}$ its Satake compactification and as
$H\cap\ol{X}$ the zero locus of a modular form.
More precisely, we consider a variety $\ol{X}$ with algebraic loci $\partial X$ and
$\ol{D}$ that satisfy the following properties for $h=2$ or $h=3$.
\begin{itemize}
\item[(I)]
$\ol{X}$ is a connected projective variety of dimension $N$
and $\partial X$ is a closed subscheme of dimension at most $N-1$
such that $X=\ol{X}\setminus\partial X$ is smooth and connected
\item[(II)]
$\ol{D}\subset \ol{X}$ is the support of an ample Cartier divisor
\item[(III$_h$)]
$\mathrm{codim}(\partial D/\ol{D})\geq h$, where $\partial D=\ol{D}\cap\partial X$.
\end{itemize}
We remark that (III$_h$) is certainly implied by
\begin{itemize}
\item[(III$'_h$)]
$\mathrm{codim}(\partial X/\ol{X})\geq h+1$.
\end{itemize}
The version of LHT we wish to prove is the following.
\begin{prop}[Improved LHT for smooth quasi-projective varieties]\label{prop:divisor}
Let $\ol{X}$ be a variety and $\partial X$, $\ol{D}$ be subschemes of $\ol{X}$ such that
properties (I)-(II)-(III$_h$) above hold with $h=2$ or $h=3$.
Then $D=\ol{D}\setminus\partial D$ is connected
and the natural map $\pi_1(D)\rightarrow\pi_1(X)$ is an isomorphism if $h=3$ (resp. is surjective, if $h=2$).
\end{prop}
\begin{proof}
Endow $\ol{X}$ with a good stratification that is compatible with $\partial X$ and $\ol{D}$.
Consider then general very ample divisors $\ol{D}_1,\dots,\ol{D}_{N-h}$ in $\ol{X}$
such that
\begin{itemize}
\item[(a)]
$D_i:=\ol{D}_i\setminus \partial D_i$ is smooth, where $\partial D_i=\ol{D}_i\cap \partial X$
\item[(b)]
$\ol{D}_1$ intersects all the strata of $\ol{D}$ and all strata of $\ol{X}$ transversally
and $\ol{D}_k$ transversally
intersects all strata of $\ol{D}\cap \ol{D}_1\cap\dots\cap\ol{D}_{k-1}$
and all strata of $\ol{D}_1\cap\dots\cap\ol{D}_{k-1}$
for all $k=2,\dots,N-h$
\item[(c)]
the intersection $E=D_1\cap\dots\cap D_{N-h}$ is transverse, and so $E$
is a smooth variety of dimension $h$
\item[(d)]
$S=E\cap D=E\cap\ol{D}$ is compact, of dimension $h-1$
\item[(e)]
the singular locus of $S$ is contained in the singular locus of $D$.
\end{itemize}
Note that we use property (I) in (a)-(b)-(c)-(e), property (III$_h$) in (d).
We are going to apply Theorem \ref{thm:hyperplane} a few times
in the case of $\ol{Z}$ an ample hypersurface.
Suppose first that $h=3$.
Properties (II) and (c) together with
Theorem \ref{thm:hyperplane}(a) imply that $S$ is a compact connected surface
and $\pi_1(S)\rightarrow\pi_1(E)$ is an isomorphism.
On the other hand, (b) and Theorem \ref{thm:hyperplane}(b)
imply that $E$ is connected and
\[
\pi_1(E)\rightarrow \pi_1(D_1\cap\dots\cap D_{N-h-1})
\rightarrow\dots\rightarrow\pi_1(D_1\cap D_2)\rightarrow\pi_1(D_1)\rightarrow\pi_1(X)
\]
are isomorphisms.
Similarly, $S$ intersects every connected component of $D$ and
\[
\pi_1(S)\rightarrow \pi_1(D\cap D_1\cap\dots\cap D_{N-h-1})
\rightarrow\dots\rightarrow\pi_1(D\cap D_1)\rightarrow\pi_1(D)
\]
are isomorphisms.
It follows that $D$ is connected
and $\pi_1(S)\rightarrow\pi_1(D)\rightarrow\pi_1(X)$ are isomorphisms.\\
If $h=2$,
then we can argue analogously as above and consider
general very ample divisors $\ol{D}_1,\dots,\ol{D}_{N-2}$
such that $E=D_1\cap\dots\cap D_{N-2}$ is a smooth surface and
$C=D\cap E$ is a compact curve.
A similar repeated application of Theorem \ref{thm:hyperplane}
gives $\pi_1(C)\twoheadrightarrow\pi_1(D)\rightarrow \pi_1(X)$ and that $\pi_1(C)\twoheadrightarrow\pi_1(E)\cong\pi_1(X)$,
from which we conclude that $\pi_1(D)\twoheadrightarrow\pi_1(X)$.
\end{proof}
An analogous statement holds for complete intersections
$\ol{Z}$, by replacing properties (II)-(III$_h$) by
\begin{itemize}
\item[(III$^{ci}$)]
$\ol{Z}\subset\ol{X}$ is the support of a complete intersection of Cartier divisors,
whose classes in $X$ are all proportional to the same ample class in $H^2(X;{\mathbb{Q}})$,
\item[(III$^{ci}_h$)]
$\mathrm{codim}(\partial Z/\ol{Z})\geq h$, where $\partial Z=\ol{Z}\cap\partial X$,
\end{itemize}
so that the following holds.
\begin{cor}[Improved LHT for complete intersections]\label{cor:LHT}
Suppose that $\ol{X}$, $\partial X$ and $\ol{Z}$ satisfy (I)-(III$^{ci}$)-(III$^{ci}_h$) above
for $h=2,3$. Then $Z=\ol{Z}\setminus\partial Z$ is connected and
$\pi_1(Z)\rightarrow\pi_1(X)$ is an isomorphism if $h=3$ (resp. is surjective, if $h=2$).
\end{cor}
\begin{proof}
By (III$^{ci}$) it is possible to embed $\ol{X}$ inside some projective space ${\mathbb{P}}$
in such a way that $\ol{Z}=\ol{X}\cap H$ for some linear subspace $H\subset{\mathbb{P}}$
of codimension $\codim(H/{\mathbb{P}})=\codim(\ol{Z}/\ol{X})$.
We then proceed analogously to the proof of Proposition \ref{prop:divisor},
replacing the role of $\ol{D}$ there by $\ol{Z}$.
\end{proof}
\begin{rem} Assuming that $\ol{X}$, $\partial X$ and $\ol{Z}$ satisfy (I)-(III$^{ci}$)-(III$^{ci}_h$) above, it is immediate that for higher values of $h$ we have similar results about the higher homotopy groups. In particular we obtain that
$\pi_i(Z)\rightarrow\pi_i(X)$ is an isomorphism if $ i\leq h-2$, and $\pi_{h-1}(Z)\rightarrow\pi_{h-1}(X)$ is surjective.
\end{rem}
\subsection{Liftings and connectedness}\label{sec:connected}
Let $X,Z$ be connected and locally arc-connected topological spaces
with universal covers $p:\wti{X}\rightarrow X$ and $\wh{Z}\rightarrow Z$.
Given a map $f:Z\rightarrow X$, consider the following diagram
\[
\xymatrix{
\wh{Z} \ar[d] \ar@/^1pc/[drr]^{\hat{f}_i}\\
\wti{Z}_i \ar[dr] \ar@{^(->}[r] & Z\times_X \wti{X} \ar[r]^{\quad\tilde{f}} \ar[d]^{\tilde{p}} & \wti{X} \ar[d]^{p}\\
& Z \ar[r]^f & X
}
\]
where the rectangle is Cartesian and $\wti{Z}_i$ is a connected component of $Z\times_X \wti{X}$. We denote by $f_*:\pi_1(Z)\rightarrow\pi_1(X)$ the homomorphism induced by $f$.
\begin{lm}[Liftings]\label{lm:lift}
Let $f:Z\rightarrow X$ be a map of connected topological spaces.
\begin{itemize}
\item[(i)]
The set of liftings $\wh{f}:\wh{Z}\rightarrow\wti{X}$ is acted on by $\pi_1(X)$ in a transitive way.
\item[(ii)]
The $\pi_1(X)$-set of connected components of
the fiber product $Z\times_X \wti{X}$ is isomorphic to $\pi_1(X)/f_*\pi_1(Z)$.
For each component $\wti{Z}_i$ of $Z\times_X \wti{X}$, its fundamental group satisfies
$\pi_1(\wti{Z}_i)=\mathrm{ker}(f_*)$ and there exists a lift $\wh{f}_i$ as in (i)
that factors through the cover $\wh{Z}\rightarrow \wti{Z}_i$.
\item[(iii)]
$Z\times_X \wti{X}$ is connected if and only if $f_*:\pi_1(Z)\rightarrow\pi_1(X)$ is surjective.
\end{itemize}
\end{lm}
\begin{proof}
(i) Fix $\wh{z}\in\wh{Z}$ a lift of $z\in Z$.
The lifting $\wh{f}$ is uniquely determined by the choice of $\wh{f}(\wh{z})\in p^{-1}(f(z))$.
The set $p^{-1}(f(z))$ is acted on simply transitively by $\pi_1(X)$.
(ii) Fix a point $z\in Z$ and $\tilde{x}\in p^{-1}(f(z))$.
The group $\pi_1(X)$ acts simply transitively on the elements
of $p^{-1}(f(z))$, and so $\pi_1(X)\cdot\tilde{x}=p^{-1}(f(z))$.
It can be easily seen that the surjective map $\pi_1(X)\cdot (z,\tilde{x})=\tilde{p}^{-1}(z)\rightarrow \pi_0(Z\times_X\wti{X})$
is a map of $\pi_1(X)$-sets.
Moreover,
two elements $(z,\tilde{x}),(z,\tilde{x}')$ belong to the same
connected component of $Z\times_X \wti{X}$ if and only if there exists a path $\alpha\in\pi_1(Z,z)$
such that $f\circ\alpha$ lifts to a path in $\wti{X}$ that joins $\tilde{x}$ and $\tilde{x}'$,
namely $f_*(\alpha)\cdot \tilde{x}=\tilde{x}'$. Hence, $\pi_0(Z\times_X\wti{X})$
can be identified to $\pi_1(X)/f_*\pi_1(Z)$.
By the universal property, any lifting $\tilde{f}$ factors through $Z\times_X \wti{X}$ and covers a connected component of $Z\times_X\wti{X}$. If the component $\wti{Z}_i$ contains the point $(z,\tilde{x})$, then the lift that satisfies $\wh{f}(\wh{z})=\tilde{x}$ induces a cover $\wh{Z}\rightarrow \wti{Z}_i$.
Pick a connected component $\wti{Z}_i$ of $Z\times_X \wti{X}$. It covers $Z$, and so
$\pi_1(\wti{Z}_i)\hookrightarrow\pi_1(Z)$ is injective.
Moreover, the composition $\pi_1(\wti{Z}_i)\rightarrow\pi_1(Z)\rightarrow \pi_1(X)$ vanishes,
as it factors through $\pi_1(\wti{X})=\{e\}$, and so $\pi_1(\wti{Z}_i)$ injectively
maps into $\mathrm{ker}(f_*)$. It also maps surjectively, since all elements of $\mathrm{ker}(f_*)$ lift
to closed loops in $\wti{Z}_i$.
(iii) follows from (ii).
\end{proof}
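A minimal illustration of (ii), with $Z=X=S^1$: let $f(z)=z^d$, so that $f_*\pi_1(Z)=d{\mathbb{Z}}\subset{\mathbb{Z}}=\pi_1(X)$. Taking $\wti{X}={\mathbb{R}}$ with $p(t)=e^{2\pi it}$, the fiber product is
\[
Z\times_X\wti{X}=\{(z,t)\in S^1\times{\mathbb{R}}\ |\ z^d=e^{2\pi it}\},
\]
which has exactly $[{\mathbb{Z}}:d{\mathbb{Z}}]=d$ connected components, each one a copy of the universal cover ${\mathbb{R}}$ of $Z$.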
An argument analogous to Lemma \ref{lm:lift}(ii) also shows the following.
\begin{lm}[Liftings via finite covers]\label{lm:finite-lift}
Let $p':X'\rightarrow X$ be a covering space
and $f:Z\rightarrow X$ a map of connected topological spaces.
Then the $\pi_1(X)$-set of
the connected components of $Z\times_X X'$ is in bijective
correspondence with $\pi_1(X)/G'$, where
$G'$ is the subgroup generated by $f_*\pi_1(Z)$ and $p'_*\pi_1(X')$.
Hence, $Z\times_X X'$ is connected if and only if
the induced map $\pi_1(Z)\rightarrow \pi_1(X)/p'_*\pi_1(X')$ is surjective.
\end{lm}
Out of the above lemmas, we can already draw a first consequence,
which is an example of the two strategies outlined at the beginning of the section.
\begin{cor}[Connected lifting of analytic subspaces]\label{cor:connected-LHT}
Let $X'\rightarrow X$ be a cover of connected complex-analytic spaces, $Z\subset X$ a connected analytic subspace
and $Z'$ its preimage inside $X'$.
\begin{itemize}
\item[(i)]
Suppose that there exists an analytic subspace $Y\subseteq Z$
such that $\pi_1(Y)\twoheadrightarrow\pi_1(X)$. Then $Z'$ is connected.
\item[(ii)]
Suppose that $X$ admits a compactification $\ol{X}$ such that
$(\ol{X},\partial X,\ol{Z})$ satisfy properties (I)-(III$^{ci}$)-(III$^{ci}_2$)
in Section \ref{sec:LHT}. Then $Z'$ is connected.
\end{itemize}
\end{cor}
\begin{proof}
In view of Lemma \ref{lm:lift}, it is enough to show that $\pi_1(Z)\rightarrow \pi_1(X)$ is surjective.
(i) This is immediate because of the factorization $\pi_1(Y)\rightarrow\pi_1(Z)\rightarrow\pi_1(X)$.
(ii) It follows from Corollary \ref{cor:LHT}.
\end{proof}
\subsection{Liftings and irreducibility}\label{sec:lift}
Let $X,Z$ be as in Section \ref{sec:connected}
and assume that both $X$ and $Z$ are
connected analytic spaces.
We begin by stating our fundamental irreducibility criterion,
which relies on Lemma \ref{lm:lift} and Lemma \ref{lm:finite-lift}.
\begin{cor}[Irreducibility criterion of $Z\times_X \wti{X}$]\label{cor:criterion}
Let $f:Z\rightarrow X$ be a map of complex-analytic varieties, with $Z$ irreducible.
\begin{itemize}
\item[(i)]
The analytic space $Z\times_X \wti{X}$ is irreducible if and only if $f_*:\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(X)$ is surjective.
\item[(ii)]
If $p':X'\rightarrow X$ is a covering space, then the irreducible components of $Z\times_X X'$
bijectively correspond to the cokernel of the map $\pi_1(Z_{\mathrm{sm}})\rightarrow \pi_1(X)/p'_*\pi_1(X')$ induced by $f_*$.
\end{itemize}
\end{cor}
\begin{proof}
(i) It is enough to observe that $Z_{\mathrm{sm}}\times_X\wti{X}$ is smooth and Zariski-dense in $Z\times_X\wti{X}$
and that $Z_{\mathrm{sm}}\times_X\wti{X}$ is irreducible if and only if it is connected.
The conclusion follows from Lemma \ref{lm:lift}(iii).
(ii) is analogous to (i), by applying Lemma \ref{lm:finite-lift} instead of Lemma \ref{lm:lift}(iii).
\end{proof}
We can also formulate a variation of the above result in which we bound the number
of irreducible components.
\begin{cor}[Number of irreducible components of $Z\times_X \wti{X}$]\label{cor:irred-Z}
Suppose that $\wh{Z}$ is irreducible.
Then $Z\times_X \wti{X}$ has $[\pi_1(X):f_*\pi_1(Z)]$ irreducible components.
\end{cor}
\begin{proof}
By Lemma \ref{lm:lift}(ii) the locus $Z\times_X \wti{X}$ has $[\pi_1(X):f_*\pi_1(Z)]$
connected components. Each component is a quotient of $\wh{Z}$, and so is irreducible.
\end{proof}
The following proposition is an incarnation of the first strategy mentioned at the beginning of the section, namely
how to prove the irreducibility of $Z\times_X\wti{X}$ by using certain subvarieties of
$Z$ with large fundamental group.
\begin{prop}[Irreducibility of $Z\times_X \wti{X}$ via subvarieties of $Z$]\label{prop:irred}
Let $Z$ be an irreducible complex-analytic variety, and let $f:Z\rightarrow X$ be a morphism.
\begin{itemize}
\item[(i)]
Suppose that $Z$ is locally irreducible and
$f_*:\pi_1(Z)\rightarrow \pi_1(X)$ is surjective.
Then $Z\times_X\wti{X}$ is irreducible.
\end{itemize}
Let now $Y\subset Z$ be an irreducible and locally irreducible subvariety.
\begin{itemize}
\item[(ii)]
If $\pi_1(Y)\twoheadrightarrow\pi_1(X)$, and $Z$ has $k$ branches at the general point of $Y$,
then $Z\times_X\wti{X}$ has at most $k$ irreducible components.
\item[(iii)]
If the image of $\pi_1(Y)\rightarrow\pi_1(X)$ has finite index,
and $Z\times_X X'$ is irreducible for all finite \'etale covers $p':X'\rightarrow X$,
then $Z\times_X \wti{X}$ is irreducible.
\end{itemize}
\end{prop}
\begin{proof}
By Lemma \ref{lm:irred}(i), the smooth locus $Z_{\mathrm{sm}}$ is connected.
(i) By Lemma \ref{lm:normal}(ii), the map
$\pi_1(Z_{\mathrm{sm}})\twoheadrightarrow\pi_1(Z)$ is surjective.
Thus the composition $\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(Z)\rightarrow \pi_1(X)$ is surjective too.
We conclude by Corollary \ref{cor:criterion}(i).
(ii) Since $\pi_1(Y)\rightarrow\pi_1(X)$ is surjective, so is $\pi_1(Z)\rightarrow\pi_1(X)$.
By Lemma \ref{surjects}, the image of $\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(Z)$ has index at most $k$
and so the same holds for the image of $\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(X)$.
The conclusion then follows from Corollary \ref{cor:criterion}(ii) applied to the universal cover $\wti{X}\rightarrow X$.
(iii) Since $Z$ has finitely many branches at the general point of $Y$,
the image of $\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(X)$ has finite index by Lemma \ref{surjects}: let $d$ be such index.
Let $p':X'\rightarrow X$ be the \'etale cover of degree $d$ such that $p'_*\pi_1(X')=f_*\pi_1(Z_{\mathrm{sm}})$.
By Corollary \ref{cor:criterion}(ii) the fiber product $Z\times_X X'$ has $d$ irreducible components,
and our hypothesis implies that $d=1$. It follows that $\pi_1(Z_{\mathrm{sm}})\rightarrow\pi_1(X)$ is surjective and we conclude
by Corollary \ref{cor:criterion}(i).
\end{proof}
The following special case is a direct consequence
of Proposition \ref{prop:irred}(ii) with $k=1$.
\begin{cor}[A criterion of irreducibility of $Z\times_X\wti{X}$]
Let $f:Z\rightarrow X$ be a morphism, where
$Z$ is an irreducible complex-analytic variety.
Suppose that there exists an irreducible and locally irreducible subvariety $Y\subset Z$
such that $\pi_1(Y)\twoheadrightarrow\pi_1(X)$ and $Y\cap Z_{\mathrm{sm}}\neq\emptyset$.
Then $Z\times_X\wti{X}$ is irreducible.
\end{cor}
\subsection{Absolutely irreducible subvarieties}\label{sec:absol}
The second hypothesis in Proposition \ref{prop:irred}(iii) determines a class of subvarieties in $X$
that we will call ``absolutely irreducible'' (following Freitag).
\begin{dfx}[Absolutely irreducible and totally irreducible subvarieties]
A subvariety $Z$ of $X$ is {\it{absolutely irreducible}}
if its preimage through every finite \'etale cover $X'\rightarrow X$ is irreducible.
Such $Z$ is {\it{totally irreducible}} if its preimage through the universal
cover $\wti{X}\rightarrow X$ is irreducible.
\end{dfx}
Obviously, a totally irreducible subvariety is absolutely irreducible.
The result we wish to prove gives sufficient conditions for absolutely irreducible subvarieties to be totally irreducible.
We consider the following property of a complex variety $X$:
\begin{itemize}
\item[(IV)]
the normal subgroups of $\pi_1(X)$ are either finite or of finite index,
and $\pi_1(X)$ has finite-index subgroups of arbitrarily high index
(the latter condition is implied by $\pi_1(X)$ being residually finite).
\end{itemize}
Note that, if $n\geq 3$, then every finite \'etale cover $X$ of ${\mathcal A}_g(n)$
satisfies property (IV) above. In fact, it also holds for $X$ any finite \'etale (in the orbifold sense) cover of ${\mathcal A}_g$,
provided $\pi_1$ is understood in the orbifold sense.
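We briefly recall the standard reasons: $\op{Sp}(2g,{\mathbb{Z}})$ is residually finite, since the congruence quotients
\[
\op{Sp}(2g,{\mathbb{Z}})\longrightarrow\op{Sp}(2g,{\mathbb{Z}}_m),\qquad m\geq 2,
\]
separate its elements; moreover, for $g\geq 2$ it is a lattice in the higher-rank simple Lie group $\op{Sp}(2g,{\mathbb{R}})$, so by Margulis' normal subgroup theorem its normal subgroups are either finite or of finite index. Both properties are inherited by finite-index subgroups, which are again lattices in $\op{Sp}(2g,{\mathbb{R}})$.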
\begin{thm}[Promoting absolutely irreducible to totally irreducible]\label{thm:prime}
Let $X$ be a smooth connected variety that satisfies property (IV).
Then every absolutely irreducible subvariety $Z\subset X$ is totally irreducible.
\end{thm}
\begin{proof}
For brevity, denote $\pi_1(X)$ just by $\Gamma$.
Let $h:Z\hookrightarrow X$ be the natural inclusion
and let $\wti{Z}$ be the preimage
of $Z$ via the universal cover $\wti{X}\rightarrow X$.
Decompose then $\wti{Z}$ into the union
\[
\wti{Z}=\bigcup_{i\in I}\wti{Z}_i
\]
of its irreducible components $\wti{Z}_i$. We shall prove that $|I|=1$.\\
The group $\Gamma$ acts on $\wti{Z}$ via biholomorphisms, and permutes its irreducible components.
Hence we get a homomorphism
\[
r:\Gamma\longrightarrow \mathfrak{S}(I).
\]
The irreducibility of $Z$ implies that the action of $\Gamma$ on the set $I$ is transitive, and we consider the normal subgroup
$K=\mathrm{ker}(r)$ of $\Gamma$.
Again by the irreducibility of $Z$,
the smooth locus $Z_{\mathrm{sm}} \subset Z$ is connected and we denote by $\j:Z_{\mathrm{sm}}\hookrightarrow Z$ the inclusion.
Moreover, we call $G$ the image of the
homomorphism $(h\circ \j)_*:\pi_1(Z_{\mathrm{sm}})\rightarrow \Gamma$ induced by $h\circ \j:Z_{\mathrm{sm}}\hookrightarrow X$.
Note that a loop contained in $Z_{\mathrm{sm}}$ lifts to a path
contained in $\wti{Z}_{\mathrm{sm}}$, which is thus contained in a unique
irreducible component of $\wti{Z}_{\mathrm{sm}}$. This shows that
$G$ acts trivially on the set $I$, and so $G\subseteq K$. \\
Now, because of property (IV), there are two possibilities.\\
{\it{Case (a): $K$ has finite index in $\Gamma$}}.\\
We denote by $Z'=\wti{Z}/K$ the subvariety in $X'=\wti{X}/K$
obtained as inverse image of $Z$ through the
finite \'etale cover $X'\rightarrow X$.
Since $Z'$ is irreducible, $K$ must act
transitively on $I$ via the restriction of $r$; as $K=\mathrm{ker}(r)$ acts trivially on $I$,
this forces $|I|=1$.\\
{\it{Case (b): $K$ is a finite subgroup in $\Gamma$.}}\\
We will look for a contradiction, which will show that this case cannot occur.\\
Since $G\subseteq K$, the group $G$ has finite order too.
By (IV) there exists a (normal) subgroup
$\Gamma'$ in $\Gamma$ of index $d >|K|$.
We set $X':=\wti{X}/\Gamma'$ and we call $Z'\subset X'$ the inverse image of $Z$ via the cover $X'\rightarrow X$ of degree $d$.
Moreover, let $\check{Z}_{\mathrm{sm}}\rightarrow Z_{\mathrm{sm}}$ be the finite cover of degree $|G|$ corresponding to the kernel of $(h\circ\j)_*:\pi_1(Z_{\mathrm{sm}})\twoheadrightarrow G<\Gamma$ and call $\check{\j}$ the composition $\check{Z}_{\mathrm{sm}}\rightarrow Z_{\mathrm{sm}}\rightarrow Z$ of such finite cover with the inclusion $\j$.
We thus have the following commutative diagrams
\[
\xymatrix{
& Z' \ar[d]\ar@{^(->}[r] & X'\ar[d] && & \pi_1(Z') \ar@{^(->}[d] \ar[r] & \Gamma' \ar@{^(->}[d]\\
\check{Z}_{\mathrm{sm}}\ar[r]_{\check{\j}} \ar@{-->}[ur]^{\check{\j}'} & Z\ar@{^(->}[r]_h & X &&
\pi_1(\check{Z}_{\mathrm{sm}})\ar[r]_{\check{\j}_*} \ar@{-->}[ur]^{\check{\j}'_*} & \pi_1(Z) \ar[r]_{h_*} & \Gamma
}
\]
Since $(h\circ \check{\j})_*$ is trivial, the map $h\circ\check{\j}:\check{Z}_{\mathrm{sm}}\rightarrow X$ lifts
to a map $\check{Z}_{\mathrm{sm}}\rightarrow X'$, whose image thus lies in $Z'$. Hence,
there exists a map $\check{\j}':\check{Z}_{\mathrm{sm}}\rightarrow Z'$ as in the above diagram.
Now, the image of $\check{\j}'$ is open and Zariski-dense in $Z'$ and so ${\rm deg}(\check{\j}')\geq 1$.
This leads to a contradiction, in fact the degree of the induced
map $\check{Z}_{\mathrm{sm}}\rightarrow Z$ is
\[
|G|\cdot 1= {\rm deg}[\check{Z}_{\mathrm{sm}}\rightarrow Z_{\mathrm{sm}}]\cdot {\rm deg}[Z_{\mathrm{sm}}\rightarrow Z]=
{\rm deg}[\check{Z}_{\mathrm{sm}}\rightarrow Z']\cdot {\rm deg}[Z'\rightarrow Z]\geq 1\cdot d,
\]
but we had chosen $d>|K|\geq |G|$.
\end{proof}
Now we focus on the case of divisors.
In order to ensure that, up to a finite \'etale cover, a divisor splits into the union
of absolutely irreducible divisors, we will consider projective varieties $\ol{X}$
with a subscheme $\partial X$ that satisfy the following property:
\begin{itemize}
\item[(V)]
the codimension of $\partial X$ inside $\ol{X}$ is at least two (III$'_1$);
$\mathrm{Pic}(\ol{X})/\!\text{tors}={\mathbb{Z}}\cdot \alpha$ with $\alpha$ ample;
the Picard number of $X'$ is $1$ and the divisibility
of $\alpha$ in $\mathrm{Pic}(X')/\!\text{tors}$ is uniformly bounded from above,
for every finite \'etale cover $X'\rightarrow X=\ol{X}\setminus\partial X$.
\end{itemize}
The following is essentially due to Freitag.
\begin{lm}[Existence of absolutely irreducible divisors]\label{lemma:prime}
Let $\ol{X}$ be a projective variety with a subscheme $\partial X$
that satisfy properties (I) and (V).
Then, for every effective Cartier divisor $\ol{D}\subset\ol{X}$,
there exists a finite \'etale cover $X'\rightarrow X$ such that the preimage $D'\subset X'$
of $D\subset X$ is the union of finitely many absolutely irreducible divisors.
\end{lm}
\begin{proof}
By property (V) there is an integer $d>0$ such that $[\ol{D}]=d\cdot \alpha$.
Let now $X'\rightarrow X$ be any finite \'etale cover and let $\ol{X}'$ be the normalization of $\ol{X}$ in the function field of $X'$.
Since $\ol{X}'$ is projective and $\partial X'$ has codimension at least two in $\ol{X}'$, property (V) ensures that
$\mathrm{Pic}(\ol{X}')/\!\text{tors}\subseteq{\mathbb{Z}}\cdot(\alpha/d_0)$ for some integer $d_0\geq 1$.
Thus, each effective Cartier divisor in $\ol{X}'$ must be a positive multiple of $\alpha/d_0$. In particular, the divisor $D'$ obtained by pulling back $D$ via $X'\rightarrow X$ has class $d\alpha$, and so
it can have at most $d\cdot d_0$ irreducible components.
It is then enough to pick $X'$ to be a finite \'etale cover of $X$ on which the number of irreducible components of $D'$ is maximal.
\end{proof}
Putting together Lemma \ref{lemma:prime} and Theorem \ref{thm:prime}, we obtain the following.
\begin{cor}[Splitting into finitely many totally irreducible divisors]\label{cor:finite-totally-prime}
Let $\ol{X}$ be a complex variety and $\partial X$ be a subscheme that
satisfy (I)-(IV)-(V).
Then, for every Cartier divisor $\ol{D}\subset\ol{X}$,
there exists a finite \'etale cover $X'\rightarrow X$ such that
the pull-back $D'\subset X'$ of $D=\ol{D}\setminus\partial X$
is a union of finitely many totally irreducible divisors.
\end{cor}
We remark that the couple $(\ol{{\mathcal A}}_g(n),\partial{\mathcal A}_g(n))$ satisfies
(I)-(IV)-(V) for all $g\geq 3$ and $n\geq 3$.
\subsection{The case of orbispaces}\label{sec:orbispaces}
Most of the above results hold in the category of complex-analytic
orbispaces, namely objects locally modelled on $[T/G]$, where $T$ is a complex-analytic space
and $G$ is a finite group that acts on $T$ via biholomorphisms.
In this case, we must use open orbifold charts instead of open subsets, and we must modify the
definition of neighborhoods accordingly. Moreover, the words smooth, singular, unramified cover,
fiber bundle, universal cover and fundamental group must all be understood in the orbifold sense.
Note in particular that an orbispace is smooth where it is locally modelled as $[T/G]$ with $T$ smooth.
An analogous interpretation applies to the words normal, irreducible, locally irreducible.
With the above caveat, Sections \ref{sec:stratification}-\ref{sec:cpx-irred}-\ref{sec:connected}-\ref{sec:lift}
go through.
A version of LHT can also be phrased for orbifolds that are global quotients.
Let $\ol{X}$ be an analytic space and let $\partial X$ and $\ol{Z}$ be subspaces of $\ol{X}$.
Suppose moreover that a finite group $G$ acts on $\ol{X}$, preserving
$\partial X$ and $\ol{Z}$, and so preserving $Z=\ol{Z}\setminus\partial X$.
\begin{cor}[Improved LHT for c.i.~inside global quotients]\label{cor:LHT-orbifold}
Suppose that $\ol{X}$, $\partial X$ and $\ol{Z}$ satisfy properties (I)-(III$^{ci}$)-(III$^{ci}_h$)
in Section \ref{sec:LHT} with $h=2,3$.
Then
\begin{itemize}
\item[(i)]
the locus $[Z/G]$ inside the orbifold $[X/G]$ is connected
and $\pi_1([Z/G])\rightarrow\pi_1([X/G])$ is an isomorphism if $h=3$ (resp. is surjective, if $h=2$);
\item[(ii)]
the preimage of $[Z/G]$ via an unramified cover of $[X/G]$ is connected.
\end{itemize}
\end{cor}
\begin{proof}
Claim (i) follows from Corollary \ref{cor:LHT} and claim (ii) from Corollary \ref{cor:connected-LHT}(ii).
\end{proof}
Also the definition of absolutely irreducible and totally irreducible divisors
can be understood verbatim in the orbispace setting.
Thus, if $(\ol{X},\partial X)$ satisfy (I) and (V), then
the conclusion of Lemma \ref{lemma:prime} holds for every effective
Cartier divisor in $[\ol{X}/G]$.
As a further example, Theorem \ref{thm:prime} has a formulation in the present
setting of orbispaces.
Namely, if $X$ is a smooth connected variety that satisfies (IV), and $G$ is a finite group
that acts on $X$, then every absolutely irreducible divisor in $[X/G]$ is totally irreducible.
Both the above assertions are immediate consequences of Lemma \ref{lemma:prime}
and Theorem \ref{thm:prime}.
\section*{Introduction}
\label{intr}
The recent experimental discovery of superconductivity in hole-doped NdNiO$_{2}$ has generated renewed interest in nickelates. In particular, the similarities and differences between nickelates and cuprates were studied intensely during the last couple of years. This study, obviously, is important as it can potentially reveal the features in the electronic structure which are responsible for the differences in superconducting properties. Very often, the study was concerned with LaNiO$_{2}$ and CaCuO$_{2}$ as simple representatives of both families of materials (nickelates and cuprates, respectively). On the theoretical (calculational) side, the majority of the work was based on Density Functional Theory\cite{prb_59_7901,prb_70_165109,prb_102_205130,prx_10_021061,prb_102_161118,prx_10_011024,prb_100_201106} (DFT), or on DFT plus Dynamical Mean Field Theory (DFT+DMFT)\cite{prx_10_021061,prb_102_161118,prb_101_064513,commphys_3_84} calculations. Only one calculation based on the GW framework, applying its non-self-consistent version (G0W0), was published recently.\cite{prb_101_161102} From the methodological point of view it is also important to mention the application of the GW+DMFT approach to the related compound NdNiO$_{2}$.\cite{prx_10_041047} Before proceeding with the present work, let us briefly summarize the results from other works that are most important for the present study.
At the DFT level, the principal difference between the two materials consists of the increased energy separation\cite{prb_70_165109} of the Ni 3d$_{x^{2}-y^{2}}$ orbitals from the O 2p orbitals in LaNiO$_{2}$ as compared to the corresponding separation of the Cu 3d$_{x^{2}-y^{2}}$ and the O 2p orbitals in CaCuO$_{2}$. Also, the 5d orbitals of La cross the Fermi level in LaNiO$_{2}$ and, therefore, are coupled with the Ni 3d$_{x^{2}-y^{2}}$ orbitals. At the DFT+DMFT level, Wang et al.\cite{prb_102_161118} studied two nickelates, SrNiO$_{2}$ and LaNiO$_{2}$. In their study, all Ni 3d orbitals were considered as correlated, with a Hubbard U parameter of 5 eV. A visual comparison of the electronic structure of LaNiO$_{2}$ obtained in [\onlinecite{prb_102_161118}] at the DFT and the DFT+DMFT levels (Fig. 2 in their work) does not show any qualitative differences on the scale of a few electron-volts. One can see the renormalization of bands only in the immediate vicinity of the Fermi level. Karp et al.\cite{prx_10_021061} used the DFT+DMFT theory to compare NdNiO$_{2}$ and CaCuO$_{2}$. Instead of considering all Ni(Cu) 3d states as correlated, the authors of Ref. [\onlinecite{prx_10_021061}] performed two types of DFT+DMFT calculations: one with only Ni(Cu) 3d$_{x^{2}-y^{2}}$ as the correlated orbital, and the second with Ni(Cu) 3d$_{x^{2}-y^{2}}$ and 3d$_{3z^{2}-r^{2}}$ as correlated. The Hubbard parameter U was, correspondingly, increased from 3.1 eV in the first type of calculations to 7 eV in the second one. The important conclusion from this work is that nickelates are more correlated than cuprates (see Fig. 2 in Ref. [\onlinecite{prx_10_021061}]). Also, the authors place nickelates in the same charge-transfer category of materials as the cuprates despite the larger separation between the d$_{x^{2}-y^{2}}$ and the O 2p states in the nickelates. One more important difference is that the rare earth d states appear in both the addition and removal spectra in nickelates, which is a sign of their hybridization with the Ni 3d states.
The DFT and DFT+DMFT works cited above provide insightful information on these materials. One can point out, however, that there are certain issues, particularly with the DFT+DMFT, which can affect the robustness of the conclusions. Firstly, the DFT+DMFT results depend on the choice of the U parameter. In this respect, the choice of U in Refs.~[\onlinecite{prx_10_021061}] and [\onlinecite{prb_102_161118}] seems to be inconsistent. In the first work the U parameter was 3.1 eV for one correlated orbital and 7 eV for two correlated orbitals. The more orbitals we consider as correlated, the larger U should be, because the screening by the rest of the (uncorrelated) orbitals is reduced. However, in Ref. [\onlinecite{prb_102_161118}], where all five Ni 3d orbitals were correlated, the U value was only 5 eV. Secondly, because of the apparent importance of the energy separation between the Ni(Cu) 3d and the O 2p levels and of the degree of hybridization between the Ni 3d and the La 5d orbitals, the neglect of the inter-site (non-local) components of the self-energy in both the DFT and the DFT+DMFT studies seems highly questionable when considering the relative positioning of the Ni(Cu) 3d and O 2p states, and of the Ni 3d and La 5d states. Thirdly, the low-energy physics (the immediate vicinity of the Fermi level), which is the principal goal of the DFT+DMFT studies, can most likely also be affected by effects not included in the DFT+DMFT: the electron-phonon interaction, the same non-local self-energy effects, and the frequency dependence of the effective interaction. Therefore, the conclusions might change when all important contributions are properly taken into account.
Thus, it seems important and interesting to also apply other methods which include correlation effects and which are free of at least some of the mentioned issues of the DFT+DMFT. In this respect, the work by Olevano et al. [\onlinecite{prb_101_161102}] represents an important step. In their work, the non-self-consistent GW approximation (G0W0) was used to study the electronic structure of LaNiO$_{2}$. G0W0 represents only the first term in the expansion of the self energy, but it includes the non-local physics on the same footing as the local one. In addition, it has no adjustable parameters and it retains the full frequency dependence of the effective interaction. The authors of Ref. [\onlinecite{prb_101_161102}] have shown that the La 4f states undergo a 2 eV upward shift with respect to their DFT position, whereas the O 2p states are pulled down by 1.5 eV. Thus, they stress the importance of the non-local physics in this compound. As a drawback of the G0W0 approximation one can consider its obvious dependence on the starting point (because of the lack of self consistency). G0W0 relies on the assumption that the GW wave functions are similar to the DFT wave functions (if the DFT is used as a starting point). This assumption works well in simple semiconductors (like Si or LiF) but can be seriously questioned in more complicated materials. For instance, G0W0 (with the DFT as a starting point) applied to the monoclinic M1 phase of VO$_{2}$ results in a metal (similar to the DFT), whereas it is an insulator in experiments.\cite{prl_99_266402} Only the self-consistent quasiparticle GW calculation provides the correct insulating state.\cite{prr_2_023076} With this consideration, it is clear that self-consistent GW-based calculations can provide essential new information on the differences in the electronic structure of LaNiO$_{2}$ and CaCuO$_{2}$.
The principal goal of this work is, therefore, to apply the self-consistent GW method to representatives of the nickelates and cuprates. We also apply a self-consistent GW+Vertex approach to determine the strength of the correlation effects beyond the GW approximation. In addition, we apply the self-consistent quasiparticle GW approximation (QSGW), as it enforces the Ward Identity (WI) in the limit of low frequency and low momenta but neglects the dynamical effects (frequency dependence) in the self energy. The scGW, on the other hand, treats the high-frequency part of the self energy on the same footing as the low-frequency part but neglects the WI altogether.
The paper begins with a brief discussion of the distinctive features of the methods used in this work and of the setup parameters for the calculations (the first section). The second section provides the results obtained and a discussion. The conclusions are given afterward. Finally, three sections in the Appendix provide supporting information for the main text.
\section*{Methods and calculation setups}\label{meth}
All calculations in this work were performed using the code FlapwMBPT.\cite{flapwmbpt} For the DFT calculations, we used the local density approximation (LDA) as parametrized by Perdew and Wang.\cite{prb_45_13244} Recently, a number of improvements in the quality of the basis set in the FlapwMBPT code have been implemented,\cite{prb_103_165101} which allowed, for instance, a more accurate evaluation of the atomic forces.\cite{jcm_press} Our scGW and sc(GW+Vertex) calculations are based on Hedin's theory.\cite{pr_139_A796} They can also be defined using the $\Psi$-functional formalism of Almbladh et al.\cite{ijmpb_13_535} As shown in Ref. [\onlinecite{ijmpb_13_535}], the $\Psi$-functional can be constructed starting from the Luttinger-Ward $\Phi$-functional\cite{pr_118_1417} by using the screened Coulomb interaction W instead of the bare Coulomb interaction V as an independent variable (besides the Green's function G). It is defined by the following expression:
\begin{equation}\label{psi}
\Psi[G,W]=\Phi[G,V]-\frac{1}{2}\mathrm{Tr}\left[PW-\ln(1+PW)\right],
\end{equation}
where $P$ is the irreducible polarizability. In materials science, the $\Psi$-functional is more convenient than the $\Phi$-functional of Luttinger and Ward. The first reason is connected with the infinite range of the bare Coulomb interaction, which makes the screened interaction W a much more suitable quantity than the bare interaction V. The second reason is the simplicity of the $\Psi$-functional. For instance, at the level of the GW approximation, the $\Phi$-functional is represented by an infinite sequence of ring diagrams, whereas the $\Psi$-functional is represented by just one diagram (the first diagram in Fig. \ref{diag_Psi}). In this work, the simplest approximation for the $\Psi$-functional which includes vertex corrections has been adopted (Fig. \ref{diag_Psi}). As already mentioned, the first diagram in Fig. \ref{diag_Psi} corresponds to the GW approximation, whereas the second one represents the first-order vertex correction.
\begin{figure}[t]
\begin{center}\begin{axopicture}(200,56)(0,0)
\SetPFont{Arial-bold}{28}
\SetWidth{0.8}
\Text(10,10)[l]{$\Psi$ =}
\Text(35,10)[l]{$-\frac{1}{2}$}
\GCirc(75,10){20}{1}
\Photon(55,10)(95,10){2}{5.5}
\Text(110,10)[l]{+}
\Text(128,10)[l]{$\frac{1}{4}$}
\GCirc(160,10){20}{1}
\Photon(160,-10)(160,30){2}{5.5}
\Photon(140,10)(180,10){2}{5.5}
\end{axopicture}
\end{center}
\caption{Diagrammatic representation of $\Psi$-functional which includes the simplest non-trivial vertex.}
\label{diag_Psi}
\end{figure}
\begin{figure}[t]
\begin{center}\begin{axopicture}(200,56)(0,0)
\SetPFont{Arial-bold}{28}
\SetWidth{0.8}
\Text(10,10)[l]{$P$ =}
\Photon(48,10)(55,10){2}{2.5}
\GCirc(75,10){20}{1}
\Photon(95,10)(102,10){2}{2.5}
\Text(120,10)[l]{$-$}
\Photon(143,10)(150,10){2}{2.5}
\GCirc(170,10){20}{1}
\Photon(190,10)(197,10){2}{2.5}
\Photon(170,-10)(170,30){2}{5.5}
\end{axopicture}
\end{center}
\caption{Diagrammatic representation of irreducible polarizability in the simplest vertex corrected scheme.}
\label{diag_P}
\end{figure}
\begin{figure}[t]
\begin{center}\begin{axopicture}(200,56)(0,0)
\SetPFont{Arial-bold}{28}
\SetWidth{0.8}
\Text(10,10)[l]{$\Sigma$ = -}
\Line(38,10)(112,10)
\PhotonArc(75,10)(30,0,180){2}{8.5}
\Text(120,10)[l]{$+$}
\Line(133,10)(207,10)
\PhotonArc(160,10)(20,0,180){2}{6.5}
\PhotonArc(180,10)(20,180,360){2}{6.5}
\end{axopicture}
\end{center}
\caption{Diagrammatic representation of self energy in the simplest vertex corrected scheme.}
\label{diag_S}
\end{figure}
Diagrammatic representations of the irreducible polarizability (Fig. \ref{diag_P}) and of the self energy (Fig. \ref{diag_S}) follow from the chosen approximation for the $\Psi$-functional. The set of diagrams for the polarizability and self energy shown in Figs. \ref{diag_P} and \ref{diag_S} corresponds to the scheme B introduced earlier in Ref. [\onlinecite{prb_94_155101}]. In order to make the notation more self-explanatory, here we introduce another abbreviation. Following the convention for the GW approach, in which the name reflects the lines of the GW diagram (the first diagram in Fig. \ref{diag_S}), we will use the term sc(GW+G3W2) (instead of sc(GW+Vertex) or "scheme B"), which corresponds to all the diagrams in Fig. \ref{diag_S}. The specific diagrammatic representation of the polarizability defines the approximation for the screening. Thus, from Fig. \ref{diag_P} we can state that in sc(GW+G3W2) the screening is defined by the one-loop diagram (Random Phase Approximation, RPA) plus the first-order electron-hole interaction diagram; scGW includes only the RPA part. Technical details of the GW part were described in Refs. [\onlinecite{prb_85_155129,cpc_219_407}]. The numerical algorithm for the evaluation of the first-order polarizability was the same in this study as the one described in detail in Ref. [\onlinecite{prb_94_155101}]. For the evaluation of the second-order self energy, however, a more efficient algorithm (as compared to the one described in [\onlinecite{prb_94_155101}]) is used. A brief account of the details of this new algorithm can be found in the Appendix. The diagrammatic (GW and G3W2) parts of the FlapwMBPT code take full advantage of the fact that certain diagrams can be evaluated more efficiently in reciprocal (and frequency) space, whereas other diagrams are easier to evaluate in real (and time) space. As a result, the GW part of the code scales as $N_{k}N_{\omega}N^{3}_{b}$, where $N_{k}$ is the number of \textbf{k}-points in the Brillouin zone, $N_{\omega}$ is the number of Matsubara frequencies, and $N_{b}$ stands for the size of the basis set. The vertex part of the code scales as $N^{2}_{k}N^{2}_{\omega}N^{4}_{b}$. For comparison, if one uses a naive implementation (everything in reciprocal space and frequency), then the GW part scales as $N^{2}_{k}N^{2}_{\omega}N^{4}_{b}$ (i.e. exactly as the vertex part in the efficient implementation), and the vertex part scales as $N^{3}_{k}N^{3}_{\omega}N^{5}_{b}$. Besides the efficiency of the implementation, we have to mention two more factors which make the use of the diagrams beyond GW feasible. The first is the fact that the higher-order diagrams converge much faster than the GW diagram with respect to the basis set size and to the number of \textbf{k}-points.\cite{prb_94_155101,prb_95_195120} The second is that the higher-order diagrams are very well suited for massive parallelization.
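To make the quoted scalings concrete, here is a toy cost estimate (our own illustration, not the code's actual timing model; the number of Matsubara frequencies used in the example is a placeholder):
\begin{verbatim}
def costs(nk, nw, nb):
    # Efficient implementation (mixed real/reciprocal space):
    gw = nk * nw * nb**3              # GW part
    g3w2 = nk**2 * nw**2 * nb**4      # vertex part
    # Naive all-reciprocal-space implementation:
    gw_naive = nk**2 * nw**2 * nb**4
    g3w2_naive = nk**3 * nw**3 * nb**5
    return gw, g3w2, gw_naive, g3w2_naive

# GW settings of this work: 6x6x6 k-mesh, 300 bands; nw is illustrative
print(costs(nk=216, nw=64, nb=300))
\end{verbatim}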
scGW has a certain advantage as compared to the non-self-consistent (one-shot) G0W0 approach: there is no dependence on the starting point in scGW. Also, being based on the functional formalism, it allows (at least in principle) a direct way to evaluate total energies.\cite{prb_57_2108,prb_80_041103,jcm_29_465503} However, from the purely theoretical point of view, scGW has certain issues, which one can relate to the rather "nonsymmetric" dressing of the Green's function during the self-consistency cycle: adding more and more self-consistency diagrams while retaining at each iteration only the lowest-order skeleton diagram for the polarizability and for the self energy. This "nonsymmetric" dressing results in, for instance, an incorrect long-wave limit of the polarizability. There are quite a number of documented limitations of the approach: too large a bandwidth (as compared to the correct result) in the electron gas\cite{prb_57_2108} and in alkali metals\cite{prl_81_1662}, the absence of satellites in the electron gas\cite{prb_57_2108}, and an overestimation of the band gap in simple semiconductors.\cite{prl_81_1662,prb_98_155143} In order to "defend" scGW a bit, one can observe that the above-listed limitations pertain mostly to materials where non-local physics is prevalent (the electron gas, alkali metals, sp semiconductors). There is only a limited number of scGW applications to realistic materials where local effects are the most important or, at least, contribute considerably to observable properties. The existing applications, however, are not as conclusive as in the case of simple materials. Just to name a few, one can point out that scGW overestimates the magnetic moment in iron\cite{jcm_29_465503} and the band gap in NiO\cite{arx_2106_03800}. Also, in SrVO$_{3}$, there is an indication of a worsening of the calculated spectra when going from G0W0 to scGW.\cite{prb_94_201106} However, scGW in Ref. [\onlinecite{prb_94_201106}] was implemented for a basis set of rather small size (only $t_{2g}$ orbitals), which makes the conclusion plausible but not very convincing. On the other hand, scGW describes perfectly well the experimental photoemission spectrum of americium metal,\cite{prb_85_155129} whereas G0W0 fails completely. Also, applications of scGW are rather popular in atomic and molecular physics,\cite{epl_76_298,jcp_130_114105,prb_86_081102} which supports the idea that in the world of finite systems scGW has certain merits.
sc(GW+G3W2) adds the skeleton diagrams of the next order (as compared to scGW) to both the polarizability and the self energy. Therefore, the problems occurring because of the above-described "nonsymmetric" dressing of the Green's function should be less dramatic. From this point of view, one can expect sc(GW+G3W2) to be more accurate than scGW. Indeed, there is a noticeable improvement in the calculated bandwidth of the electron gas\cite{prb_96_035108} and of the alkali metals\cite{prb_94_155101}. The improvements in the calculated band gaps of sp semiconductors are especially remarkable.\cite{prb_95_195120} In the case of simple semiconductors, sc(GW+G3W2) not only considerably outperforms scGW and QSGW (introduced below), but is also better than G0W0 in most cases. For more complicated materials, one can point out the recent calculation of the band gap in NiO,\cite{arx_2106_03800} where sc(GW+G3W2) resulted in an almost perfect reproduction of the experimental gap, whereas scGW overestimated it by about 25\%. Also, the improvement in the calculated band gap of the van der Waals ferromagnet CrI$_{3}$ is considerable.\cite{arx_2105_07798} Of course, one cannot expect sc(GW+G3W2) to be considerably better than scGW in the case of really strongly correlated materials, i.e. where a non-perturbative treatment is necessary.
We also use the quasiparticle self-consistent GW (QSGW) approach. Similar to the scGW and sc(GW+G3W2) approaches, it is based on the finite-temperature (Matsubara) formalism, and in this respect it differs from the well-known QSGW implementation by Kotani et al.\cite{prb_76_165106} The quasiparticle approximation includes a linearization of the self energy near zero frequency (see Refs. [\onlinecite{prb_85_155129,cpc_219_407}] for details) and, therefore, the method is reliable only near the Fermi level, usually within a few electron-volts. The approach adopted by Kotani et al. in Ref. [\onlinecite{prb_76_165106}] uses a specially designed procedure of averaging the non-diagonal elements of the self energy for each quasiparticle state instead of the linearization near the chemical potential. This, presumably, should make the approach of Kotani et al. more accurate over a broad energy range than the QSGW used in this study. However, for energies not far from the chemical potential (the range of interest in this work), the two types of QSGW are quite similar. The differences, in fact, are mostly related to differences in the basis sets and in the degree of convergence.\cite{prb_95_195120} In both variants of QSGW, the effective self energy is static (frequency independent, see App. \ref{sig_qsgw}) and the method is not diagrammatic. The special (or rather manual) construction of the effective self energy breaks the relation to the $\Psi$-functional. However, as explained by Kotani et al.,\cite{prb_76_165106} QSGW satisfies the zero-frequency and long-wave limit of the Ward Identity because of the so-called Z-factor cancellation. This often makes it quite accurate, especially in simple metals and semiconductors where the above-mentioned limit is important. Band gaps calculated with QSGW, for instance, are usually more accurate than the ones calculated with scGW.\cite{prl_99_246403, prb_95_195120,prb_98_155143} In more complicated solids (especially where d or f electrons play an important role), the QSGW approach is not necessarily better than scGW: the frequency dependence of the self energy can be more important than the zero frequency and momentum limit of the WI. A good example is americium metal, where both the DFT and QSGW fail to describe the experimentally determined\cite{prl_52_1834} position of the occupied 5f$_{5/2}$ states, whereas scGW describes them very well.\cite{prb_85_155129} For simple (sp) semiconductors with a large band gap (C, MgO, LiF, NaCl), scGW outperforms QSGW\cite{prb_98_155143,prb_95_195120} (though not considerably). Also, as it seems,\cite{arx_2105_07798} scGW is slightly more accurate than QSGW in the case of CrI$_{3}$. Additional insight into the differences between the approximate methods of this work is provided in Appendix \ref{Im_W}. Considering their differences, the three approaches (scGW, sc(GW+G3W2), and QSGW) represent a good set of methods to study new materials.
Our algorithm for the analytical continuation of the self energy, which was needed, for instance, to plot Figs. \ref{pdos_ca} and \ref{pdos_la}, is based on Ref. [\onlinecite{jltp_29_179}] and is described in the Appendix of Ref. [\onlinecite{cpc_257_107502}]. The band plotting associated with the scGW/sc(GW+G3W2) approach (see Fig.\ref{gzr_bnd}) needs some additional clarification. Strictly speaking, one-electron features (band dispersions) in these two approaches should be obtained as the peak positions of the $\mathbf{k}$-resolved spectral functions. The evaluation of spectral functions includes the analytical continuation of the correlation part of the self energy from the imaginary to the real frequency axis. However, as demonstrated in Refs. [\onlinecite{prb_85_155129,cpc_257_107502}], the peak positions of the spectral function near the chemical potential can often be accurately reproduced by a simplified procedure. This procedure involves the linearization of the frequency dependence of the self energy near the chemical potential and, consequently, results in effective one-electron energies (see the Appendix of Ref. [\onlinecite{cpc_257_107502}] for details). The one-electron energies thus obtained can then be used for band plotting.
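For concreteness, a minimal sketch of this linearization step follows (our own reconstruction, not the FlapwMBPT implementation; the function name and the one-point derivative estimate are ours):
\begin{verbatim}
import numpy as np

def effective_energy(eps0, sigma_iw, wn):
    # eps0     : one-electron reference energy (eV)
    # sigma_iw : correlation self-energy at the first Matsubara points (eV)
    # wn       : corresponding positive Matsubara frequencies (eV)
    # The slope of Im Sigma(i w) at w -> 0 gives the renormalization Z <= 1.
    dim_sigma = np.imag(sigma_iw[0]) / wn[0]   # crude forward estimate
    Z = 1.0 / (1.0 - dim_sigma)
    # Linearized quasiparticle equation: E = Z * (eps0 + Re Sigma(0)).
    return Z * (eps0 + np.real(sigma_iw[0]))
\end{verbatim}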
Let us now specify the setup parameters used in the calculations. In order to make the presentation more compact, the principal structural parameters of the studied solids have been collected in Table \ref{list_s} and the most important setup parameters in Table \ref{setup_s}. All calculations were performed at an electronic temperature of 600 K. As long-range magnetic order has not yet been found in LaNiO$_{2}$, all calculations were non-magnetic for simplicity. The DFT, scGW, QSGW, and the GW part of the sc(GW+G3W2) calculations were performed with a $6\times 6\times 6$ mesh of \textbf{k}-points in the Brillouin zone. 300 band states were used to expand the Green's function and the self energy. The convergence provided by the above parameters was checked by repeating the calculations with a $4\times 4\times 4$ mesh of \textbf{k}-points and with a smaller number of bands for the GW part. From this analysis we conclude that a further increase in the number of \textbf{k}-points and bands should not change the effective band energies near the Fermi level (Fig. \ref{gzr_bnd}) by more than 5\%, which is sufficient for the comparison of methods. The diagrams beyond the GW approximation were evaluated using a $3\times 3\times 3$ mesh of \textbf{k}-points in the Brillouin zone and about 26 bands (those closest to the Fermi level). Given the above-mentioned faster convergence of the higher-order diagrams with respect to these parameters, this choice represented a reasonable compromise between accuracy and computational cost. Similar to the GW part, the convergence was checked by repeating the calculations with a smaller number of \textbf{k}-points ($2\times 2\times 2$) and of bands (10-22 instead of the final 26). We estimate the error of the vertex part (i.e. the difference between the sc(GW+G3W2) and scGW results) to be 10-15\% or less. As one can see from Fig. \ref{gzr_bnd}, this difference is itself quite small, so a 10-15\% change in it would not alter the conclusions.
\section*{Results}
\label{res}
\begin{table}[t]
\caption{Structural parameters of the solids studied in this work. Lattice parameters are in Angstroms, muffin-tin (MT) radii are in atomic units (Bohr radii), and atomic positions are given in units of the three primitive translation vectors.} \label{list_s}
\small
\begin{center}
\begin{tabular}{@{}c c c c c c} &Space&&&Atomic&\\
Solid &group&a&c&positions&$R_{MT}$\\
\hline\hline
CaCuO$_{2}$&123 &3.86 &3.20&Ca: 0;0;0 &2.032\\
& & & &Cu: 1/2;1/2;1/2 &2.032\\
& & & &O: 1/2;0;1/2 &1.563\\
LaNiO$_{2}$&123 &3.966 &3.376&La: 0;0;0 &2.087\\
& & & &Ni: 1/2;1/2;1/2 &2.087\\
& & & &O: 1/2;0;1/2 &1.606\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[b]
\caption{Principal setup parameters of the studied solids are given. The following abbreviations are introduced: $\Psi$ is for wave functions, $\rho$ is for the electronic density, $V$ is for Kohn-Sham potential, and PB is for the product basis.} \label{setup_s}
\small
\begin{center}
\begin{tabular}{@{}c c c c c c} &Core&&$L_{max}$&$L_{max}$&\\
Solid &states&Semicore&$\Psi/\rho,V$&PB & $RK_{max}$ \\
\hline\hline
CaCuO$_{2}$&Ca: [Ne]& 3s,3p&6/6&6&8.0 \\
& Cu: [Ne]& 3s,3p&6/6&6& \\
& O: [He]& 2s&5/5&5& \\
LaNiO$_{2}$&La: [Ar]3d& 4s,4p,4d,5s,5p&6/6&6&8.0 \\
& Ni: [Ne]& 3s,3p&6/6&6& \\
& O: [He]& 2s&5/5&5& \\
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[t]
\fbox{\includegraphics[width=6.5 cm]{ca_PSI4_dft.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{ca_PSI4_gw.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{ca_QP2_qp.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{ca_PSI5_psi.pdf}}
\caption{Partial (atom and orbital resolved) spectral functions of CaCuO$_{2}$.}
\label{pdos_ca}
\end{figure*}
Partial (atom and orbital resolved) spectral functions are presented in Fig. \ref{pdos_ca} (CaCuO$_{2}$) and in Fig. \ref{pdos_la} (LaNiO$_{2}$). Let us first point out a few important differences in the electronic structure of these two materials at the DFT level. First, the La 4f levels in LaNiO$_{2}$ dominate the energy range immediately above the Fermi level. The La 5d states are spread in energy and lie 2-5 eV above the 4f states. The absence of f states in CaCuO$_{2}$ makes the presence of the Ca 3d states more prominent among the unoccupied bands. The character of the levels at the Fermi level also represents an important qualitative difference. In CaCuO$_{2}$, they are almost equally represented by the Cu 3d$_{x^{2}-y^{2}}$ and the O 2p states. In LaNiO$_{2}$, however, the 3d$_{x^{2}-y^{2}}$ states of Ni dominate. The states below the Fermi level also look different. In CaCuO$_{2}$, the Cu 3d states are mixed with the O 2p states, and together they occupy the same energy range from -7 eV to almost the Fermi level. In LaNiO$_{2}$, the Ni 3d states are well separated from the O 2p states and occupy the energy range from -3 eV to the Fermi level, whereas the O 2p states occupy the energy range from -8 eV to -3.5 eV.
\begin{figure*}[t]
\fbox{\includegraphics[width=6.5 cm]{la_PSI4_dft.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{la_PSI4_gw.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{la_QP13_qp.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{la_PSI8_psi.pdf}}
\caption{Partial (atom and orbital resolved) spectral functions of LaNiO$_{2}$.}
\label{pdos_la}
\end{figure*}
Let us now discuss the changes in the electronic structure (as compared to the DFT) when we apply the fully self-consistent GW approach. In CaCuO$_{2}$, there are no qualitative changes. For instance, the states at the Fermi level are still equally represented by the Cu 3d$_{x^{2}-y^{2}}$ and the O 2p states. Also, part of the spectral weight associated with the Cu 3d$_{x^{2}-y^{2}}$ states still resides in the occupied valence bands. The occupied Cu 3d states are strongly mixed with the O 2p states, as in the DFT case. However, these joint occupied states are shifted down by about 2 eV as compared to the DFT case. The unoccupied Ca 3d states are shifted up by about 2 eV. In LaNiO$_{2}$, the states immediately at the Fermi level are still almost completely represented by the Ni 3d$_{x^{2}-y^{2}}$ orbitals, as in the DFT calculations. However, the rest of the electronic structure is qualitatively different from the DFT case. Firstly, the La 4f states are pushed up by about 5 eV in the scGW calculations as compared to the DFT. They now lie above the La 5d bands and, supposedly, are not very important for the low-energy physics. An even more noticeable change is that the occupied Ni 3d and O 2p states, which were very well separated in the DFT calculations, are now strongly mixed and reside in the same energy range, from approximately -10 eV to -2 eV relative to the Fermi level. As one can notice, this was achieved by a considerable downward shift of the occupied Ni 3d states and a slight (about 1 eV) upward shift of the O 2p states.
The self-consistent vertex-corrected GW calculations do not change the scGW result very much. One can notice, however, that in both materials the occupied Ni(Cu) 3d and O 2p states are pushed up in energy by about 0.5 eV (compared to the scGW result). A very slight downward shift of the Ca 3d (La 5d) states can also be noticed.
The QSGW calculations for CaCuO$_{2}$ result in an electronic structure very similar to that obtained with the scGW/sc(GW+G3W2) approach, which can be verified by comparing the positions of all the principal peaks in Fig. \ref{pdos_ca}. The situation with LaNiO$_{2}$ is, however, quite different. In contrast to the scGW and sc(GW+G3W2) calculations, the QSGW shows only quantitative (but not qualitative) changes in the electronic structure (as compared to the DFT). The only obvious similarity with the scGW results is the upward shift of the La 4f states. The oxygen 2p and the occupied Ni 3d states are pushed down by about 2 eV and 1 eV, respectively, but there is no mixing between them as in the scGW or sc(GW+G3W2) case. Obviously, in the case of LaNiO$_{2}$ the differences between the methods, the QSGW on one hand and the scGW/sc(GW+G3W2) on the other, are much more prominent than in the case of CaCuO$_{2}$.
It is interesting to compare the tendencies in the electronic structure of LaNiO$_{2}$ (when we go from the LDA to more sophisticated methods) with the tendencies discovered in Ref. [\onlinecite{prb_101_161102}]. The principal finding of Ref. [\onlinecite{prb_101_161102}] is that the La 4f states are pushed up by about 2 eV in the G0W0 calculation (as compared to the DFT), the O 2p states are pushed down by about 1.5 eV, and the energy levels near the Fermi level do not change noticeably. The energy levels near the Fermi level are represented by the Ni 3d$_{x^{2}-y^{2}}$ orbitals in all our calculations. In this respect, we agree with the earlier G0W0 calculations. Further, all our post-DFT approaches push the La 4f states up by about 5 eV, which is larger than the 2 eV shift in the G0W0 case and can naturally be explained by self-consistency effects. The change in the position of the O 2p states is, however, different. Only our QSGW approach agrees with the G0W0 finding: a downward shift of about 1.5 eV. As already discussed above, the methods with a dynamic self energy (scGW and sc(GW+G3W2)) demonstrate a qualitatively different change: the O 2p states split into two groups, with the boundary between the groups at about -4 eV, and mix considerably with the Ni 3d states. The difference in this tendency is most likely related to the incoherence effects in the self-consistency diagrams, which are not included in the G0W0 or QSGW approaches.
In order to gauge the strength of the correlation effects, the renormalization factor Z has been evaluated. The results are shown in Fig. \ref{z_factor} for the $\Gamma$ point of the Brillouin zone. As one can see, all approaches (scGW, sc(GW+G3W2), and QSGW) result in quite similar and moderate correlation effects. The minimal value of Z is unmistakably obtained for one band near the Fermi level, which has Ni(Cu) 3d$_{x^{2}-y^{2}}$ character. This holds for all points in the Brillouin zone in the case of LaNiO$_{2}$. For CaCuO$_{2}$, however, there is a noticeable admixture of O 2p character in some parts of the Brillouin zone (not shown in Fig. \ref{z_factor}). At those points, the Z factor is slightly larger (up to 0.77$\div$0.80). Generally, the analysis of the Z factor confirms that the correlations are slightly stronger in the LaNiO$_{2}$ case. In this respect, our calculations are in line with all published works. Also, similar to the G0W0 calculations,\cite{prb_101_161102} we obtained very little variation of Z across the Brillouin zone for the Ni 3d$_{x^{2}-y^{2}}$ band in the case of LaNiO$_{2}$. Its value is also very close to the value $0.70\pm0.02$ reported in the G0W0 calculations. There are, however, differences with the DFT+DMFT results. Most notable is that the Z factor in the DFT+DMFT calculations\cite{prx_10_021061} is considerably smaller for the most correlated Ni 3d$_{x^{2}-y^{2}}$ orbital. Its value was reported to be 0.36 (LaNiO$_{2}$, Ref. \onlinecite{prb_102_161118}). In the case of the Cu 3d$_{x^{2}-y^{2}}$ orbital, the DFT+DMFT values are 0.50$\div$0.75 (CaCuO$_{2}$, Ref. \onlinecite{prx_10_021061}), which is not much different from ours. There are a few possible sources of the differences for LaNiO$_{2}$: i) an insufficient number of diagrams included in our calculations; ii) the single-site approximation in the DFT+DMFT calculations; iii) a Hubbard U taken too large in the DFT+DMFT case. Experimental research is, therefore, imperative for the purpose of comparison. However, as all the calculations neglect the electron-phonon interaction, a direct comparison with a future experimental mass enhancement (for instance) will require the inclusion of the electron-phonon interaction in the theoretical predictions.
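For reference, the renormalization factor discussed here is presumably the standard Matsubara-axis quantity (our notation; the text does not spell it out explicitly):
\begin{equation}
Z_{\lambda\mathbf{k}}=\left[1-\left.\frac{\partial\,\mathrm{Im}\,\Sigma_{\lambda\mathbf{k}}(i\omega)}{\partial \omega}\right|_{\omega\rightarrow 0}\right]^{-1},
\end{equation}
so that the mass enhancement is approximately $m^{*}/m \approx 1/Z$ and a smaller Z indicates stronger correlations.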
\begin{figure*}[t]
\fbox{\includegraphics[width=6.5 cm]{ca_z.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{la_z.pdf}}
\caption{Diagonal elements of the renormalization factor Z versus band index for the $\Gamma$ point in the Brillouin zone. Left window: CaCuO$_{2}$, right window: LaNiO$_{2}$. The insets magnify the area around the minimum of Z.}
\label{z_factor}
\end{figure*}
\begin{figure*}[t]
\fbox{\includegraphics[width=6.5 cm]{ca_gzr_bnd.pdf}}
\hspace{0.02 cm}
\fbox{\includegraphics[width=6.5 cm]{la_gzr_bnd1.pdf}}
\caption{Quasiparticle band structure along the $\Gamma$-Z-R path in the Brillouin zone. Left window: CaCuO$_{2}$, right window: LaNiO$_{2}$. Only the bands near the Fermi level are shown.}
\label{gzr_bnd}
\end{figure*}
A distinctive feature of the DFT+DMFT calculations is the renormalization (narrowing) of the bands near the Fermi level. For LaNiO$_{2}$ it is shown in Fig. 1 of Ref. \onlinecite{prb_102_161118}, and for CaCuO$_{2}$ in Fig. 4 of Ref. \onlinecite{prx_10_021061}. The narrowing is particularly well seen along the $\Gamma$-Z direction in the Brillouin zone for the band immediately under the Fermi level (LaNiO$_{2}$). In the CaCuO$_{2}$ case, the actual bands are strongly entangled near the Fermi level, so that in the DMFT applications a disentanglement procedure is used. For the purpose of comparison, we show in Fig. \ref{gzr_bnd} the bands near the Fermi level along the path $\Gamma$-Z-R in the Brillouin zone. The case of LaNiO$_{2}$ is a bit simpler, so we discuss it first. As one can see at the $\Gamma$ point, all three correlated methods (QSGW, scGW, and sc(GW+G3W2)) result in a large narrowing of the DFT band of Ni 3d$_{x^{2}-y^{2}}$ character, which is the second band from the Fermi level at the $\Gamma$ point. This band can easily be identified: it starts at -1.2 eV in the DFT case, and at about -0.5 eV in the other cases. The strongest renormalization occurs in the sc(GW+G3W2) case (by more than a factor of two), which is approximately the same renormalization as in the DFT+DMFT case.\cite{prb_102_161118} The QSGW and scGW result in only slightly smaller renormalizations. The similarity with the DFT+DMFT result is interesting considering the number of differences between the methods. In the G0W0 calculations,\cite{prb_101_161102} the band narrowing along the $\Gamma$-Z path was also observed, but it was almost a factor of two smaller than in our calculations because of the lack of self consistency. If we look at the same band along the $\Gamma$-Z path, we can see the difference between the methods. The band is flat in the DFT, the DFT+DMFT, and the QSGW cases, but it has a slight but noticeable dispersion in the scGW and sc(GW+G3W2) cases. In the DFT+DMFT case it is flat because the DMFT self energy is independent of momentum and, correspondingly, the flatness of the DFT band remains. But the difference between the QSGW on one hand and the scGW/sc(GW+G3W2) on the other deserves attention. Taking into account the specifics of the methods, one can speculate that the dynamic effects (frequency dependence) in the self energy, which are included in the scGW and sc(GW+G3W2) but not in the QSGW, are the reason. The self energy is momentum dependent in all three methods, so a difference in its frequency dependence from one \textbf{k}-point to another can result in dispersion. In the DFT, this band crosses another band at the Z point. But there is no such crossing in any of the correlated methods, including the DFT+DMFT. Thus, this is another similarity of our results with the DFT+DMFT results.
There is an interesting difference with the DFT+DMFT in the size of the electron pocket near the $\Gamma$ point. In the DFT+DMFT,\cite{prb_102_161118} its size is slightly reduced as compared to the DFT case. In our calculations, all three correlated methods show an increase of the pocket. A slight increase of the electron pocket at $\Gamma$ was also reported in the G0W0 calculations.\cite{prb_101_161102} It would be interesting to compare this with experiment. However, as it represents one of the low-energy effects, the electron-phonon interaction has to be taken into account in the theoretical evaluations for a proper comparison.
In the CaCuO$_{2}$ case, the comparison is more complicated because of the entanglement of the bands (the O 2p orbitals contribute significantly). The band of interest (Cu 3d$_{x^{2}-y^{2}}$) is the fourth band down from the Fermi level and starts at the $\Gamma$ point at about -2.1 eV in the DFT case. In the correlated methods, however, this band is the first one down from the Fermi level and starts at about -1.5$\div$-1.8 eV (scGW and sc(GW+G3W2)) and at about -1.2 eV in the QSGW case. So, the renormalization is smaller (scGW and sc(GW+G3W2)) than in the DMFT case (Fig. 4 in Ref. \onlinecite{prx_10_021061}). It is interesting, however, that in this case the narrowing is strongest in the QSGW case, which is close enough to the DFT+DMFT result. Once again, we need experimental information in order to decide which method is the best. If the QSGW is more accurate, then we should conclude that long-range static correlation effects are more important for this material than dynamic effects. If, however, the scGW/sc(GW+G3W2) is more accurate, the conclusion would be the opposite.
\section*{Conclusions}
\label{concl}
In conclusion, we have applied three correlated methods (scGW, sc(GW+G3W2), and QSGW) to study the electronic structure of CaCuO$_{2}$ and LaNiO$_{2}$. In some aspects, our results are consistent with the previous DFT+DMFT studies: the band narrowing near the Fermi level, the orbital differentiation in the Ni(Cu) 3d shell, and the stronger correlation effects in LaNiO$_{2}$ as compared to CaCuO$_{2}$. There are also differences with the DFT+DMFT studies. One of them is the quite noticeable repositioning of the spectral features away from the Fermi level in our correlated calculations. In the DMFT case, the repositioning is small because only the Ni(Cu) 3d electrons are considered as correlated. Another notable difference is the much smaller Z factor obtained in the DFT+DMFT works for LaNiO$_{2}$. This could be a result of the insufficient number of diagrams in our calculations, or simply an artifact of the single-site approximation and/or a too large Hubbard U parameter in the DFT+DMFT studies. Also, the change in the size of the electron pocket near the $\Gamma$ point is different: a small decrease in the DMFT case and an increase in all our GW-based methods. In this respect, our GW-based approaches agree with the result found in Ref. [\onlinecite{prb_101_161102}] using the G0W0 approximation. Concerning the above-mentioned repositioning of the Ni 3d and O 2p levels below the Fermi level, we have found similarity with the G0W0 calculations\cite{prb_101_161102} in our QSGW studies, but not in our scGW or sc(GW+G3W2) studies. We have also found that our three correlated methods differ from each other more prominently in the case of LaNiO$_{2}$, which is consistent with the conclusion that this material is more correlated.
The principal results of this work show that the correlations in LaNiO$_{2}$, though stronger than in CaCuO$_{2}$, are still weak enough to allow the application of fully ab initio methods such as scGW or the more advanced sc(GW+G3W2) to both materials. Whether there is physics which cannot be captured by perturbative methods like scGW or sc(GW+G3W2), so that non-perturbative approaches like DFT+DMFT are needed, remains to be explored by future photoemission experiments.
\section*{Acknowledgments}
\label{acknow}
This work was supported by the U.S. Department of Energy, Office of Science, Basic
Energy Sciences as a part of the Computational Materials Science Program.
\section{Experimental Results}
A sample array design was sent out for manufacturing (Fig.~\ref{fig:array_sample_m}), with the following results:
(i) The simulated antenna and the fabricated one agree to within 1 dB in directivity, as measured in an anechoic chamber.
(ii) The measured antenna array efficiency is $\eta = 99.6\%$.
(iii) When connected to a Galaxy phone that exposes debug telemetry from its cellular modem for testing, the logs extracted from the phone show that, with the manufactured antenna array, the SNR of the received cell signals improves by 10 dB in comparison to the phone's original antenna.
\begin{figure}[h]
\includegraphics[width=0.98\linewidth]{csps_1.JPG}
\vspace{-.12in}
\caption{Manufactured antenna array. Front and back views.}
\vspace{-.12in}
\label{fig:array_sample_m}
\end{figure}
\section{Signal Model}
In order to select the number of antennas for the antenna-array use case, we analyze a specific yet common specification of the mobile channel and its physical layer.
For the received signal model, we assume a transmitted narrowband QPSK symbol under the Short Observation Interval Approximation (SOIA), propagating through a Rician channel \cite{roberts_two-state_1995}.
The choice of a QPSK signal is justified by the vast majority of protocols using it\footnote{Tektronix, Overview of the 802.11 Physical Layer, \url{https://download.tek.com/document/37W-29447-2_LR.pdf}}.
Let $s$ be a QPSK symbol, i.e.
\begin{equation}
s \in \{1+j,\,1-j,\,-1+j,\,-1-j\}/\sqrt{2}.
\end{equation}
Let the subscript $i$ denote the index of a receiving antenna in the array. The physical channel $h_i$ of the $i$-th antenna can be modeled as a complex phasor \cite{roberts_two-state_1995}:
\begin{equation}
h_i \sim \mathrm{Rice}(K,\Omega),
\end{equation}
where $K$ is the power ratio between the direct path and the reflections, and $\Omega$ is the total received power.
The noise is modeled as additive complex normal noise,
\begin{equation}
n_i \sim \mathcal{CN}(0,\sigma^2).
\end{equation}
The received signal at the $i$-th antenna then reads
\begin{equation}
s_i = h_i s + n_i.
\end{equation}
The MIMO scheme chosen for the evaluation task is Maximal Ratio Combining (MRC) \cite{poor_wireless_1998}.
We define the Signal-to-Noise Ratio (SNR) of the $i$-th branch as
\begin{equation}
\mathrm{SNR}_{i} = \frac{|h_{i}|^2}{\sigma^2}.
\end{equation}
Thus the post-processing signal $s_p$ reads
\begin{equation}
s_p = \frac{\sum_{i} \mathrm{SNR}_i\, s_i}{\sum_{i} \mathrm{SNR}_i}.
\end{equation}
\paragraph{Simulations}
Setting $\mathrm{SNR}_{i} \approx 4$ dB, $K = 2.8$, and a delay spread of 58 ns, we ran a Monte Carlo simulation of the Bit Error Rate (BER) as a function of the number of receiving antennas.
\begin{figure}[]
\centering
\includegraphics [scale=0.5]{ber_mrc.png}
\caption{BER performance of MRC combiner with different number of antennas. With $10^5$ Monte Carlo iterations.}
\label{fig:bervsnumberofantennas}
\end{figure}
From the simulation it is observable that, while increasing the number of antennas from 1 to 6 improves the Bit Error Rate (BER) by an order of magnitude, achieving another order-of-magnitude improvement would require roughly tripling the number of elements. This implies that for the given (fairly common) scenario, 6 elements are a reasonable upper bound. We therefore set $N_{array} = 6$.
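For reproducibility, a minimal NumPy sketch of such a Monte Carlo follows (our own reconstruction, not the paper's code: the delay spread does not enter this narrowband model, and MRC is implemented in its usual matched-filter form, which coincides with the SNR-weighted combiner above once each branch is co-phased by $\mathrm{conj}(h_i)$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def ber_mrc(n_ant, snr_db=4.0, K=2.8, n_iter=10**5):
    # Random Gray-mapped QPSK symbols with unit energy
    bits = rng.integers(0, 2, size=(n_iter, 2))
    s = ((2*bits[:, 0] - 1) + 1j*(2*bits[:, 1] - 1)) / np.sqrt(2)
    # Rician fading with K-factor K and unit mean power (Omega = 1)
    nu = np.sqrt(K / (K + 1))
    sg = np.sqrt(1.0 / (2*(K + 1)))
    h = nu + sg*(rng.standard_normal((n_iter, n_ant))
                 + 1j*rng.standard_normal((n_iter, n_ant)))
    # Complex normal noise; sigma^2 set by the per-branch average SNR
    s2 = 10**(-snr_db/10)
    n = np.sqrt(s2/2)*(rng.standard_normal((n_iter, n_ant))
                       + 1j*rng.standard_normal((n_iter, n_ant)))
    r = h*s[:, None] + n
    # MRC: co-phase each branch and weight by conj(h); the common
    # 1/sigma^2 factor cancels in the sign decision
    s_hat = np.sum(np.conj(h)*r, axis=1)
    errs = (np.sign(s_hat.real) != 2*bits[:, 0] - 1).astype(float) \
         + (np.sign(s_hat.imag) != 2*bits[:, 1] - 1)
    return errs.mean() / 2

for n_ant in [1, 2, 4, 6]:
    print(n_ant, ber_mrc(n_ant))
\end{verbatim}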
The array gain for the observation angles $\theta$, $\phi$ reads
\begin{equation}
AG(\theta,\phi) = \sum_{ant}G_{ant}(\theta,\phi)\, w_{ant}\, e^{-j\mathbf{k}\cdot\mathbf{r}_{ant}},
\label{eq:ag}
\end{equation}
\begin{equation}
\mathbf{k} = \frac{2\pi}{\lambda}\,[\sin\theta\cos\phi,\ \sin\theta\sin\phi,\ \cos\theta],
\end{equation}
where $G_{ant}(\theta,\phi) \in\mathbb{R}$ is the real-valued gain of an element in the array and $\mathbf{r}_{ant}\in\mathbb{R}^3$ is the position of this element relative to the zero-phase point of the array.
A natural choice for $w_{ant}$ is the following beamforming coefficient:
\begin{equation}
w_{ant} = \exp\Bigl(j\frac{2\pi}{\lambda}\sin\theta_d\,(\cos\phi_d\, r_x + \sin\phi_d\, r_y)\Bigr),
\end{equation}
where $(\theta_d,\phi_d)$ is the angular direction of the beamforming.
Choosing the center-steered array (zero direction), Eq.~\ref{eq:ag} reads
\begin{equation}
\label{eq:AG}
AG(\theta,\phi) = \sum_{ant}G_{ant}(\theta,\phi)\, e^{-j\mathbf{k}\cdot\mathbf{r}_{ant}}.
\end{equation}
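A minimal sketch of evaluating Eq.~\ref{eq:AG} follows (the half-wavelength linear layout in the example is illustrative, not the geometry used in the paper):
\begin{verbatim}
import numpy as np

def array_gain(theta, phi, r_ant, g_ant=None, lam=1.0):
    # Eq. (AG) for the zero-steered array; r_ant is (N, 3) in units of lam
    k = (2*np.pi/lam)*np.array([np.sin(theta)*np.cos(phi),
                                np.sin(theta)*np.sin(phi),
                                np.cos(theta)])
    if g_ant is None:
        g_ant = np.ones(len(r_ant))       # isotropic elements
    return np.sum(g_ant*np.exp(-1j*(r_ant @ k)))

# Example: 6 isotropic elements, lambda/2 apart along x, seen at broadside
r = np.stack([0.5*np.arange(6), np.zeros(6), np.zeros(6)], axis=1)
print(abs(array_gain(0.0, 0.0, r)))       # 6.0: coherent sum at zero steering
\end{verbatim}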
\section{Reproducibility}
\subsection{Dataset}
Our synthetic dataset includes 3,000 examples of simulated multi-layer printed circuit board (PCB) antennas. We use the \cite{openEMS} engine with its MATLAB API to generate all of these examples. In order to span a wide range of designs, we opt for random designs.
We draw the following random parameters from uniform distributions: (i) the number of polygons in each layer, (ii) the number of layers in the antenna, and (iii) the number of cavities in each polygon. The feeding point is fixed across all simulations, as is the dielectric constant. The dimensions of the antenna were also allowed to vary on the scale of $[\frac{\lambda}{10},\frac{\lambda}{4}]$.
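As an illustration, one draw of such a random design could look as follows (a hedged sketch: the upper bounds on the numbers of layers, polygons, and cavities are placeholders, since the text specifies only that the draws are uniform):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

def draw_design(lam, max_layers=4, max_polygons=8, max_cavities=3):
    # The three *_max bounds are illustrative placeholders.
    n_layers = int(rng.integers(1, max_layers + 1))
    return {
        "n_layers": n_layers,
        "polygons_per_layer": rng.integers(1, max_polygons + 1,
                                           size=n_layers).tolist(),
        "cavities_per_polygon": int(rng.integers(0, max_cavities + 1)),
        "size": float(rng.uniform(lam/10, lam/4)),  # dimension scale
    }
\end{verbatim}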
For the array antennas, a post-simulation process was performed as stated in the article, where for each array training example the following parameters are chosen:
(i) a random number of elements drawn uniformly from $U(1,6)$; (ii) a random position out of 6 predefined center-phase locations; (iii) the array gain, $AG$, calculated according to Eq.~\ref{eq:AG}.
\subsection{Code}
We use the PyTorch framework; all the models are given as separate modules: 'array\_network\_transformer','array\_network\_resnet',\\'array\_network\_ablation',\\'designer\_resnet','desinger\_transformer','simulator\_network'.
The training script 'array\_training.py' is also provided as an example of running the models.
In our code we make use of the following external repositories:
\begin{enumerate}
\item https://github.com/FrancescoSaverioZuppichini/ResNet,
\item https://github.com/VainF/pytorch-msssim (multi scale SSIM)
\item https://github.com/facebookresearch/detr (for the transformer's positional encoding)
\end{enumerate}
The code for the block selection method is 'block\_selector.py', a class that takes as input a PyTorch model together with a loss function and generators for the inputs and constraints, and outputs the entropy for each layer index.
\subsection{Training}
The training times of the different networks were: simulator network, 10 hours; single-antenna designer, 3-7 hours (depending on the variant); array designer, 2-10 hours (depending on the variant). All networks were trained with the Adam optimizer and a learning rate of $10^{-4}$, with a decay factor of $0.98$ every two epochs, as sketched below.
After training the simulator network $h$, another $10^4$ examples were generated within 5 hours. The dataset is then divided into train/test sets with a 90\%-10\% split.
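The stated schedule corresponds, for instance, to the following PyTorch sketch (the model is a placeholder for any of the networks):
\begin{verbatim}
import torch

model = torch.nn.Linear(16, 1)   # placeholder for any of the networks
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2, gamma=0.98)

for epoch in range(100):
    # ... one pass over the 90% train split, calling opt.step() ...
    sched.step()                 # lr *= 0.98 every two epochs
\end{verbatim}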
\subsection{Initialization details}
The initialization scheme we proposed assumes that the variance of the input constraints is known beforehand.
In order to estimate this variance, we calculate the mean value of the constraint variance over the training set.
This mean variance is then used as a constant by the initialization method when constructing the network class.
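A hedged sketch of this estimate ('train\_constraints' and the consuming constructor argument are hypothetical names):
\begin{verbatim}
import numpy as np

# Mean over the train set of each example's constraint variance;
# 'train_constraints' stands in for the real training data.
train_constraints = [np.random.rand(32) for _ in range(1000)]
init_variance = float(np.mean([np.var(c) for c in train_constraints]))
# net = DesignerNetwork(init_variance=init_variance)  # hypothetical ctor
\end{verbatim}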
\subsection{Computing infrastructure}
The generation of the 3,000 examples takes 2 weeks on a machine with an i7-9870H CPU, 40 GB RAM, and an Nvidia RTX graphics card. In addition, another machine with 16 GB RAM and an Nvidia P100 graphics card was used.
\section{Block Selection Experiments on Hypernetworks}
In order to further evaluate our heuristic method for selecting the important blocks, we evaluate it on one of the suggested networks of \cite{Chang2020Principled}. Specifically, we test it on the "All Conv" network, trained on the CIFAR-10 dataset.
Our block selection method first scores the layers with the entropy metric and then uses a knapsack solver to choose the most significant ones, as sketched below.
Fig.~\ref{fig:blockselectionres} shows (a) the entropy metric (normalized to its maximal value) for the "AllConv" network, and (b) the test accuracy of both networks. Using only 2.8\% of the weights (the first and last layers) as the output of the hypernetwork, the suggested network is able to achieve the same results over the test set. The total size of the network is reduced by a factor of 3.4.
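A hedged sketch of the selection step follows (the 0/1-knapsack formulation is ours; the released 'block\_selector.py' may differ in detail):
\begin{verbatim}
import numpy as np

def select_blocks(entropy, cost, budget):
    # 0/1 knapsack over layers: maximize total entropy subject to a
    # budget on the number of hypernetwork-emitted parameters.
    n = len(entropy)
    dp = np.zeros(budget + 1)            # best entropy at each total cost
    keep = [[] for _ in range(budget + 1)]
    for i in range(n):
        for w in range(budget, cost[i] - 1, -1):
            cand = dp[w - cost[i]] + entropy[i]
            if cand > dp[w]:
                dp[w] = cand
                keep[w] = keep[w - cost[i]] + [i]
    return keep[int(np.argmax(dp))]

# Example: pick layers under a budget of 10k parameters -> [0, 3]
print(select_blocks([0.9, 0.2, 0.1, 0.8], [4000, 6000, 5000, 5000], 10000))
\end{verbatim}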
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.85\linewidth]{principled_weight_entropy.png} \\(a)\\
\includegraphics[width=0.84\linewidth]{res_block_fig.png}\\
(b)\\% & (c)\\
\end{tabular}
\caption{(a) Entropy loss vs layer index for the "AllConv" network. (b) Test accuracy of our block selected method (blue) and original network (orange).}
\label{fig:blockselectionres}
\end{figure}
\section{Additional Figures}
Fig.~\ref{fig:array_sample} is an enlarged version of Fig.~5 from the main paper. This figure also includes the results of the ablation experiments (panels e,f), which were omitted from the main paper for brevity.
Fig.~\ref{fig:multiplefiga} shows the effect of the different initialization schemes for the Transformer architecture (same as the paper's Fig.~4(a), but for a Transformer instead of a ResNet).
Fig.~\ref{fig:multiplefigb} shows the computed entropy of the different layers un-normalized, whereas Fig.~4(b) of the main manuscript depicted the same score normalized by the maximum over the different layers.
\begin{figure*}
\begin{tabular}{@{}c@{~}c@{}}
\includegraphics[width=0.4830255\linewidth]{probability_of_design_certainty.png} &
\includegraphics[width=0.4830255\linewidth]{design with constraints.png} \\
(a) & (b)\\
\includegraphics[width=0.4830255\linewidth]{soltted_antenna_real.png} &
\includegraphics[width=0.4830255\linewidth]{soltted_antenna_design.png} \\
(c) & (d)\\
\includegraphics[width=0.30255\linewidth]{soltted_antenna_super_ablation.png}&
\includegraphics[width=0.30255\linewidth]{soltted_antenna_superduper_ablation.png}\\
(e)&(f)\\
\end{tabular}
\caption{(a) The probability of points belonging to a valid antenna in a synthetic test instance. The constraint plane is marked in black. (b) The same sample; the regions correctly classified as antenna are marked in brown, and the misclassified regions are marked in red. (c) The ground truth of a slotted antenna array. (d) Our network design. (e) The design of ablation (i), which does not use a hyperhypernetwork. (f) The design of ablation (ii), which does not use a hypernetwork.}
\label{fig:array_sample}
\end{figure*}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=0.98483\linewidth]{different_initializations_tr.png} \\(a)\\
\end{tabular}
\caption{Loss per epochs for the different initialization scheme of $q$ (Transformer $f$).}
\label{fig:multiplefiga}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[width=1.1\linewidth]{unnormalized_H.png}\\
\end{tabular}
\caption{Un-normalized entropy (i.e., not divided by the maximal value) of both the ResNet and Transformer networks.}
\label{fig:multiplefigb}
\end{figure}
your approach and results. Such artwork should be centered,
legible, and separated from the text. Lines should be dark and at
least 0.5~points thick for purposes of reproduction, and text should
not appear on a gray background.
Label all distinct components of each figure. If the figure takes the
form of a graph, then give a name for each axis and include a legend
that briefly describes each curve. Do not include a title inside the
figure; instead, the caption should serve this function.
Number figures sequentially, placing the figure number and caption
\emph{after} the graphics, with at least 0.1~inches of space before
the caption and 0.1~inches after it, as in
Figure~\ref{icml-historical}. The figure caption should be set in
9~point type and centered unless it runs two or more lines, in which
case it should be flush left. You may float figures to the top or
bottom of a column, and you may set wide figures across both columns
(use the environment \texttt{figure*} in \LaTeX). Always place
two-column figures at the top or bottom of the page.
\subsection{Algorithms}
If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic''
environments to format pseudocode. These require
the corresponding stylefiles, algorithm.sty and
algorithmic.sty, which are supplied with this package.
Algorithm~\ref{alg:example} shows an example.
\begin{algorithm}[tb]
\caption{Bubble Sort}
\label{alg:example}
\begin{algorithmic}
\STATE {\bfseries Input:} data $x_i$, size $m$
\REPEAT
\STATE Initialize $noChange = true$.
\FOR{$i=1$ {\bfseries to} $m-1$}
\IF{$x_i > x_{i+1}$}
\STATE Swap $x_i$ and $x_{i+1}$
\STATE $noChange = false$
\ENDIF
\ENDFOR
\UNTIL{$noChange$ is $true$}
\end{algorithmic}
\end{algorithm}
\subsection{Tables}
You may also want to include tables that summarize material. Like
figures, these should be centered, legible, and numbered consecutively.
However, place the title \emph{above} the table with at least
0.1~inches of space before the title and the same after it, as in
Table~\ref{sample-table}. The table title should be set in 9~point
type and centered unless it runs two or more lines, in which case it
should be flush left.
\begin{table}[t]
\caption{Classification accuracies for naive Bayes and flexible
Bayes on various data sets.}
\label{sample-table}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lccr}
\toprule
Data set & Naive & Flexible & Better? \\
\midrule
Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\
Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\
Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\
Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\
Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\
Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\
Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\
Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
Tables contain textual material, whereas figures contain graphical material.
Specify the contents of each row and column in the table's topmost
row. Again, you may float tables to a column's top or bottom, and set
wide tables across both columns. Place two-column tables at the
top or bottom of the page.
\subsection{Citations and References}
Please use APA reference format regardless of your formatter
or word processor. If you rely on the \LaTeX\/ bibliographic
facility, use \texttt{natbib.sty} and \texttt{icml2021.bst}
included in the style-file package to obtain this format.
Citations within the text should include the authors' last names and
year. If the authors' names are included in the sentence, place only
the year in parentheses, for example when referencing Arthur Samuel's
pioneering work \yrcite{Samuel59}. Otherwise place the entire
reference in parentheses with the authors and year separated by a
comma \cite{Samuel59}. List multiple references separated by
semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.'
construct only for citations with three or more authors or after
listing all authors to a publication in an earlier reference \cite{MachineLearningI}.
Authors should cite their own work in the third person
in the initial version of their paper submitted for blind review.
Please refer to Section~\ref{author info} for detailed instructions on how to
cite your own papers.
Use an unnumbered first-level section heading for the references, and use a
hanging indent style, with the first line of the reference flush against the
left margin and subsequent lines indented by 10 points. The references at the
end of this document give examples for journal articles \cite{Samuel59},
conference publications \cite{langley00}, book chapters \cite{Newell81}, books
\cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports
\cite{mitchell80}, and dissertations \cite{kearns89}.
Alphabetize references by the surnames of the first authors, with
single author entries preceding multiple author entries. Order
references for the same authors by year of publication, with the
earliest first. Make sure that each reference includes all relevant
information (e.g., page numbers).
Please put some effort into making references complete, presentable, and
consistent. If using bibtex, please protect capital letters of names and
abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz
in your .bib file.
\section*{Software and Data}
If a paper is accepted, we strongly encourage the publication of software and data with the
camera-ready version of the paper whenever appropriate. This can be
done by including a URL in the camera-ready copy. However, \textbf{do not}
include URLs that reveal your institution or identity in your
submission for review. Instead, provide an anonymous URL or upload
the material as ``Supplementary Material'' into the CMT reviewing
system. Note that reviewers are not required to look at this material
when writing their review.
\section*{Acknowledgements}
\textbf{Do not} include acknowledgements in the initial version of
the paper submitted for blind review.
If a paper is accepted, the final camera-ready version can (and
probably should) include acknowledgements. In this case, please
place such acknowledgements in an unnumbered section at the
end of the paper. Typically, this will include thanks to reviewers
who gave useful comments, to colleagues who contributed to the ideas,
and to funding agencies and corporate sponsors that provided financial
support.
\nocite{langley00}
\section{Introduction}
\label{sec:intro}
Since electronic devices are getting smaller, the task of designing suitable antennas is becoming increasingly important \cite{anguera_advances_2013}.
However, the design of small antennas, given a set of structural constraints and the desired radiation pattern, is still an iterative and tedious task \cite{miron_small_2014}. Moreover, to cope
with an increasing demand for higher data rates in dynamic communication channels, almost all of the current consumer devices include antenna arrays, which adds a dimension of complexity to the design problem \cite{Bogale_massive_mimo}.
Designing antennas is a challenging inverse problem: while mapping the structure of the antenna to its radiation properties is possible (but inefficient) by numerically solving Maxwell's equations, the problem of obtaining a structure that produces the desired radiation pattern, subject to structural constraints, can only be defined as an optimization problem, with a large search space and various trade-offs~\cite{H_small_antenna}. We present a novel approach for designing a Printed Circuit Board (PCB) antenna that produces the desired radiation pattern, resides in a 3D bounding box, and includes a predefined set of metallic locations. We then present a method for the design of an antenna array that combines several such antennas.
The single antenna method first trains a simulation network $h$ that replaces the numerical solver, based on an initial training set obtained using the solver.
This network is used to rapidly create a larger training set for solving the inverse problem, and, more importantly, to define a loss term that measures the fitting of the obtained radiation pattern to the desired one. The design networks that solve the inverse problem include a hypernetwork~\cite{ha2016hypernetworks} $f$ that is trained to obtain an initial structure, which is defined by the functional $g$. This structure is then refined by a network $t$ that incorporates the metallic locations and obtains the final design.
For the design of an antenna array, on top of the parameters of each antenna, it is also necessary to determine the number of antennas and their positions. For this task, we introduce the hyperhypernetwork framework, in which an outer hypernetwork $q$ determines the weights of an inner hypernetwork $f$, which determines the weights of the primary network $g$.
Our experiments demonstrate the success of the trained models in producing solutions that comply with the geometric constraints and achieve the desired radiation pattern. We demonstrate that both the hypernetwork $f$ and the refinement network $t$ are required for the design of a single antenna and that the method outperforms the baseline methods. In the case of multiple antennas, the hyperhypernetwork, which consists of networks $q$, $f$, and $g$, outperforms the baseline methods on a realistic synthetic dataset. Furthermore, it is able to predict the structure of real-world antenna designs \cite{chen_design_2018,singh_design_2016} and to suggest an alternative design that has improved array directivity for the iPhone 11 Pro Max.
\section{Related Work}
\label{sec:prev}
\citet{misilmani_machine_2019} survey design methods for large antennas, i.e., antennas the size of $\lambda/2 - \lambda/4$, where $\lambda$ is the corresponding wavelength of their center frequency. Most of the works surveyed are either genetic algorithms \cite{geneticalgo} or SVM based classifiers \cite{svm_antenna}. None of the surveyed methods incorporates geometrical constraints, which are crucial for the design of the small antennas we study, due to physical constraints.
A limited number of attempts have been made at the automatic design of small antennas, usually defined by a scale that is smaller than $\lambda/10$ \cite{bulus2014center}.
\citet{Hornby2006NASA,geneticalgo} employ genetic algorithms to obtain the target gain. \citet{Military} employ hierarchical Bayesian optimization with genetic algorithms to design an electrically small antenna and show that the design obtained outperforms classical man-made antennas.
None of these methods employ geometric constraints, making them unsuitable for real-world applications. They also require running the antenna simulation over and over again during the optimization process. Our method requires a one-time investment in creating a training dataset, after which the design process itself is very efficient.
A hypernetwork scheme~\cite{ha2016hypernetworks} is often used to learn dynamic networks that can adjust to the input~\cite{bertinetto2016learning,Oswald2020Continual} through multiplicative interactions~\cite{jayakumar2020multiplicative}. It contains two networks, the hypernetwork $f$, and the primary network $g$. The weights $\theta_g$ of $g$ are generated by $f$ based on $f$'s input.
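As a minimal sketch of this scheme (the dimensions, the linear form of $f$, and the two-layer $g$ below are illustrative assumptions rather than the architectures used in this work), $f$ emits a flat parameter vector that is reshaped into the weights of $g$:
\begin{verbatim}
import numpy as np

def f(x, theta_f):
    # Hypernetwork: maps the conditioning input x to the flat
    # parameter vector of the primary network g.
    return theta_f @ x

def g(p, theta_g, d_in=3, d_h=8):
    # Primary network: its weights are produced by f, not trained.
    W1 = theta_g[:d_in * d_h].reshape(d_h, d_in)
    w2 = theta_g[d_in * d_h:d_in * d_h + d_h]
    h = np.maximum(W1 @ p, 0.0)            # hidden layer, ReLU
    return 1.0 / (1.0 + np.exp(-w2 @ h))   # scalar output in (0, 1)

x = np.random.randn(5)                         # e.g. an encoding of the input
theta_f = 0.1 * np.random.randn(3 * 8 + 8, 5)  # learned parameters of f
o = g(np.array([0.2, 0.5, 0.9]), f(x, theta_f))
\end{verbatim}
Only $\theta_f$ is updated by gradient descent; the weights of $g$ change with every new input to $f$.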
We use a hypernetwork to recover the structure of the antenna in 3D. Hypernetworks were recently used to obtain state of the art results in 3D reconstruction from a single image~\cite{Littwin_2019_ICCV}.
We present multiple innovations in the application of hypernetworks. First, as far as we can ascertain, we are the first to apply hypernetworks to complex manufacturing design problems.
Second, we present the concept of a hyperhypernetwork, in which a hypernetwork provides the weights of another hypernetwork. Third, we present a proper way to initialize hyperhypernetworks, as well as a heuristic for selecting which network weights should be learned as conventional parameters and which as part of the dynamic scheme offered by hypernetworks.
\section{Single Antenna Design}
\label{single_antenna}
Given the geometry of an antenna, i.e., the metal structure, one can use a Finite Difference Time Domain (FDTD) software package, such as the OpenEMS FDTD engine~\cite{openEMS}, to obtain the antenna's radiation pattern in spherical coordinates $(\theta,\phi)$. Applying such software to this problem, under the setting we study, has a runtime of 30 minutes per sample, making it too slow to support an efficient search for a geometry given the desired radiation pattern, i.e., to solve the inverse problem. Additionally, since it is non-differentiable, its usage for optimizing the geometry is limited.
Therefore, although our goal is to solve the inverse problem, we first build a simulation network $h$. This network is used to support a loss term that validates the obtained geometry, and to propagate gradients through this loss. The simulation network $h$ is given two inputs: (i) the scale in terms of wavelength $S$, and (ii) a 3D voxel-based description of the spatial structure of the metals $V$. $h$ returns a 2D map $U$ describing the far-field radiation pattern, i.e., $U = h(S,V)$.
Specifically, $S\in \mathbb{R}^3$ specifies the arena limits. This is given as the size of the bounding box of the metal structure, in units of the wavelength $\lambda$ corresponding to the center frequency. $V$ is a voxel grid of size $64\times 64 \times 16$, which is sampled within the 3D bounding box dimensions provided by $S$. In other words, it represents a uniform sampling on a grid with cells of size $[S_1/64,S_2/64,S_3/16]$. The lower resolution along the $z$ axis stems from the reduced depth of many mobile devices. Each voxel contains a binary value: 0 for nonmetallic materials, 1 for conducting metal. The output tensor is a 2D ``image'' ${U(\theta,\phi)}$, sampled on a grid of size $64\times 64$, each covering a region of $\pi/64\times 2\pi/64$ squared arc lengths. The value in each grid point denotes the radiation power in this direction.
\noindent The directivity gain $D={\tens{N}}(U)$ is a normalized version of $U$:
\begin{equation}
D(\theta,\phi) = \frac{U(\theta,\phi)}{ \int_{\phi = 0}^{2\pi} \int_{\theta = 0}^{\pi} U(\theta,\phi)\sin(\theta) \,d\theta \,d\phi }\label{eq:D}
\end{equation}
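A sketch of the normalization operator ${\tens{N}}$ of Eq.~(\ref{eq:D}) on the $64\times64$ $(\theta,\phi)$ grid follows; the midpoint Riemann sum is our assumption, since the exact quadrature is not specified:
\begin{verbatim}
import numpy as np

def directivity(U):
    # N(U): normalize the radiation pattern U by its integral over
    # the sphere. Rows index theta in [0, pi), columns phi in [0, 2pi).
    n_t, n_p = U.shape
    theta = (np.arange(n_t) + 0.5) * np.pi / n_t   # cell centers
    d_theta, d_phi = np.pi / n_t, 2.0 * np.pi / n_p
    total = (U * np.sin(theta)[:, None]).sum() * d_theta * d_phi
    return U / total
\end{verbatim}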
The design network solves the inverse problem, i.e., map from the required antenna's directivity gain $D$ to a representation of the 3D volume $V$. We employ a hypernetwork scheme, in which an hypernetwork $f$ receives the design parameters $D$ and $S$ and returns the weights of an occupancy network $g$. $g:[0,1]^3\rightarrow [0,1]$ is a multi-layered perceptron (MLP) that maps a point $p$ in 3D, given in a coordinate system in which each dimension of the bounding box $S$ is between zero and one, into the probability $o$ of a metallic material at point $p$.
\begin{equation}
\theta_g = f(D,S)\,,\quad o = g(p;\theta_g)
\end{equation}
The weights of the hypernetwork $f$ are learned, while the weights of the primary network $g$ are obtained as the output of $f$. Therefore $g$, which encodes a 3D shape, is dynamic and changes based on the input to $f$.
To obtain an initial antenna design in voxel space, we sample the structure defined by the functional $g$ along a grid of size $64\times 64\times 16$ and obtain a 3D tensor $O$. However, this output was obtained without considering an additional design constraint that specifies unmovable metal regions.
To address this constraint, we introduce network $t$. We denote the fixed metallic regions by the tensor $M\in \mathbb{R}^{64\times 64\times 16}$, which resides in the same voxel space as $V$.
$t$ acts on $M$ and $O$ and returns the desired metallic structure $\bar V$, i.e., $\bar V = t(M,O)$.
\paragraph{Learning Objectives}
For each network, a different loss function is derived according to its nature. Since the directivity gain map is smooth with regions of peaks and nulls, the multiscale SSIM~\cite{ZWang} (with a window size of three) is used to define the loss of $h$. Let $U^*$ be the ground truth radiation pattern, which is a 2D image, with one value for every angle $\theta,\phi$. The loss of the simulation network is given by
\begin{equation}
L_h = -\text{msSSIM}(U^*,h(S,V))
\end{equation}
The simulation network $h$ is trained first before the other networks are trained.
The loss of the hypernetwork $f$ is defined only through the output of network $g$ and it backpropagates to $f$.
\begin{equation}
\label{eq:LG}
L_g = \text{CrossEntropy}(g(p,f(D,S)),y_p)
\end{equation}
where $y_p$ is the target metal structure at point $p$. This loss is accumulated for all samples on a dense grid of points $p$.
For $t$, the multitask paradigm~\cite{kendall2018multitask} is used, in which the balancing weights $\alpha_i$ are optimized as part of the learning process.
\begin{equation}
\label{eq:multiloss}
\text{multiloss}([l_1, \dots, l_n]^{T}) = \sum_{i\in [1,n]} \exp(-\alpha_i)\cdot l_i + \alpha_i
\end{equation}
\noindent where $l_i$ are individual loss terms and $\alpha\in\mathbb{R}^n$ is a vector of learned parameters. Specifically, $t$ is trained with the loss $L_t = \text{multiloss}(L_{OBCE},L_{msSSIM})$ for
\begin{align}
L_{OBCE} &= -\frac{1}{|M_p|}\sum_{p\in \{M_p\}} M_p\cdot \log(\bar V_p) \label{eq:LBCE}\\
L_{msSSIM} &= -\text{msSSIM}({\tens{N}}(h(S,\bar V)),D)\,.
\end{align}
The first loss $L_{OBCE}$ is the binary cross entropy loss that considers only the regions that are marked by the metallic constraint mask $M$ as regions that must contain metal. The second loss $L_{msSSIM}$ is the SSIM of the radiation patterns (${\tens{N}}$ is the normalization operator of Eq.~\ref{eq:D}).
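For concreteness, a sketch of the multitask weighting of Eq.~(\ref{eq:multiloss}) follows; the $\alpha_i$ appear here as a plain array, whereas in training they are registered as learned parameters alongside the network weights:
\begin{verbatim}
import numpy as np

def multiloss(losses, alpha):
    # Each term l_i is weighted by exp(-alpha_i); the additive
    # alpha_i term prevents alpha from growing without bound.
    losses = np.asarray(losses)
    return np.sum(np.exp(-alpha) * losses + alpha)

alpha = np.zeros(2)                   # learned balancing parameters
L_t = multiloss([0.7, 0.05], alpha)   # e.g. [L_OBCE, L_msSSIM]
\end{verbatim}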
\begin{figure}
\begin{tabular}{@{}c@{}}
\includegraphics[width=1\linewidth]{schematic.png} \\
\end{tabular}
\caption{The architecture of the hyperhypernetwork $q$, the hypernetwork $f$, and the primary network $g$ used in the antenna array design. In the case of the single antenna design, only $f,q$ are used. The hyperhypernetwork $q$ is given the constraint plane $C$ and outputs the parameters $\theta_f$ of network $f$. The hypernetwork $f$, given the bounding box $S$ and target directivity $D$ or array gain $AG$, produces the weights $\theta_g$ of network $g$. The primary network $g$ maps a point $p$ in 3D into the probability of metal occupancy at that point. The simulation network $h$ used to compute the loss and the refinement network $t$ are not shown in the diagram.}
\label{fig:farch}
\end{figure}
\begin{figure}[t]
\centering
\begin{tabular}{c}
\includegraphics[trim={0 0 0 0},width=0.45\textwidth]{f_resnett.png}\\
(a)\\
\includegraphics[trim={0 0 0 0},width=0.45\textwidth]{f_transformer.png}\\
(b)\\
\end{tabular}
\caption{The two variants of network f. (a) The ResNet variant. (b) The Transformer variant.}
\label{fig:farchbothoptions}
\end{figure}
\section{Antenna Array}
Antenna arrays are used in the current design of consumer communication devices (and elsewhere) to enable complex radiation patterns that are challenging to achieve with a single antenna. For example, a single cellular device needs to transmit and receive radio frequencies with multiple WiFi and cellular antennas~\cite{Bogale_massive_mimo}.
We present a method for designing antenna arrays that is based on a new type of hypernetwork. For practical reasons, we focus on up to $N_a=6$ antennas. See appendix for the reasoning behind this maximal number.
While a single antenna is designed based on a target directivity $D$, an array is defined based on a target array gain $AG$. Assuming, without loss of generality, that we consider a beamforming direction of zero, $AG$ is defined, for the observation angles $(\theta,\phi)$, as
\begin{equation}
AG(\theta,\phi) = \sum_{ant}U_{ant}(\theta,\phi)\exp(-jkr_{ant})
\label{eq:AG}
\end{equation}
where $U_{ant}(\theta,\phi) \in\mathbb{R}$ is the real-valued radiation pattern of a single element in the array, $k$ is the wave vector, defined as
\begin{equation}
k = (2\pi/\lambda) \times[\sin(\theta)\cos(\phi),\sin(\theta)\sin(\phi),\cos(\theta)]
\end{equation}
and $r_{ant} = [r_{x},r_{y},r_{z}] \in \mathbb{R}^3$ is the element center phase position in 3D coordinates.
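For concreteness, a sketch of Eq.~(\ref{eq:AG}) for a single observation angle is given below (the function and argument names are ours):
\begin{verbatim}
import numpy as np

def array_gain(U_ants, r_ants, lam, theta, phi):
    # AG(theta, phi) = sum_ant U_ant * exp(-1j * k . r_ant)
    k = (2.0 * np.pi / lam) * np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta)])
    return sum(U * np.exp(-1j * (k @ np.asarray(r)))
               for U, r in zip(U_ants, r_ants))
\end{verbatim}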
In addition to the array gain pattern $AG$ and the global 3D bounding box $S$, antenna arrays adhere to physical design constraints. For example, in mobile devices there are multiple areas where antenna elements cannot be fitted, due to electromagnetic interference, product design, and regulations~\cite{fcc_sar}.
Unlike the single antenna, which is embedded in a 3D electrical board captured by the constraint tensor $M$, multiple antennas are mostly placed outside such boards. We, therefore, assume that the constraints are independent of the $z$ axis and formulate them as a binary matrix $C$.
\begin{equation}
C(x,y)=\left\{
\begin{array}{@{}ll@{}}
1, & \text{if}\ (x,y)\not\in AP \\
0, & \text{otherwise}
\end{array}\right.
\label{eq:constraint}
\end{equation}
where $AP$ is the subset of positions in the XY plane in which one is allowed to position metal objects.
{\bf Hyperhypernetworks\quad} We introduce a hyperhypernetwork scheme, which, given high-level constraints on the design output, determines the parameters of an inner hypernetwork that, in turn, outputs those of a primary network.
Let $q$ be the hyperhypernetwork, and let $i_s$ be the indices of the subset of the parameters $\theta_f$ of network $f$. $q$ returns this subset of weights $\theta^{i_s}_{f}$, based on the constraint matrix $C$:
\begin{equation}
\theta^{i_s}_{f} = q(C;\theta_{q})
\end{equation}
where $\theta_{q}$ are learned parameters. Below, ${i_c}$ indexes parameters of $f$ and is complementary to $i_s$. The associated weights $\theta^{i_c}_{f}$ are learned conventionally and are independent of $C$.
Network $f$ gets as input $AG$ and $S$ (defined in Eq.~\ref{eq:AG} and Sec.~\ref{single_antenna}, respectively). The former is given in the form of a 2D tensor sampled on a grid of size $64\times64$, each cell covering a region of $\pi/64\times2\pi/64$ squared arc lengths, and the latter is in $\mathbb{R}^3$. The output of $f$ is the set of weights $\theta_g$ of network $g$:
\begin{equation}
\theta_{g} = f(AG,S;\theta^{i_s}_{f},\theta^{i_c}_{f})
\end{equation}
The output of the primary network $g$ given a 3D coordinate vector $p \in \mathbb{R}^3$ is a tuple $O = g(p;\theta_{g})$,
where $O=[Pr(p\in Metal),Pr(p\in Valid Antenna)]$ is the concatenation of two probabilities: the probability that the 3D point $p$ is classified as a metal voxel rather than a dielectric voxel, and the probability that this point belongs to a valid antenna structure.
The high-level architecture, including the networks $q,f,g$ is depicted in Fig.~\ref{fig:farch} and the specific architectures are given in Sec.~\ref{sec:arch}.
Unlike the single antenna case, where the metallic constraints are tighter, for the antenna array we do not employ a refinement network $t$. The training loss is integrated along points $p$ in 3D and is a multiloss (Eq.~\ref{eq:multiloss}) of a structural and a constraint loss, similar to the single-antenna case. The structural loss is given by $L_{s} = \sum_p \text{CrossEntropy}(O_1[p], y_{p})$, where $y_p$ is the target metal structure at point $p$ and $O_k$ is the $k$th element of $O$. The constraint loss is $L_{C} = -\frac{1}{|\{C_{p}\}|}\sum_{p\in \{C_p\}} C_p\cdot \log(1-O_{2{[p]}})$, where $(p_x,p_y)$ are the X and Y coordinates of the input point $p$.
{\bf Initialization\quad} Hypernetworks
present challenges with regard to the number of learned parameters and are also challenging to initialize~\cite{Littwin_2019_ICCV,Chang2020Principled}. Below, we (i) generalize the initialization scheme of \cite{Chang2020Principled} to the hyperhypernetwork, and (ii) propose a way to select which subset of parameters $i_{s}$ would be determined by the network $q$, and which would be learned conventionally as fixed parameters that do not depend on the input.
Define the hypernetwork $f=f_{n}\circ \dots \circ f_2 \circ f_1$ as a composition of $n$ layers. Similarly, we define the hyperhypernetwork $q=q_{n}\circ \dots \circ q_2 \circ q_1$ as a composition of $n$ layers, and define $w^{(c)}_{j} = q_{j}\circ \dots \circ q_{1}(c)$, where $c$ is the hyperhypernetwork input.
We assume that $i_s$, the set of parameters that are determined by $q$, contains complete layers, which we denote as $j_s \subset [n]$.
The computation of $f_j$ on the embedding $e$ is
\begin{equation}
f_j(e)=\left\{
\begin{array}{@{}ll@{}}
f_j(e), & \text{if}\ j\not\in j_s \\
(q_{n}^{j}\times w^{(c)}_{n-1}) e, & j\in j_s
\end{array}\right.
\end{equation}
where $\times$ denotes tensor multiplication along the relevant dimension and $q_{n}^{j}$ is the portion of the last layer of the hyperhypernetwork that corresponds to the $j$-th layer of $f$.
For $f_j$ where $j\not\in j_s$ we use the results of \cite{Chang2020Principled} for initialization.
We initialize $q_{n-1},\dots,q_1$ using the Xavier fan-in assumption, obtaining $Var(w^{(c)}_{j}) = Var(c)$.
The variance of the output of the primary network, denoted $y$, given primary network input $x$, hypernetwork input $e$, and hyperhypernetwork input $c$ is
\begin{equation}
\label{eq:varexp}
\begin{split}
Var(y) &= \sum_j\sum_k Var(f_{n}[j,k])Var(f_k(e))Var(x_j) \\ &=\sum_j(\sum_{k\not\in j_s} Var(f_{n}[j,k])Var(f_k(e))Var(x_j)\\
&+\sum_{k\in j_s}\sum_m (Var(q_{n}[k,m])Var(w^{(c)}_{j}[m])Var(e)\\
&Var(f_{n}[j,k])Var(x_j)))
\end{split}
\end{equation}
where we use brackets to index matrix or vector elements. We propose to initialize elements $k$ of the last layer of $q$ as
\begin{equation}
Var(q_{n}[k]) = ({d_m Var(c)})^{-1}
\end{equation}
where $d_m$ is the fan-in of the last hyperhypernetwork layer $q_{n}$. This way we obtain the desired
\begin{multline}
\label{eq:finalinitialzation}
Var(y) = Var(x_j)\frac{d_k-Q}{d_k} + \sum_{j,k\in j_s,m} \frac{1}{d_k d_m d_j} Var(x_j)\\ = Var(x_j)\frac{d_k-Q}{d_k} +\frac{Q}{d_k}Var(x_j) = Var(x_j)
\end{multline}
where $Q = |i_s|$ is the number of parameters that vary dynamically as the output of the hyperhypernetwork.
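A sketch of the resulting initialization follows (the Gaussian draws and the function names are our assumptions):
\begin{verbatim}
import numpy as np

def init_q_last_layer(n_out, d_m, var_c):
    # Last layer of q: Var(q_n[k]) = 1 / (d_m * Var(c)), so that the
    # variance of the primary-network output matches its input.
    return np.random.randn(n_out, d_m) / np.sqrt(d_m * var_c)

def init_xavier_fan_in(n_out, fan_in):
    # Layers q_1, ..., q_{n-1}: Xavier fan-in, Var(w) = 1 / fan_in,
    # which gives Var(w_j^{(c)}) = Var(c).
    return np.random.randn(n_out, fan_in) / np.sqrt(fan_in)
\end{verbatim}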
{\bf Block selection\quad} Since the size of network $q$ scales with the number $Q$ of parameters in $f$ that it determines, in most experiments we limit $Q<10{,}000$. This way, we maintain our batch size despite memory limitations.
The set of parameters $i_s$ is selected heuristically as detailed below.
Let $n$ be the number of layers in network $f$. We arrange the parameter vector $\theta_f$ by layer, where $\theta_f^j$ denotes the weights of a single layer $\theta_f = [\theta_f^0,...,\theta_f^{n-1}]$
and $|\theta_f^j|$ is the number of parameters in layer $j$ of $f$. The layers are ordered by the relative contribution of each parameter, which is estimated heuristically per-layer as a score $H_j$.
$H_j$ is computed based on the distribution of losses on the training set that is obtained when fixing all layers, except for layer $j$, to random initialization values, and re-sampling the random weights of layer $j$ multiple times. This process is repeated $10,000$ times and the obtained loss values are aggregated into $1,000$ equally spaced bins. The entropy of the resulting $1,000$ values of this histogram is taken as the value $H_j$. Since random weights are used, this process is efficient, despite the high number of repetitions.
The method selects, using the Knapsack algorithm \cite{dantzig_1955}, a subset of the layers with the highest sum of $H_j$ values such that the total number of parameters (the sum of $|\theta_f^j|$ over $j_s$) is less than the total quota of parameters $Q$.
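A sketch of the selection procedure is given below; the greedy value-per-parameter rule stands in for the exact Knapsack algorithm, and the re-drawing of layer $j$'s weights is assumed to happen inside the supplied loss callback:
\begin{verbatim}
import numpy as np

def layer_score(resample_loss, n_trials=10000, n_bins=1000):
    # H_j: entropy of the histogram of training losses obtained by
    # repeatedly re-sampling the weights of layer j while all other
    # layers stay at their random initialization.
    losses = np.array([resample_loss() for _ in range(n_trials)])
    hist, _ = np.histogram(losses, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

def select_layers(H, sizes, Q):
    # Maximize the sum of H_j subject to sum |theta_f^j| <= Q.
    order = np.argsort(-np.asarray(H, float) / np.asarray(sizes, float))
    chosen, used = [], 0
    for j in order:
        if used + sizes[j] <= Q:
            chosen.append(int(j))
            used += sizes[j]
    return sorted(chosen)
\end{verbatim}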
\section{Architecture}
\label{sec:arch}
The network $h$, which is used to generate additional training data and to backpropagate the loss, consists of a CNN ($3\times3$ kernels) applied to $V$, followed by three ResNet Blocks, a concatenation of $S$, and a fully connected layer. ELU activations~\cite{clevert2015fast} are used.
The primary network
$g$ is a four-layer MLP, each layer with 64 hidden neurons and ELU activations, except for the last activation, which is a sigmoid (to produce a value between zero and one). The MLP parametrization, similarly to \cite{Littwin_2019_ICCV}, is given by separating the weights and the scales, where each layer $j$ with input $x$ performs $(x^\top\theta_w^j)\cdot \theta^j_s + \theta^j_b$,
where $\theta^j_w\in \mathbb{R}^{d_1\times d_2},\theta^j_s\in \mathbb{R}^{d_2}$,and $\theta^j_b\in\mathbb{R}^{d_2}$ are the weight-, scale- and bias-parameters of each layer, respectively, and $\cdot$ is the element-wise multiplication. The dimensions are $d_1=3$ for the first layer, $d_1=64$ for the rest, and $d_2=64$ for all layers, except the last one, where it is one.
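A sketch of a forward pass through $g$ under this parametrization follows (the overflow guard in the ELU is our addition):
\begin{verbatim}
import numpy as np

def elu(z):
    # min() guard avoids overflow in exp() for large positive z.
    return np.where(z > 0.0, z, np.exp(np.minimum(z, 0.0)) - 1.0)

def g_forward(p, layers):
    # layers: list of (theta_w, theta_s, theta_b) produced by the
    # hypernetwork; each layer computes (x^T theta_w)*theta_s + theta_b.
    h = p
    for i, (tw, ts, tb) in enumerate(layers):
        h = (h @ tw) * ts + tb
        h = elu(h) if i < len(layers) - 1 else 1.0 / (1.0 + np.exp(-h))
    return h
\end{verbatim}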
For the hypernetwork $f$, we experiment with two designs, as shown in Fig.~\ref{fig:farchbothoptions}. Architecture (a) is based on a ResNet and architecture (b) has a Transformer encoder \cite{vaswani2017attention} at its core. In design (a), $f$ has four ResNet blocks and two fully connected (FC) layers. $D$ propagates through the ResNet blocks and is flattened into a vector. $S$ is concatenated to this vector, and the result propagates through the two FC layers. The weights of the last Linear unit in $f$ are initialized according to \cite{Chang2020Principled}, to mitigate vanishing gradients.
Design (b) of the hypernetwork $f$ consists of three parts: (1) a CNN layer that is applied to $AG$, with a kernel size of $3\times3$; (2) a Transformer encoder that is applied to the vector of activations that the CNN layer outputs, consisting of four layers, each containing (i) multiheaded self-attention and (ii) a fully connected layer, where the self-attention heads are supplemented with fixed sine positional encoding \cite{parmar_image_2018}; and (3) two fully connected (FC) layers, with an ELU activation in-between. The bounding box $S$ is concatenated to the embedding provided by the Transformer encoder, before it is passed to the FC layers.
The hyperhypernetwork $q$ is a CNN consisting of four layers, with ELU activations and a fully connected layer on top. The input to $q$ is the constraint image $C$, of size $192\times128$, divided into a grid of $2\times3$ cells ($64\times 64$ regions), denoting the possible positions of the individual antennas. The output of $q$ is the selected subset of weights $\theta_f^{i_s}$ of $f$.
The network $t$, which is used to address the metallic constraints in the single antenna case, consists of three ResNets $m_1,m_2,m_3$ and a vector $w\in\mathbb{R}^2$ of learned parameters. It mixes, using the weights $w$, the initial label $O$ and the one obtained from the sub-networks: $T_1 = m_1(M), T_2 = m_2(O), T_3 = m_3([T_1,T_2]), \bar V = w_1 O + w_2 T_3$,
where $T_1,T_2,T_3\in \mathbb{R}^{64 \times 64 \times 16}$, and $[T_1,T_2]$ stands for the concatenation of two tensors, along the third dimension. Both $m_1,m_2$ are ResNets with three blocks of 16 channels. $m_3$ is a single block ResNet with 32 channels.
\begin{table*}[t]
\caption{The performance obtained by our method, as well as the baseline methods and ablation variants. See text for details.}
\label{tab:results}
\centering
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{Radiation pattern} & \multicolumn{2}{c}{3D shape}\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
Method & MS-SSIM $\uparrow$ & SNR[dB] $\uparrow$ & IOU $\uparrow$ & $M$-Recall $\uparrow$\\
\midrule
(i) Baseline nearest neighbor &0.88 $\pm$ 0.06& 32.00 $\pm$ 0.40 &0.80 $\pm$ 0.11 & 0.05 $\pm$ 0.03 \\
(ii) Baseline nearest neighbor under metallic constraints &0.89 $\pm$ 0.09& 32.30 $\pm$ 0.30 &0.79 $\pm$ 0.13 & 0.89 $\pm$ 0.07 \\
(ours ResNet variant) $\bar V=t(M,O)$& {\bf 0.96} $\pm$ 0.03 & 36.60 $\pm$ 0.45 & 0.86 $\pm$ 0.09& {\bf 0.96} $\pm$ 0.01\\
(ours Transformer variant) $\bar V=t(M,O)$& {\bf 0.96} $\pm$ 0.04 & {\bf 36.62} $\pm$ 0.52 & {\bf 0.88} $\pm$ 0.12& 0.95 $\pm$ 0.02\\
\midrule
(iii.a) No refinement ResNet variant $V=g(p,\theta_{D,S})$ & 0.91 $\pm$ 0.05 & 32.80 $\pm$ 0.41 & 0.84 $\pm$ 0.08& 0.06 $\pm$ 0.03 \\
(iii.b) No refinement Transformer variant $V=g(p,\theta_{D,S})$ & 0.93 $\pm$ 0.02 & 34.80 $\pm$ 0.60 & 0.86 $\pm$ 0.11& 0.04 $\pm$ 0.02 \\
(iv.a) No hypernetwork ResNet variant (training $t$ with $f'$) &0.78 $\pm$ 0.12& 22.90 $\pm$ 1.60 &0.81 $\pm$ 0.11 & 0.90 $\pm$ 0.05 \\
(iv.b) No hypernetwork Transformer variant (training $t$ with $f'$) &0.75 $\pm$ 0.17& 21.30 $\pm$ 2.20 &0.79 $\pm$ 0.11 & 0.90 $\pm$ 0.05 \\
(v.a) ResNet variant,$t$ is trained using a structure loss &0.92 $\pm$ 0.07& 33.00 $\pm$ 0.55 &0.84 $\pm$ 0.09 & 0.91 $\pm$ 0.06 \\
(v.b) Transformer variant, $t$ is trained using a structure loss &{0.94} $\pm$ 0.04& {33.70} $\pm$ 0.55 &{0.87 }$\pm$ 0.04 & {\bf0.96} $\pm$ 0.04\\
\bottomrule
\end{tabular}
\end{table*}
\begin{figure}[t]
\begin{tabular}{@{}c@{~}c@{}}
\includegraphics[width=0.483\linewidth]{origin_geometry_with_constraint.png} &
\includegraphics[width=0.483\linewidth]{designer_without_t_geometry.png} \\
(a)&(b)\\
\includegraphics[width=0.483\linewidth]{fine_tuning_t_geometry.png} &
\includegraphics[width=0.483\linewidth]{ground_trute_a.png}\\
(c)&(d)\\
\includegraphics[width=0.483\linewidth]{designer_without_t_simulation.png} &
\includegraphics[width=0.483\linewidth]{designed_a.png} \\
(e)&(f)\\
\end{tabular}
\caption{A typical test-set instance. (a) The ground truth structure $V$, with the metallic constraint regions $M$ marked in red. (b) The output structure $g(\cdot,\theta_{D,S})$ of the hypernetwork. (c) The output $\bar V$ of the complete method. (d-f) The directivity gain of $V$, $g(\cdot,\theta_{D,S})$, and $\bar V$, respectively.}
\label{fig:sample}
\end{figure}
\begin{figure}[t]
\begin{tabular}{@{}c@{~}c@{}}
\includegraphics[width=0.4832\linewidth,height=0.46\linewidth]{different_initializations_res.png} &
\includegraphics[width=0.4832\linewidth,height=0.46\linewidth]{blockselectionres.png}\\%&
(a) & (b)\\% & (c)\\
\end{tabular}
\caption{(a) Loss per epoch for the different initialization schemes of $q$ (ResNet $f$); the Transformer case is shown in the appendix. (b) The mean per-layer score obtained for the entropy-based selection heuristic. The selected layers are [1,9,16] (ResNet) and [1,2,15] (Transformer).}
\label{fig:initial}
\end{figure}
\begin{figure}[t]
\begin{tabular}{@{}c@{~}c@{}}
\includegraphics[width=0.4830255\linewidth]{probability_of_design_certainty.png} &
\includegraphics[width=0.4830255\linewidth]{design_with_constraints.png} \\
(a) & (b)\\
\includegraphics[width=0.4830255\linewidth]{soltted_antenna_real.png} &
\includegraphics[width=0.4830255\linewidth]{soltted_antenna_design.png} \\
(c) & (d)\\
\end{tabular}
\caption{(a) The probability of points belonging to a valid antenna in a synthetic test instance. The constraint plane is marked in black. (b) The same sample; regions correctly classified as antenna are marked in brown, misclassified regions in red. (c) The ground truth of a slotted antenna array. (d) Our network's design. See appendix for ablations.}
\label{fig:array_sample}
\end{figure}
\begin{table}[t]
\caption{The performance for designing antenna arrays. $C$-ratio is the fraction of the antenna volume that complies with $C$.}
\label{tab:results_array}
\centering
\begin{tabular}{@{}l@{~}c@{~~}c@{}}
\toprule
Method & IOU $\uparrow$ & $C$-ratio $\uparrow$\\
\midrule
Nearest neighbor baseline &0.48 $\pm$ 0.07& 0.25 $\pm$ 0.08 \\
(ours ResNet version, $Q=10^4$) &0.86 $\pm$ 0.03& 0.79$\pm$ 0.03\\
(ours ResNet version, $Q=\infty$) &0.88 $\pm$ 0.05& 0.80$\pm$ 0.04\\
{(ours Transformer ver, $Q=10^4$)} & {\bf 0.93} $\pm$ 0.01 &
{\bf 1.0} $\pm$ 0.0\\
(ours Transformer ver, $Q=\infty$) &{\bf 0.93} $\pm$ 0.02& {\bf 1.0}$\pm$ 0.01\\
\midrule
(i.a) Hypernet, ResNet $f$ &0.75 $\pm$ 0.06& 0.14$\pm$ 0.01 \\
(i.b) Hypernet, Transformer $f$ &0.85 $\pm$ 0.01& 0.18 $\pm$ 0.04 \\
(ii.a) ResNet w/o hypernet & 0.70 $\pm$ 0.06& 0.09 $\pm$ 0.03\\
(ii.b) Transformer w/o hypernet &0.79 $\pm$ 0.03& 0.10 $\pm$ 0.02 \\
\bottomrule
\end{tabular}
\caption{The performance obtained for two real-world antennas. $C$R is the fraction of the antenna volume that complies with $C$.}
\label{tab:results_fab}
\begin{center}
\begin{tabular}{@{}l@{~}c@{~}c@{~}c@{~~}c@{}}
\toprule
& \multicolumn{2}{c}{Slotted Patch} & \multicolumn{2}{c}{Patch}\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-5}
Method &IOU & $C$R & IOU & $C$R \\
\midrule
Nearest neighbor &0.57 &0.70 &0.81& 0.80 \\
(ours ResNet version) &0.85 &0.91 &0.89& 0.96 \\
(ours ResNet version $Q=\infty$) &0.87 &0.93 &{\bf 0.90}& 0.96 \\
{(ours Transformer ver)} & {\bf 0.89} & 0.93 &{\bf0.90}&{\bf1.0}\\
(ours Transformer ver $Q=\infty$) &0.88 &{\bf0.94} &{\bf 0.90}& {\bf 1.0} \\
(i.a) Hypernet ResNet &0.82 &0.83 &0.87& 0.94 \\
(i.b) Hypernet Transformer &0.76 & 0.79 &0.88 & {\bf 1.0} \\
(ii.a) Transformer w/o hypernet &0.63 &0.79&0.82& 0.83 \\
(ii.b) ResNet without hypernet &0.61 & 0.75 &0.82& 0.80 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{table}[t]
\caption{The performance for the iPhone 11 Pro Max design.}
\label{tab:results_iphone}
\centering
\begin{tabular}{@{}l@{~}cc@{}}
\toprule
Method & Directivity[dBi] & $C$-ratio \\
\midrule
(Apple's original design) &3.1 &1.0\\
\midrule
Nearest neighbor & 1.5 & 0.05\\
(ours ResNet ) & 4.7 &0.96\\
(ours ResNet $Q=\infty$) & 4.7 &0.97\\
{(ours Transformer )} & {\bf 5.2} & {\bf 1.0}\\
(ours Transformer $Q=\infty$) & 5.1 & {\bf 1.0}\\
\midrule
(i.a) Hypernet ResNet & 2.1 & 0.68\\
(i.b) Hypernet Transformer &5.0 & 0.70\\
(ii.a) ResNet w/o hypernet &1.1& 0.37\\
(ii.b) Transformer w/o hypernet &1.8& 0.33\\
\bottomrule
\vspace{-7mm}
\end{tabular}
\end{table}
\section{Experiments}
We present results on both synthetic datasets and on real antenna designs. In addition, we describe a manufactured sample of an antenna array and its performance in the appendix. Training, in all cases, is done on synthetic data. For all networks, the Adam optimizer~\cite{kingma2014adam} is used with a learning rate of $10^{-4}$ and a decay factor of 0.98 every 2 epochs for $2,000$ epochs, with a batch size of 10 samples per mini-batch.
{\bf Single Antenna\quad}
\label{sec:expsingle}
The single antenna synthetic data is obtained at the WiFi center frequency of 2.45GHz. The dielectric slab size, permeability, and feeding geometry are fixed during all the experiments. The dataset used in our experiments consists of $3,000$ randomly generated PCB antenna structures, each with a random metal polygon structure. The OpenEMS FDTD engine~\cite{openEMS} was used to obtain the far-field radiation pattern $U$. The dataset is then divided into train/test sets with a 90\%-10\% division.
We train the simulation network $h$ first, and then the design networks $f,t$. The simulation network achieved an average Multiscale-SSIM score of 0.95 over the validation set. Once $h$ was trained on the initial dataset, another $10^4$ samples were generated and their radiation patterns inferred by $h$ (more efficient than simulating). When training the design networks, the weight parameters of the multiloss $L_t$ are also learned. The values obtained are $\alpha_{msSSIM} \sim 10 \alpha_{OBCE}$.
For the design problem, which is our main interest, we use multiple evaluation metrics that span both the 3D space and the domain of radiation patterns. To measure the success in obtaining the correct geometries, we employ two metrics: the IOU between the obtained geometry $\bar V$ and the ground truth one $V$, and the recall of the structure $\bar V$ when considering the ground truth metallic structure constraints $M$. The latter is simply the ratio of the volume of the intersection of $M$ and the estimated $\bar V$ over the volume of $M$.
To evaluate the compliance of the resulting design with the radiation specifications, we use either the MS-SSIM metric between the desired radiation pattern $D$ and the one obtained for the inferred structure of each method, or the SNR between the two.
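Sketches of the two geometric metrics on binary voxel grids follow (the function names are ours):
\begin{verbatim}
import numpy as np

def iou(V_hat, V):
    # Intersection over union of two binary 64x64x16 voxel grids.
    inter = np.logical_and(V_hat, V).sum()
    union = np.logical_or(V_hat, V).sum()
    return inter / union

def m_recall(V_hat, M):
    # Fraction of the constrained metallic volume M that is covered
    # by the estimated structure V_hat.
    return np.logical_and(V_hat, M).sum() / M.sum()
\end{verbatim}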
We used the following baseline methods:
(i) nearest neighbor in the radiation space, i.e., the training sample that maximizes the multiscale SSIM metric relative to the test target $D$, and (ii) a nearest neighbor search using the SSIM metric out of the samples that have an $M$-recall score of at least 85\%. In addition, we used the following ablations and variants of our method in order to emphasize the contribution of its components: (iii) the output of the hypernetwork, before it is refined by network $t$, (iv) an alternative architecture, in which the hypernetwork $f$ is replaced with a ResNet/Transformer-based $f'$ of the same capacity as $f$, which maps $D,S$ to $O$ directly, i.e., $O = f'(D,S)$, and (v) the full architecture, where the loss $L_{msSSIM}$ is replaced with the cross entropy loss on $\bar V$ with respect to the ground target structure (similar to $L_g$, Eq.~\ref{eq:LG}).
This last variant verifies the importance of applying a loss in the radiation domain.
The results are reported in Tab.~\ref{tab:results}. As can be seen, the full method outperforms the baseline and ablation methods in terms of multiscale SSIM, which is optimized by both our method and the baseline methods. Our method also leads with respect to the SNR of the radiation patterns, and with respect to IOU. A clear advantage of the methods that incorporate the metallic surface constraints over those that do not is apparent for the $M$-Recall score, where our method is also ranked first.
The hypernetwork ablation (iii), which does not employ $t$, performs well relative to the baselines (i,ii), and is outperformed by the ablation variant (v) that incorporates a refinement network $t$ that is trained with a similar loss to that of $f$. The difference is small with respect to the radiation metrics and IOU and is more significant, as expected, for $M$-Recall, since the refinement network incorporates a suitable loss term. Variant (iv) that replaces the hypernetwork $f$ with a ResNet/Transformer $f'$ is less competitive in all metrics, except the IOU score, where it outperforms the baselines but not the other variants of our method.
Comparing the two alternative architectures of $f$, the Transformer design outperforms the ResNet design in almost all cases. A notable exception is when hypernets are not used. However, in this case the overall performance is low.
Fig.~\ref{fig:sample} presents sample results for our method. As can be seen, in the final solution, the metallic region constraints are respected, and the final radiation pattern is more similar to the requirement than the intermediate one obtained from the hypernetwork before the refinement stage.
{\bf Antenna Arrays\quad} For the antenna array experiments, the synthetic dataset used for the single antenna case was repurposed by sampling multiple antennas. For each instance, we selected
(i) the number of elements in the array, uniformly between 1 and 6, (ii) single antennas from the synthetic dataset, and (iii) the position of each element. In order to match real-world designs, we made sure no antenna is selected more than once (the probability of such an event is admittedly small). The array gain was computed based on Eq.~\ref{eq:AG}. All the ablations and our method were trained only over the train set of the synthetic dataset. For testing, we employed a similarly constructed synthetic dataset, as well as two different fabricated antennas \cite{chen_design_2018,singh_design_2016}. In addition, in order to ensure that our suggestion solves a real-world problem, we evaluate the network's suggestion for an alternative design of the iPhone 11 Pro Max's antenna array. In this case, we do not know the ground truth design. Therefore, we use a theoretic array response of isotropic elements, simulate the suggested design with openEMS \cite{openEMS}, and compare the result with the same figure of merit from the FCC report of the device\footnote{{{iPhone} 11 {Pro} {Max} {FCC} {report}}, \url{fccid.io/BCG-E3175A}}.
We apply our method with both architectures of $f$, and with $Q=10,000$ or with $q$ determining all of the parameters of $f$ ($Q=\infty$). In addition to the nearest neighbor baseline, which performs retrieval from the training set by searching for the closest example in terms of the highest $msSSIM$ metric between the input's $AG_{input}$ and the sample's $AG_{nn}$, we also consider the following baselines: (i) a baseline without a hyperhypernetwork, consisting of $f$ and $g$; (ii) a no-hypernetwork variant that combines $f$ and $g$ into a single network, by adding a linear layer to arrange the dimensions of the embedding before the MLP classifier.
The results on the synthetic dataset
are reported in Tab.~\ref{tab:results_array}. As can be seen, the full method outperforms the baseline and ablation methods. In addition, the Transformer-based architectures outperform the ResNet variants. The additional gain in performance when predicting all of $\theta_f$ ($Q=\infty$), if it exists, is relatively small. We note that this increases the training time from 2 (3) hours to 7 (10) hours for the ResNet (Transformer) model.
Fig.~\ref{fig:initial}(a) presents the training loss as a function of epoch for the hyperhypernetwork that employs the Transformer hypernet ($Q=10^4$), with the different initialization techniques. See appendix for the ResNet case and further details. The hyperhypernetwork fan-in method shows better convergence and a smaller loss than both hypernetwork-fan-in \cite{Chang2020Principled} and Xavier.
Fig.~\ref{fig:initial}(b) presents the score $H_j$ that is being used to select parameters that are predicted by $q$. Evidently, there is a considerable amount of variability between the layers in both network architectures.
Fig.~\ref{fig:array_sample}(a,b) presents sample results for our method. The metallic structure probability $O_2$ is shown in (a) in log scale, and the constraint plane $C$ (Eq.~\ref{eq:constraint}) is marked in black. As required, the probabilities are very small in the marked regions. Panel (b) presents the hard decision based on $O_1$. Misclassified points (marked in red) are relatively rare.
Tab.~\ref{tab:results_fab} presents the reconstruction of two real-world antennas: a slotted patch antenna \cite{chen_design_2018}, and a generic patch antenna \cite{singh_design_2016}. The results clearly show the advantage of our method over the rest of the baselines and ablations in correctly reconstructing the inner structure of these examples, while preserving the constraint on the localization of the array elements. Fig.~\ref{fig:array_sample}(c,d) shows our method's results for reconstructing a real fabricated slotted patch antenna. See appendix for the ablation results; our results are much more similar to the ground truth design than those of the ablations.
Tab.~\ref{tab:results_iphone} presents our method's results for designing an antenna array that complies with the iPhone 11 Pro Max physical constraints. The resulting array was simulated and compared with the reported directivity (max of Eq.~\ref{eq:D} over all directions) in Apple's certificate report. Our method achieved very high scores on both directivity and compliance with the physical assembly constraints.
\section{Conclusions}
We address the challenging tasks of designing antennas and antenna arrays under structural constraints and radiation requirements, for which the current literature provides very limited solutions. Our method employs a simulation network that enables a semantic loss in the radiation domain, together with a hypernetwork. For the design of antenna arrays, we introduce the hyperhypernetwork concept and show how to initialize it and how to select the weights of the inner hypernetwork to which it applies. Our results, on both simulated and real data samples, show the ability to perform the required design, as well as the advantage obtained by the novel methods.
\section*{Acknowledgments}
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant ERC CoG 725974).
\section{Introduction}
The end state of a star resulting from continued gravitational
collapse is still a much debated topic in relativistic astrophysics.
It has been shown in some classes of models that shear plays an
important role in producing naked singularities. On the other hand,
the absence of shear during gravitational collapse of reasonable
matter distributions (for example in the case of perfect fluids)
always results in a black hole \cite{pankaj1}. An interesting study of gravitational collapse in the shear-free regime was proposed by Banerjee, Chatterjee and
Dadhich (hereafter referred to as the {\em BCD} model \cite{bcd}) in which the collapse proceeds without the formation of the horizon.
The so-called horizon-free collapse model has been explored in various contexts including higher dimensional spacetimes, Euclidean stars and class-one spacetimes \cite{hd1,euc1,k1,k2,k3}.
In this work we explore the boundary condition arising from the matching of a spherically symmetric imperfect fluid configuration undergoing dissipative collapse and matched to a Vaidya atmosphere. The boundary condition describing conservation of momentum flux across the bounding hypersurface is a second order nonlinear differential equation governing the temporal behaviour of the model. A particular solution of this equation which is linear in time has been widely used to model dissipative collapse in which the horizon never forms. We utilise a symmetry approach to generate the general solution of the boundary condition. We explore the physics associated with the collapsing core, particularly during the late stages of collapse.
This paper is structured as follows. In \S2 we present the field equations which govern the interior spacetime of the collapsing sphere and the junction conditions required for the smooth matching of the interior spacetime to the Vaidya exterior. We study the temperature profiles of the collapsing star in \S3. A physical analysis of the thermodynamical variables and the time of formation of the horizon is carried out in \S4. We conclude with an overall discussion in \S5.
\section{Dissipative collapse}
When modeling a radiating star undergoing shear-free gravitational collapse the interior spacetime is described by the spherically symmetric line element\cite{bon1}
\begin{equation} \label{1} ds^2 = -A^2(r,t) dt^2 + B^2(r,t)[dr^2 +
r^2 d \theta^2 + r^2 \sin^2 \theta d \phi^2]\,,
\end{equation} in which the metric functions $A$ and $B$ are yet to
be determined. The energy momentum tensor for the interior matter
distribution is described by an imperfect fluid given by \begin{equation} \label{2} T_{ab} = (\rho +
p) \, u_au_b + p g_{ab} + q_au_b + q_bu_a\,,
\end{equation} where $\rho$ and $p$ are the fluid energy density and pressure.
The heat flow vector $q^a$ is orthogonal to the
velocity vector so that $q^a u_a = 0$. The Einstein field
equations governing the interior of the stellar fluid are
\begin{eqnarray} \label{t4}
\rho &=& 3\frac{1}{A^2}\frac{{B_{t}}^2}{B^2} - \frac{1}{B^2}
\left( 2\frac{B_{rr}}{B} - \frac{{B_{r}}^2}{B^2} +
\frac{4}{r}\frac{B_{r}}{B} \right) \label{t4a} \\ \nonumber \\
p &=& \frac{1}{A^2} \left(-2\frac{B_{tt}}{B} -
\frac{{B_{t}}^2}{B^2} +
2\frac{A_{t}}{A}\frac{B_{t}}{B} \right) \nonumber \\ \nonumber \\
&& + \frac{1}{B^2} \left(\frac{{B_{r}}^2}{B^2} +
2\frac{A_{r}}{A}\frac{B_{r}}{B} + \frac{2}{r}\frac{A_{r}}{A} +
\frac{2}{r}\frac{B_{r}}{B} \right) \label{t4b} \\ \nonumber \\
p &=& -2\frac{1}{A^2}\frac{B_{tt}}{B} +
2\frac{A_{t}}{A^3}\frac{B_{t}}{B} -
\frac{1}{A^2}\frac{{B_{t}}^2}{B^2} +
\frac{1}{r}\frac{A_{r}}{A}\frac{1}{B^2} \nonumber \\ \nonumber \\
&& + \frac{1}{r}\frac{B_{r}}{B^3} + \frac{A_{rr}}{A}\frac{1}{B^2} -
\frac{{B_{r}}^2}{B^4} + \frac{B_{rr}}{B^3}
\label{t4c} \\ \nonumber \\
Q &=& -\frac{2}{AB} \left(-\frac{B_{rt}}{B} +
\frac{B_{r}B_{t}}{B^2} + \frac{A_{r}}{A}\frac{B_{t}}{B} \right)
\label{t4d} \end{eqnarray}
where $Q = \left(q_aq^a\right)^{1/2}$ is the magnitude of the heat flux.
We obtain the condition of pressure isotropy by equating
(\ref{t4b}) and (\ref{t4c}) \begin{equation} \label{pi} 0 =
\frac{1}{B^2}\left[ \displaystyle\frac{A_{rr}}{A} +
\displaystyle\frac{B_{rr}}{B} - \left(2 \displaystyle\frac{B_r}{B} +
\displaystyle\frac{1}{r} \right) \left( \displaystyle\frac{A_r}{A} +
\displaystyle\frac{B_r}{B} \right)\right] \end{equation}
This equation has been solved under various assumptions and transformations. Ivanov observed that if the constants of integration of a static solution to the pressure isotropy condition in comoving and isotropic coordinates are allowed to evolve with time, then the time-dependent ``solution'' will automatically satisfy (\ref{pi}).
Since the star is radiating
energy, the exterior spacetime is described by the
Vaidya metric\cite{v1}
\begin{equation}
ds^2 = -\left(1-\frac{2m(v)}{\sf r}\right)dv^2 -
2dvd{\sf r} + {\sf r}^2[d \theta^2 + \sin^2 \theta d
\phi^2] \label{vr}
\end{equation}
where $v$ is the retarded time and $m$ is the total mass inside
the comoving surface $\Sigma$ forming the boundary of the star. The
necessary junction conditions for the smooth matching of the
interior line element (\ref{1}) to the exterior spacetime (\ref{vr})
were first obtained by Santos \cite{santos} and are succinctly presented here for easy reference \begin{eqnarray}
(r B)_{\Sigma} &=& {\sf r}_{\Sigma} \label{radius}\\
p_{\Sigma} &=& (q^1 B)_{\Sigma} \label{p} \\
m_{\Sigma} &=& \Bigg[\frac{r^3 B {\dot B}^2}{2 A^2} - r^2 B' -
\frac{r^3 B'^2}{2 B} \Bigg]_{\Sigma} \label{mass}\end{eqnarray} where
$m_{\Sigma}$ is the total mass within a sphere of radius
$r_{\Sigma}$ and (\ref{p}) represents the conservation of the
momentum flux across the boundary $\Sigma$.
An interesting approach to dissipative collapse is to explore the non-formation of the horizon, in which the collapse rate is balanced by the rate at which energy is radiated to the exterior spacetime. Banerjee, Chatterjee and Dadhich (BCD) studied such a scenario by considering a simple radiative model with the metric ansatz
\begin{eqnarray}
A &=& 1 + \zeta_0 r^2 \\
B &=& R(t)
\end{eqnarray}
where $\zeta_0$ is a positive constant. The
collapse evolves from $t = -\infty$ until $t=0$.
Utilising the above
ansatz together with (\ref{t4b}) and (\ref{t4d}) in (\ref{p}) we
obtain
\begin{equation}
2R{\ddot R} + {\dot R}^2 + \alpha {\dot R} = \beta \label{bc}
\end{equation}
where $\alpha$ and $\beta$ are constants.
A special and simple solution to this equation is $R = -Ct$ where $C>0$ is a constant. Since the star is collapsing, we require the expansion scalar, $\Theta = \frac{3}{A}\frac{\dot R}{R} < 0$. This solution first made its appearance in the literature in 1989 when Banerjee et al \cite{bhui} presented the most general class of conformally flat radiating solutions. While the solution has a simple form it is remarkable that it has revealed rich and diversified toy models of dissipative collapse. It has been observed in the {\em BCD} paper that the ratio
\begin{equation}
1 - \frac{2m_\Sigma}{r_\Sigma}
\end{equation}
is independent of time. This can easily be seen from equations (\ref{radius}) and (\ref{mass}), in which both the area radius and the mass are linear in $t$. Thus the ratio of mass to area radius is independent of time and the boundary surface cannot reach the horizon.
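To see this explicitly, note that for the ansatz $A = 1+\zeta_0 r^2$, $B=-Ct$ one has $\dot B = -C$ and $B'=0$, so that (\ref{radius}) and (\ref{mass}) give
\[
(rB)_{\Sigma} = -C r_{\Sigma}\, t, \qquad m_{\Sigma} = \left[\frac{r^3 B {\dot B}^2}{2 A^2}\right]_{\Sigma} = -\frac{C^3 r_{\Sigma}^3\, t}{2 A^2(r_\Sigma)},
\]
and hence $2m_\Sigma/(rB)_\Sigma = C^2 r_\Sigma^2/A^2(r_\Sigma)$, which is manifestly independent of $t$.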
\subsection{Stability analysis of exact solutions}
We have the main equation (\ref{bc}),
\[
2R{\ddot R} + {\dot R}^2 + \alpha {\dot R} = \beta,
\]
where $\alpha$ and $\beta$ are constants. For convenience we assume $\alpha>0$.
A special and simple solution to this equation is $R = -Ct$, defined for $-\infty < t \leq 0$, where $C>0$ is a constant given by a positive root of $-\beta +C^2-\alpha C=0$. For $\beta>0$, the unique positive root is $C=\frac{1}{2} \left(\sqrt{\alpha ^2+4 \beta }+\alpha \right)$, while for $\beta<0$ (with $\alpha^2+4\beta\geq 0$) there are two positive roots: \newline
$C_{-}= \frac{1}{2} \left(\alpha -\sqrt{\alpha ^2+4 \beta }\right), \quad C_{+}= \frac{1}{2} \left(\sqrt{\alpha ^2+4 \beta }+\alpha \right)$.
To avoid ambiguities we prefer to use the relation $\beta= C^2 - C \alpha$, and in the analysis we separate the cases $\beta<0$ and $\beta \geq 0$.
For the analysis of the stability of the scaling solution $R_s(t)=-C t$, with $C=\frac{1}{2} \left(\sqrt{\alpha ^2+4 \beta }+\alpha \right)>0$, in the interval $-\infty < t <0$, we use methods similar to those of Liddle \& Scherrer \cite{Liddle:1998xm} and Uzan \cite{Uzan:1999ch}.
Defining the new time variable
\begin{equation}
t= -e^{-\tau}, -\infty <\tau <\infty,
\end{equation}
such that $t\rightarrow -\infty$ as $\tau \rightarrow -\infty$ and $t\rightarrow 0$ as $\tau \rightarrow +\infty$, as well as the
ratio
\begin{equation}
u(\tau)= \frac{R(\tau)}{R_s(\tau)},
\end{equation}
where $R(\tau)= R(-e^{-\tau})$ and $R_s(\tau)= -C t(\tau)= C e^{-\tau}$.
Let
\begin{equation}
R' \equiv \frac{d R}{d \tau},
\end{equation}
then,
\begin{equation}
\dot R = \frac{d \tau}{d t} R' = e^{\tau} R', \quad
\ddot R = e^{2 \tau} \left(R'' + R' \right)
\end{equation}
and
\begin{equation}
\frac{R_s'}{R_s}= -1, \quad R_s(\tau)= C e^{-\tau}.
\end{equation}
Hereafter, a prime denotes the derivative with respect to the logarithmic time $\tau$.
Therefore, equation (\ref{bc}) becomes
\begin{equation}
-\beta +\alpha e^{\tau } R'+e^{2 \tau } \left(R'^2+2 R \left(R''+R'\right)\right)=0. \label{new1}
\end{equation}
By definition we have
\begin{eqnarray}
&& R''= C e^{-\tau } \left(u''-2 u'+u\right),
\\
&& R'= C e^{-\tau }
\left(u'-u\right),
\\
&& R= C e^{-\tau } u.
\end{eqnarray}
Combining with equation (\ref{new1}) we obtain
\begin{eqnarray}
&& 2 C^2 u u''+C^2 u'^2 +C u' (\alpha -4 C u) \nonumber \\
&& +C (u-1) (-\alpha +C u+C)=0,
\end{eqnarray}
where we have used the relation
$\beta= C^2 - C \alpha$.
Defining
\begin{equation}
v(\tau)= u'(\tau),
\end{equation}
we obtain the dynamical system
\begin{eqnarray}
&& u'=v, \label{equ}\\
&& v'=-\frac{(u-1) (-\alpha +C u+C)}{2 C u} +v
\left(2-\frac{\alpha }{2 C u}\right)-\frac{v^2}{2 u}. \label{eqv}
\end{eqnarray}
The scaling solution corresponds to the fixed point $u=1, v=0$.
Defining $\varepsilon$ through $u=1+\varepsilon$, we obtain the final dynamical system
\begin{small}
\begin{eqnarray}
&& \varepsilon '= v, \\
&& v'=\frac{\varepsilon (\alpha -C \varepsilon -2 C)}{2 C (\varepsilon
+1)}+v \left(2-\frac{\alpha }{2 C \varepsilon +2 C}\right)-\frac{v^2}{2 (\varepsilon +1)},
\end{eqnarray}
\end{small}
where the fixed point is translated to the origin.
Then, linearising around the fixed point $(\varepsilon, v)=(0,0)$, we obtain
\begin{equation}
\left(\begin{array}{c}
\varepsilon' \\
v'
\end{array}\right)= \left( \begin{array}{cc}
0 & 1\\
-1+ \frac{\alpha}{2 C} & 2 -\frac{\alpha}{2 C}
\end{array} \right)\left(\begin{array}{c}
\varepsilon \\
v
\end{array}\right).
\end{equation}
The linearisation matrix
\begin{equation}
J(0,0)= \left(
\begin{array}{cc}
0 & 1 \\
\frac{\alpha }{2 C}-1 & 2-\frac{\alpha }{2 C} \\
\end{array}
\right)
\end{equation}
has eigenvalues
$\left\{1,1-\frac{\alpha }{2 C}\right\}$.
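Indeed, the characteristic polynomial factorises as
\[
\lambda^2-\left(2-\frac{\alpha}{2C}\right)\lambda+\left(1-\frac{\alpha}{2C}\right)=\left(\lambda-1\right)\left(\lambda-1+\frac{\alpha}{2C}\right).
\]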
Assume first $\beta\geq 0$. In this case the origin is always unstable (a node) as $\tau \rightarrow +\infty$, since $2 C= \alpha +\sqrt{\alpha^2 + 4\beta}>\alpha$ implies that both eigenvalues are positive. Equivalently, the origin is stable as $\tau \rightarrow -\infty$.
An additional fixed point is
\begin{equation}
\varepsilon =\frac{\alpha }{C}-2<0, \quad u=\frac{\alpha }{C} -1, \quad v=0.
\end{equation}
Evaluating the linearisation matrix
\begin{equation}
J(\varepsilon, v)= \left(
\begin{array}{cc}
0 & 1 \\
\frac{1}{2} \left(\frac{(v+1) (v C-C+\alpha )}{C (\varepsilon +1)^2}-1\right) & 2-\frac{\alpha +2 C
v}{2 \varepsilon C+2 C} \\
\end{array}
\right),
\end{equation}
at $(\varepsilon,v)=\left(\frac{\alpha }{C}-2,0\right)$, we obtain the eigenvalues
$\left\{1,\frac{2 C-\alpha }{2 (C-\alpha )}\right\}$. Since $2 C-\alpha>0$, this point is unstable as
$\tau \rightarrow \infty$. Indeed, for $0<\frac{\alpha}{2}<C <\alpha$ it is a saddle, whereas for $C>\alpha>0$ it is an unstable node. The latter case is forbidden, since the physical condition $u\geq 0$ evaluated at $(u,v)=(\frac{\alpha }{C} -1,0)$ implies $\alpha \geq C$.
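The phase portraits in Figures \ref{fig:my_label1}--\ref{fig:my_label3} can be generated by direct numerical integration of (\ref{equ}), (\ref{eqv}). The following minimal Python sketch (an illustration only, assuming standard \texttt{numpy}, \texttt{scipy} and \texttt{matplotlib}; the sample values $\alpha=\beta=1$ are arbitrary) traces a few orbits:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

alpha, beta = 1.0, 1.0                        # sample parameters (beta > 0)
C = 0.5*(alpha + np.sqrt(alpha**2 + 4*beta))  # 2C = alpha + sqrt(alpha^2 + 4 beta)

def rhs(tau, y):
    # right-hand sides of the (u, v) dynamical system
    u, v = y
    du = v
    dv = (-(u - 1)*(-alpha + C*u + C)/(2*C*u)
          + v*(2 - alpha/(2*C*u)) - v**2/(2*u))
    return [du, dv]

for u0 in np.linspace(0.5, 2.0, 6):           # initial conditions near u = 1
    for v0 in (-0.2, 0.2):
        sol = solve_ivp(rhs, [0.0, 3.0], [u0, v0], max_step=0.05)
        plt.plot(sol.y[0], sol.y[1], lw=0.5)
plt.plot(1.0, 0.0, 'ko')                      # fixed point: the scaling solution
plt.xlabel('u'); plt.ylabel('v'); plt.show()
\end{verbatim}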
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot1.pdf}
\caption{Phase plot of (\ref{equ}), (\ref{eqv}) for some $\alpha>0, \beta>0$ and the choice $2 C:=\alpha + \sqrt{\alpha^2 + 4 \beta}$. The origin represents the solution $R_s(t)=-C t$, i.e., $R_s(\tau)= C e^{-\tau}$, which is unstable (node). The physical region corresponds to $u\geq 0$.}
\label{fig:my_label1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot2.pdf}
\caption{Phase plot of (\ref{equ}), (\ref{eqv}) for some $\alpha>0, \beta<0$ and the choice $2 C_{-}:=\alpha - \sqrt{\alpha^2 - 4 |\beta|}$. The origin represents the solution $R_{s-}(t)=-C_{-} t$, i.e., $R_{s-}(\tau)= C_{-} e^{-\tau}$, which is unstable (saddle). The physical region corresponds to $u\geq 0$.}
\label{fig:my_label2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot3.pdf}
\caption{Phase plot of (\ref{equ}), (\ref{eqv}) for some $\alpha>0, \beta<0$ and the choice $2 C_{+}:=\alpha + \sqrt{\alpha^2 - 4 |\beta|}$. The origin represents the solution $R_{s+}(t)=-C_{+} t$, i.e., $R_{s+}(\tau)= C_{+} e^{-\tau}$, which is unstable (node). The physical region corresponds to $u\geq 0$.}
\label{fig:my_label3}
\end{figure*}
Now, let us study the case $\beta <0$ with $\alpha^2 > 4|\beta|$; given our standing assumption $\alpha>0$, this means $\alpha >2 \sqrt{-\beta }$. In this case,
we have two solutions
\begin{equation}
R_{s \pm}(\tau)= -C_\pm t(\tau)= C_{\pm} e^{-\tau},
\end{equation}
where
\begin{equation}
2 C_{\pm} = \alpha \pm \sqrt{\alpha^2 - 4|\beta|}.
\end{equation}
Observe that $2 C_{+} > \alpha$
implies that $R_{s +}(\tau)$ is an unstable solution (unstable node) as $\tau\rightarrow \infty$.
Since
$\alpha> 2 C_{-} > 0$, $R_{s -}(\tau)$ is an unstable (saddle) solution.
Figures \ref{fig:my_label1}, \ref{fig:my_label2} and \ref{fig:my_label3} show the phase plots of (\ref{equ}), (\ref{eqv}) for, respectively: $\alpha>0, \beta>0$ with the choice $2 C:=\alpha + \sqrt{\alpha^2 + 4 \beta}$ (origin an unstable node); $\alpha>0, \beta<0$ with $2 C_{-}:=\alpha - \sqrt{\alpha^2 - 4 |\beta|}$ (origin a saddle); and $\alpha>0, \beta<0$ with $\alpha^2+4\beta\geq 0$ and $2 C_{+}:=\alpha + \sqrt{\alpha^2 - 4 |\beta|}$ (origin an unstable node). In each case the origin represents the solution $R_s(t)=-C t$, i.e., $R_s(\tau)= C e^{-\tau}$, and the physical region corresponds to $u\geq 0$. Recall that the physical solution $R = -Ct$ is defined for $-\infty < t \leq 0$ with $C>0$ constant.
As can be seen in figures
\ref{fig:my_label1}, \ref{fig:my_label2} and \ref{fig:my_label3}, there are non-trivial dynamics when $(u,v)$ becomes unbounded.
Assume that there exist
$u_0> 0$ and a coordinate transformation $\phi=h(u)$, with inverse $h^{(-1)}(\phi)$, which maps the interval
$[u_0,\infty)$ onto $(0, \delta]$, where $\delta=h(u_0)$, satisfies \newline $\lim_{u\rightarrow +\infty}h(u)=0$, and has the following additional
properties:
\begin{enumerate}
\item $h$ is $C^{k+1}$ and strictly decreasing,
\item \begin{equation}
\bar{h'}(\phi)=\left\{\begin{array}{cc}
h'(h^{(-1)}(\phi)), & \phi>0,\\
\lim_{u\rightarrow \infty} h'(u), & \phi=0 \end{array}\right. \label{eq23}
\end{equation} is $C^k$ on the closed interval $[0, \delta]$ and
\item $\frac{d \bar{h'}}{d \phi}(0)$ and higher derivatives $\frac{d^m\bar{h'}}{d \phi^m}(0)$ satisfy \begin{equation}
\frac{d \bar{h'}}{d \phi}(0)=\frac{d^m\bar{h'}}{d \phi^m}(0)=0.
\end{equation}
\end{enumerate}
It can be proved using the above conditions that
\begin{eqnarray}
&& \lim_{\phi\rightarrow 0}\frac{1}{h^{(-1)}(\phi )}=0,
\\
&& \lim_{\phi\rightarrow 0}\frac{h'\left(h^{(-1)}(\phi )\right)}{\phi
}=0,
\\
&& \lim_{\phi\rightarrow 0}\quad \frac{h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)}=0.
\end{eqnarray}
In the following, we say that $g$ is well-behaved at infinity (WBI) of exponential order $N$ if
\begin{equation}
\lim_{u\rightarrow \infty} \left(\frac{g'(u)}{g(u)}-N\right)=0.
\end{equation}
Let $g$ be a WBI function of exponential order $N$; then $g$ is exponentially dominated, meaning that for all $\lambda>N$,
\begin{equation}
\lim_{ u \to \infty} \, e^{-\lambda u} g(u)=0.
\end{equation}
From
\begin{equation}
\lim_{\phi \to 0} \, \frac{h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)}=0,
\end{equation}
it follows that $g(u)=1/h'(u)$ is WBI of exponential order $0$, that is, $\lim_{u\rightarrow \infty} \left(\frac{g'(u)}{g(u)}-N\right)=0$ for $N=0$, and hence it is exponentially dominated. This in turn implies that $1/h(u)$ is also exponentially dominated. The function $h(u)$ must therefore obey the following condition: for all $k>0$,
\begin{equation}
\lim_{u \rightarrow \infty} \frac{e^{-k u}}{h'\left(u\right)}= \lim_{u \rightarrow \infty} \frac{e^{-k u}}{h(u)}=0.
\end{equation}
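For instance, for $h(u)=u^{-1/n}$ with $n>1$ (the choice adopted below), both limits hold: for every $k>0$, $e^{-k u}/h(u)=e^{-k u}\,u^{1/n}\rightarrow 0$ and $e^{-k u}/h'(u)=-n\,e^{-k u}\,u^{1+1/n}\rightarrow 0$ as $u\rightarrow\infty$.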
In general, we can obtain functions $\phi = h(u)$ satisfying the above conditions 1, 2, 3 and the previously mentioned facts if we demand the existence of $n>1$ such that the functions
\begin{equation}
\frac{1}{h^{(-1)}(\phi )}, \quad \frac{h'\left(h^{(-1)}(\phi )\right)}{\phi
}, \quad \frac{h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)},
\end{equation}
behave as $\mathcal{O}(\phi^n)$,
and
\begin{equation}
h^{(m)}\left(h^{(-1)}(\phi )\right) \sim \mathcal{O}(\phi^{(m n+1)}), \quad m \in \mathbb{N}, \quad m \geq 1,
\end{equation} as $\phi\rightarrow 0$, where the superscript $(m)$ denotes the $m$-th derivative with respect to the argument.
Now define
\begin{equation}
\theta=1-u+v.
\end{equation}
Then we obtain
\begin{eqnarray}
&& \phi'= \bar{h'}(\phi) \left(h^{(-1)}(\phi )+\theta-1\right), \label{eq28}\\
&& \theta'= \frac{(2 C-\alpha)
\theta }{2 C h^{(-1)}(\phi)}-\frac{\theta^2}{2 h^{(-1)}(\phi)}, \label{eq29}
\end{eqnarray}
where $\bar{h'}(\phi)$ is defined in (\ref{eq23}) and $2 C-\alpha>0$.
The system (\ref{eq28}), (\ref{eq29}) defines a flow in the phase region
\begin{equation}
\Omega_\delta :=\left\{(\phi, \theta)\in \mathbb{R}^2: 0<\phi < h(\delta^{-1}), \theta \in K\right\},
\end{equation}
where $K$ is a compact set, such that $\Omega_\delta$ is a positive invariant set for large $\tau$.
The linearisation matrix of system (\ref{eq28})
(\ref{eq29}) is
\begin{eqnarray}
&& J(\phi,\theta) \nonumber \\
&& = \left(
\begin{array}{cc}
1+\frac{\left(\theta +h^{(-1)}(\phi )-1\right) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)} &
h'\left(h^{(-1)}(\phi )\right) \\
\frac{(\alpha +C (\theta -2)) \theta }{2 C h^{(-1)}(\phi )^2 h'\left(h^{(-1)}(\phi )\right)} & -\frac{\alpha +2 C
(\theta -1)}{2 C h^{(-1)}(\phi )} \\
\end{array}
\right)\nonumber \\
&&
= \left(
\begin{array}{cc}
1 + \frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)}+\mathcal{O}(\phi^n) & h'\left(h^{(-1)}(\phi )\right) \\
\frac{(\alpha +C (\theta -2)) \theta }{2 C h^{(-1)}(\phi )^2 h'\left(h^{(-1)}(\phi )\right)} & \mathcal{O}(\phi^{n}) \\
\end{array}
\right)
\nonumber \\
&& = \left(
\begin{array}{cc}
1 + \frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)}+\mathcal{O}(\phi^n) & \mathcal{O}(\phi^{n+1}) \\
\mathcal{O}(\phi^{n-1}) & \mathcal{O}(\phi^{n}) \\
\end{array}
\right)
\nonumber \\
&& \sim \left(
\begin{array}{cc}
1+\frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)} & 0 \\
0 & 0 \\
\end{array}
\right), \quad \text{as}\; \phi \rightarrow 0.
\end{eqnarray}
The matrix
\begin{eqnarray}
&& J(\phi,\theta) \nonumber \\
&& = \left(
\begin{array}{cc}
1 + \frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)}+\mathcal{O}(\phi^n) & h'\left(h^{(-1)}(\phi )\right) \\
\frac{(\alpha +C (\theta -2)) \theta }{2 C h^{(-1)}(\phi )^2 h'\left(h^{(-1)}(\phi )\right)} & \mathcal{O}(\phi^{n}) \\
\end{array}
\right)
\end{eqnarray}
has characteristic polynomial
\begin{eqnarray}
&&\left(1 + \frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi )\right)}{h'\left(h^{(-1)}(\phi )\right)} - \lambda +\mathcal{O}(\phi^n)\right) \nonumber \\
&& \times(-\lambda+\mathcal{O}(\phi^n)) - \underbrace{\frac{(\alpha +C (\theta -2)) \theta }{2 C h^{(-1)}(\phi )^2}}_{\mathcal{O}(\phi^{(2n)})}=0\nonumber \\
&& \lambda \left(\lambda-1-\frac{h^{(-1)}(\phi ) h''\left(h^{(-1)}(\phi
)\right)}{h'\left(h^{(-1)}(\phi )\right)}\right)+\mathcal{O}(\phi^n)=0,
\end{eqnarray}
with eigenvalues $\left\{1+ \frac{h^{(-1)}(0) h''\left(h^{(-1)}(0)\right)}{h'\left(h^{(-1)}(0)\right)}, 0\right\}$ as
$\phi\rightarrow 0$. That is, there exists a line of fixed points $L: (\phi, \theta)= (0, \theta^*)$ as $\tau\rightarrow \infty$ for bounded $\theta$, which is normally hyperbolic. Therefore, the stability condition as $\phi \rightarrow 0$ is $\frac{h^{(-1)}(0) h''\left(h^{(-1)}(0)\right)}{h'\left(h^{(-1)}(0)\right)}<-1$.
Setting, for example
$h(u)=u^{-1/n}$, with $n>1$, which satisfies the previous conditions 1, 2, and 3, we obtain
\begin{eqnarray}
&& \phi'=-\frac{\phi}{n} +\left(\frac{1}{n}-\frac{\theta}{n}\right) \phi^{n+1}, \label{eq30}\\
&& \theta'=\phi^n
\left(\left(1-\frac{\alpha }{2 C}\right) \theta -\frac{\theta^2}{2}\right). \label{eq31}
\end{eqnarray}
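Indeed, for this choice $u=h^{(-1)}(\phi)=\phi^{-n}$ and $h'(u)=-\tfrac{1}{n}u^{-1/n-1}$, so that, using $u'=v=u-1+\theta$,
\[
\phi'= h'(u)\,u' = -\frac{1}{n}\,u^{-1/n-1}\left(u-1+\theta\right)= -\frac{\phi}{n}+\frac{1-\theta}{n}\,\phi^{n+1},
\]
while (\ref{eq29}) becomes (\ref{eq31}) upon substituting $1/h^{(-1)}(\phi)=\phi^{n}$.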
The curve of fixed points $L: (\phi, \theta)= (0, \theta^*)$ as $\tau\rightarrow \infty$ for bounded $\theta$ has eigenvalues $\left\{-\frac{1}{n},0\right\}$. Therefore, it is normally hyperbolic and stable.
Defining the compact variables
\begin{equation}
\Phi=\frac{2 \tan ^{-1}(\phi)}{\pi }, \quad \Theta=\frac{2 \tan ^{-1}(\theta )}{\pi },
\end{equation}
we obtain
\begin{small}
\begin{eqnarray}
&& \Phi'= \frac{\sin (\pi \Phi ) \left(-\left(\tan \left(\frac{\pi \Theta }{2}\right)-1\right) \tan ^n\left(\frac{\pi
\Phi }{2}\right)-1\right)}{\pi n}, \label{eqPhi}\\
&& \Theta'= \frac{\tan ^n\left(\frac{\pi \Phi }{2}\right) ((2 C-\alpha ) \sin (\pi \Theta)+C (\cos (\pi \Theta )-1))}{2 \pi C}. \label{eqTheta}
\end{eqnarray}
\end{small}
In these coordinates, the points with $\Phi=0$ correspond to $u\rightarrow \infty$. The points with $\Phi= 1$ and $\Phi=-1$ represent $u \rightarrow 0^{+}$ and $u \rightarrow 0^{-}$, respectively.
Moreover, $\Theta=\pm 1$ are representations of
$\theta=1-u+v \rightarrow \pm \infty.$ As before, the physical region corresponds to $\Phi\geq 0$ (corresponding to $u\geq 0$).
Finally, if
$\theta$ is bounded as $\tau \rightarrow \infty$ (which turns out to be the case), we have from (\ref{eq30}) and
(\ref{eq31}) that
\begin{eqnarray}
&& \phi'=-\frac{\phi}{n} +\mathcal{O}(\phi^{n+1}), \label{asympt1}\\
&& \theta'=\phi^n
\left(\left(1-\frac{\alpha }{2 C}\right) \theta -\frac{\theta^2}{2}\right)+\mathcal{O}(\phi^{n+1}). \label{asympt2}
\end{eqnarray}
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot4.pdf}
\caption{Phase plot of (\ref{eqPhi}), (\ref{eqTheta}) for some $\alpha>0, \beta>0$, $n=2$, and the choice $2 C:=\alpha + \sqrt{\alpha^2 + 4 \beta}$. The red dashed line is a stable line of fixed points $L: (\phi, \theta)= (0, \theta^*)$. The physical region is $\Phi\geq 0$.}
\label{fig:my_label4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot5.pdf}
\caption{Phase plot of (\ref{eqPhi}), (\ref{eqTheta}) for some $\alpha>0, \beta<0$ and the choice $2 C_{-}:=\alpha - \sqrt{\alpha^2 - 4 |\beta|}$. The red dashed line is a stable line of fixed points $L: (\phi, \theta)= (0, \theta^*)$. The physical region is $\Phi\geq 0$.}
\label{fig:my_label5}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.5\textwidth]{Plot6.pdf}
\caption{Phase plot of (\ref{eqPhi}), (\ref{eqTheta}) for some $\alpha>0, \beta<0$ and the choice $2 C_{+}:=\alpha + \sqrt{\alpha^2 - 4 |\beta|}$. The red dashed line is a stable line of fixed points. The physical region is $\Phi\geq 0$.}
\label{fig:my_label6}
\end{figure*}
The asymptotic equations (\ref{asympt1}),
(\ref{asympt2}) as $\phi\rightarrow 0$ are integrable with solution
\begin{equation}
\left(
\begin{array}{c}
\phi(\tau)\\
\theta(\tau)
\end{array}
\right)= \left(
\begin{array}{c}
e^{-\frac{\tau }{n}} c_1 \\\\ \frac{2 C-\alpha }{C-\exp \left(\frac{(2 C-\alpha ) \left(e^{-\tau } c_1^n+2 C c_2\right)}{2
C}\right)}
\end{array}
\right),
\end{equation}
converging to $L: (\phi, \theta)= (0, \theta^*)$ as $\tau\rightarrow \infty$ for bounded $\theta$.
Figures \ref{fig:my_label4}, \ref{fig:my_label5} and \ref{fig:my_label6} show the phase plots of (\ref{eqPhi}), (\ref{eqTheta}) with $n=2$ for, respectively: $\alpha>0, \beta>0$ with the choice $2 C:=\alpha + \sqrt{\alpha^2 + 4 \beta}$; $\alpha>0, \beta<0$ with $2 C_{-}:=\alpha - \sqrt{\alpha^2 - 4 |\beta|}$; and $\alpha>0, \beta<0$ with $\alpha^2+4\beta\geq 0$ and $2 C_{+}:=\alpha + \sqrt{\alpha^2 - 4 |\beta|}$. In each case the red dashed line is a stable line of fixed points $L: (\phi, \theta)= (0, \theta^*)$, and the physical region is $\Phi\geq 0$.
All these plots illustrate our analytical findings, namely: (i)
the solution of (\ref{bc}),
$R = -Ct$, defined for $-\infty < t \leq 0$, where $C>0$ is a fixed constant, is unstable; (ii) the curve of fixed points $L: (\phi, \theta)= (0, \theta^*)$ (i.e., $ u\rightarrow \infty$ and $v\rightarrow \infty$ in such a way that $1-u+v \rightarrow \theta^*$) is stable as $\tau\rightarrow \infty$ for bounded $\theta$.
Result (ii) means that, as $\tau \rightarrow \infty$, we have
\begin{equation}\label{eq47}
1- u(\tau ) +u'(\tau)= \theta^*, \quad u(\infty)=\infty.
\end{equation}
The solution of (\ref{eq47}) is
\begin{equation}
u(\tau)= c_1 e^{\tau }-\theta^*+1, \quad c_1 \neq 0.
\end{equation}
Then,
\begin{eqnarray}
&& R(\tau)= R_s(\tau) u(\tau)= C e^{-\tau} u(\tau) \nonumber \\
&& = C \left(c_1-(\theta^*-1) e^{-\tau }\right).
\end{eqnarray}
Hence,
\begin{equation}
R(t)=C \left(c_1+(\theta^*-1) t\right). \label{eq50}
\end{equation}
Substituting back into (\ref{bc}), we find that in order for (\ref{eq50}) to be an exact solution of (\ref{bc}) we must impose the compatibility condition:
\begin{equation}
C \theta^* (\alpha +C (\theta^*-2))=0.
\end{equation}
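This condition follows by direct substitution: for (\ref{eq50}) one has $\ddot R=0$ and $\dot R = C(\theta^*-1)$, so that (\ref{bc}) reads
\[
C^2(\theta^*-1)^2+\alpha C(\theta^*-1)=\beta=C^2-C\alpha,
\]
which rearranges precisely to $C\,\theta^*\left(\alpha+C(\theta^*-2)\right)=0$.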
We have some specific solutions when
$\theta^* \in \left\{0, 2-\frac{\alpha
}{C}\right\}$.
However, recall that $\theta^*$ is an arbitrary constant by the definition of the line $L$. So, the natural condition is
\begin{equation}
C= \frac{\alpha }{2-\theta^*}.
\end{equation}
Then,
the solution of (\ref{bc}) given by
\begin{equation}
R(t)= \frac{\alpha c_1}{2-\theta^*}+\frac{\alpha (\theta^*-1) t}{2-\theta^*}, \quad c_1\neq 0,
\end{equation}
defined in the semi-infinite interval
$-\infty< t\leq 0$,
is stable as $t\rightarrow 0^{-}$ ($\tau \rightarrow +\infty$). Finally,
\begin{equation}
\lim_{t\rightarrow 0^{-}} R(t) = \frac{\alpha c_1}{2-\theta^*} \neq 0
\end{equation}
by construction.
\subsection{Symmetries and singularity analysis}
Let us now turn our attention to the boundary condition (\ref{bc}). Equation
(\ref{bc}) is invariant under the infinitesimal transformation \cite{kumei}%
\begin{eqnarray}
\bar{t} &\rightarrow &t+\varepsilon \left( \alpha _{1}+\alpha _{2}t\right) ,
\\
\bar{R} &\rightarrow &R+\varepsilon \left( \alpha _{2}R\right) .
\end{eqnarray}%
where $\varepsilon $ is the infinitesimal parameter, that is, $\varepsilon
^{2}\simeq 0.~$From the latter we infer that equation (\ref{bc}) admits as
Lie point symmetries the elements of the two-dimensional Lie algebra $%
\left\{ \partial _{t},t\partial _{t}+R\partial _{R}\right\} $ which form the
$A_{2,2}$ Lie algebra in the Morozov-Mubarakzyanov classification scheme
\cite{mb1,mb2,mb3,mb4}. Thus the Lie symmetries can be used to simplify the
differential equation through similarity transformations~\cite{kumei}.
The application of the Lie symmetries in gravitational physics has provided
us with many interesting results, for instance see \cite%
{sym1,sym2,sym3,sym4,sym5} and references therein.
Application of the symmetry vector field $\partial _{t}$ provides the reduced equation%
\begin{equation}
2xy\frac{dy\left( x\right) }{dx}+y^{2}\left( x\right) +\alpha y\left(
x\right) -\beta =0~,\quad y\left( x\right) =\dot{R}~,~x=R,
\label{bcc1}
\end{equation}%
where we have used that $\ddot{R}=y\,\frac{dy}{dx}$ once $y=\dot{R}$ is regarded as a function of $x=R$. Equation (\ref{bcc1}) admits the symmetry vector $\Gamma ^{1}=\left( y\left( x\right)
+\alpha -\frac{\beta }{y\left( x\right) }\right) \partial _{y}$. The
first-order nonlinear equation (\ref{bcc1}) is separable. However, since it
admits a Lie point symmetry we can apply Lie's integration factor to
simplify it \cite{kumei}. Lie's integration factor is derived to be $\mu =\left(
2x\left( y^{2}\left( x\right) +\alpha y-\beta \right) \right) ^{-1}$; hence,
multiplying equation (\ref{bcc1}) by it we end up with the equation
\begin{equation}
\int \frac{y}{\left( y^{2}\left( x\right) +\alpha y-\beta \right) }\,dy=-\int\frac{%
dx}{2x} \label{bcc2}
\end{equation}%
that is,%
\begin{eqnarray}
\ln \left( x-x_{0}\right) &=&-\ln \left( y^{2}+\alpha y-\beta \right) + \\
&&-\frac{2\alpha }{\sqrt{\alpha ^{2}+4\beta }}\arctan h\left( \frac{%
2y+\alpha }{\sqrt{\alpha ^{2}+4\beta }}\right).
\end{eqnarray}%
However, in the special case where $\beta =-\frac{\alpha ^{2}}{4}$, the solution
of (\ref{bcc2}) is expressed as follows
\begin{equation}
\ln \left( 2y+\alpha \right) +\frac{\alpha }{2y+\alpha }=-\frac{1}{2}\ln
\left( x/x_{0}\right) .
\end{equation}
On the other hand, application of the symmetry vector $t\partial
_{t}+R\partial _{R}$ in (\ref{bc}) provides the first-order ordinary
differential equation%
\begin{equation}
2z\left( y\left( z\right) -z\right) \frac{dy\left( z\right) }{dz}%
+y^{2}\left( z\right) +\alpha y\left( z\right) -\beta =0. \label{bcc3}
\end{equation}%
The latter equation belongs to the family of Abel equations of the second
kind. Equation (\ref{bcc3}) can be integrated similarly to equation (\ref{bcc1}), via the derivation of Lie's integration factor. A special solution
of equations (\ref{bcc1}) and (\ref{bcc3}) is the constant value $y=y_{0}$
with $y_{0}^{2}+\alpha y_{0}-\beta =0$; however, such a solution corresponds to the linear potential $R(t)=y_{0}t+\mathrm{const}$, i.e., the
closed-form solution of Banerjee et al \cite{bhui}.
Let us proceed with our analysis by writing a closed-form solution of equation (\ref{bc}) derived with the singularity analysis. The modern treatment of the
singularity analysis is described by the ARS algorithm \cite{ars1,ars2,ars3}. For more details on the ARS algorithm we refer the reader to \cite{anl1},
where a discussion of the connection between the Lie
symmetries and the singularity analysis is also presented.
For equation (\ref{bc}) the leading-order term is found to be $%
R_{leading}\left( t\right) =R_{0}\left( t-t_{0}\right) ^{\frac{2}{3}}$,
where $t_{0}$ indicates the location of the movable singularity and $R_{0}$
is arbitrary. The resonances are derived to be $s_{1}=-1$ and $s_{2}=0$ (the latter reflecting the arbitrariness of $R_{0}$),
which means that the analytic solution of (\ref{bc}) can be expressed in
terms of the Right Painlev\'{e} Series%
\begin{equation}
R\left( t\right) =R_{0}\left( t-t_{0}\right) ^{\frac{2}{3}%
}+R_{1}\left( t-t_{0}\right)+R_{2}\left( t-t_{0}\right) ^{\frac{4}{3}}+R_{3}\left(
t-t_{0}\right) ^{\frac{5}{3}}+...~.
\end{equation}%
Substituting into (\ref{bc}) we find that $R_{1}=-\frac{3\alpha }{4}%
,~R_{2}=\frac{9}{320R_{0}}\left( 3\alpha ^{2}+16\beta \right) ~,~R_{3}=\frac{%
3\alpha }{320R_{0}^{2}}\left( 3\alpha ^{2}+16\beta \right) ,...~$.~However,
in the special case in which $\left( 3\alpha ^{2}+16\beta \right) =0$, that
is $\beta =-\frac{3\alpha ^{2}}{16}$, we find that $R_{I}=0,~I>1$.
In the following, we investigate the physical properties of this new
solution; for simplicity we set the nonessential constant $t_{0}=0$.
\subsection{A new radiating model}
We now model a radiating star undergoing dissipative collapse by departing from the simple linear time dependence of $B(r,t)$. To this end we utilise the truncated solution reported in the previous section
\begin{equation} \label{tbc}
R(t) = R_0t^{2/3} - \frac{3\alpha}{4}t\end{equation}
with $\beta = -\frac{3\alpha^2}{16}$.
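One can verify directly that (\ref{tbc}) solves (\ref{bc}) for this value of $\beta$: substituting $R=R_0t^{2/3}+R_1 t$ gives
\[
2R\ddot R + \dot R^2 + \alpha \dot R = R_0\left(\tfrac{8}{9}R_1+\tfrac{2}{3}\alpha\right)t^{-1/3}+R_1^2+\alpha R_1,
\]
where the $t^{-2/3}$ terms cancel identically; the $t^{-1/3}$ coefficient vanishes for $R_1=-\tfrac{3\alpha}{4}$, and the remaining constant equals $-\tfrac{3\alpha^2}{16}=\beta$.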
The Einstein field equations (\ref{t4a})--(\ref{t4d})
reduce to
\begin{eqnarray}
\rho &=& \frac{\left(8R_0 - 9\alpha t^{1/3}\right)^2}{3t^2\left(-4R_0 + 3\alpha t^{1/3}\right)^2\left(1 + \zeta_0 r^2\right)^2}\\
p&=& \frac{32\alpha R_0 + 3t^{1/3}\left(-9\alpha^2 + 64\zeta_0(1+ r^2\zeta_0)\right)}{3t^{5/3}\left(4R_0 - 3\alpha t^{1/3}\right)^2\left(1 + \zeta_0r^2\right)^2}\\
qB&=&-\frac{55\left(-32\alpha R_0 + 27\alpha^2t^{1/3} - 192\zeta_0t^{1/3}(1 + \zeta_0r^2)\right)}{3t^{5/3}\left(-4R_0 + 3\alpha t^{1/3}\right)^2\left(1+ \zeta_0r^2\right)^2}
\end{eqnarray}
We also calculate the mass function and luminosity at infinity to be
\begin{eqnarray}
m &=& \frac{r^2\left(8R_0 - 9\alpha t^{1/3}\right)^2\left(4R_0 - 3\alpha t^{1/3}\right)}{1152\left(1 + \zeta_0r^2\right)^2} \label{massf}\\
L_\infty &=& \frac{\alpha r_0\left(32R_0 - 27\alpha t^{1/3}\right)\left(8R_0 - 9\alpha t^{1/3}\right)\Xi}{864t^{7/3}\left(4R_0 - 3\alpha t^{1/3}\right)^2\left(1 + \zeta_0 r_0^2\right)^4} \label{lu}
\end{eqnarray}
where we have defined
\begin{equation}
\Xi = 8R_0r_0 + 3t^{1/3}(4 - 3\alpha r_0 + 4\zeta_0r_0^2)
\end{equation} and
$r = r_0$ defines the boundary of the star at some fixed time.
\section{Causal heat flow}
The role of causal heat flow during dissipative collapse has been extensively studied by Herrera and co-workers\cite{hh1,hh2,hh3} and references therein. It has been demonstrated that relaxational effects lead to higher core temperatures as the collapse proceeds, with cooling being enhanced in the surface layers. The noncausal Eckart formalism may hold during an epoch when the fluid is close to hydrostatic equilibrium. However, as the collapse proceeds, the noncausal nature of the Eckart framework leads to infinite propagation speeds of the thermal signals and unstable equilibrium states. Earlier work by Di Prisco {\em et al}\cite{hh1} has shown that relaxational effects impact on the luminosity profiles of radiating stars.
In order to study the impact of relaxation times on the temperature profiles we adopt a causal heat transport equation of Maxwell-Cattaneo form \cite{mg1}
\begin{equation}
\tau {h_a}^b {\dot{q}}_b + q_a = - \kappa (D_a T +
T {\dot{u}}_a), \label{r2.26}
\end{equation} where the relaxation time is given by
\begin{equation} \label{r2.24}
\tau = \kappa T \beta
\end{equation} for the heat flux; here $\beta$ denotes the thermodynamic coefficient for the heat flux and should not be confused with the constant $\beta$ in (\ref{bc}).
The appearance of the relaxation time restores causality and has been successful in modelling high-frequency phenomena in electronics and fluid flow \cite{royy}. For the line element (\ref{1}) the causal heat transport
equation (\ref{r2.26}) becomes \begin{equation} \label{ca1}
\tau(qB)_{,t} + A(qB) = -\kappa \frac{(AT)_{,r}}{B},
\end{equation} which governs the behavior of the temperature.
Setting $\tau = 0$ in (\ref{ca1}) we obtain the familiar Fourier
heat transport equation \begin{equation} \label{ca2} A(qB) =
-\kappa \frac{(AT)_{,r}}{B}, \end{equation} which predicts
reasonable temperatures when the fluid is close to
quasi--stationary equilibrium.
Following the work of \cite{mg2} we adopt the following thermodynamic coefficients for radiative transfer, where we assume that heat is being carried away from the core via thermally generated neutrinos. The thermal
conductivity assumes the form \begin{equation} \kappa =\chi
T^3{\tau}_{\mathrm c} \label{a28}\,,\end{equation} where $\chi$ ($\geq0$) is a constant and ${\tau}_{\mathrm c}$ represents the mean
collision time between massless and massive particles. Martinez\cite{mart} has shown that $\tau_{\mathrm c} \propto T^{-3/2}$ for thermally generated neutrinos within the core of neutron stars. To this end we assume
\begin{equation} \label{a29} \tau_{\mathrm c}
=\left({\psi\over\chi}\right) T^{-\omega} \,,\end{equation}
where $\psi$ ($\geq 0$) and $\omega$ ($\geq 0$) are constants and for $\omega={3\over2}$ we regain the treatment due to Martinez. We observe that this form implies that the mean collision time decreases with an increase in temperature, as expected. We assume that relaxation time is proportional to the
collision time: \begin{equation} \tau =\left({\lambda \chi \over
\psi}\right) \tau_{\mathrm c} \label{a30}\,,\end{equation} where
$\lambda$ ($\geq 0$) is a constant.
This assumption may hold for a limited epoch of the collapse process.
Putting everything together in (\ref{ca1}) we obtain \begin{equation} \lambda (qB)_{,t} T^{-\omega} + A (q
B) = - \psi \frac{T^{3-\omega} (AT)_{,r}}{B} \label{temp1}
\,.\end{equation}
It has been shown that in the case $\omega=0$ (which corresponds to constant mean collision time), the causal transport equation (\ref{temp1}) yields the following temperature profile\cite{kesh1}
\begin{equation} (AT)^4 = - \frac{4}{\psi} \left[\lambda\int A^3 B
(qB)_{,t}{\mathrm d} r + \int A^4 q B^2 {\mathrm d} r\right] +
{\cal{F}}(t) \label{caus0} \end{equation} where ${\cal{F}}(t)$ is an integration function.
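Equation (\ref{caus0}) follows by multiplying (\ref{temp1}) with $\omega=0$ by $-4A^3B/\psi$ and using $\left((AT)^4\right)_{,r}=4A^3T^3(AT)_{,r}$, which gives
\[
\left((AT)^4\right)_{,r} = -\frac{4}{\psi}\left[\lambda A^3 B (qB)_{,t}+A^4 q B^2\right],
\]
and then integrating with respect to $r$.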
We can evaluate ${\cal{F}}(t)$ by recalling that the effective surface temperature of a star is given by
\begin{equation}
\left(T^4\right)_\Sigma = \left(\frac{1}{r^2B^2}\right)_\Sigma\left(\frac{L_\infty}{4\pi \sigma}\right)_\Sigma
\end{equation}
where $\sigma$ ($> 0$) is a constant. For our model, we are able to complete the integration in (\ref{caus0}) and the resulting temperature is plotted in figure 7.
\section{Physical analysis}
In order to establish the physical viability of our model, we have plotted the density, pressure, heat flow and the energy conditions as functions of the radial and temporal coordinates in Figs. 1-6. We observe that the density
and pressure are monotonically decreasing functions of the radial coordinate. Furthermore, as the collapse proceeds, the density and pressure increase. This is expected: as the core collapses, the matter gets squeezed into smaller volumes, thus increasing the density and the pressure within the smaller sphere. In the case of the BCD model, the rate at which the body radiates energy is balanced by the rate at which it collapses, which leads to the final `evaporation' of the star. It is worth pointing out that the luminosity as given by (\ref{lu}) vanishes when
\begin{eqnarray}
t_{(bh)_1} &=& 0.702332\nu^3\\
t_{(bh)_2} &=& 1.66479 \nu^3\\
t_{(bh)_3} &=& -18.963\frac{r_0^3\,\alpha^3\nu^3}{(4 - 3r_0\alpha +4r_0^2\zeta_0)^3}
\end{eqnarray}
where we have defined $\nu = \frac{R_0}{\alpha}$. Since the collapse proceeds over $-\infty < t < 0$, we must have $t_{(bh)_i} < 0$ if we want the horizon to form before the collapse ends. For $\alpha > 0$ we must have $R_0 < 0$, which ensures that $t_{(bh)_1}$ and $t_{(bh)_2}$ are both negative. For $t_{(bh)_3} < 0$ we must have $4 - 3r_0\alpha + 4r_0^2\zeta_0 < 0$, which places a restriction on $\zeta_0$. In the BCD model, where the horizon never forms, no such restriction is placed on $\zeta_0$.
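The first two roots follow by setting the corresponding factors in the numerator of (\ref{lu}) to zero: $8R_0-9\alpha t^{1/3}=0$ gives $t_{(bh)_1}=\left(\tfrac{8}{9}\right)^3\nu^3\approx 0.702332\,\nu^3$, while $32R_0-27\alpha t^{1/3}=0$ gives $t_{(bh)_2}=\left(\tfrac{32}{27}\right)^3\nu^3\approx 1.66479\,\nu^3$; the third root corresponds to $\Xi=0$.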
Fig. 3 shows that the heat flux (energy output) increases with time. As the density increases, the thermonuclear processes become more efficient, resulting in higher energy outputs at late times. The mass profile is depicted in Fig. 4. The mass of the sphere increases monotonically from the center to the boundary. The mass decreases with time as the star radiates energy while it collapses. Figs. 5 and 6 confirm that the energy conditions required for a realistic stellar core are obeyed everywhere inside the star. We have plotted the causal temperature profiles in figure 7. The blue curve represents the BCD temperature profile, whereas the orange curve describes the evolution of the temperature as a function of the radial coordinate for our nonlinear solution. It is evident that the nonlinear solution predicts higher core temperatures than the linear BCD model. In a recent paper, it was shown that the condition for pressure isotropy is unstable in the sense that an initially isotropic matter configuration will evolve into an anisotropic regime as it leaves hydrostatic equilibrium. It was shown that dissipative fluxes (in the form of heat flow), density inhomogeneities and/or nonzero shear within the fluid flow can disrupt the condition of pressure isotropy, leading to unequal radial and tangential stresses within the collapsing fluid\cite{pi}.
The aim of this work was to demonstrate the existence of the general solution of the junction condition which encodes the temporal behaviour of the model.
\section{Concluding remarks}
In this exposition we analysed a differential equation arising from the modeling of a star undergoing dissipative collapse. We have presented the general solution to the boundary condition for a particular type of shear-free, dissipative collapse. In this model the gravitational potentials are separable in the radial and temporal coordinates and the pressure is isotropic at each interior point of the collapsing distribution. We carried out an extensive stability analysis of the solution arising from the temporal equation. We have proved analytically and numerically the following results: (i)
The solution of \eqref{bc},
$R = -Ct$, defined for $-\infty < t \leq 0$, where $C>0$ is the fixed constant found by Banerjee
et al \cite{bhui}, is unstable as $t\rightarrow 0^-$ and stable as $t\rightarrow -\infty$. (ii) We found a family of solutions of \eqref{bc} given by
\begin{equation}
R(t)= \frac{\alpha c_1}{2-\theta^*}+\frac{\alpha (\theta^*-1) t}{2-\theta^*}, \quad c_1\neq 0,
\end{equation}
parametrised by
$-\infty<\theta^*<\infty$. They are defined in the semi-infinite-interval
$-\infty< t\leq 0$,
and are stable as $t\rightarrow 0^{-}$. Our solutions are intrinsically different from the closed-form solution of Banerjee
et al \cite{bhui} in terms of their stability as well as in nature, given that
\begin{equation}
\lim_{t\rightarrow 0^{-}} R(t) = \frac{\alpha c_1}{2-\theta^*} \neq 0,
\end{equation}
in contrast to the closed-form solution of Banerjee
et al \cite{bhui}, which satisfies $ \lim_{t\rightarrow 0^{-}} R(t) =0$. The additive constant $\frac{\alpha c_1}{2-\theta^*}$ makes a subtle difference concerning stability as $t\rightarrow 0^{-}$.
We showed that a particular nonlinear temporal dependence produces drastically different physics from the linear model. This is an important point to note, albeit that the collapsing sphere described here represents a toy model of dissipative collapse.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{rho.pdf}
\caption {Density as a function of the radial and temporal coordinates}
\label{Fig:Density}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{p.pdf}
\caption {Pressure as a function of the radial and temporal coordinates}
\label{Fig:p}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{Q.pdf}
\caption {Heat flow as a function of the radial and temporal coordinates}
\label{Fig:Q}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{mass.pdf}
\caption {Mass profile as a function of the radial and temporal coordinates}
\label{Fig:mass}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{e1.pdf}
\caption {$E1 = (\rho + p)^2 - 4Q^2 > 0$ as a function of the radial and temporal coordinates}
\label{Fig:E1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{e2.pdf}
\caption {$E2 = \rho - 3p + \left[(\rho + p)^2 - 4Q^2\right]^{1/2} > 0$ as a function of the radial and temporal coordinates}
\label{Fig:E2}\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{t.pdf}
\caption {Temperature profiles as a function of the radial coordinate}
\label{Fig:t}\end{figure}
\section{Acknowledgments}
AP \& GL were funded by Agencia Nacional de Investigaci\'on y Desarrollo - ANID through the program FONDECYT Iniciaci\'on grant no. 11180126. Additionally, GL thanks the support of Vicerrector\'ia de Investigaci\'on y Desarrollo Tecnol\'ogico at Universidad Cat\'olica del Norte.
|
{
"timestamp": "2021-05-11T02:21:50",
"yymm": "2105",
"arxiv_id": "2105.03970",
"language": "en",
"url": "https://arxiv.org/abs/2105.03970"
}
|
\section{Introduction}
\subsection{Overview}
\label{Section_Overview}
This text comes out of two circles of ideas. On one side, we are interested in $\beta$--ensembles of random matrix theory, where $\beta=1,2,4$ correspond to matrices with real/complex/quaternionic entries, but many distributions admit natural extensions to general real values of $\beta>0$. The theoretical physics tradition refers to the $\beta$ parameter as the inverse temperature. The matrices of interest are $N\times N$ and self--adjoint; we study the $N\to\infty$ asymptotic behavior of their eigenvalues on large scales. It was noticed by many authors (first, at classical $\beta=1,2,4$ and later for all $\beta>0$, see, e.g.\, \cite{BenArous-Guionnet, Johansson_CLT, BG} for the results of the latter type) that in the global regime, when we deal with all eigenvalues together and describe the asymptotics of their empirical measures through Laws of Large Numbers and Central Limit Theorems, the only dependence of the answers on $\beta$ is in simple normalization prefactors. In other words, the limits as $N\to\infty$ essentially do not depend on $\beta$, as long as $\beta>0$ remains fixed. Recently, it was shown that the situation changes if one varies $\beta$ together with $N$ in such a way that $\beta N$ tends to a constant $2 \gamma>0$ as $N\to\infty$ (high-temperature regime). \cite{ABG,ABMV,TT_Jacobi} prove that for all classical ensembles of random matrices (Gaussian/Wigner, Laguerre/Wishart, and Jacobi/MANOVA) there is a different Law of Large Numbers in the high-temperature regime, and the resulting limit shapes non-trivially depend on the $\gamma$ parameter. A subsequent wave produced many more results in the $\beta N\to 2 \gamma$ asymptotic regime, such as the study of local statistics in \cite{KS,BGP,Pa}, or of central limit theorems in \cite{NT,HL}, or of the loop equations in \cite{FM}, or of the spherical integrals in \cite{MP}, or of the 2D systems in \cite{AB}, or of connections to the Toda chain in \cite{Spohn}, or of dynamic versions in \cite{NTT}; this list is very far from complete, and we refer to the previously mentioned articles for further references.
From another side, a classical tool of the probability theory for establishing asymptotic theorems is by using the characteristic functions or Fourier transforms. In the last 10 years, a Fourier approach has been developed for the strongly correlated $N$--particle systems (with distributions of random-matrix type) in the series of papers \cite{GP, BuG1, BuG2, BuG3, NovakM, Huang, GS, C,Ahn}. The central idea is to replace the exponents in the Fourier transform by symmetric functions of the representation-theoretic origin (such as Schur symmetric polynomials or multivariate Bessel functions) and to further connect the partial derivatives of the logarithm of the new transform to the asymptotic behavior of the particle system (mostly, in the global regime) by using differential operators diagonalized by these symmetric functions.
\smallskip
In this article we develop a theory of integral transforms of $N$--tuples of real numbers (which should be thought of as eigenvalues of a random $N\times N$ matrix) using multivariate Bessel functions of general parameter $\theta=\tfrac{\beta}{2}>0$ and generalizing conventional Fourier transform at $\theta=0$; such transforms are also known as symmetric Dunkl transforms in the special functions literature, see \cite{A} for a review. We prove a very general theorem stating that the partial derivatives of the logarithms of our transforms at $0$ have prescribed limits as $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$ if and only if the associated random $N$--tuples satisfy a form of the Law of Large Numbers as $N\to\infty$, see Theorem \ref{thm_small_th}. In our theory these partial derivatives play the same role as cumulants in classical probability and free cumulants in the free probability. We further develop a combinatorial theory of our new $\gamma$--cumulants in Theorems \ref{theorem_cumuls_moms} and \ref{thm:mom_cums2}.
We present several applications of our theory:
\begin{itemize}
\item We recover previous results about Gaussian and Laguerre $\beta$--ensembles of random matrices as $\beta\to 0$, $N\to\infty$, $\beta N\to2\gamma$, and recast them in the framework of $\gamma$--cumulants, see Section \ref{Section_GbE}, Example \ref{laguerre_exam}, and Remark \ref{Remark_Gauss_Laguerre}.
\item We investigate eigenvalues of the sum of two independent self--adjoint matrices in the limit $\beta N\to 2\gamma$. We prove the Law of Large Numbers in this regime and encounter a new operation of $\gamma$-convolution, interpolating between usual convolution at $\gamma=0$ and free (additive) convolution at $\gamma=\infty$, see Theorem \ref{Theorem_gamma_convolution}.
\item We obtain the Law of Large Numbers for ergodic Gibbs measures on the $\beta$--corners branching graph of \cite{OV, AN} in the regime $\beta N\to 2\gamma$, see Theorem \ref{Theorem_ergodic}. The limits are infinitely-divisible with respect to $\gamma$--convolution.
\item We find that each probability measure $\mu$ gives rise to a 1-parametric family of probability measures $\mu^{\tau, \gamma}$, $\tau\in [1,+\infty)$, which are $\beta N\to 2\gamma$ limits of empirical measures of spectra of the $\lfloor N/\tau\rfloor\times \lfloor N/\tau\rfloor$ submatrices of $N\times N$ matrices whose spectra approximate $\mu$ as $N\to\infty$. An intriguing property of the family is that all these measures are constructed from the same sequence of numbers, which are interpreted as the $\tfrac{\gamma}{\tau}$--cumulants of $\mu^{\tau, \gamma}$.
\end{itemize}
\subsection{Addition of matrices as $\theta=\tfrac{\beta}{2}\to 0$}
\label{Section_addition_intro}
Rather than explaining our results in the most general and abstract setting, we focus on describing a particular application which was the original motivation for this work: the addition of random matrices. We start from a classical question. Let $A$ and $B$ be two self-adjoint $N\times N$ matrices with (real) eigenvalues $a_1\le a_2\le\dots\le a_N$ and $b_1\le b_2\le \dots \le b_N$, respectively. What can we say about the eigenvalues $c_1\le c_2\le \dots\le c_N$ of the sum $C=A+B$?
The deterministic version of this problem asks to describe all possible values for $c_1\le \dots\le c_N$ if $A$ and $B$ are allowed to vary arbitrarily while preserving their eigenvalues. This question was first posed by Weyl \cite{Weyl} in 1912 and it took the entire twentieth century before it was completely resolved, see \cite{KT} for a review. The answer is given by a convex set determined by the equality $\sum_{i=1}^N c_i=\sum_{i=1}^N a_i + \sum_{i=1}^N b_i$ (coming from ${\rm Trace}(C)={\rm Trace}(A)+{\rm Trace}(B)$) and a large list of inequalities satisfied by $c_1,\dots,c_N$: the simplest ones are well-known, for instance, $c_N\le a_N+b_N$, but there are many more delicate relations.
The stochastic version of the same problem starts from random and independent matrices $A$ and $B$. We assume that $A$ is sampled from the uniform measure on the set of all matrices with prescribed eigenvalues\footnote{Say, we deal with complex Hermitian matrices. Then this set is an orbit of the unitary group $U(N)$ under the action by conjugations, and the uniform measure on the orbit is the image of the Haar (uniform) measure on $U(N)$ with respect to this action.} $a_1\le\dots\le a_N$ and, similarly, $B$ is a uniformly random matrix with eigenvalues $b_1\le\dots\le b_N$. Then the eigenvalues $c_1\le \dots\le c_N$ are random and we would like to obtain some description of them, with the most interesting questions pertaining to the situation of a very large $N$. The first asymptotic answer as $N\to\infty$ was obtained by Voiculescu in the context of the free probability theory.
\begin{thm}[\cite{Vo3}; see also \cite{Col, ColSn}] \label{Theorem_Voi} Suppose that $A$ and $B$ are independent $N\times N$ uniformly random self-adjoint matrices with spectra $a_1(N)\le \dots \le a_N(N)$ and $b_1(N)\le\dots\le b_N(N)$, respectively, and let $c_1(N)\le \dots \le c_N(N)$ be the (random) eigenvalues of $C=A+B$. Suppose that for two probability measures $\mu_A$, $\mu_B$, we have:
$$
\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^N \delta_{a_i(N)}=\mu_A,\qquad
\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^N \delta_{b_i(N)}=\mu_B.
$$
Then the random empirical measures $\frac{1}{N}\sum_{i=1}^N \delta_{c_i(N)}$ converge as $N\to\infty$ (weakly, in probability) to a deterministic measure $\mu_C:=\mu_A \boxplus \mu_B$, which is called the free convolution of $\mu_A$ and $\mu_B$.
\end{thm}
In order to use this theorem, it is important to be able to efficiently describe the measure $\mu_A\boxplus \mu_B$. Let us briefly present two points of view on such description and refer to textbooks \cite{NS,MS} for more details. The first point of view is analytic and it relies on the notion of the Voiculescu $R$--transform of a probability measure $\mu$, defined through:
$$
R_\mu(z)=(G_\mu(z))^{(-1)}-\frac{1}{z},\qquad G_\mu(z)=\int_{\mathbb R} \frac{1}{z-x} \mu(dx),
$$
where $G_\mu(z)$ is the Stieltjes transform of $\mu$ and $(G_\mu(z))^{(-1)}$ is the functional inverse. For a compactly supported $\mu$, $R_\mu(z)$ is holomorphic in a complex neighborhood of $0$. The measure $\mu_A\boxplus \mu_B$ is determined by:
\begin{equation}
\label{eq_free_conv_R}
R_{\mu_A\boxplus \mu_B}(z)=R_{\mu_A}(z)+R_{\mu_B}(z).
\end{equation}
The relation \eqref{eq_free_conv_R} is a free probability version of the linearization of conventional convolution by logarithms of the characteristic functions: if $\xi$ and $\eta$ are independent random variables, then
\begin{equation}
\label{eq_conv_lin}
\ln \mathbb E e^{\mathbf i t (\xi+\eta)}= \ln \mathbb E e^{\mathbf i t \xi}+ \ln \mathbb E e^{\mathbf i t \eta}.
\end{equation}
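As a simple illustration of \eqref{eq_free_conv_R}: for the standard semicircle law $\mu_{sc}(dx)=\frac{1}{2\pi}\sqrt{4-x^2}\,\mathbf 1_{[-2,2]}\,dx$ one computes $G_{\mu_{sc}}(z)=\tfrac12\left(z-\sqrt{z^2-4}\right)$, hence $(G_{\mu_{sc}})^{(-1)}(z)=z+\tfrac{1}{z}$ and $R_{\mu_{sc}}(z)=z$; therefore the free convolution of two semicircle laws is again a semicircle law, with variances adding.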
An alternative combinatorial approach to the free convolution uses free cumulants of a probability measure $\mu$ denoted $\kappa^\mu_n$, $n=1,2,\dots$. They are defined as certain explicit polynomials in the moments of the measure $\mu$. Simultaneously, the free cumulants are coefficients of the Taylor-series expansion of $R_\mu(z)$ at the origin, so \eqref{eq_free_conv_R} gets restated as
\begin{equation}
\label{eq_free_conv_cum}
\kappa^{\mu_A\boxplus \mu_B}_n=\kappa^{\mu_A}_n+\kappa^{\mu_B}_n, \quad n=1,2,\dots.
\end{equation}
This relation is a free probability version of the statement that conventional cumulants of a sum of independent random variables are sums of the cumulants of the summands.
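For instance, in terms of the moments $m_k=\int_{\mathbb R} x^k\,\mu(dx)$, the first few free cumulants are
\[
\kappa^\mu_1=m_1,\quad \kappa^\mu_2=m_2-m_1^2,\quad \kappa^\mu_3=m_3-3m_1m_2+2m_1^3,\quad \kappa^\mu_4=m_4-4m_1m_3-2m_2^2+10m_1^2m_2-5m_1^4;
\]
the first three coincide with the classical cumulants, while the free and classical cumulants differ starting from the fourth.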
\bigskip
Note that in Voiculescu's Theorem \ref{Theorem_Voi} we never specified whether we deal with real symmetric, or complex Hermitian, or quaternionic Hermitian random matrices. And in fact, the theorem remains exactly the same in all these settings, which are usually referred to as the $\beta=1,2,4$ cases in the random matrix literature. What we would like to do is to go one step further and to extend the setting of Theorem \ref{Theorem_Voi} to the general $\beta$ setting. However, there is no (skew-)field of general real dimension $\beta>0$, and therefore, there are no independent random matrices $A$ and $B$ over such a field, which we could add. Hence, we first need to address a question:
\begin{question}
What does it mean to add two independent self-adjoint $\beta$-random matrices $A$ and $B$?
\end{question}
Our answer to this question is based on the Fourier point of view on the addition of matrices. Suppose that $Q=[Q_{ij}]_{i,j=1}^N$ is a random real symmetric matrix. Its Fourier-Laplace transform is a function of another (deterministic) matrix $X$ given by:
\begin{equation}
\chi_Q(X)=\mathbb E \exp\Bigl({\rm Trace} (XQ)\Bigr)=\mathbb E \exp\biggl(\,\sum_{i,j=1}^N x_{ij} Q_{ji}\,\biggr).
\end{equation}
Let us assume that the law of $Q$ is invariant under conjugations by orthogonal matrices (which is the case for all three matrices $A$, $B$, and $C$ in the Theorem \ref{Theorem_Voi}). In addition assume that the matrix $X$ is normal (i.e.\ $X X^*=X^* X$), which implies that $X$ can be diagonalized by orthogonal conjugations\footnote{If we know that $Q$ is invariant under orthogonal conjugations and we know the values of $\chi_Q(X)$ for all normal $X$, then we can uniquely determine the law of $Q$. In fact it is sufficient to take $X$ to be symmetric (or $\mathbf i$ times symmetric).}. In this situation, conjugating $X$ and noting invariance of the trace, we see that $\chi_Q(X)$ is a function of the eigenvalues $x_1,\dots,x_N$ of $X$ and we can write it as $\chi_Q(x_1,\dots,x_N)$.
If we specialize to the case when $Q$ is a uniformly random real symmetric matrix with deterministic eigenvalues $q_1\le \dots\le q_N$, then $\chi_Q$ is known as a \emph{multivariate Bessel function} at $\theta=\tfrac{\beta}{2}=\tfrac12$:
\begin{equation}
\label{eq_Bessel_as_Fourier}
\chi_Q(x_1,\dots,x_N)= B_{(q_1,\dots,q_N)}\bigl(x_1,\dots,x_N; \, \tfrac{1}{2}\bigr).
\end{equation}
Going further, the definition of $\chi_Q$ and linearity of the trace immediately imply that for independent conjugation-invariant matrices $A$ and $B$ we have
\begin{equation}
\label{eq_Fourier_product}
\chi_{A+B}(x_1,\dots,x_N)=\chi_A(x_1,\dots,x_N) \chi_{B}(x_1,\dots,x_N).
\end{equation}
Moreover, we can take \eqref{eq_Fourier_product} as a \emph{definition} of $A+B$: the matrix $A+B$ is defined as a random $N\times N$ real symmetric matrix, whose law is invariant under orthogonal conjugations, and whose Fourier-Laplace transform is given by the right-hand side of \eqref{eq_Fourier_product}.
The same argument can be given for complex Hermitian matrices and for quaternionic Hermitian matrices with the only difference being that the parameter of the Bessel functions in \eqref{eq_Bessel_as_Fourier} changes to $\theta=1$ and $\theta=2$, respectively. But in fact, the multivariate Bessel functions make sense for any real $\theta>0$, see Section \ref{sec:bessel} for a formal definition. They are intimately connected to many topics, in particular, they are eigenfunctions of rational Calogero-Sutherland Hamiltonian and of (symmetric versions of) Dunkl operators; they are also limits of Jack and Macdonald symmetric polynomials.
We are now ready to define the general $\beta$-analogue of addition of random matrices:
\begin{definition} \label{Def_theta_addition} Fix $\theta=\tfrac{\beta}{2}>0$. Given deterministic $N$--tuples of reals $\mathbf a=(a_1\le\dots\le a_N)$ and $\mathbf b=(b_1\le \dots\le b_N)$, we define a random $N$--tuple $\mathbf c=(c_1\le \dots \le c_N)$ by specifying its law through
\begin{equation}
\label{eq_def_theta_addition}
\mathbb E B_{(c_1,\dots,c_N)}(x_1,\dots,x_N;\, \theta)= B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\, \theta) B_{(b_1,\dots,b_N)}(x_1,\dots,x_N;\, \theta), \qquad x_1,\dots,x_N\in\mathbb C.
\end{equation}
We say that $\mathbf c$ is the eigenvalue distribution for the $\theta$--sum of independent Hermitian matrices with spectra $\mathbf a$ and $\mathbf b$. We write $\mathbf c= \mathbf a +_{\theta} \mathbf b$.
\end{definition}
Let us remark that the uniqueness of the law of $(c_1,\dots,c_N)$ defined through \eqref{eq_def_theta_addition} is not hard to prove by expressing expectations of various test functions through expectations of multivariate Bessel functions.\footnote{For a reader who is not familiar with the theory of multivariate Bessel functions, we remark that at $N=1$, $B_{(a)}(z;\, \theta)=\exp(az)$. Hence, choosing $z=\mathbf i t$, the Bessel functions turn into the exponents $\exp (\mathbf i a t)$ and uniqueness turns into the well-known uniqueness of a measure with a given Fourier transform.}
In the existence part, there is a caveat. It is known that \eqref{eq_def_theta_addition} defines $\mathbf c$ as a compactly supported generalized function (or distribution), see \cite{Tri}, \cite[Section 3.6]{A}. It is also straightforward to see that the total mass of the distribution of $\mathbf c$ is $1$ by inserting $x_1=\dots=x_N=0$ into \eqref{eq_def_theta_addition}. However, the \emph{positivity} of the law of $\mathbf c$, i.e.\ the fact that there exists a \emph{(positive) measure} on $\mathbf c$'s, such that \eqref{eq_def_theta_addition} holds, is a well-known open question (at $\beta=1,2,4$ the positivity is automatic, since we have a construction for $\mathbf c$ as eigenvalues of bona fide random matrices). The positivity conjecture and its generalizations have been mentioned in \cite[Conjecture 8.3]{Stanley_Jack}, \cite{Rosler_pos}, \cite[Conjecture 2.1]{GM}, \cite[Section 1.2]{Matveev} and are believed to be true, yet we do not address them in our paper. Instead we state our results in such a way that they continue to hold even if the conjecture were false.
\medskip
The previous paragraph argues that the binary operation $+_\th$ takes two deterministic $N$--tuples $\mathbf{a}$, $\mathbf{b}$ as input and outputs a distribution $\mathbf{c}$ on $\mathbb{R}^N$ (though conjecturally $\mathbf{c}$ is a random $N$--tuple).
Even though there is no matrix interpretation for the operation $+_\th$ for general values of $\th>0$, it is helpful to think of $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ as spectra of (nonexistent) self-adjoint $N\times N$ matrices.
From our experience with random matrix theory, the following question is natural: How does $(\mathbf a,\mathbf b)\mapsto \mathbf a +_\theta \mathbf b$ behave as $N\to\infty$? While this has not been written down in any published text, there are strong reasons to believe that as long as $\theta>0$ is kept fixed, we get the same free convolution as in Theorem \ref{Theorem_Voi}.\footnote{The reasons are: widespread independence of the Law of Large Numbers from the value of $\beta$ for the random matrix $\beta$-ensembles, cf.\ \cite{BenArous-Guionnet, Johansson_CLT, BG}; the same answer in Theorem \ref{Theorem_Voi} for three values $\theta=\tfrac{\beta}{2}=\tfrac{1}{2},1,2$; existence of $\theta$--independent observables for $\mathbf a+_\theta \mathbf b$, see \cite[Theorem 1.1]{GM}; $\theta$--independence in a discrete version of the same problem, see \cite{Huang}.} There are two boundary cases which need separate consideration: $\theta\to\infty$ and $\theta\to 0$. The former was addressed in \cite{GM}, where it was proven that for fixed $N$, the $\theta\to\infty$ limit of $\mathbf a+_\theta \mathbf b$ is a deterministic operation known as finite free convolution; it was further shown in \cite{Marcus} that as $N\to\infty$ we again recover the free convolution of Theorem \ref{Theorem_Voi}. The final case $\theta\to 0$ turns out to be very different. The $\theta=0$ version of the multivariate Bessel function is a simple symmetric combination of exponents:
\begin{equation}
\label{eq_Bessel_at_0}
B_{(q_1,\dots,q_N)}\bigl(x_1,\dots,x_N; \, 0\bigr)=\frac{1}{N!}\sum_{\sigma\in S(N)} \prod_{i=1}^N \exp\bigl( x_i q_{\sigma(i)}\bigr),
\end{equation}
where the sum goes over $N!$ different permutations of $\{1,2,\dots,N\}$. The formula \eqref{eq_Bessel_at_0} implies a transparent probabilistic interpretation: $\mathbf c=\mathbf a+_0 \mathbf b$ is obtained by choosing a permutation $\sigma\in S(N)$ uniformly at random and letting $(c_1,\dots,c_N)$ be $(a_1+b_{\sigma(1)},\dots, a_N+b_{\sigma(N)})$ rearranged in increasing order. From this interpretation it is not hard to see that as $N\to\infty$ the operation $\mathbf a+_0 \mathbf b$ becomes the usual convolution of the empirical measures corresponding to $\mathbf a$ and to $\mathbf b$.
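For a reader who wishes to experiment, this interpretation is easy to simulate; here is a minimal sketch in Python (our own illustration, assuming \texttt{numpy} is available; the particular choices of $\mathbf a$ and $\mathbf b$ are arbitrary):
\begin{verbatim}
# One sample of c = a +_0 b: add a uniformly permuted copy of b to a.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
a = np.sort(rng.standard_normal(N))           # empirical measure close to N(0,1)
b = np.sort(rng.uniform(-1.0, 1.0, size=N))   # empirical measure close to Unif(-1,1)

c = np.sort(a + rng.permutation(b))           # a +_0 b, rearranged increasingly

# As N grows, the moments of c approach those of X + Y for independent
# X ~ N(0,1) and Y ~ Unif(-1,1); for instance E[(X+Y)^2] = 1 + 1/3.
print(np.mean(c**2))                          # approximately 1.333
\end{verbatim}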
Hence, we see a discontinuity in the $N\to\infty$ behavior of the operation $(\mathbf a,\mathbf b)\mapsto (\mathbf a+_\theta \mathbf b)$: at $\theta=0$ the limit is described by the conventional convolution, while at $\theta>0$ the limit is described by the free convolution. This motivates us to consider an intermediate scaling regime, in which $\theta$ goes to $0$ as $N\to\infty$. This is the topic of the following Theorem \ref{Theorem_gamma_convolution}, which is proven in Section \ref{Section_applications}.
\begin{definition}
\label{Def_mom_convergence}
We say that real random vectors $\mathbf a(N)=(a_1(N)\le a_2(N)\le \dots \le a_N(N))$ converge as $N\to\infty$ in the sense of moments, if there exists a sequence of real numbers $\{m_k\}_{k\ge 1}$, such that for any $s=1,2,\dots$ and any $k_1,k_2,\dots,k_s\in \mathbb Z_{\ge 1}$, we have:
\begin{equation}
\label{eq_moments_convergence}
\lim_{N\to\infty} \mathbb E\left[\prod_{i=1}^s\left( \frac{1}{N} \sum_{j=1}^N \bigl(a_j(N)\bigr)^{k_i}\right)\right] =\prod_{i=1}^s m_{k_i}.
\end{equation}
In this situation we write $\displaystyle \lim_{N\to\infty} \mathbf a(N)\stackrel{m}{=} \{m_k\}_{k\ge 1}$.
\end{definition}
Note that \eqref{eq_moments_convergence} implies that the random empirical measures $\frac{1}{N} \sum_{i=1}^N \delta_{a_i(N)}$ converge as $N\to\infty$ weakly, in probability, towards a deterministic measure with moments $m_k$, as long as the moments problem associated with $\{m_k\}_{k\ge 1}$ has a unique solution. Also note that we can use Definition \ref{Def_mom_convergence} in situations where the positivity of the distribution of $\mathbf a(N)$ is unknown: we may interpret $\mathbb E$ in \eqref{eq_moments_convergence} as the integral with respect to the distribution of $\mathbf a(N)$.
\begin{thm} \label{Theorem_gamma_convolution} Fix $\gamma>0$ and suppose that $\theta>0$ varies with $N$ in such a way that $\theta N\to \gamma$ as $N\to\infty$. Take two sequences of random vectors $\mathbf a(N)$, $\mathbf b(N)$, $N=1,2,\dots$, such that
$$
\lim_{N\to\infty} \mathbf a(N)\stackrel{m}{=} \{ m_k^{\mathbf a} \}_{k\ge 1}, \qquad \lim_{N\to\infty} \mathbf b(N)\stackrel{m}{=} \{ m_k^{\mathbf b} \}_{k\ge 1}.
$$
In addition, assume that $\mathbf a(N)$, $\mathbf b(N)$ satisfy the tail condition of Definition \ref{df_decaying}. Then
$$
\lim_{\begin{smallmatrix} N\to\infty\\ \theta N\to \gamma\end{smallmatrix} } \bigl(\mathbf a(N)+_\theta \mathbf b(N)\bigr) \stackrel{m}{=} \{\tilde m_k \}_{k\ge 1},
$$
where we call $\{\tilde m_k\}_{k\ge 1}$ the $\gamma$-convolution of $\{m_k^\mathbf a\}_{k\ge 1}$ and $\{m_k^\mathbf b\}_{k\ge 1}$, and denote it by
$$
\{\tilde m_k \}_{k\ge 1}=\{ m_k^\mathbf a \}_{k\ge 1} \boxplus_\gamma \{ m_k^\mathbf b \}_{k\ge 1}.
$$
\end{thm}
\bigskip
We further investigate the $\gamma$--convolution and establish the following properties:
\begin{enumerate}
\item There exist quantities called \emph{$\gamma$--cumulants}, with the $l$th $\gamma$--cumulant $\kappa_l^{(\gamma)}$ being a homogeneous polynomial of degree $l$ in the moments $m_1, \dots, m_l$ (where $m_k$ is treated as a variable of degree $k$), such that for each $l=1,2,\dots$
\begin{equation}
\label{eq_convolution_cumulants}
\kappa_l^{(\gamma)} \left[ \{ m_k^\mathbf a \}_{k\ge 1} \boxplus_\gamma \{ m_k^\mathbf b \}_{k\ge 1} \right]=
\kappa_l^{(\gamma)} \left[\{ m_k^\mathbf a \}_{k\ge 1}\right] +\kappa_l^{(\gamma)}\left[ \{ m_k^\mathbf b \}_{k\ge 1}\right].
\end{equation}
\item Each moment $m_k$ can be expressed as a polynomial in $\kappa_l^{(\gamma)}$, $1\le l\le k$, whose coefficients are explicit polynomials in $\gamma$ with positive integer coefficients, see Theorem \ref{theorem_cumuls_moms}.
\item A generating function of the $\gamma$-cumulants $\kappa_l^{(\gamma)}$ is related to a generating function of the moments $m_k$ through a simple relation, see Theorem \ref{thm:mom_cums2}.
\item As $\gamma\to 0$ the $\gamma$--convolution turns into the conventional convolution (i.e., if $\{m_k^\mathbf a\}_{k\ge 1}$ and $\{m_k^\mathbf b\}_{k\ge 1}$ are moments of two independent random variables, then $\lim_{\gamma\to 0}\,\{m_k^\mathbf a \}_{k\ge 1} \boxplus_\gamma \{ m_k^\mathbf b \}_{k\ge 1}$ gives moments of their sum). After proper renormalization the $\gamma$--cumulants turn into conventional cumulants, see Section \ref{Section_limit_to_0}.
\item As $\gamma\to \infty$ the $\gamma$--convolution turns into the free convolution of Theorem \ref{Theorem_Voi}. After proper renormalization the $\gamma$--cumulants turn into the free cumulants, see Section \ref{Section_limit_to_infinity}.
\end{enumerate}
\begin{remark}
We do not discuss in this text the \emph{microscopic limits} of $\mathbf a(N)+_\theta \mathbf b(N)$ as $N\to\infty$, $\theta N\to \gamma$, i.e.\ the asymptotic questions in which individual eigenvalues remain visible in the limit. Yet, we expect to see the Poisson point process in the bulk of the spectrum, as hinted by general universality considerations and the $\theta\to 0$ asymptotic results in \cite{KS,AD,BGP}.
\end{remark}
\subsection{Law of Large Numbers through Bessel generating functions}
Let us now outline the main technical tool underlying the proof of Theorem \ref{Theorem_gamma_convolution} and other asymptotic results mentioned at the end of Section \ref{Section_Overview}.
Suppose that $\mathbf q=(q_1\le q_2\le\dots \le q_N)$ is a random $N$--tuple of reals. We define its Bessel generating function (BGF) through:
$$
G_\theta(x_1,\dots,x_N; \mathbf q)=\mathbb E_{\mathbf q}\left[ B_{(q_1,\dots,q_N)}(x_1,\dots, x_N;\, \theta)\right].
$$
Our main result, Theorem \ref{thm_small_th}, establishes an equivalence of the following two conditions for random sequences $\mathbf q(N)=(q_1(N),\dots,q_N(N))$ as $N\to\infty$ and $\theta\to 0$ in such a way that $\theta N\to \gamma$:
\begin{enumerate}
\item Partial derivatives of arbitrary order in $x_1$ of $\ln\bigl(G_\theta(x_1,\dots,x_N; \mathbf q(N))\bigr)$ at $(0,\dots,0)$ converge to prescribed limits and partial derivatives in two (or more) different variables converge to $0$.
\item Random vectors $\mathbf q(N)$ converge in the sense of moments, as in Definition \ref{Def_mom_convergence}.
\end{enumerate}
The same theorem also establishes explicit polynomial formulas connecting the limiting values of the partial derivatives to the limiting values of the moments. The benefit of Theorem \ref{thm_small_th} is that it allows us to convert probabilistic information about $\mathbf q(N)$ into analytic information about partial derivatives of its BGF and vice versa. For instance, Theorem \ref{Theorem_gamma_convolution} is then proven by three straightforward applications of Theorem \ref{thm_small_th}: to $\mathbf q(N)=\mathbf a(N)$, to $\mathbf q(N)=\mathbf b(N)$, and to $\mathbf q(N)=\mathbf a(N)+_\theta \mathbf b(N)$. This and several other applications of Theorem \ref{thm_small_th} are detailed in Section \ref{Section_applications}.
Methods similar to the ones used in our proof of Theorem \ref{thm_small_th} have led to recent results in the literature, and even though our Theorem \ref{thm_small_th} bears resemblance to these results, there are important differences. For example, in \cite{BuG3} Bufetov and the third author developed a theory of Schur generating functions (SGF) for discrete $N$--particle systems as $N\to\infty$ (see also \cite{Huang} for an extension): they show that asymptotic information on partial derivatives of logarithms of SGFs is in correspondence with asymptotic information on the moments in the Law of Large Numbers as in Definition \ref{Def_mom_convergence} and with covariances in a version of the Central Limit Theorem for global fluctuations. This is different from our Theorem \ref{thm_small_th}: on the analytic side \cite{BuG3} requires more refined control on partial derivatives, and on the probabilistic side it requires Central Limit Theorems in addition to Laws of Large Numbers.
In another similar framework, related to the multiplication of random matrices, \cite{GS} established a statement in one direction: control on partial derivatives implies the Law of Large Numbers and Central Limit Theorem, but in that framework a statement in the opposite direction remains out of reach.
Going further, we show in Section \ref{Section_Appendix_LLN} that an analogue of Theorem \ref{thm_small_th} with fixed (rather than tending to $0$) $\theta$ fails: there is no direct correspondence between partial derivatives of the logarithm of the BGF and asymptotics of moments; in such a situation one probably needs to use more complicated (and not yet understood) combinations of mixed partial derivatives in several variables. Thus, Theorem \ref{thm_small_th} is not an extension of the results of previous papers, but rather a brand new statement.
\subsection{Connection to $\theta=\tfrac{\beta}{2}\to 0$ limits}
One intriguing aspect of general $\beta$ random matrix theory is the existence of dualities between the parameters $\theta$ and $1/\theta$ (i.e.\ between $\beta$ and $4/\beta$). In the theory of symmetric polynomials such a duality manifests itself through the existence of an automorphism of the algebra of symmetric functions, which transposes the label of Jack symmetric polynomials and simultaneously inverts $\theta$, see \cite[Section 3]{Stanley_Jack}. In the study of classical ensembles of random matrices the duality appears as a symmetry in expectations of power sums of eigenvalues, see, e.g., \cite[Section 2.1]{DE}, \cite[Section 4.4]{FD}, \cite{For_du}, and references therein.
In our context, the duality suggests to look for a relation between $\theta\to 0$ limits of our paper and $\theta\to\infty$ limits. While this relation is not yet fully understood, we observe it in two forms.
First, the limit of the empirical measures of Gaussian $\beta$--ensembles as $\beta\to 0$, $N\to\infty$, $\beta N\to 2\gamma$ turns out to coincide with the orthogonality measure of the associated Hermite polynomials, see Remark \ref{Remark_aHerm}. Simultaneously, the same polynomials play an important role in the study of centered fluctuations of Gaussian $\beta$--ensembles as $\beta\to \infty$ with $N$ kept fixed, see \cite[Section 4.5]{Gorin_Klept} and \cite{AHV}.
Second, let us fix $N=d$ and send $\theta\to\infty$. \cite[Theorem 1.2]{GM} claims that in this regime the operation $\mathbf a+_\theta \mathbf b$ turns into the finite free convolution, which is a deterministic binary operation on $d$--tuples of real numbers. Further, \cite{AP} introduced for each $d$ a family of $d$ finite free cumulants $\kappa^{\mathrm{ff}}_{1;d}, \kappa^{\mathrm{ff}}_{2;d},\dots,\kappa^{\mathrm{ff}}_{d;d}$, which depend on a $d$--tuple of real numbers and play the same role for the finite free convolution, as our $\gamma$--cumulants play for the $\gamma$--convolution. Comparing the generating function of finite free cumulants from \cite{AP}, \cite{Marcus}, with the generating function of $\gamma$-cumulants of our Theorem \ref{thm:mom_cums2}, one sees\footnote{One should compare \cite[(3.1), (4.2)]{AP} with our pair of equations \eqref{eq_cums_moments_2} and notice that the conventions are slightly different: $(d)_n$ is a falling factorial in \cite{AP} and $(\gamma)_n$ is a rising factorial in our work.
One can also directly compare the formulas for the first four cumulants of \eqref{ex_c_to_m} and \eqref{ex_m_to_c} with similar formulas above Corollary 4.3 in the journal version of \cite{AP}. We are grateful to Octavio Arizmendi and Daniel Perales for pointing out this connection to us.} that upon setting $\gamma=-d$, they are very similar and only differ by normalizations; see Section \ref{Section_gen_functions} for more details. However, it is important to note that in our setting $\gamma>0$, while in the setting of \cite{AP}, \cite{Marcus}, $d$ is a positive integer and, thus, $-d$ is a negative integer. Hence, a correct point of view is that our $\gamma$--cumulants and the finite free cumulants are analytic continuations of each other. It would be interesting to see whether this observation can be used to produce new formulas for finite free cumulants along the lines of our Theorem \ref{theorem_cumuls_moms}.
\subsection*{Acknowledgements} The authors would like to thank Alexey Bufetov and Greta Panova for helpful discussions. We are thankful to Maciej Do\l \c ega for pointing us to the articles \cite{D}, \cite{BDEG}, and for sending us a draft of a new version of the latter paper. We thank Octavio Arizmendi and Daniel Perales for directing us to their work \cite{AP}.
The work of V.G.\ was partially supported by NSF Grants DMS-1664619, DMS-1949820, by BSF grant 2018248, and by the Office of the Vice Chancellor for Research and Graduate Education at the University of Wisconsin--Madison with funding from the Wisconsin Alumni Research Foundation.
\section{Bessel generating functions}
We define here the Bessel generating function of a probability measure on $\mathbb{R}^N$ --- a one-parameter generalization of its characteristic function (or Laplace transform).
The real parameter $\th$ is assumed to be positive, with $\th\to 0$ corresponding to the usual characteristic function. In this section, $\theta>0$ remains fixed.
\subsection{Difference and differential operators}
\label{Section_operators}
We work with functions of $N$ variables $x_1,\dots,x_N$.
Denote the operator that permutes the variables $x_i$ and $x_j$ by $s_{i, j}$. For instance,
$$
[s_{1,2} f](x_1,x_2,x_3,x_4,\dots,x_N)=f(x_2,x_1,x_3,x_4,\dots,x_N).
$$
Define the \emph{Dunkl operators} by
\begin{equation}\label{dunkl_ops}
\mathcal D_i := \frac{\partial}{\partial x_i} + \theta\sum_{j : j\neq i}{\frac{1}{x_i - x_j}\circ (1 - s_{i, j})},\quad i = 1, \dots, N.
\end{equation}
These operators were introduced in \cite{Du}; see also \cite{Ki_lect,Rosler,Et_lect} for further studies. Their key property is commutativity:
$$\mathcal D_i\mathcal D_j = \mathcal D_j\mathcal D_i,\quad i, j = 1, \dots, N.$$
We often work with symmetrized versions of the Dunkl operators:
\begin{equation*}
\P_k := (\mathcal{D}_1)^k + \dots + (\mathcal{D}_N)^k,\quad k\in\mathbb{Z}_{\geq 1}.
\end{equation*}
Let $U\subseteq\mathbb{C}^N$ be any domain which is symmetric with respect to permutations of the axes.
If $f$ is a holomorphic function on $U$, then $\mathcal D_i f$ and $\P_k f$ are both well-defined and holomorphic on $U$.
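For concreteness, the following symbolic sketch in Python (our own illustration, assuming \texttt{sympy}; it is not part of the formal development) implements \eqref{dunkl_ops} on polynomials and verifies the commutativity on a sample polynomial:
\begin{verbatim}
# Dunkl operators on polynomials in x_1, ..., x_N, with a commutativity check.
import sympy as sp

N = 3
theta = sp.Symbol('theta')
x = sp.symbols('x1:%d' % (N + 1))

def dunkl(f, i):
    """Apply D_{i+1} (the index i is 0-based) to a polynomial f."""
    res = sp.diff(f, x[i])
    for j in range(N):
        if j != i:
            swapped = f.subs({x[i]: x[j], x[j]: x[i]}, simultaneous=True)
            # (f - s_{ij} f) is divisible by (x_i - x_j), so cancel() is exact
            res += theta * sp.cancel((f - swapped) / (x[i] - x[j]))
    return sp.expand(res)

f = x[0]**3 * x[1] + x[2]**2
print(sp.expand(dunkl(dunkl(f, 0), 1) - dunkl(dunkl(f, 1), 0)))   # prints 0
\end{verbatim}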
\smallskip
We also need the \emph{degree-lowering operators} $d_1, \dots, d_N$, which are defined on monomials by
\begin{equation}
\label{eq_lowering_operator}
d_i(x_1^{r_1}\cdots x_N^{r_N}) := \begin{cases} x_1^{r_1}\cdots x_i^{r_i - 1}\cdots x_N^{r_N}, & \text{if}\ r_i\in\mathbb{Z}_{\geq 1}, \\ 0, & \text{if}\ r_i = 0, \end{cases}
\end{equation}
and extended by linearity to the space of polynomials of $N$ variables.
They can be further extended to the ring of germs of analytic functions at the origin $(0, \cdots, 0)\in\mathbb{C}^N$.
\subsection{Multivariate Bessel functions}\label{sec:bessel}
A central role in our studies is played by the simultaneous eigenfunctions of the operators $\P_k$ known as \emph{multivariate Bessel functions}.
They are given by very explicit formulas, which we describe next.
For each $N=1,2,\dots$, a \emph{Gelfand--Tsetlin pattern of rank $N$} is an array $\{y_{i}^k\}_{1\le i \le k \le N}$ of real numbers satisfying
$y^{k+1}_i\le y^{k}_i \le y^{k+1}_{i+1}$ for all $1\le i \le k \le N-1$. Denote by $\mathcal G_N$ the space of all Gelfand--Tsetlin patterns of rank $N$.
\begin{definition}\label{def_betacorner} Fix $\theta>0$. The \emph{$\th$-corners process with top row $a_1<\dots<a_N$}
is the probability distribution on the arrays
$\{y^k_i\}_{1\leq i\leq k\leq N}\in \mathcal G_N$, such that $y^N_i=a_i$, $i=1,\dots,N$, and
the remaining $N(N-1)/2$ coordinates have the density
\begin{equation}
\label{eq_beta_corners_def}
\frac{1}{Z_{N; \th}} \cdot
\prod_{k=1}^{N-1} \left[\prod_{1\le i<j\le k} (y_j^k-y_i^k)^{2-2\theta}\right] \cdot \left[\prod_{a=1}^k \prod_{b=1}^{k+1}
|y^k_a-y^{k+1}_b|^{\theta-1}\right],
\end{equation}
where $Z_{N; \th}$ is the normalization constant:
\begin{equation}
\label{eq_normalization}
Z_{N; \th} =\left[\prod_{k=1}^N \frac{ \Gamma(\theta)^k}{\Gamma(k\theta)}\right] \cdot \prod_{1\le i < j \le N}
(a_j-a_i)^{2\theta-1}.
\end{equation}
\end{definition}
\begin{remark}
By taking limits (in the space of probability measures on $\mathcal G_N$), we can allow equalities and extend the definition to arbitrary $a_1\le a_2\le\dots\le a_N$.
\end{remark}
\begin{remark}
The distribution \eqref{eq_beta_corners_def} is the joint law of eigenvalues of principal corners of Hermitian conjugation-invariant real/complex/quaternion matrices at $\th=\frac{1}{2},1,2$, respectively, see \cite{Ner}. This connects Definition \ref{def_betacorner} to the Laplace-Fourier point of view of Section \ref{Section_addition_intro}.
\end{remark}
\begin{remark}
The calculation of the normalization constant for a general $\th > 0$ is contained in \cite{Ner} (the author there does far more general calculations; see also \cite[Lem. 2.1]{C} for a short derivation of $Z_{N; \th}$ from Anderson's integral identity \cite{And}).
\end{remark}
\begin{definition} \label{Definition_Bessel_function}
The \emph{multivariate Bessel function} $B_{(a_1, \ldots, a_N)}(x_1, \ldots, x_N; \theta)$ is defined as the following (partial) Laplace transform of the
$\th$-corners process with top row $(a_1,\dots,a_N)$ from Definition \ref{def_betacorner}:
\begin{equation}\label{eq_Bessel_combinatorial}
B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\,\theta)= \mathbb E_{\{y^k_i\}}\!\left[\exp\left(\sum_{k=1}^{N} x_k
\cdot \left(\sum_{i=1}^{k} y_i^k-\sum_{j=1}^{k-1} y_j^{k-1}\right) \right) \right]\!.
\end{equation}
The function $B_{(a_1, \dots, a_N)}(x_1, \dots, x_N; \th)$ is defined for any reals $a_1<\dots <a_N$ and any complex numbers $x_1, \dots, x_N$.
\end{definition}
Often, we will abbreviate multivariate Bessel function as MBF.
It follows from the definition that
\begin{equation*}
B_{(a_1, \dots, a_N)}(0, \dots, 0; \th) = 1.
\end{equation*}
Our definition is called the \emph{combinatorial formula} for the multivariate Bessel functions; to our knowledge, the formula \eqref{eq_Bessel_combinatorial} first appeared in \cite{GK}. There are several alternative definitions of these functions.
For example, from the algebraic combinatorics point of view, they can be defined as limits of (properly normalized) Jack symmetric polynomials. Then \eqref{eq_Bessel_combinatorial} is a limit of the combinatorial formulas for the Jack polynomials, cf.\ \cite[Section 4]{Ok_Olsh_shifted_Jack}.
The MBF $B_{(a_1, \cdots, a_N)}(x_1, \cdots, x_N; \th)$, which was defined for ordered tuples $a_1<\dots<a_N$, can be extended to weakly ordered tuples $a_1\le \dots\le a_N$ by continuity: there is no singularity on the diagonals $a_i=a_j$. In fact, much more is true: $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\th)$ admits an analytic continuation in the $2N+1$ variables $a_1, \dots, a_N$, $x_1, \dots, x_N, \th$, to an open subset of $\mathbb{C}^{2N+1}$ containing $\{(a_1, \cdots, a_N, x_1, \cdots, x_N, \th)\in\mathbb{C}^{2N+1} \mid \Re\th\ge 0\}$; see \cite{O}.
In particular, for a fixed $\th>0$, the MBF $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\th)$ is an entire function of the variables $a_1, \cdots, a_N, x_1, \cdots, x_N$.
Another important property is that the MBF $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\,\theta)$ is \emph{symmetric} in its arguments $x_1,\dots,x_N$. In the particular case $\theta=1$ (but not in general), the symmetry is transparent from the following determinantal formula, which arises as the evaluation of the Harish-Chandra-Itzykson-Zuber (HCIZ) integral:
\begin{equation}\label{eq_Bessel_1}
B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\,1)= 1!\cdot 2! \cdots (N-1)! \cdot \frac{\det\bigl[ e^{a_i x_j}\bigr]_{i,j=1}^N}{\prod_{i<j} (x_i-x_j)(a_i-a_j)}.
\end{equation}
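As a quick numerical cross-check of Definition \ref{Definition_Bessel_function} against \eqref{eq_Bessel_1}: for $N=2$ and $\theta=1$ the density \eqref{eq_beta_corners_def} makes the middle entry $y^1_1$ uniform on $[a_1,a_2]$, so \eqref{eq_Bessel_combinatorial} becomes a one-dimensional expectation. The following sketch (our own Python code, assuming \texttt{numpy}; the values of $a_i$ and $x_j$ are arbitrary) compares the two formulas:
\begin{verbatim}
# Monte Carlo for the combinatorial formula vs. the HCIZ determinant, N = 2.
import numpy as np

rng = np.random.default_rng(1)
a1, a2 = -0.7, 1.3
x1, x2 = 0.4, -0.2

# combinatorial formula: E[ exp(x1*y + x2*(a1 + a2 - y)) ], y ~ Unif[a1, a2]
y = rng.uniform(a1, a2, size=2_000_000)
mc = np.mean(np.exp(x1 * y + x2 * (a1 + a2 - y)))

# determinantal (HCIZ) formula for N = 2, including the 1! prefactor
det = (np.exp(a1 * x1 + a2 * x2) - np.exp(a1 * x2 + a2 * x1)) \
      / ((x1 - x2) * (a1 - a2))
print(mc, det)   # the two values agree to Monte Carlo accuracy
\end{verbatim}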
A link of MBF to the operators of Section \ref{Section_operators} is given by the following statement.
\begin{thm}[\cite{O}]\label{thm:opdam}
For each $k=1,2,\dots,$ and each $N$--tuple of reals $a_1\le a_2 \le \dots \le a_N$,
\begin{equation}\label{eqn:hypersystem}
\P_k B_{(a_1,\dots,a_N)} = \left(\sum_{i=1}^N{a_i^k}\right)\cdot B_{(a_1,\dots,a_N)}.
\end{equation}
\end{thm}
\subsection{Bessel generating functions} \label{Section_BGF}
Let $\mathcal M_N$ be the convex set of Borel probability measures on ordered $N$--tuples $a_1\le a_2\le \dots\le a_N$ of real numbers.
\begin{df}
The \emph{Bessel generating function} (or BGF) of $\mu\in\mathcal M_N$ is defined as a function of the variables $x_1, \dots, x_N$ given by:
\begin{equation}\label{BGF_def}
G_\th(x_1, \dots, x_N; \mu) := \int_{a_1\le a_2\le \dots \le a_N}{B_{(a_1, \dots, a_N)}(x_1, \dots, x_N; \theta)\mu(\d a_1, \dots, \d a_N)}.
\end{equation}
\end{df}
Because the MBFs $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N;\theta)$ are symmetric functions of the variables $x_1,\dots, x_N$, so is $G_\th(x_1,\dots,x_N; \mu)$. Moreover,
$$
G_\th(0,\dots,0; \mu) = 1,
$$
as follows from $\mu$ being a probability measure and the normalization $B_{(a_1,\dots,a_N)}(0,\dots,0;\theta)=1$.
It will be important for us to assume that a BGF is defined in a \emph{complex} neighborhood of $(0,\dots,0)$. Unfortunately, this property fails for general measures, hence we need to restrict the class of measures that we deal with.\footnote{It is plausible that many of the results of our text extend to the situations where this restrictive condition fails.}
\begin{df}\label{df_decaying}
We say that a measure $\mu\in\mathcal M_N$ is \emph{exponentially decaying} with exponent $R>0$, if
\begin{equation}\label{int_bounded}
\int_{a_1\le a_2\le\dots\le a_N}{e^{N R \max_i |a_i| }\mu(\d a_1, \dots, \d a_N)} < \infty.
\end{equation}
\end{df}
\begin{lemma}\label{bgf_good}
If $\mu\in\mathcal M_N$ is exponentially decaying with exponent $R>0$, then the integral \eqref{BGF_def} converges for all $(x_1,\dots,x_N)$ in the domain
\begin{equation*}
\Omega_R := \left\{ (x_1, \dots, x_N)\in\mathbb{C}^N : |\Re x_i| < R,\ i = 1, \dots, N \right\},
\end{equation*}
and defines a holomorphic function in this domain.
\end{lemma}
\begin{proof}
Note that if $\{y_i^k\}\in\mathcal G_N$ satisfies $y_i^N=a_i$, $i=1,\dots,N$, then due to interlacing inequalities, for each $k=1,2,\dots,N$ we have
$$
\left|\sum_{i=1}^k y_i^k-\sum_{i=1}^{k-1} y_i^{k-1}\right|\le \max\left(|y_1^k|, |y_k^k|\right) \le \max_{i} |a_i|.
$$
Hence, the integrand in the definition of the multivariate Bessel function \eqref{eq_Bessel_combinatorial} is upper bounded by
$$
\exp\left( \sum_{j=1}^N |\Re x_j| \max_i |a_i|\right),
$$
which implies
$$
\left|B_{(a_1,\dots,a_N)}(x_1,\dots,x_N; \th)\right|\le \exp\left(NR \max_i |a_i|\right)\!, \quad (x_1,\dots,x_N)\in \Omega_R.
$$
Hence, \eqref{int_bounded} implies convergence of the integral \eqref{BGF_def} in $\Omega_R$.
It remains to check holomorphicity of \eqref{BGF_def} as a function of $x_1,\dots,x_N$. This readily follows from the holomorphicity of $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N; \th)$. Indeed, $G_\th(x_1, \dots, x_N; \mu)$ is continuous as a uniformly convergent integral of continuous functions. Thus, by Morera's theorem, the holomorphicity follows from the vanishing of the integrals over closed contours. The latter vanishing can be deduced by swapping the integrations using Fubini's theorem and using the vanishing of the similar integrals for $B_{(a_1,\dots,a_N)}(x_1,\dots,x_N; \th)$.
\end{proof}
The BGFs have recently been used in connection to problems in random matrix theory, see \cite{C}, \cite{GS}.
However, the BGF is not a new invention. The formula \eqref{BGF_def} is essentially the definition of (a symmetric version of) the \emph{Dunkl transform}, a one-parameter generalization of the Fourier transform; this is a rich and well-studied subject, see, e.g., the survey \cite{A} and references therein.
The next two propositions will be important in our developments.
\begin{prop}\label{ops_1}
Let $k\in\mathbb{Z}_{\geq 1}$ and let $\mu\in\mathcal M_N$ be an exponentially decaying measure. Then
$$\bigl[\P_k\, G_\th(x_1, \dots, x_N; \mu)\bigr]_{x_1=\dots=x_N=0} = \mathbb E_{\mu}\!\left[\sum_{j=1}^N (a_j)^k\right],$$
where $(a_1, \dots, a_N)\in\mathbb{R}^N$ is random and $\mu$-distributed on the right-hand side.
\end{prop}
\begin{proof} We apply $\P_k$ to \eqref{BGF_def} under the sign of the integral, use the eigenrelation of Theorem \ref{thm:opdam} and the normalization $B_{(a_1,\dots,a_N)}(0,\dots,0; \theta)=1$. We can exchange the order between the operator $\P_k$ and the integral because $\mu$ is exponentially decaying.
\end{proof}
The following generalization of Proposition \ref{ops_1} is proved in the same way.
\begin{prop}\label{proposition_moments_through_operators}
Let $k_1, \dots, k_s\in\mathbb{Z}_{\geq 1}$ and let $\mu\in\mathcal M_N$ be an exponentially decaying measure. Then
\begin{equation}\label{ops_2_eqn}
\left( \prod_{i=1}^s{\P_{k_i}}\!\right) G_\th(x_1, \dots, x_N; \mu)\Bigr|_{x_1=\dots=x_N=0} = \mathbb E_{\mu}\!\left[ \prod_{i=1}^s \left(\sum_{j=1}^N (a_j)^{k_i}\right) \right].
\end{equation}
\end{prop}
\medskip
Observe that the pairwise commutativity of the Dunkl operators implies the pairwise commutativity of the operators $\P_k$, $k\in\mathbb{Z}_{\geq 1}$.
As a result, the order of application of the operators $\P_{k_i}$ in the left-hand side of \eqref{ops_2_eqn} does not matter.
\subsection{Extension to distributions}
Ultimately, we treat Bessel Generating Functions as a tool for studying symmetric probability measures on $\mathbb R^N$ (which can be identified with probability measures on ordered $N$--tuples $a_1\le a_2\le\dots\le a_N$). One of the applications that we have in mind is to use them for the study of addition of independent general $\beta$ random matrices. While it is conjectured that the spectrum of such a sum should be described by a probability measure, this has not been proven yet: we only rigorously know that the spectrum can be described as a generalized function or distribution (the technical problem is in proving positivity; see \cite{Tri}, \cite[Section 3.6]{A}). In order to avoid the necessity to rely on the positivity conjectures, we explain in this section that the framework of Bessel generating functions can be extended to objects more general than probability measures.
Let $\mu$ be a \emph{distribution} on $\mathbb R^N$ with coordinates $(a_1,\dots,a_N)$, i.e.\ $\mu$ is an element of the dual space to the space of compactly supported infinitely--differentiable \emph{test-functions}.\footnote{The space of test-functions $f$ is equipped with a topology: the functions $f^{n}$ converge to $0$ as $n\to\infty$, if the supports of all these functions belong to the same compact set and all partial derivatives of $f^{n}$ converge to $0$ uniformly.} We say that $\mu$ is \emph{symmetric} if for any test-function $f$ and any permutation $\sigma$:
$$
\langle \mu, f(a_1,\dots,a_N)\rangle = \langle \mu, f(a_{\sigma(1)},\dots,a_{\sigma(N)})\rangle,
$$
where we use the notation $\langle \mu,f\rangle$ for the value of the functional $\mu$ on the test-function $f$.
\begin{df}\label{bgf_dist}
For a symmetric distribution (generalized function) $\mu$ on $\mathbb R^N$, its \emph{Bessel generating function} (or BGF) is a function of $(x_1,\dots,x_N)$ given by
\begin{equation}\label{BGF_def_gen}
G_\theta(x_1, \dots, x_N; \mu) :=\frac{1}{N!} \left\langle \mu, B_{(a_1, \dots, a_N)}(x_1, \dots, x_N; \theta)\right\rangle,
\end{equation}
where in the right-hand side $B_{(a_1, \dots, a_N)}(x_1, \dots, x_N; \theta)$ is treated as a test-function in $(a_1,\dots,a_N)$ variables with parameters $(x_1,\dots,x_N)$.
\end{df}
There are two tricky points in this definition. First, the $N$--tuple $(a_1,\dots,a_N)$ was ordered in the original definition of the multivariate Bessel function, whereas $\mu$ is a distribution on $\mathbb R^N$. However, multivariate Bessel functions can be extended to $\mathbb R^N$ in a symmetric way. The $\frac{1}{N!}$ prefactor is introduced to match the integral over \emph{ordered} $N$--tuples in \eqref{BGF_def} with the pairing against a distribution on the whole of $\mathbb{R}^N$ in \eqref{BGF_def_gen}.
More importantly, for general distributions $\mu$, \eqref{BGF_def_gen} is not defined, since $B_{(a_1, \dots, a_N)}(x_1, \dots, x_N; \theta)$ is not compactly supported and therefore not a valid test function. Hence, one needs to impose on $\mu$ some growth conditions similar to Definition \ref{df_decaying}, in order to make \eqref{BGF_def_gen} meaningful. Rather than exploring the full generality, let us only consider the case of compactly supported $\mu$ (which means that $\mu$ vanishes on any test
function whose support does not intersect a certain compact set), which is all we need for our application.
For compactly supported distributions $\mu$, Definition \ref{bgf_dist} is well-posed and in fact the pairing $\langle \mu, f\rangle$ makes sense for any infinitely--differentiable function $f$. In this text, we will be interested in compactly supported distributions $\mu$ of total mass equal to $N!$, meaning that $\langle \mu, \mathbf{1}\rangle = N!$, where $\mathbf{1}$ is the test function on $\mathbb{R}^N$ that is identically equal to $1$; in this case, $G_\th(0, \cdots, 0; \mu) = 1$.
\begin{proposition} \label{Proposition_BGF_dist}
Suppose that $\mu$ is a symmetric compactly supported distribution on $\mathbb R^N$. Then its BGF $G_\theta(x_1, \dots, x_N; \mu)$ is an entire function. We also have
\begin{equation}\label{eq_expectation_operator_general}
\left( \prod_{i=1}^s{\P_{k_i}}\right)G_\theta(x_1, \dots, x_N; \mu)\Bigr|_{x_1=\dots=x_N=0} =\frac{1}{N!}\left\langle \mu,\, \prod_{i=1}^s \left(\sum_{j=1}^N (a_j)^{k_i}\right) \right\rangle.
\end{equation}
\end{proposition}
\begin{proof}
Each compactly supported distribution can be identified with a (higher order) derivative of a compactly supported continuous function (see, e.g., \cite[Section 6]{Rudin}). Hence, we have
$$
G_\theta(x_1, \dots, x_N; \mu)=\int_{\mathbb R^N} \frac{\partial^{|\alpha|}}{\partial (a_i)^\alpha} \left[B_{(a_1,\dots,a_N)}(x_1,\dots,x_N; \theta)\right] f(a_1,\dots,a_N)\, \d a_1\cdots \d a_N,
$$
where $f$ is a compactly supported continuous function and $\frac{\partial^{|\alpha|}}{\partial (a_i)^\alpha}$ is the partial derivative with multi-index $\alpha$ in the variables $(a_1,\dots,a_N)$. It remains to repeat the arguments of Section \ref{Section_BGF}.
We remark that the condition of $\mu$ being exponentially decaying, required by Proposition \ref{proposition_moments_through_operators}, has been substituted by the condition of $\mu$ being compactly supported.
\end{proof}
\section{Statements of the Main Results}
\label{Section_main_results}
Throughout this section, we fix a real parameter $\gamma > 0$.
\subsection{Law of Large Numbers at high temperature}
Let $\{\mu_N\}_{N \geq 1}$ be a sequence of exponentially decaying probability measures, such that $\mu_N\in\mathcal M_N$ for each $N$, that is, $\mu_N$ is a probability measure on $N$--tuples $a_1\le a_2\le \dots \le a_N$.
Alternatively, we can assume that each $\mu_N$ is a compactly supported symmetric distribution on $\mathbb R^N$ of total mass (i.e., the pairing against test function $1$) equal to $N!$. Denote their Bessel generating functions by
$$G_{N; \th}(x_1, \dots, x_N) := G_\th(x_1, \dots, x_N; \mu_N).$$
By the results from the previous section, each $G_{N; \th}(x_1, \dots, x_N)$ is holomorphic in a neighborhood of the origin and satisfies $G_{N; \th}(0, \dots, 0) = 1$. Thus, for each $N$, the logarithm $\ln(G_{N; \th})$ is a well-defined holomorphic function in a neighborhood of $(0, \cdots, 0)\in\mathbb{C}^N$, and
\begin{equation*}
\ln(G_{N; \th}) \bigr|_{x_1=\dots=x_N=0} = 0.
\end{equation*}
We are interested in the interplay between the partial derivatives of $\ln(G_{N; \th})$ at the origin and asymptotic properties of random $\mu_N$--distributed\footnote{In our wording we stick to the situation when the $\mu_N$ are bona fide probability measures. If they are distributions (i.e.\ generalized functions possibly without any positivity), then all the random variables produced from them should be interpreted in a formal sense: the laws of such random variables can be identified with expectations of various smooth functions of them, which are readily computed as pairings of $\mu_N$ with appropriate test functions. (One also should divide by $N!$ to adjust for differences between ordered and arbitrary $N$--tuples.)} $N$--tuples $(a_1,\dots,a_N)$. We deal with the latter through the random variables
$$
p_k^N := \frac{1}{N}\sum_{i=1}^N (a_i)^k, \qquad (a_1,\dots,a_N)\text{ is }\mu_N\text{--distributed.}
$$
\begin{df}[LLN--satisfaction]\label{Definition_LLN_sat_ht}
We say that a sequence $\{\mu_N\}_{N \geq 1}$ \emph{satisfies a Law of Large Numbers} if there exist real numbers $\{m_{k}\}_{k\geq 1}$ such that for any
$s=1,2,\dots$ and any $k_1, \dots, k_s\in\mathbb{Z}_{\geq 1}$, we have
$$\lim_{N\to\infty} \mathbb E_{\mu_N} \prod_{i=1}^s p_{k_i}^N= \prod_{i=1}^{s} m_{k_i}.$$
\end{df}
\begin{remark}\label{rem_uniqueness}
Consider the empirical measure of $(a_1,\dots,a_N)$ given by $\frac{1}{N} \sum_{i=1}^N \delta_{a_i},$ where $\delta_x$ is the Dirac delta mass at $x\in\mathbb R$.
Since the $N$-tuples $(a_1,\dots,a_N)$ are random, their empirical measures are random probability measures on $\mathbb R$.
Under mild technical conditions (uniqueness of a solution to the moments problem, which holds whenever the numbers $m_k$ do not grow too fast, see, e.g., \cite[Section VII.3]{Feller}), LLN--satisfaction implies that these measures converge weakly, in probability, to a non-random measure whose moments are $m_1,m_2,\cdots$.
\end{remark}
\begin{df}[$\gamma$-LLN--appropriateness]\label{Definition_LLN_appr_ht}
We say that the sequence $\{\mu_N\}_{N \geq 1}$ is \emph{$\gamma$-LLN--appropriate} if there exists a sequence of real numbers $\{\kappa_l\}_{l\geq 1}$ such that
\begin{enumerate}[label=(\alph*)]
\item $\displaystyle \lim_{\begin{smallmatrix} N\to\infty,\, \theta \to 0\\ \theta N\to \gamma \end{smallmatrix}}
\frac{\partial^l}{\partial x_i^l} \ln{(G_{N; \th})}\Bigr|_{x_1=\dots=x_N=0} = (l-1)!\cdot \kappa_l$,\quad for all $l, i\in\mathbb{Z}_{\geq 1}$.
\item $\displaystyle \lim_{\begin{smallmatrix} N\to\infty,\, \theta \to 0\\ \theta N\to \gamma \end{smallmatrix}}\left.\frac{\partial}{\partial x_{i_1}}\cdots\frac{\partial}{\partial x_{i_r}}\ln{(G_{N; \th})}\right|_{x_1=\dots=x_N=0} = 0$,\quad for all $r\ge 2$, and $i_1, \dots, i_r\in\mathbb{Z}_{\geq 1}$ such that the set $\{i_1, \dots, i_r\}$ is of cardinality at least two.
\end{enumerate}
\end{df}
\begin{remark}
Because the BGF $G_{N; \th}(x_1, \cdots, x_N)$ is symmetric in the variables $x_1, \cdots, x_N$, the condition (a) is equivalent to:
(a') $\displaystyle \lim_{\begin{smallmatrix} N\to\infty,\, \theta \to 0\\ \theta N\to \gamma \end{smallmatrix}}
\frac{\partial^l}{\partial x_1^l} \ln{(G_{N; \th})}\Bigr|_{x_1=\dots=x_N=0} = (l-1)!\cdot \kappa_l$,\quad for all $l\in\mathbb{Z}_{\geq 1}$.
\noindent Likewise, we could also simplify condition (b).
\end{remark}
\begin{remark}
Suppose that
$$
\displaystyle \lim_{\begin{smallmatrix} N\to\infty,\, \theta \to 0\\ \theta N\to \gamma \end{smallmatrix}}
{\frac{\partial}{\partial z} \ln(G_{N; \th}(z,0,\dots,0))} = g(z),
$$
uniformly over $z$ in a complex neighborhood of $0$. Then $\kappa_l$ are the Taylor coefficients of $g$, that is, $$g(z)=\sum_{l=1}^{\infty} {\kappa_l z^{l-1}}.$$
\end{remark}
To state the main theorem, we use the language of formal power series in a formal variable $z$, namely series of the form
\begin{equation*}
h_0 + h_1 z + h_2 z^2 +\cdots.
\end{equation*}
\begin{df}\label{df:ops}
Let $\mathbb{R}[[z]]$ be the space of formal power series in $z$ with real coefficients.
Let $a(z)=a_0+a_1 z+a_2z^2 + \cdots$ be any power series in $\mathbb{R}[[z]]$.
We define three operators in $\mathbb{R}[[z]]$ by their action on a generic element $h(z) = h_0 + h_1 z + h_2 z^2+\cdots\in\mathbb{R}[[z]]$, as follows.
\begin{itemize}
\item Derivation operator $\partial$:
$$
\partial(h_0 + h_1 z + h_2 z^2+\cdots) := h_1 + 2 h_2 z + 3 h_3 z^2 + \cdots.
$$
\item Lowering operator $d$:
$$
d(h_0 + h_1 z + h_2 z^2+\cdots) := h_1 + h_2 z + h_3 z^2 + \cdots.
$$
\item Multiplication operator $*_a$:
$$
*_a(h(z)) := h(z) a(z).
$$
\end{itemize}
\end{df}
\begin{df}\label{def_R_map}
Define the map $T^\ga_{\ka\to m} : \mathbb{R}^{\infty} \to \mathbb{R}^{\infty}$ that takes as input a countable real sequence $\{\kappa_l\}_{l\ge 1}$ and outputs the countable real sequence $\{m_k\}_{k\ge 1}$ by means of the relations
\begin{equation}\label{eq_moments_through_f_cumulants}
m_k = [z^0] (\partial + \gamma d + *_g)^{k-1}(g(z)),\quad k=1, 2, \cdots,
\end{equation}
where $[z^0]$ is the constant term of the expression following it and
\begin{equation*}
g(z) = \sum_{l=1}^{\infty} {\kappa_l z^{l-1}}.
\end{equation*}
\end{df}
\medskip
For notational purposes, in the remainder of the paper the input of the map $T^\ga_{\ka\to m}$ is denoted by $\{\kappa_l\}_{l\ge 1}$ and the output is denoted by $\{m_k\}_{k\ge 1}$.
Whenever $T^\ga_{\ka\to m}(\{\kappa_l\}_{l\geq 1}) = \{m_k\}_{k\ge 1}$, the quantities $\kappa_l$ are called \emph{$\gamma$-cumulants} and the $m_k$'s are called \emph{moments}.
This is meant to draw an analogy with the sequences of classical cumulants and moments of a probability measure.
The motivation for this terminology is explained by the results in Section \ref{Section_semifree}.
Roughly speaking, the map $T^\ga_{\ka\to m}$ degenerates to the relation between cumulants and moments when $\gamma\to 0$, and to the relation between free cumulants and moments when $\gamma\to \infty$.
\begin{thm}[Law of Large Numbers for high temperature]\label{thm_small_th}
The sequence $\{\mu_N\}_{N \geq 1}$ is $\gamma$-LLN--appropriate if and only if it satisfies a LLN.
In case this occurs, the sequences $\{\kappa_l\}_{l\geq 1}$ and $\{m_k\}_{k\geq 1}$ are related by
\begin{equation}
\label{eq_x28}
\{m_k\}_{k\ge 1} = T^\ga_{\ka\to m}(\{\kappa_l\}_{l\ge 1}).
\end{equation}
\end{thm}
\medskip
The proof of this theorem is given in Section \ref{sec_proof_LLN} below.
Our next results describe in more detail the map $T^\ga_{\ka\to m}$ from Definition \ref{def_R_map}.
\subsection{Combinatorial formula for the map $T^\ga_{\ka\to m}$}\label{sec_Tcm}
From Definition \ref{def_R_map}, we are able to obtain the values of $m_k$ by doing calculations with formal power series and isolating the constant term of the resulting expansion.
For example, for $k=1, 2, 3,4$, the resulting formulas are the following:
\begin{equation}\label{ex_c_to_m}
\begin{aligned}
m_1 &= \kappa_1,\\
m_2 &= (\gamma+1)\kappa_2 + \kappa_1^2,\\
m_3 &= (\gamma+1)(\gamma+2)\kappa_3 + 3(\gamma+1)\kappa_2\kappa_1 + \kappa_1^3,\\
m_4&=(\gamma+1)(\gamma+2)(\gamma+3)\kappa_4+4(\gamma+1)(\gamma+2)\kappa_3 \kappa_1+(\gamma+1)(2\gamma+3)\kappa_2^2+6(\gamma+1)\kappa_2 \kappa_1^2+\kappa_1^4.
\end{aligned}
\end{equation}
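These expressions can be reproduced mechanically; the following sketch (our own Python code, assuming \texttt{sympy}) implements the operators of Definition \ref{df:ops} on truncated power series and evaluates \eqref{eq_moments_through_f_cumulants} for $k\le 4$:
\begin{verbatim}
# Compute m_k = [z^0] (derivation + gamma*d + *_g)^{k-1} g(z), truncated series.
import sympy as sp

K = 6                                    # truncation degree (enough for m_1..m_4)
gamma = sp.Symbol('gamma')
kappa = sp.symbols('kappa1:%d' % (K + 2))
g = [kappa[n] for n in range(K + 1)]     # g(z) = kappa_1 + kappa_2 z + ...

def deriv(h):    # derivation: h_1 + 2 h_2 z + 3 h_3 z^2 + ...
    return [(n + 1) * h[n + 1] for n in range(len(h) - 1)] + [0]

def lower(h):    # lowering operator d: h_1 + h_2 z + h_3 z^2 + ...
    return h[1:] + [0]

def mult_g(h):   # multiplication by g(z), truncated at degree K
    return [sum(h[i] * g[n - i] for i in range(n + 1)) for n in range(len(h))]

def step(h):     # one application of (derivation + gamma * d + *_g)
    return [sp.expand(u + gamma * v + w)
            for u, v, w in zip(deriv(h), lower(h), mult_g(h))]

h = list(g)
for k in range(1, 5):
    print('m_%d =' % k, h[0])            # constant term, i.e. [z^0]; cf. (ex_c_to_m)
    h = step(h)
\end{verbatim}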
However, the defining formula \eqref{eq_moments_through_f_cumulants} is not explicit enough and becomes complicated when $k$ is large.
Our next main theorem is a simpler combinatorial formula that expresses $m_k$ as a polynomial of the variables $\kappa_1, \kappa_2, \cdots, \kappa_k$.
To state it, we need some terminology.
For any $k\in\mathbb{Z}_{\geq 1}$, denote $[k] := \{1, 2, \dots, k\}$.
A \emph{set partition} $\pi$ of $[k]$ is an (unordered) collection $B_1, \dots, B_m$ of pairwise disjoint nonempty subsets of $[k]$ such that $[k] = B_1\cup\dots\cup B_m$.
The subsets $B_1, \dots, B_m$ are called the \emph{blocks} of the set partition $\pi$ and we use the notation $\pi = B_1\sqcup \dots\sqcup B_m$.
The cardinalities of the blocks are denoted $|B_1|, \dots, |B_m|$.
We denote the collection of all set partitions of $[k]$ by $\mathscr P(k)$.
Given a set partition $\pi$, we denote by $\#(\pi)$ its number of blocks. For example, there are seven set partitions of $[4]$ with two blocks; they are:
$$
\{1\}\sqcup \{2,3,4\},\quad \{1,3,4\}\sqcup \{2\},\quad \{1,2,4\}\sqcup\{3\},\quad \{1,2,3\}\sqcup\{4\},
$$
$$
\{1,2\}\sqcup \{3,4\},\quad \{1,3\}\sqcup \{2,4\},\quad \{1,4\}\sqcup \{2,3\}.
$$
We also use the Pochhammer symbol notation:
\begin{equation*}
(x)_q := \begin{cases}
x(x+1)\cdots (x+q-1), &\text{ if }q\in\mathbb{Z}_{\ge 1},\\
1, &\text{ if }q=0. \end{cases}
\end{equation*}
\begin{df}\label{W_def}
For any $\pi\in\mathscr P(k)$ and $\gamma\in\mathbb R$, define the quantity $W(\pi)$, that will be called the \emph{$\gamma$-weight of $\pi$}, as follows\footnote{We omit the dependence on $\gamma$ from the notation $W(\pi)$.}.
Let $m = \#(\pi)$ and label the blocks of $\pi$ by $B_1, \cdots, B_m$ in such a way that the smallest element from $B_i$ is smaller than all elements from $B_j$, whenever $i<j$.
That is, if the blocks are $B_i = \{b^i_1 < \dots < b^i_{|B_i|}\}$, then $b^1_1 < b^2_1 < \cdots < b^m_1$.
For each $i\in\{1, \cdots, m\}$, define $p(i)$ as the number of indices $j\in\{1, \dots, |B_i| - 1\}$ such that $\{b^i_j + 1, \dots, b^i_{j+1} - 1\}\cap B_t \neq\emptyset$ for some block $B_t$ with $t < i$, and set $q(i) := |B_i|-1-p(i)$. Then define
\begin{equation}\label{W_formula}
W(\pi) := \prod_{i=1}^m\Bigl( p(i)!\cdot (\gamma+p(i)+1)_{q(i)} \Bigr).
\end{equation}
\end{df}
For a set partition $\pi = B_1\sqcup \dots\sqcup B_m$ of $[k]$, we can think of the quantity $p(i)!\cdot (\gamma+p(i)+1)_{q(i)}$ as a weight associated to the block $B_i$.
Therefore the $\gamma$-weight $W(\pi)$ is the product of all weights of the blocks of $\pi$.
The weight of a block $B_i$ depends on the integer $p(i)$, whose computation can be visualized through a geometric procedure involving arc diagrams:
\begin{itemize}
\item Draw each block $B_j=(b_1^j<b_2^j<\dots<b_{r}^j)$ as an arc with $r$ vertical legs at positions $b_a^j$, $a=1,\dots,r$ and with $r-1$ horizontal roofs joining adjacent legs at height $m+1-j$.
\item $p(i)$ is the number of roofs of $B_i$ that intersect legs of other blocks. Note that each roof is counted only once, no matter how many legs it intersects.
\end{itemize}
Let us provide several examples. First, consider the set partition $\{1, 2, 5, 7\}\sqcup \{3, 4, 6\}\in\mathscr P(7)$ corresponding to the following arc diagram:
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{partition_1.pdf}
\caption{Set partition $\{1, 2, 5, 7\}\sqcup \{3, 4, 6\}$}
\label{fig_1}
\end{figure}
\noindent The blocks are labeled $B_1 = \{1, 2, 5, 7\}$ and $B_2 = \{3, 4, 6\}$. We have $p(1) = 0$, $q(1) = 3$, $p(2) = 1$, $q(2) = 1$, and therefore
$$W(\pi) = 0!\cdot (\gamma+1)_3\cdot 1!\cdot (\gamma+2)_1 = (\gamma+1)(\gamma+2)^2(\gamma+3).$$
For a different example, consider the set partition $\{1, 4\}\sqcup\{2, 6\}\sqcup\{3, 5, 7\}\in\mathscr P(7)$ corresponding to the following arc diagram:
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{partition_2.pdf}
\caption{Set partition $\{1, 4\}\sqcup\{2, 6\}\sqcup\{3, 5, 7\}$}
\label{fig_2}
\end{figure}
\noindent The blocks are labeled $B_1 = \{1, 4\}$, $B_2 = \{2, 6\}$, and $B_3 = \{3, 5, 7\}$.
We have $p(1) = 0$, $q(1) = 1$, $p(2) = 1$, $q(2) = 0$, $p(3) = 2$, $q(3) = 0$, and therefore
$$W(\pi) = 0!\cdot (\gamma+1)_1\cdot 1!\cdot (\gamma+2)_0\cdot 2!\cdot (\gamma+3)_0 = 2(\gamma+1).$$
For the final example, consider the set partition $\{1,3,4,5,6\}\sqcup\{2,7\}\in\mathscr P(7)$ corresponding to the following arc diagram:
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{partition_3.pdf}
\caption{Set partition $\{1, 3, 4, 5, 6\}\sqcup\{2, 7\}$}
\label{fig_3}
\end{figure}
\noindent The blocks are labeled $B_1=\{1,3,4,5,6\}$ and $B_2=\{2,7\}$. We have $p(1)=0$, $q(1)=4$, $p(2)=1$, $q(2)=0$, and therefore
$$
W(\pi)=0! \cdot (\gamma+1)_{4} \cdot 1!\cdot (\gamma+2)_0= (\gamma+1)(\gamma+2)(\gamma+3)(\gamma+4).
$$
Let us also mention two useful properties which directly follow from the definition of $p(i)$:
\begin{itemize}
\item $p(1)=0$.
\item If $|B_i|=1$, then $p(i)=q(i)=0$.
\end{itemize}
\medskip
We have introduced all the necessary notation and can now state the main theorem of this section:
\begin{thm}[$\gamma$--cumulants to moments formula]\label{theorem_cumuls_moms}
Let $\{m_k\}_{k\ge 1}$ and $\{\kappa_l\}_{l\ge 1}$ be real sequences that are related by $\{m_k\}_{k\ge 1} = T^\ga_{\ka\to m}(\{\kappa_l\}_{l\ge 1})$.
Let $k\in\mathbb{Z}_{\ge 1}$ be arbitrary. Then
\begin{equation*}
m_k = \sum_{\pi\in\mathscr P(k)}{W(\pi)\prod_{B\in\pi}{\kappa_{|B|}}}.
\end{equation*}
\end{thm}
\medskip
The proof is presented in Section \ref{mom_cum_sec} below. In Section \ref{Section_semifree} we explain how in the limits $\gamma\to 0$ and $\gamma\to\infty$, Theorem \ref{theorem_cumuls_moms} turns into the expression of moments through classical cumulants and through free cumulants, respectively.
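To make the computation of $W(\pi)$ fully concrete, here is a short sketch (our own Python code, assuming \texttt{sympy}) that evaluates the weights directly from Definition \ref{W_def} and verifies the theorem for $k=4$ against \eqref{ex_c_to_m}:
\begin{verbatim}
# Weights W(pi) from Definition (W_def) and the cumulants-to-moments formula.
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

gamma = sp.Symbol('gamma')
kappa = sp.symbols('kappa1:5')                 # kappa1, ..., kappa4

def poch(x, q):                                # rising factorial (x)_q
    return sp.prod([x + j for j in range(q)]) if q else sp.Integer(1)

def weight(blocks):
    blocks = sorted((sorted(B) for B in blocks), key=min)  # order by minima
    W = sp.Integer(1)
    for i, B in enumerate(blocks):
        earlier = set().union(*(set(C) for C in blocks[:i]))
        p = sum(1 for j in range(len(B) - 1)
                if set(range(B[j] + 1, B[j + 1])) & earlier)
        W *= sp.factorial(p) * poch(gamma + p + 1, len(B) - 1 - p)
    return W

k = 4
m4 = sum(weight(pi) * sp.prod([kappa[len(B) - 1] for B in pi])
         for pi in multiset_partitions(list(range(1, k + 1))))
print(sp.expand(m4))                           # matches m_4 in (ex_c_to_m)
\end{verbatim}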
\subsection{ $T^\ga_{\ka\to m}$ and its inverse $T^\ga_{m\to\ka}$ through generating functions}
\label{Section_gen_functions}
The map $T^\ga_{\ka\to m}: \{\kappa_l\}_{l\ge 1}\mapsto\{m_k\}_{k\ge 1}$ is equivalent to relations of the form
\begin{equation}
\label{eq_mom_through_cum}
m_k = (\gamma+1)_{k-1}\cdot \kappa_k + \text{ a certain polynomial in the variables }\kappa_1,\dots, \kappa_{k-1}.
\end{equation}
Recursively using \eqref{eq_mom_through_cum}, each $\kappa_l$ can be expressed as a polynomial in the variables $m_1, \cdots, m_l$.
In other words, the map $T^\ga_{\ka\to m}$ has an inverse denoted by
$$
T^\ga_{m\to\ka} := (T^\ga_{\ka\to m})^{-1}: \{m_k\}_{k\ge 1}\mapsto\{\kappa_l\}_{l\ge 1}.
$$
For example, inverting the formulas in \eqref{ex_c_to_m} we get
\begin{equation}\label{ex_m_to_c}
\begin{aligned}
\kappa_1 &= m_1,\\
\kappa_2 &= \frac{1}{\gamma+1}\bigl(m_2 - m_1^2\bigr),\\
\kappa_3 &= \frac{1}{(\gamma+1)_2}\bigl(m_3 - 3 m_2 m_1 + 2 m_1^3\bigr),\\
\kappa_4 &= \frac{1}{(\gamma+1)_{3}}\Biggl(m_4-4 m_3 m_1-\left[2+\frac{1}{\gamma+1}\right] m_2^2 + \left[10+\frac{2}{\gamma+1}\right] m_2 m_1^2 - \left[5+\frac{1}{\gamma+1}\right] m_1^4\Biggr).
\end{aligned}
\end{equation}
One way to write the formulas connecting moments and cumulants in a compact form is through generating functions:
\begin{thm}\label{thm:mom_cums2}
Let $\{m_k\}_{k\ge 1}$ and $\{\kappa_l\}_{l\ge 1}$ be real sequences related by $\{\kappa_l\}_{l\ge 1} = T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1})$.
Then
\begin{equation}\label{eq_cums_moments}
\exp\left( \sum_{l = 1}^{\infty}\frac{\kappa_l y^l}{l} \right) =
[z^0]\left\{ \sum_{n=0}^{\infty}\frac{(yz)^n}{(\gamma)_n} \cdot\exp\left( \gamma\sum_{k=1}^{\infty}\frac{m_k}{k} z^{-k} \right)\right\}.
\end{equation}
Equivalently, \eqref{eq_cums_moments} can be rewritten as a combination of two identities involving an auxiliary sequence $\{c_n\}_{n\ge 0}$ through:
\begin{equation} \label{eq_cums_moments_2}
\begin{dcases}
\exp\left(\sum_{l=1}^{\infty} \frac{\kappa_l}{l} z^l\right)=\sum_{n=0}^{\infty} \frac{c_n}{(\gamma)_n} z^n,\\
\exp\left( \gamma \sum_{k=1}^{\infty} \frac{m_k z^k}{k}\right)=\sum_{n=0}^{\infty} c_n z^n.
\end{dcases}
\end{equation}
\end{thm}
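The identities \eqref{eq_cums_moments_2} are also easy to check symbolically; here is one possible sketch (our own Python code, assuming \texttt{sympy}), which recovers \eqref{ex_c_to_m} from the pair of identities:
\begin{verbatim}
# Recover m_1..m_4 from kappa_1..kappa_4 via the two identities (eq_cums_moments_2).
import sympy as sp

z, gamma = sp.symbols('z gamma')
k1, k2, k3, k4 = sp.symbols('kappa1:5')
K = 5

def poch(x, n):                                # rising factorial (x)_n
    return sp.prod([x + j for j in range(n)]) if n else sp.Integer(1)

# first identity: exp(sum_l kappa_l z^l / l) = sum_n c_n z^n / (gamma)_n
lhs = sp.exp(k1*z + k2*z**2/2 + k3*z**3/3 + k4*z**4/4).series(z, 0, K).removeO()
c = [poch(gamma, n) * lhs.coeff(z, n) for n in range(K)]

# second identity: exp(gamma * sum_k m_k z^k / k) = sum_n c_n z^n
logC = sp.log(sum(c[n] * z**n for n in range(K))).series(z, 0, K).removeO()
for k in range(1, K):
    m_k = sp.simplify(k * logC.coeff(z, k) / gamma)
    print('m_%d =' % k, sp.expand(m_k))        # matches (ex_c_to_m) after expansion
\end{verbatim}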
As we explain in Section \ref{Section_semifree}, in the limit $\gamma\to 0$, the statement of Theorem \ref{thm:mom_cums2} turns into the well-known formula expressing the generating function of (classical) cumulants as a logarithm of the generating function of moments (equivalently, of the characteristic function of a random variable). On the other hand, in the limit $\gamma\to\infty$, Theorem \ref{thm:mom_cums2} can be converted into the identification of the free cumulants with Taylor series coefficients of the Voiculescu $R$--transform of a probability measure.
A close examination of \eqref{eq_cums_moments_2} reveals an unexpected connection to the $d$--cumulants for the (additive) finite free convolution. We recall that the latter is a deterministic binary operation on $d$--tuples of real numbers, which was shown in \cite[Theorem 1.2]{GM} to be the $\theta\to\infty$ limit of the operation $(\mathbf a,\mathbf b)\mapsto \mathbf a +_\theta \mathbf b$ for fixed $N=d$. Generating functions and certain combinatorial formulas for $d$--cumulants were developed in \cite{Marcus}, \cite{AP}. Comparing with \cite{AP}, we observe a match under the following change in notations, where in the left column we use notations from \cite{AP} and in the right column we use notations from our work:
\begin{equation} \label{eq_match_in_notations}
\begin{split}
d &\longleftrightarrow -\gamma,\\
m_n &\longleftrightarrow m_n,\\
\kappa_n &\longleftrightarrow \gamma^{n-1}\kappa_n,\\
a_n &\longleftrightarrow (-1)^{n} c_n.
\end{split}
\end{equation}
Indeed, under \eqref{eq_match_in_notations} the first formula of \eqref{eq_cums_moments_2} becomes \cite[(3.1) or (3.3)]{AP} and the second formula of \eqref{eq_cums_moments_2} becomes \cite[(4.2)]{AP}. Note that the symbol $(x)_n$ has the meaning $x(x-1)\dots(x-n+1)$ in \cite{AP}, which is different from the convention that we use.
It is important to emphasize that in our work $\gamma>0$, while in \cite{Marcus}, \cite{AP}, $d$ is a positive integer. Hence, using \eqref{eq_match_in_notations} we see that there are no values of parameters under which finite free cumulants coincide with our $\gamma$--cumulants. Instead, one family of cumulants should be treated as an analytic continuation of another. There are two consequences of this correspondence. First, Theorem \ref{theorem_cumuls_moms} translates into a new combinatorial formula for finite free cumulants. Second, \cite[Theorem 4.2]{AP} explains how the generating function identity equivalent to \eqref{eq_cums_moments_2} leads to transition formulas (involving double sums over set partitions) between moments and finite free cumulants and vice versa. Hence, substituting \eqref{eq_match_in_notations} we can obtain similar formulas between moments and our $\gamma$--cumulants.
\section{Applications}
\label{Section_applications}
In this section we list several applications of the general theorems from Section \ref{Section_main_results}.
\subsection{Law of Large Numbers for Gaussian $\beta$ ensembles}
\label{Section_GbE}
For each $N\ge 1$, let $\mu_{N,\,\th}$ be the $N$--particle \emph{Gaussian $\beta$ ensemble} with parameter $\beta = 2\th$ --- this is a probability distribution on $N$-tuples of real numbers $x_1\le x_2\le\cdots\le x_N$ with density proportional to
\begin{equation}
\label{eq_x29}
\prod_{1\le i < j\le N}(x_i - x_j)^{2\theta}\, \prod_{k=1}^N e^{-x_k^2/2}.
\end{equation}
The eigenvalue distributions of the celebrated Gaussian Orthogonal/Unitary/Symplectic ensembles of random matrices are given by \eqref{eq_x29} at $\theta=\tfrac{1}{2}$, $1$, $2$, respectively.
To state the result of this subsection, we need a few definitions.
Denote by $\mathscr{M}(k)$ the collection of all \emph{perfect matchings} of $[k]$, that is, the collection of set partitions of $[k]$ where each block has size $2$.
$\mathscr{M}(k)$ is empty if $k$ is odd, and if $k = 2m$ is even, then $\mathscr{M}(k)$ has cardinality $(2m-1)!! = (2m-1)(2m-3)\cdots 3\cdot 1$.
Any perfect matching $\pi = \{B_1, \cdots, B_m\}$ of $[2m]$ is also a set partition of $[2m]$, so we can draw its arc diagram, as described in Section \ref{sec_Tcm}.
Denote by roof$(\pi)$ the number of roofs in the arc diagram of $\pi$ that do not intersect any legs (equivalently, the number of blocks $B_i$ with $p(i)=0$).
Roof$(\pi)$ is an integer between $1$ and $m$, and roof$(\pi) = m$ if and only if the perfect matching $\pi$ is non-crossing, see Figure \ref{fig_roofs} for an illustration.
\begin{figure}[H]
\centering
\includegraphics[width=0.98\linewidth]{Match_G_6.pdf}
\caption{Perfect matchings of $[6]$ with three possible values of roof$(\pi)$}
\label{fig_roofs}
\end{figure}
Finally, consider the empirical measures
$$
\rho_{N\!,\,\th} := \frac{1}{N}\sum_{i=1}^N{\delta_{x_i}},\qquad (x_1, \cdots, x_N) \text{ is $\mu_{N,\th}$--distributed}.
$$
\begin{thm}\label{thm_Gauss}
As $N\to\infty$, $\th\to 0$, $\th N\to\gamma$, the (random) measures $\rho_{N\!,\,\th}$ converge weakly, in probability, to a deterministic probability measure $\mu_\gamma$ which is uniquely determined by its moments:
\begin{equation}\label{moms_Gaussian}
\int_{-\infty}^{\infty}{x^k \mu_\gamma(dx)} = \sum_{\pi\in\mathscr{M}(k)}{\!\!\!(\gamma+1)^{\mathrm{roof}(\pi)}},
\end{equation}
where for odd $k$ the set $\mathscr{M}(k)$ is empty, so that the right-hand side equals $0$.
\end{thm}
\begin{rem}
In our Theorem \ref{thm_Gauss}, the limiting measure $\mu_\gamma$ is an analogue of Wigner's semicircle law from free probability theory and of the Gaussian distribution from classical probability, because the only nonzero $\gamma$-cumulant is the second one. Similarly to these measures, $\mu_\gamma$ is also present in a Central Limit Theorem with respect to the operation of $\gamma$-convolution discussed in the next subsection, see \cite[Section 5.3]{MP}. In fact, $\mu_\gamma$ degenerates into these measures at special values of $\gamma$. Indeed, when $\gamma = 0$ and $k = 2m$, the right-hand side of \eqref{moms_Gaussian} is equal to $|\mathscr{M}(2m)| = (2m-1)!!$, which coincides with the $(2m)$-th moment of the standard normal distribution.
When $\gamma\to\infty$ and $k = 2m$, the right-hand side of \eqref{moms_Gaussian} (when divided by $\gamma^m$) becomes the number of non-crossing perfect matchings of $[2m]$, which is the $m$-th Catalan number $C_m = \frac{1}{m+1}{2m \choose m}$. This is the $(2m)$-th moment of the standard Wigner's semicircle law.
\end{rem}
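Both degenerations are easy to confirm on a computer. The following Python sketch (a purely illustrative check, not used anywhere in the proofs; all function names are ours) enumerates $\mathscr M(2m)$ for small $m$ and verifies that $|\mathscr M(2m)| = (2m-1)!!$ and that the number of non-crossing perfect matchings equals the Catalan number $C_m$.
\begin{verbatim}
from itertools import combinations
from math import comb

def matchings(points):
    """Enumerate all perfect matchings of the list `points`."""
    if not points:
        yield []
        return
    first, rest = points[0], points[1:]
    for i, second in enumerate(rest):
        for partial in matchings(rest[:i] + rest[i + 1:]):
            yield [(first, second)] + partial

def noncrossing(match):
    """True iff no two arcs (a,b), (c,d) satisfy a < c < b < d."""
    return not any(a < c < b < d
                   for (a, b), (c, d) in combinations(match, 2))

def dfact(n):
    """Double factorial n!! for odd n."""
    return 1 if n <= 0 else n * dfact(n - 2)

for m in range(1, 6):
    ms = list(matchings(list(range(1, 2 * m + 1))))
    assert len(ms) == dfact(2 * m - 1)                # |M(2m)| = (2m-1)!!
    assert sum(map(noncrossing, ms)) == comb(2 * m, m) // (m + 1)   # C_m
print("counts match for m = 1,...,5")
\end{verbatim}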
\begin{rem}
While the identification of $\mu_\gamma$ through \eqref{moms_Gaussian} has not been stated explicitly in the literature before, the LLN itself, i.e.\ the existence of the limiting measures $\mu_\gamma$, is known at least from \cite{ABG}. \cite{BGP} provides other (more complicated) formulas for the moments of $\mu_\gamma$, and in \cite{DS} the measure $\mu_\gamma$ is identified with the mean spectral measure of a certain random Jacobi matrix.
\end{rem}
\begin{remark} \label{Remark_aHerm}
The measures $\mu_\gamma$ were also previously studied by other authors without knowing about their connections to Gaussian $\beta$ ensembles.
Askey and Wimp \cite{AW} studied $\mu_\gamma$ as an orthogonality measure for the \emph{associated Hermite polynomials} (\cite{D} obtained a formula for the moments that is equivalent to ours, and \cite{SZ}, \cite{BDEG} contain generalizations: see (4.7) in the former paper and Proposition 4.19 in the latter). Interestingly, the same polynomials also play a role in studying $\beta\to\infty$ limits of Gaussian $\beta$ Ensembles, see \cite[Section 4.3]{Gorin_Klept}. From another direction, Kerov \cite{K} studied $\mu_\gamma$ in connection to the Markov-Krein transform and noticed that these measures interpolate between Gaussian and semicircle laws.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm_Gauss}]
The Bessel generating function of $\mu_{N\!,\, \th}$ is known\footnote{This computation is folklore, for which, however, it is not easy to locate an exact reference, as the articles often use very different notations. One can produce a discrete version of the desired statement for Jack measures, which reduces to a direct application of the Cauchy-Littlewood summation identity for the Jack polynomials and then take a limit to the Gaussian $\beta$ ensemble, as in \cite[Proposition 2.10]{GSh}. Alternatively, the computation for the Laguerre $\beta$ ensemble is equivalent to \cite[(3.1)]{Ner}, from which the Gaussian $\beta$ Ensemble can be obtained through a limit transition. As yet another approach, Gaussian $\beta$ Ensembles can be identified with measures from \cite[Theorem 1.13 and Section 6]{AN} with a single non-zero parameter $\gamma_2$, see also \cite[Corollary 2.6]{OV} for the $\theta=1$ case. } to be
$$
G_N(x_1, \cdots, x_N; \th) = \exp\left( \frac{x_1^2 + \cdots + x_N^2}{2} \right).
$$
Hence, $\{\mu_{N\!,\,\th}\}$ is $\gamma$--LLN appropriate with $\{\kappa_l\}_{l\ge 1}$ given by
\begin{equation}\label{gamma_cums_1}
\kappa_l = \begin{cases}
1, & \text{if }l = 2,\\
0,& \text{otherwise}.
\end{cases}
\end{equation}
The corresponding sequence $\{m_k\}_{k\ge 1} = T^\ga_{\ka\to m}(\{\kappa_l\}_{l\ge 1})$ is given by the formula in Theorem \ref{theorem_cumuls_moms}.
Because the only nonzero $\gamma$--cumulant $\kappa_l$ is the one with $l = 2$, the summation for $m_k$ reduces from all set partitions of $[k]$ to all perfect matchings of $[k]$. In particular, $m_k = 0$ if $k$ is odd.
In the case that $k$ is even, say $k = 2m$, consider any perfect matching $\pi = \{B_1, \cdots, B_m\}$ of $[2m]$; each block $B_i$ has cardinality $2$, so $p(i)$ is $1$ if the roof of the arc $B_i$ intersects some leg in the arc-diagram of $\pi$, and otherwise $p(i)$ is $0$. As a result, the weight $W(\pi)$ in \eqref{W_formula} is equal to $(\gamma+1)^{\text{roof}(\pi)}$.
Then Theorem \ref{thm_small_th} shows that the sequence $\{\mu_{N, \th}\}$ satisfies a LLN, and this proves the desired convergence in the statement of the theorem, see Remark \ref{rem_uniqueness}.
It remains to show that the right-hand sides of \eqref{moms_Gaussian} are the moments of a \emph{unique} probability measure. For this, we check Carleman's condition: the moment problem for a sequence $\{\alpha_k\}_{k\ge 1}$ determines a unique probability measure if
\begin{equation} \label{eq_x30}
\sum_{m=1}^{\infty}{(\alpha_{2m})^{-1/(2m)}} = +\infty.
\end{equation}
Indeed, elementary bounds show
\begin{gather*}
\alpha_{2m} = \!\!\!\sum_{\pi\in\mathscr{M}(2m)}{\!\!\!(\gamma+1)^{\text{roof}(\pi)}}
\le (\gamma+1)^m \cdot |\mathscr{M}(2m)| = (\gamma+1)^m \cdot (2m-1)!! \le (\gamma+1)^m \cdot (2m)^m
\end{gather*}
and so $(\alpha_{2m})^{-1/(2m)} \ge \text{const}\cdot\frac{1}{\sqrt{m}}$, thus proving \eqref{eq_x30}.
\end{proof}
\subsection{$\gamma$--convolution}
\begin{proof}[Proof of Theorem \ref{Theorem_gamma_convolution}]
Let $G_N^{\mathbf a}$ be the BGF of the distribution\footnote{In this section the word ``distribution'' is used in probabilistic meaning, as in ``distribution of a random variable'', rather than in functional-analytic meaning, where a distribution is a synonym of a generalized function.} of $\mathbf a(N)$ and let $G_N^{\mathbf b}$ be the BGF of the distribution of $\mathbf b(N)$. Since Definition \ref{Def_mom_convergence} is the same as Definition \ref{Definition_LLN_sat_ht}, Theorem \ref{thm_small_th} implies that the distributions of $\mathbf a(N)$ and $\mathbf b(N)$ are $\gamma$--LLN appropriate. Let us denote the corresponding $\gamma$--cumulants (right-hand sides in (a) of Definition \ref{Definition_LLN_appr_ht}) through $\kappa_l^{\mathbf a}$ and $\kappa_l^{\mathbf b}$, respectively.
Further, let $G_N^{\mathbf a+_\theta \mathbf b}$ be the BGF of $\mathbf a(N)+_\theta \mathbf b(N)$. Definition \ref{Def_theta_addition} means that
$$
G_N^{\mathbf a+_\theta \mathbf b}(x_1,\dots,x_N;\, \theta)=G_N^{\mathbf a}(x_1,\dots,x_N;\, \theta) \cdot G_N^{\mathbf b}(x_1,\dots,x_N;\, \theta).
$$
Hence, partial derivatives of $\ln(G_N^{\mathbf a+_\theta \mathbf b})$ are sums of those of $\ln(G_N^{\mathbf a})$ and those of $\ln(G_N^{\mathbf b})$. Therefore, the sequence of distributions of $\mathbf a(N)+_\theta \mathbf b(N)$ is $\gamma$--LLN appropriate with $\gamma$--cumulants given by the sums
$\kappa_l^{\mathbf a}+\kappa_l^{\mathbf b}$, $l=1,2,\dots$. Applying Theorem \ref{thm_small_th} again, we conclude that $\mathbf a(N)+_\theta \mathbf b(N)$ converges in the sense of moments and get formula \eqref{eq_convolution_cumulants}.
\end{proof}
\medskip
One remark is in order. We never prove (or claim) that the $N\to\infty$ limit of empirical distributions of $\mathbf a(N) +_\theta \mathbf b(N)$ is given by a probability measure; we only show that the moments converge to some limiting values. Of course, if we knew that $\mathbf a(N)+_\theta \mathbf b(N)$ is a bona fide random $N$--tuple of reals (which is widely believed to be true, as we explain after Definition \ref{Def_theta_addition}), then we could say that a deterministic limit of random empirical distributions is necessarily given by a probability measure. Hence, if the positivity conjecture were true, then the binary operation $\boxplus_\gamma$ would preserve probability measures on $\mathbb{R}$. For the results of the next three subsections the positivity problem does not arise, as all the prelimit objects are known to be probability measures.
\subsection{Examples of $\gamma$--convolutions}
\begin{exam}
Let $\{m_k\}_{k\ge 1}$ be the sequence of moments of a probability measure $\mu$ on $\mathbb{R}$.
Also consider the sequence $\{a^k\}_{k\ge 1}$ of powers of a real number $a\in\mathbb{R}$; evidently this is the sequence of moments of the Dirac delta mass at point $a$.
Let $\{\tilde m_k\}_{k\ge 1}$ be the sequence of moments of the conventional convolution $\delta_a * \mu$, in other words, if we set $m_0 := 1$ then
$$
\tilde m_k = \sum_{i=0}^k{{k \choose i}a^i m_{k-i}},\qquad k=1, 2, \cdots.
$$
For any $\gamma>0$ we have
$$
\{\tilde m_k\}_{k\ge 1} = \{a^k\}_{k\ge 1} \boxplus_\gamma \{m_k\}_{k\ge 1}.
$$
In words, $\gamma$--convolution with Dirac mass at $a$ is identified with shift by $a$.
\end{exam}
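The displayed binomial formula for $\tilde m_k$ is the classical moment identity for the shift $x\mapsto x+a$; the following Python sketch (illustrative only, on a toy two-atom measure; it does not test the $\gamma$--dependent part of the statement) verifies it numerically.
\begin{verbatim}
from math import comb

# a toy discrete measure mu = (delta_0 + delta_3)/2 and a shift a
atoms, weights, a = [0.0, 3.0], [0.5, 0.5], 2.0
m = [sum(w * x**k for x, w in zip(atoms, weights)) for k in range(8)]
for k in range(1, 8):
    shifted = sum(w * (x + a)**k for x, w in zip(atoms, weights))
    conv = sum(comb(k, i) * a**i * m[k - i] for i in range(k + 1))
    assert abs(shifted - conv) < 1e-9
print("moments of delta_a * mu match the binomial convolution")
\end{verbatim}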
\begin{exam}\label{gauss_exam}
Let $\sigma^2 > 0$ be any positive number and consider the sequence of $\gamma$--cumulants:
\begin{equation}\label{gamma_cums_ex}
\kappa_{l}^{\sigma^2} := \begin{cases}
\sigma^2, & \text{if }l = 2,\\
0,& \text{otherwise}.
\end{cases}
\end{equation}
Denote the corresponding sequence of moments as $\{m_k^{\sigma^2}\}_{k\ge 1} := T^\ga_{\ka\to m}(\{\kappa_l^{\sigma^2}\}_{l\ge 1})$.
From \eqref{gamma_cums_ex} and the definition of $\gamma$--convolution, it follows that for any $\sigma_1^2, \sigma_2^2 >0$ we have
$$
\{m_k^{\sigma_1^2 + \sigma_2^2}\}_{k\ge 1} = \{m_k^{\sigma_1^2}\}_{k\ge 1} \boxplus_\gamma \{m_k^{\sigma_2^2}\}_{k\ge 1}.
$$
Observe that $\{m_k^{\sigma^2}\}_{k\ge 1}$ is the sequence of moments of a rescaled version of the distribution $\mu_\gamma$ of Theorem \ref{thm_Gauss}, namely of the pushforward of $\mu_\gamma$ under the map $x\mapsto \sigma x$.
\end{exam}
\begin{exam}\label{laguerre_exam}
Let $\lambda > 0$ be arbitrary and consider the constant sequence of $\gamma$--cumulants: $\kappa_{l}^{\lambda} := \lambda$, for all $l = 1, 2, \cdots$.
Denote the corresponding sequence of moments as $\{ m_k^\lambda \}_{k\ge 1} := T^\ga_{\ka\to m}(\{ \kappa_l^\lambda \}_{l\ge 1})$. It follows that for any $\lambda_1, \lambda_2>0$ we have
$$
\{m_k^{\lambda_1 + \lambda_2}\}_{k\ge 1} = \{m_k^{\lambda_1}\}_{k\ge 1} \boxplus_\gamma \{m_k^{\lambda_2}\}_{k\ge 1}.
$$
It is known that $\{m_k^\lambda\}_{k\ge 1}$ is the sequence of moments of a probability measure $\nu_\gamma^\lambda$. This measure was studied in \cite{ABMV,TT_Laguerre}, where it was shown to be the limit of the empirical measures of beta Laguerre ensembles in the limit $N\to\infty$, $\beta N\to 2\gamma$, $M/N\to\lambda/\gamma$. The density of $\nu_\gamma^\lambda$ can be obtained from \cite[Lemma 2.1]{TT_Laguerre}; note that in that paper our parameters $\gamma, \lambda$ are denoted by $c, \alpha$, respectively. We also refer to \cite[Section 5.4 and Figure 5]{MP} for additional details and plots of the densities.
Since all the $\gamma$--cumulants of $\nu_\gamma^{\lambda}$ are equal to each other, this measure is analogous to the Poisson and the Marchenko-Pastur distributions, whose cumulants (respectively, free cumulants) are all equal to each other.
\end{exam}
\subsection{Law of Large Numbers for ergodic measures}\label{sec_ergodic}
We start this section by providing some context in the complex case $\theta=1$ (or $\beta=2$). The infinite-dimensional unitary group $U(\infty)$ is defined as the union of the groups of $N\times N$ unitary matrices, $\bigcup_{N=1}^{\infty} U(N)$, where we embed $U(N)$ into $U(N+1)$ as the subgroup of operators fixing the $(N+1)$st basis vector. Each element of $U(\infty)$ is an infinite matrix, such that for some $N=1,2,\dots$, its top $N\times N$ corner is unitary and outside this corner we have $1$s on the diagonal and $0$s everywhere else. Consider the space $\mathcal H$ of infinite complex Hermitian matrices with rows and columns parameterized by positive integers $i$ and $j$. $U(\infty)$ acts on $\mathcal H$ by conjugation and one can ask about random matrices in $\mathcal H$ whose laws are invariant under this action. Their probability distributions form a simplex and much of the work on conjugation-invariant matrices comes down to studying the extreme points of this simplex --- ergodic conjugation-invariant random matrices in $\mathcal H$. In \cite{Pickrell,OV} these matrices were completely classified: they depend on a sequence of real parameters $\{\alpha_i\}_{i=1}^{\infty}$ with $\sum_{i=1}^{\infty} \alpha_i^2 < \infty$ and two reals $\delta_1\in\mathbb R$, $\delta_2\ge 0$ and are given by an infinite sum:
\begin{equation}
\label{eq_OV_expansion}
\delta_1 \mathcal I + \sqrt{\delta_2}\, \frac{X+X^*}{2} + \sum_{i=1}^\infty \alpha_i \left(\tfrac{1}{2}V_i V_i^*-\mathcal I\right),
\end{equation}
where $\mathcal I$ is the identity matrix (with $1$s on the diagonal and $0$s everywhere else), $X$ is a matrix with i.i.d.\ Gaussian $\mathcal N(0,1)+\mathbf i \mathcal N(0,1)$ elements, $V_i$ is an infinite (column-)vector with i.i.d.\ Gaussian $\mathcal N(0,1)+\mathbf i \mathcal N(0,1)$ components and all the involved matrices are independent. Note that if the only non-zero parameter is $\delta_2$, then \eqref{eq_OV_expansion} gives the Gaussian Unitary Ensemble (a particular case of Wigner matrices). If the only non-zero parameters are $\alpha_1=\alpha_2=\dots=\alpha_K=1$ and $\delta_1=K$, then \eqref{eq_OV_expansion} gives the Laguerre Unitary Ensemble (a particular case of Wishart or sample-covariance matrices).
As was first mentioned in \cite[Remark 8.3]{OV} and recently studied in detail in \cite{AN}, the problem of classification of conjugation-invariant infinite complex Hermitian matrices has a general $\theta$--version related to the $\theta$-corners processes of Definition \ref{def_betacorner}.
Roughly speaking, while there are no infinite self-adjoint matrices in the general $\th$--version, one can make sense of the \emph{distribution of eigenvalues} of the top-left principal submatrices (corners) of the infinite self-adjoint matrices.
One of the problems addressed in \cite{AN} (see Theorem 1.13 there) is the classification of ergodic random matrices at general values of $\th$. Since there are no bona fide matrices, this problem actually asks for distributions of $N$-tuples, for $N=1, 2, \cdots$, satisfying certain coherence relations --- the distributions should be regarded as the eigenvalue distributions of the $N\times N$ corners of an ergodic matrix.
It turns out that the set of parameters remains the same as in the $\theta=1$ case. The law of the top-left $1\times 1$ corner $\eta$ of an ergodic matrix at general values of $\theta$ has characteristic function
\begin{equation}
\label{eq_Fourier_theta}
\mathbb E e^{\mathbf i t \eta}= \mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (\mathbf i t),\quad \text{ where }\quad \mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z):= \exp\Bigl(\delta_1 z+\tfrac{\delta_2}{2\theta} z^2\Bigr) \cdot \prod_{i=1}^{\infty} \frac{\exp(-\alpha_i z)}{\left(1-\tfrac{\alpha_i}{\theta} z\right)^\theta}.
\end{equation}
More generally, there is a formula determining the eigenvalue distribution of the corners, namely if $\eta_1\le \eta_2\le\dots\le \eta_N$ are the random eigenvalues of the $N\times N$ corner, then their Bessel generating function is explicit:
\begin{equation}
\label{eq_BGF_ergodic}
\mathbb E\!\left[ B_{(\eta_1,\dots,\eta_N)}(x_1,\dots,x_N;\, \theta) \right] = \prod_{j=1}^N \mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (x_j),
\end{equation}
and at $\theta=1$ this matches the eigenvalue distribution of the $N\times N$ corners of \eqref{eq_OV_expansion}.
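For the reader's convenience, let us make \eqref{eq_Fourier_theta} explicit near $z=0$: taking the logarithm and expanding $-\ln(1-u)=\sum_{l\ge 1} u^l/l$, we obtain
$$
\ln \mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z) = \delta_1 z + \frac{\delta_2}{2\theta}\, z^2 + \sum_{l=2}^{\infty} \frac{z^l}{l}\, \theta^{1-l} \sum_{i=1}^{\infty} \alpha_i^l,
$$
where the $l=1$ terms of the logarithms cancel against the factors $\exp(-\alpha_i z)$. Hence, the convergence condition \eqref{eq_ergodic_convergence} below amounts to the convergence of $\delta_1$, of $\frac{\delta_2}{\theta}+\frac{1}{\theta}\sum_{i} \alpha_i^2$, and of $\theta^{1-l}\sum_{i} \alpha_i^l$ for each $l\ge 3$ to the respective limits $\kappa_1, \kappa_2, \kappa_3, \dots$.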
We prove a $\theta N\to\gamma$ Law of Large Numbers for the ergodic measures of \cite{OV,AN}:
\begin{thm} \label{Theorem_ergodic}
Suppose that $\theta$ and $\{\alpha_i\}_{i=1}^{\infty}$, $\delta_1$, $\delta_2$ vary with $N$ in such a way that $N\to\infty$, $\th\to 0$, $\theta N\to \gamma$ and
\begin{equation}
\label{eq_ergodic_convergence}
\ln \left( \mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z) \right) \longrightarrow F(z)= \sum_{l=1}^{\infty} \frac{\kappa_l}{l} z^l,
\end{equation}
uniformly over a complex neighborhood of $0$. Then the eigenvalues $(\eta_1,\dots,\eta_N)$ of the $N\times N$ corners of the corresponding general $\theta$ ergodic random matrix converge in the sense of moments (as in Definitions \ref{Def_mom_convergence} or \ref{Definition_LLN_sat_ht}) to a probability distribution with $\gamma$--cumulants $\kappa_l$, i.e.\ its moments are found by the expression of Theorem \ref{theorem_cumuls_moms}.
\end{thm}
\begin{remark}\label{Remark_Gauss_Laguerre}
Choosing $\delta_2 = \theta$, so that $\mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z)=\exp( z^2/2 )$, we recover the LLN for the Gaussian $\beta$--ensembles as $N\to\infty$, $\beta N\to 2\gamma$, as in Section \ref{Section_GbE}.
\noindent Choosing $\alpha_1=\alpha_2=\dots=\alpha_M=\theta$, $\delta_1=M\theta$ with $M=\lfloor \lambda N/\gamma \rfloor$, so that $\mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z)=(1-z)^{-\theta \lfloor \lambda N / \gamma \rfloor}\to (1-z)^{-\lambda}$, we recover the LLN for the Laguerre $\beta$--ensembles as $N\to\infty$, $\beta N\to 2\gamma$ and $M/N\to \lambda/\gamma$.
The limiting probability measure is $\nu_\gamma^\lambda$, as described in Example \ref{laguerre_exam}.
\end{remark}
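The Laguerre specialization above can also be checked symbolically. The following Python/SymPy sketch (ours, purely illustrative) verifies that for $\alpha_1=\dots=\alpha_M=\theta$, $\delta_1=M\theta$, $\delta_2=0$, one has $\ln \mathcal F_{\theta;\{\alpha_i\},\delta_1,\delta_2}(z) = -M\theta\ln(1-z)$, so that all $\gamma$--cumulants are equal to $\lambda=M\theta$.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
theta, M = sp.Rational(1, 10), 20      # lambda = M * theta = 2
lam = M * theta
# ln F for alpha_1 = ... = alpha_M = theta, delta_1 = M*theta, delta_2 = 0
lnF = M * theta * z + M * (-theta * z - theta * sp.log(1 - z))
series = sp.series(lnF, z, 0, 7).removeO()
for l in range(1, 7):
    # gamma-cumulants: kappa_l = l * [z^l] ln F; each should equal lambda
    assert sp.simplify(l * series.coeff(z, l) - lam) == 0
print("kappa_1 = ... = kappa_6 =", lam)
\end{verbatim}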
\begin{remark} The formula \eqref{eq_Fourier_theta} has a multiplicative structure: a product of $\mathcal F_{\theta; \{\alpha_i\}, \delta_1, \delta_2} (z)$ functions is again a function of the same type. This property leads to the limits in Theorem \ref{Theorem_ergodic} being infinitely divisible with respect to the $\gamma$--convolution $\boxplus_\gamma$.
This is in agreement with Examples \ref{gauss_exam} and \ref{laguerre_exam}, which show that the measures $\mu_\gamma^{\sigma^2}$ and $\nu_\gamma^\lambda$ are $\gamma$--infinitely-divisible.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Theorem_ergodic}]
Combining \eqref{eq_BGF_ergodic} with \eqref{eq_ergodic_convergence}, we conclude that the BGF of $(\eta_1,\dots,\eta_N)$ is $\gamma$-LLN appropriate in the sense of Definition \ref{Definition_LLN_appr_ht}. Hence, by Theorem \ref{thm_small_th}, $(\eta_1,\dots,\eta_N)$ converge in the sense of moments and the asymptotic moments are recovered from the $\gamma$--cumulants $\kappa_l$ by using the map $T^\ga_{\ka\to m}$.
\end{proof}
\subsection{Limit of projections}
We again start from the complex case $\theta=1$. This time we fix $N=1,2,\dots$ and a deterministic $N$--tuple of reals $a_1\le \dots\le a_N$. Let $A_N$ be a uniformly random $N\times N$ complex Hermitian matrix with eigenvalues $a_1,\dots,a_N$ and let $A_m$ be the $m\times m$ top-left submatrix of $A_N$. We now fix $\tau>1$, set $m=\lfloor N/\tau\rfloor$ and send $N\to\infty$. If we assume that the empirical measures of eigenvalues of $A_N$, $\frac{1}{N}\sum_{i=1}^N \delta_{a_i}$, converge to a limiting probability measure $\mu$, then the (random) empirical measures of eigenvalues of $A_m$ converge to a (deterministic) measure $\mu^{\boxplus \tau}$. For integer $\tau$ this measure is the same as the free convolution of $\tau$ copies of $\mu$, hence, $\mu^{\boxplus \tau}$ can be called a fractional convolution power, see \cite{STJ} for a recent study and references.
We now present an analogue of the operation $\mu\mapsto \mu^{\boxplus \tau}$ in our $\theta\to 0$ asymptotic framework.
\begin{thm} \label{Theorem_projections} Fix real numbers $\gamma>0$ and $\tau>1$. Suppose that for each $N=1,2,\dots,$ we are given an $N$--tuple of reals $a_1\le\dots\le a_N$, and let $\{y_i^k\}_{1\le i \le k \le N}$ be the $\theta$--corners process with top row $a_1,\dots,a_N$, as in Definition \ref{def_betacorner}. (In particular, this means $y_1^N=a_1$,\dots, $y_N^N=a_N$.) Define the empirical measures
$$
\rho_N=\frac{1}{N}\sum_{i=1}^N \delta_{a_i},\qquad \rho_{N}^\tau=\frac{1}{\lfloor N/\tau\rfloor} \sum_{i=1}^{\lfloor N/\tau\rfloor }\delta_{y^{\lfloor N/\tau\rfloor}_i},
$$
and suppose that all measures $\rho_N$ are supported inside a segment $[-C,C]$ and as $N\to\infty$, $\rho_N$ weakly converge to a probability measure $\mu$ (supported inside the same segment). Then as $N\to\infty$, $\theta\to 0$ with $\theta N\to\gamma$, the (random) measures $\rho_N^\tau$ converge weakly, in probability to a deterministic measure $\mu^{\tau,\gamma}$. If $\{m_k\}_{k\ge 1}$ are the moments of $\mu$ and $\{m_k^{\tau}\}_{k\ge 1}$ are the moments of $\mu^{\tau,\gamma}$, then
\begin{equation}\label{same_cums}
T^{\gamma}_{m\to \kappa}\bigl(\{m_k\}_{k\ge 1}\bigr)=T^{\gamma/\tau}_{m\to \kappa}\bigl(\{m_k^{\tau}\}_{k\ge 1}\bigr).
\end{equation}
In other words, $\gamma$--cumulants of $\mu$ coincide with $\tfrac{\gamma}{\tau}$--cumulants of $\mu^{\tau,\gamma}$.
\end{thm}
\begin{remark}
The condition of support inside $[-C,C]$ is used to guarantee that all the involved measures are determined by their moments; it can be replaced by other uniqueness conditions for the moments problem.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Theorem_projections}]
Convergence $\rho_N\to \mu$ and the condition on the support of $\rho_N$ imply that the moments of $\rho_N$ converge to those of $\mu$. Hence, the sequence of delta-measures (unit masses) on $N$--tuples $(a_1,\dots,a_N)$ satisfies LLN in the sense of Definition \ref{Definition_LLN_sat_ht}. Thus, Theorem \ref{thm_small_th} yields that it is $\gamma$--LLN appropriate, i.e., its BGF
$$
G_{N;\theta}(x_1,\dots,x_N)= B_{(a_1,\dots,a_N)}(x_1,\dots, x_N;\, \theta)
$$
satisfies the conditions of Definition \ref{Definition_LLN_appr_ht}. Let $\tilde G_{N;\theta}$ denote the BGF of the $\lfloor N/\tau\rfloor$--tuple of reals $y^{\lfloor N/\tau\rfloor}_1\le \dots \le y^{\lfloor N/\tau\rfloor}_{\lfloor N/\tau\rfloor}$. Then Definition \ref{Definition_Bessel_function} implies that
$$
\tilde G_{N;\theta}(x_1,\dots,x_{\lfloor N/\tau\rfloor})=G_{N;\theta}(x_1,\dots,x_{\lfloor N/\tau\rfloor}, 0,0,\dots,0),
$$
where there are $N-\lfloor N/\tau\rfloor$ zeros in the right-hand side. Hence, the partial derivatives of $\ln(\tilde G_{N;\theta})$ coincide with partial derivatives of $\ln(G_{N;\theta})$ and, therefore, the former is $\gamma$--LLN appropriate. It is important to emphasize at this point that we use the same $\theta$ for $G_{N;\theta}$ and $\tilde G_{N;\theta}$; however, the number of variables for the latter is $\lfloor N/\tau\rfloor$ rather than $N$, so that $\theta\cdot\lfloor N/\tau\rfloor\to\gamma/\tau$. This leads to $\gamma$ being divided by $\tau$. It remains to use Theorem \ref{thm_small_th} yet again to conclude that the random measures $\rho_{N}^\tau$ converge in the sense of moments and consequently also weakly, in probability.
\end{proof}
In general, we do not know any simple criterion for when a given sequence of numbers is a sequence of $\gamma$--cumulants corresponding to a probability measure. Yet Theorem \ref{Theorem_projections} leads to an interesting comparison between different $\gamma$'s.
\begin{corollary}
Take a sequence of real numbers $\kappa_1,\kappa_2,\dots$ and suppose that for some $\gamma_0>0$, these numbers are $\gamma_0$--cumulants of some probability measure $\mu$, i.e., $\{\kappa_l\}_{l\ge 1}=T^{\gamma_0}_{m\to\kappa}\bigl(\{m_k\}_{k\ge 1}\bigr)$ with $m_k=\int_{\mathbb R} x^k \mu(dx)$. Then for each $0<\gamma<\gamma_0$ the same numbers are also $\gamma$--cumulants of some probability measure. In particular, sending $\gamma\to 0$, we also have that the sequence $0! \kappa_1, 1!\kappa_2, 2!\kappa_3,\dots$ gives conventional cumulants of some probability measure.
\end{corollary}
\begin{proof}
We apply Theorem \ref{Theorem_projections} with $\tau=\gamma_0/\gamma$. The theorem was proven only for compactly supported measures, but we can approximate any measure by compactly supported ones. Finally, the convergence of $\gamma$--cumulants to conventional cumulants as $\gamma\to 0$ is discussed in Section \ref{Section_semifree}.
\end{proof}
\begin{remark}
Theorem \ref{Theorem_projections}, or just equation \eqref{same_cums}, defines for $\tau>1$ the \emph{$(\tau,\gamma)$--projection} map
$$
\Pi_{\tau,\gamma} : \mu \mapsto \mu^{\tau,\gamma},
$$
which maps the space of probability measures of compact support to itself.
It would be interesting to study the possibility of an extension of the $(\tau, \gamma)$--projection map to all probability measures. In particular, the probability measures $\mu_\gamma^{\sigma^2}$ and $\nu_\gamma^\lambda$ from Examples \ref{gauss_exam} and \ref{laguerre_exam} should map to measures of the same type:
$\Pi_{\tau, \gamma}(\mu_\gamma^{\sigma^2}) = \mu_{\gamma/\tau}^{\sigma^2}$ and $\Pi_{\tau, \gamma}(\nu_\gamma^\lambda) = \nu_{\gamma/\tau}^\lambda$.
\end{remark}
\section{Law of Large Numbers at high temperature}\label{sec_proof_LLN}
In this section, we prove Theorem \ref{thm_small_th}.
Recall that the real parameter $\gamma>0$ is fixed, and we are interested in the limit regime $N\to\infty$, $\th\to 0$, and $\th N\to\gamma$.
Let us recall some terminology about partitions of numbers (rather than set partitions of Section \ref{Section_main_results}).
A partition $\lambda$ is a weakly decreasing sequence of nonnegative integers $\lambda = (\lambda_1\ge \lambda_2\ge \cdots\ge 0 )$, $\lambda_i\in\mathbb{Z}_{\ge 0}$, such that $\sum_{i=1}^{\infty}\lambda_i<\infty$. The latter sum is denoted $|\lambda|$ and is called the \emph{size} of the partition $\lambda$.
If $\lambda$ is a partition of size $k$, we write $\lambda\vdash k$.
The \emph{length} $\ell(\lambda)$ of $\lambda$ is defined as the number of strictly positive parts of $\lambda$.
The partitions are often identified with Young diagrams, in which $\lambda_i$ become the row lengths. We also need column lengths $\lambda'_1\ge \lambda'_2\ge\dots$ defined by $\lambda_{j}'=|\{i\ge 1 \mid \lambda_i\ge j\}|$. In particular, $\lambda'_1=\ell(\lambda)$.
\subsection{The asymptotic expansion of Dunkl operators}
If $F$ is a smooth symmetric function of the $N$ variables $x_1,\dots,x_N$, then its Taylor series expansion is also symmetric and we can write the $k$--th order approximation as
\begin{equation} \label{eq_symmetric_Taylor}
F(x_1,\dots,x_N)= \sum_{\lambda:\, |\lambda|\le k,\, \ell(\lambda) \le N} c_F^{\lambda}\cdot M_\lambda(\vec{x})+ O(\|x\|^{k+1}),
\end{equation}
where the sum is over partitions $\lambda$ of size at most $k$ and length at most $N$. Here $M_\lambda(\vec{x})$ is the monomial symmetric function:
$$
M_\lambda(\vec{x})=\sum_{\begin{smallmatrix}(d_1,\dots,d_N)\in\mathbb Z^N_{\ge 0}, \text{ such that}\\ \lambda\text{ is the rearrangement of } d_i \text{ in nonincreasing order}\end{smallmatrix}} x_1^{d_1} \cdot x_2^{d_2}\cdots x_N^{d_N}.
$$
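For concreteness, here is a small Python sketch (ours, purely illustrative) of the two notions just recalled: the column lengths $\lambda'_j$ and the monomial symmetric function $M_\lambda$.
\begin{verbatim}
from itertools import permutations
from math import prod

def conjugate(lam):
    """Column lengths lambda'_j = #{i : lambda_i >= j}."""
    return [sum(1 for li in lam if li >= j) for j in range(1, lam[0] + 1)]

def M(lam, xs):
    """Monomial symmetric polynomial M_lambda(x_1, ..., x_N)."""
    exps = tuple(lam) + (0,) * (len(xs) - len(lam))
    return sum(prod(x**e for x, e in zip(xs, p))
               for p in set(permutations(exps)))

assert conjugate([4, 2, 1]) == [3, 2, 1, 1]
assert M([2, 1], [2.0, 3.0]) == 2.0**2 * 3.0 + 2.0 * 3.0**2   # x^2*y + x*y^2
\end{verbatim}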
\begin{thm} \label{theorem_operators_expansion}
Fix $k=1,2,\dots$ and a partition $\lambda$ with $|\lambda|=k$. Let $F(x_1,\dots,x_N)$ be a symmetric function of $(x_1,\dots,x_N)\in\mathbb{R}^N$, which is $(k+1)$--times continuously differentiable in a neighborhood of $(0,\dots,0)$ and satisfies $F(0,\dots,0)=0$. Then we have:
\begin{multline}\label{eq_operator_small_th_expansion}
N^{-\ell(\lambda)} \left[\prod_{i=1}^{\ell(\lambda)} \P_{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}= b^{\lambda}_{\lambda} \cdot c_F^{\lambda} + \sum_{\mu:\, |\mu|=k,\, \ell(\mu)>\ell(\lambda)} b^{\lambda}_{\mu} \cdot c_F^{\mu}\\ + L\Bigl(c_F^{(i)},\, 1\le i \le k-1\Bigr) + R_1\Bigl(c^{\nu}_F, \, |\nu|< k\Bigr) + N^{-1} R_2\Bigl(c^{\nu}_F, \, |\nu|\le k\Bigr),
\end{multline}
where $b^\lambda_\mu$ are coefficients, which are uniformly bounded in the regime $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$. In particular,
\begin{equation}
\label{eq_diagonal_asymptotics}
\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0\\ \theta N\to\gamma\end{smallmatrix}}\, b_{\lambda}^{\lambda}= \prod_{i=1}^{\ell(\lambda)}\lambda_i (1+\gamma)_{\lambda_i-1}.
\end{equation}
Further,
\begin{equation} \label{eq_one_row_part}
L\Bigl(c_F^{(i)},\, 1\le i \le k-1\Bigr)= \prod_{i=1}^{\ell(\lambda)} \left([z^0](\partial+\gamma d+*_g)^{\lambda_i-1} g(z)\right)- k (1+\gamma)_{k-1} c^{(k)}_F {\mathbf 1}_{\ell(\lambda)=1},
\end{equation}
where $(m)$ is the one-row Young diagram of size $m$, the operators $\partial$, $d$, $*_g$ are the ones introduced in Definition \ref{df:ops}, and $g(z) := \sum_{n=1}^{\infty} n c_F^{(n)} z^{n-1}$.
Next, $R_1\Bigl(c^{\nu}_F, \, |\nu|< k\Bigr)$ is a polynomial in $c^{\nu}_F, \, |\nu|< k$, such that:
\begin{itemize}
\item If we assign the degree $|\nu|$ to each $c^\nu_F$, then $R_1$ is homogeneous of degree $k$.
\item The coefficients of the monomials in $R_1$ are uniformly bounded in the regime $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$.
\item Each monomial in $R_1$ has at least one factor $c^\nu_F$ with $\ell(\nu)>1$.
\end{itemize}
Finally, $R_2\Bigl(c^{\nu}_F, \, |\nu|\le k\Bigr)$ is a homogeneous polynomial in $c^{\nu}_F, \, |\nu|\le k$, of degree $k$ and with uniformly bounded coefficients (in the same regime).
\end{thm}
Before proving Theorem \ref{theorem_operators_expansion}, let us use it to deliver the proof of Theorem \ref{thm_small_th}.
\begin{proof}[Proof of Theorem \ref{thm_small_th}]
First, take a $\gamma$--LLN appropriate sequence $\{\mu_N\}_N$ with associated sequence of real numbers $\{\kappa_l\}_{l\ge 1}$.
Let $\{m_k\}_{k\ge 1}$ be the image of $\{\kappa_l\}_{l\ge 1}$ under the map $T^\ga_{\ka\to m}$, that is, each $m_k$ is the function of the $\kappa_l$'s given by \eqref{eq_moments_through_f_cumulants}.
We aim to show that $\{\mu_N\}_N$ satisfies a LLN with associated sequence of real numbers $\{m_k\}_{k\ge 1}$.
Let us denote the BGF of $\mu_N$ by $G_{N; \theta}(x_1, \dots, x_N)$. Let $s = 1, 2, \dots$ and $k_1, \dots, k_s\in\mathbb{Z}_{\geq 1}$ be arbitrary.
By Proposition \ref{proposition_moments_through_operators} (or Proposition \ref{Proposition_BGF_dist} for distributions), we have
\begin{equation}
\mathbb E_{\mu_N} \!\!\left[\prod_{i=1}^s p_{k_i}^N \right] =
N^{-s} \left( \prod_{i=1}^s{\P_{k_i}}\right)G_{N; \th}(x_1, \dots, x_N)\Bigr|_{x_1=\dots=x_N=0}.
\end{equation}
Without loss of generality, we assume that $k_1\ge k_2\ge\dots\ge k_s$, so that the $k_i$'s form a partition; we denote its size by $k := k_1+\cdots+k_s$.
Since $G_{N; \th}$ is holomorphic in a neighborhood of the origin and $G_{N; \th}(0,\dots,0)=1$, then there is a holomorphic function $F_{N; \th}(x_1, \dots, x_N)$ in a neighborhood of the origin such that $G_{N; \th} = \exp(F_{N; \th})$ and $F_{N; \th}(0,\dots,0)=0$. The functions $F_{N; \th}(x_1, \dots, x_N)$ are smooth and symmetric in the real variables $x_1, \dots, x_N$, so we can consider their Taylor expansions:
$$
F_{N; \th}(x_1,\dots,x_N)= \sum_{\lambda:\, |\lambda|\le k,\, \ell(\lambda) \le N} c_{F_{N; \th}}^{\lambda}\cdot M_\lambda(\vec{x})+ O(\|x\|^{k+1}).
$$
By $\gamma$--LLN appropriateness,
$$
\lim_{\substack{N\to\infty,\, \th\to 0\\ \th N\to\ga}} c^{(n)}_{F_{N; \th}} = \frac{\kappa_n}{n},\qquad \lim_{\substack{N\to\infty,\, \th\to 0\\ \th N\to\ga}} c^{\mu}_{F_{N; \th}}=0,\quad \text{if }\ell(\mu)>1.
$$
Apply Theorem \ref{theorem_operators_expansion} to the function $F_{N; \th}$ and the partition $\lambda = (k_1 \ge \cdots \ge k_s)$.
Let us take the limit of each term in the resulting right-hand side of \eqref{eq_operator_small_th_expansion} in the limit regime $N\to\infty$, $\th\to 0$, $\th N\to\gamma$:
\begin{itemize}
\item In the first line, if $s>1$, then each term involves some $c^{\mu}_{F_{N; \th}}$ with $\ell(\mu)>1$, and therefore tends to $0$. Otherwise, if $s=1$, then there is a single asymptotically non-vanishing term, namely $b^{(k_1)}_{(k_1)} \cdot c^{(k_1)}_{F_{N; \th}}$, which converges to $(1+\gamma)_{k_1-1}\,\kappa_{k_1}$.
\item The polynomial $R_1$ converges to $0$, since each of its monomials involves some $c^{\mu}_{F_{N; \th}}$ with $\ell(\mu)>1$ and, therefore, vanishes asymptotically.
\item The polynomial $\frac{1}{N} R_2$ converges to $0$ due to the $\frac{1}{N}$ prefactor.
\item The polynomial $L$ converges to
$$
\prod_{i=1}^{s} \left([z^0](\partial+\gamma d+*_g)^{k_i-1} g(z)\right)- (1+\gamma)_{k_1-1} \kappa_{k_1} {\mathbf 1}_{s=1},
$$
where $g(z) = \sum_{n=1}^{\infty} {\kappa_n z^{n-1}}$, due to the fact that the power series $\sum_{n=1}^{\infty}nc_{F_{N; \th}}^{(n)}z^{n-1}$ converges coefficient-wise to $g(z)$.
\end{itemize}
Combining the terms coming from the above four items, we conclude that
$$
\lim_{N\to\infty} \mathbb E_{\mu_N} \!\!\left[\prod_{i=1}^s p_{k_i}^N \right] = \prod_{i=1}^s \left([z^0](\partial+\gamma d+*_g)^{k_i-1} g(z)\right)\!.
$$
We have thus arrived at the Law of Large Numbers with $m_k$ given by \eqref{eq_moments_through_f_cumulants}.
\bigskip
In the opposite direction, take a sequence $\mu_N$ which satisfies the Law of Large Numbers with associated sequence $\{m_k\}_{k\ge 1}$.
Let $\{\kappa_l\}_{l\ge 1}$ be the image of $\{m_k\}_{k\ge 1}$ under the map $T^\ga_{m\to\ka}$.
Again let $G_{N; \th}(x_1, \dots, x_N)$ be the BGF of $\mu_N$. We show that $\mu_N$ is LLN--appropriate with corresponding sequence $\{\kappa_l\}_{l\ge 1}$, that is, we are going to establish the conditions on partial derivatives of Definition \ref{Definition_LLN_appr_ht}. This will be done by induction on the total order of the derivative. For the inductive step, we assume that for all $s\le k-1$, the asymptotic behavior of all partial derivatives of order $s$ is already established, i.e.\ we assume that the limits
$$
\lim_{\begin{smallmatrix} N\to\infty,\, \theta \to 0\\ \theta N\to \gamma \end{smallmatrix}}\left.\frac{\partial}{\partial x_{i_1}}\cdots\frac{\partial}{\partial x_{i_s}}\ln{(G_{N; \th})}\right|_{x_1=\dots=x_N=0},\quad i_1, \cdots, i_s\in\mathbb{Z}_{\ge 1},
$$
exist and are equal to zero unless $i_1 = \cdots = i_s$, in which case the limit is equal to $(s-1)!\cdot \kappa_s$.
Our task is to prove the two conditions of Definition \ref{Definition_LLN_appr_ht} for $\ell=k$ and for $r=k$.
Let $p(k)$ be the total number of partitions of $k$ and consider the $p(k)$ expressions \eqref{eq_operator_small_th_expansion} obtained by making $\lambda$ run over all partitions of $k$, with $F_{N; \th}$ determined through $G_{N; \th} = \exp(F_{N; \th})$.
We regard the left-hand sides and the coefficients $c^{\mu}_{F_{N; \th}}$ with $|\mu| < k$, as constants, while we regard the terms $c^{\lambda}_{F_{N; \th}}$ with $|\lambda| = k$, as variables; then we can treat these expressions as $p(k)$ linear equations for the $p(k)$ variables $c^{\lambda}_{F_{N; \th}}$ with $|\lambda|=k$.
The coefficients of these equations generally depend on $N$ and $\theta$, and moreover we know the $N\to\infty$, $\theta\to 0$, $\theta N\to\gamma$ asymptotic behavior of the left-hand sides of \eqref{eq_operator_small_th_expansion} as well as $L$ and $R_1$ in the right-hand side (by the inductive hypothesis).
The form of the first line of \eqref{eq_operator_small_th_expansion} implies that the matrix of coefficients of these equations becomes triangular as $N\to\infty$, $\th\to 0$, $\theta N\to \gamma$ in the lexicographic order $\leq$ on partitions of $k$, viewed as vectors of column lengths $(\lambda'_1,\lambda'_2,\dots)$, because $\ell(\mu)>\ell(\lambda)$ implies $\mu>\lambda$. The diagonal elements have nonzero limits, because of \eqref{eq_diagonal_asymptotics}.
We can rewrite these linear equations in the matrix notation. Let $B^{N,\theta}$ be the $p(k)\times p(k)$ matrix with matrix elements
$$
B^{N,\theta}(\mu,\lambda)=b^{\lambda}_\mu.
$$
Further, let $\mathbf c^N$ denote the $p(k)$--dimensional column-vector with coordinates $c^{\lambda}_{F_{N; \th}}$, $|\lambda|=k$. Then the previous paragraph can be summarized as a matrix equation
\begin{equation}
\label{eq_x26}
B^{N,\theta}\cdot \mathbf c^N = \mathbf r^N,
\end{equation}
where the vector $\mathbf c^N$ is unknown and the right-hand side $\mathbf r^N$ is known. The key property of \eqref{eq_x26} is that the entries of the inverse matrix $(B^{N,\theta})^{-1}$ are bounded as $N\to\infty$, $\theta\to 0$, $\theta N\to\gamma$; this follows from triangularity of $B^{N,\theta}$ and non-zero limits for its diagonal entries. Let $\mathbf c^{\infty}$ denote another $p(k)$--dimensional vector, in which the first coordinate (corresponding to the one-row partition $(k)$) is $\tfrac{\kappa_k}{k}$ (here $\kappa_k$ is found from \eqref{eq_x28}, in which the numbers $m_1,m_2,\dots$ are known to us) and all other coordinates are zeros. The first part of the proof (where we showed that each $\gamma$--LLN appropriate sequence satisfies a LLN) and the induction hypothesis imply that
\begin{equation}
\label{eq_x27}
B^{N,\theta}\cdot \mathbf c^\infty = \mathbf r^N +o(1),
\end{equation}
where $o(1)$ is a vanishing term as $N\to\infty$, $\theta\to 0$, $\theta N\to\gamma$. Multiplying
\eqref{eq_x26} and \eqref{eq_x27} by $(B^{N,\theta})^{-1}$ and comparing the results, we conclude that
$$
\lim_{\begin{smallmatrix}N\to\infty, \theta\to 0,\\ \theta N\to \gamma \end{smallmatrix}} \mathbf c^N=\mathbf c^\infty. \qedhere
$$
\end{proof}
\subsection{Proof of Theorem \ref{theorem_operators_expansion}}\label{sec_proof_expansion}
We start by reducing to the case of $F$ being a symmetric polynomial.
\begin{lemma} \label{Lemma_replace_by_polynomial}
Suppose that $F$ is a $(k+1)$--times continuously differentiable function in a neighborhood of $(0, \dots, 0)\in\mathbb{R}^N$, with Taylor expansion \eqref{eq_symmetric_Taylor}. Then for any $\lambda$ with $|\lambda|=k$, we have
$$
\left[\prod_{i=1}^{\ell(\lambda)} \P_{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}= \left[\prod_{i=1}^{\ell(\lambda)} \P_{\lambda_i}\right] \exp\bigl( \tilde F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0},
$$
where
$$
\tilde F(x_1,\dots,x_N)= \sum_{\nu:\, |\nu|\le k} c_F^{\nu}\cdot M_\nu(\vec{x}).
$$
\end{lemma}
\begin{proof}
We have
$$
\exp\bigl( F(x_1,\dots,x_N)\bigr)=\exp\bigl( \tilde F(x_1,\dots,x_N)\bigr) +R(x_1,\dots,x_N),
$$
where $R$ is a $(k+1)$--times continuously differentiable function, satisfying $R=O(\|x\|^{k+1})$ as $(x_1,\dots,x_N)\to(0,\dots,0)$. It remains to show that after we apply $k$ operators of the form $\frac{\partial}{\partial x_i}$ or $\frac{1-s_{ij}}{x_i-x_j}$ to $R$, the resulting function $R^{(k)}$ is continuous and vanishes at $(0,\dots,0)$. For that we let $R^{(m)}$, $m=1,2,\dots,k$, be the result of application of $m$ such operators and prove by induction on $m$ that $R^{(m)}$ is $(k+1-m)$--times continuously differentiable and satisfies $R^{(m)}=O(\|x\|^{k+1-m})$. The induction step is proven by applying to $R^{(m)}$ Taylor's theorem with the remainder in integral form.
\end{proof}
By virtue of Lemma \ref{Lemma_replace_by_polynomial}, we can (and will) assume for the remainder of this section that
$$F(x_1, \dots, x_N) = \sum_{\nu:\, |\nu|\le k}{c^{\nu}_F\cdot M_\nu(\vec{x})}.$$
Next, consider any product of $k$ operators, each of which is either $\frac{\partial}{\partial x_i}$ for some $i$, or $\frac{1}{x_i-x_j}(1-s_{ij})$ for some $i$ and $j$. We apply these operators inductively to $\exp(F)$, using the following rules:
\begin{multline}
\label{eq_Leibnitz}
\frac{\partial}{\partial x_i} \bigl[ H(x_1,\dots,x_N) \cdot \exp (F(x_1,\dots,x_N)) \bigr]\\
= \left(\frac{\partial}{\partial x_i} H(x_1,\dots,x_N)+ H(x_1,\dots,x_N) \frac{\partial}{\partial x_i} F(x_1,\dots,x_N)\right) \cdot \exp (F(x_1,\dots,x_N)),
\end{multline}
\begin{multline}
\label{eq_Leibnitz_dif}
\frac{1}{x_i-x_j}(1-s_{ij}) \bigl[ H(x_1,\dots,x_N) \cdot \exp (F(x_1,\dots,x_N)) \bigr]\\ = \left(\frac{1}{x_i-x_j}(1-s_{ij}) H(x_1,\dots,x_N) \right) \cdot \exp(F(x_1,\dots,x_N)).
\end{multline}
Hence, taking into account that $F(0,\dots,0)=0$, the result of acting by such a product on $\exp(F)$ and then setting all variables equal to $0$ is a finite linear combination of products of actions of $\frac{\partial}{\partial x_i}$ and $\frac{1}{x_i-x_j}(1-s_{ij})$ on the function $F$, with the constant term of the resulting polynomial extracted at the end.
Since $F$ is a polynomial with coefficients $c_F^\lambda$ and the actions of $\frac{\partial}{\partial x_i}$ and $\frac{1}{x_i-x_j}(1-s_{ij})$ on monomials are clear, we conclude the following statement.
\begin{lemma} \label{Lemma_D_as_polynomial}
For any $k$ indices $1\le i_1,\dots,i_k\le N$, the expression
\begin{equation}\label{prod_Ds}
\left( \prod_{m=1}^k \mathcal D_{i_m} \right) \exp(F(x_1\dots,x_N)) \Bigr|_{x_1=\dots=x_N=0}
\end{equation}
is a homogeneous polynomial of degree $k$ in $c^\lambda_F$ (if we regard each $c^\lambda_F$ as a degree $|\lambda|$ variable), whose coefficients are uniformly bounded as $N\to\infty$, $\theta\to 0$, $\theta N\to\gamma$.
\end{lemma}
\begin{proof}
By definition, each $\mathcal D_i$ is a linear combination of $N$ terms, each of which is $\frac{\partial}{\partial x_i}$ or $\frac{1}{x_i-x_j}(1-s_{ij})$.
Observe that any of these two simple operators decreases the degree of a polynomial in the variables $x_1, \cdots, x_N$ by $1$.
Therefore, using the rules \eqref{eq_Leibnitz} and \eqref{eq_Leibnitz_dif}, the expression \eqref{prod_Ds} is a polynomial in the coefficients of the degree $k$ component of $\exp(\sum_{\lambda:\, |\lambda|\le k}{c^\lambda_F \cdot M_\lambda(\vec{x})})$.
Such a polynomial is therefore a polynomial in the variables $c^\lambda_F$, and it is homogeneous of degree $k$ because of how we assigned the degrees to the $c^\lambda_F$'s.
In the formula \eqref{dunkl_ops} for $\mathcal D_i$, the term $\frac{\partial}{\partial x_i}$ comes with unit coefficient, and the remaining terms $\frac{1}{x_i-x_j}(1-s_{ij})$ come with a prefactor $\th$, which decays as $\gamma/N$ as $N\to\infty$. Hence, expanding $\prod_{m=1}^k \mathcal D_{i_m}$ as a linear combination of products of the operators $\frac{\partial}{\partial x_i}$ and $\frac{1}{x_i-x_j}(1-s_{ij})$, we see that the coefficients of the polynomial \eqref{prod_Ds} in the variables $c^\lambda_F$ are uniformly bounded in the regime of our interest.
\end{proof}
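To make the preceding bookkeeping concrete, the following Python/SymPy sketch (ours, purely illustrative) implements the operator $\mathcal D_i=\frac{\partial}{\partial x_i}+\theta\sum_{j\ne i}\frac{1}{x_i-x_j}(1-s_{ij})$ in the form described above and evaluates one diagonal coefficient: applying $\mathcal D_1\frac{\partial}{\partial x_1}$ to $M_{(2)}=\sum_t x_t^2$ returns $2\bigl(1+(N-1)\theta\bigr)$, which tends to $2(1+\gamma)$ in our regime, in agreement with \eqref{eq_diagonal_asymptotics} for $\lambda=(2)$.
\begin{verbatim}
import sympy as sp

def dunkl(expr, i, xs, theta):
    """D_i = d/dx_i + theta * sum_{j != i} (1 - s_ij)/(x_i - x_j)."""
    res = sp.diff(expr, xs[i])
    for j in range(len(xs)):
        if j != i:
            swapped = expr.subs({xs[i]: xs[j], xs[j]: xs[i]},
                                simultaneous=True)
            res += theta * sp.cancel((expr - swapped) / (xs[i] - xs[j]))
    return sp.expand(res)

xs = list(sp.symbols('x0:4'))          # N = 4 variables
th = sp.symbols('theta')
p2 = sum(x**2 for x in xs)             # M_(2) = sum_t x_t^2
# the lambda = (2) diagonal coefficient: D_1 d/dx_1 applied to M_(2)
print(sp.factor(dunkl(sp.diff(p2, xs[0]), 0, xs, th)))   # 2*(3*theta + 1)
\end{verbatim}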
\begin{corollary} \label{Corollary_reduce_terms}
Take any partition $\lambda$ with $|\lambda|=k$.
As $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$, we have
\begin{multline}
N^{-\ell(\lambda)} \left[\prod_{i=1}^{\ell(\lambda)} \P_{\lambda_i}\right]\! \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0} =
\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right]\! \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}\\+ N^{-1} R_3,
\end{multline}
where $R_3$ is a homogeneous polynomial of degree $k$ in the coefficients $c^{\nu}_F$ (if we regard each $c^\nu_F$ as a degree $|\nu|$ variable), and with uniformly bounded coefficients.
\end{corollary}
\begin{proof}
Each $\P_{\lambda_i}$ is a sum of $N$ terms $(\mathcal D_j)^{\lambda_i}$, $j=1,\dots,N$. Hence, $\left[\prod_{i=1}^{\ell(\lambda)} \P_{\lambda_i}\right]$ is a sum of $N^{\ell(\lambda)}$ terms, each of which is a finite (independent of $N$ and $\theta$) product of $(\mathcal D_j)^{\lambda_i}$. For all but $O(N^{\ell(\lambda) - 1})$ of these terms, the indices $j$ are all distinct. Hence, by symmetry of $F$, the result of the action of such a product on $\exp(F)$ is the same as that of $\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}$, after setting all variables $x_i$ equal to zero. Multiplying by $N^{-\ell(\lambda)}$ and bounding the contribution of the remaining $O(N^{\ell(\lambda)-1})$ terms by $N^{-1}R_3$ via Lemma \ref{Lemma_D_as_polynomial}, we get the desired statement.
\end{proof}
For the rest of the section, we analyze $\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}$. In view of Corollary \ref{Corollary_reduce_terms} we need to show that it has an expansion of the form of the right-hand side of \eqref{eq_operator_small_th_expansion}.
\begin{proposition} \label{Proposition_highest_derivatives}
Take any partition $\lambda$ with $|\lambda|=k$. We have
$$
\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}= b^{\lambda}_{\lambda} \cdot c_F^{\lambda} + \sum_{\mu:\, |\mu|=k,\, \ell(\mu)>\ell(\lambda)} b^{\lambda}_{\mu} \cdot c_F^{\mu}+R+O\left(\frac{1}{N}\right),
$$
where the coefficients $b^\lambda_\mu$ are uniformly bounded in the regime $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$. In particular,
$$
\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0\\ \theta N\to\gamma\end{smallmatrix}}\, b_{\lambda}^{\lambda}= \prod_{i=1}^{\ell(\lambda)} \lambda_i (1+\gamma)_{\lambda_i-1}.
$$
Moreover, $R$ is a homogeneous polynomial of degree $k$ in the coefficients $c_F^\nu$ with $|\nu|<k$, i.e., it does not involve the coefficients $c_F^\nu$ with $|\nu|=k$. Finally, $O\left(\frac{1}{N}\right)$ stands for a linear polynomial in the coefficients $c_F^\nu$ with $|\nu| = k$, whose coefficients are of the order $O\left(\frac{1}{N}\right)$, as $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$.
\end{proposition}
\begin{proof}
By Lemma \ref{Lemma_D_as_polynomial}, $\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}$ is a homogeneous polynomial of degree $k$ in the coefficients $c^{\nu}_F$, $|\nu|\le k$.
Hence, its linear component is of the form
$$
\sum_{\mu :\, |\mu|=k} {b^\lambda_\mu \cdot c_F^\mu}.
$$
Therefore, two steps remain:
\begin{enumerate}
\item We need to show that $b^{\lambda}_\mu=O\left(\frac{1}{N}\right)$ unless $\ell(\mu)>\ell(\lambda)$ or $\mu=\lambda$.
\item We need to find the limit of $b^{\lambda}_\lambda$ as $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$.
\end{enumerate}
We first claim that the part of $\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}$ involving the coefficients $c^\mu_F$ with $|\mu|=k$ is given by
\begin{equation}
\label{eq_x1}
\left[\prod_{i=2}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \cdot \left[\mathcal D_1^{\lambda_1-1}\right] \frac{\partial}{\partial x_1} F(x_1,\dots,x_N) \Bigr|_{x_1=\dots=x_N=0}.
\end{equation}
Indeed, the operators $\mathcal D_i$ commute; hence, we can apply $\mathcal D_1$ first. In the very first application of $\mathcal D_1$, the terms $\frac{1}{x_1-x_j}(1-s_{1j})$ can be omitted, since $(1-s_{1j})$ annihilates the symmetric function $\exp(F)$. Hence, the result of the first application of $\mathcal D_1$ is $\frac{\partial F}{\partial x_1}\cdot\exp(F)$. Using formula \eqref{eq_Leibnitz}, we see that none of the subsequent partial derivatives $\frac{\partial}{\partial x_1}$ may act on $\exp(F)$, as otherwise we would not obtain the terms $c^\mu_F$ with $|\mu|=k$. Similarly, when we further apply $\prod_{i=2}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}$, we should not act on $\exp(F)$.
Hence, we can omit $\exp(F)$, as it does not contribute to the computation. Therefore we get \eqref{eq_x1}.
We analyze \eqref{eq_x1} by using the expansion $F(x_1, \dots, x_N) = \sum_{\mu:\, |\mu|\le k}{c^\mu_F \cdot M_\mu(\vec{x})}$ in monomials and looking at each monomial separately.
Note that each operator $\mathcal D_i$ lowers by $1$ the degree of the monomial on which it acts. Since we apply $\frac{\partial}{\partial x_1}$, then $k-1$ operators $\mathcal D_i$, and then plug in all variables equal to $0$, the only way to get a non-zero contribution is by acting on a monomial of degree $k$.
We conclude that the coefficient $b^{\lambda}_\mu$ is computed by
\begin{equation}
\label{eq_b_through_monomial}
b^\lambda_\mu = \left. \left[\prod_{i=2}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \cdot \left[\mathcal D_1^{\lambda_1-1}\right] \frac{\partial}{\partial x_1} M_\mu(\vec{x}) \right|_{x_1=\dots=x_N=0} = \left[\prod_{i=2}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \cdot \left[\mathcal D_1^{\lambda_1-1}\right] \frac{\partial}{\partial x_1} M_\mu(\vec{x}).
\end{equation}
Each $\mathcal D_i$ is a sum of $N$ operators. Hence, the operator in \eqref{eq_b_through_monomial} can be represented as a sum of $N^{k-1}$ operators, each of which is a product of the factors $\frac{\partial}{\partial x_i}$ and $\frac{\theta}{x_i-x_j}(1-s_{ij})$.
\smallskip
{\bf Claim A.} Only the terms in which all indices $j$ are distinct and are all larger than $\ell(\lambda)$ contribute to the leading term of \eqref{eq_b_through_monomial}. All others combine together into a remainder of order $O\left(\frac{1}{N}\right)$.
Indeed, the terms with distinct indices are generic: the number of terms where two indices coincide is smaller, by a factor of $N$, than the number of similar terms with distinct indices.
\smallskip
Next, using Claim A, let us take a look at the first application of $\mathcal D_2$ after we have computed $\mathcal D_1^{\lambda_1-1} \frac{\partial}{\partial x_1} M_\mu(\vec{x})$. We could either apply $\frac{\partial}{\partial x_2}$, or we could apply $\frac{\theta}{x_2-x_j}(1-s_{2j})$. But due to symmetry in $x_2$ and $x_j$, the result of the application of the latter operator vanishes. Hence, we have to use $\frac{\partial}{\partial x_2}$. Similarly, in the first application of $\mathcal D_3$ we need to use $\frac{\partial}{\partial x_3}$, etc. We conclude that
\begin{equation}
\label{eq_b_through_monomial_2}
b^\lambda_\mu= \left[\mathcal D_{\ell(\lambda)}^{\lambda_{\ell(\lambda)}-1} \frac{\partial}{\partial x_{\ell(\lambda)}} \right]\cdots \left[\mathcal D_2^{\lambda_2-1} \frac{\partial}{\partial x_2}\right] \cdot \left[\mathcal D_1^{\lambda_1-1} \frac{\partial}{\partial x_1}\right] \, M_\mu(\vec{x})+ O\left(\frac{1}{N}\right).
\end{equation}
We analyze the last expression in three steps.
\medskip
{\bf Step 1.} Let us show that if $\ell(\mu)<\ell(\lambda)$, then \eqref{eq_b_through_monomial_2} is $O\left(\frac{1}{N}\right)$. Indeed, if $\ell(\mu)<\ell(\lambda)$, then each monomial in $M_\mu(\vec{x})$ is missing one of the variables $x_1,\dots,x_{\ell(\lambda)}$. Say, it does not have $x_m$. Then, using the above Claim A, we see that when we apply $\frac{\partial}{\partial x_m}$ in \eqref{eq_b_through_monomial_2}, the expression has no dependence on $x_m$ and, hence, the derivative vanishes.
\medskip
{\bf Step 2.} If $\ell(\mu)>\ell(\lambda)$, then $b^{\lambda}_\mu$ are bounded as $N\to\infty$, $\th\to 0$, $\theta N\to\gamma$, by Lemma \ref{Lemma_D_as_polynomial} and we do not need to prove anything else about them.
\medskip
{\bf Step 3.} It remains to study the case $\ell(\mu)=\ell(\lambda)=\ell$. Let us expand $M_\mu(\vec{x})$ in monomials. If a monomial is missing one of the variables $x_1,\dots,x_{\ell}$, then by the argument of Step 1, it does not contribute to $b^\lambda_\mu$. Hence, since $\ell(\mu)=\ell(\lambda)$, it remains to study the monomials which involve $x_1,x_2,\dots, x_{\ell}$ and no other variables.
Note that for $1\le i \le \ell<j$, we have, using the degree-lowering operators \eqref{eq_lowering_operator}
\begin{multline}\label{eq_x2}
\frac{\theta}{x_i-x_j} (1-s_{ij}) [ x_1^{n_1}\cdots x_\ell^{n_\ell}] = \theta \left[\prod_{a\ne i} x_a^{n_a}\right] \frac{x_i^{n_i} -x_j^{n_i} }{x_i-x_j}\\
= \th \left[\prod_{a\ne i} x_a^{n_a}\right] \left(x_i^{n_{i}-1}+ x_i^{n_i-2} x_j+\dots + x_j^{n_i-1}\right)=\theta d_i [ x_1^{n_1}\cdots x_\ell^{n_\ell}] + x_j\cdot P,
\end{multline}
where $P$ is a polynomial of degree $n_1+\dots+n_\ell-2$.
Using the above Claim A, one sees that if a factor $x_j$, $j>\ell$ appears in a monomial, then this factor cannot be annihilated by applying any operator $\frac{\partial}{\partial x_i}$, $i\le \ell$, or any operator $\frac{\theta}{x_i-x_{j'}} (1-s_{ij'})$, $i\le \ell$, $j\ne j'$, unless this application makes the entire monomial vanish. Hence, the only way to get a non-zero contribution is by using the $d_i$ term, but not the $x_j\cdot P$ term in \eqref{eq_x2}. Thus, up to $O\left(\frac{1}{N}\right)$ error, the desired $b^{\lambda}_\mu$ can be alternatively computed as:
\begin{equation*}
b^\lambda_\mu= \left[ \left(\partial_\ell + \th(N-1) d_\ell\right)^{\lambda_{\ell}-1} \partial_{\ell}\right]\cdots \left[(\partial_1 + \th(N-1) d_1)^{\lambda_1-1} \partial_1\right] \, M_\mu(x_1,\dots,x_\ell)+ O\left(\frac{1}{N}\right).
\end{equation*}
(Above we denoted $\frac{\partial}{\partial x_i}$ by $\partial_i$ for all $i$.) The last operator lowers the degree of $x_1$ by $\lambda_1$, lowers the degree of $x_2$ by $\lambda_2$,\dots, lowers the degree of $x_\ell$ by $\lambda_\ell$. Since $\lambda_1+\dots+\lambda_\ell=\mu_1+\dots+\mu_\ell$, the only way to get a non-zero contribution after these lowerings is by having $\lambda=\mu$. Therefore, $b^{\lambda}_{\mu} = O(N^{-1})$ if $\ell(\mu) = \ell(\lambda)$ and $\mu\neq\lambda$.
Finally, in the case $\mu = \lambda$, we have
\begin{multline*}
\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0\\ \theta N\to\gamma\end{smallmatrix}}{b_\lambda^\lambda}
= \lim_{\begin{smallmatrix} N\to\infty,\, \th\to 0\\ \th N\to\gamma\end{smallmatrix}}
\left[ \left(\partial_\ell + (N-1)\th d_\ell\right)^{\lambda_{\ell}-1} \partial_{\ell}\right]\cdots \left[(\partial_1 + (N-1)\th d_1)^{\lambda_1-1} \partial_1\right] \, x_1^{\lambda_1} x_2^{\lambda_2}\cdots x_\ell^{\lambda_\ell}\\
= \left[ \left(\partial_\ell + \gamma d_\ell\right)^{\lambda_{\ell}-1} \partial_{\ell}\right]\cdots \left[(\partial_1 + \gamma d_1)^{\lambda_1-1} \partial_1\right] \, x_1^{\lambda_1} x_2^{\lambda_2}\cdots x_\ell^{\lambda_\ell}\\
= \prod_{i=1}^{\ell} \lambda_i(\lambda_i-1+\gamma)(\lambda_i-2+\gamma)\cdots(1+\gamma). \qedhere
\end{multline*}
\end{proof}
Proposition \ref{Proposition_highest_derivatives} gives the linear part, that is, the first line in \eqref{eq_operator_small_th_expansion}. The next step is to identify $L(\cdot)$ in the second line of \eqref{eq_operator_small_th_expansion}.
\begin{proposition} \label{Proposition_LLN_leading_term}
Take any partition $\lambda$ with $|\lambda|=k$. We have
\begin{equation}\label{eq_x3}
\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\bigl( F(x_1,\dots,x_N)\bigr)\Bigr|_{x_1=\dots=x_N=0}= \prod_{i=1}^{\ell(\lambda)} \left([z^0](\partial+\gamma d+*_g)^{\lambda_i-1} g(z)\right)+R+O\left(\frac{1}{N}\right),
\end{equation}
where $g(z)=\sum_{n=1}^{\infty} n c_F^{(n)} z^{n-1}$, and $\partial$, $d$, $*_g$ are the operators from Definition \ref{def_R_map}.
Moreover, $R$ is a homogeneous polynomial of degree $k$ in $c_F^\nu$ with $|\nu|\le k$, such that each monomial in it involves at least one $\nu$ with $\ell(\nu) > 1$. Finally, $O\left(\frac{1}{N}\right)$ is a (homogeneous of degree $k$) polynomial in $c_F^\nu$ with $|\nu|\le k$, whose coefficients are $O\left(\frac{1}{N}\right)$ as $N\to\infty$, $\theta\to 0$, $\theta N\to \gamma$.
\end{proposition}
\begin{proof}
We only need to figure out the monomials which involve $c_F^{(n)}$, $n = 1, \dots, k$, and no other coefficients, so we are only interested in the following part of the left-hand side of \eqref{eq_x3}:
\begin{equation}
\label{eq_x4}
\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \exp\left( \sum_{n=1}^k c_F^{(n)} M_{(n)}(\vec{x})\right)\Biggr|_{x_1=\dots=x_N=0}=
\left[\prod_{i=1}^{\ell(\lambda)} (\mathcal D_i)^{\lambda_i}\right] \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\Biggr|_{x_1=\dots=x_N=0}.
\end{equation}
Next, we recall that each $\mathcal D_i$ is a sum of $N$ operators, so that the operator in \eqref{eq_x4} is a sum of $N^{k}$ operators, each of which is a product of the factors $\frac{\partial}{\partial x_i}$ and $\frac{\theta}{x_i-x_j}(1-s_{ij})$. As in Claim A in the proof of Proposition \ref{Proposition_highest_derivatives}, we can and will assume without loss of generality that all indices $j$ are distinct and larger than $\ell(\lambda)$ --- we only accumulate an $O\left(\frac{1}{N}\right)$ error by making such an assumption.
There are two consequences of this. First, as in \eqref{eq_b_through_monomial_2}, for each $i$ the very first application of $\mathcal D_i$ can be replaced by $\frac{\partial}{\partial x_i}$, since the operators $\frac{\theta}{x_i-x_j}(1-s_{ij})$ act as $0$ due to the symmetry in $i$ and $j$. Second, the operators no longer interact with each other in any way, and the expression factorizes.
This reasoning is very similar to that in the proof of Proposition \ref{Proposition_highest_derivatives}, so we do not dwell on the details.
As a result, up to $O\left(\frac{1}{N}\right)$ error, \eqref{eq_x4} is equal to
\begin{equation}\label{eq_x5}
\prod_{i=1}^{\ell(\lambda)} \left( (\mathcal D_i)^{\lambda_i-1}\, \frac{\partial}{\partial x_i} \!\left.\left[\prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\right]\right|_{x_1=\dots=x_N=0}\right).
\end{equation}
It remains to study the factor in \eqref{eq_x5} corresponding to a single $i$; without loss of generality, let us consider the case $i=1$ and write $l:=\lambda_1$. We would like to understand
\begin{multline}\label{eq_x6}
(\mathcal D_1)^{l-1} \,\frac{\partial}{\partial x_1}\!\left.\left[\prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\right]\right|_{x_1=\dots=x_N=0}\\
= (\mathcal D_1)^{l-1} \left.\left[ g(x_1) \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\right]\right|_{x_1=\dots=x_N=0},
\end{multline}
where
$$
g(x_1) := \sum_{n=1}^k n c_F^{(n)} (x_1)^{n-1}.
$$
Note that for any polynomial $H$ we have
\begin{equation}
\label{eq_x8}
\frac{\partial}{\partial x_1}\!\left[H\cdot \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\right] = \left( \frac{\partial H}{\partial x_1} + H \cdot g(x_1) \right)\cdot \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)
\end{equation}
and
\begin{multline}
\label{eq_x9}
\frac{\theta}{x_1-x_j}(1-s_{1j}) \!\left[H\cdot \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right)\right] \\=\left[ \frac{\theta}{x_1-x_j}(1-s_{1j}) H\right]\cdot \prod_{t=1}^N \exp\left( \sum_{n=1}^k c_F^{(n)} (x_t)^n \right).
\end{multline}
Combining \eqref{eq_x8} and \eqref{eq_x9}, we can rewrite \eqref{eq_x6} as
\begin{equation}\label{eq_x7}
(\mathcal D_1+*_g)^{l-1} g(x_1)\Bigr|_{x_1=\dots=x_N=0},
\end{equation}
where $*_g$ is the operator of multiplication by $g(x_1)$.
It remains to note that we can replace each operator $\frac{\theta}{x_1-x_j}(1-s_{1j})$ in $\mathcal D_1$ by $\th d_1$.
Indeed, this is done by the exact same reasoning that we used in the proof of Proposition \ref{Proposition_highest_derivatives}, see \eqref{eq_x2}.
After we make this replacement, we conclude that (up to another $O\left(\frac{1}{N}\right)$ error) \eqref{eq_x7} and \eqref{eq_x6} are equal to
$$
\left(\frac{\partial}{\partial x_1}+\gamma d_1+*_g \right)^{l - 1} g(x_1)\Bigr|_{x_1=\dots=x_N=0} = [z^0](\partial+\gamma d+*_g)^{l-1} g(z).
$$
Plugging this expression back into \eqref{eq_x5} gives the desired result.
\end{proof}
\bigskip
After all these preparations, it remains to put everything together, as follows.
\begin{proof}[Proof of Theorem \ref{theorem_operators_expansion}]
By Lemma \ref{Lemma_D_as_polynomial} and Corollary \ref{Corollary_reduce_terms}, the left-hand side of \eqref{eq_operator_small_th_expansion} is a homogeneous polynomial in $c^{\nu}_F$, $|\nu|\le k$, of degree $k$ (if we regard each $c^\nu_F$ as a variable of degree $|\nu|$) with uniformly bounded coefficients as $N\to\infty$, $\theta\to 0$, $\theta N\to\gamma$.
Proposition \ref{Proposition_highest_derivatives} identifies the linear part of this polynomial (corresponding to $c^{\nu}_F$ with $|\nu|=k$) with the first line in the right-hand side of \eqref{eq_operator_small_th_expansion}.
Proposition \ref{Proposition_LLN_leading_term} identifies the polynomial $L$ from the second line of the right-hand side of \eqref{eq_operator_small_th_expansion} and from \eqref{eq_one_row_part}. It remains to note that subtraction of $k(1+\gamma)_{k-1} c^{(k)}_F$ in \eqref{eq_one_row_part} corresponds to the situation when the parts of the polynomial given by Propositions \ref{Proposition_highest_derivatives} and \ref{Proposition_LLN_leading_term} overlap.
\end{proof}
\section{From $\gamma$--cumulants to moments}\label{mom_cum_sec}
The goal of this section is to prove Theorem \ref{theorem_cumuls_moms}.
That is, we start from an arbitrary real sequence $\kappa_1, \kappa_2, \dots$ and consider the power series $g(z) := \sum_{l=1}^{\infty} {\kappa_l z^{l-1}}$ together with the operators $\partial$, $d$, and $*_g$ from Definition \ref{df:ops}.
We denote the constant term of a power series $h(z)$ by $[z^0]h(z)$.
Recall that $\mathscr P(k)$ denotes the collection of all set partitions of $[k]$, and for each $\pi\in\mathscr P(k)$ we introduced the $\gamma$-weight $W(\pi)$ in Definition \ref{W_def}.
If $\pi = B_1\sqcup \cdots\sqcup B_m\in\mathscr P(k)$, then the $B_i$'s are called the \emph{blocks} of $\pi$.
The cardinality of the block $B_i$ is denoted by $|B_i|$.
With these recollections, Theorem \ref{theorem_cumuls_moms} says that for any $k\in\mathbb{Z}_{\ge 1}$, we must have the equality
\begin{equation}\label{eqn_transition_2}
[z^0](\partial + *_g + \gamma d)^{k-1} (g(z)) \stackrel{?}{=} \sum_{\pi = B_1\sqcup \cdots\sqcup B_m\in\mathscr P(k)}{\!\!\!\!\!\!W(\pi)\prod_{i=1}^m{\kappa_{|B_i|}}}.
\end{equation}
This relation and actually a more general version (see Theorem \ref{refined_blocks}) will be proved in this section.
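Although not needed for the arguments below, the left-hand side of \eqref{eqn_transition_2} is easy to compute mechanically. The following minimal sketch (Python; it assumes the \texttt{sympy} package and implements $d$ as $h(z)\mapsto (h(z)-h(0))/z$, which is consistent with \eqref{b_vars} below) evaluates $[z^0](\partial + *_g + \gamma d)^{k-1}(g(z))$ on truncated power series; for $k=2$ it returns $\kappa_1^2+(1+\gamma)\kappa_2$, matching the sum of $\gamma$-weights over $\mathscr P(2)$.
\begin{verbatim}
# A minimal sketch (assumes sympy).  Series are truncated coefficient lists;
# d is implemented as h(z) -> (h(z) - h(0)) / z.
import sympy as sp

K = 3                                    # compute for k = 1, ..., K
DEG = 2 * K                              # truncation degree
gam = sp.symbols('gamma')
kap = sp.symbols('kappa1:%d' % (DEG + 2))
g = [kap[n] for n in range(DEG + 1)]     # g(z) = sum_l kappa_l z^{l-1}

def der(h):                              # the operator  partial
    return [(n + 1) * h[n + 1] for n in range(DEG)] + [0]

def low(h):                              # the operator  d
    return h[1:] + [0]

def mulg(h):                             # the operator  *_g
    return [sum(g[j] * h[n - j] for j in range(n + 1)) for n in range(DEG + 1)]

for k in range(1, K + 1):
    h = g[:]
    for _ in range(k - 1):
        h = [x + y + gam * w for x, y, w in zip(der(h), mulg(h), low(h))]
    print('k =', k, ':', sp.expand(h[0]))   # the constant term [z^0]
\end{verbatim}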
\subsection{A refined combinatorial theorem}\label{sec:refined}
Let $a_1, a_2, \dots$ and $\kappa_1, \kappa_2, \dots$ be two arbitrary sequences of real numbers.
\begin{df}\label{def_refined_weight}
For any $k\in\mathbb{Z}_{\ge 1}$ and $\pi\in\mathscr P(k)$, we define the quantity $w(\pi)$, which will be called the \emph{refined $\gamma$-weight of $\pi$}, as follows.\footnote{We omit the dependence on $\gamma$ and on the sequences $a_1, a_2, \cdots$ and $\kappa_1, \kappa_2, \cdots$ from the notation $w(\pi)$.} Suppose that $\pi$ has $m$ blocks and label them $B_1, \cdots, B_m$ in such a way that the smallest element from $B_i$ is smaller than the smallest element from $B_j$ (hence, also smaller than all other elements from $B_j$), whenever $i<j$. Then define
$$
w(\pi) := W(\pi)\cdot \kappa_{|B_1|}\prod_{i=2}^m{a_{|B_i|}}.
$$
\end{df}
\medskip
From the formula of $W(\pi)$ in Definition \ref{W_def}, we can give an expanded formula for the refined $\gamma$-weight $w(\pi)$ of the set partition $\pi = B_1\sqcup\cdots\sqcup B_m$. Recall that the values $p(i)$, $q(i)$, $1\le i\le m$, are defined by
\begin{gather*}
p(i) := \#\{ j\in\{1, \dots, |B_i| - 1\} \mid \{b^i_j + 1, \dots, b^i_{j+1} - 1\}\cap B_t\neq\emptyset, \text{ for some block }B_t \text{ with } t < i\},\\
q(i) := |B_i|-1-p(i).
\end{gather*}
In particular, $p(1)=0$, $q(1)=|B_1|-1$. The quantity $p(i)$ can be computed by the graphical procedure described in Section \ref{sec_Tcm}.
Define the \emph{weight $w(B_i)$ of the block $B_i$ with respect to $\pi$} by
\begin{equation}
\label{eq_w_block}
w(B_i) :=
\begin{cases} (\gamma+1)_{|B_1| - 1}\cdot \kappa_{|B_1|}, &\text{ if } i = 1,\\
p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|B_i|}, &\text{ if }i \ge 2.
\end{cases}
\end{equation}
For example, if $i\ge 2$ and $B_i$ is a singleton, then $p(i)=q(i)=0$ and $w(B_i) = a_1$.
The refined $\gamma$-weight of the set partition $\pi = B_1\sqcup\cdots\sqcup B_m$ then equals
\begin{align}
w(\pi) =& \prod_{i=1}^m{w(B_i)}\nonumber\\
=& \left((\gamma+1)_{|B_1| - 1}\cdot \kappa_{|B_1|}\right)\cdot \prod_{i=2}^m{\left( p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|B_i|} \right)}.\label{w_formula}
\end{align}
Observe that under the identifications $a_i \mapsto \kappa_i$, for all $i$, we have
$$
\left. w(\pi) \right|_{a_i\mapsto \kappa_i} = W(\pi)\prod_{i=1}^m{\kappa_{|B_i|}}.
$$
This is why we call $w(\pi)$ the \emph{refined} $\gamma$-weight of $\pi$.
\medskip
We show the refined $\gamma$-weights for the same examples of set partitions given in Section \ref{sec_Tcm}.
The set partition $\{1, 2, 5, 7\}\sqcup \{3, 4, 6\}\in\mathscr P(7)$ graphically shown in Figure \ref{fig_1} has refined $\gamma$-weight
$$w(\pi) = (\gamma+1)(\gamma+2)^2(\gamma+3) \cdot\kappa_4 a_3.$$
For the set partition $\{1, 4\}\sqcup\{2, 6\}\sqcup\{3, 5, 7\}\in\mathscr P(7)$ shown in Figure \ref{fig_2}, the refined $\gamma$-weight is
$$w(\pi) = 2(\gamma+1)\cdot \kappa_2 a_2a_3.$$
As a final example, the set partition $\{1,3,4,5,6\}\sqcup\{2,7\}\in\mathscr P(7)$ shown in Figure \ref{fig_3} has refined $\gamma$-weight
$$
w(\pi) = (\gamma+1)(\gamma+2)(\gamma+3)(\gamma+4)\cdot \kappa_5a_2.
$$
\begin{thm}\label{refined_blocks}
Set
$$g(z) = \sum_{l=1}^{\infty}{\kappa_l z^{l-1}}, \qquad a(z) = \sum_{l=1}^{\infty}{a_l z^{l-1}}.$$
Then we have
\begin{equation}\label{blocks_thm}
[z^0](\partial + *_a + \gamma d)^{k-1} (g(z)) = \sum_{\pi = B_1\sqcup \dots\sqcup B_m\in\mathscr P(k)}{w(\pi)}.
\end{equation}
On the left side, we have the constant term of a power series. On the right side, the sum ranges over set partitions of $[k]$, and the refined $\gamma$-weight $w(\pi)$ is the one introduced in Definition \ref{def_refined_weight}.
\end{thm}
Note that this result implies Theorem \ref{theorem_cumuls_moms}: indeed, we apply Theorem \ref{refined_blocks} and set $a_i=\kappa_i$. In the rest of this section we prove Theorem \ref{refined_blocks}.
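Before giving the proof, we note that \eqref{blocks_thm} is easy to confirm numerically for small $k$. The sketch below (an illustration only; it assumes the \texttt{sympy} package solely for the enumeration of set partitions) computes both sides with randomly chosen rational $\kappa_l$, $a_l$ and a fixed rational $\gamma$, using the expanded weight formula \eqref{w_formula}.
\begin{verbatim}
# A minimal numerical check of the refined theorem for k <= 6 (exact
# rational arithmetic; sympy is used only to enumerate set partitions).
from fractions import Fraction as F
from math import factorial
import random
from sympy.utilities.iterables import multiset_partitions

random.seed(1)
K = 6
gam = F(3, 7)                                   # an arbitrary rational gamma
kap = [None] + [F(random.randint(-9, 9), 5) for _ in range(K)]
a = [None] + [F(random.randint(-9, 9), 5) for _ in range(K)]

def rf(x, n):                                   # rising factorial (x)_n
    out = F(1)
    for i in range(n):
        out *= x + i
    return out

def w(blocks):                                  # refined weight, eq. (w_formula)
    blocks = sorted((sorted(B) for B in blocks), key=min)
    earlier, out = set(), F(1)
    for i, B in enumerate(blocks):
        p = sum(1 for j in range(len(B) - 1)
                if any(x in earlier for x in range(B[j] + 1, B[j + 1])))
        q = len(B) - 1 - p
        out *= factorial(p) * rf(gam + p + 1, q) * (kap if i == 0 else a)[len(B)]
        earlier |= set(B)
    return out

def lhs(k):                                     # [z^0](d/dz + *_a + gam d)^{k-1} g
    g = [kap[l] for l in range(1, K + 1)] + [F(0)] * K
    av = [a[l] for l in range(1, K + 1)] + [F(0)] * K
    h = g[:]
    for _ in range(k - 1):
        der = [(n + 1) * h[n + 1] for n in range(len(h) - 1)] + [F(0)]
        mul = [sum(av[j] * h[n - j] for j in range(n + 1)) for n in range(len(h))]
        low = h[1:] + [F(0)]
        h = [der[n] + mul[n] + gam * low[n] for n in range(len(h))]
    return h[0]

for k in range(1, K + 1):
    rhs = sum(w(pi) for pi in multiset_partitions(list(range(1, k + 1))))
    print(k, lhs(k) == rhs)                     # expected: True for every k
\end{verbatim}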
\subsection{Preliminary lemmas}
\begin{lemma}\label{tech_sums_3}
Let $x, y\in\mathbb{Z}_{\geq 0}$ be arbitrary, and let $z$ be any complex number. Then
\begin{equation}
(y+1)\sum_{i=1}^{x} (z+i)_y = (z+x)_{y+1} - (z)_{y+1}.\label{eqn_tech3}
\end{equation}
\end{lemma}
\begin{proof} The proof is by induction on $x$. If $x=0$, then both sides of \eqref{eqn_tech3} vanish. The difference of the left-hand sides of \eqref{eqn_tech3} at $x=t$ and $x=t-1$ is $(y+1)(z+t)_y$. The difference of the right-hand sides of \eqref{eqn_tech3} is the same:
$$
(z+t)_{y+1}- (z+t-1)_{y+1}= (z+t)_y \bigl(z+t+y- (z+t-1)\bigr)=(z+t)_y \cdot (y+1).\qedhere
$$
\end{proof}
For a sequence of $0$s and $1$s, a \emph{descent} is defined as a substring $10$ in this sequence. Let $\des(\zeta)$ denote the number of descents in a $0$-$1$ sequence $\zeta$. For instance, $\des(1100)=1$ and $\des(0101010)=3$.
\begin{lemma} \label{Lemma_descent_sum}
For any two integers $N\ge 1$ and $0\le M\le N$ and any $\gamma\in\mathbb R$, we have
\begin{equation}
\sum_{\begin{smallmatrix} \zeta=(\zeta_1,\dots,\zeta_N)\in \{0,1\}^N\\ \sum_{i=1}^N \zeta_i=M\end{smallmatrix}} \des(\zeta)! \cdot (\gamma+\des(\zeta)+1)_{M-\des(\zeta)}=(\gamma+1+N-M)_M.
\end{equation}
\end{lemma}
\begin{proof}
Let $K(N,M,d)$ denote the total number of sequences $\zeta\in\{0,1\}^N$ with $\sum_{i=1}^N \zeta_i=M$ and $\des(\zeta)=d$. With this notation, we would like to prove that:
\begin{equation}
\label{eq_summation}
\sum_{d=0}^{M} K(N,M,d)\cdot d! \cdot (\gamma+d+1)_{M-d}=(\gamma+1+N-M)_M, \quad N\ge 1,\quad 0\le M\le N.
\end{equation}
Our proof is by induction on $N$. If $N=1$, then both sides of \eqref{eq_summation} are $1$ at $M=0$ and both sides are $(\gamma+1)$ at $M=1$.
For the induction step, assume that \eqref{eq_summation} holds for all $N'\le N$ (and all admissible $M$), and let us prove it for $N+1$ and an arbitrary $0 \le M \le N+1$. We notice that the statement is straightforward at $M=0$. If $M>0$, then we use the following recurrence for $K(N,M,d)$, which is obtained by considering the position of the right-most $1$ in a sequence $\zeta$:
\begin{equation}
\label{eq_count_recurrence}
K(N+1,M,d)=K(N,M-1,d)+\sum_{p=1}^{N-M+1} K(N-p, M-1,d-1).
\end{equation}
If $d=M$, then the first term in \eqref{eq_count_recurrence} is not needed; if $d=0$, then the second term in \eqref{eq_count_recurrence} is not needed.
Hence, the left-hand side of \eqref{eq_summation} for $N$ replaced with $N+1$ can be rewritten using \eqref{eq_count_recurrence} as
\begin{multline}
\label{eq_x10}
\sum_{d=0}^{M} K(N+1,M,d)\cdot d! \cdot (\gamma+d+1)_{M-d}\\=
\sum_{d=0}^{M-1} K(N,M-1,d)\cdot d! \cdot (\gamma+d+1)_{M-d}+ \sum_{d=1}^M \sum_{p=1}^{N-M+1} K(N-p, M-1,d-1) \cdot d! \cdot (\gamma+d+1)_{M-d}.
\end{multline}
For the first sum in the right-hand side of \eqref{eq_x10}, we use $d! \cdot (\gamma+d+1)_{M-d}= d! \cdot (\gamma+d+1)_{M-1-d} \cdot (\gamma+M)$ and the induction assumption to evaluate it as
\begin{equation}
\label{eq_x11}
(\gamma+2+N-M)_{M-1}\cdot (\gamma+M).
\end{equation}
For the second sum in the right-hand side of \eqref{eq_x10} we change the order of summation and evaluate the sum over $d$ for fixed $p$ using the induction assumption and the identity (valid for $d>0$)
$$
d! \cdot (\gamma+d+1)_{M-d}= (d-1)! \cdot (\gamma+d)_{M-d} \cdot (\gamma+M) - (d-1)! \cdot (\gamma+1+d)_{M-d} \cdot \gamma.
$$
Hence, the $p$th term evaluates to
\begin{equation}
\label{eq_x12}
(\gamma+2+N-p-M)_{M-1} \cdot (\gamma+M)- (\gamma+3+N-p-M)_{M-1} \cdot \gamma.
\end{equation}
Combining \eqref{eq_x11} with \eqref{eq_x12}, we transform \eqref{eq_x10} to
\begin{multline*}
(\gamma+M) \sum_{p=0}^{N-M+1} (\gamma+2+N-p-M)_{M-1} - \gamma \sum_{p=1}^{N-M+1}(\gamma+3+N-p-M)_{M-1}
\\= M \sum_{p=0}^{N-M} (\gamma+2+N-p-M)_{M-1}\, +\, (\gamma+M)\cdot (\gamma+1)_{M-1}.
\end{multline*}
We compute the last sum using Lemma \ref{tech_sums_3}, resulting in
$$
(\gamma+2+N-M)_{M}-(\gamma+1)_{M} + (\gamma+M)\cdot (\gamma+1)_{M-1}=(\gamma+2+N-M)_{M}.\qedhere
$$
\end{proof}
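The identity of Lemma \ref{Lemma_descent_sum} can also be confirmed by brute force; here is a minimal sketch in Python (exact rational arithmetic, standard library only).
\begin{verbatim}
# Brute-force check of the descent-sum lemma for all N <= 7 (a sketch).
from fractions import Fraction as F
from itertools import product
from math import factorial

def rf(x, n):                              # rising factorial (x)_n
    out = F(1)
    for i in range(n):
        out *= x + i
    return out

def des(z):                                # number of substrings "10"
    return sum(1 for i in range(len(z) - 1) if z[i] == 1 and z[i + 1] == 0)

gam = F(5, 3)                              # any rational gamma works here
for N in range(1, 8):
    for M in range(N + 1):
        s = sum(factorial(des(z)) * rf(gam + des(z) + 1, M - des(z))
                for z in product((0, 1), repeat=N) if sum(z) == M)
        assert s == rf(gam + 1 + N - M, M)
print('Lemma verified for all N <= 7')
\end{verbatim}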
\subsection{Proof of Theorem \ref{refined_blocks}}
The proof is by induction on $k$.
\smallskip
{\bf Step 1.} For the base cases, let us consider $k = 1$ and $k = 2$.
\smallskip
\emph{Case $k = 1$.} Clearly, $[z^0]g(z) = [z^0](\kappa_1 + \kappa_2z + \dots) = \kappa_1$.
On the other hand, there is exactly one set partition of $[1]$, namely $\pi = \{1\}$ with $B_1 = \{1\}$. For this partition, $p(1) = q(1) = 0$, so it follows that $w(\pi) = \kappa_1$, as needed.
\emph{Case $k = 2$.} Here, $[z^0](\partial + *_a + \gamma d)g(z) = [z^0](g'(z) + g(z)a(z) + \gamma dg(z)) = \kappa_2 + \kappa_1a_1 + \gamma\kappa_2 = \kappa_1a_1 + (\gamma+1)\kappa_2$.
On the other hand, there are two set partitions of $[2]$, namely $\pi_1 = \{1\}\sqcup\{2\}$ with $B_1 = \{1\}$, $B_2 = \{2\}$, and $\pi_2 = \{1,2\}$ with $B_1 = \{1, 2\}$.
For $\pi_1$, $p(1) = p(2) = q(1) = q(2) = 0$, so $w(\pi_1) = \kappa_1a_1$.
For $\pi_2$, $p(1) = 0$ and $q(1) = 1$, so $w(\pi_2) = (\gamma + 1)\kappa_2$.
Therefore, $w(\pi_1) + w(\pi_2) = \kappa_1a_1 + (\gamma + 1)\kappa_2$.
\medskip
{\bf Step 2.}
Suppose that the statement of the theorem is true for certain $k\ge 2$.
For the induction step, we prove the statement for $k+1$, i.e. we aim to obtain the formula for $[z^0](\partial + *_a + \gamma d)^k (g(z))$.
Let $b_1, b_2, b_3, \dots$ be the quantities defined by
$$
(\partial + *_a + \gamma d)(g(z)) = b_1 + b_2z + b_3z^2 + \cdots.
$$
From the definition of the operators $\partial$, $*_a$, and $d$, we have
\begin{equation}\label{b_vars}
b_n = \sum_{j=1}^n{\kappa_{n+1-j} a_j} + (\gamma + n)\kappa_{n+1},\quad n\in\mathbb{Z}_{\geq 1}.
\end{equation}
Next, use the induction hypothesis to obtain the combinatorial formula:
\begin{equation}\label{ind_hyp}
[z^0](\partial + *_a + \gamma d)^k (g(z)) = [z^0](\partial + *_a + \gamma d)^{k-1} (b_1 + b_2 z + b_3 z^2 + \dots) = \sum_{\tilde \pi = \tilde B_1\sqcup \dots\sqcup \tilde B_m\in\mathscr P(k)}{\tilde w(\tilde \pi)},
\end{equation}
where $\tilde w(\tilde \pi)$ is given in the theorem (see \eqref{w_formula}), but instead of the $\kappa_i$'s, we should use the $b_i$'s:
\begin{equation} \label{eq_tilde_w}
\tilde w(\tilde \pi) = \left((\gamma+1)_{|\tilde B_1|-1}\cdot b_{|\tilde B_1|}\right)\cdot \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)}.
\end{equation}
From \eqref{b_vars}, this equals
\begin{multline}\label{w_form}
\tilde w(\tilde\pi) = \sum_{j = 1}^{|\tilde B_1|} \left( (\gamma+1)_{|\tilde B_1|-1}\cdot \kappa_{|\tilde B_1|+1-j}\, a_j \right)\cdot \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)}\\
+ \left( (\gamma+1)_{|\tilde B_1|-1}\cdot (\gamma+|\tilde B_1|)\kappa_{|\tilde B_1|+1} \right)\cdot \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)}.
\end{multline}
Our next goal is to obtain a different combinatorial expression for \eqref{ind_hyp}, \eqref{w_form} --- we should get the right-hand side of \eqref{blocks_thm} for $k+1$, namely a formula that involves set partitions of $[k+1]$.
\medskip
{\bf Step 3.}
Given a set partition $\pi$ of $[k+1] = \{1, 2, 3, \cdots, k+1\}$, consider the set partition $\tilde \pi$ of $\{1, 3, 4, \cdots, k+1\}$ that is obtained from $\pi$ by taking the union of the blocks that contain $1$ and $2$, and then removing $2$.
If $\tilde \pi$ is obtained from $\pi$ in this fashion, we say that \emph{$\pi$ maps to $\tilde \pi$} and denote this relation by $\pi\to\tilde \pi$.
Observe that if $1$ and $2$ belong to the same block of $\pi$, then $\pi$ and $\tilde \pi$ have the same number of blocks. On the other hand, if $1$ and $2$ belong to different blocks of $\pi$, then $\tilde \pi$ has one block fewer than $\pi$. For instance, for $k=2$ we have $5$ set partitions of $\{1,2,3\}$ which are mapped to two set partitions of $\{1,3\}$:
$$
\{1\}\sqcup \{2,3\}\to \{1,3\}, \qquad \{1,3\}\sqcup \{2\}\to \{1,3\},\qquad \{1,2,3\}\to \{1,3\},
$$
$$
\{1\}\sqcup \{2\}\sqcup \{3\}\to \{1\} \sqcup \{3\}, \qquad \{1,2\}\sqcup \{3\}\to \{1\}\sqcup\{3\}.
$$
For a set partition $\tilde \pi$ of $\{1,3,4,\dots, k+1\}$ we define the numbers $p(i)$, $q(i)$ and the weight $\tilde w(\tilde \pi)$ by identifying $\{1,3,4,\dots,k+1\}$ with $\{1,2,\dots,k\}$ in a monotone way and using the previous formula \eqref{eq_tilde_w}. Note that essentially nothing changes in the definition, as the way we compute the numbers $p(i)$, $q(i)$ and the weight $\tilde w(\cdot)$ depends only on the order of the elements of the set that we are partitioning rather than the labels of these elements. Hence, we use the same $\tilde w(\tilde \pi)$ notation no matter whether $\tilde \pi$ is a partition of $\{1,3,4,\dots,k+1\}$ or $\tilde \pi$ is a partition of $\{1,2,\dots,k\}$.
Our goal now is to prove that for each set partition $\tilde \pi$ of $\{1,3,4,\dots, k+1\}$ we have an identity:
\begin{equation}\label{eq_inductive_sum}
\sum_{\begin{smallmatrix} \pi\in \mathscr P(k+1)\\ \pi\to \tilde \pi\end{smallmatrix}} w(\pi)\stackrel{?}{=}\tilde w(\tilde \pi).
\end{equation}
The last equation together with \eqref{ind_hyp} implies the induction step. We fix $\tilde \pi$ and let its blocks be $\tilde B_1,\dots,\tilde B_m$ ordered, as before, by their minimal elements, so that $\tilde B_1$ contains $1$. We calculate the sum \eqref{eq_inductive_sum} by splitting the terms into several subsets. Define $T\subseteq \mathscr P(k+1)$ as the subset of those set partitions $\pi$, mapped to $\tilde \pi$, for which $1$ and $2$ belong to the same block of $\pi$.
Next, for any $r\in\{1, 2, \dots, |\tilde B_1|\}$, define $T_r\subseteq \mathscr P(k+1)$ as the subset of those set partitions $\pi$, mapped to $\tilde \pi$, for which $1$ and $2$ belong to distinct blocks of $\pi$ and the block containing $2$ has size $r$. The sets $T$ and $T_1,\dots,T_{|\tilde B_1|}$ are all disjoint; they depend on $\tilde \pi$, but we omit this dependence from the notations. With these notations, the desired identity \eqref{eq_inductive_sum} is rewritten as
\begin{equation}\label{eq_inductive_sum_2}
\sum_{\pi\in T\cup T_1\cup T_2\cup\dots\cup T_{|\tilde B_1|} } w(\pi)\stackrel{?}{=}\tilde w(\tilde \pi).
\end{equation}
In step 4 below, we prove that $\sum_{\pi\in T}{w(\pi)}$ is equal to the second line of \eqref{w_form}. In steps 5 and 6, we prove that for any $r\in\{1, 2, \dots, |\tilde B_1|\}$ the sum $\sum_{\pi\in T_r}{w(\pi)}$ is equal to the $j = r$ summand in \eqref{w_form}.
Once these steps are done, the identity \eqref{eq_inductive_sum_2} follows and the proof is complete.
\medskip
{\bf Step 4.} In this step we calculate $\sum_{\pi\in T}{w(\pi)}$. Recall that $T$ contains all set partitions $\pi$ of $[k+1]$ that map to $\tilde \pi$ and such that $1$ and $2$ belong to the same block of $\pi$. In fact, for a given $\tilde \pi$ with blocks $\tilde B_1,\dots,\tilde B_m$, there is only one set partition in $T$: it is $\pi=B_1\sqcup \dots\sqcup B_m$, where $B_1 = \tilde B_1\cup \{2\}$ and $B_h = \tilde B_h$, $h=2,3,\dots,m$. Since $1$ and $2$ are adjacent in the ordering of $[k+1]$ and they belong to the same block of $\pi$, it is clear that the weight of $B_i$, $i\ge 2$, in the computation of $w(\pi)$ is the same as the weight of $\tilde B_i$ in the computation of $\tilde w(\tilde \pi)$.
The weight of $B_1$ in the computation of $w(\pi)$ is $(\gamma+1)_{|B_1|-1}\cdot \kappa_{|B_1|} = (\gamma+1)_{|\tilde B_1|}\cdot \kappa_{|\tilde B_1|+1}$.
As a result,
\begin{equation*}
w(\pi) = \left((\gamma+1)_{|\tilde B_1|}\cdot \kappa_{|\tilde B_1|+1}\right) \cdot \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)}.
\end{equation*}
This is exactly the second line of \eqref{w_form}, as desired.
\medskip
{\bf Step 5.} In this step we calculate $\sum_{\pi\in T_r}{w(\pi)}$, $1\le r \le |\tilde B_1|$, in the case when $\tilde \pi$ is a one-block set partition, $\tilde \pi=\tilde B_1=\{1,3,4,\dots,k+1\}$ and $|\tilde B_1|=k$. Hence, set partitions $\pi$ in $T_r$ have two blocks $\pi=B_1\sqcup B_2$, $1\in B_1$, $2\in B_2$ and $|B_2|=r$. Therefore, $|B_1|=k+1-r$.
We can identify elements of $T_r$ with $0$--$1$ sequences $\zeta=(\zeta_1,\zeta_2,\dots,\zeta_{k-1})$ of length $k-1$ and with $\sum_{i=1}^{k-1}\zeta_i=k-r$ through:
$$
B_1(\zeta)=\{1\}\cup\{i+2\mid \zeta_i=1\},\quad B_2(\zeta)=\{2\}\cup \{i+2\mid \zeta_i=0\}.
$$
In words, the positions where $\zeta_i=1$ encode the elements of $B_1$ from the set $\{3,4,\dots,k+1\}$. With this notation, we rewrite
$$
\sum_{\pi\in T_r} w(\pi)=\sum_{\begin{smallmatrix} \zeta\in \{0,1\}^{k-1}\\ \sum_{i=1}^{k-1}\zeta_i=k-r\end{smallmatrix}} w\bigl( B_1(\zeta)\sqcup B_2(\zeta)\bigr).
$$
Let us compute $w\bigl( B_1(\zeta)\sqcup B_2(\zeta)\bigr)$. By definition \eqref{eq_w_block}, the weight of block $B_1$ is
$$w(B_1)=(\gamma+1)_{k-r} \,\kappa_{k-r+1}.$$
Further, note that in the notations of Lemma \ref{Lemma_descent_sum}, $p(2)=\des(\zeta)$:
Indeed, in the arc diagrams (as in Figures \ref{fig_1}, \ref{fig_2}, \ref{fig_3}) a roof of the block $B_2$ with no legs intersecting it corresponds to a substring $00$ in $\zeta$, whereas a roof with $s \ge 1$ legs intersecting it corresponds to a substring $011\cdots 110$ (with $s$ ones), and this substring gives exactly one descent in $\zeta$. Hence, by definition \eqref{eq_w_block}, we have
$$w(B_2)=\des(\zeta)! (\gamma+\des(\zeta)+1)_{r-\des(\zeta)-1} a_r.$$
We obtain
\begin{multline}
\label{eq_x13}
\sum_{\pi\in T_r} w(\pi)=\sum_{\begin{smallmatrix} \zeta\in \{0,1\}^{k-1}\\ \sum_{i=1}^{k-1}\zeta_i=k-r\end{smallmatrix}} (\gamma+1)_{k-r} \, \kappa_{k-r+1} \cdot \des(\zeta)!\, (\gamma+\des(\zeta)+1)_{r-\des(\zeta)-1}\, a_r\\
= (\gamma+1)_{k-r} \, \kappa_{k-r+1} \, a_r \sum_{\begin{smallmatrix} \zeta\in \{0,1\}^{k-1}\\ \sum_{i=1}^{k-1}\zeta_i=k-r\end{smallmatrix}} \des(\zeta)!\, (\gamma+\des(\zeta)+1)_{k-r-\des(\zeta)} \cdot \frac{(\gamma+1)_{r-1}}{(\gamma+1)_{k-r}}
\\=(\gamma+1)_{r-1} \, \kappa_{k-r+1} a_r \, (\gamma+r)_{k-r}= (\gamma+1)_{k-1} \kappa_{k-r+1} a_r,
\end{multline}
where we used Lemma \ref{Lemma_descent_sum} with $N=k-1$ and $M=k-r$ for the equality between the second and the third lines of \eqref{eq_x13}. Since $|\tilde B_1|=k$ and there are no $\tilde B_h$ with $h>1$, the third line in \eqref{eq_x13} matches the $j=r$ term in \eqref{w_form}, as desired.
\medskip
{\bf Step 6.} We now extend the computation of Step 5 and calculate $\sum_{\pi\in T_r}{w(\pi)}$, $1\le r \le |\tilde B_1|$, for arbitrary $\tilde \pi=\tilde B_1\sqcup\dots\sqcup \tilde B_m$. By the definition of $T_r$, each set partition $\pi\in T_r$ has $m+1$ blocks $B_1,\dots,B_{m+1}$ and we have $1\in B_1$, $2\in B_2$, $|B_2|=r$, and $B_i=\tilde B_{i-1}$, $3\le i\le m+1$. The key observation for this step is that for $i\ge 3$ the weight of the block $B_i$, $w(B_i)$, is the same as the weight of the block $\tilde B_{i-1}$, $\tilde w(\tilde B_{i-1})$: this is because the blocks $B_i$ and $\tilde B_{i-1}$ coincide as sets and the legs intersecting their roofs in the arc diagrams also coincide. Hence, we have
$$
w(\pi)=w(B_1)\, w(B_2) \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)}.
$$
It remains to sum the last formula over all possible choices of $B_1$ and $B_2$. Since $B_1\cup B_2=\tilde B_1 \cup \{2\}$ is fixed, this is the same computation as in Step 5, but with $k$ replaced by $|\tilde B_1|$. As a result, we get
$$
\sum_{\pi\in T_r} w(\pi)= (\gamma+1)_{|\tilde B_1|-1} \kappa_{|\tilde B_1|-r+1} a_r \prod_{i=2}^m{\left(p(i)!\cdot(\gamma+p(i)+1)_{q(i)}\cdot a_{|\tilde B_i|}\right)},
$$
which matches the $j=r$ term in \eqref{w_form}, as desired.
\section{From moments to $\gamma$--cumulants}\label{sec_mom_to_cums}
Let $\{m_k\}_{k\ge 1}$ and $\{\kappa_l\}_{l\ge 1}$ be real sequences related by $\{\kappa_l\}_{l\ge 1} = T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1})$.
In this section, we prove Theorem \ref{thm:mom_cums2}, namely the following identity:
\begin{equation}
\label{eq_x14}
\exp\left(\sum_{l=1}^{\infty}{\frac{\kappa_l y^l}{l}}\right) \stackrel{?}{=} [z^0]\left( \sum_{n=0}^{\infty}{\frac{(yz)^n}{(\gamma)_n}} \right) \exp\left( \gamma \sum_{k=1}^{\infty} \frac{m_k}{kz^k} \right).
\end{equation}
The central idea of our proof is to apply Theorem \ref{thm_small_th} to the measure $\mu_N$ which is the Dirac delta-mass at a single $N$--tuple $(a_1^{(N)},\dots,a_N^{(N)})$. The Bessel generating function of $\mu_N$ is the multivariate Bessel function $B_{(a_1^{(N)},\dots,a_N^{(N)})}(x_1,\dots,x_N;\theta)$, which allows us to use the known formulas for $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$ and get the asymptotic expressions for the partial derivatives of the logarithm of the BGF at $0$. We remark that it would be interesting to find a more direct combinatorial proof, explaining how \eqref{eq_x14} matches the expressions of Theorem \ref{theorem_cumuls_moms}.
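For orientation, here is what \eqref{eq_x14} gives in low orders. The following sketch (Python, assuming the \texttt{sympy} package) expands both sides in $y$, using the equivalent single-variable form of the right-hand side explained in Step 1 of the proof below, and solves for $\kappa_l$ in terms of $m_k$; the $l=2$ output is $\kappa_2=(m_2-m_1^2)/(1+\gamma)$, consistent with inverting the $k=2$ case of Theorem \ref{theorem_cumuls_moms}.
\begin{verbatim}
# A sketch (assumes sympy): expand both sides of the identity in y and solve
# for the gamma-cumulants kappa_l in terms of the moments m_k, for l <= 3.
import sympy as sp

L = 3
y, w = sp.symbols('y w')
gam = sp.symbols('gamma', positive=True)
m = sp.symbols('m1:%d' % (L + 1))
kap = sp.symbols('kappa1:%d' % (L + 1))

# e_n = [w^n] exp(gamma * sum_k m_k w^k / k)
E = sp.exp(gam * sum(m[k - 1] * w**k / k for k in range(1, L + 1)))
e = [sp.series(E, w, 0, L + 1).removeO().coeff(w, n) for n in range(L + 1)]
rhs = sum(e[n] / sp.rf(gam, n) * y**n for n in range(L + 1))

lhs = sp.series(sp.exp(sum(kap[l - 1] * y**l / sp.Integer(l)
                           for l in range(1, L + 1))), y, 0, L + 1).removeO()

eqs = [sp.expand(lhs - rhs).coeff(y, n) for n in range(1, L + 1)]
sol = sp.solve(eqs, kap, dict=True)[0]
for l in range(1, L + 1):
    print('kappa_%d =' % l, sp.simplify(sol[kap[l - 1]]))
\end{verbatim}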
\bigskip
As our proof of Theorem \ref{thm:mom_cums2} is based on the asymptotic analysis of $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$, we start by collecting formulas for this function. Assume, as usual, that $a_1\le a_2\le\dots\le a_N$ are real and $y\in\mathbb C$. There are at least three different ways to think about $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$:
\begin{enumerate}
\item The Taylor series expansion for $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$ (which is a limit of the binomial formula for Jack polynomials of \cite{Ok_Olsh_shifted_Jack}) reads
\begin{equation}
\label{eq_x15}
B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)=\sum_{k=0}^{\infty} \frac{Q_{(k)}(a_1,\dots, a_N; \, \theta)}{(\theta N)_{k}} y^k,
\end{equation}
where $Q_{(k)}(a_1,\dots, a_N; \, \theta)$ is the value of the $N$--variable Jack symmetric polynomial (with normalization as for $Q$--functions in \cite[Chapter VI, Section 10]{M} or \cite{Ok_Olsh_shifted_Jack}) parameterized by one-row partition $(k)$ at the point $(a_1,\dots,a_N)$. The expansion \eqref{eq_x15} is a particular case of \cite[(4.2)]{Ok_Olsh_shifted_Jack}; that article uses the same parameter $\theta$ for the Jack polynomials, but it is worth mentioning that some other authors (e.g., \cite{Stanley_Jack} or \cite[Section VI.10]{M}) use $\alpha=\theta^{-1}$ instead.
\item The contour integral representation for $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$ claims that for any complex $y$ with $\Re y>0$ we have
\begin{equation}\label{eqn:int_repr}
B_{(a_1,\dots,a_N)}(y, 0^{N-1}; \theta) = \frac{\Gamma(\theta N)}{y^{\theta N - 1}}\frac{1}{2\pi\mathbf i}
\int_{\mathscr{C}_{\infty}}{\exp(yz)\prod_{j=1}^N{(z - a_j)^{-\theta}}dz},
\end{equation}
where the infinite contour $\mathscr{C}_\infty$ in this formula is positively oriented and is formed by the segment $[M-r\mathbf i, M+r\mathbf i]$ and the horizontal rays $[M+r\mathbf i, -\infty+r\mathbf i)$, $[M-r\mathbf i, -\infty-r\mathbf i)$, for real numbers $M>a_N$ and $r>0$. The proof of \eqref{eqn:int_repr} can be found in \cite[Theorem 5.1]{C} and the same article contains a complementary integral representation for $\Re y<0$.
\item A stochastic representation for $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$ reads
\begin{equation}
\label{eq_x16}
B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)= \mathbb E\left[ \exp\left({y\sum_{i=1}^N a_i \eta_i}\right)\right],
\end{equation}
where $(\eta_1,\dots,\eta_N)$ is a Dirichlet-distributed random vector with all parameters equal to $\theta$. The proof of \eqref{eq_x16} can be found in \cite[Proposition 5.1]{AN}, although in some forms this statement was known before, see, e.g., \cite[Remark 8.3]{OV}.
\end{enumerate}
Either of the above three approaches can be used to establish a formula for the generating function of derivatives of $B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)$ at $0$:
\begin{prop} \label{Theorem_Bessel_derivatives} For any $\theta>0$, $N\in\mathbb Z_{>0}$, $y\in \mathbb C$, and $a_1\le a_2\le \dots\le a_N$ we have the expansion
\begin{equation}
\label{eq_x17}
B_{(a_1,\dots,a_N)}(y,0^{N-1}; \theta)=\sum_{k=0}^{\infty} \frac{c_k}{(\theta N)_k} y^k,
\end{equation}
where the numbers $c_k$ are found from the following Taylor series expansion:
\begin{equation}
\label{eq_x18}
\sum_{k=0}^{\infty} c_k z^k = \prod_{i=1}^{N} (1-a_i z)^{-\theta}.
\end{equation}
The series \eqref{eq_x17} is uniformly convergent over $y$ in compact subsets of $\mathbb C$.
\end{prop}
\begin{proof}
According to \eqref{eq_x15}, the coefficients $c_k$ in \eqref{eq_x17} are computed as $c_k=Q_{(k)}(a_1,\dots, a_N; \, \theta)$. The generating function for the one-row Jack polynomials is well-known, see \cite[Section VI.10, top formula on page 378]{M} or \cite[(9)]{Stanley_Jack}. We have:
$$
\sum_{k=0}^{\infty} Q_{(k)}(a_1,\dots, a_N; \, \theta) z^k = \prod_{i=1}^{N} (1-a_i z)^{-\theta},
$$
which proves \eqref{eq_x18}. Finally, uniform convergence of \eqref{eq_x17} follows either from the fact that we deal with a Taylor series expansion of an entire function, or from bounds $c_k< C r^k$ for some $C>0$, $r>0$, which can be extracted from \eqref{eq_x18}.
\end{proof}
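The descriptions above can also be cross-checked against each other numerically. For instance, the following sketch (an illustration only; it assumes the \texttt{numpy} package) compares the stochastic representation \eqref{eq_x16} with the series \eqref{eq_x17}--\eqref{eq_x18} by Monte Carlo, computing the coefficients $c_k$ via $\exp\bigl(-\theta\sum_i \ln(1-a_i z)\bigr)$.
\begin{verbatim}
# A numerical cross-check of the stochastic representation against the
# series expansion; a sketch only (assumes numpy).
import numpy as np

theta, a, y, K = 0.7, np.array([0.1, 0.3, 0.5, 0.9]), 0.8, 60

# c_k from exp(L(z)) with L(z) = theta * sum_i sum_k a_i^k z^k / k, via the
# standard recursion n * e_n = sum_{k=1}^n k * l_k * e_{n-k}.
l = np.array([theta * np.sum(a ** k) / k for k in range(1, K + 1)])
c = np.zeros(K + 1); c[0] = 1.0
for n in range(1, K + 1):
    c[n] = sum(k * l[k - 1] * c[n - k] for k in range(1, n + 1)) / n

# series side: sum_k c_k / (theta*N)_k * y^k
N = len(a)
rf = np.cumprod(np.concatenate(([1.0], theta * N + np.arange(K))))
series = np.sum(c * y ** np.arange(K + 1) / rf)

# stochastic side: Monte Carlo over Dirichlet weights with parameter theta
rng = np.random.default_rng(0)
eta = rng.dirichlet(np.full(N, theta), size=400_000)
mc = np.mean(np.exp(y * eta @ a))

print(series, mc)   # the two numbers should agree up to Monte Carlo error
\end{verbatim}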
In view of Theorem \ref{thm_small_th}, the desired identity \eqref{eq_x14} of Theorem \ref{thm:mom_cums2} becomes the limit as $N\to\infty$, $\theta N\to \gamma$ of Theorem \ref{Theorem_Bessel_derivatives}, as we now explain.
\begin{proof}[Proof of Theorem \ref{thm:mom_cums2}] {\bf Step 1.} We first show that the formulas \eqref{eq_cums_moments} and \eqref{eq_cums_moments_2} are equivalent. Indeed, \eqref{eq_cums_moments} says that the coefficient of $y^n$ in $\exp\left(\sum_{l=1}^{\infty} \tfrac{\kappa_l}{l} y^l\right)$ can be computed as the constant term of $$\frac{z^n}{(\gamma)_n} \exp\left(\gamma \sum_{k=1}^{\infty} \frac{m_k}{k} z^{-k}\right).$$ On the other hand, \eqref{eq_cums_moments_2} says that the same coefficient can be computed as $\tfrac{1}{(\gamma)_n}$ times the coefficient of $z^n$ in $$\exp\left(\gamma \sum_{k=1}^{\infty} \frac{m_k}{k} z^{k}\right).$$ Clearly, the latter and the former are two ways to compute the same number.
\medskip
{\bf Step 2.} Next, let us assume that $m_1$, $m_2$, $m_3$, \dots\, are moments of a compactly supported probability measure $\mu$, i.e., there exists $r>0$ and a probability measure $\mu$ supported inside $[-r,r]$, such that
\begin{equation}
\label{eq_x19}
m_k=\int_{-r}^r x^k \mu(dx),\qquad k=1,2,\dots.
\end{equation}
As is true for any probability measure, $\mu$ can be approximated by discrete measures with atoms of weight $1/N$ as $N\to\infty$, and we can choose these measures to be supported inside $[-r,r]$. Let us fix such an approximation of $\mu$; that is,
we choose real numbers $-r\le a_1^{(N)}\le a_2^{(N)}\le \dots \le a_N^{(N)}\le r$, such that
$$
\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^N \delta_{a_i^{(N)}}=\mu. \qquad \text{(Weak convergence of measures.)}
$$
In particular, this implies
$$
\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^N (a_i^{(N)})^k=m_k,\qquad k=1,2,\dots.
$$
Let $\mu_N$ be the Dirac delta-mass at the point $(a_1^{(N)},\dots,a_N^{(N)})$ --- this is a probability measure on $\mathbb R^N$. Then the measures $\mu_N$ satisfy the Law of Large Numbers in the sense of Definition \ref{Definition_LLN_sat_ht} with sequence $m_k$ given by \eqref{eq_x19}. The BGF of the measure $\mu_N$ is $B_{(a_1^{(N)},\dots,a_N^{(N)})}(x_1,\dots,x_N;\, \theta)$. Hence, Theorem \ref{thm_small_th} yields that
\begin{equation}
\label{eq_x22}
\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0 \\ \theta N\to \gamma \end{smallmatrix}} \frac{\partial^l}{\partial y^l} \ln B_{(a_1^{(N)},\dots,a_N^{(N)})}(y, 0^{N-1};\, \theta)\Bigr|_{y=0}= (l-1)!\cdot \kappa_l,
\end{equation}
where $\{\kappa_l\}_{l\ge 1} = T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1})$.
On the other hand, note that the formulas \eqref{eq_x17} and \eqref{eq_x18} have a limit in the regime $N\to \infty$, $\theta\to 0$, $\theta N\to\gamma$, which reads:
\begin{equation}
\label{eq_x20}
\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0 \\ \theta N\to \gamma \end{smallmatrix}} B_{(a_1^{(N)},\dots,a_N^{(N)})}(y,0^{N-1}; \theta)=\sum_{k=0}^{\infty} \frac{c_k}{(\gamma)_k} y^k,
\end{equation}
where the numbers $c_k$ are found from the following Taylor series expansion:
\begin{multline}
\label{eq_x21}
\sum_{k=0}^{\infty} c_k z^k = \lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0 \\ \theta N\to \gamma \end{smallmatrix}} \prod_{i=1}^{N} (1-a_i^{(N)} z)^{-\theta}=\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0 \\ \theta N\to \gamma \end{smallmatrix}} \exp\left[-\theta \sum_{i=1}^{N} \ln\left(1-a_i^{(N)} z\right)\right]\\=\lim_{\begin{smallmatrix} N\to\infty,\, \theta\to 0 \\ \theta N\to \gamma \end{smallmatrix}} \exp\left[\theta N \sum_{k=1}^{\infty} \frac{z^k}{k} \frac{1}{N}\sum_{i=1}^{N} (a_i^{(N)})^k\right]= \exp\left[ \gamma \sum_{k=1}^{\infty} \frac{m_k z^k}{k}\right].
\end{multline}
Because $|a_i^{(N)}|\le r$ for all $1\le i \le N$, for any $\epsilon>0$ the convergence in \eqref{eq_x21} is uniform over $|z|\le r^{-1}-\epsilon$. We claim that the convergence in \eqref{eq_x20} is uniform over $y$ in compact subsets of $\mathbb C$. Indeed, the term-by-term
convergence of \eqref{eq_x17} to \eqref{eq_x20} is evident from \eqref{eq_x21}, while a tail bound on the series can be obtained from a uniform $N$--independent bound on the coefficients $c_k=c_k^{(N)}$ in \eqref{eq_x17}, \eqref{eq_x18} of the form $|c_k^{(N)}|\le C\cdot r^{-k}$ for $C>0$, which follows from the Cauchy integral formula applied to \eqref{eq_x18}.
Comparing \eqref{eq_x22} with \eqref{eq_x20} and noting that uniform convergence of analytic functions implies convergence of their derivatives, we conclude that
\begin{equation}
\label{eq_x23}
\exp\left(\sum_{l=1}^{\infty} \frac{\kappa_l}{l} y^l\right)=\sum_{k=0}^{\infty} \frac{c_k}{(\gamma)_k} y^k.
\end{equation}
The last identity together with \eqref{eq_x21} gives \eqref{eq_cums_moments_2}.
\medskip
{\bf Step 3.} It remains to study the case when $\{m_k\}_{k\ge 1}$ is an arbitrary sequence of real numbers, rather than a sequence of moments of a compactly supported probability measure $\mu$ as in \eqref{eq_x19}. Note that the relation $\{\kappa_l\}_{l\ge 1} = T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1})$ is equivalent to saying that for certain polynomials $Q^l$, we have
$$
\kappa_l= Q^l(m_1,m_2,\dots,m_l), \quad l=1,2,\dots.
$$
On the other hand, \eqref{eq_cums_moments} of Theorem \ref{thm:mom_cums2} is equivalent to saying that for certain polynomials $\tilde Q^l$, we have
$$
\kappa_l= \tilde Q^l(m_1,m_2,\dots,m_l), \quad l=1,2,\dots.
$$
Hence, in order to prove Theorem \ref{thm:mom_cums2}, we need to show that the polynomials $Q^l$ and $\tilde Q^l$ coincide for each $l=1,2,\dots$. Note that two polynomials in $l$ variables coincide if and only if they coincide as functions on a non-empty open set $D\subset \mathbb R^{l}$, and this $D$ can be chosen in an arbitrary way. Fix $l$ and define:
$$
D:=\left\{\left(\tfrac{1}{l}\sum_{i=1}^{l} d_i,\, \tfrac{1}{l}\sum_{i=1}^{l} (d_i)^2,\dots, \tfrac{1}{l}\sum_{i=1}^{l} (d_i)^l\right) \, \Bigg|\, d_1<d_2<\dots<d_l\right\}\subset \mathbb R^{l}.
$$
The set $D$ is the image of the open set $\{(d_1,\dots,d_l)\in \mathbb R^l \mid d_1<\dots<d_l \}$ under a smooth map with non-vanishing Jacobian (equal to $l^{-l} \prod_{i<j} (d_i-d_j)$). Hence, $D$ is open. By Step 2 the polynomials $Q^l$ and $\tilde Q^l$ coincide as functions on $D$, because each element in $D$ is a moment sequence of the discrete probability measure with atoms $\tfrac{1}{l}$ at points $d_1,\dots,d_l$. Therefore, the polynomials $Q^l$ and $\tilde Q^l$ coincide.
\end{proof}
\section{Limits of the maps $T^\ga_{\ka\to m}$ and $T^\ga_{m\to\ka}$ as $\gamma\to 0$ and $\gamma\to\infty$}
\label{Section_semifree}
In this section we investigate the behavior of the $\gamma$-cumulants and the $\gamma$--convolution as $\gamma\to 0$ and $\gamma\to\infty$. We will see that in the former case the conventional cumulants and conventional convolution appear, while in the latter case we recover their free probability counterparts.
\subsection{$\gamma\to 0$ limit}
\label{Section_limit_to_0}
Let us recall the definition of the classical cumulants.
\begin{definition} Given a sequence of moments $\{m_k\}_{k\ge 1}$ we define the corresponding cumulants $\{c_l\}_{l\ge 1}= \tilde T_{m\to c}(\{m_k\}_{k\ge 1})$ through the identity for the generating functions:
\begin{equation}
\label{eq_cumulants_gen}
C(z) = \ln (M(z)),\qquad \text{where } \quad M(z) := 1 + \sum_{k=1}^\infty{\frac{m_k}{k!}z^k},\quad C(z) := \sum_{l=1}^\infty{\frac{c_l}{l!} z^l}.
\end{equation}
In the opposite direction, given a sequence of cumulants $\{c_l\}_{l\ge 1}$, we define the corresponding sequence of moments $\{m_k\}_{k\ge 1}=\tilde T_{c\to m}(\{c_l\}_{l\ge 1})$ through a combinatorial formula:
\begin{equation}\label{eq_cumulants_combinatorial}
m_k := \sum_{\pi=B_1\sqcup \dots\sqcup B_m \in\mathscr P(k)}\, \prod_{i=1}^m{c_{|B_i|}},\quad k = 1, 2, \cdots.
\end{equation}
\end{definition}
The two definitions \eqref{eq_cumulants_gen} and \eqref{eq_cumulants_combinatorial} are well-known to be equivalent and the maps $\tilde T_{m\to c}$ and $\tilde T_{c\to m}$ are inverse to each other, see, e.g., \cite[Sections 1.1--1.2]{MS}.
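For small $k$, this equivalence is also easy to confirm directly; the following sketch (Python, assuming the \texttt{sympy} package) computes $c_l$ via \eqref{eq_cumulants_gen} and re-assembles $m_k$ via \eqref{eq_cumulants_combinatorial}.
\begin{verbatim}
# A sketch (assumes sympy): check that the generating-function and the
# combinatorial definitions of classical cumulants agree for k <= 4.
import sympy as sp
from sympy.utilities.iterables import multiset_partitions

K = 4
z = sp.symbols('z')
m = sp.symbols('m1:%d' % (K + 1))

M = 1 + sum(m[k - 1] * z**k / sp.factorial(k) for k in range(1, K + 1))
C = sp.expand(sp.series(sp.log(M), z, 0, K + 1).removeO())
c = [sp.factorial(l) * C.coeff(z, l) for l in range(K + 1)]  # c[l] = c_l

for k in range(1, K + 1):
    mk = sum(sp.prod(c[len(B)] for B in pi)
             for pi in multiset_partitions(list(range(k))))
    print(k, sp.expand(mk - m[k - 1]) == 0)   # expected: True
\end{verbatim}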
\begin{thm}\label{Theorem_gamma_to_0_limit}
Given a sequence of numbers $\{m_k\}_{k\ge 1}$, let
$$
\{\kappa^0_l\}_{l\ge 1}=\lim_{\gamma\to 0} T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1}), \qquad \qquad \{c_l\}_{l\ge 1}= \tilde T_{m\to c}(\{m_k\}_{k\ge 1}).
$$
Then for each $l=1,2,\dots$, we have
$\displaystyle \kappa^0_l= \tfrac{1}{(l-1)!}c_l$.
\end{thm}
\begin{remark}
By continuous invertibility of the maps $T^\ga_{m\to\ka}$ and $\tilde T_{m\to c}$, we can equivalently write $\{m_k\}_{k\ge 1}=\lim_{\gamma\to 0} T^\ga_{\ka\to m}(\{ \tfrac{1}{(l-1)!}c_l \}_{l\ge 1})$ and $\{m_k\}_{k\ge 1}= \tilde T_{c\to m}(\{c_l\}_{l\ge 1})$. The statement of the theorem is that these two definitions of $\{m_k\}_{k\ge 1}$ coincide.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Theorem_gamma_to_0_limit}] We use the description of the map $T^\ga_{m\to\ka}$ of Theorem \ref{thm:mom_cums2} and send $\gamma\to 0$ in the weight $W(\pi)$ of \eqref{W_formula}. Note that
$$
\lim_{\gamma\to 0} \bigl[ p(i)! (\gamma+p(i)+1)_{q(i)}\bigr]=(p(i)+q(i))!.
$$
According to the definitions of Section \ref{sec_Tcm}, $p(i)+q(i)+1=|B_i|$, i.e., the size of the $i$th block in $\pi$. Hence, the $\gamma\to 0$ limit of Theorem \ref{thm:mom_cums2} gives
\begin{equation}
\label{eq_x24}
m_k=\sum_{\pi=B_1\sqcup\dots\sqcup B_m\in\mathscr P(k)} \prod_{i=1}^m\left[(|B_i|-1)!\, \kappa^0_{|B_i|}\right], \qquad k=1,2,\dots.
\end{equation}
Comparing with \eqref{eq_cumulants_combinatorial} and noting that the relations \eqref{eq_x24} uniquely determine $\{\kappa^0_l\}_{l\ge 1}$, we conclude that $ \kappa^0_l= \tfrac{1}{(l-1)!}c_l$.
\end{proof}
As a corollary, we obtain the $\gamma\to 0$ behavior of the $\gamma$--convolution of Theorem \ref{Theorem_gamma_convolution}.
\begin{corollary}\label{Corollary_convolution_at_0}
Take two sequences of real numbers $\{m_k^{\mathbf a}\}_{k\ge 1}$ and $\{m_k^{\mathbf b}\}_{k\ge 1}$. Define
$$
\{\tilde m_k\}_{k\ge 1}:=\lim_{\gamma\to 0}\left[ \{m_k^{\mathbf a}\}_{k\ge 1}\boxplus_\gamma \{m_k^{\mathbf b}\}_{k\ge 1} \right].
$$
Then with the agreement $m_0^{\mathbf a}=m_0^{\mathbf b}=1$, we have
\begin{equation}
\label{eq_sum_moments}
\tilde m_k=\sum_{s=0}^k {k\choose s} m_s^{\mathbf a} m_{k-s}^{\mathbf b}, \quad k=1,2,\dots.
\end{equation}
\end{corollary}
\begin{remark}
Suppose that we are given two independent random variables $\mathbf a$ and $\mathbf b$, such that
$$
m_k^{\mathbf a} =\mathbb E \mathbf a^k, \quad m_k^{\mathbf b}=\mathbb E \mathbf b^k,\quad k=1,2,\dots.
$$
Then the formula \eqref{eq_sum_moments} says that $\tilde m_k=\mathbb E(\mathbf a+\mathbf b)^k$.
\end{remark}
\begin{proof}[Proof of Corollary \ref{Corollary_convolution_at_0}] By \eqref{eq_convolution_cumulants} we have for each $\gamma>0$
$$
T^\ga_{m\to\ka}( \{m_k^{\mathbf a}\}_{k\ge 1}\boxplus_\gamma \{m_k^{\mathbf b}\}_{k\ge 1} )= T^\ga_{m\to\ka}( \{m_k^{\mathbf a}\}_{k\ge 1}) + T^\ga_{m\to\ka}( \{m_k^{\mathbf b}\}_{k\ge 1}).
$$
Taking the limit $\gamma\to 0$ and using Theorem \ref{Theorem_gamma_to_0_limit}, we get
$$
\tilde T_{m\to c} \bigl( \{\tilde m_k\}_{k\ge 1}\bigr)= \tilde T_{m\to c}\bigl( \{m_k^{\mathbf a}\}_{k\ge 1}\bigr) + \tilde T_{m\to c}\bigl( \{m_k^{\mathbf b}\}_{k\ge 1}\bigr).
$$
Taking into account \eqref{eq_cumulants_gen}, the last identity is equivalent to \eqref{eq_sum_moments}.
\end{proof}
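As a quick numerical illustration of \eqref{eq_sum_moments} and of the remark above, the following sketch takes two explicit finite distributions and compares the binomial formula with the moments of the sum of independent copies (plain Python, exact arithmetic).
\begin{verbatim}
# A sketch: check the binomial moment formula for two finite distributions.
from fractions import Fraction as F
from math import comb

A = {0: F(1, 2), 1: F(1, 2)}            # a fair coin
B = {1: F(1, 3), 2: F(1, 3), 5: F(1, 3)}

def mom(dist, k):                        # k-th moment, with m_0 = 1
    return sum(p * F(x) ** k for x, p in dist.items())

for k in range(1, 6):
    binom = sum(comb(k, s) * mom(A, s) * mom(B, k - s) for s in range(k + 1))
    direct = sum(pa * pb * F(x + y) ** k
                 for x, pa in A.items() for y, pb in B.items())
    print(k, binom == direct)            # expected: True
\end{verbatim}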
\subsection{$\gamma\to\infty$ limit}
\label{Section_limit_to_infinity}
Let us recall the definition of the free cumulants.
A set partition $\pi\in\mathscr P(k)$ is said to be \emph{crossing} if there exist two distinct blocks $B, B'\in\pi$ and integers $1\le x < y < z < w \le k$ such that $x, z\in B$ and $y, w\in B'$. A set partition $\pi\in\mathscr P(k)$ is said to be a \emph{non-crossing set partition} if it is not crossing. In the notations of Section \ref{sec_Tcm}, the non-crossing set partitions are those which have $p(i)=0$ for all $i$; in other words, there should be no crossings of roofs and legs.
We denote the collection of all non-crossing set partitions of $[k]$ by $NC(k)$. For instance, out of the seven partitions of $[4]$ with two blocks, mentioned in Section \ref{sec_Tcm}, the following six are the non-crossing ones:
$$
\{1\}\sqcup \{2,3,4\},\quad \{1,3,4\}\sqcup \{2\},\quad \{1,2,4\}\sqcup\{3\},\quad \{1,2,3\}\sqcup\{4\},
$$
$$
\{1,2\}\sqcup \{3,4\},\quad \{1,4\}\sqcup \{2,3\}.
$$
\begin{definition} Given a sequence of moments $\{m_k\}_{k\ge 1}$ we define the corresponding free cumulants $\{r_l\}_{l\ge 1}= \tilde T^\infty_{m\to r}(\{m_k\}_{k\ge 1})$ through the identity for the generating functions:
\begin{equation}
\label{eq_free_cumulants_gen}
G\bigl(R(z)+z^{-1}\bigr) = z,\qquad \text{where } \quad G(z) := z^{-1} + \sum_{k=1}^\infty \frac{m_k}{z^{k+1}} ,\quad R(z) := \sum_{l=1}^\infty r_l z^{l-1}.
\end{equation}
In the opposite direction, given a sequence of free cumulants $\{r_l\}_{l\ge 1}$, we define the corresponding sequence of moments $\{m_k\}_{k\ge 1}=\tilde T^\infty_{r\to m}(\{r_l\}_{l\ge 1})$ through a combinatorial formula:
\begin{equation}\label{eq_free_cumulants_combinatorial}
m_k := \sum_{\pi=B_1\sqcup \dots\sqcup B_m \in NC(k)}\, \prod_{i=1}^m{r_{|B_i|}},\quad k = 1, 2, \cdots.
\end{equation}
\end{definition}
The relation $G\bigl(R(z)+z^{-1}\bigr) = z$ can be rewritten as $R(z)=G^{(-1)}(z)-z^{-1}$ and $R(z)$ defined in this way is called the \emph{Voiculescu $R$--transform.}
The equivalence between \eqref{eq_free_cumulants_gen} and \eqref{eq_free_cumulants_combinatorial} is explained in \cite[Sect. 2.4]{MS}.
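Classifying the partitions in \eqref{eq_free_cumulants_combinatorial} by the block containing $1$ yields the standard recursion $m_n=\sum_{s=1}^{n} r_s \sum_{i_1+\dots+i_s=n-s} m_{i_1}\cdots m_{i_s}$ (with $i_j\ge 0$ and $m_0=1$). The following sketch (plain Python) inverts this recursion to recover free cumulants from moments and tests it on the semicircle law, whose even moments are the Catalan numbers and whose only non-zero free cumulant is $r_2$.
\begin{verbatim}
# A sketch: free cumulants via the non-crossing recursion, checked on the
# semicircle law (even moments = Catalan numbers; only r_2 = 1 is non-zero).
from fractions import Fraction as F
from itertools import product
from math import comb

def prod_m(m, c):
    out = F(1)
    for i in c:
        out *= m[i]
    return out

def comps(total, parts):                 # weak compositions of `total`
    return (c for c in product(range(total + 1), repeat=parts)
            if sum(c) == total)

def free_cumulants(m):                   # input: m[0] = 1, m[1], ..., m[K]
    K = len(m) - 1
    r = [F(0)] * (K + 1)
    for n in range(1, K + 1):
        rest = sum(r[s] * sum(prod_m(m, c) for c in comps(n - s, s))
                   for s in range(1, n))
        r[n] = m[n] - rest               # the s = n term contributes exactly r_n
    return r

catalan = [F(comb(2 * n, n), n + 1) for n in range(5)]
m = [F(1)] + [catalan[k // 2] if k % 2 == 0 else F(0) for k in range(1, 9)]
print(free_cumulants(m)[1:])             # expected: r_2 = 1, all other r_n = 0
\end{verbatim}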
\begin{thm}\label{Theorem_gamma_to_infinity_limit}
Given a sequence of numbers $\{m_k\}_{k\ge 1}$ define
$$
\{\kappa^\gamma_l\}_{l\ge 1}= T^\ga_{m\to\ka}(\{m_k\}_{k\ge 1}), \qquad r_l=\lim_{\gamma\to\infty} \gamma^{l-1} \kappa^\gamma_l.
$$
Then we have
$\displaystyle \{r_l\}_{l\ge 1}= \tilde T^\infty_{m\to r} (\{m_k\}_{k\ge 1})$.
\end{thm}
\begin{remark} \label{Remark_inverse_infinity}
By continuous invertibility of the maps $T^\ga_{m\to\ka}$ and $\tilde T^\infty_{m\to r}$, we can equivalently write $\{m_k\}_{k\ge 1}=\lim_{\gamma\to \infty} T^\ga_{\ka\to m}(\{\gamma^{1-l}r_l\}_{l\ge 1})$ and $\{m_k\}_{k\ge 1}= \tilde T^\infty_{r\to m}(\{r_l\}_{l\ge 1})$. The statement of the theorem is that these two definitions of $\{m_k\}_{k\ge 1}$ coincide.
\end{remark}
\begin{proof}[Proof of Theorem \ref{Theorem_gamma_to_infinity_limit}] We use the reformulation of Remark \ref{Remark_inverse_infinity} together with the description of the map $T^\ga_{\ka\to m}$ of Theorem \ref{theorem_cumuls_moms} and send $\gamma\to \infty$ in the weight $W(\pi)$ of \eqref{W_formula}. Note that
$$
\lim_{\gamma\to \infty } \bigl[ \gamma^{-p(i)-q(i)} p(i)! (\gamma+p(i)+1)_{q(i)}\bigr]=\begin{cases} 1,& \text{if }p(i)=0,\\ 0, & \text{otherwise.} \end{cases}
$$
According to the definitions of Section \ref{sec_Tcm}, $p(i)+q(i)+1=|B_i|$, so $\gamma^{1-|B_i|}r_{|B_i|} = \gamma^{-p(i)-q(i)}r_{|B_i|}$. Hence, the $\gamma\to \infty$ limit of Theorem \ref{theorem_cumuls_moms} gives
\begin{multline}
\label{eq_x25}
m_k=\lim_{\gamma\to\infty} \sum_{\pi=B_1\sqcup\dots\sqcup B_m\in\mathscr P(k)} \prod_{i=1}^m\left[ \gamma^{-p(i)-q(i)} p(i)! (\gamma+p(i)+1)_{q(i)} r_{|B_i|}\right]\\= \sum_{\pi=B_1\sqcup \dots\sqcup B_m\in NC(k)} \prod_{i=1}^m r_{|B_i|}, \quad k=1,2,\dots.
\end{multline}
This is the same expression as \eqref{eq_free_cumulants_combinatorial}.
\end{proof}
As a corollary, we obtain the $\gamma\to \infty$ behavior of the $\gamma$--convolution of Theorem \ref{Theorem_gamma_convolution}.
\begin{corollary}\label{Corollary_convolution_at_infty}
Take two sequences of real numbers $\{m_k^{\mathbf a}\}_{k\ge 1}$ and $\{m_k^{\mathbf b}\}_{k\ge 1}$. Define
$$
\{\tilde m_k\}_{k\ge 1}:=\lim_{\gamma\to \infty}\left[ \{m_k^{\mathbf a}\}_{k\ge 1}\boxplus_\gamma \{m_k^{\mathbf b}\}_{k\ge 1} \right].
$$
Then the sequence $\{\tilde m_k\}_{k\ge 1}$ is uniquely fixed by the identity
\begin{equation}
\label{eq_sum_free_moments}
\tilde T^{\infty}_{m\to r}\bigl(\{\tilde m_k\}_{k\ge 1}\bigr)= \tilde T^{\infty}_{m\to r}\bigl(\{m^{\mathbf a}_k\}_{k\ge 1}\bigr)+ \tilde T^{\infty}_{m\to r}\bigl( \{m^{\mathbf b}_k\}_{k\ge 1}\bigr).
\end{equation}
\end{corollary}
\begin{remark}
Suppose that we are given two random variables $\mathbf a$ and $\mathbf b$, such that
$$
m_k^{\mathbf a} =\mathbb E \mathbf a^k, \quad m_k^{\mathbf b}=\mathbb E \mathbf b^k,\quad k=1,2,\dots.
$$
Then the formula \eqref{eq_sum_free_moments} says that $\tilde m_k$ are the moments of the \emph{free convolution} of $\mathbf a$ and $\mathbf b$.
\end{remark}
\begin{proof}[Proof of Corollary \ref{Corollary_convolution_at_infty}] By \eqref{eq_convolution_cumulants} we have for each $\gamma>0$
$$
T^\ga_{m\to\ka}\bigl( \{m_k^{\mathbf a}\}_{k\ge 1}\boxplus_\gamma \{m_k^{\mathbf b}\}_{k\ge 1} \bigr)= T^\ga_{m\to\ka}\bigl( \{m_k^{\mathbf a}\}_{k\ge 1}\bigr) + T^\ga_{m\to\ka}\bigl( \{m_k^{\mathbf b}\}_{k\ge 1}\bigr).
$$
Taking the limit $\gamma\to \infty$ and using Theorem \ref{Theorem_gamma_to_infinity_limit}, we get \eqref{eq_sum_free_moments}.
\end{proof}
\section{Appendix: Law of Large Numbers for fixed temperature}
\label{Section_Appendix_LLN}
The aim of this Appendix is to probe the possibility of a version of Theorem \ref{thm_small_th} in which $\th > 0$ is fixed and does not change with $N$.
The following claim is an analogue of \emph{one direction} of Theorem \ref{thm_small_th}.
\begin{claim}[LLN for finite temperature]\label{Claim_finite_th}
Let $\{\mu_N\}_{N \geq 1}$ be a sequence of exponentially decaying probability measures on tuples $a_1\le \cdots \le a_N$.
For each $N$, let $G_{N; \th}(x_1, \cdots, x_N) := G_\th(x_1, \cdots, x_N; \mu_N)$ be the BGF of $\mu_N$.
Assume that the sequence $\{G_{N; \th}\}_N$ satisfies the following conditions:
\begin{enumerate}[label=(\alph*)]
\item $\displaystyle \lim_{N\to\infty}\left.\frac{1}{N}\cdot\frac{\partial^l}{\partial x_i^l}\ln{(G_{N; \th})}\right|_{x_1=\dots=x_N=0} =
(l - 1)!\cdot c_l$,\ for all $l\in\mathbb{Z}_{\geq 1}$.
\item $\displaystyle \lim_{N\to\infty}\left.\frac{1}{N}\cdot\frac{\partial}{\partial x_{i_1}}\cdots\frac{\partial}{\partial x_{i_r}}\ln{(G_{N; \th})}\right|_{x_1=\dots=x_N=0} = 0$,\ for all $i_1, \dots, i_r\in\mathbb{Z}_{\geq 1}$ such that the set $\{i_1, \dots, i_r\}$ contains at least two distinct indices.
\end{enumerate}
\noindent Then the sequence $\{\mu_N\}_{N \geq 1}$ satisfies the following LLN (compare to Definition \ref{Definition_LLN_sat_ht}):
there exist real numbers $\{m_{k}\}_{k\geq 1}$ such that for any
$s=1,2,\dots$ and any $k_1, \dots, k_s\in\mathbb{Z}_{\geq 1}$, we have
$$\lim_{N\to\infty} \mathbb E_{\mu_N} \prod_{i=1}^s \left( \frac{1}{N} \sum_{j=1}^N\left( \frac{a_j}{N} \right)^{k_i} \right) = \prod_{i=1}^{s} m_{k_i}.$$
\medskip
If this occurs, $\{c_l\}_{l\geq 1}$ and $\{m_k\}_{k\ge 1}$ are related by either
\begin{equation*}
m_k = \sum_{\pi\in NC(k)}\, \prod_{B\in\pi}{\left( \theta^{|B| - 1}c_{|B|} \right)},\quad k = 1, 2, \cdots,
\end{equation*}
or, equivalently,
\begin{equation*}
m_k = \frac{1}{k+1} \cdot [z^{-1}] \left( \left( z^{-1} + \sum_{l = 1}^{\infty} \th^{l-1}c_l z^{l - 1} \right)^{\!\!k+1} \right)\!,\quad k\in\mathbb{Z}_{\geq 1}.
\end{equation*}
\end{claim}
\bigskip
We do not present a proof of the claim, but it can likely be proved with the same techniques that we used in Section \ref{sec_proof_LLN}. At $\theta=1$, another approach is via a degeneration of \cite[Theorem 5.1]{BuG1}, see also \cite{NovakM}.
This claim would prove the one-sided implication
$$
\text{conditions (a) and (b) (fixed $\theta$ version of LLN--appropriateness)} \Longrightarrow \text{LLN--satisfaction}.
$$
Based on our Theorem \ref{thm_small_th}, on \cite[Theorem 2.6]{BuG3} (which studies the CLT at fixed $\theta = 1$), and on the classical theorem that relates the weak convergence of measures to the convergence of their characteristic functions, the reader may be inclined to believe that the reverse implication is also true and that such ``if and only if'' results always hold.
However, this turns out to be wrong. The naive analogue of Theorem \ref{thm_small_th} does not hold for fixed $\theta$: the expected ``if and only if'' statement is false. Here is a counter-example.
Let us consider a sequence of probability measures $\mu_N$ such that a random $\mu_N$--distributed vector $a_1 \ge \cdots \ge a_N$ has $a_1 = \cdots = a_N$ almost surely and this common value $a$ is distributed according to a Gaussian measure of mean $0$ and variance $N$.
In this case, the random variable
$$
p_k^N = \frac{1}{N}\sum_{i=1}^N{\left( \frac{a_i}{N} \right)^k} = \left( \frac{a}{N} \right)^k
$$
is distributed as the $k$-th power of a Gaussian random variable of mean $0$ and variance $1/N$.
Consequently, the sequence $\{\mu_N\}_N$ satisfies a LLN, and all $m_k$'s are equal to zero.
On the other hand, by using $B_{(a, \dots, a)}(x_1, \dots, x_N; \th) = \exp(a\sum_{i=1}^N{x_i})$, it follows that the BGF of $\mu_N$ equals
$$
G_{N; \th}(x_1, \cdots, x_N) = \int_{-\infty}^{\infty}{\frac{e^{-a^2/(2N)}}{\sqrt{2\pi N}} B_{(a, \dots, a)}(x_1, \dots, x_N; \th) da}
= \exp\left( \frac{N}{2} \left(\sum_{i=1}^N{x_i}\right)^{\!\!\!2\,} \right).
$$
The sequence $\{G_{N; \th}\}_N$ of BGFs then satisfies
$$
\lim_{N\to\infty} \left. \frac{1}{N}\cdot\frac{\partial^2}{\partial x_1\partial x_2}\ln(G_{N;\th}) \right|_{x_1=\dots=x_N=0} = 1,
$$
therefore contradicting condition (b) from Claim \ref{Claim_finite_th}.
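The Gaussian integral behind the last display can be confirmed symbolically; here is a minimal sketch (Python, assuming the \texttt{sympy} package), with $s$ standing for $\sum_{i=1}^N x_i$.
\begin{verbatim}
# A sketch (assumes sympy): the Gaussian integral used for the BGF above.
import sympy as sp

a, s = sp.symbols('a s', real=True)
N = sp.symbols('N', positive=True)
G = sp.integrate(sp.exp(-a**2 / (2 * N)) / sp.sqrt(2 * sp.pi * N)
                 * sp.exp(a * s), (a, -sp.oo, sp.oo))
print(sp.simplify(G / sp.exp(N * s**2 / 2)))   # expected: 1
\end{verbatim}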
\bigskip
A more refined question is whether \emph{some} ``if and only if for LLN'' statement holds, if one modifies somehow the conditions (a) and (b) of Claim \ref{Claim_finite_th}.
Based on small calculations (obtained when trying to reverse-engineer the proof of Theorem \ref{thm_small_th}), it is plausible that the answer is yes.
Indeed, based on Proposition \ref{proposition_moments_through_operators}, we must study the limits of the expressions
\begin{equation}\label{P_lambda}
N^{-|\lambda|-\ell(\lambda)}\cdot \left[\prod_{i=1}^{\ell(\lambda)}{\P_{\lambda_i}}\right]\! G_{N; \th},
\end{equation}
where $\lambda$ ranges over the set of all partitions of a given size $k$.
We have performed calculations for $k=2, 3$; they indicate that the conditions on second-order derivatives in (a) and (b) from Claim \ref{Claim_finite_th} should be replaced by:
\begin{align*}
\bullet& \lim_{N\to\infty} \left.\frac{1}{N}\left\{\frac{\partial^2}{\partial x_1^2} - \frac{\partial^2}{\partial x_1\partial x_2} \right\} \ln(G_{N;\th}) \right|_{x_1=\dots=x_N=0} = \th^{-1}\cdot c_2,\\
\bullet& \lim_{N\to\infty} \left.\frac{1}{N^2} \frac{\partial^2 \ln(G_{N;\th})}{\partial x_1\partial x_2} \right|_{x_1=\dots=x_N=0} = 0,
\end{align*}
and the conditions on third-order derivatives should be replaced by:
\begin{align*}
\bullet& \lim_{N\to\infty} \left.\frac{1}{N}\left\{\frac{1}{2}\cdot\frac{\partial^3}{\partial x_1^3} - \frac{3}{2}\cdot\frac{\partial^3}{\partial x_1^2\partial x_2} + \frac{\partial^3}{\partial x_1\partial x_2 \partial x_3} \right\} \ln(G_{N;\th}) \right|_{x_1=\dots=x_N=0} = \th^{-2}\cdot c_3,\\
\bullet& \lim_{N\to\infty} \left.\frac{1}{N}\left\{ \frac{\partial^3}{\partial x_1^2\partial x_2} - \frac{\partial^3}{\partial x_1 \partial x_2 \partial x_3} \right\} \ln(G_{N;\th}) \right|_{x_1=\dots=x_N=0} = 0,\\
\bullet& \lim_{N\to\infty} \left.\frac{1}{N^2}\frac{\partial^3 \ln(G_{N;\th})}{\partial x_1\partial x_2\partial x_3} \right|_{x_1=\dots=x_N=0} = 0.
\end{align*}
These relations are much more involved than conditions (a) and (b) from Claim \ref{Claim_finite_th}, or than the conditions from Definition \ref{Definition_LLN_appr_ht}. What should be the correct ``if and only if'' relations for $k>3$? This is an interesting open question for future research.
\section{Introduction}
The electronic structure problem of quantum chemistry is one of the main anticipated applications for future quantum computers.~\cite{cao2019quantum, reiher2017elucidating}
The core problem is to extract, primarily low-lying, eigenenergies from electronic Hamiltonians as well as to prepare the corresponding eigenstates. Projective algorithms, like quantum phase estimation~\cite{aspuru2005simulated}, are promising candidates for application on future quantum computers. Since the first proposals, those algorithms have been improved significantly with regard to their resource requirements and estimated runtimes~\cite{babbush2018low-depth, vonburg2021quantum,lee2020efficient}, but their expected applicability remains out of reach for near- and medium-term devices.
Variational algorithms were originally proposed~\cite{peruzzo2014variational, mcclean2016theory} as an applicable class of algorithms for those devices and as a potential bridging technology until scalable fault-tolerant hardware becomes available.
As the success probability of phase estimation based algorithms depends on the overlap of the initial state with the targeted eigenstate, variational algorithms might also play a role in the long term by providing improved initial states for projective algorithms.\\
Variational algorithms usually aim to prepare wavefunctions directly by a parametrized quantum circuit and measure the associated energies as expectation values. The parameters of the quantum circuit (\textit{e.g.} the angles of Eq.~\eqref{eq:ucc_gate_general}) are then optimized successively by a classical optimizer via the variational principle.
In its original form, which is most common in quantum chemical applications, the objective of the optimization is the plain expectation value of the electronic Hamiltonian. Detailed introductions to variational quantum algorithms can be found in recent reviews~\cite{bharti2021noisy, cerezo2020variational}, original articles~\cite{mcclean2016theory} or in recent works on unitary coupled-cluster with hands-on code examples~\cite{kottmann2020feasible, tequila}. The reviews~\cite{cao2019quantum, mcardle2020quantum} provide a general overview over quantum algorithms for quantum chemistry.
\\
The construction of suitable parametrized circuits for variational algorithms is a vibrant research topic, where physically inspired models like unitary coupled-cluster were part of the initial proposals~\cite{mcclean2016theory} and remain a source of inspiration for current approaches.
Standard unitary coupled-cluster approaches such as UCCSD suffer from high gate and parameter counts that both scale with quartic cost in the number of orbitals. In addition, the corresponding quantum circuits show, once compiled into primitive one- and two-qubit gates, high gate counts and depths even for small systems with a relatively small number of variational parameters.
The standard approach defines the whole unitary operation through a single exponential generated by a sum over primitive electronic excitation operators (as defined in Eq.~\eqref{eq:ucc_generator_general}).
This unitary operation is then decomposed into primitive excitation unitaries (such as Eq.~\eqref{eq:ucc_gate_general}) employing for example the Trotter decomposition.
Most modern approaches have abandoned this single-exponential formulation and instead follow a factorized~\cite{evangelista2019exact} approach where the quantum circuit is constructed as a product of primitive excitations following various strategies, such as adaptive circuit construction~\cite{grimsley2019adaptive, kottmann2020feasible}, Lie-algebraic~\cite{Izmaylov2020order} and empirical~\cite{grimsley2019trotterized} strategies.
Although significantly reduced, the associated gate counts are still infeasible on current-day hardware platforms.
This leads to the paradoxical situation that many-body wavefunctions of chemical systems, which are considered ``easy'' to describe within the classical algorithmic framework (e.g. using MP2 or CCSD), already require non-local and high-depth quantum circuits.
Ideally such wavefunctions should be initializable with low-depth and local quantum circuits.\\
In this work, we will describe how such circuits can be realized by physical principles and classical pre-computation, resulting in optimized circuits through a separable pair ansatz (SPA). As an explicit embodiment, we will combine this ansatz with the directly determined pair-natural-orbital representation of~\cite{kottmann2020reducing}, reducing gate counts and depths by several orders of magnitude; e.g., the BeH2(4,14) system goes from a circuit with 192 controlled-not gates~\cite{kottmann2020reducing} to the depth-5 circuit with 15 local controlled-not operations shown in Fig.~\ref{fig:beh2_circuit}.
Apart from featuring low gate counts and depths, the wavefunctions prepared by these quantum circuits can be represented efficiently on a classical computer, which allows classical simulation and parameter optimization of the corresponding circuits.
The classically optimized quantum circuits can then be used to prepare initial states for more advanced quantum algorithms like quantum phase estimation~\cite{aspuru2005simulated} or variational algorithms that use this ansatz as a baseline for building in more electron correlation. We will give explicit examples for both situations and identify good benchmark systems with small qubit counts suitable for future improved schemes.
Our approach is integrated into the basis-set-free framework of Ref.~\cite{kottmann2020reducing} where we further improved the underlying surrogate model. We provide a full-stack open-source implementation in \textsc{tequila}~\cite{tequila}.
\section{Methodology}
Unitary coupled-cluster quantum circuits are formed from elementary $n$-electron excitation gates
\begin{align}
U_{\boldsymbol{pq}}\lr{\theta} = e^{-i\frac{\theta}{2} G_{\boldsymbol{pq}}}\label{eq:ucc_gate_general}
\end{align}
which describe excitations between spin orbitals $\boldsymbol{p}=\left\{p_0,p_1,\dots, p_{n}\right\}$ and $\boldsymbol{q}=\left\{q_0,q_1,\dots,q_{n}\right\}$ through the fermionic excitation generator
\begin{align}
G_{\boldsymbol{pq}} = i\left(\prod_{k} a_{p_k}^\dagger a_{q_k} - h.c.\right)\label{eq:ucc_generator_general}.
\end{align}
The gate acts like a rotation on a subspace spanned by all configurations with spin-orbitals $\boldsymbol{p}$ occupied and $\boldsymbol{q}$ unoccupied and vice versa, while acting trivially (as the identity) on all other configurations
\begin{align}
U_{\boldsymbol{p}\boldsymbol{q}}\lr{\theta} =& \cos\lr{\frac{\theta}{2}} -i\sin\lr{\frac{\theta}{2}}G_{\boldsymbol{pq}}\nonumber\\ &+ \lr{1-\cos\lr{\frac{\theta}{2}}}P^0_{\boldsymbol{pq}}.
\end{align}
Here, $P^0_{\boldsymbol{pq}}$ is the nullspace projector of the generator $G_{\boldsymbol{pq}}$
\begin{align}
P^0_{\boldsymbol{pq}} = 1 - \prod_k a^\dagger_{p_k}a_{p_k}a_{q_k}a^\dagger_{q_k} - \prod_k a^\dagger_{q_k}a_{q_k}a_{p_k}a^\dagger_{p_k}
\end{align}
and the generators $G_{\boldsymbol{pq}},P^0_{\boldsymbol{pq}}$ can be mapped to qubits via various transformations~\cite{bharti2021noisy}.\\ One of the most established qubit encodings, the Jordan-Wigner transformation, maps the creation and annihilation operators directly to qubit raising and lowering operators $\sigma^\pm = \frac{1}{2}\lr{\sigma_x \pm i\sigma_y}$, dressed with strings of $\sigma_z$ operators that ensure the correct anti-commutation relations
\begin{align}
a^\dagger_p \xrightarrow[Wigner]{Jordan}\sigma^+_p \prod_{k<p} \sigma^z_k.
\end{align}
Within the Jordan-Wigner encoding, each spin-orbital is mapped directly to a qubit. In terms of spatial orbitals this means we need twice as many qubits as spatial orbitals $\ensuremath{N_\text{q}}=2\ensuremath{N_\text{o}}$. In this work, we will use the Jordan-Wigner representation due to its intuitive simplicity. In future applications, we could imagine local encodings~\cite{chien2020custom, setia2018bravyikitaevsuperfast, derby2020compact} to be advantageous. In order to employ them within the separable pair ansatz in Eq.~\eqref{eq:pno_upccd}, solely the bridging unitary of Eq.~\eqref{eq:hcb_to_jw} that maps from the paired representation to the Jordan-Wigner representation needs to be adapted.
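As a minimal illustration of this encoding, the following sketch uses \textsc{openfermion} (whose Jordan-Wigner implementation is also employed later in this work); the operator indices are arbitrary examples:
\begin{lstlisting}[language=Python]
from openfermion import FermionOperator, jordan_wigner, hermitian_conjugated

# one-electron excitation generator G = i(a^dag_2 a_0 - h.c.)
t = FermionOperator("2^ 0")
G = 1j * (t - hermitian_conjugated(t))

# Jordan-Wigner image: raising/lowering operators on qubits 0 and 2 with a
# sigma_z on the intermediate qubit 1 enforcing the anti-commutation relations
print(jordan_wigner(G))
\end{lstlisting}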
\begin{table}[]
\centering
\begin{tabular}{lr|rr|rr|rr}
\toprule
\multicolumn{2}{c}{ }&\multicolumn{6}{c}{Optimization level}\\
\multicolumn{2}{c}{ }&\multicolumn{2}{c}{2} &\multicolumn{2}{c}{1} &\multicolumn{2}{c}{0} \\
\midrule
\multicolumn{2}{l}{Method\phantom{aa}$N_\text{param}$}& \multicolumn{2}{c}{Depth\phantom{.}$N_\text{cnot}$}& \multicolumn{2}{c}{Depth\phantom{.}$N_\text{cnot}$} & \multicolumn{2}{c}{Depth\phantom{.}$N_\text{cnot}$}\\
\midrule
SPA & 3& 3 & 9 & 6 & 42 & 178 & 128 \\
UpCCD & 9& 18 & 27 & 21 & 126 & 545 & 368 \\
UpCCGD & 15& 28 & 45 & 31 & 210 & 844 & 608 \\
UpCCSD & 18& 172 & 243 & 536 & 558 & 934 & 692 \\
UpCCGASD & 30& 58 & 135 & 642 & 770 & - & - \\
2-UpCCGASD & 60& 275 & 625 & 1283 & 1540 & - & - \\
UpCCGSD & 30& 215 & 325 & 642 & 770 & 1377 & 1092 \\
2-UpCCGSD & 60& 432 & 815 & 1283 & 1540 & 2730 & 2184 \\
\bottomrule
\end{tabular}
\caption{Details about the circuits used in Fig.~\ref{fig:results_beh2_bh3_n2} for the N$_2$(6,12) and BH$_3$(6,12) systems with different levels of optimization: none (0), gate decompositions similar to Ref.~\cite{yordanov2020efficient} (1), including initialization in the HCB subspace according to Eq.~\eqref{eq:upccgsd} using SPA as initial part of the circuit (2). UpCCGASD and similar circuits employ approximate singles (similar to ``A'' gates in Ref.~\cite{gard2020efficient}) where $\sigma_z$ terms in the generators are neglected. }
\label{tab:details_bh3}
\end{table}
\subsection{Paired Coupled-Cluster and the Hard-Core Boson Model}
Paired unitary coupled-cluster models build their wavefunctions from single excitations and double excitations restricted to paired electrons in the same spatial orbitals $p$ and $q$.
The corresponding generator for a primitive unitary excitation operator is
\begin{align}
\tilde{G}_{pq} = i\lr{a_{p_\uparrow}^\dagger a_{q_\uparrow} a_{p_\downarrow}^\dagger a_{q_\downarrow} - h.c}.\label{eq:ucc_paired_generators}
\end{align}
In the Jordan-Wigner encoding, all $\sigma_z$ operations in these restricted double excitations cancel out, yielding a qubit excitation generator
\begin{align}
\tilde{G}_{pq} \xrightarrow[Wigner]{Jordan}\mathcal{G}_{\boldsymbol{pq}} = i\left(\prod_k\sigma^+_{p_k} \sigma_{q_k}^- - h.c.\right)\label{eq:qubit_excitation_generator}
\end{align}
that can be compiled into a unitary circuit with depth 22 and 13 CNOT gates~\cite{yordanov2020efficient} (see Ref.~\cite{anselmetti2021local} for an alternative construction).
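In \textsc{tequila}, such qubit excitations are available as primitive gates that are compiled with decompositions of this type. A small sketch follows; the qubit indices are placeholders and the index-ordering convention of the interface is an assumption here:
\begin{lstlisting}[language=Python]
import tequila as tq

# paired double excitation (p_up, p_down) -> (q_up, q_down) as a qubit
# excitation gate; indices are hypothetical placeholders
p_up, p_down, q_up, q_down = 0, 1, 2, 3
U = tq.gates.QubitExcitation(angle="theta", target=[p_up, q_up, p_down, q_down])
\end{lstlisting}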
If we restrict the ansatz to only paired doubles, \textit{e.g.} UpCCD, and start from a restricted reference wavefunction, the resulting wavefunction will consist solely of doubly occupied configurations. Instead of representing the spin-up and spin-down part of a spatial orbital individually with two qubits, we can now encode the doubly occupied or non-occupied spatial orbitals by a single qubit. This restriction is commonly referred to as a hard-core Boson (HCB) model of the original fermionic system and has been employed for variational quantum algorithms in reduced qubit representations~\cite{elfving2020simulating, khamoshi2020correlating}. Instead of hard-core Boson, the model is often also labelled as seniority-free or simply as a doubly-occupied or paired model. The term seniority-free results from the seniority quantum number of the associated wavefunction, which is a quantifier for unpaired electrons in the system. Associated classical algorithms are paired coupled-cluster (pCCD) or doubly-occupied configuration interaction (DOCI). In a $k$-UpCCGD ansatz the whole wavefunction can be represented in the hard-core Boson representation by encoding the pair-excitation generators of Eq.~\eqref{eq:ucc_paired_generators} as
\begin{align}
\tilde{G}_{PQ} \xrightarrow[]{HCB} i\lr{\sigma^+_P\sigma^-_Q - h.c.}\label{eq:paired_double_excitation_generator_hcb}
\end{align}
where qubits $P,Q$ represent electron pairs in spatial orbitals $P,Q$. Thus $\ensuremath{N_\text{o}}$ spatial orbitals are mapped to $\ensuremath{N_\text{o}}$ qubits. This is conceptually the same as in Ref.~\cite{elfving2020simulating} and the HCB-Hamiltonian can be mapped to qubits using the same principles.
Here, we aim to prepare good initial states of the original Hamiltonian, so we need to transfer the hard-core Boson wavefunction into a regular Jordan-Wigner represented qubit wavefunction. This can be achieved through a series of controlled-not operations
\begin{align}
U_{\text{HCB}}^{\text{JW}} = \prod_{p=1}^{\ensuremath{N_\text{o}}} \text{CNOT}\lr{p_\uparrow, p_\downarrow}\label{eq:hcb_to_jw}
\end{align}
that transfer the occupation information of the hard-core Boson wavefunction to a second register of qubits. The original qubit register $\boldsymbol{p}_\uparrow$, that represented electron pairs in the hard-core Boson representation, will then represent spin-up electrons in the Jordan-Wigner representation and the additional register $\boldsymbol{p}_\downarrow$ will represent the corresponding spin-down electrons.
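A minimal \textsc{tequila} sketch of this bridging layer is given below; the qubit layout, with the spin-up register on qubits $0,\dots,\ensuremath{N_\text{o}}-1$ and the spin-down register on qubits $\ensuremath{N_\text{o}},\dots,2\ensuremath{N_\text{o}}-1$, is an assumption for illustration:
\begin{lstlisting}[language=Python]
import tequila as tq

def hcb_to_jw(n_orbitals):
    # CNOT layer copying the hard-core Boson occupations (spin-up register)
    # onto the spin-down register, cf. Eq. (eq:hcb_to_jw)
    U = tq.QCircuit()
    for p in range(n_orbitals):
        U += tq.gates.CNOT(control=p, target=p + n_orbitals)
    return U
\end{lstlisting}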
In the case of the standard unitary forms of pCCD and pCCSD an optimized circuit construction is given by
\begin{align}
&U_\text{pCCD}\lr{\boldsymbol{\theta_\text{D}}} = U_\text{HCB}^\text{JW} U^{HCB}_\text{D}\lr{\boldsymbol{\theta_\text{D}}} U_\text{RHF}^{HCB}\label{eq:upccd} \\
&U_\text{pCCSD}\lr{\boldsymbol{\theta_\text{SD}}} = U^{JW}_\text{S}\lr{\boldsymbol{\theta_\text{S}}} U_\text{pCCD}\lr{\boldsymbol{\theta_\text{D}}}\label{eq:upccsd}
\end{align}
representing the pair-double excitations in the hard-core Boson representation and the singles excitations in the standard Jordan-Wigner representation. The restricted Hartree{\textendash}Fock reference state in the HCB representation is constructed by a product of $\sigma_x$ gates in the unitary $U_\text{RHF}^\text{HCB}$.
This particular construction of pCC(S)D circuits partially fixes the order of the primitive circuits as it requires the singles block to be separated from the doubles block. This high-level order is however often empirically preferred~\cite{grimsley2019trotterized}.
Further integration into the $k$-UpCCGSD~\cite{lee2018generalized} hierarchy can be obtained by adding further layers of excitations to the circuit
\begin{align}
U_{k-\text{UpCCGSD}} = \prod_{l=1}^{k-1} U_\text{GSD}\left(\boldsymbol{\theta}_{l+1}\right) U_\text{UpCCGSD}\left(\boldsymbol{\theta}_{1}\right)\label{eq:upccgsd}
\end{align}
and by using generalized singles and paired-doubles blocks (including excitations between all orbitals).
As the associated wavefunction will lose its pair structure after the first application of an unpaired operation, such as a single electron excitation, the hard-core Boson encoding will only reduce gate counts and depths for the first layer, and the primitive excitations of the $U_\text{GSD}$ blocks are not restricted to any particular ordering.
In Tab.~\ref{tab:details_bh3} we give some explicit gate counts and depths; the savings for the first UpCCGSD layer are approximately one order of magnitude.
Note that, in principle, the reference does not need to be restricted Hartree{\textendash}Fock, as long as the state is created solely from doubly occupied orbitals.\\
So far, this section unified ideas from Refs.~\cite{elfving2020simulating, khamoshi2020correlating}, which investigated paired unitary models, with general unitary coupled-cluster approaches, in particular the $k$-UpCCGSD~\cite{lee2018generalized} hierarchy. This combination already leads to significantly reduced gate counts and depths. In addition, we implemented optimized gate decompositions from Ref.~\cite{yordanov2020efficient} (and similarly Ref.~\cite{anselmetti2021local}) and Ref.~\cite{gard2020efficient} for the qubit excitation unitaries, which allowed a further reduction of the circuits. In the following section we will improve this circuit construction by using the system-adapted pair-natural orbitals introduced in Ref.~\cite{kottmann2020reducing} to construct compact Hamiltonians with significantly reduced qubit resources; here we will further employ this representation of the Hamiltonian to construct low-depth quantum circuits.
\begin{table}[]
\centering
\begin{tabular}{lcccccccc}
\toprule
Molecule($\ensuremath{N_\text{e}}$,$\ensuremath{N_\text{q}}$) & $N_\text{param}$& $N_\text{cnot}$ & Depth \\
\midrule
H$_2$(2,4) & 1 & 3 & 3 \\
LiH(2,10) & 4 & 15 & 18 \\
BeH$_2$(4,8) & 2 & 6 & 3 \\
BeH$_2$(6,14) & 4 & 15 & 7 \\
BH$_3$(6,12) & 3 & 9 & 3 \\
N$_2$(6,12) & 3 & 9 & 3 \\
C$_2$H$_4$(12,24) & 6 & 18 & 3 \\
H$_2$O$_2$(14,28) & 7 & 21 & 3 \\
C$_2$H$_6$(14,28) & 7 & 21 & 3 \\
C$_2$H$_6$(2,12) & 5 & 19 & 23 \\
C$_2$H$_6$(14,84) & 35& 133 & 23 \\
\bottomrule
\end{tabular}
\caption{Resource requirements for SPA circuits according to Eq.~\eqref{eq:pno_upccd} used in this work. We show minimal configurations ($N_\text{q}=2\cdot N_\text{e}$ with two spatial orbitals for each electron pair) with low-depth circuits as well as active space configurations with more spatial orbitals for individual electron pairs, e.g. LiH(2,10) and C$_2$H$_6$(2,12) with 5 and 6 spatial orbitals for a single electron pair and C$_2$H$_6$(14,84) with 6 spatial orbitals for each active electron pair. The BeH$_2$(6,14) circuit with three spatial orbitals for each of the two active electron pairs and a single spatial orbital for the core orbital is shown explicitly in Fig.~\ref{fig:beh2_circuit}.}
\label{tab:gate_counts}
\end{table}
\begin{figure}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.45\textwidth]{pics/circuit_cartoon.pdf}\\
\end{tabular}
\caption{Template SPA circuit (left) and required qubit connectivity (right) for a single electron pair represented by 3 spatial orbitals in ladder arrangement. The first \textit{NOT} operation on the lower left prepares the two-electron reference state in the hard-core Boson representation ($U_\text{RHF}^\text{HCB}$ in Eq.~\eqref{eq:pno_upccd}). The \textit{CNOT} layer on the right transfers the doubly-occupied wavefunction in the hard-core Boson representation to the Jordan-Wigner representation ($U_\text{HCB}^\text{JW}$ in Eq.~\eqref{eq:pno_upccd}).}
\label{fig:circuit_compiling_cartoon}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{pics/BeH2_pno_upccd_ladder_with_orbitals.pdf}
\caption{Example: Directly compiled low-depth SPA ladder arrangement for the BeH$_2$ molecule initializing a wavefunction as in Eq.~\eqref{eq:paired_wfn}. Controlled $R_y$ rotations can be compiled into two controlled-not operations and three single qubit rotations, leading to an overall CNOT count of 15 and a circuit depth of 8. The circuit corresponds to the BeH$_2$(6,14) entry in Tab.~\ref{tab:gate_counts}; without the qubits $\phi^0_{0\uparrow},\phi^0_{0\downarrow}$ (representing the core orbital $\phi^0_0$) it reduces to a BeH$_2$(4,12) circuit. Pink gates represent individually parametrized Pauli-Y rotations $R_y(\theta_k)=e^{-i\frac{\theta_k}{2} \sigma_y }$ and $+$ labels represent (controlled)-not operations. }
\label{fig:beh2_circuit}
\end{figure*}
\subsection{Separable Pair Ansatz (SPA)} In Ref.~\cite{kottmann2020reducing}, system-adapted orbitals are constructed from a classical surrogate model that tries to capture the most important physical effects of the molecule at hand, in order to yield an optimized spatial representation. Furthermore, the information from the surrogate can be transferred to the design of initial quantum circuits; in Ref.~\cite{kottmann2020reducing} this led to the so-called PNO-UpCCD ansatz. Similar to UpCCD, only pairwise excitations of electrons are allowed, and furthermore the excitation structure of the quantum circuit follows the underlying MP2-PNO surrogate. In the following, we will describe this model in a more general framework and combine it with the circuit compilation strategies of the last section to construct a separable pair ansatz (SPA) with significantly reduced gate count.\\
Assume we have an $\ensuremath{N_\text{e}}$ electron system and we want to create a wavefunction of $\frac{\ensuremath{N_\text{e}}}{2}$ electron pairs. This separable pair (SP) wavefunction can be written as
\begin{align}
\ket{{\ensuremath{\Psi_\text{SP}}}} = \prod_{k=1}^{\ensuremath{N_\text{e}}/2} \ket{\psi_k}\label{eq:paired_wfn}
\end{align}
where $\ket{\psi_k}$ are electron pair functions, that can themselves be represented by a linear combination of tensor-products of $\N{k}$ one-electron functions (spin-orbitals)
\begin{align}
\ket{\psi_k} = \sum_{mn} c^k_{mn} \ket{\phi^k_m}\otimes\ket{\phi^k_n}.
\end{align}
Each pair-function $\ket{\psi_k}$ is represented by an individual set of orbitals $\tilde{S}_{k} = \left\{\ket{\phi^k_l},\; l=0,\dots,\N{k}-1\right\}$ and we will furthermore require all orbitals to be orthonormal
\begin{align}
\braket{\phi^k_l}{\phi^{k'}_{l'}} = \delta_{kk'}\delta_{ll'} \;\; \forall k,k',l,l'.
\end{align}
In order to generate the wavefunction $\ket{{\ensuremath{\Psi_\text{SP}}}}$ in a qubit representation we only require the unitaries $U_k$ that create the pair-functions $\ket{\psi_k}$.
One strategy to realize $U_k$ is through one- and two-electron excitation gates as in Eq.~\eqref{eq:ucc_gate_general} acting on a closed-shell initial state
\begin{align}
\ket{{\ensuremath{\Psi_\text{SP}}}} = \prod_k^{\ensuremath{N_\text{e}}/2} U_{k}\left(\boldsymbol{\theta}_k\right)U_\text{RHF} \ket{00\dots0}\label{eq:product_structure}.
\end{align}
Note that the PNO-UpCCGSD circuits of Ref.~\cite{kottmann2020reducing} have exactly this product structure and the resulting wavefunctions can be fully simulated classically without facing an exponential memory bottleneck. It is only required to store $\frac{\ensuremath{N_\text{e}}}{2}$ individual pair functions that can each be fully described with $\mathcal{O}\left(|\tilde{S}_k|^2\right)$ coefficients, where $|\tilde{S}_k|$ denotes the number of orbitals in $\tilde{S}_k$. Solving the model with a variational quantum algorithm therefore brings no evident advantage on its own besides potential benchmark applications. An interesting application is however to use the model as an initial state for more advanced quantum algorithms; the advantage here is that the model can be solved classically while still being represented by quantum circuits.
The classical optimization results directly in optimized parameters of a low-depth quantum circuit that is ready to be used within an extended quantum algorithm. Furthermore, such classically simulable circuits naturally define a minimum benchmark that variational quantum algorithms need to beat in order to be considered for a potential advantage.
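To make the memory argument concrete, the following minimal sketch stores a separable pair state as a list of per-pair coefficient blocks; the orbital-set sizes are hypothetical, chosen to mimic the BeH$_2$(6,14) setting:
\begin{lstlisting}[language=Python]
import numpy as np

# hypothetical orbital-set sizes |S_k| for the Ne/2 = 3 pairs
sizes = [1, 3, 3]

# each pair function |psi_k> is stored as its coefficient block c^k_{mn},
# i.e. O(|S_k|^2) numbers; the separable pair state is just this list
pair_coeffs = [np.zeros((n, n), dtype=complex) for n in sizes]

compact = sum(c.size for c in pair_coeffs)  # 1 + 9 + 9 = 19 coefficients
generic = 2 ** (2 * sum(sizes))             # 2^14 amplitudes for a generic state
print(compact, generic)
\end{lstlisting}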
Note that the associated variational algorithm will minimize the expectation value of the parametrized product of pair-functions over the full electronic Hamiltonian
\begin{align}
E = \min_{\boldsymbol{\theta}}\expval{H}{{\ensuremath{\Psi_\text{SP}}}\left(\boldsymbol{\theta}\right)}.\label{eq:pair-vqe}
\end{align}
In other words, while the wavefunction has a product structure, the individual pair-functions are not independent but coupled through the Hamiltonian. The latter basically defines a mean-field model for pairs, similar to generalized valence bond models (GVB) with strong orthogonality condition~\cite{larsson2020minimal}.\\
The individual pair-functions can again be restricted to only contain paired electron configurations. As before, we can construct the circuits by arranging double excitations $U_{k,l}$ in the hard-core Boson representation in order to construct the individual $U_k$ pair function unitaries
\begin{align}
U_\text{SPA} = U_\text{HCB}^\text{JW} \lr{\prod_k^{\ensuremath{N_\text{e}}/2} \prod_{l\in \tilde{S}_{k}} U_{k,l}\lr{\boldsymbol{\theta}_k} } U_\text{RHF}^\text{HCB}.\label{eq:pno_upccd}
\end{align}
The $k$ index denotes the reference orbital associated with the current pair function $\ket{\psi_k}$ and $l$ iterates over the orbitals $\tilde{S}_k^l$ of the set $\tilde{S}_k$ assigned to this pair.
At this point, the memory requirements to represent the associated wavefunction are further reduced to $\mathcal{O}\left(|\tilde{S}_k|\right)$ for each pair.
One particular strategy to arrange the $U_{k,l}$ is a ladder arrangement of double excitations that requires only local connectivity of the associated qubits
\begin{align}
U_{k,l} = e^{-i\frac{\theta_k}{2}\tilde{G}_{(l,l+1)}}
\end{align}
where $\tilde{G}_{(l,l+1)}$ is a paired double excitation in the form of Eq.~\eqref{eq:paired_double_excitation_generator_hcb} that excites an electron pair from spatial orbital $l$ to spatial orbital $l+1$.
Since the unitaries $U_{k,l}$ that prepare the individual pair-functions act on an initial product state prepared by $U_\text{RHF}^\text{HCB}$, they can be efficiently compiled into controlled-not and controlled rotation gates as
\begin{align}
U_{k,l}\rightarrow \text{CR}_y(\theta,\tilde{S}_{k}^l,\tilde{S}_{k}^{l+1}) \cdot\text{CNOT}(\tilde{S}_k^{l+1},\tilde{S}_k^l)
\end{align}
where the control on the CR$_y$ can be dropped for the very first $U_{k,l}$. This circuit construction procedure is schematically depicted in Fig.~\ref{fig:circuit_compiling_cartoon}.
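A sketch of this compiled construction for one pair represented by three (hypothetical) hard-core Boson qubits, following Fig.~\ref{fig:circuit_compiling_cartoon} and assuming the rotation is applied before the corresponding controlled-not in circuit time order:
\begin{lstlisting}[language=Python]
import tequila as tq

a, b = tq.Variable("a"), tq.Variable("b")
U = tq.gates.X(target=0)                         # U_RHF^HCB: pair starts in orbital 0
U += tq.gates.Ry(angle=a, target=1)              # very first U_{k,l}: control dropped
U += tq.gates.CNOT(control=1, target=0)
U += tq.gates.Ry(angle=b, target=2, control=1)   # subsequent U_{k,l}: CRy + CNOT
U += tq.gates.CNOT(control=2, target=1)
\end{lstlisting}
The resulting state distributes the pair over the three orbitals with amplitudes $\cos\frac{a}{2}$, $\sin\frac{a}{2}\cos\frac{b}{2}$, and $\sin\frac{a}{2}\sin\frac{b}{2}$ (up to sign conventions).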
Alternatively, the doubles can be arranged canonically by exciting from the reference orbital to all other orbitals in $\tilde{S}_k$, as in a UpCCD ansatz for the individual pair.\\
Both approaches have the same expressibility but differ in their locality and in their behaviour under optimization: the more local ladder arrangement usually requires more iterations in gradient-based optimizations when starting from an initial point with all angles at zero. An intuitive explanation is that within the first $m$ iterations all gradients of each pair vanish except those of the $U_{k,n}$ with $n\leq m$, since the corresponding qubits are not yet occupied in the wavefunction; the occupation needs to be distributed gradually from the first orbital to the last. Within the canonical arrangement, all qubits can be occupied after the first iteration. A more natural starting point for the ladder arrangement would therefore be a finite value for all angles. In the specific embodiment of this work, the orbitals that represent the individual pairs are ordered by the classical surrogate model that determines them (see next section). From this property we can already assume that the initial values for the angles in $U_{k,l}$ should decrease in magnitude with growing $l$. The exact behaviour and a sophisticated initialization of the correct signs of the angles could be an interesting test case for currently emerging initialization protocols~\cite{cervera2020meta, sauvage2021flip}.
\subsection{Orbital Determination}
In the previous section we constructed optimized quantum circuits that follow the excitation pattern of a paired model where each electronic pair function $\ket{\psi_k}$ is represented by individual orbital sets $\tilde{S}_k$. In order to determine these orbitals we will resort to a modified approach of Ref.~\cite{kottmann2020reducing} that takes a classical, basis-set-independent surrogate model, MRA-PNO-MP2~\cite{kottmann2020direct}, to determine the orbitals. Here the orbitals $\phi\in\tilde{S}_k$ are directly determined by numerically solving an integral equation
\begin{align}
\phi\lr{\mathbf{r}} = -2 \int\; \operatorname{d}^3\mathbf{r}'\; G\lr{\mathbf{r}-\mathbf{r}'}\, V(\mathbf{r}')\, \phi\lr{\mathbf{r}'},
\end{align}
and the surrogate potential $V$ is the MRA-PNO-MP2 potential of Ref.~\cite{kottmann2020direct} for all orbitals in $\tilde{S}_k$ except the first, which is determined by the Hartree{\textendash}Fock potential. As in Ref.~\cite{kottmann2020reducing} the surrogate model is represented and optimized in an adaptive multiwavelet framework by a multiresolution analysis (MRA). However, the circuit construction schemes of this work are independent of the numerical representation of the orbitals. We employ the original MRA representation as it offers a computationally fast and reliable framework that allows the model to be formulated independently of static quantum chemical basis sets. This beneficial property is illustrated on some small chemical reactions in Fig.~\ref{fig:reactions}. We refer to the original article~\cite{kottmann2020direct} on the surrogate model for details about the implementation and to Ref.~\cite{kottmann2020reducing} for the integration into the framework of variational quantum algorithms.
The computational cost of the MRA-PNO-MP2 model scales formally as $\mathcal{O}\lr{\lr{\frac{\ensuremath{N_\text{e}}}{2}}^3R^2}$, where $R$ is the average size of the orbital sets $\tilde{S}_k$, but can in this case be reduced to $\mathcal{O}\lr{\lr{\frac{\ensuremath{N_\text{e}}}{2}}^2R^2}$ by neglecting all off-diagonal MP2 pairs, since these are not required to construct the orbital sets $\tilde{S}_k$. The square in the computational cost results from the exchange operator that is part of the surrogate potential $V$. It was shown before that this cost can be mitigated to near-linear behaviour by efficiently exploiting locality in the multiresolution representation~\cite{yanai2004exchange}; this technique is however not yet included in our current implementation. Currently, the computational bottleneck of the classical pre-computation is the evaluation of the molecular integrals that define the final qubit Hamiltonian and scale as $\mathcal{O}\left(\ensuremath{N_\text{o}}^4\right)$ with the total number of orbitals (here $\ensuremath{N_\text{o}} = \ensuremath{N_\text{e}}/2 + \sum_k |\tilde{S}_k|$). Note that this is not a particular property of this approach but intrinsic to second-quantized electronic structure Hamiltonians. Optimization of the SPA circuits requires only the integrals of the hard-core Boson Hamiltonian, which has only $\ensuremath{N_\text{o}}^2$ elements. \\
The strong orthogonality condition that enforces orthogonality between \textit{all} orbitals is not necessarily fulfilled in all models. The original MRA-PNO-MP2~\cite{kottmann2020direct} model, for example, does not fulfill it; orthogonality was therefore enforced in the construction of the associated qubit Hamiltonians in Ref.~\cite{kottmann2020reducing} through a Cholesky decomposition. For larger systems with degenerate electron pairs (as for example the orbitals corresponding to the six C-H bonds in C$_2$H$_6$ in Fig.~\ref{fig:lih_and_c2h6}), we found that symmetric orthogonalization via diagonalization of the overlap matrix resulted in better-behaved Hamiltonians. A formulation with pair-wise non-orthogonal orbitals might be interesting as well; this would increase the flexibility of the surrogate model but, on the other hand, also the number of terms in the final qubit Hamiltonian. We restrict ourselves here to the fully symmetrically orthogonalized formulation.\\
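A generic sketch of this symmetric (L\"owdin) orthogonalization step, independent of the MRA machinery and assuming the orbital overlap matrix $S$ has already been computed:
\begin{lstlisting}[language=Python]
import numpy as np

def symmetric_orthogonalization(C, S):
    # Rotate orbital coefficients C by S^(-1/2), where S is the overlap
    # matrix of the (possibly non-orthogonal) orbitals; the transformed
    # orbitals are then orthonormal.
    w, V = np.linalg.eigh(S)                      # S = V diag(w) V^dag, w > 0
    S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T
    return C @ S_inv_sqrt
\end{lstlisting}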
In principle, basis-set-dependent approaches could be employed for orbital determination as well. Modified variants of frozen-natural-orbital methods~\cite{deprince2013accurate, gonthier2020identifying} that avoid global recanonicalization of the predetermined pair-natural orbitals could, for example, be envisioned for this task.
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.3\textwidth]{plots/ch3oh.pdf}&
\includegraphics[width=0.3\textwidth]{plots/h2o2.pdf}&
\includegraphics[width=0.3\textwidth]{plots/c2h4.pdf}\\
CH$_3$OH + $\text{H}_{2}$ $\rightarrow$ CH$_4$ + H$_2$O &
H$_2$O$_2$ + $\text{H}_{2}$ $\rightarrow$ $2\cdot$ H$_2$O &
C$_2$H$_4$ + $\text{H}_{2}$ $\rightarrow$ C$_2$H$_6$ \\
\end{tabular}
\caption{Example Reactions: Performance of the SPA model in a system adapted orbital basis determined by MRA-PNOs, for chemical reaction energies compared to standard basis sets. Results are labeled as {method/basis/($N_\text{e}$,$N_\text{q}$)}. }
\label{fig:reactions}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.49\textwidth]{plots/lih_all.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/lih.pdf}}}&
\includegraphics[width=0.49\textwidth]{plots/c2h6_all.pdf}
\shiftleft{6.5cm}{\raisebox{3.75cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/c2h6.pdf}}}\\
\end{tabular}
\caption{Simple Model Systems: Single bond dissociation of LiH(2,12) and C$_2$H$_6$(2,12). SPA circuits are constructed as in Eq.~\eqref{eq:pno_upccd}, and for SPA+GS a single layer of generalized orbital rotations is added. The performance of SPA+GS is expected to be equivalent to orbital-optimized SPA with orbital optimization similar to Refs.~\cite{sokolov2020quantum,yalouz2021stateaveraged}. In addition we show C$_2$H$_6$(14,28) with all active electron pairs represented by two spatial orbitals (same representation as in Fig.~\ref{fig:reactions}). The associated non-parallelity errors (difference between largest and smallest absolute error) with fci/MRA(14,28) as reference are 3 millihartree for SPA/MRA(14,28), 18 millihartree for fci/MRA(2,12), and 25 millihartree for SPA/MRA(2,12).}
\label{fig:lih_and_c2h6}
\end{figure*}
\begin{figure*}
\centering
\begin{tabular}{ccc}
\includegraphics[width=0.33\textwidth]{plots/beh2_variances.pdf}
\shiftleft{4.75cm}{\raisebox{1cm}[1cm][1cm]{\includegraphics[width=0.1\textwidth]{pics/beh2}}} &
\includegraphics[height=0.25\textwidth]{plots/beh2_profile.pdf} \shiftleft{3.25cm}{\raisebox{1cm}[1cm][1cm]{\includegraphics[width=0.1\textwidth]{pics/beh2}}} & \includegraphics[width=0.33\textwidth]{plots/n2_variances.pdf}
\shiftleft{3cm}{\raisebox{1cm}[1cm][1cm]{\includegraphics[width=0.1\textwidth]{pics/n2}}}\\
\end{tabular}
\caption{Variances $\|\expval{H}_U^2 - \expval{H^2}_U\|$ of RHF (best classical mean-field solution) and SPA models for BeH$_2$(4,8) and N$_2$(6,12) as a quantifier for closeness to eigenstates. Shown at the center are the fidelities $\|\braket{\Psi_U}{\Psi_\text{exact}}\|^2$ with respect to all eigenstates of BeH$_2$(4,8) with both Be-H bond distances at 5.0~\AA{}, providing more details on the eigenstates that overlap with the trial wavefunction. }
\label{fig:results_variances}
\end{figure*}
\section{Applications}
In the following, we will demonstrate explicit use-cases of the SPA and illustrate how it can be used as initial state within a fault-tolerant framework.
In Fig.~\ref{fig:lih_and_c2h6} we compute single bond stretches of the LiH and C$_2$H$_6$ molecules, similar to Ref.~\cite{kottmann2020reducing}. As expected, the separable pair ansatz performs well for moderately stretched bond distances. The shortcoming at larger distances can be overcome by including orbital rotations in the form of a generalized singles layer in the circuit. This performance can be expected to be equivalent to an orbital-optimized form of SPA that could be implemented similarly to Refs.~\cite{sokolov2020quantum,yalouz2021stateaveraged}.
We also included a calculation with a larger active space, C$_2$H$_6$(14,28), to confirm that the performance of the separable pair ansatz stays consistent. Here we observe a consistent energy difference to the FCI energies (non-parallelity error of 3 millihartree) resulting from the missing correlation between the pairs. Although this does not represent a rigorous benchmark, we anticipate that the separable pair ansatz will be an appropriate model for single bond reactions and organic equilibrium structures, especially in an orbital-optimized extension similar to related classical methods like pCCD~\cite{henderson2014}, its orbital-optimized variants~\cite{scuseria1987optimization,bozkaya2011quadratically, kossoski2021excited}, and low-order matrix-product states or generalized valence bond models~\cite{larsson2020minimal}. \\
In Fig.~\ref{fig:results_beh2_bh3_n2} we computed double and triple bond dissociations of more challenging systems and compared classical methods with SPA and extensions in the $k$-UpCCGSD hierarchy. With BeH$_2$(4,8) and N$_2$(6,12) we included 8- and 12-qubit test systems which show variational breakdowns of standard methods (MP2, CCSD and CCSD(T)) from classical quantum chemistry. The quantum models, being variational by construction, show no such breakdowns. Furthermore, we assume that the oscillating behaviour of CCSD is due to convergence problems. The SPA behaves fairly consistently over all three molecules but, as for the single bond stretches, it shows large energy deviations for strongly stretched structures. In this case, including orbital rotations does improve, but not fully resolve, these differences.
The N$_2$(6,12) molecule remains the most challenging one; here, not even 2-UpCCGSD can reach FCI accuracy at all points. It was however shown before that more layers of the ansatz systematically converge towards the FCI energy~\cite{lee2018generalized}.\\
In Fig.~\ref{fig:results_variances} we performed a small numerical study where we assume access to a fault-tolerant architecture capable of performing a molecular quantum phase estimation~\cite{aspuru2005simulated}. In this proposed algorithm the physical measurement process of the full electronic Hamiltonian is implemented, and for simplicity we will assume a numerically exact implementation.
If a trial state $\Psi = \sum_k c_k \ket{E_k}$ is prepared, where $\ket{E_k}$ denote the eigenstates of the molecular Hamiltonian with energies $E_k$, the algorithm results in a measurement of $E_k$ (as well as the preparation of $\ket{E_k}$) with probability $|c_k|^2$. The success of the procedure therefore depends on the overlap $c_k$ of the trial function with the targeted state. In Fig.~\ref{fig:results_variances} we show absolute values of the variances $\text{Var}(U)=|\expval{H^2}_U - \expval{H}_U^2|$ with $U\in\left\{U_\text{RHF}, U_\text{SPA}\right\}$ as a quantifier for closeness to an eigenstate for BeH$_2$(4,8) and N$_2$(6,12). For both systems, SPA gives a significantly improved trial state and requires only depth-3 circuits with 6, respectively 9, controlled-not gates in total (see Tab.~\ref{tab:gate_counts}).
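This variance diagnostic is straightforward to evaluate in a statevector simulation; a minimal sketch, assuming the qubit Hamiltonian is available as a dense matrix and the trial state as a normalized vector (both hypothetical placeholders here):
\begin{lstlisting}[language=Python]
import numpy as np

def variance(H, psi):
    # Var(U) = |<H^2>_U - <H>_U^2| for the normalized trial state psi = U|0...0>;
    # it vanishes exactly when psi is an eigenstate of the Hermitian H
    Hpsi = H @ psi
    e = np.vdot(psi, Hpsi).real
    e2 = np.vdot(psi, H @ Hpsi).real
    return abs(e2 - e ** 2)
\end{lstlisting}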
Note that for stretched geometries of BeH$_2$, the variances of both initialization methods become almost identical as both methods are comparably close to electronic eigenstates. Hartree{\textendash}Fock initialization would however not result in a clear preference for the ground state, as the trial wavefunction has similar overlap with an excited state (shown in the central plot of Fig.~\ref{fig:results_variances}).
If the expected energy range of the ground state is known, this deficiency could for example be overcome with the \textit{Philter} algorithm~\cite{jensen2020quantum}. The associated costs are however significantly larger than simply switching to an improved trial state in the form of $U_\text{SPA}$.\\
In this work we use the basis-set-free approach of Ref.~\cite{kottmann2020reducing} in order to determine the spatial orbitals that form the qubit Hamiltonian in a system-adapted bottom-up approach. Our approach is therefore independent of static basis sets, and a first assessment of its numerical performance compared to those basis sets is of interest. In Ref.~\cite{kottmann2020reducing}, first comparisons were already performed where the basis-set-free approach allowed significant improvements in numerical accuracy. In Fig.~\ref{fig:reactions} we provide three further examples in the form of small chemical reactions, involving significantly larger systems than in Ref.~\cite{kottmann2020reducing}, which further confirm the improved accuracy of a directly determined MRA-PNO basis.\\
In all calculations we observed fast convergence of the optimizations of the SPA circuits, which usually took 4-10 BFGS iterations for the canonical arrangement of primitive excitations. In all cases, a single BFGS iteration required a single energy and gradient evaluation. It was furthermore possible to initialize all angles to 0.0 (\textit{i.e.} using Hartree{\textendash}Fock as a starting point) without encountering local minima or plateaus in the process. This indicates that the optimization of SPA circuits can be performed routinely and cheaply with gradient-based methods. UpCCGSD and 2-UpCCGSD behaved similarly at points where they resulted in energies similar to the SPA, but became more difficult to optimize for stretched geometries. Here we needed to run several (5-10) optimizations with different starting points in order to achieve good convergence. With this strategy the optimization also took substantially more BFGS iterations (up to 100), which is however still a comparably small number~\cite{anselmetti2021local, jones2019variational, cervera2020meta}.\\
\begin{figure*}
\centering
\begin{tabular}{cc}
\includegraphics[width=0.49\textwidth]{plots/beh2_classical.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.15\textwidth]{pics/beh2}}} &
\includegraphics[width=0.49\textwidth]{plots/beh2_upccgsd.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.15\textwidth]{pics/beh2}}}\\
\includegraphics[width=0.49\textwidth]{plots/bh3_classical.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/bh3}}} &
\includegraphics[width=0.49\textwidth]{plots/bh3_upccgsd.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/bh3}}} \\
\includegraphics[width=0.49\textwidth]{plots/n2_classical.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/n2}}} &
\includegraphics[width=0.49\textwidth]{plots/n2_upccgsd.pdf}
\shiftleft{6.5cm}{\raisebox{3.5cm}[1cm][1cm]{\includegraphics[width=0.12\textwidth]{pics/n2}}} \\
\end{tabular}
\caption{Challenging Model Systems: Comparison of standard classical methods (left) and pair-restricted quantum circuits (right) for the bonding electron pairs in BeH$_2$(4,8), BH$_3$(6,12) and N$_2$(6,12). See Tab.~\ref{tab:details_bh3} for the required resources. }
\label{fig:results_beh2_bh3_n2}
\end{figure*}
\section{Implementation}
All our circuit construction schemes are implemented in the \textsc{tequila}~\cite{tequila} library, which also contains a convenient interface to \textsc{madness}~\cite{harrison2016madness} where the orbitals are computed according to Refs.~\cite{harrison2004multiresolution, bischoff2014regularizing} (Hartree{\textendash}Fock) and Ref.~\cite{kottmann2020direct} (MRA-PNO-MP2) with the standard (non-regularized) nuclear and electronic potentials.
A specific point on the dissociation curve of the LiH(4,16) molecule of Fig.~\ref{fig:lih_and_c2h6} can for example be computed as
\lstinputlisting[language=Python]{code/example_code.py}
Here, $\texttt{n\_pno}=\sum_k \left(|\tilde{S}_k|-1\right)$ defines the total number of requested pair-natural orbitals from the surrogate model. This leads to a total number of orbitals $\ensuremath{N_\text{o}} = \ensuremath{N_\text{e}}/2 + \texttt{n\_pno} = \sum_k |\tilde{S}_k|$, where $\ensuremath{N_\text{e}}/2$ is the number of occupied Hartree{\textendash}Fock orbitals. The \texttt{make\_upccgsd\_ansatz} automatically compiles circuits according to Eq.~\eqref{eq:pno_upccd} and adds the remaining unitary excitations.
It accepts and interprets all keywords assembled from \texttt{name="\{HCB\}-\{SPA\}-UpCC\{G\}\{A\}\{S\}\{D\}"} where \texttt{HCB} will result in the circuit being compiled entirely in the HCB representation (meaning that $U_\text{HCB}^\text{JW}$ in Eq.~\eqref{eq:pno_upccd} will be removed), \texttt{SPA} will restrict all excitations to the surrogate's excitation pattern (\textit{i.e.} excitations are restricted within the $\tilde{S}_k$ orbital sets), \texttt{D} and \texttt{S} will include doubles and singles, \texttt{A} will result in approximated singles as qubit excitations, and \texttt{G} will result in generalized singles and doubles.
The additional keyword \texttt{direct\_compiling="ladder"} will result in the laddered arrangement of the SPA (see Figs.~\ref{fig:circuit_compiling_cartoon} and~\ref{fig:beh2_circuit}).
Invalid combinations, like the combination of \texttt{HCB} and \texttt{S}, will result in exceptions. Note that the \texttt{UpCC} part is not necessary in the name but can be included to enhance readability in the code. The standard method is \texttt{SPA}, which corresponds to Eq.~\eqref{eq:pno_upccd} and is equivalent to \texttt{SPA-UpCCD}. In this sense, \texttt{SPA-UpCCGD} would result in the SPA circuit complemented with all unaccounted generalized double excitations within the $\tilde{S}_k$ orbital sets.
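A few hedged examples of this keyword mechanism are sketched below; the molecule constructor arguments are illustrative placeholders:
\begin{lstlisting}[language=Python]
import tequila as tq

# hypothetical geometry; n_pno requests PNOs from the MRA surrogate
mol = tq.Molecule(geometry="N 0.0 0.0 0.0\nN 0.0 0.0 1.1", n_pno=3)

U_spa = mol.make_upccgsd_ansatz(name="SPA")         # Eq. (pno_upccd); same as "SPA-UpCCD"
U_gd  = mol.make_upccgsd_ansatz(name="SPA-UpCCGD")  # plus generalized doubles within the S_k sets
U_hcb = mol.make_upccgsd_ansatz(name="HCB-SPA")     # stay in the hard-core Boson representation
\end{lstlisting}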
The frozen-core approximation, i.e., no correlation of the $N_\text{c}$ ($5N_\text{c}$) lowest orbitals of molecules with $N_\text{c}$ second (third) row atoms, is enabled by default, as are active spaces that include only pairs represented by more than one (the Hartree{\textendash}Fock) orbital. The SPA energies for N$_2$(6,12) in Fig.~\ref{fig:results_beh2_bh3_n2} can for example be computed as
\lstinputlisting[language=Python]{code/example_code_molecule.py}
where the active space is automatically constructed. Here, we exploited the fact that PNO occupation numbers in the surrogate model are largest for the three orbitals that correspond to the triple bond, so that with \texttt{n\_pno=3} three PNOs from those pairs are selected automatically. More complicated active spaces can be specified over the \texttt{active\_orbitals} keyword, where information about all orbitals from the surrogate can be obtained over \texttt{print(mol)}. In the last code snippet we also illustrated how to optimize the separable pair ansatz directly in the hard-core Boson representation, which allows simulations with $\ensuremath{N_\text{q}}=\ensuremath{N_\text{o}}$ qubits.
In this work, we used \textsc{qulacs}~\cite{qulacs} as quantum simulation backend, \textsc{scipy}~\cite{scipy} as optimization backend, and the Jordan-Wigner implementation of \textsc{openfermion}~\cite{OpenFermion}. Gradient compilation for the BFGS optimization is performed by the automatically differentiable framework of Ref.~\cite{kottmann2020feasible} where gradients for the controlled-$R_y$ operations of the optimized circuits follow the same principle. All of those options correspond to the defaults which do not need to be explicitly specified and we refer to Ref.~\cite{tequila} and~\cite{kottmann2020feasible} for more details on how to use \textsc{tequila} \textit{e.g.} for the manual construction of circuits that can be combined with the \texttt{U} objects constructed above.\\
Energies from classical quantum chemistry methods can be computed through \textsc{tequila} interfaces to \textsc{psi4}~\cite{psi4} and \textsc{pyscf}~\cite{pyscf1} for example via \texttt{mol.compute\_energy("ccsd")}.
\section{Conclusion \& Outlook}
We formulated a physically motivated recipe to construct low-depth and local quantum circuits that are able to approximate large parts of the electronic correlation in electronic structure problems.
If applied to a closed-shell reference state, the resulting circuits are equivalent to the PNO-UpCCD circuits introduced in Ref.~\cite{kottmann2020reducing}. Note that they are not the same unitaries, but applied to a closed-shell reference they prepare the same wavefunction. In contrast to the PNO-UpCCD circuits, the SPA circuits are significantly reduced in their depths and CNOT counts, from several hundred down to one- or two-digit numbers.
Due to their naturally separated form, the associated wavefunctions can be represented with memory requirements linear in the system size, which allows the parameters of the low-depth circuit to be optimized in a classical pre-computation step. This bypasses challenges in variational quantum eigensolvers such as noisy finite-shot gradients and high measurement costs.
Due to the physically inspired construction we furthermore expect this model to be well behaved with gradient based optimization.
Within this work we observed fast convergence within a few BFGS iterations throughout all numerical computations, without getting trapped in local minima or plateaus.
All these properties qualify this model to be a potential minimum benchmark that quantum algorithms have to outperform in order to claim any advantage over classical methods.
In this regard, BeH$_2$(4,8) and N$_2$(6,12) could be well suited test systems. \\
Within quantum algorithms for electronic structure, we see the separable pair ansatz as initial part of larger approaches which we illustrated within two scenarios. The first employs the optimized SPA circuits as initial parts of a larger variational algorithm, here illustrated within the $k$-UpCCGSD hierarchy. The second uses the SPA as significantly improved initial states for projective algorithms.\\
In this work, we integrated our methodologies into the basis-set-free framework of Ref.~\cite{kottmann2020reducing}, which is not necessary for compiling the low-depth circuits but allows basis-set-independent energies to be computed with high numerical accuracy.
For weakly correlated reactions, this provides a good balance between the one- and many-body aspects of the electronic wavefunctions which we illustrated on small organic reactions.
Our current implementation does not exploit the properties of the SPA wavefunction completely but rather takes advantage of high-performance simulators like \textsc{qulacs}~\cite{qulacs} within the \textsc{tequila}~\cite{tequila} framework. It is however well suited for the systems treated in this work.
In the future, specialized high-performance implementations would be desirable and the combination with basis-set-free approaches could be interesting as a classical algorithm for weakly correlated molecular structures as they for example occur in a wide range of organic reactions.
Within this context, to further enhance the overall performance of the model, we expect improvements of the surrogate model that determines the orbital basis. Additionally, one can include orbital optimization, which allows optimized linear combinations within said orbital basis.
As the SPA wavefunction is itself efficiently representable classically, it could be envisioned to employ this model, preferably in its orbital-optimized form, directly as surrogate.
Furthermore, we explored first estimates on potential orbital-optimized algorithms for SPA by simulating the orbital rotations as unitary operation in the form of a generalized singles layer added to the circuit.
The techniques of Refs.~\cite{yalouz2021stateaveraged, sokolov2020quantum} and related classical methods~\cite{kossoski2021excited, scuseria1987optimization} could be employed as a first step towards an orbital-optimized implementation that does not require gate-based implementations of the corresponding unitaries.
\section{Acknowledgement}
We would like to thank Philipp Schleich for providing valuable suggestions and comments on the manuscript and Garnet Kin-Lic Chan for pointing out similarities with generalized valence bond models.
This work was supported by the U.S. Department of Energy under Award No. DE-SC0019374.
A.A.-G. acknowledges the generous support from Google, Inc. in the form of a Google Focused Award.
A.A.-G. also acknowledges support from the Canada Industrial Research Chairs Program and the Canada 150 Research Chairs Program. Computations were performed on the niagara supercomputer at the SciNet HPC Consortium.~\cite{niagara1, niagara2} SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto.
We thank the generous support of Anders G. Fr\o{}seth.
\section{Introduction}
Missing sensor measurements can arise in a variety of radar signal processing problems such as beamforming, direction of arrival estimation, interference cancellation, covariance estimation, and target detection. This demands the development of specific procedures in order to contain the loss with respect to the benchmark case where measurements from all the sensors are available. Some practical and interesting situations which are part of this challenging layout are described in the following.
\begin{itemize}
\item Distributed radar systems (DRSs)~\cite{4102876} which consist of a multitude of stand-alone coherent radar modules (radar satellites). These nodes are equipped with a transmitter, a receiver, an antenna element, and a small processor that conveys the raw data from the receiver via an Internet connection (cable, fiber or wireless) to the beamformer fusion center. Wirelessly networked aperstructure digital phased array radar (WNADPAR)~\cite{WNADPARThesis} represents a perfect example with reference to ship-based surveillance. Therein, the radar satellites are distributed all over the available areas of the ship surface to improve the radar detection range and form narrow beams. Missing observations may arise due to transmission-reception failures.
\item Switched array~\cite{1993DeLong, 8728188, 5494469, 5352421, 6955686} applications when the number of available channels is smaller than the number of sub-arrays (or array elements). The system can randomly choose the channel connections on a snapshot-to-snapshot (possibly a block-snapshot-to-block-snapshot) basis or use a pre-fixed switching scheme. From a mathematical point of view, this is tantamount to dynamically puncturing the entire array output vector to get a reduced-size observation vector.
\item Intermittent sensor failures which can occur due to saturation of some sub-arrays (notice that they can exhibit different antenna patterns and hence can experience different saturation conditions), impulsive noise bursts originated within the channels of failing sensors~\cite{5422700}, and malfunctions at the analog-to-digital converter (ADC) level~\cite[pg. 32]{tuzlukov2017signal}. Actually, failure modes, effects, and criticality analysis (FMECA) on array system level faults reports cases of intermittent faults, probably caused by material interaction (e.g., package moulding contamination, surface-state effects), stress (e.g., burnout, electro-migration), mechanical issues (e.g., solder joint failure, die fracture), and environmental impairments (e.g., temperature, humidity, and hydrogen effects)~\cite{WILEMAN201356}. Besides, random failure of array elements has also been discussed in~\cite{8765381, 5960622}.
\end{itemize}
Motivated by the above discussion, this paper deals with the problem of estimating a structured covariance matrix in the missing-data context.
First of all, the homogeneous Gaussian environment observation model with missing-data is introduced capitalizing on a-priori information about the covariance matrix structure and/or specific array configurations.
Then, the covariance matrix estimation process is formulated as an appropriately constrained optimization problem which is in general difficult to solve. Hence, an effective iterative solution technique, based on the expectation-maximization (EM) algorithm~\cite{10.2307/2984875, vantrees4, 10.2307/1165260, 10.2307/2240463}, is introduced together with some convergence properties and rate of convergence results.
Each iteration of the algorithm, for very common covariance structures of practical interest in radar signal processing applications, involves only closed form solutions for the unknowns.
It is worth pointing out that the EM algorithm has already been applied to some covariance estimation applications within the statistical literature~\cite{8648152, 9238471, LIU2019278} and in some radar signal processing contexts~\cite{1399075, WANG2005191}. In this respect, the main contribution of this paper lies in the inclusion of the constrained case and the analysis of the corresponding convergence properties. Besides, the theory is contextualized for a beamforming application and for the problem of detecting the number of sources.
The study has led to some efficient methods capable of operating in the presence of missing-data with satisfactory performance.
Even if the paper is focused on spatial processing, the methodology could be imported into a temporal processing context where some pulses of the received train may experience unwanted sporadic radio frequency interference (RFI). This means that some slow-time samples from some given range cells are missing, and the lack of this data has to be properly accounted for at the signal processing design stage.
The paper is organized as follows. Section II introduces the system model and defines some covariance matrix uncertainty sets of practical relevance. Section III formulates the structured covariance matrix estimation problem {in the presence of} missing-data and presents tailored iterative solution methods leveraging possible a-priori structural information. Besides, it addresses convergence issues about the proposed techniques.
In Section IV, the performance of the estimators is analyzed in the context of adaptive beamforming and detection of number of sources.
Finally, Section V draws some conclusions and highlights possible future research avenues.
\subsection{Notation}
Boldface is used for vectors $\boldsymbol{a}$ (lower case), and matrices $\boldsymbol{A}$ (upper case). The $(k, l)$-entry (or $l$-entry) of a generic matrix $\boldsymbol{A}$ (or vector $\boldsymbol{a}$) is indicated as $\boldsymbol{A}(k, l)$ (or $\boldsymbol{a}(l)$). $\boldsymbol{I}$ and ${\boldsymbol{0}}$ denote respectively the identity matrix and the matrix with zero entries (their size is determined from the context). The all-ones column vector of size $N$ is denoted by $\boldsymbol{1}_N$, whereas $\boldsymbol{e}_k$ denotes the $k$-th column vector of $\boldsymbol{I}$, whose size is determined from the context. Besides, $\textbf{diag}(\boldsymbol{x})$ indicates the diagonal matrix whose $i$-th diagonal element is $\boldsymbol{x}(i)$. The transpose, the conjugate, and the conjugate transpose operators are denoted by the symbols $(\cdot)^{\mathrm{T}}$, $(\cdot)^{*}$, and $(\cdot)^\dagger$, respectively. The determinant, the trace, and the rank of the matrix $\boldsymbol{A} \in \mathbb{C}^{N\times N}$ are indicated with $\det \left( \boldsymbol{A} \right)$, $\text{tr}\{\boldsymbol{A}\}$, and $\mbox{Rank}(\boldsymbol{A})$, respectively.
$\mathbb{R}^N$ and ${\mathbb{C}}^N$ are respectively the sets of $N$-dimensional column vectors of real and complex numbers. ${\mathbb{H}}^N$ and ${\mathbb{H}}^{N}_{++}$ represent the set of $N\times N$ Hermitian matrices and Hermitian positive definite matrices, respectively. Moreover, $\mathbb{R}^{++}$ denotes the set of real numbers greater than zero. The curled inequality symbol $\succeq$ (and its strict form $\succ$) is used to denote generalized matrix inequality: for any $\boldsymbol{A}\in{\mathbb{H}}^N$, $\boldsymbol{A}\succeq\boldsymbol{0}$ means that $\boldsymbol{A}$ is a positive semi-definite matrix ($\boldsymbol{A}\succ\boldsymbol{0}$ for positive definiteness). Besides, for any $\boldsymbol{A}\in{\mathbb{H}}^N$, $\lambda_\text{max}(\boldsymbol{A})$ and $\lambda_\text{min}(\boldsymbol{A})$ are the maximum and the minimum eigenvalue of $\boldsymbol{A}$, respectively. The letter $j$ represents the imaginary unit (i.e., $j=\sqrt{-1}$). For any complex number $x$, $|x|$ indicates the modulus of $x$. Moreover, for any $\boldsymbol{x} \in \mathbb{C}^N$, $\|\boldsymbol{x}\|$ denotes the Euclidean norm, whereas the Frobenius norm of a matrix $\boldsymbol{A}$ is indicated as $\|\boldsymbol{A}\|_F$. Let $f(\boldsymbol{x}) \in \mathbb{R}$ be a real-valued function; $\nabla_{\boldsymbol{x}} f(\boldsymbol{x})$ denotes the gradient of $f(\cdot)$ with respect to $\boldsymbol{x}$, with the partial derivatives arranged in a column vector. Furthermore, for any $x, y \in \mathbb{R}$, $\max(x, y)$ returns the maximum between the two argument values. Finally, $\mathbb{E}[\cdot]$ stands for statistical expectation.
\section{Problem Formulation}\label{section:probl_form}
Let us consider a radar system that collects spatial data via a narrow-band array composed of $N$ antennas {and operating in the presence of} noise and interference, with unknown spectral characteristics. Let us suppose that a set of spatial snapshots $\boldsymbol{r}_i, \; i = 1,\dots, K$, modeled as independent and identically distributed (IID) zero-mean circularly symmetric Gaussian random vectors {(homogeneous environment)} with unknown but structured covariance matrix, is available. Specifically
\begin{equation}
\boldsymbol{r}_i \sim \mathcal{CN}(\boldsymbol{0}, \boldsymbol{M}), \; \boldsymbol{M} \in \mathcal{C} \subseteq {\mathbb{H}}^{N}_{++}, \; i=1,\dots,K ,
\end{equation}
where $\mathcal{C}$ denotes the subset of covariance matrices that can generate the observables. Enforcing $\boldsymbol{M}$ to belong to $\mathcal{C}$ is tantamount to exploiting some problem structure stemming from a-priori knowledge about the operating environment and/or the array configuration.
Some practical examples of covariance matrix uncertainty sets are now illustrated.
\begin{enumerate}
\item Structured covariance matrix with a lower bound on the white disturbance power level~\cite{892662, 6558039}
\begin{equation}\label{set:cm_lb_white_dist_pwr_level}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{R}_e \\
\boldsymbol{R}_e \succeq \boldsymbol{0} \\
\sigma_n^2 \ge \sigma^2
\end{matrix}\right. ,
\end{equation}
where $\boldsymbol{R}_e$ accounts for colored interference and clutter, $\sigma_n^2 > 0$ is the power of the white disturbance term, and $\sigma^2 > 0$ is a known lower bound on the white disturbance power.
\item Structured covariance matrix with a condition number constraint~\cite{6166345}
\begin{equation}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{R}_e \\
\boldsymbol{R}_e \succeq \boldsymbol{0} \\
\sigma_n^2 \ge \sigma^2 \\
\frac{\lambda_\text{max}(\boldsymbol{M})}{\lambda_\text{min}(\boldsymbol{M})} \le K_{max}
\end{matrix}\right. ,
\end{equation}
where $\boldsymbol{R}_e$, $\sigma_n^2$, and $\sigma^2$ are defined as in~(\ref{set:cm_lb_white_dist_pwr_level}), whereas $K_{max}$ is an upper bound to the covariance condition number.
\item {Structured covariance matrix with a rank constraint and a lower bound on the white disturbance power level}~\cite{6809931}
\begin{equation}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{R}_e \\
\boldsymbol{R}_e \succeq \boldsymbol{0} \\
\mbox{Rank}(\boldsymbol{R}_e) \le r \\
\sigma_n^2 \ge \sigma^2
\end{matrix}\right. ,
\end{equation}
where $\boldsymbol{R}_e$, $\sigma_n^2$, {and $\sigma^2$} are defined as in~(\ref{set:cm_lb_white_dist_pwr_level}), whereas $r$ is the maximum rank of $\boldsymbol{R}_e$.
\item {Structured covariance matrix with a rank constraint}~\cite{vantrees4, 206185}
\begin{equation}\label{set:fixed_rank}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger \\
\boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger \succeq \boldsymbol{0} \\
\mbox{Rank}(\boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger) \le d \\
{\sigma_n^2 > 0}
\end{matrix}\right. ,
\end{equation}
where $\sigma_n^2$ is defined as in~(\ref{set:cm_lb_white_dist_pwr_level}), $\boldsymbol{V}$ is an $N \times d$ array manifold matrix (which can be modeled either as a known or as an unknown parameter), $\boldsymbol{S}_f$ denotes the $d \times d$ diagonal covariance matrix of the sources, and $d \le N$ is the number of sources, which also provides the upper bound to $\mbox{Rank}(\boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger)$.
\item Structured covariance matrix with a centro-Hermitian symmetry~\cite{1093391}
\begin{equation}\label{set:persymmetry}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \boldsymbol{J} {\boldsymbol{M}}^{*} \boldsymbol{J} \\
\boldsymbol{M} \succeq \boldsymbol{0} \\
\end{matrix}\right. ,
\end{equation}
with $\boldsymbol{J}$ the $N\times N$ permutation matrix given by
\begin{equation}\label{eq:J_exchange_matrix}
\boldsymbol{J} = \begin{bmatrix}
0 & 0 & \cdots & 0 & 1 \\
0 & 0 & \cdots & 1 & 0\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
1 & 0 & \cdots & 0 & 0
\end{bmatrix}
\end{equation}
\end{enumerate}
Besides, any combination of the above uncertainty sets (corresponding to their intersection) constitutes an additional interesting example.
The estimation problem addressed in the present paper demands a data model capable of handling missing data, i.e., the lack of some entries within specific spatial snapshots.
To this end, each observed snapshot is modeled as
\begin{equation}
\boldsymbol{y}_i = \boldsymbol{A}_i \boldsymbol{r}_i, \; i=1,\dots,K ,
\end{equation}
where $\boldsymbol{A}_i$ is the $p_i\times N$ selection matrix, constructed by extracting from $\boldsymbol{I}$ the $p_i \le N$ rows corresponding to the available observations at the $i$-th snapshot.
In the following, the vectors $\boldsymbol{r}_i$ and $\boldsymbol{y}_i$ will be referred to as \textit{complete} and \textit{observed} data, respectively.
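To make the selection-matrix formalism concrete, the following minimal NumPy sketch (all variable names are ours and purely illustrative) builds $\boldsymbol{A}_i$ from a boolean availability mask and extracts the observed snapshot accordingly.
\begin{verbatim}
import numpy as np

def selection_matrix(mask):
    # Rows of the N x N identity corresponding to available entries.
    return np.eye(mask.size)[mask, :]        # p_i x N, with p_i = mask.sum()

# Example: N = 4 antennas, entries 0 and 2 missing at this snapshot.
rng = np.random.default_rng(0)
r_i = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # complete data
mask = np.array([False, True, False, True])
A_i = selection_matrix(mask)
y_i = A_i @ r_i                              # observed data y_i = A_i r_i
\end{verbatim}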
\section{Covariance Matrix Estimation Procedure}
This section is devoted to the derivation of a covariance matrix estimation procedure in the presence of missing-data, accounting for model structure via suitable constraints.
As already pointed out, the problem is of primary importance for many applications in the field of radar signal processing~\cite{1100705, 31267, LINEBARGER199585, li2005robust, 937465, 1166614, 890324, 5514423} and, in most cases, a maximum likelihood (ML) estimator is demanded, not least because of its favorable asymptotic properties. For the missing-data case, the constrained ML estimate of the covariance matrix, given the observed-data, can be formulated as
\begin{equation} \label{eq:problem}
\hat{\boldsymbol{M}}(\boldsymbol{\theta}) = \operatorname*{arg\,max}\limits_{\boldsymbol{M}(\boldsymbol{\theta}) \in \mathcal{C}} \mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K) ,
\end{equation}
with
\begin{equation}
\begin{aligned}
\label{eq:observed_log_likelihood}
&\mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K) = - \sum_{i=1}^K p_i \ln(\pi) \\
&\quad - \sum_{i=1}^K \big[ \ln(\det(\boldsymbol{A}_i \boldsymbol{M}(\boldsymbol{\theta}) \boldsymbol{A}_i^\dagger)) + \text{tr}\{(\boldsymbol{A}_i \boldsymbol{M}(\boldsymbol{\theta})\boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{y}_i \boldsymbol{y}_i^\dagger\} \big]
\end{aligned}
\end{equation}
the observed-data log-likelihood, $\boldsymbol{Y} = \{\boldsymbol{y}_1, \dots, \boldsymbol{y}_K\}$ the set of observed-data, and $\boldsymbol{\theta} \in \mathbb{R}^{V}$ the vector of the unknown parameters defining the underlying structure of $\boldsymbol{M}$. This is tantamount to solving
\begin{equation} \label{eq:problem_theta}
\hat{\boldsymbol{\theta}}_{ML} = \operatorname*{arg\,max}\limits_{\boldsymbol{\theta}: \boldsymbol{M}(\boldsymbol{\theta}) \in \mathcal{C}} \mathcal{L}_y(\boldsymbol{\theta} | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K).
\end{equation}
Computing $\hat{\boldsymbol{\theta}}_{ML}$ (or equivalently $\hat{\boldsymbol{M}}(\boldsymbol{\theta})$) is, in general, a difficult problem for which an analytic closed-form solution may not be available~\cite{1456695}. Besides, an optimization procedure based on a multi-dimensional grid search (MDGS) in the unknown parameter space could be computationally prohibitive. This motivates the interest in iterative approximate procedures characterized by a more affordable computational cost than ML evaluation via MDGS.
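For reference, the observed-data log-likelihood can be evaluated directly; the sketch below (ours, with boolean masks in place of the matrices $\boldsymbol{A}_i$) uses the identity $\text{tr}\{(\boldsymbol{A}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{y}_i \boldsymbol{y}_i^\dagger\} = \boldsymbol{y}_i^\dagger (\boldsymbol{A}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{y}_i$ and the fact that $\boldsymbol{A}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger$ is a sub-matrix of $\boldsymbol{M}$.
\begin{verbatim}
import numpy as np

def observed_loglik(M, Y, masks):
    # L_y of eq. (eq:observed_log_likelihood); Y[i] is the i-th observed
    # snapshot, masks[i] its boolean availability pattern.
    L = 0.0
    for y, m in zip(Y, masks):
        Mi = M[np.ix_(m, m)]                 # A_i M A_i^H as a sub-matrix
        _, logdet = np.linalg.slogdet(Mi)
        L += (-m.sum() * np.log(np.pi) - logdet
              - np.real(y.conj() @ np.linalg.solve(Mi, y)))
    return L
\end{verbatim}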
\subsection{EM Algorithm}
EM is a widely adopted iterative technique to obtain approximate ML estimates of parameters
from incomplete-data\footnote{In situations where direct access to the \emph{complete} set of observations is not available, part of the data are missing or, more generally, the data undergo a many-to-one mapping before becoming available to the observer.}~\cite{10.2307/2984875, vantrees4, 10.2307/1165260}.
The algorithm is composed of two steps. In the former, referred to as the expectation (E) step, the conditional expectation of the complete-data likelihood, given the observed-data and the current estimate of the parameters, is evaluated (E-step score function). In the latter, referred to as the maximization (M) step, the E-step score function (evaluated at the current estimate of the parameters) is maximized with respect to the unknowns.
The EM starts with an initial guess of the parameters, i.e., $\boldsymbol{\theta}^{(0)}$, and iterates between E and M steps. The procedure can also be interpreted as a minorization-maximization optimization technique where the surrogate function stems from the Jensen inequality~\cite{doi:10.1198/0003130042836}.
With reference to the estimation problem in~\eqref{eq:problem_theta}, at the $h$-th iteration, the E-step consists in the evaluation of the score function
\begin{equation}\label{eq:Q}
Q\left(\boldsymbol{\theta}, \boldsymbol{\theta}^{(h-1)}\right) = \mathbb{E}[\mathcal{L}_r(\boldsymbol{\theta}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K, \hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h-1)})] ,
\end{equation}
where
\begin{equation}
\begin{aligned}\label{eq:complete_log_likelihood}
\mathcal{L}_r(\boldsymbol{\theta}) =& -K[N \ln(\pi) + \ln(\det(\boldsymbol{M}(\boldsymbol{\theta}))) \\
& + \text{tr}\{\boldsymbol{M}(\boldsymbol{\theta})^{-1} \boldsymbol{S}\}]
\end{aligned}
\end{equation}
is the complete-data log-likelihood,
\begin{equation}\label{eq:sample_covariance}
\boldsymbol{S} = \frac{1}{K} \sum_{i=1}^K \boldsymbol{r}_i \boldsymbol{r}_i^\dagger
\end{equation}
is the sample covariance matrix of the complete-data, and $\hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h-1)})$ is the estimate of the covariance matrix at the $(h-1)$-th iteration.
Computing the conditional expectation involved in~\eqref{eq:Q} yields
\begin{equation}\label{eq_E_step}
\begin{aligned}
Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{(h-1)}) =&-K[N \ln(\pi) + \ln(\det(\boldsymbol{M}(\boldsymbol{\theta}))) \\
& + \text{tr}\{\boldsymbol{M}(\boldsymbol{\theta})^{-1} \boldsymbol{\Sigma}^{(h-1)}\}],
\end{aligned}
\end{equation}
where
\begin{equation}\label{eq:sample_covariance_EM_obs_data}
\boldsymbol{\Sigma}^{(h-1)} = \frac{1}{K} \sum_{i=1}^K \boldsymbol{C}_i^{(h-1)}
\end{equation}
{is} the sample mean of the conditional correlation matrices
\begin{equation}\label{eq:cond_expectation_incomplete_data}
\boldsymbol{C}_i^{(h-1)} = \mathbb{E}[\boldsymbol{r}_i \boldsymbol{r}_i^\dagger | \boldsymbol{y}_i, \boldsymbol{A}_i, \hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h-1)})], \; i=1,\dots, K.
\end{equation}
A closed-form expression for
\begin{equation}
\mathbb{E}[\boldsymbol{r}_i \boldsymbol{r}_i^\dagger |\boldsymbol{y}_i, \boldsymbol{A}_i, \boldsymbol{M}] = \boldsymbol{C}_i
\end{equation}
is given by (see Appendix A for the detailed derivation)
\begin{equation}\label{eq:C_i}
\begin{aligned}
\boldsymbol{C}_i = & (\boldsymbol{A}_i^\dagger \boldsymbol{y}_i + \bar{\boldsymbol{A}}_i^\dagger \boldsymbol{\mu}_i) (\boldsymbol{A}_i^\dagger \boldsymbol{y}_i + \bar{\boldsymbol{A}}_i^\dagger \boldsymbol{\mu}_i)^\dagger + \bar{\boldsymbol{A}}_i^\dagger \boldsymbol{G}_i \bar{\boldsymbol{A}}_i
\end{aligned}
\end{equation}
with $\bar{\boldsymbol{A}}_i$ the $(N-p_i)\times N$ selection matrix complementary to $\boldsymbol{A}_i$ (obtained by removing from $\boldsymbol{I}$ the $p_i$ rows corresponding to $\boldsymbol{A}_i$),
\begin{equation}
\boldsymbol{\mu}_i = \bar{\boldsymbol{A}}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger (\boldsymbol{A}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{y}_i
\end{equation}
and
\begin{equation}
\begin{aligned}
& \boldsymbol{G}_i = \bar{\boldsymbol{A}}_i \boldsymbol{M} \bar{\boldsymbol{A}}_i^\dagger - \bar{\boldsymbol{A}}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger (\boldsymbol{A}_i \boldsymbol{M} \boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{A}_i \boldsymbol{M} \bar{\boldsymbol{A}}_i^\dagger .
\end{aligned}
\end{equation}
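In words, $\boldsymbol{\mu}_i$ and $\boldsymbol{G}_i$ are the conditional mean and covariance of the missing entries given the observed ones. A minimal NumPy sketch of $\boldsymbol{C}_i$ (ours, indexing sub-blocks of $\boldsymbol{M}$ rather than forming $\boldsymbol{A}_i$ and $\bar{\boldsymbol{A}}_i$ explicitly) reads as follows.
\begin{verbatim}
import numpy as np

def cond_correlation(M, y, mask):
    # C_i = E[r r^H | y] for a zero-mean Gaussian snapshot with covariance M.
    o = np.flatnonzero(mask)                 # observed indices
    h = np.flatnonzero(~mask)                # hidden (missing) indices
    Moo, Mho, Mhh = M[np.ix_(o, o)], M[np.ix_(h, o)], M[np.ix_(h, h)]
    mu = Mho @ np.linalg.solve(Moo, y)       # conditional mean (mu_i)
    G = Mhh - Mho @ np.linalg.solve(Moo, Mho.conj().T)  # cond. cov. (G_i)
    r_hat = np.zeros(M.shape[0], dtype=complex)
    r_hat[o], r_hat[h] = y, mu               # A_i^H y_i + Abar_i^H mu_i
    C = np.outer(r_hat, r_hat.conj())
    C[np.ix_(h, h)] += G                     # + Abar_i^H G_i Abar_i
    return C
\end{verbatim}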
After an E-step, an M-step is performed, corresponding to the maximization of the score function~\eqref{eq_E_step}, namely the estimate of the parameters is updated according to
\begin{equation}\label{eq:Mstep}
\boldsymbol{\theta}^{(h)} = \operatorname*{arg max}\limits_{\boldsymbol{\theta}:\boldsymbol{M}(\boldsymbol{\theta}) \in \mathcal{C}} Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{(h-1)}) .
\end{equation}
The following proposition {outlines} the main features of the sequence of estimates.
\begin{prop}\label{proposition:1}
Provided that $\boldsymbol{M}(\boldsymbol{\theta}^{(0)})\succ \bf 0$, $K\geq N$, and $\mathcal{C} = \mathcal{B} \cap {\mathbb{H}}_{++}^N$, with $\mathcal{B}$ a closed subset of ${\mathbb{H}}^N$, then
\begin{itemize}
\item $\boldsymbol{M}(\boldsymbol{\theta}^{(h)})\succ\boldsymbol{0}$, for all $h \ge 0$, and $\mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}^{(h)}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K)$ is a monotonically increasing sequence;
\item if $\mathcal{B} \subseteq {\mathbb{H}}_{++}^N$ is a closed set of positive definite matrices, then $\boldsymbol{M}(\boldsymbol{\theta}^{(h)}), \; h\ge0, $ is a bounded sequence and $\mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}^{(h)}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K)$, $h\ge0$, converges to a finite value. Besides, supposing $\boldsymbol{M}(\boldsymbol{\theta})$ a norm-coercive differentiable mapping, any limit point $\boldsymbol{\theta}^*$ of $\boldsymbol{\theta}^{(h)}$ is a B-stationary point~\cite{10.1287/moor.2016.0795, 10.2307/25151818, 7736116, 8239836} of Problem~(\ref{eq:problem_theta}).
\end{itemize}
\end{prop}
\begin{IEEEproof}
See Appendix B.
\end{IEEEproof}
A summary of the EM procedure is reported in~Algorithm~\ref{alg:EM}, where{, leveraging the results of Proposition 1,} the exit condition of the procedure is set as $|{P}^{(h)}- {P}^{(h-1)}| \le \varepsilon_1$ or $\|{\boldsymbol{\theta}}^{(h)}- {\boldsymbol{\theta}}^{(h-1)}\| \le \varepsilon_2$, where $\varepsilon_1, \varepsilon_2 > 0$ and
\begin{equation}\label{eq:P_Q}
P^{(h)} = \mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}^{(h)}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K).
\end{equation}
\begin{algorithm}
\caption{EM {Covariance Matrix} Estimation Procedure}
\label{alg:EM}
\begin{algorithmic}
\REQUIRE $N$, $K$, $\boldsymbol{Y}$, $\boldsymbol{A}_1, \dots, \boldsymbol{A}_K$, $\mathcal{C}$, $\boldsymbol{\theta}^{(0)}$, $\varepsilon_1$, $\varepsilon_2$.\\
\ENSURE {EM} solution $\hat{\boldsymbol{\theta}}$ to Problem~(\ref{eq:problem_theta}).
\STATE{\textbf{Initialization}
\STATE $h = 0$;
\STATE ${P}^{(0)} = \mathcal{L}_y(\boldsymbol{M}(\boldsymbol{\theta}^{(0)}) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K)$};
\STATE \textbf{repeat}
\begin{enumerate}[label={\theenumi:}]
\STATE $h = h + 1 $;
\STATE E-Step: Compute $\boldsymbol{\Sigma}^{(h-1)}$ {given by}~\eqref{eq:sample_covariance_EM_obs_data};
\STATE M-Step: Find $\boldsymbol{\theta}^{(h)}$ using \eqref{eq:Mstep};
\STATE Compute ${P}^{(h)}$ using~\eqref{eq:P_Q};
\end{enumerate}
\STATE \textbf{until} {$|{P}^{(h)}- {P}^{(h-1)}| \le \varepsilon_1$ or $\|{\boldsymbol{\theta}}^{(h)}- {\boldsymbol{\theta}}^{(h-1)}\| \le \varepsilon_2$};
\STATE Output $\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^{(h)}$.
\end{algorithmic}
\end{algorithm}
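A compact driver for Algorithm~\ref{alg:EM}, reusing the \texttt{observed\_loglik} and \texttt{cond\_correlation} sketches given earlier, could look as follows; for simplicity, the parameter-distance test on $\boldsymbol{\theta}$ is replaced here by a distance between successive covariance iterates (an assumption of ours, not part of the algorithm as stated).
\begin{verbatim}
import numpy as np

def em_covariance(Y, masks, m_step, M0, eps1=1e-7, eps2=1e-7, max_iter=500):
    # m_step maps Sigma^(h-1) onto the chosen uncertainty set C
    # (identity mapping in the unconstrained case).
    M, P_old = M0, observed_loglik(M0, Y, masks)
    for _ in range(max_iter):
        Sigma = sum(cond_correlation(M, y, m)      # E-step
                    for y, m in zip(Y, masks)) / len(Y)
        M_new = m_step(Sigma)                      # M-step
        P_new = observed_loglik(M_new, Y, masks)
        if abs(P_new - P_old) <= eps1 or np.linalg.norm(M_new - M) <= eps2:
            return M_new
        M, P_old = M_new, P_new
    return M
\end{verbatim}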
{\bf Remark 1.} Before proceeding further, a useful digression on the convergence rate of Algorithm~\ref{alg:EM} is now in order. As shown in~\cite{10.2307/2984875}, assuming that ${\boldsymbol{\theta}}^{(h)}$ converges to the ML estimate $\hat{\boldsymbol{\theta}}_{ML}$, the rate of convergence is governed by the spectral radius $\rho(\boldsymbol{R}^{EM})$ of the rate matrix
\begin{equation}
\boldsymbol{R}^{EM} = \boldsymbol{I} - \boldsymbol{F}_{obs}^{\frac{1}{2}} \boldsymbol{F}_{EM}^{-1} \boldsymbol{F}_{obs}^{\frac{1}{2}} ,
\end{equation}
{where
\begin{equation}\label{eq:fobs}
\boldsymbol{F}_{obs} = \left. - \nabla_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}}^{\mathrm{T}} \mathcal{L}_y(\boldsymbol{\theta}) \right|_{ \boldsymbol{\theta} = {\hat{\boldsymbol{\theta}}_{ML}}}
\end{equation}
is the observed information matrix and
\begin{equation}\label{eq:fem}
\boldsymbol{F}_{EM} = \left. \mathbb{E}\left[ - \nabla_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}}^{\mathrm{T}} \mathcal{L}_r(\boldsymbol{\theta}) | \boldsymbol{Y}, \boldsymbol{\theta} \right] \right|_{\boldsymbol{\theta} = {\hat{\boldsymbol{\theta}}_{ML}}}
\end{equation}
is the expected complete information matrix.}
See Appendix D for the computation of \eqref{eq:fobs} and \eqref{eq:fem} with reference to \eqref{eq:observed_log_likelihood} and \eqref{eq:complete_log_likelihood}.
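Given numerical values of the two information matrices, the spectral radius driving the convergence rate can be computed directly; a short sketch (ours) follows.
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def em_convergence_rate(F_obs, F_em):
    # Spectral radius of R_EM = I - F_obs^(1/2) F_EM^(-1) F_obs^(1/2).
    S = sqrtm(F_obs)
    R = np.eye(F_obs.shape[0]) - S @ np.linalg.solve(F_em, S)
    return np.max(np.abs(np.linalg.eigvals(R)))
\end{verbatim}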
Just to provide a study example, let us consider $N=10$ and a covariance matrix belonging to~\eqref{set:fixed_rank}. In particular, the true parameters are $\boldsymbol{V} = \boldsymbol{e}_1$, $\boldsymbol{S}_f = 10$, and $\sigma_n^2 = 1$. As to the missing-data pattern, the selection matrices obtained by skipping the zero rows of
\begin{enumerate}
\item $\textbf{diag}(\boldsymbol{1}_N - \boldsymbol{e}_1 - \boldsymbol{e}_3)$,
\item $\textbf{diag}(\boldsymbol{1}_N - \boldsymbol{e}_2 - \boldsymbol{e}_5)$,
\item $\textbf{diag}(\boldsymbol{1}_N - \boldsymbol{e}_4 - \boldsymbol{e}_7)$,
\item $\textbf{diag}(\boldsymbol{1}_N - \boldsymbol{e}_6 - \boldsymbol{e}_8)$,
\item $\textbf{diag}(\boldsymbol{1}_N - \boldsymbol{e}_9 - \boldsymbol{e}_{10})$,
\end{enumerate}
{are cyclically used (according to the reported order) to select the observations at the different snapshots.}
Figs.~\ref{fig:convergence_rate} (a) and (b) display, respectively, the average convergence rate and the average number of iterations required by Algorithm~\ref{alg:EM} (with $\varepsilon_1 = \varepsilon_2 = 10^{-7}$ and initialized using the sample covariance matrix of $2N$ IID zero-mean circularly symmetric Gaussian random vectors) to achieve convergence, versus the number of snapshots. The results are obtained via standard Monte Carlo counting techniques over $100$ independent trials.
Inspection of the figures outlines that a lower value of $\rho(\boldsymbol{R}^{EM})$ is {associated with} a faster convergence of Algorithm~\ref{alg:EM}.
In Fig.~\ref{fig:convergence_rate} (c){, for a given trial, the distance between the ML estimate and the EM solution at the $h$-th M-step, i.e.,} $\|\boldsymbol{\theta}^{(h)} - \hat{\boldsymbol{\theta}}_{ML}\|$, is {plotted versus the number of iterations}, assuming $K=40,60,80,100$.
This analysis corroborates that, as the number of available snapshots increases, fewer iterations are required for Algorithm~\ref{alg:EM} to converge. Furthermore, the larger the number of snapshots, the closer the final estimate of Algorithm~\ref{alg:EM} is to $\hat{\boldsymbol{\theta}}_{ML}$.
\begin{figure}[htbp!]
\centering
\label{fig:convergence_rate_3} \includegraphics[width=0.95\linewidth]{figs/convergence_analysis.eps}
\caption{Convergence rate analysis for the case study discussed in the main text, with $N=10$. Fig. (a) displays the average rate of convergence versus the number of snapshots, while Fig. (b) displays the average number of iterations versus the number of snapshots. The {norm difference} $\|\boldsymbol{\theta}^{(h)} - \hat{\boldsymbol{\theta}}_{ML}\|$ in dB {versus the number of iterations for Algorithm~\ref{alg:EM} is reported in Fig. (c)}, assuming $K=40,60,80,100$.}
\label{fig:convergence_rate}
\end{figure}
{\bf Remark 2.}
It is worth pointing out that the main advantage connected with the use of an EM algorithm occurs when the optimization involved in~\eqref{eq:Mstep} is more tractable than the direct maximization of the observed-data likelihood~\eqref{eq:problem_theta}. {It is clear that the crucial point to devise an EM-based constrained covariance estimation procedure is the capability to obtain an optimal solution to~\eqref{eq:Mstep} with {an} {affordable} computational effort}.
Besides, it is important to remark that different system constraints induce distinct feasible sets that, in turn, generally result in different solutions $\boldsymbol{\theta}^{(h)}$.
In particular, for the special case of unconstrained estimation~\cite{vantrees4}
\begin{equation}\label{eq:sol_M_step_uncontrained}
\hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h)}) = \boldsymbol{\Sigma}^{(h-1)}
\end{equation}
is the maximizer of $Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{(h-1)})$ and therefore determines the updated estimate $\boldsymbol{\theta}^{(h)}$.
In the following subsections, two well-known radar applications are analyzed in the missing-data scenario: adaptive beamforming and detection of the sources number. In particular, each application is presented and the underlying structured covariance matrix estimation problem is discussed. Then, EM-based solution methods, leveraging problem structure at different extents, are devised.
\subsection{Adaptive Beamforming}
Let us consider the minimum variance distortionless response (MVDR) (also known as the Capon) beamformer~\cite{vantrees4}
\begin{equation}\label{eq:Capon}
\boldsymbol{w} = \frac{{\boldsymbol{M}}^{-1} \boldsymbol{v}(\theta_0)}{\boldsymbol{v}(\theta_0)^\dagger {\boldsymbol{M}}^{-1} \boldsymbol{v}(\theta_0) },
\end{equation}
where $\boldsymbol{v}(\theta_0)$ is the steering vector in the direction {of interest} $\theta_0$.
In a practical scenario the covariance matrix must be estimated from the incoming data leading to an adaptive weight vector.
It is clear that obtaining an accurate estimate of the unknown interference covariance matrix is a crucial task affecting the performance of the resulting adaptive beamformer.
In a typical case where a set of $K\ge N$ secondary data $\{\boldsymbol{r}_i\}, \;i=1,\dots,K$, is available, the unstructured ML estimate of $\boldsymbol{M}$ is given by the sample covariance matrix $\boldsymbol{S}$ (often with a diagonal loading), defined as in (\ref{eq:sample_covariance})~\cite{vantrees4}. Therefore, $\boldsymbol{S}$ (or possibly a diagonally loaded version) is employed in place of $\boldsymbol{M}$ in~\eqref{eq:Capon}, obtaining the MVDR adaptive beamformer.
Let us now focus on a missing-data context where the problem of computing the ML estimate of the covariance matrix from the observed-data is described in~\eqref{eq:problem} and a viable estimation procedure is reported in Algorithm~\ref{alg:EM}.
As a consequence, following the same guideline as in the definition of the MVDR adaptive beamformer\footnote{The analysis developed in the following can be also naturally extended to other kinds of beamformers.}, it is possible to gain adaptivity under the missing-data scenario using
\begin{equation}\label{eq:MVDR_EM}
\boldsymbol{w}_{EM} = \frac{\hat{\boldsymbol{M}}_{EM}^{-1} \boldsymbol{v}(\theta_0)}{\boldsymbol{v}(\theta_0)^\dagger \hat{\boldsymbol{M}}_{EM}^{-1} \boldsymbol{v}(\theta_0) },
\end{equation}
where $\hat{\boldsymbol{M}}_{EM}$ denotes the estimate of the covariance matrix obtained via the EM procedure described in Algorithm~\ref{alg:EM}.
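The weight computation itself reduces to a single linear solve; a minimal sketch (ours) is reported below.
\begin{verbatim}
import numpy as np

def mvdr_weights(M_hat, v0):
    # Capon/MVDR weights: w = M^(-1) v0 / (v0^H M^(-1) v0).
    u = np.linalg.solve(M_hat, v0)
    return u / (v0.conj() @ u)
\end{verbatim}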
As highlighted in the previous subsection, tailored solutions to the M-step of Algorithm~\ref{alg:EM} can be devised under the assumption of $\boldsymbol{M}$ belonging to a specific covariance matrix uncertainty set. In this respect, some case studies are discussed in the following.
\subsubsection{Unconstrained Estimation}
The special case of unconstrained estimation has been described in the previous subsection and a solution to the resulting M-step is given by~\eqref{eq:sol_M_step_uncontrained}.
\subsubsection{Constraint on the lower bound of the white noise power level}
The Fast ML (FML) procedure~\cite{892662, 6558039} provides the M-step
solution when $\boldsymbol{M}$ belongs to the uncertainty set~(\ref{set:cm_lb_white_dist_pwr_level}).
Specifically, denoting by $\boldsymbol{U} \boldsymbol{\Lambda}_{\boldsymbol{\Sigma}} \boldsymbol{U}^\dagger$ the eigenvalue decomposition (EVD) of $\boldsymbol{\Sigma}^{(h-1)}$ and by $\tilde{\lambda}_{v}, \;v=1,\dots, N$, its eigenvalues, the M-step update under the uncertainty set~(\ref{set:cm_lb_white_dist_pwr_level}) is given by
\begin{equation}\label{eq:Mstep_FML}
\hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h)}) = \boldsymbol{U} \boldsymbol{\Lambda}_{FML} \boldsymbol{U}^\dagger ,
\end{equation}
with
\begin{equation}
\boldsymbol{\Lambda}_{FML} = \textbf{diag}(\lambda_{1, FML}, \dots, \lambda_{N, FML})
\end{equation}
and $\lambda_{v, FML} = \max(\tilde{\lambda}_{v}, \: \sigma^2), \;v=1,\dots, N$.
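Operationally, the FML update is an eigenvalue-flooring projection; a NumPy sketch (ours) follows.
\begin{verbatim}
import numpy as np

def fml_mstep(Sigma, sigma2):
    # Floor the eigenvalues of the Hermitian matrix Sigma at sigma2.
    lam, U = np.linalg.eigh(Sigma)
    return (U * np.maximum(lam, sigma2)) @ U.conj().T
\end{verbatim}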
\subsubsection{Centro-Hermitianity constraint}
In many scenarios of practical interest (standard rectangular, hexagonal, uniform circular, or cylindrical arrays), the covariance matrix exhibits a centro-Hermitian structure~\cite{vantrees4}, which is equivalent to assuming $\boldsymbol{M}$ belongs to~(\ref{set:persymmetry}).
Capitalizing on the problem structure, the M-step solution can be obtained using the forward-backward (FB) averaged sample covariance matrix procedure~\cite{vantrees4}, resulting in
\begin{equation}\label{eq:Mstep_Persymmetry}
\hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h)}) = \boldsymbol{\Sigma}_{FB} ,
\end{equation}
where
\begin{equation}\label{eq:sigma_FB}
\boldsymbol{\Sigma}_{FB} = \frac{1}{2} (\boldsymbol{\Sigma}^{(h-1)} + \boldsymbol{J} {\boldsymbol{\Sigma}^{(h-1)}}^{*} \boldsymbol{J}) .
\end{equation}
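The FB averaging is straightforward to implement; the sketch below (ours) also shows how the exchange matrix $\boldsymbol{J}$ of (\ref{eq:J_exchange_matrix}) can be generated.
\begin{verbatim}
import numpy as np

def fb_average(Sigma):
    # Sigma_FB = (Sigma + J Sigma^* J) / 2, with J the exchange matrix.
    J = np.eye(Sigma.shape[0])[::-1]
    return 0.5 * (Sigma + J @ Sigma.conj() @ J)
\end{verbatim}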
\subsubsection{Lower bound constraint on the white noise power level plus Centro-Hermitianity}
This is tantamount to considering the uncertainty set characterizing the centro-Hermitian covariance matrices with a lower bound on the white disturbance power level
\begin{equation}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{R}_e \\
\boldsymbol{M} = \boldsymbol{J} {\boldsymbol{M}}^* \boldsymbol{J} \\
\boldsymbol{R}_e \succeq \boldsymbol{0} \\
\sigma_n^2 \ge \sigma^2
\end{matrix}\right. ,
\label{set2:cm_lb_white_dist_pwr_level_and_persymm}
\end{equation}
where $\boldsymbol{R}_e$, $\sigma_n^2$ and $\sigma^2$ are defined as in~(\ref{set:cm_lb_white_dist_pwr_level}), whereas $\boldsymbol{J}$ is given by~(\ref{eq:J_exchange_matrix}).
{In this situation}, denoting by $\boldsymbol{U}_{FB} \: \boldsymbol{\Lambda}_{FB} \: \boldsymbol{U}_{FB}^\dagger$ the EVD of $\boldsymbol{\Sigma}_{FB}$ {defined in~\eqref{eq:sigma_FB}}, with $\boldsymbol{\Lambda}_{FB} = \textbf{diag}(\tilde{\lambda}_{1, FB}, \dots, \tilde{\lambda}_{N, FB})$, it follows that the M-step update is now given by
\begin{equation}\label{eq:M_step_FML_persymmetry}
\hat{\boldsymbol{M}}(\boldsymbol{\theta}^{(h)}) = \boldsymbol{U}_{FB} \: \boldsymbol{\Lambda}_{FML-FB} \: \boldsymbol{U}_{FB}^\dagger ,
\end{equation}
where
\begin{equation}
\boldsymbol{\Lambda}_{FML-FB} = \textbf{diag}(\lambda_{1, FML-FB}, \dots, \lambda_{N, FML-FB}),
\end{equation}
with $\lambda_{v, FML-FB} = \max(\tilde{\lambda}_{v, FB},\: \sigma^2), \;v=1,\dots, N$.
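In implementation terms, this composite M-step is simply the composition of the two previous sketches: FB averaging followed by the FML eigenvalue floor.
\begin{verbatim}
def fml_fb_mstep(Sigma, sigma2):
    # FB average first, then floor the eigenvalues at sigma2.
    return fml_mstep(fb_average(Sigma), sigma2)
\end{verbatim}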
The overall procedure used to find the proposed adaptive Capon beamformer in the context of missing-data is summarized in Algorithm~\ref{alg:adaptive_beamforming}.
\begin{algorithm}
\caption{Adaptive beamforming in the context of missing-data}
\label{alg:adaptive_beamforming}
\textbf{Input:} $N$, $K$, $\boldsymbol{Y}$, $\boldsymbol{A}_1, \dots, \boldsymbol{A}_K$, $\mathcal{C}$, $\boldsymbol{\theta}^{(0)}$, $\varepsilon_1$, $\varepsilon_2$.\\
\textbf{Output:} EM-based adaptive beamformer $\boldsymbol{w}_{EM}$.
\begin{enumerate}[label={\theenumi:}]
\item find $\hat{\boldsymbol{M}}_{EM}$ via~Algorithm~\ref{alg:EM} using the appropriate {bespoke} solution~\eqref{eq:sol_M_step_uncontrained}, \eqref{eq:Mstep_FML}, \eqref{eq:Mstep_Persymmetry}, \eqref{eq:M_step_FML_persymmetry} to the M-step;
\item compute $\boldsymbol{w}_{EM}$ using \eqref{eq:MVDR_EM};
\item output $\boldsymbol{w}_{EM}$ .
\end{enumerate}
\end{algorithm}
\subsection{Detection of Number of Sources}
Let us consider $d$ uncorrelated narrow-band sources impinging on the array from distinct directions $\{\theta_s\}, s=1, \dots, d < N$.
After amplification, down-conversion, and digital sampling, the $i$-th received complete spatial snapshot $\boldsymbol{r}_i$ is given by
\begin{equation}
\boldsymbol{r}_i = \boldsymbol{V} \boldsymbol{s}_i + \boldsymbol{n}_i, \quad i=1,\dots, K
\end{equation}
where $\boldsymbol{V} = [\boldsymbol{v}(\theta_1), \boldsymbol{v}(\theta_2), \dots, \boldsymbol{v}(\theta_d)] \in \mathbb{C}^{N\times d}$ is the array manifold matrix (assumed full-rank), $\boldsymbol{s}_i,\; i=1,\dots,K$, are IID zero-mean Gaussian random vectors of source amplitudes (independent of each other) with powers $\sigma_s^2$, $s=1,\dots, d$, respectively, and $\boldsymbol{n}_i$ are IID zero-mean circularly symmetric Gaussian random vectors with power $\sigma_n^2$, assumed statistically independent of the sources.
For the case at hand, the covariance matrix of the received signal can be assumed belonging to~(\ref{set:fixed_rank}).
Resorting to the EVD, the complete-data covariance matrix takes on the convenient form
\begin{equation}
\boldsymbol{M} = \sum_{v=1}^d \lambda_v \boldsymbol{\Phi}_v \boldsymbol{\Phi}_v^\dagger + \sigma_n^2 \sum_{v=d+1}^N \boldsymbol{\Phi}_v \boldsymbol{\Phi}_v^\dagger ,
\end{equation}
where $\lambda_v$ and ${\boldsymbol{\Phi}}_v,\; v=1,\dots, N$, denote the eigenvalues and the corresponding eigenvectors of $\boldsymbol{M}$, respectively, with ${\lambda}_1 \ge {\lambda}_2 \ge \dots \ge {\lambda}_d \ge 0$ and $\lambda_v = \sigma_n^2$ for $v = d+1, \dots, N$.
As a consequence, {denoting by $\boldsymbol{\Phi}_{v,R}^{\mathrm{T}}$ and $\boldsymbol{\Phi}_{v,I}^{\mathrm{T}}$ the vectors of the real and imaginary components of $\boldsymbol{\Phi}_{v}$}, for a given $d$, the vector of the unknown parameters (underlying the covariance structure) is
{\begin{equation}\label{eq:theta_d}
{\boldsymbol{\theta}}(d) = [\lambda_1, \dots, \lambda_d, \sigma_n^2, \boldsymbol{\Phi}_{1,R}^{\mathrm{T}},\boldsymbol{\Phi}_{1,I}^{\mathrm{T}}, \dots, \boldsymbol{\Phi}_{d,R}^{\mathrm{T}}, \boldsymbol{\Phi}_{d,I}^{\mathrm{T}}]^{\mathrm{T}},
\end{equation}}
which explicitly reveals the role of $d$ in controlling the degrees of freedom of the covariance matrix.
The approach pursued in this subsection follows from~\cite{vantrees4, 1100705,31267, 10.2307/2985032}, where a source number detection algorithm, based on a data-adaptive test statistic {plus} a penalty function related to the degrees of freedom, is devised. Specifically, denoting by {$\hat{\boldsymbol{\theta}}_{ML}(d) = [\hat{\lambda}_1, \dots, \hat{\lambda}_d, \hat{\sigma}_n^2, \hat{\boldsymbol{\Phi}}_{1,R}^{\mathrm{T}},\hat{\boldsymbol{\Phi}}_{1,I}^{\mathrm{T}}, \dots, \hat{\boldsymbol{\Phi}}_{d,R}^{\mathrm{T}}, \hat{\boldsymbol{\Phi}}_{d,I}^{\mathrm{T}}]^{\mathrm{T}}$} the ML estimate of ${\boldsymbol{\theta}}(d)$, the problem of detecting the number of sources can be formulated as
\begin{equation}\label{eq:find_d_problem_max}
\hat{d} = \operatorname*{argmax}\limits_{d = 0, \dots, K_1} \mathcal{L}_r(\hat{\boldsymbol{\theta}}_{ML}(d)) - T(d) ,
\end{equation}
where $K_1 \le N-1$ is an upper bound to the number of sources, $\mathcal{L}_r(\hat{\boldsymbol{\theta}}_{ML}(d))$ is the statistic given by the complete-data log-likelihood (\ref{eq:complete_log_likelihood}) evaluated at $\hat{\boldsymbol{\theta}}_{ML}(d)$, and $T(d)$ is a penalty term accounting for the number of free parameters in the assumed model.
In particular, taking the negative value of $\mathcal{L}_r(\hat{\boldsymbol{\theta}}_{ML}(d))$ and dropping the terms functionally independent of $d$, the following decision statistic is obtained~\cite{anderson1963}
\begin{equation}\label{eq:L_d_statistic}
L(d, \hat{\lambda}_1, \dots, \hat{\lambda}_N) = K(N-d)\ln{\left\{\frac{ \frac{1}{N-d}\sum_{v=d+1}^{N} \hat{\lambda}_v }{(\prod_{v=d+1}^{N} \hat{\lambda}_v)^{\frac{1}{N-d}}}\right\}} .
\end{equation}
Exploiting the above result, problem (\ref{eq:find_d_problem_max}) is equivalently recast as\footnote{It is also worth pointing out that~\eqref{eq:find_d_problem_min} can be generalized to the case of covariance matrices with additional structured constraints.}
\begin{equation}\label{eq:find_d_problem_min}
\hat{d} = \operatorname*{argmin}\limits_{d = 0, \dots, K_1} L(d, \hat{\lambda}_1, \dots, \hat{\lambda}_N) + p(d),
\end{equation}
where $p(d)$ is a specific penalty function. In the following, three detection tests, Akaike information criterion (AIC)~\cite{1100705}, minimum description length (MDL)~\cite{31267}, and Hannan–Quinn information criterion (HQC)~\cite{10.2307/2985032}, are considered. Each test is characterized by a different penalty function $p(d)$; in particular
\begin{equation}\label{eq:p_d}
p(d) =
\left\{\begin{matrix*}[l]
d(2N - d), & \text{AIC}\\
1/2\;[d(2N - d) + 1] \ln{K}, & \text{MDL}\\
[d(2N - d)+1] \ln(\ln(K)) & \text{HQC}
\end{matrix*}\right. .
\end{equation}
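Both the sphericity-type statistic $L(d, \cdot)$ and the penalties $p(d)$ are easy to code; the following sketch (ours) assumes the eigenvalues are supplied in decreasing order.
\begin{verbatim}
import numpy as np

def sphericity_stat(lam, d, K):
    # L(d, ...): log ratio of arithmetic to geometric mean of the N-d
    # smallest eigenvalues (lam sorted in decreasing order).
    tail = lam[d:]
    return K * tail.size * np.log(np.mean(tail)
                                  / np.exp(np.mean(np.log(tail))))

def penalty(d, N, K, kind="MDL"):
    # p(d) for the AIC, MDL, and HQC tests.
    dof = d * (2 * N - d)
    return {"AIC": dof,
            "MDL": 0.5 * (dof + 1) * np.log(K),
            "HQC": (dof + 1) * np.log(np.log(K))}[kind]
\end{verbatim}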
Let us now frame the decision statistic in the missing-data context.
Accordingly, the criterion~\eqref{eq:find_d_problem_max} can be modified as:
\begin{equation}\label{eq:criter_det_obs}
\hat{d} = \operatorname*{argmax}\limits_{d = 0, \dots, K_1}
\mathcal{L}_y(\hat{\boldsymbol{\theta}}_{ML}(d) | \boldsymbol{Y}, \boldsymbol{A}_1, \dots, \boldsymbol{A}_K) - T(d) .
\end{equation}
This requires, for a given $d$, the computation of the ML estimate $\hat{\boldsymbol{\theta}}_{ML}(d)$ from the observed-data.
To this end, a viable technique is represented by Algorithm~\ref{alg:EM} applied to a covariance uncertainty set including the fixed rank constraint in~(\ref{set:fixed_rank}). Two relevant case studies are thus developed in the following, providing tailored solutions to the M-step.
\subsubsection{Fixed rank constraint}
Let us exploit the knowledge that $\boldsymbol{M}$ belongs to~(\ref{set:fixed_rank}). Specifically, for a given $d$, the M-step at the $h$-th iteration is cast as
\begin{equation}\label{eq:problem_m_constrained}
{\boldsymbol{\theta}_d}^{(h)} = \operatorname*{argmax}\limits_{\boldsymbol{\theta}_d} Q\left(\boldsymbol{\theta}_d, {\boldsymbol{\theta}_d}^{(h-1)}\right) ,
\end{equation}
where $\boldsymbol{\theta}_d \equiv \boldsymbol{\theta}(d)$ is defined as in~(\ref{eq:theta_d}).
The maximizer of Problem~(\ref{eq:problem_m_constrained}) is given by~\cite{vantrees4}
\begin{equation}\label{eq:solution_str_detection_mstep}
{\boldsymbol{\theta}_d}^{(h)} = [\tilde{\lambda}_1, \dots, \tilde{\lambda}_d, \tilde{\sigma}_{n}^2, \tilde{\boldsymbol{\Phi}}_{1,R}, \tilde{\boldsymbol{\Phi}}_{1,I}, \dots, \tilde{\boldsymbol{\Phi}}_{d,R}, \tilde{\boldsymbol{\Phi}}_{d,I} ]^{\mathrm{T}},
\end{equation}
where $\tilde{\lambda}_v$ and $\tilde{\boldsymbol{\Phi}}_{v},\; v=1,\dots, d$, are the $d$ greatest eigenvalues and the corresponding eigenvectors of $\boldsymbol{\Sigma}^{(h-1)}$, with $\tilde{\boldsymbol{\Phi}}_{v,R}$ and $\tilde{\boldsymbol{\Phi}}_{v,I}$ the real and imaginary components of $\tilde{\boldsymbol{\Phi}}_{v}$, whereas
\begin{equation}
\tilde{\sigma}_{n}^2 = \frac{1}{N-d} \sum_{v=d+1}^N \tilde{\lambda}_v
\end{equation}
is the arithmetic mean of the $N - d$ smallest eigenvalues of $\boldsymbol{\Sigma}^{(h-1)}$.
Exploiting the above results, the $h$-th estimate of the covariance matrix is given by
\begin{equation}\label{eq:m_est_det_num_src_str_set}
\hat{\boldsymbol{M}}({\boldsymbol{\theta}_d}^{(h)}) = \boldsymbol{U} \boldsymbol{\Lambda}_S \boldsymbol{U}^\dagger + \tilde{\sigma}_{n}^2 \; \boldsymbol{I} ,
\end{equation}
where
\begin{equation}
\boldsymbol{\Lambda}_S = \textbf{diag}(\tilde{\lambda}_1 - \tilde{\sigma}_{n}^2, \dots, \tilde{\lambda}_d - \tilde{\sigma}_{n}^2),
\end{equation}
and
\begin{equation}
\boldsymbol{U} = [\tilde{\boldsymbol{\Phi}}_1, \dots, \tilde{\boldsymbol{\Phi}}_d].
\end{equation}
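Equivalently, the rank-$d$ M-step can be coded as a truncated EVD of $\boldsymbol{\Sigma}^{(h-1)}$; a sketch (ours) is given below.
\begin{verbatim}
import numpy as np

def rank_d_mstep(Sigma, d):
    # Top-d eigenpairs plus the mean of the remaining eigenvalues as
    # the white-noise power estimate.
    lam, U = np.linalg.eigh(Sigma)
    lam, U = lam[::-1], U[:, ::-1]           # decreasing order
    sig2 = lam[d:].mean()                    # mean of the N-d smallest
    Ud = U[:, :d]
    return ((Ud * (lam[:d] - sig2)) @ Ud.conj().T
            + sig2 * np.eye(Sigma.shape[0]))
\end{verbatim}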
Hence, taking the negative value and dropping the constant terms of the observed-data log-likelihood, the order estimate is given by
\begin{equation}\label{eq:det_str_test_Ly}
\hat{d}_{EM} = \operatorname*{argmin}\limits_{d = 0, \dots, K_1} L_y({\hat{\boldsymbol{\theta}}_d}) + p(d) ,
\end{equation}
where
\begin{equation}\label{eq:str_detection_decision_stat_Ly}
\begin{aligned}
L_y({\hat{\boldsymbol{\theta}}_d}) = \sum_{i=1}^{K} & \ln(\det(\boldsymbol{A}_i \hat{\boldsymbol{M}}({\hat{\boldsymbol{\theta}}_d})\boldsymbol{A}_i^\dagger )) \\ & + \text{tr}\{(\boldsymbol{A}_i \hat{\boldsymbol{M}}({\hat{\boldsymbol{\theta}}_d})\boldsymbol{A}_i^\dagger)^{-1} \boldsymbol{y}_i \boldsymbol{y}_i^\dagger\}
\end{aligned}
\end{equation}
with ${\hat{\boldsymbol{\theta}}_d}$ the final estimate of Algorithm~\ref{alg:EM} and $p(d)$ a specific penalty function~\eqref{eq:p_d} related to the AIC, MDL or HQC tests.
The overall procedure to find the sources number in the context of missing data is summarized in Algorithm~\ref{alg:structured_det}.
\begin{algorithm}
\caption{Detection of number of sources in the context of missing-data and fixed rank constraint}
\label{alg:structured_det}
\textbf{Input:} $N,\; K,\; \boldsymbol{Y},\; \boldsymbol{A}_1, \dots, \boldsymbol{A}_K$, $\boldsymbol{\theta}^{(0)}$, $\varepsilon_1$, $\varepsilon_2$, $K_1$, $p(d), \; d=1,...,K_1$.\\
\textbf{Output:} A solution $\hat{d}_{EM}$ to Problem~(\ref{eq:criter_det_obs}).
\begin{enumerate}[label={\theenumi:}]
\item \textbf{for} $\tilde{d}=0,\dots, K_1$ \textbf{do}
\begin{enumerate}[label=\alph*),leftmargin=2em]
\item compute the estimate ${\hat{\boldsymbol{\theta}}_{\tilde{d}}}$ via Algorithm~\ref{alg:EM} using \eqref{eq:solution_str_detection_mstep} as solution to the M-step with $d=\tilde{d}$;
\item compute $L_y({\hat{\boldsymbol{\theta}}_{\tilde{d}}})$ in~\eqref{eq:str_detection_decision_stat_Ly} using the estimate ${\hat{\boldsymbol{\theta}}_{\tilde{d}}}$.
\end{enumerate}
\textbf{end for}
\item evaluate \[ \hat{d}_{EM} = \operatorname*{argmin}\limits_{\tilde{d} = 0, \dots, K_1} L_y({\hat{\boldsymbol{\theta}}_{\tilde{d}}}) + p(\tilde{d}) ;\]
\item output $\hat{d}_{EM}$.
\end{enumerate}
\end{algorithm}
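Putting the pieces together, a schematic version of Algorithm~\ref{alg:structured_det} (ours, reusing \texttt{em\_covariance}, \texttt{rank\_d\_mstep}, \texttt{observed\_loglik}, and \texttt{penalty} from the previous sketches) reads as follows; note that the negative observed log-likelihood differs from $L_y$ in (\ref{eq:str_detection_decision_stat_Ly}) only by terms independent of $d$, so the argmin is unaffected.
\begin{verbatim}
import numpy as np

def detect_sources(Y, masks, M0, K1, kind="MDL"):
    N, K = M0.shape[0], len(Y)
    scores = []
    for d in range(K1 + 1):
        M_d = em_covariance(Y, masks,
                            lambda S, d=d: rank_d_mstep(S, d), M0)
        L_y = -observed_loglik(M_d, Y, masks)  # up to d-independent terms
        scores.append(L_y + penalty(d, N, K, kind))
    return int(np.argmin(scores))
\end{verbatim}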
\subsubsection{{Rank constraint and centro-Hermitianity}}
Let us assume that the covariance matrix belongs to both the uncertainty sets (\ref{set:fixed_rank}) and (\ref{set:persymmetry}), i.e.
\begin{equation}\label{set:fixed_rank_and_persymm}
\mathcal{C} = \left\{\begin{matrix}
\boldsymbol{M} = \sigma_n^2 \boldsymbol{I} + \boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger \\
\boldsymbol{M} = \boldsymbol{J} {\boldsymbol{M}}^{*} \boldsymbol{J} \\
\boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger \succeq \boldsymbol{0} \\
\mbox{Rank}(\boldsymbol{V}\boldsymbol{S}_f \boldsymbol{V}^\dagger) \le d\\
\sigma_n^2 > 0
\end{matrix}\right. ,
\end{equation}
where $\boldsymbol{V}$, $\boldsymbol{S}_f$, $d$, and $\sigma_n^2$ are defined as in~(\ref{set:fixed_rank}), whereas $\boldsymbol{J}$ is given by~(\ref{eq:J_exchange_matrix}).
Therefore, for a given $d$, the maximizer of $Q(\boldsymbol{\theta}, \boldsymbol{\theta}^{(h-1)})$ is given by~\cite{stoica-jansson-1999}
{\begin{equation}\label{eq:sol_det_M_step_pers_str}
\begin{aligned}
\boldsymbol{\theta}_{d,FB}^{(h)} = [&\tilde{\lambda}_{1, FB}, \dots, \tilde{\lambda}_{d, FB}, \tilde{\sigma}_{n, FB}^2,\\& \tilde{\boldsymbol{\Phi}}_{1,FB, R}, \tilde{\boldsymbol{\Phi}}_{1,FB, I}, \dots, \tilde{\boldsymbol{\Phi}}_{d,FB,R}, \tilde{\boldsymbol{\Phi}}_{d,FB,I} ]^{\mathrm{T}},
\end{aligned}
\end{equation}}
where $\tilde{\lambda}_{v, FB}$ and $\tilde{\boldsymbol{\Phi}}_{v,FB}, \;v=1,\dots, d$ are the $d$ greatest eigenvalues and the corresponding eigenvectors of $\boldsymbol{\Sigma}_{FB}$, defined as in~\eqref{eq:sigma_FB}, with $\tilde{\boldsymbol{\Phi}}_{v,FB,R}$ and $\tilde{\boldsymbol{\Phi}}_{v,FB,I}$ the real and imaginary components of $\tilde{\boldsymbol{\Phi}}_{v,FB}$, respectively, and
\begin{equation}
\tilde{\sigma}_{n, FB}^2 = \frac{1}{N-d} \sum_{v=d+1}^N \tilde{\lambda}_{v, FB}
\end{equation}
is the arithmetic mean of the $N - d$ smallest eigenvalues of $\boldsymbol{\Sigma}_{FB}$.
As a consequence,
\begin{equation}\label{eq:estimate_det_centroh}
\begin{aligned}
\hat{\boldsymbol{M}}(\hat{\boldsymbol{\theta}}_{d,FB}) =& \sum_{v=1}^d (\tilde{\lambda}_{v, FB} - \tilde{\sigma}_{n, FB}^2) \tilde{\boldsymbol{\Phi}}_{v,FB} \tilde{\boldsymbol{\Phi}}_{v,FB}^\dagger +\\ & \quad \; + \tilde{\sigma}_{n, FB}^2 \; \boldsymbol{I}
\end{aligned}
\end{equation}
with $\hat{\boldsymbol{\theta}}_{d,FB}$ the resulting estimate of Algorithm~\ref{alg:EM}.
Along the same line as the previous case, the statistic is computed for each possible $d$, to get the order estimate
\begin{equation}\label{eq:det_str_pers_test}
\hat{d}_{EM-FB} = \operatorname*{argmin}\limits_{d = 0, \dots, K_1} L_y(\hat{\boldsymbol{\theta}}_{d,FB}) + \frac{1}{2} p(d) ,
\end{equation}
where $L_y({\hat{\boldsymbol{\theta}}_{d,FB}})$ is given by~\eqref{eq:str_detection_decision_stat_Ly} evaluated at the estimate~\eqref{eq:estimate_det_centroh} and $p(d)$ is one of the penalty functions in~\eqref{eq:p_d}~\cite{258125}.
\section{Performance Analysis}\label{section:perf_analysis}
\begin{figure*}[ht]
\centering
\subfloat[\label{fig:rob_beam_sc_1_a}]{%
\includegraphics[width=0.40\linewidth]{figs/1_scfig4_sir_analysis_N20_p_fault0_10_phi0_Ntrials500.fig.eps}}
\hspace{40pt}\subfloat[\label{fig:rob_beam_sc_1_b}]{%
\includegraphics[width=0.40\linewidth]{figs/1_scfig4_sir_analysis_N20_p_fault0_30_phi0_Ntrials500.fig.eps}}
\\
\subfloat[\label{fig:rob_beam_sc_1_c}]{%
\includegraphics[width=0.40\linewidth]{figs/slc_1_scfig4_N20_p_fault0_10K60_phi0_Ntrials100.fig.eps}}
\hspace{40pt}\subfloat[\label{fig:rob_beam_sc_1_d}]{%
\includegraphics[width=0.40\linewidth]{figs/slc_1_scfig4_N20_p_fault0_30K60_phi0_Ntrials100.fig.eps}}
\caption{Adaptive beamformer performance for a ULA with 20 antennas in Scenario 1. Figs. \subref{fig:rob_beam_sc_1_a} and \subref{fig:rob_beam_sc_1_c} consider $p_{\mathrm m} = 0.1$ while Figs. \subref{fig:rob_beam_sc_1_b} and \subref{fig:rob_beam_sc_1_d} consider $p_{\mathrm m} = 0.3$. Figs. \subref{fig:rob_beam_sc_1_a} and \subref{fig:rob_beam_sc_1_b} {display} the normalized average S/I versus number of snapshots, while Figs. \subref{fig:rob_beam_sc_1_c} and \subref{fig:rob_beam_sc_1_d} display the resulting beampattern with $K = 60$ (therein, the red-Xs along the $\theta$-axis denote the sources directions).}
\label{fig:rob_beam_sc_1}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat[\label{fig:rob_beam_sc_2_a}]{%
\includegraphics[width=0.40\linewidth]{figs/1_scfig6_sir_analysis_N20_p_fault0_10_phi0_Ntrials500.fig.eps}}
\hspace{40pt}\subfloat[\label{fig:rob_beam_sc_2_b}]{%
\includegraphics[width=0.40\linewidth]{figs/1_scfig6_sir_analysis_N20_p_fault0_30_phi0_Ntrials500.fig.eps}}
\\
\subfloat[\label{fig:rob_beam_sc_2_c}]{%
\includegraphics[width=0.40\linewidth]{figs/slc_1_scfig6_N20_p_fault0_10K60_phi0_Ntrials100.fig.eps}}
\hspace{40pt}\subfloat[\label{fig:rob_beam_sc_2_d}]{%
\includegraphics[width=0.40\linewidth]{figs/slc_1_scfig6_N20_p_fault0_30K60_phi0_Ntrials100.fig.eps}}
\caption{Adaptive beamformer performance for a ULA with 20 antennas in Scenario 2. Figs. \subref{fig:rob_beam_sc_2_a} and \subref{fig:rob_beam_sc_2_c} consider $p_{\mathrm m} = 0.1$ while Figs. \subref{fig:rob_beam_sc_2_b} and \subref{fig:rob_beam_sc_2_d} consider $p_{\mathrm m} = 0.3$. Figs. \subref{fig:rob_beam_sc_2_a} and \subref{fig:rob_beam_sc_2_b} display the normalized average S/I versus number of snapshots, while Figs. \subref{fig:rob_beam_sc_2_c} and \subref{fig:rob_beam_sc_2_d} display the resulting beampattern with $K = 60$ (therein, the red-Xs along the $\theta$-axis denote the sources directions).}
\label{fig:rob_beam_sc_2}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat[\label{fig:sir_comparison_a}]{%
\includegraphics[width=0.40\linewidth]{figs/comparison_scfig4_sir_analysis.eps}}
\hspace{40pt}\subfloat[\label{fig:sir_comparison_b}]{%
\includegraphics[width=0.40\linewidth]{figs/comparison_scfig6_sir_analysis.eps}}
\caption{Normalized average S/I versus number of snapshots for a ULA with 20 antennas. Fig.~\subref{fig:sir_comparison_a} considers Scenario 1 while Fig. \subref{fig:sir_comparison_b} Scenario 2.}
\label{fig:sir_comparison}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat[\label{fig:src_detection_pm_0_1_d2_AIC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d2_AIC.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d2_MDL}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d2_MDL.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d2_HQC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d2_HQC.eps}}\\
\subfloat[\label{fig:src_detection_pm_0_1_d3_AIC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_AIC.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d3_MDL}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_MDL.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d3_HQC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_HQC.eps}}\\
\subfloat[\label{fig:src_detection_pm_0_1_d4_AIC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d4_AIC.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d4_MDL}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d4_MDL.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_pm_0_1_d4_HQC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d4_HQC.eps}}
\caption{Detection performance for a ULA with 20 antennas assuming $K=100$ and $p_{\mathrm m} \in \{0.1, 0.3\}$. Figs. (a), (b), and (c) assume $d=2$, Figs. (d), (e), and (f) assume $d=3$, whereas Figs. (g), (h), and (i) assume $d=4$ equal-power signals impinging on the array, with signal separation corresponding to $0.891/N$.
Moreover, Figs. (a), (d), and (g) consider AIC, Figs. (b), (e), and (h) consider MDL, whereas Figs. (c), (f), and (i) consider HQC.}
\label{fig:src_detection_1}
\end{figure*}
\begin{figure*}[ht]
\centering
\subfloat[\label{fig:src_detection_comp_pm_0_1_d3_AIC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_AIC_p_fault0_10.fig_persy.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_comp_pm_0_1_d3_MDL}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_MDL_p_fault0_10.fig_persy.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_comp_pm_0_1_d3_HQC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_HQC_p_fault0_10.fig_persy.eps}}\\
\subfloat[\label{fig:src_detection_comp_pm_0_3_d3_AIC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_AIC_p_fault0_30.fig_persy.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_comp_pm_0_3_d3_MDL}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_MDL_p_fault0_30.fig_persy.eps}}
\hspace{10pt}\subfloat[\label{fig:src_detection_comp_pm_0_3_d3_HQC}]{%
\includegraphics[width=0.32\linewidth]{figs/comparison_d3_HQC_p_fault0_30.fig_persy.eps}}
\caption{Comparison of the PD using EM and EM-FB estimation strategies for a ULA with 20 antennas assuming 3 equal-power signals impinging on the array with signal separation corresponding to $0.891/N$. Figs. (a), (b), (c) consider $p_{\mathrm m} = 0.1$, whereas $p_{\mathrm m} = 0.3$ is assumed in Figs. (d), (e), (f). Besides, Figs. (a) and (d), Figs. (b) and (e), and Figs. (c) and (f) consider AIC, MDL, and HQC, respectively.}
\label{fig:src_detection_comp}
\end{figure*}
In this section, the performance of the proposed estimation strategy, framed in the context of adaptive beamforming and detection of number of sources, is analyzed.
For both applications, a radar system equipped with a uniform linear array (ULA) pointing in the bore-sight direction ($\theta_0=0$) is considered. The array is composed of $N = 20$ antennas with inter-element spacing $d_x = \lambda/2$, where $\lambda$ denotes the radar operating wavelength. Moreover, two different values for the probability $p_{\mathrm m}$ of missing an observation are considered, i.e., $p_{\mathrm m}=0.1$ or $p_{\mathrm m}=0.3$. For a given $p_{\mathrm m}$, the selection matrix $\boldsymbol{A}_i$ of the $i$-th snapshot is constructed from the diagonal matrix $\boldsymbol{D}_i$, whose diagonal entries are IID Bernoulli random variables with parameter $1-p_{\mathrm m}$, by skipping the rows containing all zeros. Besides, the computation of the observed-data sample covariance matrix $\boldsymbol{S}_y = \frac{1}{K} \sum_{i=1}^{K} \tilde{\boldsymbol{y}}_i \tilde{\boldsymbol{y}}_i^\dagger$ is performed employing $\tilde{\boldsymbol{y}}_i = \boldsymbol{D}_i \boldsymbol{r}_i,\;i=1,\dots,K$.
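For reproducibility of the missing-data mechanism, a sketch (ours) of the Bernoulli mask generation is reported below; snapshots with no available entries are redrawn, a simplifying convention of ours in place of handling the degenerate $p_i = 0$ case.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def random_masks(N, K, p_m):
    # Each entry is available with probability 1 - p_m (Bernoulli pattern);
    # all-missing snapshots are redrawn (our simplifying convention).
    masks = []
    while len(masks) < K:
        m = rng.random(N) > p_m
        if m.any():
            masks.append(m)
    return masks
\end{verbatim}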
\subsection{Adaptive Beamforming}
The performance of the adaptive beamformer is analyzed in terms of beampattern shape and normalized average signal-to-interference power ratio (S/I) versus the number of snapshots. Standard Monte Carlo counting techniques are used, over $100$ independent trials for the former performance metric and $500$ independent trials for the latter.
In the reported case studies the disturbance covariance matrix is modeled as ${\boldsymbol{M}}={\boldsymbol{M}}_J+\sigma^2_a{\boldsymbol I}$, where $\sigma^2_a$ is the white noise power level (assumed without loss of generality equal to $0$ dB) and ${\boldsymbol{M}}_J$ {is} the jamming covariance contribution. Specifically, denoting by $J_{NB}$ and $J_{WB}$ the number of narrow-band and wide-band jammers {(assumed separated in space)}, ${\boldsymbol{M}}_J = \boldsymbol{M}_1 + \boldsymbol{M}_2$,
where~\cite{farina1992antenna}
\begin{equation}
\boldsymbol{M}_1 = {\sum\limits_{l=1}^{J_{NB}}} \sigma_l^2\: \boldsymbol{v}(\theta_l) \boldsymbol{v}(\theta_l)^\dagger ,
\end{equation}
with
\begin{equation}\label{eq:steering_vector}
\boldsymbol{v}(\theta_l) = [1, e^{j \frac{2\pi}{\lambda} d_x \sin(\theta_l)}, \dots, e^{j (N-1) \frac{2\pi}{\lambda} d_x \sin(\theta_l)}]^{\mathrm{T}} \in \mathbb{C}^N
\end{equation}
the steering vector in the direction $\theta_l$ of the $l$-th jammer and $\sigma^2_l$ the power of the $l$-th jammer, while
\begin{equation}\label{eq:matrice_wide_band_jammers}
\begin{aligned}
&\boldsymbol{M}_2\left(n,\;m \right) = {\sum\limits_{r=1}^{J_{WB}}} \bar{\sigma}_r^2\: \mbox{sinc}[0.5 {B_f}_r \; (n-m) \zeta_r ]e^{j (n-m) \zeta_r} \;,
\end{aligned}
\end{equation}
with $(n,m)\in \{0,\dots,N-1\}^2$ and $\zeta_r = \pi \sin{\theta_r}$; moreover in~(\ref{eq:matrice_wide_band_jammers}), $\bar{\sigma}^2_r$, $\theta_r$, and ${B_f}_r$, represent the power, the DOA, and the fractional bandwidth $B_r/f_0$ (with $B_r$ the actual bandwidth and $f_0$ the carrier frequency) associated with the $r$-th interferer.
The sinc function appearing in~\eqref{eq:matrice_wide_band_jammers} is defined as $\mbox{sinc}(x) = \sin(x)/x$.
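A simulation sketch (ours) of the disturbance covariance $\boldsymbol{M} = \boldsymbol{M}_1 + \boldsymbol{M}_2 + \sigma_a^2 \boldsymbol{I}$ follows; note that \texttt{np.sinc} implements $\sin(\pi x)/(\pi x)$, hence its argument is divided by $\pi$ to match the convention $\mbox{sinc}(x)=\sin(x)/x$ used above.
\begin{verbatim}
import numpy as np

def steering(theta, N):
    # ULA steering vector with half-wavelength spacing (d_x = lambda/2).
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(N))

def disturbance_covariance(N, nb, wb, sigma2_a=1.0):
    # nb: list of (theta_l, sigma_l^2); wb: list of (theta_r, power, Bf_r).
    M = sigma2_a * np.eye(N, dtype=complex)
    for th, p in nb:                          # narrow-band jammers (M_1)
        v = steering(th, N)
        M += p * np.outer(v, v.conj())
    D = np.subtract.outer(np.arange(N), np.arange(N))   # (n - m)
    for th, p, Bf in wb:                      # wide-band jammers (M_2)
        z = np.pi * np.sin(th)
        M += p * np.sinc(0.5 * Bf * D * z / np.pi) * np.exp(1j * D * z)
    return M
\end{verbatim}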
In the following, two different interfering environments are analyzed:
\begin{itemize}
\item Scenario 1: five narrow-band jammers located at $\theta_l=10+10l$ degrees, $l=1,\dots, 5$ with Jammer to Noise Ratio (JNR) given by $JNR_{l} = 30\text{ dB}$ ($\sigma_l^2={JNR}_l \:\sigma^2_a, \; l=1,\dots, 5$).
\item Scenario 2: five wide-band jammers (${B_f}_r = 0.03$) located at $\theta_r=10+10r$ degrees, $r=1,\dots, 5$, with $JNR_{r} = 30\text{ dB}$ ($\bar{\sigma}_r^2={JNR}_r \:\sigma^2_a, \; r=1,\dots, 5$).
\end{itemize}
The performance of the adaptive beamformer, assuming either $p_{\mathrm m}=0.1$ or $p_{\mathrm m}=0.3$, is analyzed in terms of normalized average S/I in Figs.~\subref*{fig:rob_beam_sc_1_a}, \subref*{fig:rob_beam_sc_1_b}, \subref*{fig:rob_beam_sc_2_a}, and \subref*{fig:rob_beam_sc_2_b}. The resulting beampatterns (assuming $K = 60$), are displayed in Figs.~\subref*{fig:rob_beam_sc_1_c}, \subref*{fig:rob_beam_sc_1_d}, \subref*{fig:rob_beam_sc_2_c}, and \subref*{fig:rob_beam_sc_2_d}. In particular, Figs.~\ref{fig:rob_beam_sc_1} and~\ref{fig:rob_beam_sc_2} refer to the interference environments of Scenario 1 and 2, respectively.
The proposed strategy employs the EM procedure assuming the uncertainty set (\ref{set:cm_lb_white_dist_pwr_level}), with the FML computed from $\boldsymbol{S}_y$ used to initialize the iterations.
The beampattern and the normalized average S/I obtained using the sample covariance matrix of the complete-data (as well as its variant based on FML) and the FML of $\boldsymbol{S}_y$ are considered for comparison. As performance benchmark, the clairvoyant beampattern, based on a perfect knowledge of the covariance matrix, is reported too.
A close inspection of the results under the interference environment of Scenario 1 shows that, for $p_{\mathrm m} = 0.1$ and $K \ge N$, the performance of the proposed procedure comes closer and closer to the complete-data FML, whereas for $p_{\mathrm m} = 0.3$ it exhibits a slight degradation in terms of normalized average S/I, on the order of $1$ dB for $K > N$, with respect to the complete-data benchmark. The effectiveness of the proposed algorithm is also confirmed by the more challenging interference environment of Scenario 2, where the performance is very close to the complete-data FML for $p_{\mathrm m} = 0.1$ and experiences a maximum degradation, in terms of normalized average S/I, lower than $6$ dB, for $p_{\mathrm m} = 0.3$ and $K \ge N$.
Remarkably, for all the configurations, the S/I of the EM-based beampattern approaches the complete-data performance as $K$ increases, an indirect proof that the quality of the proposed covariance estimation procedure improves as more and more snapshots become available for the estimation process.
{As to the} beampattern {analysis}, the inspection of the figures reveals that the EM FML is able to correctly nullify the jammers while preserving low side-lobes levels.
Finally, Fig.~\ref{fig:sir_comparison} compares the performance of EM FML and EM FML-FB, {highlighting} the capability of FML-FB to benefit from the underlying structure of the covariance matrix.
\subsection{Detection of Number of Sources}
In the following, equal-power signals impinging on the array from different directions $\theta_v$ are considered. The values of the parameters involved in the three analyzed scenarios, each related to a different number of sources, are listed in Table~\ref{tab-parameters}.
\begin{table}[htbp]
\small
\centering
\caption{\label{tab-parameters} Simulation Parameters}
\begin{tabular}{ccccc}
\hline
\hline
$d$ & $u_1 = \sin(\theta_1)$ & $u_2 = \sin(\theta_2)$ & $u_3 = \sin(\theta_3)$ & $u_4 = \sin(\theta_4)$ \\
\hline
$2$ & $-1/2 \;\text{SSBW}$ & $1/2 \;\text{SSBW}$ & & \\
$3$ & $-1/2 \;\text{SSBW}$ & $1/2 \;\text{SSBW}$ & $3/2 \;\text{SSBW}$ & \\
$4$ &$-1/2 \;\text{SSBW}$ & $1/2 \;\text{SSBW}$ & $3/2 \;\text{SSBW}$ & $-3/2 \;\text{SSBW}$ \\
\hline
\hline
\end{tabular}
\end{table}
Specifically, $\text{SSBW} = 0.891/N$ denotes the $3$ dB single-side beam-width (SSBW) of the considered ULA~\cite{wirth2013radar}, whereas $u_v = \sin(\theta_v)$ is the target angular location of the $v$-th source in the directional-cosine space~\cite{vantrees4}.
Therefore, the covariance matrix is modeled as ${\boldsymbol{M}}={\boldsymbol{M}}_S+\sigma^2_n{\boldsymbol I}$, where $\sigma^2_n$ is the white noise power level (assumed without loss of generality equal to $0$ dB) and ${\boldsymbol{M}}_S$ refers to the useful covariance contribution, given by
\begin{equation}
\boldsymbol{M}_S = \sigma_s^2 {\sum\limits_{v=1}^d} \; \boldsymbol{v}(\theta_v) \boldsymbol{v}(\theta_v)^\dagger ,
\end{equation}
with $\sigma_s^2$ the power of each signal of interest and $\boldsymbol{v}(\theta_v)$ defined as in~\eqref{eq:steering_vector}.
The metric used to assess the detection performance is the Probability of Detection (PD), namely the probability that $\hat{d} = d$~\cite{vantrees4}, {which is} estimated via standard Monte Carlo counting techniques over $500$ independent trials\footnote{Notice that a rank-deficient $\boldsymbol{S}_y$, due to a possible selection matrix configuration, is a non-zero probability event. Such realizations are excluded from the Monte Carlo trials.}.
Moreover, the array signal-to-noise ratio (ASNR) is defined as
\begin{equation}
ASNR = N \frac{\sigma_s^2}{\sigma_n^2} .
\end{equation}
Finally, the detection algorithm {assumes} $K=100$ and a maximum number of sources equal to $N/2 = 10$.
The detection performance is reported in Fig.~\ref{fig:src_detection_1} assuming $p_{\mathrm m} \in \{0.1,\: 0.3\}$ and $K=100$.
In particular, {denoting by $d$ the actual number of sources,} Figs.~\ref{fig:src_detection_1} (a), (b), and (c) {assume $d=2$}, Figs.~\ref{fig:src_detection_1} (d), (e), and (f) $d=3$, whereas Figs.~\ref{fig:src_detection_1} (g), (h), and (i) $d=4$. Moreover, Figs.~\ref{fig:src_detection_1} (a), (d), and (g) refer to AIC, Figs.~\ref{fig:src_detection_1} (b), (e), and (h) consider MDL whereas Figs.~\ref{fig:src_detection_1} (c), (f), and (i) {display} HQC.
The results highlight that for $p_{\mathrm m}=0.1$ the EM approach leads to a performance very close to the complete-data case (with a loss smaller than $1$ dB), and outperforms the basic approach of replacing the missing observations in the complete data with zeros (dashed brown curves) in most of the analyzed cases. In fact, a close inspection of the curves shows that only when $d=4$, at low ASNR, and with reference to the AIC (Fig.~\ref{fig:src_detection_1} (g)), does the basic approach perform better than the EM-based technique. This result is not surprising due to the overestimation behavior of the AIC~\cite{vantrees4}.
Besides, the basic strategy may not exhibit a monotonic behaviour with respect to the ASNR, reflecting the increasing discrepancy between the actual covariance matrix and the heuristically computed one.
As expected, the EM-based order selection procedure experiences a performance degradation at $p_{\mathrm m}=0.3$, as compared with the complete-data counterpart. Remarkably, the gap between the EM and the complete-data curves, for $p_{\mathrm m}=0.3$, is less than $3$ dB in the worst case, whereas in the high-ASNR regime it is almost absent. As in the case $p_{\mathrm m}=0.1$, the EM-based strategy outperforms the basic counterpart, with the only exception of AIC with 4 sources, reported in Fig.~\ref{fig:src_detection_1} (g).
Finally, the detection performance using EM and EM-FB is compared in Fig.~\ref{fig:src_detection_comp}. Inspection of the curves pinpoints that, capitalizing on the centro-Hermitian structure, EM-FB achieves higher PD levels than the unstructured EM in all the considered scenarios, except for the AIC in the high-ASNR regime, where an expected saturation is experienced~\cite{vantrees4}.
\section{Conclusion}\label{section:conclusion}
This paper has considered the problem of structured covariance matrix estimation in the presence of missing data with special attention to a radar signal processing background. After providing a substantial motivation on the study and specifying some constraint sets of particular interest for the covariance matrix, the missing data model is described assuming Gaussian observations. Hence, the ML covariance estimation problem is formulated as the maximization of the observed data log-likelihood. To circumvent the analytical difficulties which are usually connected with the direct optimization of the mentioned function, an iterative maximization procedure based on the EM algorithm is developed and its convergence properties are established. Besides, a closed form expression is computed for the convergence rate.
The theoretical results are then capitalized on for some specific structured covariance models with reference to two radar applications: adaptive beamforming and detection of the number of sources. General procedures are suggested to construct adaptive beamformers and to detect the number of active sources in a collection of snapshots when missing observations are present. At the analysis stage, extensive numerical results have been discussed to show the effectiveness of the bespoke strategies in handling missing data scenarios.
In conclusion, the main contributions of the paper can be summarized as follows:
\begin{enumerate}[label=\alph*)]
\item the development of an EM-based technique for the estimation of a structured covariance matrix in the presence of missing data;
\item the study of the convergence properties for the resulting iterative procedure according to B-stationarity as well as the computation of the rate of convergence;
\item the application of the methodology in the context of two fundamental radar problems: beamforming and detection of the number of sources;
\item the presentation of numerical results aimed at corroborating the theoretical achievements.
\end{enumerate}
Possible future research avenues might include the validation of the approach on real data as well as its application in the context of adaptive target detection \cite{de2015modern} in the presence of missing observations, possibly accounting for compound-Gaussian interference. Finally, a careful study of electronic protection techniques is certainly worth pursuing for the case in which some array elements (or sub-arrays) of the radar antenna are driven into saturation by a strong interference source and, as a consequence, the corresponding data can be modeled as missing.
\section*{Acknowledgment}
The authors wish to thank ... .
\section{Introduction and Main Results}
Let \rvX\ and \rvY\ be arbitrary random variables such that the joint distribution \meP[\rvX\rvY]\ is absolutely continuous \wrt\ the product $\meP[\rvX]\otimes\meP[\rvY]$ of the marginal distributions \meP[\rvX]\ and \meP[\rvY]. If $\frac{\dx\meP[\rvX\rvY]}{\dx\meP[\rvX]\otimes\meP[\rvY]}$ denotes the Radon-Nikodym derivative of \meP[\rvX\rvY]\ \wrt\ $\meP[\rvX]\otimes\meP[\rvY]$, then
\begin{align*}
i(\rvX;\rvY)=\log\bigg(\frac{\dx\meP[\rvX\rvY]}{\dx\meP[\rvX]\otimes\meP[\rvY]}(\rvX,\rvY)\bigg)
\end{align*}
is called the information density of \rvX\ and \rvY.
The expectation $\E{i(\rvX;\rvY)}=\mInf{\rvX}{\rvY}$ of the information density, called mutual information, plays a key role in characterizing the asymptotic channel coding performance in terms of channel capacity.
The non-asymptotic performance, however, is determined by the higher-order moments of the information density and its probability distribution.
Achievability and converse bounds that allow a finite blocklength analysis of the optimum channel coding rate are closely related to the distribution function of the information density, also called information spectrum by Han \cite{Han2003}.
Moreover, based on the variance of the information density, tight second-order finite blocklength approximations of the optimum code rate can be derived for various important channel models.
Due to the seminal work of Polyanskiy et al.\ \cite{Polyanskiy2010}, considerable progress has been made in non-asymptotic information theory, and the requirements of future wireless networks regarding latency and reliability have stimulated a significant interest in this type of analysis (Durisi et al.\ \cite{Durisi2016}).
The information density $i(\rvX;\rvY)$ in the case when $\rvX$ and $\rvY$ are jointly Gaussian is of special interest due to the prominent role of the Gaussian distribution.
Let $\rvX=(\rvX[1],\rvX[2],\ldots,\rvX[m])$ and $\rvY=(\rvY[1],\rvY[2],\ldots,\rvY[n])$ be real-valued random vectors\footnote{For notational convenience we write vectors as row vectors. However, in expressions where matrix or vector multiplications occur, we consider all vectors as column vectors.}
with nonsingular covariance matrices $R_{\rvX}$ and $R_{\rvY}$ and cross-covariance matrix $R_{\rvX\rvY}$ with rank $r=\rank(R_{\xi\eta})$.
Without loss of generality for the subsequent results, we assume the expectation of all random variables to be zero.
If $(\rvX[1],\rvX[2],\ldots,\rvX[m],\rvY[1],\rvY[2],\ldots,\rvY[n])$ is a Gaussian random vector, then Pinsker \cite[Ch.\,9.6]{Pinsker1964} has shown that the distribution of the information density $i(\rvX;\rvY)$ coincides with the distribution of the random variable
\begin{align}\label{EQ:SUM-REPRESENTATION-OF-INFODENSITY}
\rvIdnsI &=\frac{1}{2}\sum_{i=1}^r\ccaC[i]\big(\rvXtd[i][2]-\rvYtd[i][2]\big)+\mInf{\rvX}{\rvY}.
\end{align}
In this representation $\rvXtd[1], \rvXtd[2],\ldots,\rvXtd[r], \rvYtd[1], \rvYtd[2],\ldots, \rvYtd[r]$ are independent and identically distributed (i.i.d.) Gaussian random variables with zero mean and unit variance, $\ccaC[1]\geq\ccaC[2]\geq\ldots\geq\ccaC[r]>0$ denote the positive canonical correlations (see \Cref{PROPOSITION:CCA}) of \rvX\ and \rvY\ in descending order, and the mutual information \mInf{\rvX}{\rvY} has the form
\begin{align}\label{EQ:SUM-REPRESENTATION-OF-MUTUAL-INFO}
\mInf{\rvX}{\rvY}&=\frac{1}{2}\sum_{i=1}^{r}\log\bigg(\frac{1}{1-\ccaC[i][2]}\bigg).
\end{align}
The rank $r$ of the cross-covariance matrix $R_{\rvX\rvY}$ satisfies $0\leq r \leq \min\{m,n\}$ and for $r=0$ we have $i(\rvX;\rvY)\equiv 0$ almost surely and $\mInf{\rvX}{\rvY}=0$. This corresponds to $\meP[\rvX\rvY]=\meP[\rvX]\otimes\meP[\rvY]$ and the independence of \rvX\ and \rvY\ such that the information density is deterministic. Throughout the rest of the paper we exclude this degenerate case when the information density is considered and assume subsequently the setting and notation introduced above with $r \geq 1$.
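The representation \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} is straightforward to simulate. The following minimal Python sketch (with arbitrarily chosen canonical correlations, for illustration only) samples the right-hand side of \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} and confirms that the sample mean approaches the mutual information \eqref{EQ:SUM-REPRESENTATION-OF-MUTUAL-INFO}, while the sample variance approaches $\sum_{i=1}^{r}\ccaC[i][2]$.
\begin{verbatim}
# Monte Carlo check of the sum representation of the information
# density; the canonical correlations below are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
c = np.array([0.9, 0.7, 0.5])                      # canonical correlations
I_xy = 0.5 * np.sum(np.log(1.0 / (1.0 - c**2)))    # mutual information

xi = rng.standard_normal((200_000, c.size))        # i.i.d. N(0,1) samples
eta = rng.standard_normal((200_000, c.size))
J = 0.5 * (xi**2 - eta**2) @ c + I_xy              # information density

print(I_xy, J.mean())          # sample mean close to I(xi;eta)
print(np.sum(c**2), J.var())   # sample variance close to sum of c_i^2
\end{verbatim}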
Based on \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} we derive series representations of the probability density function (PDF) and the cumulative distribution function (CDF) of the information density $i(\rvX;\rvY)$ given subsequently in \Cref{thm:pdfinf} and \Cref{thm:cdfinf}.
These representations are useful as they allow tight approximations with errors as low as desired by finite sums
as shown in \Cref{SEC:NUMERICAL-APPROXIMATION}.
\begin{theorem}[PDF of information density]
\label{thm:pdfinf}
The PDF $f_{i(\xi;\eta)}$ of the information density $i(\rvX;\rvY)$ is given by
\begin{multline}\label{EQ:PDF-INFO-DENSITY}
f_{i(\rvX;\rvY)}(x)=\frac{1}{\ccaC[r]\sqrt{\pi}}\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\times\\
\frac{\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}
\left(\left|\frac{x-I(\xi;\eta)}{\ccaC[r]}\right|\right)}
{\Gamma\left(\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
\left|\frac{x-I(\xi;\eta)}{2\ccaC[r]}\right|^{\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)},
\qquad x\in\R\backslash\{I(\xi;\eta)\},
\end{multline}
%
where $\Gamma(\cdot)$ denotes the gamma function
\cite[Sec.\,5.2.1]{Olver2010}
and
$\mathrm{K}_{\alpha}(\cdot)$ denotes the modified Bessel function of second kind and order $\alpha$
\cite[Sec.\,10.25(ii)]{Olver2010}.
%
If $r \geq 2$ then $f_{i(\xi;\eta)}(x)$ is also well defined for ${x=I(\xi;\eta)}$.
\end{theorem}
\begin{theorem}[CDF of information density]
\label{thm:cdfinf}
The CDF $F_{i(\xi;\eta)}$ of the information density $i(\rvX;\rvY)$ is given by
\begin{equation*}
F_{i(\xi;\eta)}(x)=
\begin{dcases}
\rule{0ex}{3.5ex}\;\frac{1}{2}-V\left(I(\xi;\eta)-x\right)&\text{if}\quad x \leq I(\xi;\eta)\\
\;\frac{1}{2}+V\left(x-I(\xi;\eta)\right)&\text{if}\quad x > I(\xi;\eta)\\[1ex]
\end{dcases},
\end{equation*}
with $V(z)$ defined by
\begin{align}\nonumber
V(z)=&\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\frac{z}{2\ccaC[r]}
\times\\ \nonumber
&\bigg[
\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)+\\ \label{EQ:CDF-INFO-DENSITY}
&\mathrm{K}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\bigg],\qquad\quad z\geq 0,
\end{align}
where
$\mathrm{L}_{\alpha}(\cdot)$ denotes the modified Struve $\mathrm{L}$ function of order $\alpha$
\cite[Sec.\,11.2]{Olver2010}.
\end{theorem}
A simple but important special case, where the series representations in \Cref{thm:pdfinf} and \Cref{thm:cdfinf} simplify to a single summand, is considered in the following \namecref{COR:PDF-CDF-EQUAL-CORRELATIONS}.
\newpage
\begin{corollary}[PDF and CDF of information density for equal canonical correlations] \label{COR:PDF-CDF-EQUAL-CORRELATIONS}
If all canonical correlations are equal
\begin{equation*}
\ccaC[1]=\ccaC[2]=\ldots=\ccaC[r],
\end{equation*}
then the PDF $f_{i(\xi;\eta)}$ of the information density $i(\rvX;\rvY)$ simplifies to
%
%
\begin{equation}\label{EQ:PDF-INFO-DENSITY-EQUAL-CCA}
f_{i(\xi;\eta)}(x)=\frac{1}{\ccaC[r]\sqrt{\pi}\Gamma\left(\frac{r}{2}\right)}
\mathrm{K}_{\frac{r-1}{2}}
\left(\left|\frac{x-I(\xi;\eta)}{\ccaC[r]}\right|\right)
\left|\frac{x-I(\xi;\eta)}{2\ccaC[r]}\right|^{\frac{r-1}{2}},
\qquad x\in\R\backslash\{I(\xi;\eta)\},
\end{equation}
where $I(\xi;\eta)$ is given by
\begin{equation*}
I(\xi;\eta)=-\frac{r}{2}\log\left(1-\ccaC[r][2]\right).
\end{equation*}
%
If $r \geq 2$ then $f_{i(\xi;\eta)}(x)$ is also well defined for ${x=I(\xi;\eta)}$.
%
Further the CDF $F_{i(\xi;\eta)}$ is given by
%
\begin{equation}\label{EQ:CDF-INFO-DENSITY-EQUAL-CCA}
F_{i(\xi;\eta)}(x)=
\begin{dcases}
\rule{0ex}{3.5ex}\;\frac{1}{2}-V\left(I(\xi;\eta)-x\right)&\text{if}\quad x \leq I(\xi;\eta)\\
\;\frac{1}{2}+V\left(x-I(\xi;\eta)\right)&\text{if}\quad x > I(\xi;\eta)\\[1ex]
\end{dcases},
\end{equation}
with $V(z)$ defined by
\begin{equation*}
V(z)=\frac{z}{2\ccaC[r]}
\bigg[
\mathrm{K}_{\frac{r-1}{2}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-3}{2}}\left(\frac{z}{\ccaC[r]}\right)+
\mathrm{K}_{\frac{r-3}{2}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-1}{2}}\left(\frac{z}{\ccaC[r]}\right)
\bigg],\qquad z\geq 0.
\end{equation*}
\end{corollary}
Clearly, if all canonical correlations are equal, then the only nonzero term in the series \eqref{EQ:PDF-INFO-DENSITY} and \eqref{EQ:CDF-INFO-DENSITY} occurs for $k_1=k_2=\ldots=k_{r-1}=0$. For this single summand the product in square brackets in \eqref{EQ:PDF-INFO-DENSITY} and \eqref{EQ:CDF-INFO-DENSITY} is equal to $1$ by applying the convention $0^0=1$, which yields the results of \Cref{COR:PDF-CDF-EQUAL-CORRELATIONS}.
\begin{numpar}[Special cases of \Cref{COR:PDF-CDF-EQUAL-CORRELATIONS}]
The case when all canonical correlations are equal is important because it occurs in various situations. The subsequent cases follow from the properties of canonical correlations given in \Cref{PROPOSITION:CCA}.
\begin{inparaenum}[(i)]
\item Assume that
\begin{gather}\label{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-I}
\cor{\rvX[i]}{\rvY[i]}=\rho \neq 0,\qquad i=1,2,\ldots,k\leq\min\{m,n\}\\
\label{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-II}
\cor{\rvX[i]}{\rvY[i]}=0,\qquad i=k+1,\ldots,\min\{m,n\},\\\label{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-III}
\cor{\rvX[i]}{\rvX[j]}=0,\quad\cor{\rvY[i]}{\rvY[j]}=0,\quad\cor{\rvX[i]}{\rvY[j]}=0,\qquad i\neq j,
\end{gather}
where $\cor{\cdot}{\cdot}$ denotes the Pearson correlation coefficient.
Then $r=k$ and $\ccaC[i]=|\rho|$ for all $i=1,2,\ldots,r$.
Note that if $m=n=k$, then for \eqref{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-I}--\eqref{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-III} to hold it is sufficient that the two-dimensional random vectors $(\rvX[i],\rvY[i])$ are i.i.d. However, the identical distribution of the $(\rvX[i],\rvY[i])$'s is not necessary.
In Laneman \cite{Laneman2006} the distribution of the information density for an
additive white Gaussian noise channel with i.i.d.\ Gaussian input is determined. This is a special instance of the setting with i.i.d.\ random vectors $(\rvX[i],\rvY[i])$ just mentioned.
In Wu and Jindal \cite{WuJindal2011} and in Buckingham and Valenti \cite{BuckinghamValenti2008} an approximation of the information density by a Gaussian random variable is considered for the setting in \cite{Laneman2006}.
A special case very similar to that in \cite{Laneman2006} is also considered in Polyanskiy et al.\ \cite[Sec.\,III.J]{Polyanskiy2010}.
To the best of the authors' knowledge, explicit formulas for the general case considered in this paper are not yet available in the literature.
%
\item Assume that \eqref{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-I}--\eqref{EQ:INDEPENDENT-AND-EQUAL-CORRELATIONS-III} are satisfied. Further assume that $\hat{A}$ is a real nonsingular matrix of dimension $m\times m$ and $\hat{B}$ is a real nonsingular matrix of dimension $n\times n$. Then the random vectors
\begin{align*}
\rvXh=\hat{A}\,\rvX\qquad\text{ and }\qquad \rvYh=\hat{B}\,\rvY
\end{align*}
have the same canonical correlations as the random vectors \rvX\ and \rvY, \ie, $\ccaC[i]=|\rho|$ for all $i=1,2,\ldots,k\leq\min\{m,n\}$.
\item If $r=1$, \ie, the cross-covariance matrix $R_{\rvX\rvY}$ has rank $1$, then \Cref{COR:PDF-CDF-EQUAL-CORRELATIONS} obviously applies, and the PDF further simplifies to
\begin{equation*}
f_{i(\xi;\eta)}(x)=\frac{1}{\ccaC[r]\pi}
\mathrm{K}_{0}
\left(\left|\frac{x-I(\xi;\eta)}{\ccaC[r]}\right|\right),
\qquad x\in\mathbb{R}\backslash\{I(\xi;\eta)\}.
\end{equation*}
Clearly, the simplest special case with $r=1$ occurs for $n=m=1$, where $\ccaC[1]=|\cor{\rvX[1]}{\rvY[1]}|$.
As a simple multivariate example let the covariance matrix of $(\rvX[1],\rvX[2],\ldots,\rvX[m],\rvY[1],\rvY[2],\ldots,\rvY[n])$ be given by the Kac-Murdock-Szeg\"o matrix %
\begin{align*}
\begin{pmatrix}
R_{\rvX} & R_{\rvX\rvY}\\
\transpose{R_{\rvX\rvY}} & R_{\rvY}
\end{pmatrix}
=
\Big(\rho^{|i-j|}\Big)_{i,j=1}^{m+n},
\end{align*}
which is related to the covariance function of a
first-order autoregressive process, where $0<|\rho|<1$. Then $\rank(R_{\rvX\rvY})=1$ and $\ccaC[1]=|\rho|$.
\item As yet another example assume $m=n$ and $R_{\rvX\rvY}=\rho R_{\rvX}^{\frac{1}{2}}R_{\rvY}^{\frac{1}{2}}$ for some $0<|\rho|<1$. Then $\ccaC[i]=|\rho|$ for $i=1,2,\ldots,r=n$.
\end{inparaenum}
\end{numpar}
The rest of the paper is organized as follows. In \Cref{SEC:PRELIMINARIES} we provide some background on the canonical correlation analysis and its application to the calculation of the information density and mutual information for Gaussian random vectors. Furthermore, \Cref{SEC:PRELIMINARIES} contains
auxiliary results required for the proofs of \Cref{thm:pdfinf} and \Cref{thm:cdfinf} given in \Cref{SEC:PROOFS-OF-MAIN-RESULTS}.
Finite sum approximations and uniform bounds of the approximation error are derived in \Cref{SEC:NUMERICAL-APPROXIMATION}, where also some examples and illustrations are provided and the (in)validity of Gaussian approximations is discussed.
{Finally, \Cref{SECTION:CONLUSIONS} summarizes the paper. }
\section{Approximations and Numerical Examples}
\label{SEC:NUMERICAL-APPROXIMATION}
\subsection{Finite Sum Approximations}
If there are at least two distinct canonical correlations, then the PDF $f_{i(\rvX;\rvY)}$ and CDF $F_{i(\rvX;\rvY)}$ of the information density $i(\rvX;\rvY)$ are given by the infinite series in \Cref{thm:pdfinf} and \Cref{thm:cdfinf}.
If we consider only a finite number of summands in these representations then we obtain approximations suitable in particular for numerical calculations.
Let us consider for $r \geq 2$ and at least two distinct canonical correlations the following approximated PDF
\begin{multline}\label{EQ:APPROXIMATED-PDF}
\hat{f}_{i(\rvX;\rvY)}(x,n_1,n_2,\ldots,n_{r-1})=\frac{1}{\ccaC[r]\sqrt{\pi}}\sum_{k_{1}=0}^{n_1}\,\sum_{k_{2}=0}^{n_2}\dots
\sum_{k_{r-1}=0}^{n_{r-1}}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\times\\
\frac{\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}
\left(\left|\frac{x-I(\xi;\eta)}{\ccaC[r]}\right|\right)}
{\Gamma\left(\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
\left|\frac{x-I(\xi;\eta)}{2\ccaC[r]}\right|^{\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)},
\qquad x\in\R\backslash\{I(\xi;\eta)\},
\end{multline}
and CDF
\begin{equation}\label{EQ:APPROXIMATED-CDF}
\hat{F}_{i(\xi;\eta)}(x,n_1,n_2,\ldots,n_{r-1})=
\begin{dcases}
\rule{0ex}{3.5ex}\;\frac{1}{2}-\hat{V}\left(I(\xi;\eta)-x,n_1,n_2,\ldots,n_{r-1}\right)&\text{if}\quad x \leq I(\xi;\eta)\\
\;\frac{1}{2}+\hat{V}\left(x-I(\xi;\eta),n_1,n_2,\ldots,n_{r-1}\right)&\text{if}\quad x > I(\xi;\eta)\\[1ex]
\end{dcases},
\end{equation}
with $\hat{V}\left(z,n_1,n_2,\ldots,n_{r-1}\right)$ defined by
\begin{align*}
\hat{V}\left(z,n_1,n_2,\ldots,n_{r-1}\right)=&\sum_{k_{1}=0}^{n_1}\,\sum_{k_{2}=0}^{n_2}\dots
\sum_{k_{r-1}=0}^{n_{r-1}}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\frac{z}{2\ccaC[r]}
\times\\ \nonumber
&\bigg[
\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)+\\
&\mathrm{K}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\bigg],\qquad\quad z\geq 0,
\end{align*}
where $n_1,n_2,\ldots,n_{r-1}\in\N_0$ are constants specifying the number of summands taken into account for the approximation.
The following \namecref{THEOREM:APPROXIMATION-ERROR-PDF-CDF} provides suitable error bounds related to the approximation of the PDF and CDF by finite sums.
\begin{theorem}[Bounds of the approximation error for PDF and CDF]\label{THEOREM:APPROXIMATION-ERROR-PDF-CDF}
Let $r \geq 2$ and assume that at least two canonical correlations are distinct. Then we have the following error bounds
\begin{multline}\label{EQ:ERROR-BOUND-PDF-OF-IDENS}
\big|{f}_{i(\rvX;\rvY)}(x)-\hat{f}_{i(\rvX;\rvY)}(x,n_1,n_2,\ldots,n_{r-1})\big|\leq\\
\frac{\Gamma\left(\frac{r-1}{2}+n_1+n_2+\ldots+n_{r-1}\right)}{2\ccaC[r]\sqrt{\pi}\,\Gamma\left(\frac{r}{2}+n_1+n_2+\ldots+n_{r-1}\right)}\big(1-\hat{S}\left(n_1,n_2,\ldots,n_{r-1}\right)\big), \qquad x\in\R,
\end{multline}
and
\begin{align}\label{EQ:ERROR-BOUND-CDFV-OF-IDENS}
\big|V(z)-\hat{V}(z,n_1,n_2,\ldots,n_{r-1})\big|\leq
\frac{1}{2}\big(1-\hat{S}\left(n_1,n_2,\ldots,n_{r-1}\right)\big), \qquad z\geq 0,
\end{align}
where
\begin{align*}
\hat{S}\left(n_1,n_2,\ldots,n_{r-1}\right)=&\sum_{k_{1}=0}^{n_1}\,\sum_{k_{2}=0}^{n_2}\dots
\sum_{k_{r-1}=0}^{n_{r-1}}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right].
\end{align*}
\end{theorem}
\begin{proof}
From the CDF given in \Cref{COR:PDF-CDF-EQUAL-CORRELATIONS} for the special case where all canonical correlations are equal, we can conclude that the function
\begin{align}\label{EQ:STRUVE-MCDONALD-FUNCTION}
z\mapsto z
\Big[
\mathrm{K}_{\alpha}\left(z\right)
\mathrm{L}_{\alpha-1}\left(z\right)+
\mathrm{K}_{\alpha-1}\left(z\right)
\mathrm{L}_{\alpha}\left(z\right)
\Big],\qquad z \geq 0,
\end{align}
is monotonically increasing for all $\alpha=(j-1)/2$, $j\in\N$, and that further
\begin{align}\label{EQ:LIMIT-STRUVE-MCDONALD}
\lim_{z\to \infty}z
\Big[
\mathrm{K}_{\alpha}\left(z\right)
\mathrm{L}_{\alpha-1}\left(z\right)+
\mathrm{K}_{\alpha-1}\left(z\right)
\mathrm{L}_{\alpha}\left(z\right)
\Big]=1
\end{align}
%
holds. Using \eqref{EQ:LIMIT-STRUVE-MCDONALD} we obtain from \eqref{EQ:CDF-INFO-DENSITY}
%
\begin{align*}
\lim_{z\to \infty}2V(z)=\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\end{align*}
%
by exchanging the limit and the summation, which is justified by the monotone convergence theorem.
Due to the properties of the CDF we have $\lim_{z\to \infty}2V(z)=1$ and therefore
%
\begin{align}\label{EQ:SERIES-OF-PRODUCTS}
\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right] =1.
\end{align}
We now obtain
\begin{align*}
\big|V(z)-\hat{V}(z,n_1,n_2,\ldots,n_{r-1})\big|
=\;
&\frac{1}{2}\sum_{(k_1,k_2,\ldots,k_{r-1})\in N(n_1,n_2,\ldots,n_{r-1})}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\times\\
&\frac{z}{\ccaC[r]}\bigg[
\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)\\
&\quad+\mathrm{K}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\bigg]\\
%
\leq\; & \frac{1}{2}\sum_{(k_1,k_2,\ldots,k_{r-1})\in N(n_1,n_2,\ldots,n_{r-1})}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]\\
%
=\;&\frac{1}{2}\big(1-\hat{S}\left(n_1,n_2,\ldots,n_{r-1}\right)\big),
\end{align*}
where $N(n_1,n_2,\ldots,n_{r-1})=\N_0^{r-1}\setminus\big(\{0,1,\ldots,n_1\}\times\{0,1,\ldots,n_2\}\times\ldots\times\{0,1,\ldots,n_{r-1}\}\big)$.
The inequality follows from the monotonicity of the function in \eqref{EQ:STRUVE-MCDONALD-FUNCTION} and from \eqref{EQ:LIMIT-STRUVE-MCDONALD}. The last equality follows from \eqref{EQ:SERIES-OF-PRODUCTS}.
Similarly, we obtain
\begin{align*}
&\hspace*{-1em}\big|{f}_{i(\rvX;\rvY)}(x)-\hat{f}_{i(\rvX;\rvY)}(x,n_1,n_2,\ldots,n_{r-1})\big|\\
=\;
&\frac{1}{\ccaC[r]\sqrt{\pi}}\sum_{(k_1,k_2,\ldots,k_{r-1})\in N(n_1,n_2,\ldots,n_{r-1})}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\times\\
&\hspace*{8em}\frac{\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}
\left(\left|\frac{x-I(\xi;\eta)}{\ccaC[r]}\right|\right)}
{\Gamma\left(\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
\left|\frac{x-I(\xi;\eta)}{2\ccaC[r]}\right|^{\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)} \\[1ex]%
%
\leq\; & \frac{1}{\ccaC[r]\sqrt{\pi}}\sum_{(k_1,k_2,\ldots,k_{r-1})\in N(n_1,n_2,\ldots,n_{r-1})}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]\frac{\Gamma\left(\frac{r-1}{2}+k_1+k_2+\ldots+k_{r-1}\right)}{2\,\Gamma\left(\frac{r}{2}+k_1+k_2+\ldots+k_{r-1}\right)}\\[1ex]
%
\leq\; &\frac{\Gamma\left(\frac{r-1}{2}+n_1+n_2+\ldots+n_{r-1}\right)}{2\ccaC[r]\sqrt{\pi}\,\Gamma\left(\frac{r}{2}+n_1+n_2+\ldots+n_{r-1}\right)}\big(1-\hat{S}\left(n_1,n_2,\ldots,n_{r-1}\right)\big),
\end{align*}
where for the first inequality we have used \Cref{PROP:PROPERTIES-OF-BESSEL-FUNCTION} and for the second inequality we have used \eqref{EQ:SERIES-OF-PRODUCTS} and the decreasing monotonicity of $\Gamma(\alpha)/\Gamma(\alpha+\frac{1}{2})$ \wrt\ $\alpha\geq\frac{1}{2}$. This completes the proof.
\end{proof}
\begin{remark}
Note that the bound in \eqref{EQ:ERROR-BOUND-PDF-OF-IDENS} can be further simplified using the inequality
\begin{align*}
\frac{\Gamma(\alpha)}{\Gamma\left(\alpha+\frac{1}{2}\right)}\leq\sqrt{\pi}.
\end{align*}
Further note that the derived error bounds are uniform in the sense that they only depend on the parameters of the given Gaussian distribution and the number of summands considered. As can be seen from \eqref{EQ:SERIES-OF-PRODUCTS}, the bounds converge to zero as $n_1,n_2,\ldots,n_{r-1}$ jointly increase.
\end{remark}
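In practice the truncated series are easy to evaluate with standard special-function libraries. The following Python sketch is an illustration under our own choice of $r=2$ and canonical correlations (not code from this paper); it evaluates $\hat{f}_{i(\rvX;\rvY)}$ from \eqref{EQ:APPROXIMATED-PDF} together with the tail term $1-\hat{S}$ entering the bounds of \Cref{THEOREM:APPROXIMATION-ERROR-PDF-CDF}.
\begin{verbatim}
# Truncated PDF of Theorem 1 for r = 2 and the coefficient sum S_hat
# of Theorem 3; rho1 > rho2 are illustrative values.
import numpy as np
from scipy.special import kv, gammaln

rho1, rho2 = 0.9, 0.5
I_xy = -0.5 * (np.log(1 - rho1**2) + np.log(1 - rho2**2))

def log_coeff(k):
    # log of (rho2/rho1) (2k)!/((k!)^2 4^k) (1 - rho2^2/rho1^2)^k
    return (np.log(rho2 / rho1) + gammaln(2 * k + 1) - 2 * gammaln(k + 1)
            - k * np.log(4.0) + k * np.log(1.0 - rho2**2 / rho1**2))

def pdf_hat(x, n1):
    z = np.abs(x - I_xy) / rho2
    total = 0.0
    for k in range(n1 + 1):
        order = 0.5 + k                  # (r-1)/2 + k_1 with r = 2
        total += (np.exp(log_coeff(k) - gammaln(1.0 + k))
                  * kv(order, z) * (z / 2.0) ** order)
    return total / (rho2 * np.sqrt(np.pi))

def tail(n1):                            # 1 - S_hat(n1), cf. Theorem 3
    k = np.arange(n1 + 1)
    return 1.0 - np.exp(log_coeff(k)).sum()

print(pdf_hat(I_xy + 0.3, 40), tail(40))
\end{verbatim}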
\subsection{Numerical Examples and Illustrations}
We illustrate the results of this paper with some examples. First, we consider the special case of \Cref{COR:PDF-CDF-EQUAL-CORRELATIONS} when all canonical correlations are equal.
The PDF and CDF given by \eqref{EQ:PDF-INFO-DENSITY-EQUAL-CCA} and \eqref{EQ:CDF-INFO-DENSITY-EQUAL-CCA} are illustrated in \Cref{FIGURE:ILLUSTRATION-PDF-CASE-I} and
\ref{FIGURE:ILLUSTRATION-CDF-CASE-I} in centered form, \ie, shifted by $I(\rvX;\rvY)$, for $r\in\{1,2,3,4,5\}$ and equal canonical correlations $\ccaC[i]=0.9$, $i=1,\ldots,r$.
In \Cref{FIGURE:ILLUSTRATION-PDF-CASE-II} and \ref{FIGURE:ILLUSTRATION-CDF-CASE-II}
a fixed number of $r=5$ equal canonical correlations $\ccaC[i]\in\{0.1,0.2,0.5,0.7,0.9\}, i=1,\ldots,r$ is considered.
When all canonical correlations are equal, the central limit theorem implies that the distribution of the information density $i(\rvX;\rvY)$ converges to a Gaussian distribution as $r\rightarrow\infty$.
\Cref{FIGURE:ILLUSTRATION-PDF-CASE-III} and
\ref{FIGURE:ILLUSTRATION-CDF-CASE-III} show for $r\in\{5,10,20,40\}$ and equal canonical correlations $\ccaC[i]=0.2$, $i=1,2,\ldots,r$, the PDF and CDF of the information density together with corresponding Gaussian approximations.
The approximations are obtained by considering Gaussian distributions, which have the same variance as the information density $i(\rvX;\rvY)$. The variance is given by
\begin{align*}
\var{i(\rvX;\rvY)}=\sum_{i=1}^r\ccaC[i][2],
\end{align*}
as can be easily derived from the representation in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY}.
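Indeed, since $\rvXtd[i][2]$ and $\rvYtd[i][2]$ are independent chi-squared random variables with one degree of freedom and hence variance $2$, each summand in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} contributes
\begin{align*}
\var{\tfrac{\ccaC[i]}{2}\big(\rvXtd[i][2]-\rvYtd[i][2]\big)}=\frac{\ccaC[i][2]}{4}(2+2)=\ccaC[i][2],
\end{align*}
and the independence of the summands yields the stated expression.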
The illustrations show that a large number of equal canonical correlations is required for the distribution of the information density to be approximately Gaussian.
To illustrate the case with different canonical correlations let us consider the sequence $\{\ccaC[1](T),\ccaC[2](T),$ $\ldots,\ccaC[r](T)\}$ with
\begin{align}\label{EQ:CCA-OU-AWGN}
\ccaC[i](T)=\sqrt{\frac{T^2}{T^2+\pi\left(i-\frac{1}{2}\right)^2}},\qquad i=1,2,\ldots,r.
\end{align}
These canonical correlations are related to the information density of a continuous-time additive white Gaussian noise channel confined to a finite time interval $[0,T]$ with a Brownian motion as input signal (see e.\,g.\ Huffmann \cite[Sec.\,8.1]{Huffmann2021} for more details).
\Cref{FIGURE:ILLUSTRATION-PDF-CASE-IV} and \ref{FIGURE:ILLUSTRATION-CDF-CASE-IV} show the approximated PDF $\hat{f}_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ and CDF $\hat{F}_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{2,5,10,15\}$ and $T=1$ using the finite sums \eqref{EQ:APPROXIMATED-PDF} and
\eqref{EQ:APPROXIMATED-CDF}. The bounds of the approximation error given in \Cref{THEOREM:APPROXIMATION-ERROR-PDF-CDF} are chosen small enough that further lowering the approximation error produces no visible differences in the plots.
The number $n=n_1+n_2+\ldots+n_{r-1}$ of summands required in \eqref{EQ:APPROXIMATED-PDF} and
\eqref{EQ:APPROXIMATED-CDF} to achieve these error bounds for $r\in\{2,5,10,15\}$ equals $13$, $127$, $576$, and $1349$, respectively.
Choosing $r$ larger than $15$ for the canonical correlations \eqref{EQ:CCA-OU-AWGN} with $T=1$ does not result in visible changes of the PDF and CDF compared to $r=15$.
This demonstrates together with \Cref{FIGURE:ILLUSTRATION-PDF-CASE-IV} and \ref{FIGURE:ILLUSTRATION-CDF-CASE-IV} that a Gaussian approximation is not valid for this example, even if $r\rightarrow\infty$.
This conclusion holds whenever the canonical correlations have a decaying behaviour as in this example.
\section{Summary of Contributions}
\label{SECTION:CONLUSIONS}
In this paper we derived series representations of the PDF and CDF of the information density for arbitrary Gaussian random vectors using canonical correlation analysis. We provided closed-form expressions for the important special case where all canonical correlations are equal, and derived error bounds for finite sum approximations of the general case.
These approximations are suitable for arbitrarily accurate numerical calculations, where the approximation error can be easily determined with the derived error bounds.
Furthermore, we provided examples showing the (in)validity of approximating the information density with a Gaussian random variable.
\newpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/pdf-rho-09-r-1-5.pdf}%
\caption{PDF $f_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{1,2,3,4,5\}$ equal canonical correlations $\ccaC[i]=0.9$.}%
\label{FIGURE:ILLUSTRATION-PDF-CASE-I}%
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/cdf-rho-09-r-1-5.pdf}%
\caption{CDF $F_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{1,2,3,4,5\}$ equal canonical correlations $\ccaC[i]=0.9$.}%
\label{FIGURE:ILLUSTRATION-CDF-CASE-I}
\end{figure}
\newpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/pdf-rho-09-07-05-02-01-r5.pdf}%
\caption{PDF $f_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r=5$ equal canonical correlations $\ccaC[i]\in\{0.1,0.2,0.5,0.7,0.9\}$.}%
\label{FIGURE:ILLUSTRATION-PDF-CASE-II}%
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/cdf-rho-09-07-05-02-01-r5.pdf}%
\caption{CDF $F_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r=5$ equal canonical correlations $\ccaC[i]\in\{0.1,0.2,0.5,0.7,0.9\}$.}%
\label{FIGURE:ILLUSTRATION-CDF-CASE-II}
\end{figure}
\newpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/pdf-rho-02-r-5-10-20-40-vs-gaussian.pdf}%
\caption{PDF $f_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{5,10,20,40\}$ equal canonical correlations $\ccaC[i]=0.2$ vs. Gaussian approximation.}%
\label{FIGURE:ILLUSTRATION-PDF-CASE-III}%
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/cdf-rho-02-r-5-10-20-40-vs-gaussian.pdf}%
\caption{CDF $F_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{5,10,20,40\}$ equal canonical correlations $\ccaC[i]=0.2$ vs. Gaussian approximation.}%
\label{FIGURE:ILLUSTRATION-CDF-CASE-III}
\end{figure}
\newpage
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/pdf-decreasing-rho-ou-awgn-noise-r2-5-10-15-vs-gaussian.pdf}%
\caption{Approximated PDF $\hat{f}_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{2,5,10,15\}$ canonical correlations $\ccaC[i](T)$ given in \eqref{EQ:CCA-OU-AWGN} for $T=1$ (approximation error $<2\mathrm{e}\text{-}02$) vs. Gaussian approximation ($r=15$).}%
\label{FIGURE:ILLUSTRATION-PDF-CASE-IV}%
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\columnwidth]{./figures/cdf-decreasing-rho-ou-awgn-noise-r2-5-10-15-vs-gaussian.pdf}%
\caption{Approximated CDF $\hat{F}_{i(\rvX;\rvY)-I(\rvX;\rvY)}$ for $r\in\{2,5,10,15\}$ canonical correlations $\ccaC[i](T)$ given in \eqref{EQ:CCA-OU-AWGN} for $T=1$ (approximation error $<4\mathrm{e}\text{-}02$) vs. Gaussian approximation ($r=15$).}%
\label{FIGURE:ILLUSTRATION-CDF-CASE-IV}
\end{figure}
\section{Preliminaries}
\label{SEC:PRELIMINARIES}
First introduced by Hotelling \cite{Hotelling1936}, the canonical correlation analysis is a widely used linear method in multivariate statistics to determine the maximum correlations between two sets of random variables.
It allows a particularly simple and useful representation of the mutual information and the information density of Gaussian random vectors in terms of the so-called canonical correlations. This representation was first obtained by Gelfand and Yaglom \cite{Gelfand1959} and further extended by Pinsker \cite[Ch.\,9]{Pinsker1964}.
For the convenience of the reader we summarize the essence of the canonical correlation analysis in the subsequent \Cref{PROPOSITION:CCA} and demonstrate how it is applied to derive the representations in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} and \eqref{EQ:SUM-REPRESENTATION-OF-MUTUAL-INFO}. The formulation given below is particularly suitable for implementations.
The results regarding the canonical correlation analysis are given without proof. Corresponding details and thorough discussions can be found, \eg, in H\"ardle and Simar \cite{Haerdle2015}, Koch \cite{Koch2014} or Timm \cite{Timm2002}.
\begin{numpar}[Canonical correlation analysis] \label{PROPOSITION:CCA}
Based on the nonsingular covariance matrices $R_{\rvX}$ and $R_{\rvY}$ of the random vectors $\rvX=(\rvX[1],\rvX[2],\ldots,\rvX[m])$ and $\rvY=(\rvY[1],\rvY[2],\ldots,\rvY[n])$, and the cross-covariance matrix $R_{\rvX\rvY}$ with rank $r=\rank(R_{\xi\eta})$ satisfying $0\leq r\leq\min\{m,n\}$ define the matrix
\begin{equation*}
M=R_{\rvX}^{-\frac{1}{2}}R_{\rvX\rvY}R_{\rvY}^{-\frac{1}{2}},
\end{equation*}
where $R_{\rvX}^{-\frac{1}{2}}$ and $R_{\rvY}^{-\frac{1}{2}}$ can be obtained by diagonalizing $R_{\rvX}$ and $R_{\rvY}$.
Then the matrix $M$ has a singular value decomposition
\begin{equation*}
M=UD\transpose{V},
\end{equation*}
where the only non-zero entries $d_{1,1},d_{2,2},\ldots,d_{r,r}>0$ of the matrix $D= \big(d_{i,j}\big)_{i,j=1}^{m,n}$ are called canonical correlations of \rvX\ and \rvY,
denoted by $\ccaC[i]=d_{i,i},i=1,2,\ldots,r$. The singular value decomposition can be chosen such that $\ccaC[1]\geq\ccaC[2]\geq\ldots\geq\ccaC[r]$ holds, which is assumed throughout the paper.
Define the random vectors $\rvXh=(\rvXh[1],\rvXh[2],\ldots,\rvXh[m])$ and $\rvYh=(\rvYh[1],\rvYh[2],\ldots,\rvYh[n])$ by
\begin{align*}
\rvXh=A\,\rvX\qquad\text{ and }\qquad\rvYh=B\,\rvY,
\end{align*}
where the nonsingular matrices $A$ and $B$ are given by
\begin{align*}
A=\transpose{U}R_{\rvX}^{-\frac{1}{2}}\qquad\text{ and }\qquad B=\transpose{V}R_{\rvY}^{-\frac{1}{2}}.
\end{align*}
Then the random variables $\rvXh[i],\rvYh[j]$ have the following correlation properties
\begin{gather}\label{EQ:CCA-CORRELATIONS-I}
\cor{\rvXh[i]}{\rvYh[i]}=\ccaC[i],\qquad i=1,2,\ldots,r,\\\label{EQ:CCA-CORRELATIONS-II}
\cor{\rvXh[i]}{\rvYh[i]}=0,\qquad i=r+1,\ldots,\min\{m,n\},\\\label{EQ:CCA-CORRELATIONS-III}
\cor{\rvXh[i]}{\rvXh[j]}=\cor{\rvYh[i]}{\rvYh[j]}=\cor{\rvXh[i]}{\rvYh[j]}=0,\qquad i \neq j,
\end{gather}
and the variances are all equal to $1$
\begin{align*}
\var{\rvXh[i]}=1,\qquad i=1,2,\ldots,m,\qquad\var{\rvYh[j]}=1,\qquad j=1,2,\ldots,n.
\end{align*}
\end{numpar}
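Since this formulation is stated entirely in terms of standard matrix operations, it translates directly into code. The following Python sketch (illustrative only) computes the canonical correlations as the singular values of $M$ and, as a test case, reproduces the rank-one Kac-Murdock-Szeg\"o example discussed above, for which $\ccaC[1]=|\rho|$.
\begin{verbatim}
# Canonical correlations as singular values of M = Rx^{-1/2} Rxy Ry^{-1/2}.
import numpy as np

def inv_sqrt(R):
    w, V = np.linalg.eigh(R)            # R symmetric positive definite
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def canonical_correlations(Rx, Rxy, Ry):
    M = inv_sqrt(Rx) @ Rxy @ inv_sqrt(Ry)
    return np.linalg.svd(M, compute_uv=False)   # descending order

# Kac-Murdock-Szego test case: rank(Rxy) = 1 and c_1 = |rho|.
m, n, rho = 3, 4, 0.6
R = rho ** np.abs(np.subtract.outer(np.arange(m + n), np.arange(m + n)))
print(canonical_correlations(R[:m, :m], R[:m, m:], R[m:, m:]))
\end{verbatim}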
\begin{numpar}[Mutual information and information density in terms of canonical correlations]
Based on the results of \Cref{PROPOSITION:CCA} we obtain for the mutual information and the information density
\begin{alignat*}{2}
\mInf{\rvX}{\rvY}&=\mInf{A\rvX}{B\rvY}&&=\mInf{\rvXh}{\rvYh}\\
i(\rvX;\rvY)&=i(A\rvX;B\rvY)&&=i(\rvXh;\rvYh)\qquad\text{(almost surely)}
\end{alignat*}
because $A$ and $B$ are nonsingular matrices, which follows, \eg, from Pinsker \cite[Th.\,3.7.1]{Pinsker1964}.
Since we consider the case where \rvX\ and \rvY\ are jointly Gaussian, \rvXh\ and \rvYh\ are jointly Gaussian as well. Therefore, the conditions in \eqref{EQ:CCA-CORRELATIONS-I}--\eqref{EQ:CCA-CORRELATIONS-III} imply that all random variables $\rvXh[i],\rvYh[j]$ are independent except for the pairs $(\rvXh[i],\rvYh[i])$, $i=1,2,\ldots,r$.
This implies
\begin{align}\label{EQ:MUTUAL-INFO-CCA-SUM}
\mInf{\rvX}{\rvY}&=\sum_{i=1}^r\mInf{\rvXh[i]}{\rvYh[i]}\\\label{EQ:INFO-DENSITY-CCA-SUM}
i(\rvX;\rvY)&=\sum_{i=1}^r i(\rvXh[i];\rvYh[i])\qquad \text{(almost surely)}
\end{align}
where $i(\rvXh[1];\rvYh[1]),i(\rvXh[2];\rvYh[2]),\ldots,i(\rvXh[r];\rvYh[r])$ are independent. The sum representations follow from the chain rules of mutual information and information density and the equivalence between independence and vanishing mutual information and information density.
Then \eqref{EQ:SUM-REPRESENTATION-OF-MUTUAL-INFO} is obtained from \eqref{EQ:MUTUAL-INFO-CCA-SUM}, \eqref{EQ:CCA-CORRELATIONS-I}, and the formula of mutual information for the bivariate Gaussian case.
Since \rvXh[i]\ and \rvYh[i]\ are jointly Gaussian with zero mean, unit variance, and correlation $\cor{\rvXh[i]}{\rvYh[i]}=\ccaC[i]$ the information density $i(\rvXh[i];\rvYh[i])$ is given by
\begin{align}\label{EQ:INFO-DENSITY-STD-GAUSSIAN}
i(\rvXh[i];\rvYh[i])=-\frac{1}{2}\log(1-\ccaC[i][2])-\frac{\ccaC[i][2]}{2(1-\ccaC[i][2])}\bigg(\rvXh[i][2]-\frac{2\,\rvXh[i]\rvYh[i]}{\ccaC[i]}+\rvYh[i][2]\bigg),\qquad i=1,2,\ldots,r.
\end{align}
Now assume $\rvXtd[1], \rvXtd[2],\ldots,\rvXtd[r], \rvYtd[1], \rvYtd[2],\ldots, \rvYtd[r]$ are i.i.d.\ Gaussian random variables with zero mean and unit variance. Then for all $i=1,2,\ldots,r$ the distribution of the random vector
\begin{align*}
\frac{1}{\sqrt{2}}
\begin{pmatrix}
\sqrt{1+\ccaC[i]} & \sqrt{1-\ccaC[i]}\\
\sqrt{1+\ccaC[i]} & -\sqrt{1-\ccaC[i]}
\end{pmatrix}
\begin{pmatrix}
\rvXtd[i]\\
\rvYtd[i]
\end{pmatrix}
=\frac{1}{\sqrt{2}}\begin{pmatrix}
\sqrt{1+\ccaC[i]}\,\rvXtd[i] +\sqrt{1-\ccaC[i]}\,\rvYtd[i]\\
\sqrt{1+\ccaC[i]}\,\rvXtd[i] -\sqrt{1-\ccaC[i]}\,\rvYtd[i]
\end{pmatrix}
\end{align*}
coincides with the distribution of the random vector $(\rvXh[i],\rvYh[i])$.
Plugging this into \eqref{EQ:INFO-DENSITY-STD-GAUSSIAN} we obtain together with
\eqref{EQ:INFO-DENSITY-CCA-SUM} that the distribution of the information density $i(\rvX;\rvY)$ coincides with the distribution of
\eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY}.
\end{numpar}
To prove \Cref{thm:pdfinf} the following \namecref{LEMMA:CF-INFODENSITY} regarding the characteristic function of the information density is utilized.
The result of the \namecref{LEMMA:CF-INFODENSITY} is also used in Ibragimov and Rozanov \cite{Ibragimov1970} but without proof. Therefore, the proof is given below for completeness.
\begin{lemma}[Characteristic function of (shifted) information density]\label{LEMMA:CF-INFODENSITY}
The characteristic function of the shifted information density $i(\rvX;\rvY)-\mInf{\rvX}{\rvY}$ is equal to the characteristic function of the random variable
\begin{align}\label{EQ:SUM-REPRESENTATION-OF-INFODENSITY-SHIFTED}
\rvIdnsII &=\frac{1}{2}\sum_{i=1}^r\ccaC[i]\big(\rvXtd[i][2]-\rvYtd[i][2]\big),
\end{align}
where $\rvXtd[1], \rvXtd[2],\ldots,\rvXtd[r], \rvYtd[1], \rvYtd[2],\ldots, \rvYtd[r]$ are i.i.d.\ Gaussian random variables with zero mean and unit variance, and $\ccaC[1],\ccaC[2],\ldots,\ccaC[r]$ are the canonical correlations of \rvX\ and \rvY. The characteristic function of \rvIdnsII\ is given by
\begin{equation}
\label{eq:prodv}
\varphi_{\rvIdnsII}(t)=\prod_{i=1}^{r}\frac{1}{\sqrt{1+\ccaC[i][2]t^{2}}},\qquad\qquad
t\in\R.
\end{equation}
\end{lemma}
\begin{proof} Due to \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY} the distribution of the shifted information density $i(\rvX;\rvY)-\mInf{\rvX}{\rvY}$ coincides with the distribution of the random variable \rvIdnsII\ in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY-SHIFTED} such that the characteristic functions of $i(\rvX;\rvY)-\mInf{\rvX}{\rvY}$ and \rvIdnsII\ are equal.
It is a well known fact that
$\rvXtd[i][2]$ and $\rvYtd[i][2]$ in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY-SHIFTED} are chi-squared distributed random variables with one
degree of freedom. Moreover, for the weighted random variables $\frac{\ccaC[i]}{2}\rvXtd[i][2]$ and
$\frac{\ccaC[i]}{2}\rvYtd[i][2]$ we obtain the following relation for the probability distribution
for a parameter $a\geq0$:
\begin{equation}\label{EQ:GAMMA-TRAFO}
\meP\bigg(\frac{\ccaC[i]}{2}\rvXtd[i][2]\leq a\bigg)=\meP\bigg(-\sqrt{2\ccaC[i][-1]a}
\leq\rvXtd[i]\leq\sqrt{2\ccaC[i][-1]a}\,\bigg)
=\frac{1}{\sqrt{2\pi}}\int\limits_{x=-\sqrt{2\ccaC[i][-1]a}}^{\sqrt{2\ccaC[i][-1]a}}\exp\bigg(-\frac{x^{2}}{2}\bigg)\dx x
\end{equation}
%
Substitution of $x=\sqrt{2\ccaC[i][-1]\tilde{x}}$ in \eqref{EQ:GAMMA-TRAFO} yields together
with the symmetry of the integral
%
\begin{equation}\label{EQ:GAMMA-TRAFO-II}
\meP\bigg(\frac{\ccaC[i]}{2}\rvXtd[i][2]\leq a\bigg)=\int\limits_{\tilde{x}=0}^{a}
\frac{1}{\Gamma\left(\frac{1}{2}\right)\sqrt{\ccaC[i]\tilde{x}}}
\exp\left(-\frac{\tilde{x}}{\ccaC[i]}\right)\dx\tilde{x}.
\end{equation}
%
From \eqref{EQ:GAMMA-TRAFO-II} the weighted random variables $\frac{\ccaC[i]}{2}\rvXtd[i][2]$ and
$\frac{\ccaC[i]}{2}\rvYtd[i][2]$ are seen to be gamma distributed with scale
parameter $\ccaC[i]$ and shape parameter $1/2$. The characteristic function of these
random variables therefore admits the form
\begin{equation*}
\varphi_{\frac{\ccaC[i]}{2}\rvXtd[i][2]}(t)=\left(1-\jmath\,\ccaC[i]t\right)^{-\frac{1}{2}}.
\end{equation*}
Furthermore, from the identity
$\varphi_{-\frac{\ccaC[i]}{2}\rvXtd[i][2]}(t)=\varphi_{\frac{\ccaC[i]}{2}\rvXtd[i][2]}(-t)$ for
the characteristic function and from the independence of $\rvXtd[i]$ and $\rvYtd[i]$ we obtain the
characteristic function of $\rvIdnsII_{i}=\frac{\ccaC[i]}{2}(\rvXtd[i][2]-\rvYtd[i][2])$ to be
given by
\begin{equation*}
\varphi_{\rvIdnsII_{i}}(t)=\left(1-\jmath\,\ccaC[i]t\right)^{-\frac{1}{2}}\left(1+\jmath\,\ccaC[i]t\right)^{-\frac{1}{2}}
=\left(1+\ccaC[i][2]t^{2}\right)^{-\frac{1}{2}}.
\end{equation*}
Finally, because $\rvIdnsII$ in \eqref{EQ:SUM-REPRESENTATION-OF-INFODENSITY-SHIFTED} is given by the sum of the independent random variables
$\rvIdnsII_{i}$ the characteristic function of $\rvIdnsII$ results from multiplying the individual characteristic functions of the random variables $\rvIdnsII_{i}$. By doing so we obtain \eqref{eq:prodv}, which completes the proof.
\end{proof}
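The product form \eqref{eq:prodv} is also easy to verify numerically. The following Python sketch (with arbitrary canonical correlations, for illustration only) compares the empirical characteristic function of \rvIdnsII\ with \eqref{eq:prodv}; the imaginary part vanishes by the symmetry of \rvIdnsII.
\begin{verbatim}
# Monte Carlo check of the characteristic function product formula.
import numpy as np

rng = np.random.default_rng(1)
c = np.array([0.8, 0.4])
xi = rng.standard_normal((100_000, c.size))
eta = rng.standard_normal((100_000, c.size))
V = 0.5 * (xi**2 - eta**2) @ c

t = np.linspace(-3.0, 3.0, 7)
emp = np.exp(1j * np.outer(t, V)).mean(axis=1).real
closed = np.prod(1.0 / np.sqrt(1.0 + np.outer(t**2, c**2)), axis=1)
print(np.abs(emp - closed).max())        # small Monte Carlo error
\end{verbatim}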
\begin{proposition}[Properties related to the function $\mathrm{K}_{\alpha}$]\label{PROP:PROPERTIES-OF-BESSEL-FUNCTION}
For all $\alpha\in\R$ the function
\begin{align*}
y \mapsto y^{\alpha} \mathrm{K}_{\alpha}(y),\qquad y\in(0,\infty),
\end{align*}
where $\mathrm{K}_{\alpha}(\cdot)$ denotes the modified Bessel function of second kind and order $\alpha$ \cite[Sec.\,10.25(ii)]{Olver2010}, is strictly positive and strictly monotonically decreasing. Furthermore, if $\alpha>0$ then we have
\begin{align}\label{EQ:PROP-SUPREMUM}
\lim_{y\rightarrow+0} y^{\alpha} \mathrm{K}_{\alpha}(y)=\sup_{y\in(0,\infty)} y^{\alpha} \mathrm{K}_{\alpha}(y)=\Gamma(\alpha)2^{\alpha-1}.
\end{align}
\end{proposition}
\begin{proof}
If $\alpha\in\R$ is fixed, then $\mathrm{K}_{\alpha}(y)$ is strictly positive and strictly monotonically decreasing \wrt\ $y\in(0,\infty)$ due to \cite[Secs.\,10.27.3 and 10.37]{Olver2010}. Furthermore, we obtain
\begin{align*}
\frac{\dx y^{\alpha} \mathrm{K}_{\alpha}(y)}{\dx y} = -y^{\alpha} \mathrm{K}_{\alpha-1}(y),\qquad y\in(0,\infty)
\end{align*}
by applying the rules to calculate derivatives of Bessel functions given in \cite[Sec.\,10.29(ii)]{Olver2010}. It follows that $y^{\alpha} \mathrm{K}_{\alpha}(y)$ is strictly positive and strictly monotonically decreasing \wrt\ $y\in(0,\infty)$ for all fixed $\alpha\in\R$.
Consider now the Basset integral formula as given in \cite[Sec.\,10.32.11]{Olver2010}
\begin{align}
\label{eq:bassetint}
\mathrm{K_{\alpha}}(yz)=\frac{\Gamma\left(\alpha+\frac{1}{2}\right)(2z)^{\alpha}}{y^{\alpha}\sqrt{\pi}}
\int\limits_{u=0}^{\infty}\frac{\cos(uy)}{\left(u^{2}+z^{2}\right)^{\alpha+\frac{1}{2}}}\dx u
\qquad\text{for}\qquad|\arg(z)|<\frac{\pi}{2},\ y>0,\ \alpha>-\frac{1}{2}
\end{align}
and the integral
\begin{align}\label{EQ:RATIONAL-INTEGRAL}
\int\limits_{u=0}^{\infty}\frac{1}{\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}}\dx u = \frac{\sqrt{\pi}\,\Gamma(\alpha)}{2\,\Gamma\left(\alpha+\frac{1}{2}\right)}\qquad\text{for}\qquad \alpha>0,
\end{align}
where the equality holds due to \cite[Secs.\,3.251.2 and 8.384.1]{Gradshteyn2007}.
Using \eqref{eq:bassetint} and \eqref{EQ:RATIONAL-INTEGRAL} we obtain for all $\alpha>0$
\begin{align*}
\lim_{y\rightarrow+0} y^{\alpha} \mathrm{K}_{\alpha}(y) &= \lim_{y\rightarrow+0}
\frac{\Gamma\left(\alpha+\frac{1}{2}\right)2^{\alpha}}{\sqrt{\pi}}
\int\limits_{u=0}^{\infty}\frac{\cos(uy)}{\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}}\dx u\\
%
&= \frac{\Gamma\left(\alpha+\frac{1}{2}\right)2^{\alpha}}{\sqrt{\pi}}
\int\limits_{u=0}^{\infty}\frac{\lim_{y\rightarrow+0}\cos(uy)}{\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}}\dx u\\
%
&= \frac{\Gamma\left(\alpha+\frac{1}{2}\right)2^{\alpha}}{\sqrt{\pi}}
\int\limits_{u=0}^{\infty}\frac{1}{\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}}\dx u\\
&=\Gamma(\alpha)2^{\alpha-1},
\end{align*}
where the second equality holds due to the dominated convergence theorem since $\big|\cos(uy)\big|/\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}\leq 1/\left(u^{2}+1\right)^{\alpha+\frac{1}{2}}$.
Using the previously derived monotonicity we obtain \eqref{EQ:PROP-SUPREMUM}, which completes the proof.
\end{proof}
\section{Proofs of Main Results}
\label{SEC:PROOFS-OF-MAIN-RESULTS}
\subsection{Proof of \Cref{thm:pdfinf}}
To prove \Cref{thm:pdfinf} we calculate the PDF $f_{\rvIdnsII}$ of the random variable \rvIdnsII\ introduced in \Cref{LEMMA:CF-INFODENSITY} by inverting
the characteristic function $\varphi_{\rvIdnsII}$ given in \eqref{eq:prodv} via the integral
\begin{align}\label{EQ:CF-INVERSION}
f_{\rvIdnsII}(v)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\varphi_{\rvIdnsII}(t)\exp\big(-\jmath t v\big)\dx t, \qquad v\in\R.
\end{align}
Shifting the PDF of $\rvIdnsII$ by $\mInf{\rvX}{\rvY}$ we obtain the PDF $f_{i(\rvX;\rvY)}$ of the
information density $i(\rvX;\rvY)$
\begin{align}\label{EQ:SHIFTING-PDF}
f_{i(\rvX;\rvY)}(x)=f_{\rvIdnsII}(x-\mInf{\rvX}{\rvY}),\qquad x\in\R.
\end{align}
The method used subsequently is based on the work of Mathai \cite{Mathai1982}. To invert the characteristic function $\varphi_{\rvIdnsII}$ we expand the factors in \eqref{eq:prodv} as
\begin{align}\nonumber
\left(1+\ccaC[i]^{2}t^{2}\right)^{-\frac{1}{2}}
&=\left(1+\ccaC[r]^{2}t^{2}\right)^{-\frac{1}{2}}
\frac{\ccaC[r]}{\ccaC[i]}
\left(1+\left(\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}-1\right)
\left(1+\ccaC[r]^{2}t^{2}\right)^{-1}\right)^{-\frac{1}{2}}\\\label{EQ:EXPAND-FACTORS-OF-CF-II}
&=\left(1+\ccaC[r]^{2}t^{2}\right)^{-\frac{1}{2}}
\sum_{k=0}^{\infty}(-1)^{k}\binom{-\frac{1}{2}}{k}\frac{\ccaC[r]}{\ccaC[i]}
\left(1-\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}\right)^{k}
\left(1+\ccaC[r]^{2}t^{2}\right)^{-k},
\end{align}
where the binomial series is used for the expansion in \eqref{EQ:EXPAND-FACTORS-OF-CF-II}. Since
\begin{align*}
\left|\left(1-\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}\right)
\left(1+\ccaC[r]^{2}t^{2}\right)^{-1}\right|<1
\end{align*}
holds for all $t\in\R$ the series in \eqref{EQ:EXPAND-FACTORS-OF-CF-II} is absolutely convergent for all $t\in\R$.
Using the expansion in \eqref{EQ:EXPAND-FACTORS-OF-CF-II} and the absolute convergence together with the identity
\begin{align*}
\binom{-\frac{1}{2}}{k}=\frac{(-1)^k(2k)!}{(k!)^2 4^k}
\end{align*}
we can rewrite the characteristic function $\varphi_{\rvIdnsII}$ as
\begin{equation} \label{eq:sumvii}
\varphi_{\rvIdnsII}(t)=\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty} \left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}\right)^{k_{i}}\right]
\left(1+\ccaC[r]^{2}t^{2}\right)^{-\left(\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)},\quad t\in\R.
\end{equation}
To obtain the PDF $f_{\rvIdnsII}$ we evaluate the inversion integral \eqref{EQ:CF-INVERSION} based on the series representation in \eqref{eq:sumvii}.
Since every series in \eqref{eq:sumvii} is absolutely convergent we can exchange summation and integration. Let \(p=\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\). Then by symmetry we have for the integral of a summand
\begin{equation}\label{EQ:INTEGRAL-OF-SERIES-TERM}
\int\limits_{t=-\infty}^{\infty}\frac{\exp\left(-\jmath tv\right)}{(1+\ccaC[r]^{2}t^{2})^{p}} \dx t= 2\int\limits_{t=0}^{\infty}\frac{\cos\left(tv\right)}{(1+\ccaC[r]^{2}t^{2})^{p}} \dx t = \frac{2}{\ccaC[r]}\int\limits_{u=0}^{\infty}\frac{\cos\left(uv/\ccaC[r]\right)}{(1+u^2)^{p}} \dx u,
\end{equation}
where the second equality is a result of the substitution $t=u/\ccaC[r]$.
By setting $z=1$, $\alpha=p-\frac{1}{2}\geq 0$ and $y=v/\ccaC[r]$ in the Basset integral formula given in \eqref{eq:bassetint} in the proof of \Cref{PROP:PROPERTIES-OF-BESSEL-FUNCTION} and using the symmetry with respect to $v$ we can evaluate \eqref{EQ:INTEGRAL-OF-SERIES-TERM} to the following form.
\begin{equation}\label{EQ:INTEGRAL-OF-SERIES-TERM-II}
\int\limits_{t=-\infty}^{\infty}\frac{\exp\left(-\jmath tv\right)}{(1+\ccaC[r]^{2}t^{2})^{p}}\dx t=
\frac{\sqrt{\pi}}{\Gamma\left(p\right)2^{p-\frac{3}{2}}\ccaC[r]^{p+\frac{1}{2}}}
\mathrm{K}_{p-\frac{1}{2}}\left(\frac{|v|}{\ccaC[r]}\right)|v|^{p-\frac{1}{2}},
\qquad v\in\R\backslash\{0\}.
\end{equation}
Combining \eqref{EQ:CF-INVERSION}, \eqref{eq:sumvii} and \eqref{EQ:INTEGRAL-OF-SERIES-TERM-II} yields
\begin{multline}\label{eq:shftinfpd}
f_{\rvIdnsII}(v)=\frac{1}{2\sqrt{\pi}}\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}\right)^{k_{i}}\right]
\times\\
\frac{\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{|v|}{\ccaC[r]}\right)
|v|^{\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}}
{\Gamma\left(\frac{r}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)2^{\left(\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
\ccaC[r]^{\left(\frac{r+1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}},\qquad v\in\R\backslash\{0\}.
\end{multline}
Slightly rearranging terms and applying \eqref{EQ:SHIFTING-PDF} yields the PDF of the information density $i(\rvX;\rvY)$ given in \eqref{EQ:PDF-INFO-DENSITY}.
It remains to show that $f_{i(\rvX;\rvY)}(x)$ is also well defined for $x=\mInf{\rvX}{\rvY}$ if $r \geq 2$.
Indeed, if $r\geq 2$ then we can use \Cref{PROP:PROPERTIES-OF-BESSEL-FUNCTION} to obtain
\begin{align*}
\lim_{x\rightarrow \mInf{\rvX}{\rvY}}f_{i(\rvX;\rvY)}(x)=%
\frac{1}{2\ccaC[r]\sqrt{\pi}}\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r][2]}{\ccaC[i][2]}\right)^{k_{i}}\right]
\times\\
\frac{\Gamma\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
{\Gamma\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}+\frac{1}{2}\right)}
\end{align*}
where we used the exchangeability of the limit and the summation due to the absolute convergence of the series.
Since $\Gamma(\alpha)/\Gamma(\alpha+\frac{1}{2})$ is decreasing \wrt\ $\alpha\geq\frac{1}{2}$, we have
\begin{align*}
\frac{\Gamma\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\right)}
{\Gamma\left(\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}+\frac{1}{2}\right)}\leq\frac{\Gamma\left(\frac{r-1}{2}\right)}
{\Gamma\left(\frac{r-1}{2}+\frac{1}{2}\right)}\leq\sqrt{\pi}.
\end{align*}
Then together with \eqref{EQ:SERIES-OF-PRODUCTS} in the proof of \Cref{THEOREM:APPROXIMATION-ERROR-PDF-CDF} it follows that $\lim_{x\rightarrow \mInf{\rvX}{\rvY}}f_{i(\rvX;\rvY)}(x)$ exists and is
finite, which completes the proof. \hfill $\blacksquare$
\subsection{Proof of \Cref{thm:cdfinf}}
To prove \Cref{thm:cdfinf} we calculate the CDF $F_{\rvIdnsII}$ of the random variable \rvIdnsII\ introduced in \Cref{LEMMA:CF-INFODENSITY} by integrating the PDF $f_{\rvIdnsII}$ given in \eqref{eq:shftinfpd}.
Shifting the CDF of $\rvIdnsII$ by $\mInf{\rvX}{\rvY}$ we obtain the CDF $F_{i(\rvX;\rvY)}$ of the
information density $i(\rvX;\rvY)$
\begin{align}\label{EQ:SHIFTING-CDF}
F_{i(\rvX;\rvY)}(x)=F_{\rvIdnsII}(x-\mInf{\rvX}{\rvY}),\qquad x\in\R.
\end{align}
Using the symmetry of $f_{\rvIdnsII}$
we can write
\begin{equation*}
F_{\rvIdnsII}(z)=\mathrm{P}(\rvIdnsII\leq z)=
\begin{dcases}
\frac{1}{2}-\int_{v=0}^{-z}f_{\rvIdnsII}(v)\dx v&\text{for}\quad z\leq0\\
\frac{1}{2}+\int_{v=0}^{z}f_{\rvIdnsII}(v)\dx v&\text{for}\quad z> 0
\end{dcases}.
\end{equation*}
It is therefore sufficient to evaluate the integral
\begin{align}\label{EQ:INTEGRAL-VZ}
V(z):=\int_{v=0}^{z}f_{\rvIdnsII}(v)\dx v \quad\text{for}\quad z\geq 0.
\end{align}
To calculate the integral \eqref{EQ:INTEGRAL-VZ}, we plug \eqref{eq:shftinfpd} into \eqref{EQ:INTEGRAL-VZ} and exchange integration and summation, which is justified by the monotone convergence theorem.
To evaluate the integral of a summand consider the following identity
\begin{multline} \label{eq:struveint}
\int_{x=0}^{z}x^{\alpha}\mathrm{K}_{\alpha}(x)\dx x=
2^{\alpha-1}\sqrt{\pi}\Gamma\left(\alpha+\frac{1}{2}\right)
z\bigg[\mathrm{K}_{\alpha}(z)\mathrm{L}_{\alpha-1}(z)+
\mathrm{K}_{\alpha-1}(z)\mathrm{L}_{\alpha}(z)\bigg]
\quad\text{for}\quad\alpha >-\frac{1}{2}
\end{multline}
given in \cite[Sec.\,1.12.1.3]{Prudnikov1986a},
where $\mathrm{L}_{\alpha}(\cdot)$ denotes the modified Struve $\mathrm{L}$ function of order $\alpha$
\cite[Sec.\,11.2]{Olver2010}.
Using \eqref{eq:struveint}
with \(\alpha=\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}\geq 0\) we obtain
\begin{align*}
V(z)=&\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{\infty}\dots
\sum_{k_{r-1}=0}^{\infty}\left[\prod_{i=1}^{r-1}
\frac{\ccaC[r]}{\ccaC[i]}\frac{(2k_{i})!}{(k_{i}!)^{2}4^{k_{i}}}
\left(1-\frac{\ccaC[r]^{2}}{\ccaC[i]^{2}}\right)^{k_{i}}\right]
\frac{z}{2\ccaC[r]}
\times\\
&\bigg[
\mathrm{K}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)+\\
&\mathrm{K}_{\frac{r-3}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\mathrm{L}_{\frac{r-1}{2}+k_{1}+k_{2}+\dots+k_{r-1}}\left(\frac{z}{\ccaC[r]}\right)
\bigg]\qquad\text{for}\quad z\geq0,
\end{align*}
%
which completes the proof. \hfill $\blacksquare$
%
\section{Introduction}
\justify
The \textit{octahedron recurrence} is given by the initial conditions $f_{i,j,k}=x_{i,j,k}$ for $k = -1,0$ and
\begin{align*}
f_{i,j,k-1}f_{i,j,k+1} &= f_{i-1,j,k}f_{i+1,j,k}+ \lambda f_{i,j-1,k}f_{i,j+1,k}. & (k\geq0)
\end{align*}
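\justify
As a quick illustration (our own sketch, not taken from the cited works), the first level of the recurrence can be carried out symbolically; the following \texttt{sympy} snippet computes $f_{0,0,1}$ and exhibits the Laurent phenomenon.
\begin{verbatim}
# One level of the octahedron recurrence, computed symbolically.
import sympy as sp

lam = sp.Symbol("lam")
f = {(i, j, k): sp.Symbol(f"x_{i}_{j}_{k}")     # f_{i,j,k} = x_{i,j,k}
     for i in range(-1, 2) for j in range(-1, 2) for k in (-1, 0)}

# f_{0,0,-1} f_{0,0,1} = f_{-1,0,0} f_{1,0,0} + lam f_{0,-1,0} f_{0,1,0}
f[0, 0, 1] = sp.cancel((f[-1, 0, 0] * f[1, 0, 0]
                        + lam * f[0, -1, 0] * f[0, 1, 0]) / f[0, 0, -1])
print(f[0, 0, 1])   # a Laurent polynomial; exponents lie in {-1, 0, 1}
\end{verbatim}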
\justify
An \textit{alternating sign matrix} is a square matrix that satisfies:
\begin{enumerate}
\item all entries are $-1,0,\text{ or } 1$,
\item every row and column has sum $1$,
\item in every row and column the non-zero entries alternate in sign.
\end{enumerate}
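\justify
A direct translation of these three conditions into code may help to fix the definition; the following small Python checker is ours, for illustration only.
\begin{verbatim}
# Checker for the three alternating sign matrix conditions.
import numpy as np

def is_asm(A):
    A = np.asarray(A)
    if not np.isin(A, (-1, 0, 1)).all():                # condition 1
        return False
    if not ((A.sum(axis=0) == 1).all()
            and (A.sum(axis=1) == 1).all()):            # condition 2
        return False
    for line in list(A) + list(A.T):                    # condition 3
        nz = line[line != 0]
        if nz.size and np.any(nz[:-1] + nz[1:] != 0):   # alternate signs
            return False
    return True

print(is_asm([[0, 1, 0], [1, -1, 1], [0, 1, 0]]))       # True
\end{verbatim}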
\justify
In \cite{robbins}, Robbins and Rumsey found that the exponents of the $x_{i,j,k}$ in any monomial of $f_{i_0,j_0,k_0}$ form an alternating sign matrix.
In \cite{propp}, James Propp introduced another recurrence called the \textit{cube recurrence}, which is given by $f_{i,j,k} = x_{i,j,k}$ for $i+j+k=-1,0,1$ and
\begin{align*}
f_{i,j,k}f_{i-1,j-1,k-1}&=f_{i-1,j,k}f_{i,j-1,k-1}+f_{i,j-1,k}f_{i-1,j,k-1}+f_{i,j,k-1}f_{i-1,j-1,k}. & (i+j+k>1)
\end{align*}
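\justify
For illustration (again our own sketch, not from \cite{propp}), the first value beyond the initial conditions can be computed symbolically:
\begin{verbatim}
# One step of the cube recurrence, computed symbolically.
import sympy as sp

f = {(i, j, k): sp.Symbol(f"x_{i}_{j}_{k}")
     for i in range(-1, 2) for j in range(-1, 2) for k in range(-1, 2)
     if abs(i + j + k) <= 1}    # initial conditions, i + j + k = -1, 0, 1

i, j, k = 1, 1, 0               # a point with i + j + k > 1
f[i, j, k] = sp.cancel((f[i-1, j, k] * f[i, j-1, k-1]
                        + f[i, j-1, k] * f[i-1, j, k-1]
                        + f[i, j, k-1] * f[i-1, j-1, k]) / f[i-1, j-1, k-1])
print(f[i, j, k])   # a Laurent polynomial with exponents in {-1, 0, 1}
\end{verbatim}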
\justify
In \cite{carroll}, Carroll and Speyer introduced a combinatorial object called a \textit{grove} that describes the cube recurrence; groves will be defined rigorously in section 2. Carroll and Speyer proved that $f_{0,0,0}$ is a sum of Laurent monomials in the variables $x_{i,j,k}$, and in each monomial, the exponent of $x_{i,j,k}$ is either $-1,0,\text{ or } 1$. Carroll and Speyer also observed that the exponents of $x_{i,j,k}$ form a sort of alternating sign triangle. In section 3, we will give our definition of alternating sign triangles. In section 4, we will prove a simple characterization of \textit{permutation triangles}, the special case of alternating sign triangles in which there is no $-1$ entry. In section 5, we will prove some properties of alternating sign triangles.
\textbf{Acknowledgments.} I thank Max Glick for his conjectures about permutation triangles, and I thank Pavlo Pylyavskyy for his helpful suggestions and continuous support.
\section{Background}
\justify
In \cite{carroll}, Carroll and Speyer defined groves as follows.
\justify
Define the \textit{lower cone} of any $(i,j,k)\in \mathbb{Z}^{3}$ to be
\begin{align*}
C(i,j,k) &= \left\{ (i',j',k')\in \mathbb{Z}^{3} | i'\leq i,j' \leq j,k'\leq k \right\}.
\end{align*}
\justify
Let $\mathcal{L}\subseteq \mathbb{Z}^{3}$ be a subset such that, whenever $(i, j, k) \in \mathcal{L}, C(i, j, k) \subseteq \mathcal{L}$. Let $\mathcal{U} = \mathbb{Z}^{3}-\mathcal{L}$, and define the set of \textit{initial conditions}
\begin{align*}
\mathcal{I} = \left\{(i, j, k) \in \mathcal{L} | (i + 1, j + 1, k + 1) \in \mathcal{U}\right\}.
\end{align*}
\justify
We also define a \textit{rhombus} to be any set of the form
\begin{align*}
r_{a}(i, j, k) &= \left\{(i, j, k),(i, j - 1, k),(i, j, k - 1),(i, j - 1, k - 1)\right\}\\
r_{b}(i, j, k) &= \left\{(i, j, k),(i - 1, j, k),(i, j, k - 1),(i - 1, j, k - 1)\right\}\\
r_{c}(i, j, k) &= \left\{(i, j, k),(i - 1, j, k),(i, j - 1, k),(i - 1, j - 1, k)\right\}
\end{align*}
\justify
In addition, define the \textit{edges} of each rhombus to be the pairs
\begin{align*}
e_{a}(i, j, k) &= \left\{(i, j - 1, k),(i, j, k - 1)\right\} & e'_{a}(i, j, k) &= \left\{(i, j, k),(i, j - 1, k - 1)\right\}\\
e_{b}(i, j, k) &= \left\{(i - 1, j, k),(i, j, k - 1)\right\} & e'_{b}(i, j, k) &= \left\{(i, j, k),(i - 1, j, k - 1)\right\}\\
e_{c}(i, j, k) &= \left\{(i - 1, j, k),(i, j - 1, k)\right\} & e'_{c}(i, j, k) &= \left\{(i, j, k),(i - 1, j - 1, k)\right\}
\end{align*}
\justify
Now suppose that $N$ is a cutoff for $\mathcal{I}$. Define $\mathcal{G}$ to be the graph whose vertices are the points in $\mathcal{I}$ and whose edges are the edges of all rhombi occurring in $\mathcal{I}$. We define an $\mathcal{I}$-grove within radius $N$ to be a subgraph $G \subseteq \mathcal{G}$ with the following properties:
\begin{itemize}
\item (Completeness) the vertex set of $G$ is all of $\mathcal{I}$;
\item (Complementarity) for every rhombus, exactly one of its two edges occurs in $G$;
\item (Compactness) for every rhombus all of whose vertices satisfy $i+j+k<-N$, the short edge occurs in $G$;
\item (Connectivity) every component of $G$ contains exactly one of the following sets of vertices, and conversely, each such set is contained in some component:
\begin{itemize}
\item $\left\{(0,p,q),(p,0,q)\right\},\left\{(p,q,0),(0,q,p)\right\}$, and $\left\{(q,0,p),(q,p,0)\right\}$ for all $p, q$ with $0>p>q$ and $p+q\in \{-N-1,-N-2\}$;
\item $\left\{(0, p, p),(p, 0, p),(p, p, 0)\right\}$ for $2p\in \{-N-1,-N-2\}$;
\item $\left\{(0, 0, q)\right\}, \left\{(0, q, 0)\right\}$, and $\left\{(q, 0, 0)\right\}$ for $q\leq-N-1$.
\end{itemize}
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=10cm]{Figure0.png}
\caption{Example of a grove}
\label{Figure 0}
\floatfoot{Source: \cite{carroll}}
\end{figure}
\justify
Carroll and Speyer also proved a bijection between groves and simplified groves. A simplified grove within radius $N$, where $N$ is a cutoff for $\mathcal{I}$ and furthermore is odd, is a subgraph $G'$ of $\mathcal{G}$ satisfying:
\begin{itemize}
\item (Vertex set) the vertex set of $G'$ is $\{(i,j,k)\in\mathcal{I} \text{ | } i+j+k\equiv0 \text{ mod } 2; i+j+k\geq-N-1\}$;
\item (Acyclicity) $G'$ is acyclic;
\item (Connectivity) every component of $G'$ contains exactly one of the following sets of vertices, and conversely, each such set is contained in some component:
\begin{itemize}
\item $\left\{(0,p,q),(p,0,q)\right\},\left\{(p,q,0),(0,q,p)\right\}$, and $\left\{(q,0,p),(q,p,0)\right\}$ for $p, q$ with $0>p>q$ and $p+q=-N-1$;
\item $\left\{(0, \frac{-N-1}{2}, \frac{-N-1}{2}),(\frac{-N-1}{2}, 0, \frac{-N-1}{2}),(\frac{-N-1}{2}, \frac{-N-1}{2}, 0)\right\}$;
\item $\left\{(0, 0, -N-1)\right\}, \left\{(0, -N-1, 0)\right\}$, and $\left\{(-N-1, 0, 0)\right\}$.
\end{itemize}
\end{itemize}
\begin{figure}[h!]
\centering
\includegraphics[width=12cm]{Figure1.png}
\caption{Example of a simplified grove}
\label{Figure 1}
\floatfoot{Source: \cite{carroll}}
\end{figure}
\justify
\section{Definition}
For our convenience, we will redefine a \textit{simplified grove of size $n$} to be a graph $G$ satisfying:
\begin{itemize}
\item (Vertex set) the vertex set of $G$ is $\{(i,j)\in\mathbb{Z}^2\mid \lvert i\rvert+\lvert j\rvert\le n,\, j\leq 0,\, i+j\equiv n\bmod{2}\}$;
\item (Acyclicity) $G$ is acyclic;
\item (Connectivity) the boundary vertices can be partitioned into the following sets so that each component of $G$ contains exactly one set, and conversely, each set is contained in some component:
\begin{itemize}
\item $\{(-n,0)\},\{(n,0)\}$ and $\{(0,-n)\}$,
\item (west pairs) $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$ for $0<i<n, i\equiv n\bmod{2}$,
\item (east pairs) $\{(i,0),(\frac{n+i}{2},\frac{-n+i}{2})\}$ for $0<i<n, i\equiv n\bmod{2}$,
\item (south pairs) $\{(-n+i,-i),(n-i,-i)\}$ for $\frac{n}{2}<i<n$,
\item (middle triplet) $\{(0,0),(-\frac{n}{2},-\frac{n}{2}),(\frac{n}{2},-\frac{n}{2})\}$ if $n$ is even.
\end{itemize}
\end{itemize}
It can be checked that this new definition gives the same set of simplified groves as Carroll and Speyer's. We also define an \textit{upward triangle of size $i$} to be a triangle whose vertices are $(a,b),(a-i,b-i),(a+i,b-i)$. Similarly, define a \textit{downward triangle of size $i$} to be a triangle whose vertices are $(a,b),(a-i,b+i),(a+i,b+i)$. For simplicity, we will refer to downward and upward triangles of size $1$ simply as downward and upward triangles, respectively.
Now assign to each downward triangle the number $1-e$, where $e$ is the number of edges in that triangle. By the acyclicity condition, every downward triangle contains fewer than $3$ edges; hence, the number assigned to each downward triangle is either $-1,0$ or $1$. Define an \textit{alternating sign triangle} to be a configuration of numbers generated by this process. It can be checked that this definition gives the same set of alternating sign triangles as the definition proposed by Carroll and Speyer does.
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{Figure2.png}
\caption{Getting alternating sign triangle of size 4 from a grove of size 4}
\label{Figure 2}
\end{figure}
\section{Permutation Triangle}
Define \textit{permutation triangles} to be alternating sign triangles in which every entry is either $0$ or $1$. We will prove some characteristic properties of permutation triangles. The proof will need one preliminary result:
\begin{lemma}
The sum of all entries in every alternating sign triangle of size $n$ is exactly $\lfloor\frac{n+1}{2}\rfloor$.
\end{lemma}
\begin{proof}
It can easily be seen that an alternating sign triangle of size $n$ arises from a grove of size $n$ whose vertices are divided into $3+\lfloor\frac{3n-3}{2}\rfloor$ components. In each component, the number of edges is one fewer than the number of vertices. Hence, the number of edges in a grove of size $n$ is $\frac{(n+1)(n+2)}{2}-3-\lfloor\frac{3n-3}{2}\rfloor$.\\
Notice that in each downward triangle, the entry is defined as $1-e$, where $e$ is the number of edges, and every edge belongs to exactly one downward triangle. Therefore, the sum of all entries is exactly the difference between the number of downward triangles and the number of edges, which is
\begin{align*}
\frac{n(n+1)}{2}-\left(\frac{(n+1)(n+2)}{2}-3-\Big\lfloor\frac{3n-3}{2}\Big\rfloor\right)=\Big\lfloor\frac{n+1}{2}\Big\rfloor.
\end{align*}
\end{proof}
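\justify
For example, when $n=3$ a grove has $\frac{4\cdot 5}{2}=10$ vertices divided into $3+\lfloor\frac{6}{2}\rfloor=6$ components, hence $10-6=4$ edges; with $\frac{3\cdot 4}{2}=6$ downward triangles, the sum of the entries is $6-4=2=\lfloor\frac{3+1}{2}\rfloor$, in agreement with the lemma.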
\justify
Now we are prepared for our first theorem, which was conjectured by Max Glick in 2013.
\begin{theorem}
A configuration is a permutation triangle if and only if the following properties are satisfied:
\begin{enumerate}
\item The top $i$ rows have at most $i$ $1$-s,
\item The left-most $i$ columns have at most $i$ $1$-s,
\item The right-most $i$ columns have at most $i$ $1$-s,
\item Any upward triangle of size $i$ has at most $i$ $1$-s.
\end{enumerate}
\end{theorem}
\begin{proof}
First, we will prove that every permutation triangle has the first property. Note that the sum of all entries is exactly $\lfloor\frac{n+1}{2}\rfloor$; therefore, we only need to prove this property for $i<\lfloor\frac{n+1}{2}\rfloor$.
Consider the sub-graph whose vertices are those in the first $i+1$ rows of the grove and whose edges are those connecting such vertices, excluding edges connecting two vertices in row $i+1$. The connectivity conditions state that the vertices $(-n+2i,0),(-n+2i+2,0),...,(n-2i,0)$ have to be connected with some vertices in row $i+1$ or below; hence, within the sub-graph, each of them has to be connected with at least one vertex in row $i+1$. Since these top-row vertices lie in distinct components of the grove, at least $n+1-2i$ vertices in row $i+1$ have to be connected with a vertex in row $1$. Let $A$ be the set of such vertices, and let $B$ be the set of vertices in row $i+1$ that do not belong to $A$. Since $\lvert A\rvert\geq n+1-2i$, and there are $n+1-i$ vertices in row $i+1$, we get $\lvert B\rvert \leq i$.
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{Figure3.png}
\caption{Example with $i=3$}
\label{Figure 3}
\end{figure}
The connectivity conditions also state that every vertex in the first $i$ rows has to be connected with either a vertex in row $1$ or a boundary vertex in row $i+1$ or below. Therefore, in our sub-graph, every vertex has to be connected with either a vertex in row $1$ or a vertex in $B$. This means that every component of the sub-graph has to contain at least one vertex in row $1$ or one vertex in $B$. Hence, we have at most $n+1+i$ components. Therefore, the minimum number of edges in the sub-graph is:
\begin{align*}
\frac{(2n+2-i)(i+1)}{2}-(n+1+i)=ni+\frac{-i^2-i}{2}
\end{align*}
\justify
On the other hand, the number of downward triangles in the sub-graph is:
\begin{align*}
\frac{(n+n+1-i)(i)}{2}=ni+\frac{-i^2+i}{2}
\end{align*}
\justify
Since each $0$-triangle gives $1$ edge while each $1$-triangle gives $0$ edges, the maximum number of $1$-s is
\begin{align*}
ni+\frac{-i^2+i}{2}-(ni+\frac{-i^2-i}{2})=i
\end{align*}
\justify
This completes our proof for property $1$. Due to symmetry, properties $2$ and $3$ can be proved in the same way. Now we prove property $4$. Consider an upward triangle of size $i$, and let $C$ be the set of vertices inside the triangle. We have $\lvert C\rvert=\frac{i(i-1)}{2}$. Assume that the vertices in $C$ are divided into $k$ components; then the number of edges connecting the vertices in $C$ is $\frac{i(i-1)}{2}-k$. Since each component needs to be connected with at least one boundary vertex, we need at least $k$ more edges. Therefore, in the triangle, there are at least $\frac{i(i-1)}{2}$ edges. On the other hand, there are $\frac{i(i+1)}{2}$ downward triangles in the triangle. Hence, the maximum number of $1$-s is $\frac{i(i+1)}{2}-\frac{i(i-1)}{2}=i$. This completes our proof for property $4$.
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{Figure4.png}
\caption{Example with $i=4$}
\label{Figure 4}
\end{figure}
\justify
Now we will propose a method to construct a grove of size $n-1$ from a configuration of size $n-1$ that satisfies the four properties. First, we assign our configuration to the upward triangles of the vertex set of a grove of size $n$. For each upward triangle that contains a $1$, we draw all three edges of that triangle. Now we connect the west pairs using the following procedure. Assume that we can connect the pair $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$. \textit{Step 1:} if there are two vertices $(a,b)$ and $(a-1,b-1)$ that are both connected with the pair $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$ and are not neighbors, we connect one of them with $(a+1,b-1)$. We repeat step 1 until no such pair of vertices exists. Then, \textit{step 2:} we connect the pair $\{(-i+2,0),(\frac{-n-i+2}{2},\frac{-n+i-2}{2})\}$ by the shortest possible path that does not contain any vertex connected with the pair $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$. We will prove that this can be done.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure5a.png}
\caption{Before step 1}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure5b.png}
\caption{After step 1 repeatedly}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{Figure5c.png}
\caption{After step 2}
\end{subfigure}
\caption{Two steps illustration}
\label{Figure 5}
\end{figure}
First, for each pair $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$, we define the \textit{straight line} connecting the pair $\{(-i+2k,0),(\frac{-n-i+2k}{2},\frac{-n+i-2k}{2})\}$ to be its $+k$ line. We will prove the following lemma:
\begin{lemma}
We can connect the west pairs using the procedure above, and if a vertex is connected to the pair $\{(-i,0),(\frac{-n-i}{2},\frac{-n+i}{2})\}$ and lies on its $+k$ line, then there are at least $k$ $1$-s to the northwest of the vertex.
\end{lemma}
\begin{proof}
We will prove the lemma by induction on $i$. The case $i=n-2$ is trivial. Assume that the lemma is true for all $i\geq j+2$; we will prove that it also holds for $i=j$.
Assume that we cannot connect the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$. Then at least one of the vertices $(-j+1,-1),(-j+2,-2),...,(-j+\frac{n-j}{2}-1,\frac{-n+j}{2}+1)$ or $(\frac{-n-j}{2},\frac{-n+j}{2}),(\frac{-n-j}{2}+2,\frac{-n+j}{2}),...,(-j+\frac{n-j}{2}-2,\frac{-n+j}{2})$ has to be connected with the pair $\{(-j-2,0),(\frac{-n-j-2}{2},\frac{-n+j+2}{2})\}$. If it is the vertex $(-j+k,-k)$, then this vertex lies on the $+k+1$ line of the pair $\{(-j-2,0),(\frac{-n-j-2}{2},\frac{-n+j+2}{2})\}$. By our induction hypothesis, this means that there are at least $k+1$ $1$-s to the northwest of it, which contradicts the first property. Similarly, if it is the vertex $(\frac{-n-j}{2}+2k,\frac{-n+j}{2})$, then the upward triangle containing the vertices $(\frac{-n-j}{2}+2k,\frac{-n+j}{2})$ and $(\frac{-n-j}{2}+2k-1,\frac{-n+j}{2}+1)$ has to have a $1$, and the vertex $(\frac{-n-j}{2}+2k-1,\frac{-n+j}{2}+1)$ is also connected with the pair $\{(-j-2,0),(\frac{-n-j-2}{2},\frac{-n+j+2}{2})\}$. This vertex also lies on the $+k$ line of the pair $\{(-j-2,0),(\frac{-n-j-2}{2},\frac{-n+j+2}{2})\}$. By our induction hypothesis, this means that there are at least $k$ $1$-s to the northwest of $(\frac{-n-j}{2}+2k-1,\frac{-n+j}{2}+1)$, which means there are at least $k+1$ $1$-s to the northwest of $(\frac{-n-j}{2}+2k,\frac{-n+j}{2})$. This contradicts the second property. Therefore, we can connect the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$.
Now we will prove, by induction on $k$, that if a vertex is connected to the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$ and lies on its $+k$ line, then there are at least $k$ $1$-s to the northwest of the vertex. The case $k=0$ is trivial. Assume that the statement is true for $k-1$; we will prove that it is also true for $k$. Let $(a,b)$ be a vertex satisfying the condition of the statement. If $(a,b)$ is on the shortest path connecting the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$, then on the $+k-1$ line of the pair, there must be at least one vertex that is connected to the pair $\{(-j-2,0),(\frac{-n-j-2}{2},\frac{-n+j+2}{2})\}$. This vertex lies on the $+k$ line of the second pair, which means that there are at least $k$ $1$-s to the northwest of it. Hence there are at least $k$ $1$-s to the northwest of $(a,b)$.
If $(a,b)$ is not on the shortest path connecting the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$, then there are two cases: either it is a vertex of a $1$-triangle or it is connected with the boundary pair in step 1. In the former case, the upward triangle containing $(a,b)$ and $(a-2,b)$ has to have a $1$, and $(a-2,b)$ is also connected to the pair $\{(-j,0),(\frac{-n-j}{2},\frac{-n+j}{2})\}$. Since $(a-2,b)$ lies on line $+k-1$ of this pair, there are at least $k-1$ $1$-s to the northwest of $(a-2,b)$, which means that there are at least $k$ $1$-s to the northwest of $(a,b)$. In the latter case, since $(a-2,b)$ and $(a-1,b+1)$ are not neighbors but are both connected with the boundary pair, at least one of them has to be a vertex of a $1$-triangle. Without loss of generality, assume that it is $(a-2,b)$, then since $(a-1,b+1)$ lies on the $+k-1$ line, there are at least $k-1$ $1$-s to the northwest of $(a-1,b+1)$. This together with the $1$-triangle containing $(a-2,b)$ means that there are at least $k$ $1$-s to the northwest of $(a,b)$. This completes our inductive step.
\end{proof}
Returning to the proof of the theorem: \textit{Lemma 2} tells us that we can connect the west pairs. Similarly, we can connect the east and south pairs. Now we will prove that no two paths are connected.
Suppose $n$ is odd. If a vertex $(a,-i)$ is on the $+k$ line of the pair $\{(-1,0),(\frac{-n-1}{2},\frac{-n+1}{2})\}$, then it is on the $+i+1-k$ line of the pair $\{(1,0),(\frac{n+1}{2},\frac{-n+1}{2})\}$. If this vertex were connected with both pairs, then there would be at least $k$ $1$-s to its northwest and $i+1-k$ $1$-s to its northeast. This means that there would be at least $i+1$ $1$-s in the top $i$ rows, which contradicts the first property. Therefore, there is no vertex that is connected with both pairs. Hence the two pairs are not connected, and hence no west pair is connected with an east pair. Similarly, no west and south pairs or east and south pairs are connected.
If $n$ is even, then using the same process as above, we can prove that no two paths are connected, and we can also connect the middle triplet so that it is not connected with any path.
Now mark the center of each downward triangle and color these new vertices red. The red vertices are divided into different regions by the edges we have drawn. We will prove that every red vertex is in the same region as a boundary red vertex. Assume there is a region that does not contain any boundary vertex; then this region has to be bounded by the edges we have drawn. If no boundary edge of the region is connected to a boundary pair, then all boundary edges are edges of $1$-triangles, which contradicts property $4$. If some boundary edge is connected with a boundary pair, then using \textit{Lemma 2} and the same technique as above, we can prove that this contradicts one of the first three properties. Hence, every red vertex is in the same region as exactly one pair of boundary red vertices.
Now we connect the red vertices by red edges so that each edge goes through exactly one upward triangle. This can be done easily for the west, east, and south regions, since \textit{step 1} guarantees that each upward triangle has exactly two neighbouring downward triangles in the same region, and we have proved that each downward triangle has at least one neighbouring upward triangle in the same region. This also gives that in each west, east, and south region, the number of upward triangles is fewer than that of downward triangles by one. Therefore, with some simple calculations, we can prove that in the middle region, the number of upward triangles is also fewer than that of downward triangles by one. We also have that each upward triangle has at least two neighbouring downward triangles in the same region, and each downward triangle has at least one neighbouring upward triangle in the same region. Therefore, we can connect the red vertices in this region as well.
Now we can see that the red vertices and edges form a grove of size $n-1$, and our configuration of numbers lies inside its downward triangles. Indeed, since each region contains exactly one boundary pair, each component of the new grove also contains exactly one boundary pair. Since each $0$-triangle has exactly one red edge going through it while each $1$-triangle has none, in the new grove each $0$-triangle contains exactly one edge while each $1$-triangle contains none. Last but not least, since in each region the red vertices are connected using exactly one edge fewer than the number of vertices (there being one fewer upward triangle than downward triangles), the new grove is acyclic. Hence, we have finished constructing a grove from a configuration. This completes the proof.
\end{proof}
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{8cm}
\centering
\includegraphics[width=\textwidth]{Figure6a.png}
\caption{Connecting red vertices}
\end{subfigure}
\hfill
\begin{subfigure}[b]{7.5cm}
\centering
\includegraphics[width=\textwidth]{Figure6b.png}
\caption{Resulting grove}
\end{subfigure}
\caption{Final grove}
\label{Figure 6}
\end{figure}
\justify
\textbf{Remark:} An interesting point to note from our proof is that inside every permutation triangle of size $n$ there is a permutation triangle of size $n-1$. This is also true for alternating sign triangles. The proof of this remark will not be given here, since it uses the same argument as the proof above. Though the remark will not be discussed further in this paper, it may lead to some recursive relations among alternating sign triangles.
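\justify
As a computational companion to the theorem, the following sketch (ours) tests the four conditions for a configuration encoded as a list of rows, top row first. The reading of ``columns'' as positions counted from the left and right ends of each row, and the indexing of upward triangles of entries, are our own assumptions about the layout of the triangular array:
\begin{verbatim}
# Sketch (ours): rows[0] is the top row with n entries, rows[n-1] the
# single bottom entry.  "Left-most i columns" is read as positions
# 0..i-1 in each row, "right-most" symmetrically; an upward triangle of
# size i has its apex entry at (r, c) and row r+m contributing the
# entries at positions c-m..c.  These encodings are our assumptions.
def satisfies_glick_conditions(rows):
    n = len(rows[0])
    cells = [(r, c, v) for r, row in enumerate(rows)
             for c, v in enumerate(row)]
    if any(v not in (0, 1) for _, _, v in cells):
        return False
    ones = [(r, c) for r, c, v in cells if v == 1]
    for i in range(1, n + 1):
        if sum(1 for r, c in ones if r < i) > i:
            return False                  # property 1
        if sum(1 for r, c in ones if c < i) > i:
            return False                  # property 2
        if sum(1 for r, c in ones
               if len(rows[r]) - 1 - c < i) > i:
            return False                  # property 3
    for r, c, _ in cells:                 # property 4
        for i in range(2, n + 1):
            if (r + i - 1 >= len(rows) or c - (i - 1) < 0
                    or c >= len(rows[r + i - 1])):
                break
            if sum(rows[r + m][cc] for m in range(i)
                   for cc in range(c - m, c + 1)) > i:
                return False
    return True

print(satisfies_glick_conditions([[0, 1, 0], [0, 1], [0]]))   # True
\end{verbatim}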
\section{Alternating sign triangles}
It is still an open problem to give a simple direct characterization of alternating sign triangles without going through groves. Nevertheless, in this section, we will discuss some properties of alternating sign triangles. Note that, due to symmetry, all properties discussed below also hold for the left-most and right-most columns.
\begin{property}
The sum of the entries in any isosceles trapezoid of height $i$ whose top row lies on the top row of the alternating sign triangle (in case the bottom row contains only $1$ entry, the trapezoid becomes a downward triangle) is at most $i-v$, where $v$ is the number of $-1$-s on the boundary of the trapezoid.
\end{property}
\begin{proof}
The proof of this property uses the same argument as that of property $1$ of permutation triangles. Consider the corresponding grove. Assume that there are $k$ vertices on the first row; then there are $\frac{(k+k-i)(i+1)}{2}$ vertices in the trapezoid, $2k+i-2$ of which lie on the boundary. Each top vertex except the two left-most and right-most ones has to be connected with at least one other boundary vertex, and each $-1$ on the boundary connects two boundary vertices. Therefore, the boundary vertices are divided into at most $2k+i-2-(k-2)-v=k+i-v$ components. Hence, there are at least $\frac{(k+k-i)(i+1)}{2}-(k+i-v)=ki-\frac{i^2+i}{2}-i+v$ edges. However, there are $\frac{(k-1+k-i)i}{2}=ki-\frac{i^2+i}{2}$ triangles. Hence, the sum of all entries is at most $i-v$.
\end{proof}
\begin{figure}[h!]
\centering
\includegraphics[width=15cm]{Figure7.png}
\caption{Example with $i=3,v=1$}
\label{Figure 7}
\end{figure}
\justify
\textbf{Remark:} This property is a generalization of property $1$ of permutation triangles.
\begin{property}
The sum of the entries in an upward triangle of size $i$ is at most $i$.
\end{property}
This property can be proved using the same argument as properties $1$ and $4$ of permutation triangles.
\begin{property}
The $i$th row has at most $i$ $1$-s and $i-1$ $-1$-s.
\end{property}
\begin{proof}
Consider the sub-graph whose vertices are those in the first $i+1$ rows of the grove and whose edges are those connecting such vertices, excluding edges connecting two vertices in row $i+1$. The connectivity conditions state that the vertices $(-n+2i,0),(-n+2i+2,0),...,(n-2i,0)$ have to be connected with some vertices in row $i+1$ or below. Therefore, at least $n+1-2i$ vertices in row $i+1$ have to be connected with a vertex in row $1$, which means there are at least $n+1-2i$ $0$-s in row $i$. Hence, there are at most $i$ $1$-s in row $i$.
On the other hand, at least $n+1-2i+2$ vertices in row $i$ have to be connected with a vertex in row $1$. This means that the vertices in row $i$ are divided into at least $n+1-2i+2$ components. Since each $-1$ in row $i$ reduces the number of components by $1$, there are at most $i-1$ $-1$-s in row $i$.
\end{proof}
\begin{property}
In the union of any set of downward triangles whose top rows lie in the first row of the alternating sign triangle, the sum of the entries is non-negative.
\end{property}
\begin{proof}
Let the number of vertices in the union be $k$, and let $A$ be the set of the vertices in the union that are also in the first row. The vertices in $A$ need to lie in different components; hence, in the union, there are at least $\lvert A\rvert$ components. Therefore, there are at most $k-\lvert A\rvert$ edges in the union. On the other hand, for every vertex in the union that is not in $A$, the downward triangle above it is also in the union; conversely, for any downward triangle in the union, its bottom vertex is also in the union and is not in $A$. Therefore, the number of downward triangles is exactly $k-\lvert A\rvert$. Hence, the sum of the entries is non-negative.
\end{proof}
\begin{figure}[h!]
\centering
\includegraphics[width=9cm]{Figure8a.png}
\caption{Property 4}
\label{Figure 8}
\end{figure}
\newpage
\begin{corollary}
For any $-1$-entry, the downward triangle above it has to contain at least one $1$-entry. Similarly, the downward triangles to the left and to the right of it have to contain at least one $1$-entry each.
\end{corollary}
\begin{figure}[h!]
\centering
\includegraphics[width=8cm]{Figure8.png}
\caption{Corollary 1}
\label{Figure 9}
\end{figure}
\section{Introduction}
We are motivated by a desire to understand better the dynamical
stability of equilibria of a nematic liquid crystal in an electric
field created by electrodes held at constant potential. As modeled in
the Oseen-Frank macroscopic continuum theory, the free-energy
functional that governs the coupled equilibrium states of the liquid
crystal and the electric field can, in the simplest cases, be
expressed in the form
\begin{equation}\label{eqn:FnU}
\mathcal{F}[\nhat,U] = \int_\Omega \Bigl[ W_\text{e}(\nhat,\nabla\nhat) -
\frac12 \mathlarger{\bfeps}(\nhat) \nabla U \cdot \nabla U \Bigr] \, \td{V} .
\end{equation}
Here $\nhat$ is the director field (unit-length vector field), $U$ the
electric potential (related to the electric field via
$\bmE = - \nabla U$), $\Omega$ the domain of the liquid crystal cell,
$W_\text{e}$ the density of distortional elastic energy, and
${\mathlarger\mathlarger{\bfeps}}$ the dielectric tensor. These terms are
characterized more carefully in the next section. The tensor
${\mathlarger\mathlarger{\bfeps}}$ is symmetric positive definite, and as a
result, the problem has an intrinsic ``minimax'' nature to it, with
locally stable equilibria locally minimizing with respect to $\nhat$
but maximizing with respect to $U$. The assessment of local stability
is well understood from a variational point of view, studied in
\cite{gartland:21}. However, the negative definiteness of $\mathcal{F}$
with respect to $U$ causes confusion about how to assess stability
from a dynamical point of view. The source of this confusion, as we
shall see, comes from the modeling assumptions that must be made in
order to put the free energy in the form above. In particular, one
must assume either a static electric field or an electric field that
adjusts instantaneously to changes in the director field in order for
\eqref{eqn:FnU} to be valid.
The dynamics of such a system can be modeled at different levels of
fidelity. A director field that is evolving in time will have
associated with it fluid flow in the cell, changes in the local
electric field (caused by changes in the dielectric tensor), and
changes in the capacitance of the cell (again caused by the changes in
the dielectric tensor), which will in turn cause changes in the charge
distributions on the electrodes. Hydrodynamics is often of secondary
importance (and ignoring it greatly simplifies matters); so we shall
assume no fluid flow. In many cases, the time scale for director
dynamics is orders of magnitude slower than the time scales for the
electric field and circuit dynamics; so a common modeling
approximation is to assume that both the electric field and the charge
distributions on the electrodes adjust instantaneously to any changes
in the director field. This leaves a reduced free energy that is a
functional of $\nhat$ only, which makes the ``minimax problem'' go
away. The equilibrium Euler-Lagrange equation for the reduced model
is nonlocal, however---the electric field at a point depends on the
director field everywhere---though the assessment of stability is
``normal,'' in that locally stable equilibrium director fields are
local minimizers of the reduced free energy.
There are, however, experiments involving ``fast switching''
(motivated by potential applications for light modulators and the
like) in which the time scale for director reorientation is comparable
to the time scale for circuit dynamics---see, for example,
\cite{geis:lyszczarz:osgood:kimball:10,
gu:yin:shiyanovskii:lavrentovich:07,takanashi:maclennan:clark:98}.
There are, in fact, experiments in which the circuit dynamics are a
limiting factor \cite{baier-saip:bostanjoglo:eichler:macdonald:95,
jang:clark:01}. The time scale for the evolution of the electric
field, on the other hand, comes from the time-dependent Maxwell
equations and is invariably several orders of magnitude faster than
any other macroscopic time scale present. We examine these issues
more carefully in \S\ref{sec:TimeScales}.
Here then we assume that the electric field adjusts instantaneously to
the director field, but we model faithfully the coupling between the
dynamics of the director field and those of the charge distribution.
The modeling highlights the role of the voltage source in giving rise
to the troubling minus sign in the free energy in \eqref{eqn:FnU}
above and the assumptions that must be made in order to put the free
energy in that form. Also, the coupling between director dynamics and
circuit dynamics introduces an additional mechanism for energy
dissipation: Joule heating due to current in the circuit.
We choose a ``textbook'' model system for illustration (a
splay-Fr\'{e}edericksz\ cell) so that all formulas can be worked out explicitly
and are not too cumbersome. Several of our results apply to more
general situations, and we try to indicate this at appropriate points.
The model problem, free energy, and equilibrium equations are
presented in \S\ref{sec:model}. There, both the coupled formulation
(in terms of $\nhat$ and $U$) and the reduced formulation (in terms of
$\nhat$ only) are discussed. The basic equations characterizing RC
circuits are reviewed in \S\ref{sec:RCcircuits}. In
\S\ref{sec:joint}, the linkage is made between the state of the
director field in the cell and the state of the electric circuit, with
a combined potential energy and dissipation function and resulting
coupled dynamical system. A simple numerical illustration of the
coupled dynamics is given in \S\ref{sec:numerics}, and what
conclusions can be drawn are discussed in \S\ref{sec:conclusions}.
\section{Model and equilibrium equations}\label{sec:model}
For a concrete realization of the ideas explored here, we consider a
nematic liquid crystal cell in the splay-Fr\'{e}edericksz\ geometry (as depicted
in figure\,\ref{fig:geom}) subject to an electric
\begin{figure}
\centering
\subfloat[]{\label{fig:geom}
\includegraphics[width=.49\linewidth]{figure1a}}
\subfloat[]{\label{fig:RCcircuit}
\raisebox{3.25ex}{\includegraphics[width=.49\linewidth]{figure1b}}}
\caption{Model problem. Figure\,\ref{fig:geom}: electric-field
splay-Fr\'{e}edericksz-geometry cell (geometry, coordinate system, ground
state) as a capacitor in an RC circuit,
figure\,\ref{fig:RCcircuit}. The liquid crystal film is confined
to $0<z<d$. Strong anchoring is assumed on $z=0$ and $z=d$, and
the liquid-crystal director is assumed to remain in
$\operatorname{span}\{\ehat_x,\ehat_z\}$. RC circuit parameters:
resistance $R$, capacitance $C$, voltage $V$, charge $Q$ (upper
electrode $+$, lower electrode $-$), current $I$ (positive
direction indicated).}
\end{figure}
field created by electrodes held at constant potential by a voltage
source. In devices, the voltage source is usually just a battery,
while in experiments, the electromotive force could come from a
variable power supply. The system is modeled using the Oseen-Frank
macroscopic continuum theory \cite[Ch.\,3]{degennes:prost:93},
\cite[Ch.\,2]{stewart:04}, \cite[Ch.\,3]{virga:94}. The lateral
dimensions of the cell are assumed to be much larger than the cell
gap, enabling us to treat all fields as uniform in $x$ and $y$ (the
coordinates in the lateral directions), with spatial dependence only
on the $z$ coordinate (the coordinate across the cell gap)---thus we
ignore the influence of fringe fields and effects near the edges of
the cell. We assume strong anchoring conditions and further assume
that the director $\nhat$ remains in the $x$-$z$ tilt plane:
\begin{equation*}
\nhat = n_x \ehat_x + n_z \ehat_z =
\cos \theta \, \ehat_x + \sin \theta \, \ehat_z , \quad
\theta = \theta(z) .
\end{equation*}
The free-energy functional will contain contributions from
distortional elasticity plus terms associated with the electric
field. The elastic part has the form
\begin{equation}\label{eqn:Fe}
\calF_\text{e}[\nhat] = \int_\Omega W_\text{e}(\nhat,\nabla\nhat) \, \td{V} =
A \int_0^d W_\text{e}(\nhat,\partial_z\nhat) \, \td{z} ,
\end{equation}
where $\Omega$ is the domain of the cell, $A$ the $x$-$y$
cross-section area, $d$ the cell gap, and
\begin{equation}\label{eqn:We}
\begin{aligned}
2 W_\text{e} &= K_1 (\div\nhat)^2 + K_2 (\nhat\cdot\curl\nhat)^2 +
K_3 |\nhat\times\curl\nhat|^2 \\
&= K_1 n_{z,z}^2 + K_3 n_{x,z}^2 \\
&= \bigl( K_1\CC+K_3\SS \bigr) \theta_z^2 .
\end{aligned}
\end{equation}
See \cite[\S3.1.2]{degennes:prost:93}, \cite[\S2.2.1]{stewart:04},
\cite[\S3.2]{virga:94}. Here we denote $n_{x,z}=\text{d} n_x/ \td{z}$,
$n_{z,z}=\text{d} n_z/\td{z}$, and $\theta_z=\text{d}\theta/\td{z}$. Thus the
distortional free energy per unit area, $\widetilde{\calF}_\text{e} := \calF_\text{e} / A$, as a
function of the tilt angle $\theta$ is given by
\begin{equation*}
\widetilde{\calF}_\text{e}[\theta] = \frac12 \int_0^d \bigl(
K_1\CC+K_3\SS \bigr) \theta_z^2 \, \td{z} .
\end{equation*}
The appropriate contribution to the free-energy density associated
with a static electric field arising from electrodes held at constant
potential is
\begin{equation*}
W_\text{E} = - \frac12 \bmD \cdot \bmE ,
\end{equation*}
where $\bmD$ is the displacement field and $\bmE$ the electric field.
This is discussed in a general context in \cite[\S4.7]{jackson:75} and
\cite[\S5, \S10]{landau:lifshitz:pitaevskii:93}; it is discussed in
the particular context of liquid crystals in \cite[\S3.6.2,
\S3.6.3]{barbero:evangelista:01}, \cite[\S10.1]{collings:hird:97}, and
\cite[\S7.1]{yang:wu:15}. Another perspective on this expression will
be developed in \S\ref{sec:system-potential-energy}. In a system such
as ours (a transversely isotropic medium in the linear regime), it is
usually assumed that
\begin{equation}\label{eqn:DepsE}
\bmD = \mathlarger{\bfeps}(\nhat) \bmE , \quad
\mathlarger{\bfeps} = \epsilon_0 \bigl[ \eps_{\scriptscriptstyle\perp} \mathbf{I} +
\eps_\text{a} ( \nhat \otimes \nhat ) \bigr] , \quad
\eps_\text{a} := \eps_{\scriptscriptstyle\parallel} - \eps_{\scriptscriptstyle\perp} ,
\end{equation}
with ${\mathlarger\mathlarger{\bfeps}}$ the dielectric tensor and $\eps_{\scriptscriptstyle\perp}$ and
$\eps_{\scriptscriptstyle\parallel}$ the relative dielectric permittivities perpendicular to
$\nhat$ and parallel to $\nhat$, giving
\begin{equation*}
W_\text{E} = - \frac12 \epsilon_0 \bigl[ \eps_{\scriptscriptstyle\perp} E^2 +
\eps_\text{a} ( \bmE\cdot\nhat )^2 \bigr] , \quad E = | \bmE | .
\end{equation*}
The macroscopic modeling of electric fields in liquid crystals is
discussed in \cite[\S3.3.1]{degennes:prost:93},
\cite[\S2.3.1]{stewart:04}, and \cite[\S4.1]{virga:94}. We note that
the electric field is in general nonhomogeneous \cite{gartland:21} and
that the linear relation between static $\bmD$ and $\bmE$ fields can
be nonlocal in space, though such spatial dispersion is generally
viewed as negligible in macroscopic models of pure dielectrics---see
\cite[\S I.4]{jackson:75} or \cite[\S103]{landau:lifshitz:pitaevskii:93}.
The relevant Maxwell equations for electrostatics (assuming no
distribution of free charge in $\Omega$) are
\begin{equation*}
\curl \bmE = \bfzero , \quad \div \bmD = 0 .
\end{equation*}
Tangential components of $\bmE$ are continuous across a material
interface, while the normal component of $\bmD$ suffers a jump equal
to the surface charge density $\sigma_{\text{f}}$ on the interface:
\begin{equation*}
\llbracket E_{\text{t}} \rrbracket = 0 , \quad
\llbracket D_\nu \rrbracket = \sigma_{\text{f}} .
\end{equation*}
Both $\bmD$ and $\bmE$ vanish inside electrodes. The basic
electrostatics that we require can be found in any of \cite[Ch.\,I,
Ch.\,1]{jackson:75}, \cite[Ch.\,I,
Ch.\,II]{landau:lifshitz:pitaevskii:93},
\cite[Chs.\,2--6]{reitz:milford:67}, or \cite[Ch.\,III]{stratton:41}.
Using $\curl\bmE=\bfzero$ and the interface conditions, we conclude
that in our system
\begin{equation}\label{eqn:EUzDzsigma}
\bmE = - U_z \ehat_z , \quad U_z = \text{d} U / \td{z} , \quad
D_z(0+) = D_z(d-) = - \sigma ,
\end{equation}
where $U=U(z)$ is the electric potential and $\sigma$ is the (uniform)
surface charge density on the upper electrode ($-\sigma$ on the lower
electrode). Given the polarity indicated in our circuit diagram in
figure\,\ref{fig:RCcircuit}, we have $\sigma > 0$. Thus $W_\text{E}$
simplifies to
\begin{equation*}
W_\text{E} = - \frac12 \epsilon_0 \bigl( \eperp n_x^2+\epara n_z^2 \bigr) U_z^2 .
\end{equation*}
For equilibrium states of our model problem, then, the total free
energy (per unit area) expressed as a functional of $\theta$ and $U$
is given by
\begin{equation}\label{eqn:Fcoupled}
\widetilde{\calF}[\theta,U] = \frac12 \int_0^d \bigl[ \bigl(K_1\CC+K_3\SS\bigr) \theta_z^2 -
\epsilon_0 \bigl(\eperp\!\CC+\epara\SS\bigr) U_z^2 \bigr] \, \td{z} .
\end{equation}
\subsection{Coupled system}
When viewed as a coupled system in this way, the equilibrium
Euler-Lagrange equations that follow from $\delta_\theta\widetilde{\calF}=0$ and
$\delta_U\widetilde{\calF}=0$ are given by
\begin{subequations}\label{eqn:coupled-system}
\begin{gather}
\frac{\text{d}}{\td{z}} \bigl[ \bigl(K_1\CC+K_3\SS\bigr) \theta_z \bigr] =
\sin\theta \cos\theta \bigl[ (K_3-K_1) \theta_z^2 -
\epsilon_0 \eps_\text{a} U_z^2 \bigr] ,
\label{eqn:thetaODE} \\
\frac{\text{d}}{\td{z}} \bigl[ \bigl(\eperp\!\CC+\epara\SS\bigr) U_z \bigr] = 0 ,
\label{eqn:UODE}
\end{gather}
with boundary conditions
\begin{equation}
\theta(0) = \theta(d) = 0 , \quad
U(0) = - \Delta U / 2 , ~ U(d) = \Delta U / 2 .
\end{equation}
\end{subequations}
Here $\Delta U$ is the difference between the electric potential on the
upper electrode and that on the lower electrode, and the director
field and electric field in the cell depend only on this
difference---an arbitrary constant added to $U$ has no effect on
solutions of \eqref{eqn:coupled-system} (one could just as well impose
boundary conditions $U(0)=0$, $U(d)=\Delta U$). Equation \eqref{eqn:UODE}
corresponds to $\div\bmD=0$ for our model cell and emerges in a
natural way as a condition of stationarity of $\widetilde{\calF}[\theta,U]$.
The characterization of the stability properties of solutions of the
coupled system \eqref{eqn:coupled-system} is a little nonstandard
because of the ``minimax'' nature of the free energy $\widetilde{\calF}$ in
\eqref{eqn:Fcoupled}. Locally stable solutions are locally minimizing
with respect to $\theta$ but maximizing with respect to $U$; while
globally stable solutions are globally minimizing with respect to
$\theta$, maximizing with respect to $U$. Globally stable solutions
can also be characterized as equilibrium solutions of least free
energy. These topics are taken up in \cite{gartland:21}. From the
point of view of \emph{dynamical} stability, the picture is somewhat
confusing, and a goal of this note is to try to understand this
better.
\subsection{Reduced system}
Instead of viewing the system as a coupled system with two state
variables, one can eliminate $U$ and model the system in terms of
$\theta$ only. An integration of \eqref{eqn:UODE} gives
\begin{equation}\label{eqn:Uz}
U_z = \Delta U \biggl[ \int_0^d \frac{\td{z}}{\eperp\!\CC+\epara\SS} \biggr]^{-1}
\!\! \frac1{\eperp\!\CC+\epara\SS} \, ,
\end{equation}
which when substituted into \eqref{eqn:thetaODE} produces
\begin{multline}\label{eqn:thetaODEnonlocal}
\frac{\text{d}}{\td{z}} \bigl[ \bigl(K_1\CC+K_3\SS\bigr) \theta_z \bigr] =
\sin\theta \cos\theta \biggl\{ (K_3-K_1) \theta_z^2 \\
{} - \epsilon_0 \eps_\text{a} (\Delta U)^2 \biggl[ \int_0^d \frac{\td{z}}{\eperp\!\CC+\epara\SS}
\biggr]^{-2} \!\! \frac1{(\eperp\!\CC+\epara\SS)^2} \biggr\} .
\end{multline}
This same equation can be obtained from the first variation of the
reduced free energy $\widetilde{\calF}_\text{r}$ that results when the expression for $U_z$
above is substituted into \eqref{eqn:Fcoupled}:
\begin{equation}\label{eqn:tFr}
\widetilde{\calF}_\text{r}[\theta] = \frac12 \int_0^d \bigl(K_1\CC+K_3\SS\bigr) \theta_z^2 \, \td{z} -
\frac12 \epsilon_0 (\Delta U)^2 \biggl[ \int_0^d \frac{\td{z}}{\eperp\!\CC+\epara\SS}
\biggr]^{-1} \!\! .
\end{equation}
If the voltage of the battery is $V$, then in equilibrium
$\Delta U=V$, and the above expression agrees with
\cite[(3.221)]{stewart:04}. When the system is \emph{not} in
equilibrium, however, $\Delta U$ need not be equal to $V$, and this
will be an issue in what follows.
The formulation in terms of the reduced free energy is natural and has
been widely used. It was used by Deuling in \cite{deuling:72}, which
is recounted in \cite[\S3.5]{stewart:04}. It was used by Hardt,
Kinderlehrer, and Lin in their analytical paper
\cite{hardt:kinderlehrer:lin:86}, and it was also employed by the
author in \cite{gartland:21}. The formulation has certain advantages
in terms of stability assessment, in that locally stable states are
local minimizers of $\widetilde{\calF}_\text{r}$ with respect to $\theta$, and globally
stable states are global minimizers (the second variation
$\delta^2\!\widetilde{\calF}_\text{r}$ being positive definite in both cases). Using $\widetilde{\calF}_\text{r}$
is equivalent to minimizing the free energy $\widetilde{\calF}$ in
\eqref{eqn:Fcoupled} subject to the ODE constraint \eqref{eqn:UODE}.
The approach amounts to viewing the electric field as slaved to the
director field. A disadvantage of the reduced-free-energy formulation
is that the equilibrium Euler-Lagrange equation
\eqref{eqn:thetaODEnonlocal} is \emph{nonlocal}.
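To make the reduced formulation concrete, here is a rough numerical
sketch (ours; the material parameters are illustrative assumptions,
not values taken from this paper) that evaluates \eqref{eqn:tFr} by
quadrature for a one-parameter family of trial tilt profiles:
\begin{verbatim}
# Rough numerical sketch (ours): evaluate (tFr) for trial profiles
# theta(z) = a sin(pi z / d).  Material parameters are illustrative
# assumptions, not values from this paper.
import numpy as np

K1, K3 = 6.2e-12, 8.2e-12                 # elastic constants [N]
eps0, eperp, epara = 8.854e-12, 7.0, 19.0
d, DU = 2.0e-6, 1.0                       # gap [m], potential diff. [V]
z = np.linspace(0.0, d, 2001)

def F_r(a):
    th = a * np.sin(np.pi * z / d)
    thz = a * (np.pi / d) * np.cos(np.pi * z / d)
    c2, s2 = np.cos(th)**2, np.sin(th)**2
    elastic = 0.5 * np.trapz((K1 * c2 + K3 * s2) * thz**2, z)
    electric = -0.5 * eps0 * DU**2 / np.trapz(1.0 / (eperp * c2
                                                     + epara * s2), z)
    return elastic + electric

for a in (0.0, 0.1, 0.2):                 # compare trial amplitudes
    print(a, F_r(a))
\end{verbatim}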
When the system is out of equilibrium and the director field is
evolving in time ($\nhat=\nhat(z,t)$, $\theta=\theta(z,t)$), then the
capacitance of the liquid-crystal cell will be changing with time as
well. In this case, the battery will need to move charge on or off
the electrodes in order to re-establish the equilibrium potential
difference $\Delta U = V$. The associated currents in the electric circuit
will suffer some energy loss, due to Joule heating, and we need a way
to combine these effects with the dynamics and viscous dissipation in
the liquid-crystal cell. We begin by reviewing the charge dynamics of
a standard RC circuit.
\section{RC circuits}\label{sec:RCcircuits}
The typical kind of experimental setup that we envision can be viewed
as an RC circuit with the liquid-crystal cell forming a capacitor
containing a complex time-varying dielectric, as depicted in
figure\,\ref{fig:RCcircuit}. The resistance $R$ could come from the
presence of an actual resistor, or it could just be thought of as a
surrogate to account for the total resistance of all the elements in
the circuit (wires, connectors, conduction layers in the cell, etc.).
The equation governing charge dynamics in such a circuit follows from
the Kirchhoff Voltage Law and the formulas for the potential drops
across a resistor and a capacitor \cite[\S6.6,
\S7.8]{reitz:milford:67}:
\begin{equation*}
\Delta U_{\text{res}} = I R , ~ \Delta U_{\text{cap}} = Q / C ~ \Rightarrow ~
I R + Q / C = V .
\end{equation*}
Here $Q(t)$ is the instantaneous total charge on the upper (positive)
electrode, and $I = \td{Q}/\td{t}$ is the current. Appending an initial
condition leads to the IVP
\begin{equation}\label{eqn:QODEIVP}
R \frac{\td{Q}}{\td{t}} + \frac1{C} Q = V , ~~ Q(0) = Q_0 ,
\end{equation}
the solution of which can be written
\begin{equation}\label{eqn:Qt}
Q(t) = Q_\infty + ( Q_0 - Q_\infty ) \exp(-t/\tau) , \quad
Q_\infty := C V , ~~ \tau := RC .
\end{equation}
Thus the steady state of the circuit corresponds to
\begin{equation*}
Q = C V , ~~ \Delta U = V ,
\end{equation*}
and the characteristic time scale for the dynamics is $\tau =
RC$---from here on, we simply denote $\Delta U_{\text{cap}} = \Delta U$ (as we
have used in the previous section).
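As a quick sanity check of \eqref{eqn:Qt}, one can integrate the IVP
\eqref{eqn:QODEIVP} numerically and compare with the closed form; the
parameter values in the sketch below (ours) are arbitrary
illustrations:
\begin{verbatim}
# Quick numerical check (ours) of the closed form: forward-Euler
# integration of R dQ/dt + Q/C = V against (Qt); parameter values are
# arbitrary illustrations.
import math

R, C, V, Q0 = 1.0e3, 2.0e-9, 5.0, 0.0    # ohm, F, V, C
tau = R * C
dt, T = tau / 1000.0, 5.0 * tau
Q, t = Q0, 0.0
while t < T:
    Q += dt * (V - Q / C) / R
    t += dt
Q_exact = C * V + (Q0 - C * V) * math.exp(-t / tau)
print(Q, Q_exact)                        # agree closely
\end{verbatim}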
The state of the circuit could just as well be characterized in terms
of $\Delta U$ (instead of $Q$), in which case \eqref{eqn:QODEIVP} would
take the form
\begin{equation*}
R \frac{\text{d}}{\td{t}} ( C \Delta U ) + \Delta U = V , ~~ \Delta U(0) = \Delta U_0 .
\end{equation*}
As we shall see in what follows, $C$ depends on $\nhat$ (which will be
changing in time, leading to $C=C(t)$); so modeling the circuit in
terms of $Q$ proves to be more convenient (at least for our model
problem).
The potential energies of the capacitor and battery, relative to a
value of zero at $Q=0$, are given by
\begin{equation*}
\calE_\text{cap} = \frac12 Q \Delta U , ~~ \calE_\text{bat} = - Q V .
\end{equation*}
The relation $Q=C\Delta U$ makes it possible to write these expressions in
various equivalent forms, e.g., $\calE_\text{cap}=Q^2/2C$ or $\calE_\text{cap}=C(\Delta U)^2/2$.
These potential energies correspond to the work done in a reversible
process building up the charge on the capacitor electrodes in an
incremental way. As the charge on the capacitor is being built up,
the electric potential is changing there as well (according to the
relation $Q = C \Delta U$), leading to the factor of $1/2$: the increment
of work done in moving an increment of charge from a location of
potential zero to a location of potential $\Delta U$ is $\delta W = \delta
Q \Delta U$, giving
\begin{equation*}
W_{\text{cap}} =
\int_0^{Q_\text{f}} \!\! \Delta U \, \td{Q} =
\int_0^{Q_\text{f}} \! \frac{Q}{C} \, \td{Q} =
\frac1{C} \int_0^{Q_\text{f}} \!\! Q \, \td{Q} =
\frac1{2C} Q_\text{f}^2 =
\frac12 Q_\text{f} \DU_\text{f} ,
\end{equation*}
where $Q_\text{f}$ and $\DU_\text{f}$ are the final (fully charged) values of $Q$ and
$\Delta U$. The battery, on the other hand, always maintains a constant
potential of $V$; so
\begin{equation*}
W_{\text{bat}} = \int_{Q_\text{f}}^0 \! V \, \td{Q} =
V \! \int_{Q_\text{f}}^0 \! \td{Q} = - Q_\text{f} V .
\end{equation*}
A discussion of these textbook formulas can be found in
\cite[\S6.6]{reitz:milford:67}. By convention, the accounting is done
in terms of just the positive electrode, but it takes into account the
contribution of the negative electrode---in actuality, one has charge
$Q$ on the positive electrode at potential $\Delta U/2$ and charge $-Q$ on
the negative electrode at potential $-\Delta U/2$. We conclude that the
total potential energy at any instant, expressed in terms of $Q$, is
given by
\begin{equation*}
\mathcal{E} := \calE_\text{cap} + \calE_\text{bat} = \frac1{2C} Q^2 - Q V .
\end{equation*}
The discussion above assumes a \emph{constant} capacitance $C$. We
will revisit this calculation later with $C = C(Q)$.
We note that whether the system is in equilibrium or not, we always
have equal and opposite total excess charge on the upper and lower
electrode surfaces. This relates to conservation of charge and is a
consequence of Gauss's Law. In the full time-dependent Maxwell
equations (as well as in Maxwell electrostatics), we always have
$\div\bmD=0$ in the absence of any free-charge distribution in
$\Omega$ (which we have assumed to be the case throughout). From this
follows
\begin{equation*}
0 = \int_\Omega \div \bmD \, \td{V} =
\int_{\partial\Omega} \! \bmD \cdot \nuhat \, \td{S} =
- \int_{\partial\Omega} \! \sigma_{\text{f}} \, \td{S} .
\end{equation*}
In our system, $\sigma_{\text{f}}$ is supported on the top and bottom electrode
surfaces; so the total charge on the top and bottom must sum to zero
($Q$ on the top and $-Q$ on the bottom, in our notation). We also
note that we use interchangeably the terms ``potential energy,''
``electrostatic energy,'' and ``free energy''---all refer to the
energy associated with reversible work processes.
Current flowing through a resistor leads to energy dissipation (by
Joule heating) at a rate $RI^2$---see
\cite[(21.6)]{landau:lifshitz:pitaevskii:93} or
\cite[\S1.8, (4)]{stratton:41}. The Rayleigh dissipation
function associated with this is
\begin{equation}\label{eqn:calRcircuit}
\mathcal{R} = \frac12 R I^2 = \frac12 R \Bigl( \frac{\td{Q}}{\td{t}} \Bigr)^{\!2} .
\end{equation}
From a variational point of view, then, the circuit dynamics can be
obtained from a dissipation principle:
\begin{equation*}
\frac{\partial\mathcal{E}}{\partial Q} +
\frac{\partial\mathcal{R}}{\partial\dot{Q}} = 0 ~ \Rightarrow ~
\frac1{C} Q - V + R \frac{\td{Q}}{\td{t}} = 0 ,
\end{equation*}
in agreement with \eqref{eqn:QODEIVP}. Here we follow the formalism
of Lagrangian mechanics with frictional forces and potential energy
only (no kinetic energy), denoted in that setting $L=T-V=-V$ (with
$V=\mathcal{E}$ here)---see, for example, \cite[\S2.2.1]{sonnet:virga:12},
where a historical perspective and classical references can be found.
One can try to put some of this in context. The total energy
dissipated in the dynamical process is given by
\begin{equation*}
\mathcal{D} \! = \int_0^\infty \!\! R I^2 \, \td{t} = \frac1{2C} ( Q_\infty - Q_0 )^2 ,
\end{equation*}
using \eqref{eqn:Qt}, compared to the potential-energy changes of the
capacitor and battery:
\begin{equation*}
\Delta\Ecap = \frac1{2C} \bigl( Q_\infty^2 - Q_0^2 \bigr) ,
\quad \Delta\Ebat = - ( Q_\infty - Q_0 ) V =
\frac1{C} ( Q_0 - Q_\infty ) Q_\infty .
\end{equation*}
Thus we always have
\begin{equation*}
\Delta\Ecap + \Delta\Ebat + \mathcal{D} = 0 ,
\end{equation*}
as we must. In the special case of charging the capacitor from
\emph{zero} ($Q_0=0$), we have
\begin{equation*}
\mathcal{D} = \Delta\Ecap = \frac12 C V^2 , \quad \Delta\Ebat = - C V^2 .
\end{equation*}
In this case, half the work done by the battery goes into the final
electrostatic energy of the capacitor, while the other half is lost to
dissipation. Similar phenomena (in which half of the work is lost to
dissipation) occur in a number of other settings, including simple
spring-mass-damper systems and linear elasticity
\cite{fosdick:truskinovsky:03}.
At the other extreme, if the initial charge on the
capacitor were just slightly out of equilibrium,
$Q_0 = ( 1 + \varepsilon ) Q_\infty $, say, then one would obtain
\begin{equation*}
\Delta\Ecap = - \varepsilon ( 1 + \varepsilon/2 ) C V^2 , \quad
\Delta\Ebat = \varepsilon C V^2 , \quad
\mathcal{D} = \varepsilon^2 C V^2 / 2 .
\end{equation*}
In this case, both $\Delta\Ecap$ and $\Delta\Ebat$ are $O(\varepsilon)$, but
$\mathcal{D} = O(\varepsilon^2)$---and the dissipation would essentially be
negligible when $|\varepsilon|\ll1$. While we always have
\begin{equation*}
0 \le \mathcal{D} \le | \Delta\Ecap | ,
\end{equation*}
we only have $\mathcal{D} = | \Delta\Ecap |$ in the case $Q_0 = 0$ or in the
trivial cases $Q_0=Q_\infty$ (no dynamics) or $Q_\infty=0$ (no
battery).
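The bookkeeping above is also easy to verify numerically. The sketch
below (ours, with arbitrary parameter values) checks that
$\Delta\Ecap + \Delta\Ebat + \mathcal{D} = 0$ along the exponential
solution \eqref{eqn:Qt}:
\begin{verbatim}
# Sanity check (ours) of the balance dEcap + dEbat + D = 0 along the
# exponential solution; parameter values are arbitrary illustrations.
import numpy as np

R, C, V, Q0 = 1.0e3, 2.0e-9, 5.0, 3.0e-9
tau, Qinf = R * C, C * V
t = np.linspace(0.0, 40 * tau, 200001)
I = (Qinf - Q0) / tau * np.exp(-t / tau)  # I = dQ/dt from (Qt)
D = np.trapz(R * I**2, t)                 # dissipated energy
dEcap = (Qinf**2 - Q0**2) / (2 * C)
dEbat = -(Qinf - Q0) * V
print(D, dEcap + dEbat + D)               # second number is ~0
\end{verbatim}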
\section{Total potential energy and dynamical system}
\label{sec:joint}
There is a mutual influence between the director field in the cell and
the state of the electric circuit: the director field determines the
capacitance of the cell (which affects the circuit), while the state
of the circuit (whether characterized by $Q$ or by $\Delta U$) affects the
electric field in the cell (and hence the director field). In order
to put together a coupled set of equilibrium equations or dynamical
equations, we require expressions for the total potential energy and
for the dissipation of the full system. These, in turn, require
certain ``building blocks,'' which we now derive.
\subsection{Capacitance of the cell}
Due to the simplicity of our model problem (fields depending on only
one space variable, stratified nature of the dielectric), it is
possible to express the capacitance of the cell in analytical form:
\begin{equation}\label{eqn:Cn}
C[\nhat] = \epsilon_0 A \biggl[
\int_0^d \!\! \frac{\td{z}}{\eperp n_x^2+\epara n_z^2} \biggr]^{-1} \!\! .
\end{equation}
This can be derived in various ways. Using the basic relation for a
parallel-plate capacitor $Q=C\Delta U$ (with $Q$ the total charge on the
positive electrode, $C$ the capacitance, and $\Delta U$ the potential
difference between the electrodes, as before) and the relations in
\eqref{eqn:DepsE} and \eqref{eqn:EUzDzsigma}, we have
\begin{equation*}
D_z = \epsilon_{zz} E_z = - \sigma ~ \Rightarrow ~
U_z = - E_z = \frac{\sigma}{\,\epsilon_{zz}} ~ \Rightarrow ~
\Delta U = \! \int_0^d \! U_z \, \td{z} =
\sigma \! \int_0^d \! \frac{\td{z}}{\,\epsilon_{zz}} \, ,
\end{equation*}
which gives
\begin{equation*}
C = \frac{Q}{\Delta U} =
\frac{\sigma A}{\sigma \! \int_0^d (1/\epsilon_{zz}) \, \td{z}} =
A \biggl[ \int_0^d \! \frac{\td{z}}{\,\epsilon_{zz}} \biggr]^{-1} \!\!\! =
\epsilon_0 A \biggl[ \int_0^d \!\! \frac{\td{z}}{\eperp n_x^2+\epara n_z^2} \biggr]^{-1} \!\! .
\end{equation*}
In this situation, $C$ is simply an integral functional of the
director field. One also sees such expressions derived by
approximating the stratified dielectric as a collection of thin
capacitive elements in series \cite[\S5.4.3.1]{dunmur:toriyama:99}.
Note that if the material were isotropic with relative permittivity
$\eps_\text{r}$ (i.e., $\eps_{\scriptscriptstyle\perp}=\eps_{\scriptscriptstyle\parallel}=\eps_\text{r}$), then \eqref{eqn:Cn} would
simplify to $C=\epsilon A/d$ with $\epsilon=\epsilon_0\eps_\text{r}$, which is the
textbook expression for the capacitance of a parallel-plate capacitor
\cite[\S6.6]{reitz:milford:67}.
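For a concrete feel for \eqref{eqn:Cn}, the sketch below (ours; the
tilt profile and material constants are illustrative assumptions)
evaluates the capacitance by quadrature and recovers the
parallel-plate value in the undistorted limit:
\begin{verbatim}
# Sketch (ours): evaluate (Cn) by quadrature for an assumed tilt
# profile theta(z) = a sin(pi z / d); material constants and geometry
# are illustrative, not values from this paper.
import numpy as np

eps0 = 8.854e-12                          # F/m
eperp, epara = 7.0, 19.0                  # assumed relative permittivities
A, d = 1.0e-4, 2.0e-6                     # electrode area [m^2], gap [m]
z = np.linspace(0.0, d, 2001)

def capacitance(a):
    th = a * np.sin(np.pi * z / d)
    nz2 = np.sin(th)**2                   # n_z^2; n_x^2 = 1 - n_z^2
    return eps0 * A / np.trapz(1.0 / (eperp * (1.0 - nz2)
                                      + epara * nz2), z)

print(capacitance(0.0), eps0 * eperp * A / d)   # undistorted: equal
print(capacitance(0.5))                   # distorted: larger if eps_a > 0
\end{verbatim}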
In more general circumstances (such as a director field that is a
function of more than one space variable), it is not possible to
derive a formula such as the above: there is no explicit analytical
expression for the capacitance of a cell with a two- or
three-dimensional inhomogeneity of the dielectric. In such cases, one
can characterize the capacitance in terms of the director field
$\nhat$ in $\Omega$ given either $Q$ or $\Delta U$, but to determine $C$,
one must solve an auxiliary problem from an appropriate formulation of
Maxwell electrostatics, then determine $\Delta U$ from this solution (if
$Q$ was given) or $Q$ (if $\Delta U$ was given), and finally compute the
ratio $C=Q/\Delta U$.
\subsection{Potential energy of the cell}
\label{sec:potential-energy-cell}
The textbook calculation of the electrostatic energy of a capacitor
was recounted in \S\ref{sec:RCcircuits}. When combined with the
relation $Q=C\Delta U$, it yielded several equivalent ways of writing this
potential energy:
\begin{equation}\label{eqn:Ecap}
\calE_\text{cap} = \frac12 Q \Delta U = \frac1{2C} Q^2 = \frac12 C (\Delta U)^2 .
\end{equation}
The calculation of $\calE_\text{cap}$ in \S\ref{sec:RCcircuits} depended on the
assumption that the capacitance of the cell was \emph{constant}. In
our system, however, the capacitance changes with changes in the
director field in the cell, and we must modify this calculation
accordingly.
As before, we start from the fact that the increment of work
$\delta W$ done in moving an increment of charge $\delta Q$ from one
location to another location of potential difference $\Delta U$ is
$\delta W = \delta Q \Delta U$, giving (as before)
\begin{equation*}
W_{\text{cap}} = \int_0^{Q_\text{f}} \!\! \Delta U \, \td{Q} =
\int_0^{Q_\text{f}} \! \frac{Q}{C} \, \td{Q} .
\end{equation*}
Now, however, $C$ depends on $Q$ (in a way that could be complicated
and not easy to express)---as charge is added to the upper electrode
(depleted from the lower electrode), the electric field in the cell
will eventually become strong enough to distort the director field
(and change the capacitance). An integration by parts provides a
simple way to assess the new situation:
\begin{multline*}
W_{\text{cap}} = \int_0^{Q_\text{f}} \!\! \frac{Q}{C(Q)} \, \td{Q} =
\int_0^{Q_\text{f}} \!\! \frac1{C(Q)} \, \text{d} \Bigl( \frac{Q^2}2 \Bigr) \\ =
\frac12 \frac{Q_\text{f}^2}{C(Q_\text{f})} + \frac12 \int_0^{Q_\text{f}} \!\!
\frac{Q^2}{C(Q)^2} \, \frac{\text{d} C}{\td{Q}} \, \td{Q} =
\frac12 \frac{Q_\text{f}^2}{C(Q_\text{f})} + \frac12 \int_{C(0)}^{C(Q_\text{f})} \!\!
( \Delta U )^2 \, \text{d} C .
\end{multline*}
The first term on the right-hand side above is one form of the
electrostatic energy $\calE_\text{cap}$ of a capacitor with a constant
capacitance $C=C(Q_\text{f})$, as in \eqref{eqn:Ecap}. The second term above
can be interpreted as follows.
The increment of work $\delta W$ done in changing the capacitance by
an incremental amount $\delta C$ is $\delta W=\frac12\delta C(\Delta U)^2$,
which can be seen from the alternate form of the potential energy of
the capacitor $\calE_\text{cap}=\frac12C(\Delta U)^2$. Thus the integral term on the
right-hand side above is the reversible work done in changing the
capacitance of the cell from its initial value at $Q=0$ to its final
value at $Q=Q_\text{f}$. In our system, the only way the capacitance can be
changed is by distorting the director field in $\Omega$, and the work
function for that process is the distortional elastic energy $\calF_\text{e}$.
Thus the integral in question gives the change in distortional elastic
energy from its value at $Q=0$ to its value at $Q=Q_\text{f}$:
\begin{equation*}
\frac12 \int_{C(0)}^{C(Q_\text{f})} \!\! ( \Delta U )^2 \, \text{d} C =
\calF_\text{e}\bigr|_{Q=Q_\text{f}} - \calF_\text{e}\bigr|_{Q=0} = \calF_\text{e}\bigr|_{Q=Q_\text{f}} ,
\end{equation*}
since $\calF_\text{e}|_{Q=0}=0$ in our system. Under more general circumstances,
one could have $\calF_\text{e}|_{Q=0}\not=0$, but that term would just add a
constant to the potential energy and could be ignored.
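The integration by parts above is easy to verify numerically. The sketch below assumes a smooth, hypothetical capacitance law $C(Q)$ (purely illustrative, not derived from our model) and checks that the total work equals the capacitor term plus the $\frac12\int(\Delta U)^2\,\text{d}C$ term:
\begin{verbatim}
import numpy as np

# Numerical check of the integration-by-parts identity, using an
# assumed smooth capacitance law C(Q) (purely illustrative).
Qf = 1.0e-8
Q = np.linspace(0.0, Qf, 20001)
C = 2.0e-10 * (1.0 + 0.5 * (Q / Qf)**2)   # hypothetical C(Q)

W = np.trapz(Q / C, Q)                    # total work: int Q/C dQ
cap = 0.5 * Qf**2 / C[-1]                 # (1/2) Qf^2 / C(Qf)
dU = Q / C                                # Delta U along the charging path
elastic = 0.5 * np.trapz(dU**2 * np.gradient(C, Q), Q)

assert np.isclose(W, cap + elastic, rtol=1e-5)
\end{verbatim}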
We see that the potential energy of the liquid-crystal cell in our
system is given by
\begin{equation*}
W_{\text{cap}} = \calE_\text{cap} + \calF_\text{e}[\nhat] , \quad
\calE_\text{cap} = \frac12 Q \Delta U =
\frac12 C[\nhat]^{-1} Q^2 =
\frac12 C[\nhat] ( \Delta U )^2 ,
\end{equation*}
where $\nhat$, $Q$, and $\Delta U$ are the instantaneous values of these
state variables. This expression captures, in a clean and decoupled
way, both the work done in moving charge and that done by inducing
change in the capacitance (by distorting the director field). We note
that we continue to use the notation $\calE_\text{cap}$ for any of the equivalent
formulas for the electrostatic energy of a capacitor of constant
capacitance \eqref{eqn:Ecap}, with the understanding that the value of
$C$ is always taken to be that associated with the current state of
the director field: $C=C[\nhat]$.
\subsection{Potential energy of the system and coupled equilibrium equations}
\label{sec:system-potential-energy}
The total potential energy of the system $\mathcal{G}$ is thus given by
\begin{equation}\label{eqn:calG}
\mathcal{G} = \calF_\text{e} + \calE_\text{cap} + \calE_\text{bat} = \calF_\text{e} + \frac12 Q \Delta U - Q V .
\end{equation}
The first two terms represent the potential energy of the
liquid-crystal cell, as derived in \S\ref{sec:potential-energy-cell},
while the third term is the potential energy associated with the
battery. The expression for $\mathcal{G}$ can be related to more familiar
forms by noting that for static electric fields, $\calE_\text{cap}$ can be
expressed in terms of field intensities inside the cell:
\begin{equation}\label{eqn:QDUvsDE}
\frac12 Q \Delta U = \frac12 \int_\Omega (\bmD\cdot\bmE) \, \td{V} ,
\end{equation}
where $\bmD$ and $\bmE$ are the fields associated with the
electrostatic problem $\div\bmD=0$ in $\Omega$ with a potential
difference $\Delta U$ between the electrodes. This can be established as
follows. Using the relations $\bmD\cdot\nuhat = -\sigma_{\text{f}}$ (with
$\nuhat$ the outward normal from $\Omega$), $\div\bmD=0$, and
$\bmE = -\nabla U$, we obtain the following relation for a general
surface charge density $\sigma_{\text{f}}$ and electric potential $U$:
\begin{multline*}
\int_{\partial\Omega} \! \sigma_{\text{f}} \, U \, \td{S} =
- \int_{\partial\Omega} ( U \bmD ) \cdot \nuhat \, \td{S} =
- \int_\Omega \div ( U \bmD ) \, \td{V} \\ =
- \int_\Omega ( U \div \bmD + \nabla U \cdot \bmD ) \, \td{V} =
\int_\Omega ( \bmD \cdot \bmE ) \, \td{V} .
\end{multline*}
See, for example, \cite[\S6.3]{reitz:milford:67} or
\cite[\S2.8]{stratton:41}. In our system, the surface charge density
is supported on the upper and lower boundary electrodes, and the
electric potential $U$ is constant on each electrode ($-\Delta U/2$ on the
lower, $\Delta U/2$ on the upper), giving
\begin{multline*}
\int_{\partial\Omega} \! \sigma_{\text{f}} \, U \, \td{S} =
\int_{\Gamma_1} \! \sigma_{\text{f}} \, U \, \td{S} +
\int_{\Gamma_2} \! \sigma_{\text{f}} \, U \, \td{S} =
- \frac{\Delta U}2 \int_{\Gamma_1} \! \sigma_{\text{f}} \, \td{S}
+ \frac{\Delta U}2 \int_{\Gamma_2} \! \sigma_{\text{f}} \, \td{S} \\ =
- \frac{\Delta U}2 (-Q) + \frac{\Delta U}2 Q = Q \Delta U .
\end{multline*}
Here $\Gamma_1$ and $\Gamma_2$ are the lower and upper boundary
electrode interfaces. Combining these two calculations establishes
the validity of \eqref{eqn:QDUvsDE}. Note that this argument does not
require $\sigma_{\text{f}}$ to be constant on $\Gamma_1$ or $\Gamma_2$.
Concerning these equivalent formulas for electrostatic energy
\begin{equation*}
\frac12 \int_{\partial\Omega} \! \sigma_{\text{f}} \, U \, \td{S} =
\frac12 \int_\Omega ( \bmD \cdot \bmE ) \, \td{V} ,
\end{equation*}
the left-hand side is in fact the more primitive expression (how
electrostatic energy associated with surface charge distributions is
often first derived in electromagnetics textbooks)---see, for example,
\cite[\S6.2]{reitz:milford:67} or \cite[\S2.7]{stratton:41}. We note
that when there is no current flowing in the circuit, then $\Delta U = V$,
and the combination $\calE_\text{cap}+\calE_\text{bat}$ satisfies
\begin{equation}\label{eqn:calEequilib}
\calE_\text{cap} + \calE_\text{bat} = \frac12 Q V - Q V = - \frac12 Q V =
- \frac12 \int_\Omega ( \bmD \cdot \bmE ) \, \td{V} ,
\end{equation}
which is the appropriate contribution to the free energy to be used in
modeling liquid crystal equilibrium states in a setting such as ours.
We emphasize that in order for \eqref{eqn:calEequilib} to be valid,
one requires equilibrium conditions in the electrical circuit ($I=0$,
$\Delta U=V$) and also equilibrium of the electric field in the cell
($\curl\bmE=\bfzero$, $\bmE=-\nabla U$)---with time-varying electric
fields, the time-dependent Maxwell equations have
$\curl\bmE\not=\bfzero$, in general, and $\bmE$ cannot be expressed as
the gradient of a scalar potential.
The expression for $\mathcal{G}$ in \eqref{eqn:calG} can be written in
different forms, depending upon the choice of state variable for the
circuit:
\begin{subequations}
\begin{equation}\label{eqn:GnDU}
\mathcal{G}[\nhat,\Delta U] = \calF_\text{e}[\nhat] +
C[\nhat] \Bigl[ \frac12 (\Delta U)^2 - V \Delta U \Bigr]
\end{equation}
or
\begin{equation}\label{eqn:GnQ}
\mathcal{G}[\nhat,Q] = \calF_\text{e}[\nhat] + \frac12 C[\nhat]^{-1} Q^2 - V Q .
\end{equation}
\end{subequations}
Here $\nhat$ is the current state of the director field in $\Omega$,
$\calF_\text{e}[\nhat]$ is the distortional elastic energy of that state (as in
\eqref{eqn:Fe}), and $C[\nhat]$ is the capacitance of the cell in that
state (given by \eqref{eqn:Cn} for our model problem). The above
expressions apply more generally than just to our model problem; they
are valid, for example, even if the fields in the cell depend on more
than one space variable (though in that case, one would not have an
explicit formula for $C[\nhat]$, in general).
Stable equilibrium states of the system correspond to minimizers of
the total potential energy $\mathcal{G}$ with respect to $(\nhat,\Delta U)$ or
$(\nhat,Q)$, depending on the choice of state variables. The
capacitance is positive; so
\begin{equation*}
\min \mathcal{G} ~ \Rightarrow ~ \Delta U = V , ~ Q = C V ,
\end{equation*}
and the following minimization problem for $\nhat$ results:
\begin{equation*}
\min_{\nhat} \Bigl\{ \calF_\text{e}[\nhat] - \frac12 C[\nhat] V^2 \Bigr\} .
\end{equation*}
Using \eqref{eqn:QDUvsDE} and the equilibrium condition $Q=C[\nhat]V$,
the objective functional above can be written
\begin{equation*}
\calF_\text{e}[\nhat] - \frac12 C[\nhat] V^2 =
\calF_\text{e}[\nhat] - \frac12 \int_\Omega ( \bmD \cdot \bmE ) \, \td{V} .
\end{equation*}
Here the terms $\bmD$ and $\bmE$ in the last integral correspond to
the solution of the electrostatics problem in $\Omega$ with the
dielectric tensor associated with the current director field,
${\mathlarger\mathlarger{\bfeps}}={\mathlarger\mathlarger{\bfeps}}(\nhat)$, and with a
potential difference of $V$ across the cell. The minimization with
respect to $\nhat$ is subject to the pointwise constraint $|\nhat|=1$
and appropriate boundary conditions. The functional
$\calF_\text{e} - \frac12 \int_\Omega ( \bmD \cdot \bmE ) \, \td{V}$ is the correct
form of the free energy for equilibrium states of $\nhat$ in an
electric field, as in \eqref{eqn:FnU} (with the identification
$\bmD=\mathlarger{\bfeps}(\nhat)\bmE$, $\bmE=-\nabla U$).
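To see this reduced minimization in action for the model cell, one can discretize $\theta$ and hand the functional to a generic optimizer. The sketch below (equal elastic constants, strong anchoring $\theta=0$ at the plates, illustrative parameter values) minimizes the per-area functional $\widetilde{\calF}_\text{e}[\theta]-\frac12\widetilde{C}[\theta]V^2$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Hedged sketch: direct minimization of the reduced per-area functional
# F_e[theta] - (1/2) C[theta] V^2, with K1 = K3 = K (values illustrative).
eps0, K, V, d = 8.854e-12, 1.0e-11, 3.0, 5.0e-6
eps_perp, eps_par = 5.0, 15.0
N = 64
z = np.linspace(0.0, d, N + 1)
dz = z[1] - z[0]

def energy(theta_int):
    th = np.concatenate(([0.0], theta_int, [0.0]))   # strong anchoring
    Fe = 0.5 * K * np.sum(np.diff(th)**2 / dz)       # (K/2) int theta_z^2 dz
    eps = eps_perp * np.cos(th)**2 + eps_par * np.sin(th)**2
    C_tilde = eps0 / np.trapz(1.0 / eps, z)          # capacitance per area
    return Fe - 0.5 * C_tilde * V**2

theta0 = 0.1 * np.sin(np.pi * z[1:-1] / d)           # seed distortion
res = minimize(energy, theta0, method="L-BFGS-B")
\end{verbatim}
For voltages below the Fr\'{e}edericksz\ threshold the optimizer should return the undistorted state $\theta\equiv0$; above it (as with the values shown), a distorted profile of lower energy appears.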
All of these expressions can be put in explicit forms for our model
problem, for which it is convenient to use the formulation in terms of
$Q$. The state variables are $\nhat=\nhat(z,t)$ (the director field
in the cell) and $\sigma=\sigma(t)$ (the charge density on the upper
electrode). Scaling our energies and capacitance by the
cross-sectional area of the cell,
\begin{equation*}
\widetilde{\calG} := \mathcal{G} / A , \quad \widetilde{\calF}_\text{e} := \calF_\text{e} / A , \quad \widetilde{C} := C / A ,
\end{equation*}
and using $Q=\sigma A$, we obtain
\begin{align*}
\widetilde{\calG}[\nhat,\sigma]
&= \widetilde{\calF}_\text{e}[\nhat] + \frac12 \widetilde{C}[\nhat]^{-1} \sigma^2 - \sigma V \\
&= \frac12 \int_0^d \! \bigl( K_1 n_{z,z}^2 + K_3 n_{x,z}^2 \bigr) \, \td{z} +
\frac12 \frac{\,\,\sigma^2}{\,\epsilon_0} \!
\int_0^d \! \frac{\td{z}}{\eperp n_x^2+\epara n_z^2} - \sigma V .
\end{align*}
Here we have used \eqref{eqn:Fe}, \eqref{eqn:We}, and \eqref{eqn:Cn}
to provide the expressions for $\calF_\text{e}[\nhat]$ and $C[\nhat]$. We shall
work with this mostly in terms of the tilt-angle representation
$\nhat=\cos\theta\,\ehat_x+\sin\theta\,\ehat_z$:
\begin{equation}\label{eqn:calGtilde}
\widetilde{\calG}[\theta,\sigma] =
\frac12 \int_0^d \bigl(K_1\CC+K_3\SS\bigr) \theta_z^2 \, \td{z} +
\frac12 \frac{\,\,\sigma^2}{\,\epsilon_0} \!
\int_0^d \! \frac{\td{z}}{\eperp\!\CC+\epara\SS} - \sigma V .
\end{equation}
The associated coupled equilibrium equations from $\delta_\theta\widetilde{\calG}=0$
and $\partial\widetilde{\calG}/\partial\sigma=0$ are
\begin{gather*}
\frac{\text{d}}{\td{z}} \bigl[ \bigl( K_1\CC+K_3\SS \bigr) \theta_z \bigr] =
\sin\theta \cos\theta \biggl[ (K_3-K_1) \theta_z^2 -
\eps_\text{a} \frac{\,\,\sigma^2}{\,\epsilon_0} \frac1{(\eperp\!\CC+\epara\SS)^2} \biggr] , \\
\frac{\,\sigma}{\,\epsilon_0} \int_0^d \! \frac{\td{z}}{\eperp\!\CC+\epara\SS} = V .
\end{gather*}
The latter equation is equivalent to $Q=CV$, which is the correct
equilibrium condition for the charge. Substituting the equilibrium
value for $\sigma$ above into \eqref{eqn:calGtilde} and into the
equilibrium ODE for $\theta$ above correctly gives the reduced free
energy \eqref{eqn:tFr} and equilibrium equation
\eqref{eqn:thetaODEnonlocal}, in agreement with
\cite[(3.221), (3.226)]{stewart:04}.
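As a sanity check on the stationarity condition for $\sigma$, one can freeze a tilt profile, scan $\widetilde{\calG}[\theta,\sigma]$ over $\sigma$, and confirm that the minimum sits at the equilibrium charge density (equivalently, at $\Delta U=V$). A minimal sketch with illustrative values:
\begin{verbatim}
import numpy as np

# Scan the per-area energy of eqn (calGtilde) over sigma for a frozen
# (hypothetical) tilt profile; the minimum should sit at sigma_eq,
# i.e., at Q = C V.  All values are illustrative.
eps0, V, d = 8.854e-12, 2.0, 5.0e-6
eps_perp, eps_par, K = 5.0, 15.0, 1.0e-11

z = np.linspace(0.0, d, 401)
theta = 0.2 * np.sin(np.pi * z / d)          # frozen sample profile
eps = eps_perp * np.cos(theta)**2 + eps_par * np.sin(theta)**2

Fe = 0.5 * K * np.trapz(np.gradient(theta, z)**2, z)  # elastic term (K1=K3)
I = np.trapz(1.0 / eps, z)                   # the capacitance integral
sigma = np.linspace(0.0, 1.0e-4, 2001)
G = Fe + 0.5 * sigma**2 * I / eps0 - sigma * V

sigma_eq = eps0 * V / I                      # stationarity: dG/dsigma = 0
assert np.isclose(sigma[np.argmin(G)], sigma_eq, rtol=1e-2)
\end{verbatim}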
\subsection{Dissipation and dynamics}
We wish to use a dissipation principle to obtain a dynamical system
involving the coupled state variables $\nhat$ and $\sigma$. The
simplest expression for dissipation associated with the dynamics of
the director field is usually given in terms of the single rotational
viscosity parameter $\gamma_1$ via the Rayleigh function
\begin{equation*}
\frac12 \gamma_1 \! \int_\Omega \,
\Bigl| \frac{\partial\nhat}{\partial t} \Bigr|^2 \td{V} =
\frac12 \gamma_1 A \! \int_0^d
\Bigl( \frac{\partial\theta}{\partial t} \Bigr)^{\!2} \td{z} =: \calR_\theta .
\end{equation*}
Per unit area, then, we have
\begin{equation*}
\widetilde{\calR}_\theta := \frac1{A} \calR_\theta = \frac12 \gamma_1 \! \int_0^d
\Bigl( \frac{\partial\theta}{\partial t} \Bigr)^{\!2} \td{z} .
\end{equation*}
We have already seen in \eqref{eqn:calRcircuit} that the Rayleigh
dissipation function for the electric circuit is
\begin{equation*}
\frac12 R \Bigl( \frac{\td{Q}}{\td{t}} \Bigr)^{\!2} =
\frac12 R A^2 \Bigl( \frac{\text{d}\sigma}{\td{t}} \Bigr)^{\!2} =: \calR_\sigma ,
\end{equation*}
which leads to
\begin{equation*}
\widetilde{\calR}_\sigma := \frac1{A} \calR_\sigma =
\frac12 R A \Bigl( \frac{\text{d}\sigma}{\td{t}} \Bigr)^{\!2} .
\end{equation*}
Combining these gives
\begin{equation}\label{eqn:calRtilde}
\widetilde{\calR} := \widetilde{\calR}_\theta + \widetilde{\calR}_\sigma = \frac12 \gamma_1 \! \int_0^d
\Bigl( \frac{\partial\theta}{\partial t} \Bigr)^{\!2} \td{z} +
\frac12 R A \Bigl( \frac{\text{d}\sigma}{\td{t}} \Bigr)^{\!2} ,
\end{equation}
which is the form we shall adopt for our combined dissipation
function.
The total potential energy \eqref{eqn:calGtilde} and Rayleigh function
\eqref{eqn:calRtilde} produce the correct circuit dynamics:
\begin{equation*}
\frac{\partial\widetilde{\calG}}{\partial\sigma} +
\frac{\partial\widetilde{\calR}}{\partial\dot{\sigma}} = 0 ~ \Rightarrow ~
\frac{\,\sigma}{\,\epsilon_0} \int_0^d \! \frac{\td{z}}{\eperp\!\CC+\epara\SS} -
V + R A \frac{\text{d}\sigma}{\td{t}} = 0 ~ \Rightarrow ~
R \frac{\td{Q}}{\td{t}} + \frac1{C} Q = V ,
\end{equation*}
using \eqref{eqn:Cn} and $Q = \sigma A$---this agrees with
\eqref{eqn:QODEIVP}. The dynamics for the director angle results from
the analogous expression for $\theta$:
\begin{equation*}
\delta_\theta\widetilde{\calG} + \delta_{\dot{\theta}}\widetilde{\calR} = 0 ~ \Rightarrow ~
\frac{\partial W}{\partial\theta} - \frac{\partial}{\partial z}
\Bigl( \frac{\partial W}{\partial\theta_z} \Bigr) +
\gamma_1 \frac{\partial\theta}{\partial t} = 0 ,
\end{equation*}
with $W$ the free-energy density that results from combining the
integrated terms in \eqref{eqn:calGtilde}. In expanded form, the PDE
for $\theta$ is given by
\begin{subequations}\label{eqn:coupled-dynamical-system}
\begin{multline}\label{eqn:dthetadt}
\gamma_1 \frac{\partial\theta}{\partial t} =
\frac{\partial}{\partial z}
\Bigl[ \bigl( K_1\CC+K_3\SS \bigr) \frac{\partial\theta}{\partial z} \Bigr] \\
{} - \sin\theta \cos\theta \biggl[ (K_3-K_1)
\Bigl( \frac{\partial\theta}{\partial z} \Bigr)^{\!2} -
\eps_\text{a} \frac{\,\,\sigma^2}{\,\epsilon_0} \frac1{(\eperp\!\CC+\epara\SS)^2} \biggr] .
\end{multline}
Our dynamical system, then, consists of the PDE \eqref{eqn:dthetadt}
above for $\theta$, supplemented by auxiliary conditions
\begin{equation}
\theta(0,t) = \theta(d,t) = 0 , \quad \theta(z,0) = \theta_0(z) ,
\end{equation}
and the ODE IVP for $\sigma$
\begin{equation}\label{eqn:dsigmadt}
R A \frac{\text{d}\sigma}{\td{t}} +
\frac{\,\sigma}{\,\epsilon_0} \int_0^d \! \frac{\td{z}}{\eperp\!\CC+\epara\SS} = V , \quad
\sigma(0) = \sigma_0 ,
\end{equation}
\end{subequations}
with the initial states $\theta_0$ and $\sigma_0$ prescribed.
The variational approach used here to obtain the director dynamics
equation \eqref{eqn:dthetadt} is similar to that given in
\cite[\S4.3]{stewart:04}---compare \eqref{eqn:dthetadt} with
\cite[(4.152)]{stewart:04}. See also \cite[\S5.9]{stewart:04},
where the same ideas are used to model the dynamics of certain Fr\'{e}edericksz\
transitions.
Some of the terms above can be related to more familiar expressions.
The term involving $\sigma^2$ in \eqref{eqn:dthetadt} corresponds to
the ``dielectric torque'' $\bmD\times\bmE$, the couple per unit volume
exerted by the electric field on the director field. In our system,
as in \eqref{eqn:DepsE} and \eqref{eqn:EUzDzsigma}, we have
\begin{equation*}
\bmE = E \ehat_z , \quad
\bmD = \mathlarger{\bfeps}(\nhat) \bmE =
\epsilon_0 E \bigl[ \eps_\text{a} n_x n_z \ehat_x + \bigl( \eperp n_x^2+\epara n_z^2 \bigr) \ehat_z \bigr] .
\end{equation*}
In terms of $\theta$, then,
\begin{equation*}
\bmD \times\bmE = - \epsilon_0 \eps_\text{a} E^2 \sin\theta \cos\theta \, \ehat_y .
\end{equation*}
The connection between $E^2$ and $\sigma^2$ follows from
\begin{equation*}
\sigma = - D_z ~ \Rightarrow ~
\sigma^2 = \epsilon_0^2 E^2 ( \eperp\!\CC+\epara\SS )^2 ,
\end{equation*}
which gives
\begin{equation*}
\bmD \times \bmE = - \eps_\text{a} \frac{\,\,\sigma^2}{\,\epsilon_0}
\frac{\sin\theta\cos\theta}{(\eperp\!\CC+\epara\SS)^2} \ehat_y ,
\end{equation*}
as in \eqref{eqn:dthetadt}. Also, using \eqref{eqn:Cn} we have
\begin{equation*}
C[\theta]^{-1} = \frac1{\epsilon_0 A} \int_0^d \! \frac{\td{z}}{\eperp\!\CC+\epara\SS} ,
\end{equation*}
and \eqref{eqn:dsigmadt} can be written
\begin{equation*}
R A \frac{\text{d}\sigma}{\td{t}} + C[\theta]^{-1} \! A \sigma = V
~~ \text{or} ~~
R \frac{\text{d} Q}{\td{t}} + C[\theta]^{-1} Q = V ,
\end{equation*}
as in \eqref{eqn:QODEIVP}.
As we have already noted (and as we shall see in
\S\ref{sec:TimeScales}), the time scale for \eqref{eqn:dsigmadt} is
often faster than that for \eqref{eqn:dthetadt}. If one were to
choose to model the charge distribution as adjusting instantaneously
to changes in the director field, then one would have
$\text{d}\sigma/\td{t}=0$ in \eqref{eqn:dsigmadt}, and that equation would
collapse to the equilibrium condition for $\sigma$ in the previous
section. Substituting the equilibrium value for $\sigma$ into
\eqref{eqn:dthetadt} would give the director-dynamics equation for the
reduced free energy $\widetilde{\calF}_\text{r}$ in \eqref{eqn:tFr} (with $\Delta U=V$), that is,
the dynamical version of \eqref{eqn:thetaODEnonlocal} (again with
$\Delta U=V$).
\subsection{Time-varying $\bmD$ versus $\bmE$}
With an electric field that varies in space and in time
($\bmE = \bmE(\bmx,t)$), the linear relationship between $\bmD$ and
$\bmE$ is in general nonlocal in space and in time; however, for
macroscopic models of pure dielectrics, nonlocality in space can
generally be ignored \cite[\S I.4]{jackson:75}, \cite[\S77, \S78,
\S103]{landau:lifshitz:pitaevskii:93}. Nonlocality in time can be an
issue in some circumstances, such as fast-switching experiments with
electric fields from electric pulses of large voltage and short
duration \cite{gu:yin:shiyanovskii:lavrentovich:07}. If the
dielectric properties of the medium remain constant, spatial
dispersion is ignored, and the electric field is time harmonic, then
the induced polarization will be time harmonic with the same frequency
and hence so will $\bmD$ (via $\bmD=\epsilon_0\bmE+\bmP$):
\begin{equation*}
\bmE(\bmx,t) = e^{-i\omega_0t} \bmE_0(\bmx) ~~ \Rightarrow ~~
\bmD(\bmx,t) = e^{-i\omega_0t} \mathlarger{\bfeps}(\bmx,\omega_0) \bmE_0(\bmx) .
\end{equation*}
See \cite[\S I.4]{jackson:75},
\cite[\S77]{landau:lifshitz:pitaevskii:93}. This includes the special
case of a static electric field ($\omega_0=0$):
$\bmD(\bmx)={\mathlarger\mathlarger{\bfeps}}(\bmx)\bmE(\bmx)$.
The relationship between $\bmD$ and $\bmE$ in the time-harmonic case
is \emph{frequency dependent}, however
(${\mathlarger\mathlarger{\bfeps}}={\mathlarger\mathlarger{\bfeps}}(\bmx,\omega)$), and
so for a general time-varying electric field (again in a medium of
constant dielectric properties and no spatial dispersion), one has
\begin{equation}\label{eqn:DEnonlocal}
\bmD(\bmx,t) = \epsilon_0 \bmE(\bmx,t) + \epsilon_0 \! \int_{-\infty}^t \!
\mathlarger{\bfalpha}(\bmx,t-t') \bmE(\bmx,t') \, \text{d} t' ,
\end{equation}
as written in \cite{gu:yin:shiyanovskii:lavrentovich:07}. The tensor
field ${\mathlarger\mathlarger{\bfalpha}}$ comes from the inverse Fourier
transform of the dispersion relation, that is,
\begin{equation*}
\mathlarger{\bfeps}(\bmx,\omega) = \epsilon_0 \Bigl[ \mathbf{I} + \! \int_0^{\infty} \!
e^{i\omega t} \mathlarger{\bfalpha}(\bmx,t) \, \td{t} \Bigr] .
\end{equation*}
An illustrative example (for a homogeneous, isotropic medium) is given
in \cite[\S7.10]{jackson:75}, where a ``one-resonance dispersion
model'' is employed (a complex rational function of $\omega$). The
poles of the dispersion model are in the lower half of the complex
$\omega$ plane, which gives rise to a one-sided inverse Fourier
transform and an appropriately causal relationship of the form
\eqref{eqn:DEnonlocal}. From a physical point of view, at high
frequencies, changes in the induced polarization can lag changes in
the electric field. An electric pulse resolves into Fourier modes of
arbitrarily high frequency, and because of this, these
nonlocal-in-time effects (``dielectric relaxation'') can become
important. In \cite{gu:yin:shiyanovskii:lavrentovich:07} and
\cite{shiyanovskii:lavrentovich:10}, the term ``Dielectric Memory
Effects'' is used to describe them.
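To make the structure of \eqref{eqn:DEnonlocal} concrete, the scalar sketch below implements the causal convolution directly, with a damped single-resonance kernel of the kind that arises in such one-resonance models; the kernel and all parameter values are illustrative assumptions, not fitted to any material:
\begin{verbatim}
import numpy as np

# Scalar sketch of the causal convolution in eqn (DEnonlocal); the
# damped single-resonance kernel and all parameters are illustrative.
eps0 = 8.854e-12
w0, gam, wp = 2.0e9, 2.0e8, 1.5e9    # resonance, damping, strength (rad/s)
nu0 = np.sqrt(w0**2 - 0.25 * gam**2)

dt = 1.0e-11
t = np.arange(0.0, 5.0e-8, dt)
alpha = wp**2 * np.exp(-0.5 * gam * t) * np.sin(nu0 * t) / nu0

E = np.where((t > 1.0e-8) & (t < 2.0e-8), 1.0, 0.0)  # short square pulse
# D(t) = eps0 * [ E(t) + int_0^t alpha(t - t') E(t') dt' ]
D = eps0 * (E + dt * np.convolve(alpha, E)[: len(t)])
\end{verbatim}
After the pulse ends, $D$ keeps ringing at the resonance frequency as it decays, which is precisely the lag of the induced polarization behind the field described above.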
The dynamics of liquid-crystal systems induced by time-varying
electric fields is even more complicated, since in that case, the
dielectric properties of the medium are changing in time as well, and
the convolution representation \eqref{eqn:DEnonlocal} is no longer
valid. A research program described in the review article
\cite{shiyanovskii:lavrentovich:10} (which contains references to
earlier works) addressed this issue and developed a generalization of
\eqref{eqn:DEnonlocal} to the case of a time-varying director field
$\nhat=\nhat(\bmx,t)$. The theory was used to model successfully the
experiments with fast-switching (pulse-driven) dynamics of
\cite{takanashi:maclennan:clark:98} and others.
These issues are beyond the scope of our investigation here. Our
assessment is that the local relationship
$\bmD={\mathlarger\mathlarger{\bfeps}}\bmE$ is valid in static equilibrium and
more generally if $\bmE$ is time harmonic and the dielectric
properties of the medium are constant. The relationship
$\bmD={\mathlarger\mathlarger{\bfeps}}\bmE$ is a good approximation if $\bmE$
contains only low-frequency content and the director dynamics are
relatively slow. For our purposes here, we accept and adopt the local
relationship $\bmD={\mathlarger\mathlarger{\bfeps}}\bmE$ and acknowledge
its limitations and shortcomings in some settings.
\subsection{Time scales}\label{sec:TimeScales}
The dynamic processes associated with a system such as the one under
consideration here exhibit several different time scales, including
those for director dynamics, circuit dynamics, and the evolution of
the electric field in the cell. The time scales for the changes in
the circuit and for the dynamics of the director field can be gleaned
from rescalings of \eqref{eqn:dsigmadt} and \eqref{eqn:dthetadt}.
First, it is convenient to relate the integral expression in
\eqref{eqn:dsigmadt} to the capacitance \eqref{eqn:Cn} as follows:
\begin{equation*}
C[\theta] = \epsilon_0 A \biggl[
\int_0^d \!\! \frac{\td{z}}{\eperp\!\CC+\epara\SS} \biggr]^{-1} =
\frac{\epsilon_0 A}{d} \eps_\text{r}[\theta] ,
\end{equation*}
where $\eps_\text{r}[\theta]$ is the \emph{effective relative dielectric
constant} of the cell, given by
\begin{equation*}
\eps_\text{r}[\theta]^{-1} = \frac1d \int_0^d \!\! \frac{\td{z}}{\eperp\!\CC+\epara\SS} .
\end{equation*}
This quantity is dimensionless, $O(1)$, and satisfies
\begin{equation*}
\eps_{\scriptscriptstyle\perp} \le \eps_\text{r}[\theta] \le \eps_{\scriptscriptstyle\parallel} ,
\end{equation*}
since $\eps_{\scriptscriptstyle\perp} < \eps_{\scriptscriptstyle\parallel}$ for our system. In terms of it,
\eqref{eqn:dsigmadt} can be written
\begin{subequations}\label{eqn:partially_scaled}
\begin{equation}
\tau_\sigma \frac{\text{d}\bar{\sigma}}{\td{t}} + \eps_\text{r}[\theta]^{-1} \bar{\sigma} = 1 , \quad
\tau_\sigma := R \frac{\epsilon_0 A}{d} , \quad
\bar{\sigma} := \frac{\sigma}{\epsilon_0 V / d} .
\end{equation}
Note that $\epsilon_0 A / d$ would be the capacitance of the cell and
$\epsilon_0 V/d$ would be the surface charge density on the upper
electrode if there were a vacuum between the electrodes. Note also
that in steady state, we have $\text{d}\bar{\sigma} / \td{t} = 0$ and
$\bar{\sigma} = \eps_\text{r}[\theta]$.
For a fairly typical experimental setup, the RC time constant $\tau_\sigma$ is
generally in the sub-microsecond range. For example, with
$R=100\,\Omega$, $\epsilon_0=8.854\times10^{-12}\,\text{F/m}$,
$A=10\,\text{cm}^2$, and $d=5\,\mu\text{m}$, we obtain
$\tau_\sigma \doteq 1.77\times10^{-7}\,\text{s}$. In
\cite{jang:clark:01}, $\tau_\sigma$ is estimated to be $2\,\mu\text{s}$
and is a limiting factor in the experiment presented there. In the
fast-switching experiments discussed in
\cite{gu:yin:shiyanovskii:lavrentovich:07} and
\cite{takanashi:maclennan:clark:98}, measures are taken to minimize
$\tau_\sigma$ (including the use of gold connectors to minimize resistance
and cells of small area to minimize capacitance), and values for
$\tau_\sigma$ in the nanosecond range are reported.
If one scales $z$ by the cell gap ($\bar{z}=z/d$), then
\eqref{eqn:dthetadt} can be put in the form
\begin{multline}
\frac{\partial\theta}{\partial t} = \frac1{\,\tau\!\raisebox{-.6ex}{\tiny{$K$}}} \biggl\{
\frac{\partial}{\partial\bar{z}} \Bigl[
(\bar{K}_1\CC+\bar{K}_3\SS) \frac{\partial\theta}{\partial\bar{z}} \Bigr] -
\sin\theta \cos\theta (\bar{K}_3-\bar{K}_1)
\Bigl( \frac{\partial\theta}{\partial\bar{z}} \Bigr)^{\!2} \biggr\} \\
{} + \frac1{\,\tau\!\raisebox{-.6ex}{\tiny{$V$}}} \bar{\sigma}^2
\frac{\sin\theta \cos\theta}{(\eperp\!\CC+\epara\SS)^2} ,
\end{multline}
where
\begin{equation}
\tau\!\raisebox{-.6ex}{\tiny{$K$}} := \frac{\gamma_1d^2}{K} , \quad
\tau\!\raisebox{-.6ex}{\tiny{$V$}} := \frac{\gamma_1d^2}{\epsilon_0\eps_\text{a} V^2} , \quad
\bar{K}_1 := \frac{\,K_1}{K} , \quad \bar{K}_3 := \frac{\,K_3}{K} ,
\end{equation}
\end{subequations}
with $K$ a representative value for $K_1$ and $K_3$. This exposes two
time scales associated with director reorientation: $\tau\!\raisebox{-.6ex}{\tiny{$K$}}$ and
$\tau\!\raisebox{-.6ex}{\tiny{$V$}}$. These correspond to ``switch off'' and ``switch on'' times,
which are usually written
\begin{equation*}
\tau_{\text{off}} = \frac{\gamma_1d^2}{K\pi^2} , \quad
\tau_{\text{on}} = \frac{\gamma_1}{\epsilon_0\eps_\text{a} E^2} .
\end{equation*}
See \cite[\S5.9]{stewart:04}.
The switch-off time $\tau_{\text{off}}$ is the time scale for the slowest
decaying mode of $\gamma_1 \theta_t = K \theta_{zz}$, $\theta(0) =
\theta(d) = 0$. It gives the time it takes for a distorted director
field to relax back to its ground state under the influence of only
distortional elastic forces. This is a relatively slow process,
usually estimated to be in the range of 10s of milliseconds for
typical cells and materials (e.g., with $\gamma_1=0.1\,\text{Pa\,s}$,
$d=5\,\mu\text{m}$, and $K=10\,\text{pN}$, we obtain $\tau_{\text{off}} \doteq
2.53\times10^{-2}\,\text{s}$).
The switch-on time $\tau_{\text{on}}=\tau\!\raisebox{-.6ex}{\tiny{$V$}}$ (with $E=V/d$) corresponds to the
time it takes to align the director field with an applied electric
field. It can be made quite small by using large voltages. For
example, with $\gamma_1=0.1\,\text{Pa\,s}$, $d=5\,\mu\text{m}$,
$\epsilon_0=8.854\times10^{-12}\,\text{F/m}$, $\eps_\text{a}=10$, and
$V=100\,\text{volts}$, we have
$\tau\!\raisebox{-.6ex}{\tiny{$V$}} \doteq 2.82 \times 10^{-6}\,\text{s}$. For small voltages near
the Fr\'{e}edericksz\ threshold, on the other hand (e.g., with
$V=1\,\text{volt}$), we have
$\tau\!\raisebox{-.6ex}{\tiny{$V$}} \doteq 2.82 \times 10^{-2}\,\text{s}$. In the fast-switching
experiments in \cite{geis:lyszczarz:osgood:kimball:10,
gu:yin:shiyanovskii:lavrentovich:07,takanashi:maclennan:clark:98},
using cells of narrow gaps and electrical pulses of 100s of volts,
$\tau\!\raisebox{-.6ex}{\tiny{$V$}}$\!'s of the order of 10s of nanoseconds are reported.
By comparison, the evolution of the electric field in the cell is
governed by the time-dependent Maxwell equations, and the time scale
associated with this type of wave equation (which we denote $\tau\!\raisebox{-.6ex}{\tiny{$E$}}$)
corresponds to the width of the cell gap divided by the speed of light
in the medium. For a cell gap $d=10\,\mu\text{m}$ with relative
dielectric permittivity $\eps_\text{r}=10$ and relative magnetic permeability
$\mu_\text{r}=1$, we obtain $\tau\!\raisebox{-.6ex}{\tiny{$E$}} \doteq 1.05 \times 10^{-13}\,\text{s}$,
which is four orders of magnitude faster than the smallest
$\tau_\sigma$\!'s and $\tau\!\raisebox{-.6ex}{\tiny{$V$}}$\!'s we have found in published results on
experiments on fast switching of liquid crystal cells. This justifies
treating the electric field as adjusting instantaneously to any
changes in the circuit and cell.
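The numerical estimates quoted in this subsection can be reproduced with a few lines (same parameter values as in the text):
\begin{verbatim}
import numpy as np

# Reproduces the time-scale estimates quoted above (values from the text).
eps0, c = 8.854e-12, 2.998e8

R, A, d = 100.0, 10.0e-4, 5.0e-6              # 100 ohm, 10 cm^2, 5 um
tau_sigma = R * eps0 * A / d                  # ~ 1.77e-7 s

gamma1, K = 0.1, 10.0e-12                     # 0.1 Pa s, 10 pN
tau_off = gamma1 * d**2 / (K * np.pi**2)      # ~ 2.53e-2 s

eps_a = 10.0
tau_V_100 = gamma1 * d**2 / (eps0 * eps_a * 100.0**2)  # ~ 2.82e-6 s
tau_V_1   = gamma1 * d**2 / (eps0 * eps_a * 1.0**2)    # ~ 2.82e-2 s

eps_r, mu_r, d_E = 10.0, 1.0, 10.0e-6
tau_E = d_E * np.sqrt(eps_r * mu_r) / c       # ~ 1.05e-13 s
\end{verbatim}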
Thus, for the kinds of experimental systems that we have in mind, we
consider time scales in the following ranges:
\begin{equation*}
10^{-9}\,\text{s} \lesssim \tau_\sigma \lesssim 10^{-6}\,\text{s} , \quad
\tau\!\raisebox{-.6ex}{\tiny{$K$}} \approx 10^{-2}\,\text{s} , \quad
10^{-8}\,\text{s} \lesssim \tau\!\raisebox{-.6ex}{\tiny{$V$}} \lesssim 10^{-2}\,\text{s} , \quad
\tau\!\raisebox{-.6ex}{\tiny{$E$}} \approx 10^{-13}\,\text{s} .
\end{equation*}
In the numerical examples discussed in the next section, we have taken
$\tau_\sigma=10^{-6}\,\text{s}$, $\tau\!\raisebox{-.6ex}{\tiny{$K$}}=10^{-2}\,\text{s}$, and
$\tau\!\raisebox{-.6ex}{\tiny{$V$}}=10^{-7}\,\text{s}$, $10^{-6}\,\text{s}$, and
$10^{-5}\,\text{s}$, for purposes of illustration, with the electric
field taken to adjust instantaneously ($\tau\!\raisebox{-.6ex}{\tiny{$E$}}=0$, in essence, as we
have assumed throughout).
\section{Numerical illustration}\label{sec:numerics}
We illustrate the coupled dynamics of
\eqref{eqn:coupled-dynamical-system} by the numerical modeling of a
simple ``switch on'' experiment. For simplicity, we assume equal
elastic constants: $K_1=K_3=K$. Starting from the partially scaled
system \eqref{eqn:partially_scaled}, we express times in units of
$\tau_\sigma$ to obtain
\begin{subequations}\label{eqn:fully_scaled}
\begin{gather}
\label{eqn:fully_scaled_a}
\frac{\text{d}\bar{\sigma}}{\td{\tbar}} + \eps_\text{r}[\theta]^{-1} \bar{\sigma} = 1 , \quad
\eps_\text{r}[\theta]^{-1} = \int_0^1 \!\! \frac{\td{\zbar}}{\eperp\!\CC+\epara\SS} , \\
\label{eqn:fully_scaled_b}
\frac{\partial\theta}{\partial\bar{t}} =
\frac1{\,\bar{\tau}\!\raisebox{-.6ex}{\tiny{$K$}}} \frac{\partial^2\theta}{\partial\bar{z}^2} +
\frac1{\,\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}} \bar{\sigma}^2
\frac{\sin\theta \cos\theta}{(\eperp\!\CC+\epara\SS)^2} , \quad 0 < \bar{z} < 1 ,
\end{gather}
where
\begin{equation*}
\bar{t} := \frac{t}{\,\tau_\sigma} , \quad
\bar{\tau}\!\raisebox{-.6ex}{\tiny{$K$}} := \frac{\tau\!\raisebox{-.6ex}{\tiny{$K$}}}{\,\tau_\sigma} , \quad
\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}} := \frac{\tau\!\raisebox{-.6ex}{\tiny{$V$}}}{\,\tau_\sigma} .
\end{equation*}
We add a small pretilt to the boundary conditions on $\theta$, in
order to bias the director to rotate counterclockwise when the
electric field is switched on, and we take the initial state of the
director field to be parallel to this. There is assumed to be no
excess charge on the electrodes when the circuit is closed at time
$\bar{t}=0$. The boundary and initial conditions are thus given by
\begin{equation}
\theta(0,\bar{t}) = \theta(1,\bar{t}) = 0.1 , \quad
\theta(\bar{z},0) = 0.1 , \quad
\bar{\sigma}(0) = 0 .
\end{equation}
\end{subequations}
For the relative dielectric permittivities, we use $\eps_{\scriptscriptstyle\perp}=5$ and
$\eps_{\scriptscriptstyle\parallel}=15$, which are comparable to the values for the typical liquid
crystal 5CB---see \cite[Appendix D, Table\,D.3]{stewart:04}. Based
upon the discussion of time scales in the previous section, we choose
the following values for our numerical experiment:
\begin{equation*}
\tau\!\raisebox{-.6ex}{\tiny{$K$}} = 10^{-2}\,\text{s} , \quad
\tau_\sigma = 10^{-6}\,\text{s} , \quad
\tau\!\raisebox{-.6ex}{\tiny{$V$}} = 10^{-7}\,\text{s} , ~ 10^{-6}\,\text{s} , ~ 10^{-5}\,\text{s} ,
\end{equation*}
giving
\begin{equation*}
\bar{\tau}\!\raisebox{-.6ex}{\tiny{$K$}} = 10^4 , \quad
\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}} = 10^{-1} , ~ 10^0 , ~ 10^1 .
\end{equation*}
The three values of $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}$ cover the situations when the director
switching dynamics are faster than, equal to, and slower than the
circuit dynamics.
Standard finite differences were employed to discretize our model
\eqref{eqn:fully_scaled}: explicit Euler for
\eqref{eqn:fully_scaled_a}, Forward Time Centered Space (FTCS) for
\eqref{eqn:fully_scaled_b}. The same time step was used for both
equations ($\Delta\bar{t}=0.1$, 10 time steps per unit $\tau_\sigma$), with
a spatial grid of 128 uniform cells in $\bar{z}$. A Composite Trapezoid
Rule (on the same spatial grid) was used to approximate the integral
in the functional $\eps_\text{r}[\theta]$ in \eqref{eqn:fully_scaled_a}. For
each of the values $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=10^{-1}$, $10^0$, and $10^1$, $\bar{\sigma}$
is plotted against $\bar{t}$, along with ``time snapshots'' of $\theta$
versus $\bar{z}$ for every 16th time step. In all three cases, the same
number of time steps per snapshot (16) and the same number of total
time steps (1024) were used, in order to facilitate comparison. The
results are presented in figure\,\ref{fig:sigma_t_theta_z}.
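For concreteness, a minimal reimplementation of this scheme is sketched below (grid, step sizes, and parameters follow the text, but this is an illustrative sketch, not the code used to produce the figures). The middle case $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=10^0$ is shown; substituting $10^{-1}$ or $10^1$ covers the other two cases.
\begin{verbatim}
import numpy as np

# Explicit Euler for the charge ODE, FTCS for the director PDE, and the
# Composite Trapezoid Rule for eps_r, as described in the text.
eps_perp, eps_par = 5.0, 15.0
tauK_bar, tauV_bar = 1.0e4, 1.0e0      # middle case: tau_V = tau_sigma
Nz, dt, nsteps = 128, 0.1, 1024        # 128 cells, dt = 0.1, 1024 steps

z = np.linspace(0.0, 1.0, Nz + 1)
dz = z[1] - z[0]
theta = np.full(Nz + 1, 0.1)           # pretilt initial/boundary value
sigma = 0.0                            # no excess charge at switch-on

snapshots = []
for n in range(nsteps):
    eps = eps_perp * np.cos(theta)**2 + eps_par * np.sin(theta)**2
    inv_eps_r = np.trapz(1.0 / eps, z)          # trapezoid rule
    sigma += dt * (1.0 - inv_eps_r * sigma)     # explicit Euler (charge)
    lap = np.zeros_like(theta)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dz**2
    torque = sigma**2 * np.sin(theta) * np.cos(theta) / eps**2
    theta[1:-1] += dt * (lap[1:-1] / tauK_bar + torque[1:-1] / tauV_bar)
    theta[0] = theta[-1] = 0.1                  # Dirichlet pretilt values
    if (n + 1) % 16 == 0:                       # one snapshot per 16 steps
        snapshots.append((sigma, theta.copy()))
\end{verbatim}
Note that the FTCS step is stable here: $\Delta\bar{t}/(\bar{\tau}\!\raisebox{-.6ex}{\tiny{$K$}}\,\Delta\bar{z}^2)\approx0.16<\tfrac12$.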
\begin{figure}
\centering
\subfloat[]{\label{fig:tauV_pointone_a}
\includegraphics[width=.49\linewidth]{figure2a}}
\subfloat[]{\label{fig:tauV_pointone_b}
\includegraphics[width=.49\linewidth]{figure2b}} \\
\subfloat[]{\label{fig:tauV_one_a}
\includegraphics[width=.49\linewidth]{figure2c}}
\subfloat[]{\label{fig:tauV_one_b}
\includegraphics[width=.49\linewidth]{figure2d}} \\
\subfloat[]{\label{fig:tauV_ten_a}
\includegraphics[width=.49\linewidth]{figure2e}}
\subfloat[]{\label{fig:tauV_ten_b}
\includegraphics[width=.49\linewidth]{figure2f}}
\caption{Coupled dynamics \eqref{eqn:fully_scaled}.
Figures\,\ref{fig:tauV_pointone_a}, \ref{fig:tauV_one_a},
\ref{fig:tauV_ten_a}: electrode charge density versus
time ($\bar{\sigma}=\sigma/(\epsilon_0 V/d)$, $\bar{t}=t/\tau_\sigma$).
Figures\,\ref{fig:tauV_pointone_b}, \ref{fig:tauV_one_b},
\ref{fig:tauV_ten_b}: time snapshots of director tilt
angle $\theta$ versus position ($\bar{z}=z/d$, 16 time steps per
snapshot, time step $\Delta\bar{t}=0.1$). Parameters: $\eps_{\scriptscriptstyle\perp}=5$,
$\eps_{\scriptscriptstyle\parallel}=15$, $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$K$}}=\tau\!\raisebox{-.6ex}{\tiny{$K$}}/\tau_\sigma=10^4$,
$\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=\tau\!\raisebox{-.6ex}{\tiny{$V$}}/\tau_\sigma=10^{-1}$
(figures\,\ref{fig:tauV_pointone_a}, \ref{fig:tauV_pointone_b}),
$10^0$ (figures\,\ref{fig:tauV_one_a}, \ref{fig:tauV_one_b}),
$10^1$ (figures\,\ref{fig:tauV_ten_a}, \ref{fig:tauV_ten_b}).}
\label{fig:sigma_t_theta_z}
\end{figure}
The results are as one would anticipate. The case $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=10^{-1}$
corresponds to a high voltage and fast switching: the director quickly
saturates (after 5--6 snapshots), and the charge density behaves
accordingly ($\bar{\sigma} \approx \eps_{\scriptscriptstyle\parallel} ( 1 - e^{-\bar{t}/\eps_{\scriptscriptstyle\parallel}} )$). We note
that in the steady state limit $\bar{t} \rightarrow \infty$, we have
$\bar{\sigma} \rightarrow \eps_\text{r}[\theta_{\infty}] \approx \eps_{\scriptscriptstyle\parallel}$, since
$\theta_{\infty} \approx \pi/2$ (except for boundary layers near
$\bar{z}=0$ and $\bar{z}=1$). The case $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=10^0$ corresponds to a
moderate voltage and switching ($\tau\!\raisebox{-.6ex}{\tiny{$V$}}=\tau_\sigma$): $\theta$ saturates
after 15--16 snapshots, and one starts to see an inflection in
$\bar{\sigma}$. The last case, $\bar{\tau}\!\raisebox{-.6ex}{\tiny{$V$}}=10^1$, is associated with a low
voltage and slow switching: it takes 50--60 snapshots for the director
to align with the electric field, and one sees a pronounced inflection
in $\bar{\sigma}$ versus $\bar{t}$. This is because the time scale for the
$\bar{\sigma}$ dynamics is approximately $R \epsilon_0 \eps_{\scriptscriptstyle\perp} A / d$ in the
early stages (when $\theta \approx 0$ and
$\eps_\text{r}[\theta] \approx \eps_{\scriptscriptstyle\perp}$), but the time scale is approximately
$R \epsilon_0 \eps_{\scriptscriptstyle\parallel} A / d$ in the later stages (when
$\theta \approx \pi/2$ and $\eps_\text{r}[\theta] \approx \eps_{\scriptscriptstyle\parallel}$). Since
$\eps_{\scriptscriptstyle\perp}=5$ and $\eps_{\scriptscriptstyle\parallel}=15$, these time scales differ by a factor of
three, and this leads to the change in the rate of approach of
$\bar{\sigma}$ to its limiting value. We emphasize that this is merely an
illustration of the coupled dynamics that emerge from our simple
model; in order to model carefully the dynamics in an actual
fast-switching experiment, for example, one would need to take into
account other influences, such as ``dielectric relaxation''
\cite{gu:yin:shiyanovskii:lavrentovich:07,shiyanovskii:lavrentovich:10}.
\section{Conclusions}\label{sec:conclusions}
We have modeled a nematic-liquid-crystal cell subject to an electric
field created by electrodes held at constant potential as a variable
capacitor in an RC circuit. The general model couples the state of
the liquid-crystal director field $\nhat$ in the cell with the state
of the electric circuit, characterized in terms of either the total
charge on the upper electrode, $Q$, or the potential difference
between the electrodes, $\Delta U$. A dynamical system was derived for an
example in the splay-Fr\'{e}edericksz\ geometry, subject to several simplifying
assumptions: no fluid flow in the cell, an electric field that adjusts
instantaneously to changes in $\nhat$, fields in the cell that are
functions of one space variable only, and a single rotational
viscosity for energy dissipation associated with
$\partial\nhat/\partial t$. The dynamical system, given in
\eqref{eqn:coupled-dynamical-system}, involves a PDE for director
dynamics ($\nhat=\nhat(z,t)$) coupled to an ODE for charge dynamics
($\sigma=\sigma(t)$, where $\sigma$ is the density of free charge on
the surface of the upper electrode). We have produced estimates for
the time scales of the various dynamic processes and have provided
numerical examples illustrating the coupled dynamics for three
different cases relating the time scale for director dynamics to that
of the circuit dynamics. We have made an effort to show consistency
with established results, where possible.
The original motivation for this exercise was to understand better the
dynamical characterization of local stability of equilibrium states of
such systems, which can be characterized as stationary points of a
free-energy functional of the form \eqref{eqn:FnU}:
\begin{equation*}
\mathcal{F}[\nhat,U] = \int_\Omega \Bigl[ W_\text{e}(\nhat,\nabla\nhat) -
\frac12 \mathlarger{\bfeps}(\nhat) \nabla U \cdot \nabla U \Bigr] \, \td{V} .
\end{equation*}
Here ${\mathlarger\mathlarger{\bfeps}}$ is the (positive definite) dielectric
tensor and $U$ is the electric potential (related to the electric
field $\bmE$ via $\bmE=-\nabla U$). The minimax nature of the
critical points of $\mathcal{F}$ is at odds with the expected picture of an
out-of-equilibrium pair $(\nhat,U)$ relaxing to a locally stable state
by minimizing free energy. This confusion, as we have seen from our
analysis, stems from the fact that the free-energy functional above
presumes either a static electric field or an electric field that
adjusts instantaneously to changes in $\nhat$.
The more primitive expression for the potential energy of the system
is $\mathcal{G}$ as in \eqref{eqn:calG}:
\begin{equation*}
\mathcal{G} = \calF_\text{e} + \calE_\text{cap} + \calE_\text{bat} = \calF_\text{e} + \frac12 Q \Delta U - Q V .
\end{equation*}
Here $\calF_\text{e}$ is the distortional elasticity, as in \eqref{eqn:Fe} and
\eqref{eqn:We}, and $V$ is the voltage of the battery. The
combination $\calF_\text{e}+\calE_\text{cap}$ gives the potential energy of the cell
(associated with the work done in distorting the director field plus
work done in moving charge on/off the electrodes); while $\calE_\text{bat}$ is
the potential energy associated with the battery. The total potential
energy for the system can be expressed in different forms depending on
the choice of state variable for the circuit, as in \eqref{eqn:GnDU}
and \eqref{eqn:GnQ}:
\begin{equation*}
\mathcal{G}[\nhat,\Delta U] = \calF_\text{e}[\nhat] +
C[\nhat] \Bigl[ \frac12 (\Delta U)^2 - V \Delta U \Bigr] , \quad
\mathcal{G}[\nhat,Q] = \calF_\text{e}[\nhat] + \frac12 C[\nhat]^{-1} Q^2 - V Q .
\end{equation*}
In either case, equilibria are locally minimizing with respect to the
pair $(\nhat,\Delta U)$ or $(\nhat,Q)$.
In order to transform $\mathcal{G}$ into the form $\mathcal{F}$ above, one must
assume (1)~either a static electric field or an electric field that
adjusts instantaneously to any changes in $\nhat$ (so that
$\curl\bmE=\bfzero$ and $\bmE=-\nabla U$) and (2)~either equilibrium
conditions in the circuit (no current) or a circuit that adjusts
instantaneously to any changes in the capacitance of the cell (so that
$\Delta U=V$ and $Q=CV$ at all times). In
\S\ref{sec:system-potential-energy}, we have described how $\mathcal{G}$
above can be transformed to $\mathcal{F}$ if one makes these assumptions
(and also employs the constitutive assumption
$\bmD={\mathlarger\mathlarger{\bfeps}}(\nhat)\bmE$), the main points being that
$\bmE=-\nabla U$ and $\div\bmD=0$ imply
\begin{equation*}
Q \Delta U = \int_\Omega ( \bmD \cdot \bmE ) \, \td{V} ,
\end{equation*}
$\Delta U=V$ gives
\begin{equation*}
\calE_\text{cap} + \calE_\text{bat} = \frac12 Q V - Q V = - \frac12 Q V =
- \frac12 \int_\Omega ( \bmD \cdot \bmE ) \, \td{V} ,
\end{equation*}
and $\bmD={\mathlarger\mathlarger{\bfeps}}(\nhat)\bmE$ and $\bmE=-\nabla U$
give
\begin{equation*}
- \frac12 \int_\Omega ( \bmD\cdot\bmE ) \, \td{V} =
- \frac12 \int_\Omega \bigl[
\mathlarger{\bfeps}(\nhat) \nabla U \cdot \nabla U \bigr] \, \td{V} .
\end{equation*}
Thus $\mathcal{F}$ is not the appropriate free energy for modeling dynamics
that include the coupled evolution of the electric field in the cell
or the charge dynamics of the electric circuit. The potential energy
$\mathcal{G}$, on the other hand, is valid in the absence of any of these
equilibrium or instantaneous-adjustment assumptions.
The time scales for the various dynamic processes in our system vary
widely and depend on details of any specific experiment being modeled.
At one end of the spectrum is the switch-off time ($\tau\!\raisebox{-.6ex}{\tiny{$K$}}$ in our
notation, the slowest time scale, of the order of $10^{-2}$\,s). At
the other end is the time scale for the evolution of the electric
field in the cell, $\tau\!\raisebox{-.6ex}{\tiny{$E$}}$, governed by the time-dependent Maxwell
equations, which is of the order of $10^{-13}$\,s for the kinds of
systems of interest to us. In between these extremes are the
switch-on time, $\tau\!\raisebox{-.6ex}{\tiny{$V$}}$, and the time scale for the dynamics of the
electric circuit, $\tau_\sigma$. The switch-on time $\tau\!\raisebox{-.6ex}{\tiny{$V$}}$ is
proportional to $1/V^2$ (where $V$ is the applied voltage) and can
vary from values comparable to $\tau\!\raisebox{-.6ex}{\tiny{$K$}}$ to values of the order of
$10^{-8}$\,s. There is some overlap between values that $\tau\!\raisebox{-.6ex}{\tiny{$V$}}$ can
take and those that $\tau_\sigma$ can take, and our modeling has been
concerned with such situations. The evolution of the electric field
in the cell is several orders of magnitude faster than any of these,
and its response has been treated as instantaneous, as is always
assumed in modeling such systems.
For the example problem that we have analyzed in detail (a
splay-Fr\'{e}edericksz\ cell), the final form of the dynamical system
\eqref{eqn:coupled-dynamical-system} is quite clean, with everything
given in simple explicit analytical expressions. This is, of course,
a consequence of our modeling assumptions. The assumption that fields
in the cell depend on only one space variable buys one a lot. It
gives $D_z=\text{const}$ and $\sigma=\text{const}$ (in equilibrium) or
$\sigma=\sigma(t)$ (in dynamics), and it enables us to write an
explicit analytical expression for the capacitance of the cell,
$C[\nhat]$ in \eqref{eqn:Cn}, and for the electric field in the cell,
as in \eqref{eqn:EUzDzsigma} and \eqref{eqn:Uz}. If one abandons this
assumption and allows fields in the cell to depend on more than one
space variable, then all these consequences are lost. The assumption
of fields depending on one space dimension is very common (and
appropriate) in modeling thin-film liquid-crystal systems.
The interplay between electric fields and liquid crystals has been of
interest and importance ever since the discovery of the electro-optic
effect in the 1970s---for an early review, see \cite{goodman:75} and
references therein. Bringing the electric circuit into the picture
(as we have done here) has illuminated the role of the battery in
producing the term
$-\frac12{\mathlarger\mathlarger{\bfeps}}(\nhat)\nabla U \cdot \nabla U$ in the
free energy \eqref{eqn:FnU}, and it has highlighted what assumptions
must be made to express the free energy in that form. If one merely
wanted to obtain a coupled dynamical system such as
\eqref{eqn:coupled-dynamical-system}, then one could have proceeded
more directly, starting with appropriate equations for director
dynamics and circuit dynamics
\begin{equation*}
\gamma_1 \frac{\partial\nhat}{\partial t} =
\div \Bigl( \frac{\partial W_\text{e}}{\partial\nabla\nhat} \Bigr) -
\frac{\partial W_\text{e}}{\partial\nhat} + \lambda \nhat +
\epsilon_0 \eps_\text{a} (\bmE\cdot\nhat) \bmE , \quad
R \frac{\td{Q}}{\td{t}} + \frac1C Q = V
\end{equation*}
and coupling them via appropriate expressions for the electric field
(which depends on $Q=\sigma A$, as in \eqref{eqn:EUzDzsigma} and
\eqref{eqn:Uz}) and the capacitance (which depends on $\nhat$, as in
\eqref{eqn:Cn}).
Our more elaborated approach was motivated by a desire to see the
``full picture'' in terms of the potential energies, where they come
from, and the assumptions needed for various simplifications and
reductions.
\section*{Acknowledgments}
The impetus for this analysis came from exchanges with John Ball and
Nigel Mottram, and we are grateful to both of them for their thoughts,
suggestions, and feedback on versions of this note. We are also
grateful to Lev Truskinovsky for discussions much earlier that planted
the seeds for some of these ideas, and for the reference
\cite{fosdick:truskinovsky:03}, which illuminates some related
concepts.
\section{INTRODUCTION}
\label{sec:intro}
\IEEEPARstart{I}{n} real--world traffic situations, it is not uncommon that heterogeneous road users, such as vehicles and vulnerable road users (VRUs, e.\,g.,~pedestrians and cyclists), have to interact directly with each other at particular locations.
Especially in city traffic, such locations include the turning areas of so-called Turn-on-Red (TOR) intersections~\cite{mcgee1976right} or, more generally, intersections that allow vehicles to turn while other road users are crossing.
During the time window of a vehicle's turn, its behavior is largely guided by social protocols, e.\,g.,~right-of-way or courtesy.
For example, in Germany, as shown in Fig.~\ref{fig:intersection}, a turning vehicle at a permissive right--turn intersection often encounters cyclists that are passing by and pedestrians that are crossing in the conflict areas. In Japan, a similar situation can be found at a permissive left--turn intersection~\cite{alhajyaseen2012estimation} in left--hand traffic.
\begin{figure}[t!]
\centerline{\includegraphics[trim=0in 0.3in 0in 0.3in, clip=true, width=3.5in]{figs/int_germany.pdf}}
\caption{A right-turn intersection in Germany. A dedicated lane for cyclists is typically parallel to the crossing zone.}
\label{fig:intersection}
\end{figure}
Efficiently and accurately learning how vehicles and VRUs interact with each other at such intersections is important for many applications.
As statistics show, accidents often occur at places where vehicles and VRUs confront each other, and there have been reports of VRUs being seriously injured because car drivers overlooked them at turning intersections~\cite{choi2010crash,habibovic2011requirements,shirazi2016looking}.
Thus, one important application is the analysis of interactions and critical situations.
In addition, the foreseeable advent of autonomous driving in urban areas~\cite{franke1998autonomous}, particularly in such locations, requires accurate recognition of road users' behavior. Another potential application would be an accident warning system for road users.
Nowadays, with the ubiquity of traffic data and the development of computer vision techniques, there is great potential for automatically recognizing road users' behavior from massive amounts of video data.
Hence, in this paper we aim to investigate an efficient approach for automatically analyzing whether the continuity of road users' behavior is interrupted when vehicles and VRUs meet at busy intersections. We formalize this task as interaction detection, based on the user type, location, and motion information automatically extracted from videos.
The concept of \textit{interaction} represents a changing level of reaction between road users. As defined in~\cite{saunier2008probabilistic}, an interaction is ``a situation in which two or more road users are close enough in space and time and their distance is decreasing''. Similarly, \cite{svensson2006estimating} describes an interaction between road users as ``a continuum of safety related events''.
Moreover, \cite{svensson2006estimating, sayed1999traffic} relate interaction to conflict~\cite{perkins1968traffic}, in the sense that interactions can range from collisions to negligible conflict risks. However, in everyday traffic, collisions or accidents fortunately account for only a very small fraction of events---the tip of the pyramid of interactions~\cite{svensson2006estimating}; see Fig.~\ref{fig:rw-interactionpyramid}. More frequent events are conflicts of different degrees of severity (serious, slight, and potential) and undisturbed passages. Therefore, in this paper, a high-level classification is adopted that divides events into non-interactions (undisturbed passages) and interactions (all other events).
The task of \textit{interaction detection} is to differentiate between interaction and non-interaction levels over the dynamics of a vehicle turning sequence. Interaction is needed if the turning vehicle drives into an intersection while any VRUs are approaching or moving in the intersection space (see Fig.~\ref{fig:intersection}). In order to avoid any conflicts that might arise at any time during the vehicle's turning, the road users adapt their movement, i.\,e.,~velocity and orientation, accordingly. Otherwise, no interaction is needed if the target vehicle drives in an undisturbed manner with VRUs in its neighborhood, if there are any.
\begin{figure}[t!]
\centering
\includegraphics[trim=0in 0.3in 0.3in 0.0in, clip=true, width=2.5in]{figs/interaction_pyramid.pdf}
\caption[The pyramid of interactions]{The pyramid of interactions. The figure is partially adapted from~\cite{svensson2006estimating}.}
\label{fig:rw-interactionpyramid}
\end{figure}
There are many challenges for automated interaction detection between vehicles and VRUs using video data.
Road users' behavior is dynamic and stochastic, as they have to adjust their motion according to each other's reactions. In addition, mixed types and varying numbers of road users, as well as direct confrontations, greatly complicate this task. The following open questions have to be addressed: (\RNum{1}) How to efficiently acquire, process, and label a large amount of video data for training a deep learning model for interaction detection considering all the relevant road users?
(\RNum{2}) How can a system automatically detect the location and motion of the involved road users?
(\RNum{3}) How to represent the dynamics of interactions in vehicle turning sequences of varying duration?
To tackle the above challenges, we propose a deep conditional generative model based on the Conditional Variational Auto-Encoder (CVAE)~\cite{sohn2015learning} for automated interaction detection. The model is conditioned on the information extracted from video data and performs probabilistic inference for interaction prediction. As opposed to a discriminative model~\cite{cheng2020automatic} that distinguishes interaction classes (interaction vs. non-interaction) from the observed information, a set of Gaussian latent variables is used to encode the dynamic and stochastic behavior patterns, which enables the generative model to perform diverse predictions at inference time. The contributions of this work are summarized as follows:
\begin{itemize}
\item[1)] Various activities among all road user types were recorded using a camera at a right--turn intersection in Germany and a left--turn intersection in Japan during very busy traffic flows. They were processed for interaction detection in both right-- and left--hand real--world traffic. In the future, the data will be released for further research.
\item[2)] We combine a deep learning object detector, which automatically detects all the relevant road users, with optical flow to extract their motion (a toy sketch of this fusion follows this list). The combination captures the dynamics of all the road users and circumvents the tremendous work of manually tracking trajectories.
\item[3)] Both sliding window and padding methods are explored to parse vehicle turning sequences of varying lengths.
\item[4)] We propose an end-to-end sequence-to-sequence conditional generative model with a self-attention mechanism~\cite{vaswani2017attention} for interaction detection, which simultaneously takes both the object and motion information sequences and generates probabilities of interaction at each short interval ({$<$\SI{0.1}{s}}). The probabilities change accordingly when the intensity of interaction changes between a turning vehicle and the involved VRUs over time.
\end{itemize}
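As a toy illustration of contribution 2), the sketch below fuses detections with a dense optical--flow field by averaging the flow vectors inside each detected bounding box to obtain a per-object motion estimate; the box format and the detector and flow sources are assumptions made for illustration:
\begin{verbatim}
import numpy as np

# Hedged sketch: average the dense optical flow inside each detected
# bounding box to get a per-object motion estimate.  Box format
# [x1, y1, x2, y2] and the data sources are illustrative assumptions.
def object_motion(boxes, flow):
    # boxes: (N, 4) array; flow: (H, W, 2) dense optical-flow field
    motions = []
    for x1, y1, x2, y2 in boxes.astype(int):    # assumes valid boxes
        patch = flow[y1:y2, x1:x2]              # flow inside the box
        motions.append(patch.reshape(-1, 2).mean(axis=0))
    return np.asarray(motions)                  # (N, 2) mean (dx, dy)
\end{verbatim}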
The remainder of the paper is organized as follows. Sec.~\ref{sec:rw-interactiondetection} reviews the related studies on road user behavior at intersections. The proposed methodology is introduced in Sec.~\ref{sec:interactiondetection}. The detailed information on the datasets and evaluation metrics is provided in Sec.~\ref{sec:InteDetcExperimentSettings}. The experimental results are presented and analyzed in Sec.~\ref{sec:InteDetcResults}, and the limitations of the model are further discussed in Sec.~\ref{sec:InteDetcDiscussion}. Finally, conclusions are drawn in Sec.~\ref{sec:conclusion} with potential directions for future work.
\section{Related Work}
\label{sec:rw-interactiondetection}
Early studies on road user behavior at intersections focused on collision and conflict analyses~\cite{allen1978analysis,compton1994safety,kaparias2010development,salamati2011development}. For example, \cite{allen1978analysis} manually observed and studied a total of 25 collision scenes captured on video at an intersection over a period of one year. \cite{compton1994safety} conducted a study of the safety impact of permitting turns on red lights (TOR) based on crash data. Examining actual collision and conflict scenes has many limitations. First, crash and accident events are very rare in daily traffic and vary from case to case, so they cannot represent the majority of road users' behavior~\cite{sayed2013automated}, as undisturbed passages are not included. Second, as pointed out by~\cite{ismail2009automated}, collision--based safety analysis is a reactive approach, which requires a significant number of collisions to be collected before an action is warranted.
Third, such data are likely to be incompletely documented or protected for legal and privacy reasons, which makes data acquisition complicated or even impossible. Most importantly, the above drawbacks make it almost impossible to automatically analyze the behavior of road users.
The development of computer vision techniques allows for automated analysis of road users' behavior at intersections. The work by Ismail et al.~\cite{ismail2009automated}, one of the early studies using computer vision techniques, automatically analyzed pedestrian--vehicle conflicts at an intersection using trajectories extracted from video data.
Later on, similarly, several works used trajectories extracted from videos to analyze before--and--after vehicle--pedestrian conflicts~\cite{ismail2010automated}, conflicts in street designs with elements of shared space~\cite{kaparias2013analysis}, conflicts in less organized traffic environments~\cite{tageldin2016developing}, and vehicle--bicycle conflicts at intersections~\cite{sayed2013automated}.
The work carried out by Ni et al.~\cite{ni2016evaluation} analyzed pedestrian--vehicle interaction patterns using the indicators Time-to-Collision (TTC)~\cite{hayward1972near} and Gap Time (GT)~\cite{ismail2009automated}.
First, trajectories were extracted with the semi-automated image processing tool Traffic Analyzer~\cite{suzuki2006trafficanalyzer}; then, according to the TTC and GT values derived from the trajectory speed profiles, interactions were classified into three classes: hard interaction, soft interaction, and no interaction. On the one hand, their work is very close to the studies carried out in this paper: both interactions and non-interactions are studied at permissive right--turn intersections. On the other hand, their work is not fully automated in terms of trajectory extraction. Acquiring reliable trajectory data is often costly and time--consuming, and the quality of the data is difficult to guarantee. For example, tracking multiple objects from frame to frame is very challenging due to, e.\,g.,~abrupt object motion, changes of appearance, and occlusions~\cite{yilmaz2006object}. Errors and failures in detection propagate to the tracking process, which later leads directly to wrong conclusions in the analysis step~\cite{sayed2013automated}.
Moreover, the above works only consider either vehicle--pedestrian or vehicle--cyclist interactions. In real--world traffic situations at big intersections, other heterogeneous road users are often involved at the same time.
In recent years, deep learning methods have been successfully applied to understand road users' behavior at intersections using video data, although many of them~\cite{rasouli2017they,ghori2018learning,hoy2018learning,rasouli2019autonomous} are conducted from the perspective of a self-driving car for pedestrian intent detection.
With respect to a third--person perspective, \cite{cheng2020automatic} trained an encoder--decoder model to automatically detect interactions using sequences of video frames from a static camera facing a very busy left--turn intersection in a Japanese city. However, that discriminative model is trained to optimize the reconstruction loss between pairs of ground truth and prediction, which tends to learn the ``average'' behavior of road users; the dynamic and stochastic behavior patterns are not fully captured. Hence, in this paper, we propose a CVAE--based model with Gaussian latent variables to encode various behavior patterns and perform diverse predictions. We test our model not only in left--hand traffic but also in right--hand traffic in different countries for interaction detection between vehicles and all the other VRUs.
\section{Methodology}
\label{sec:interactiondetection}
This section explains the methodology of interaction detection in detail.
Sec.~\ref{subsec:interactionproblemformulation} formulates the problem,
Sec.~\ref{subsec:featureextraction} describes the extraction of the input features,
Sec.~\ref{subsec:cvaeclassifier} introduces the detection model, and
Sec.~\ref{subsec:interactionuncertainty} provides the estimation of the model's uncertainty.
\begin{figure}[hbpt!]
\centering
\includegraphics[trim=0.6in 0.1in 0.7in 0.1in, clip=true, width=3.5in]{figs/sliding_padding.pdf}
\caption[Sequence-to-sequence modeling using sliding window or padding method]{Sequence-to-sequence modeling using sliding window or padding method.}
\label{fig:seqtoseq}
\end{figure}
\begin{figure*}[hbpt!]
\centering
\includegraphics[trim=0in 1in 0in 0in, width=\textwidth]{figs/pipeline.pdf}
\caption[The pipeline of interaction detection]{The pipeline for interaction detection.}
\label{fig:pipelineInteractionDetection}
\end{figure*}
\subsection{Problem formulation}
\label{subsec:interactionproblemformulation}
Interaction detection is formulated as a classification problem using the information extracted from videos. Given a set of observed vehicle turning sequences of video frames $\mathbf{X}=\{\boldsymbol{X}_1, \cdots,\,\boldsymbol{X}_i,\cdots\}$, the \textit{input} of the $i$-th sequence is characterized as $\boldsymbol{X}_i^{(T)} = \{X_i^t\}_{t=0}^{T-1}$, where $X_i^t \in \mathbb{R}^{W{\times}H{\times}C}$ is the frame at time step $t$, and $T$ is the total number of observed frames for the sequence. $W$, $H$ and $C$ denote the width, height and the number of channels of each frame. Instead of using raw images, object and optical--flow information (see Sec.~\ref{subsec:featureextraction} for more details) is extracted from the frames and used as the input sequence.
In this way, the personal information of road users, e.\,g.,~face, gender, age and license plates, can be protected.
$\boldsymbol{Y}_i$ is the corresponding \textit{ground truth} interaction label and $\hat{\boldsymbol{Y}}_i$ is the \textit{prediction}.
Moreover, sequence-to-sequence modeling is applied to learn the frame--wise dynamics of interactions over a turning sequence.
Similar to~\cite{ghori2018learning}, the task defined above is a weakly supervised learning problem due to the structure of the labeled data: the interaction label is a dichotomous class that represents the interaction level of the whole sequence. It does not provide detailed information about how the interaction level changes over time. In fact, it is not feasible to manually label each frame due to the tremendous amount of work. Without knowing the exact fine--grained frame--wise interaction label, the sequence--wise label is duplicated at each frame.
Hence, the form of the output is converted to align with the time steps, denoted as $\boldsymbol{Y}_i^{(T)} = \{Y_i^t\}_{t=0}^{T-1}$ for the $i$-th turning sequence.
Thereafter, the input and output are aligned at each frame for sequence-to-sequence modeling, as illustrated in Fig.~\ref{fig:seqtoseq} and Eq.~\eqref{eq.seqtoseq}.
\begin{equation}
\label{eq.seqtoseq}
f(\boldsymbol{Y}_i|\boldsymbol{X}_i) = \lambda\sum_{t=0}^{T-1}f(Y_i^{t}|X_i^{t}),
\end{equation}
where $f$ denotes the detection model and $\lambda$ is a voting scheme that summarizes the frame--wise predictions into the sequence--wise prediction. In this paper, an average voting scheme that weighs the prediction at each frame equally is adopted, and the sequence--wise prediction is the class label voted by the majority~\cite{cheng2020automatic} (see the sketch after the hypotheses below).
The above conversion is based on the following hypotheses:
(1) Over a large dataset, sequence lengths vary from one to another, which provides rich interaction information of both long and short sequences.
(2) The prediction error between $\boldsymbol{Y}_i$ and $\hat{\boldsymbol{Y}}_i$ is still computed at the sequence level because each frame--wise prediction only partially contributes to the sequence--wise prediction through the voting scheme. This mechanism enables the model to automatically learn the frame--wise dynamics during training.
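For concreteness, a minimal Python sketch of the average voting scheme is given below; the function name, the decision threshold and the example probabilities are illustrative assumptions rather than part of the original implementation.
\begin{verbatim}
import numpy as np

def majority_vote(frame_probs, threshold=0.5):
    # Average voting (the scheme lambda): every frame-wise
    # probability is weighed equally; the sequence-wise label
    # is the class voted by the majority of frames.
    frame_labels = (np.asarray(frame_probs) >= threshold).astype(int)
    return int(frame_labels.mean() >= 0.5)

# Six frame-wise interaction probabilities of one turning sequence
print(majority_vote([0.1, 0.2, 0.8, 0.9, 0.7, 0.6]))  # -> 1
\end{verbatim}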
In addition, sliding window and padding methods are proposed to deal with varying sequence lengths for training a recurrent neural network (RNN) based CVAE model, as shown in Fig.~\ref{fig:seqtoseq}. This is because the most commonly used RNNs, e.\,g.,~Long Short-Term Memory (LSTM)~\cite{hochreiter1997long}, for sequence modeling often require a fixed sequence length. However, at an intersection some vehicles can quickly complete the turning if the space happens to be free, whereas some vehicles may have to wait for a long time to let VRUs cross first.
The \textit{sliding window} method parses each sequence with a fixed window size $\mathsf{w}$. Eq.~\eqref{eq:slidingwindow} denotes the sliding window method with a stride equal to $\mathsf{w}$; two consecutive windows overlap when the stride is set smaller than the window size.
\begin{equation}
\label{eq:slidingwindow}
\boldsymbol{X}_i^{(T)} = \{X_i^{0},\cdots,\,X_i^{\mathsf{w}-1}\},\,\cdots,\,\{X_i^{(k-1)\mathsf{w}},\cdots,\,X_i^{k\mathsf{w}-1}\}, \text{~where~} k = \frac{T}{\mathsf{w}}.
\end{equation}
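A minimal sketch of the window parsing in Eq.~\eqref{eq:slidingwindow} follows; how a trailing remainder shorter than $\mathsf{w}$ is handled is not specified above, so it is simply dropped in this sketch.
\begin{verbatim}
def sliding_windows(frames, w, stride=None):
    # Parse a frame sequence into fixed-size windows. With
    # stride == w the windows are disjoint; a smaller stride
    # yields overlapping windows. A trailing remainder shorter
    # than w is dropped here.
    stride = stride or w
    return [frames[s:s + w]
            for s in range(0, len(frames) - w + 1, stride)]

windows = sliding_windows(list(range(20)), w=8)  # 2 windows of 8
\end{verbatim}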
The \textit{padding method} uses zero-padding at the end to extend sequences shorter than a predefined length $T^\ast > T$. The value of $T^\ast$ can be adjusted to cover most of the sequences, e.\,g.,~$T^\ast = \text{Max}\{{T}_1, \cdots,~{T}_i,~\cdots\}$, where ${T}_i$ denotes the length of an arbitrary vehicle turning sequence. Meanwhile, a padding mask annotates the exact sequence length so that the padded zero values can be treated differently to mitigate their negative impact on the learning process.
\begin{align}
\begin{split}
\boldsymbol{X}_i^{(T^\ast)} &= \{X_i^0, \cdots, ~X_i^{T-1}, ~0^{T}, \cdots, \,0^{T^\ast-1}\}, \\
\text{Mask}_i^{(T^\ast)} &= \{1^0, \,\cdots, ~1^{T-1}, ~0^{T}, \cdots, \,0^{T^\ast-1}\}. \\
\end{split}
\end{align}
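The padding method and its mask can be sketched as follows; the time-major array layout is an assumption.
\begin{verbatim}
import numpy as np

def pad_sequence(frames, t_star):
    # Zero-pad a sequence of feature frames to the fixed length
    # T* and return the padding mask that marks real frames (1)
    # versus padded frames (0).
    frames = np.asarray(frames, dtype=np.float32)
    t = frames.shape[0]
    assert t <= t_star, "sequences longer than T* are discarded"
    pad = np.zeros((t_star - t,) + frames.shape[1:], np.float32)
    mask = np.concatenate([np.ones(t), np.zeros(t_star - t)])
    return np.concatenate([frames, pad]), mask
\end{verbatim}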
After the formulation of the above sequence-to-sequence problem, we now introduce the pipeline of the proposed method for interaction detection, denoted by Fig.~\ref{fig:pipelineInteractionDetection}. It consists of two components: \textit{feature extraction} (Sec.~\ref{subsec:featureextraction}) and the sequence-to-sequence CVAE model (Sec.~\ref{subsec:cvaeclassifier}). Each component is explained in detail in the following subsections.
\subsection{Feature extraction}
\label{subsec:featureextraction}
Object information and optical--flow information extracted from video frames are used as input features for the interaction detection task, as shown in Table~\ref{tb:extractedfeatures} and Fig.~\ref{fig:inputfeaturesfordetection}.
\begin{table}[hbpt!]
\caption[Object and optical--flow information]{Object and optical flow information extracted by an object detector and the dense optical flow, respectively.}
\centering
\setlength{\tabcolsep}{4pt}
\begin{tabular}{llllll}
\toprule
Feature & C1 & C2 & C3 & C4 & Value \\ \hhline{======}
Object & pedestrians & bikes/motors & cars/trucks & buses & \{0, 1\} \\
Optical flow$^{*}$ & orientation & 1 & velocity & -- & {\,[}0, 1{\,]} \\ \bottomrule
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{3.4in}}{$^{*}$The HSV (Hue, Saturation, Value) color representation is used to store the optical--flow information. The hue channel (C1) is used to store orientation, the saturation channel (C2) is set to its maximum, and the value channel (C3) is used to store velocity. Note that there are four channels in each object frame and only three channels in each optical--flow frame.} \\
\end{tabular}
\label{tb:extractedfeatures}
\end{table}
\begin{figure}[hbpt!]
\centering
\subfloat[Object detection]{
\label{subfig:detection}
\hspace{-0.1cm}\begin{minipage}{1.7in}
\centering
\includegraphics[trim=0in 0in 0in 0.5in, clip=true, width=\textwidth]{figs/stream_2019_11_08_08_42_544678_det.pdf}
\end{minipage}
}%
\subfloat[Binary mask]{
\label{subfig:mask}
\hspace{-0.1cm}\begin{minipage}{1.7in}
\centering
\includegraphics[trim=0in 0in 0in 1.45in, clip=true, width=\textwidth]{figs/mask.pdf}
\end{minipage}
}%
\subfloat[Object information]{
\label{subfig:bbox}
\hspace{-0.1cm}\begin{minipage}{1.7in}
\centering
\includegraphics[trim=0in 0in 0in 0.5in, clip=true, width=\textwidth]{figs/stream_2019_11_08_08_42_544678_bbox.pdf}
\end{minipage}
}%
\subfloat[Optical--flow information]{
\label{subfig:optical_flow}
\hspace{-0.1cm}\begin{minipage}{1.7in}
\centering
\includegraphics[trim=0in 0in 0in 0.5in, clip=true, width=\textwidth]{figs/stream_2019_11_08_08_42_544678_op.pdf}
\end{minipage}
}
\caption[Input features for interaction detection]{Input features for interaction detection. Note that (c) only exemplifies three channels with pedestrians denoted in blue, bicycle(s) in green and car(s) in red color. The overlaid bounding boxes in (d) only serve the purpose of showing the location of the objects, including the static ones. They are not integrated into the optical--flow information.}
\label{fig:inputfeaturesfordetection}
\end{figure}
Object information contains road users' type and location. A deep learning object detector, such as YOLOv3~\cite{redmon2016you} or M2Det~\cite{zhao2019m2det}, is leveraged to detect all the relevant road users at each frame, namely pedestrians, cyclists, motorbikes, cars, trucks and buses. Different channels are used to store the road--user position information and each channel is dedicated to one or two similar road user types: since only very few motorbikes were detected in the acquired data, they are stored in the same channel as bicycles, and cars/trucks share one channel given their very similar turning traces, see Table~\ref{tb:extractedfeatures}. The location of the detected road users (Fig.~\ref{subfig:detection}) is mapped by the corresponding bounding boxes with values of one in each frame (Fig.~\ref{subfig:bbox}). Areas with no detected objects are set to zero, shown in black.
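A minimal sketch of this channel mapping is given below; the function name, the detection tuple format and the channel constants are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Channel layout following the feature table (C1..C4)
CHANNEL = {"pedestrian": 0, "bicycle": 1, "motorbike": 1,
           "car": 2, "truck": 2, "bus": 3}

def rasterize_detections(detections, width, height):
    # Map bounding boxes into an (H, W, 4) binary object frame:
    # pixels inside a box are set to 1 in the channel of the
    # road-user type; everywhere else stays 0 (black).
    frame = np.zeros((height, width, 4), dtype=np.float32)
    for cls, x0, y0, x1, y1 in detections:
        frame[int(y0):int(y1), int(x0):int(x1), CHANNEL[cls]] = 1.0
    return frame
\end{verbatim}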
Optical flow is used to capture the motion of road users. It describes the distribution of apparent velocities of brightness patterns between two consecutive images~\cite{horn1981determining}. Moving objects are captured by optical flow, while static objects and the background are ignored. The dense optical--flow algorithm~\cite{farneback2003two} is applied to map the displacement of moving objects and remove the static background information, see Fig.~\ref{subfig:optical_flow}. Similarly, respective frame channels are dedicated to the orientation and velocity information of the moving objects, see Table~\ref{tb:extractedfeatures}.
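The HSV encoding of the dense optical flow (Table~\ref{tb:extractedfeatures}) can be sketched with OpenCV as follows; the Farneback parameter values shown are the common tutorial defaults, not necessarily those used in this work.
\begin{verbatim}
import cv2
import numpy as np

def optical_flow_frame(prev_gray, curr_gray):
    # Dense optical flow (Farneback); the positional arguments
    # after None are pyr_scale, levels, winsize, iterations,
    # poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros(prev_gray.shape + (3,), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2   # orientation -> hue
    hsv[..., 1] = 255                     # saturation at maximum
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    return hsv
\end{verbatim}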
The area of interest is the turning space of the intersection and is marked by a binary mask (Fig.~\ref{subfig:mask}); the other areas are not considered.
As shown in Fig.~\ref{subfig:detection} and \ref{subfig:mask}, the mask of the area of interest slightly extends into the through lane next to the turning lane. Due to the oblique view of the camera, the upper bodies of the vehicles in the turning lane are partially projected into the through lane. The extended mask aims to include the upper bodies of the turning vehicles as well. The lower middle point of the bounding box of a detected vehicle is used to filter out the vehicles in the through lane.
However, the extended mask introduces noise into the optical--flow information. For instance, as shown in Fig.~\ref{subfig:optical_flow}, the motion of the vehicles in the through lane is also captured by the optical flow and cannot easily be filtered out given the irregular shapes and occlusions. It later turns out that this noise is not problematic when the object information and the optical--flow information are combined as the input for training the interaction detection model, since interactions between vehicles and VRUs only happen in the crossing zone.
\subsection{CVAE model for interaction detection}
\label{subsec:cvaeclassifier}
The model of predicting the probabilities of interaction between a turning vehicle and the other crossing road users is denoted as $f(\boldsymbol{Y}|\boldsymbol{X})= \text{arg\,max}_{\boldsymbol{Y}}~p(\boldsymbol{Y}|\boldsymbol{X},\, \mathbf{z})$, where $f$ is a CVAE model that performs probabilistic prediction and $\mathbf{z}$ are the Gaussian latent variables. The model encodes the information of interaction into a latent space and predicts the interaction label $\hat{\boldsymbol{Y}}$ conditioned on the input $\boldsymbol{X}$ and $\mathbf{z}$. The variational lower bound~\cite{sohn2015learning} of the model is given as follows:
\begin{align}
\begin{split}
\log p_\theta(\boldsymbol{Y}|\boldsymbol{X}) \geq & -D_{KL}(q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})||p_\theta(\mathbf{z})) \\
& + \mathbb{E}_{q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})}
[\log p_\theta(\boldsymbol{Y}|\boldsymbol{X}, \,\mathbf{z})]. \label{eq:cvaeclassifier}
\end{split}
\end{align}
The model jointly trains a recognition model $q_\phi(\,\cdot\,)$ (a.\,k.\,a. encoder) and a generative model $p_\theta(\,\cdot\,)$ (a.\,k.\,a. decoder). In the training phase, the model is optimized via stochastic backpropagation~\cite{rezende2014stochastic}. $q_\phi(\,\cdot\,)$ encodes the observed information and the ground truth label into the latent variables $\mathbf{z}$. In other words, the label inserted during training is combined with the condition to parameterize the Gaussian latent space, which later can be used for structured prediction to map the many possible outputs~\cite{sohn2015learning}.
$p_\theta(\,\cdot\,)$ decodes the prediction of the interaction label conditioned on the input and the latent variables.
$-D_{KL}(\,\cdot\,)$ is the negative Kullback-Leibler divergence of the approximate posterior from the prior $p_\theta(\mathbf{z})$ and acts as a regularizer, which pushes the approximate posterior $q_\phi(\,\cdot\,)$ to the prior distribution $p_\theta(\mathbf{z})$. Note that in our model the prior is relaxed to make the latent variables statistically independent from the input variables so that $p_\theta(\mathbf{z}) = p_\theta(\mathbf{z}|\boldsymbol{X})$~\cite{DBLP:conf/nips/KingmaMRW14}. The prediction loss $\mathbb{E}_{q_\phi(\mathbf{z}|\boldsymbol{X}, \,\boldsymbol{Y})}(\,\cdot\,)$ measures the distance between $\hat{\boldsymbol{Y}}$ and $\boldsymbol{Y}$. The binary cross--entropy loss is used as the prediction loss, as denoted by Eq.~\eqref{eq:binarycrossentropy}.
\begin{equation}
\label{eq:binarycrossentropy}
\mathcal{H}(\hat{\boldsymbol{Y}},\boldsymbol{Y}) = -\{\boldsymbol{Y} \log\hat{\boldsymbol{Y}}
+ (1-\boldsymbol{Y}) \log(1-\hat{\boldsymbol{Y}})\}.
\end{equation}
In the inference phase, the decoder predicts the interaction label conditioned on the input of the observed information concatenated with a latent variable directly sampled from the Gaussian prior $p_\theta(\mathbf{z})$. The sampling process is done multiple times to perform diverse predictions~\cite{sohn2015learning}.
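A minimal PyTorch sketch of this objective is given below, assuming a standard Gaussian prior and the usual reparameterization trick; the framework choice and tensor shapes are assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def reparameterize(z_mean, z_logvar):
    # Reparameterization trick for stochastic backpropagation.
    std = torch.exp(0.5 * z_logvar)
    return z_mean + std * torch.randn_like(std)

def cvae_loss(y_hat, y, z_mean, z_logvar):
    # Negative variational lower bound: binary cross-entropy
    # reconstruction term plus the KL divergence between
    # N(z_mean, exp(z_logvar)) and the prior N(0, I).
    bce = F.binary_cross_entropy(y_hat, y, reduction='sum')
    kld = -0.5 * torch.sum(1 + z_logvar
                           - z_mean.pow(2) - z_logvar.exp())
    return bce + kld
\end{verbatim}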
Convolutional Neural Networks (CNNs) and RNNs, as well as the self-attention mechanism~\cite{vaswani2017attention}, are employed in the CVAE model to learn the parameters $\theta$ and $\phi$.
As shown in Fig.~\ref{fig:pipelineInteractionDetection}, the encoder has two branches: X-Encoder and Y-Encoder. They are dedicated to extracting low--level features from the condition (the object and optical--flow information) and the interaction label information, respectively. Each module, i.\,e.,~X-Encoder, Y-Encoder, Latent Space, and Decoder of the CVAE model, is explained in detail as follows.
The X-Encoder employs two CNNs for learning spatial features from the object frame sequence and the optical--flow frame sequence, respectively.
Without loss of generality, the object frame sequence using the sliding window (e.\,g.,~$\mathsf{w}=8$) method is taken as an example for explaining the learning process.
First, each frame from the sliding window is passed to a CNN to learn spatial features. As shown in Fig.~\ref{fig:CVAEclassifierCNN}, the CNN has three 2D convolutional (CONV) layers with each one followed by a Maximum Pooling (MP) layer and a Batch Normalization (BN)~\cite{ioffe2015batch}. It takes the frame that contains object information as input and outputs a flattened feature vector. This process is done frame by frame for all the frames in the sliding window.
Then, the output feature vectors of all the frames are distributed over time as a sequence that maintains the same length as the window size, as shown in Fig.~\ref{fig:pipelineInteractionDetection} for the X-Encoder\footnote{This process works in the same way for the padding method, with the predefined sequence length instead of the sliding window size.}. The optical--flow frame sequence is processed by another CNN in a similar way to get the sequence of optical--flow feature vectors.
In the end, the object feature vectors and the optical--flow feature vectors are concatenated into a 2D feature vector as the final output of the X-Encoder.
Note that the CNN for the optical--flow frame sequence has a similar structure, except for the number of input channels: the CNN for the object frame sequence takes four channels dedicated to the different road user types, whereas the CNN for the optical--flow frame sequence takes three channels dedicated to the motion information (see Table~\ref{tb:extractedfeatures}).
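A sketch of the per-frame CNN, combining the structure in Fig.~\ref{fig:CVAEclassifierCNN} with the kernel sizes and stride reported in Sec.~\ref{subsec:settings}, is given below; the channel widths (16/32/64) are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    # Three CONV+MP+BN stages (kernel sizes 8, 4, 2; stride 2,
    # same-like border padding). in_channels is 4 for object
    # frames and 3 for optical-flow frames.
    def __init__(self, in_channels=4):
        super().__init__()
        layers, chans = [], [in_channels, 16, 32, 64]
        for i, k in enumerate([8, 4, 2]):
            layers += [nn.Conv2d(chans[i], chans[i + 1], k,
                                 stride=2, padding=k // 2),
                       nn.MaxPool2d(2),
                       nn.BatchNorm2d(chans[i + 1])]
        self.features = nn.Sequential(*layers)

    def forward(self, x):  # x: (batch, C, H, W)
        return torch.flatten(self.features(x), start_dim=1)
\end{verbatim}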
\begin{figure}[t!]
\centering
\includegraphics[trim=0in 0.5in 0in 0in, width=3.5in]{figs/interaction_CNN.pdf}
\caption[The CNN used for learning spatial features]{The CNN used for learning spatial features from an object frame. CONV stands for 2D convolutional layer, MP for Maximum Pooling layer and BN for Batch Normalization.}
\label{fig:CVAEclassifierCNN}
\end{figure}
The Y-Encoder embeds the interaction label for each sequence. First, the sequence--wise label is replicated to align with the sequence length. Then, a fully connected (FC) layer is used to embed the replicated labels into a label vector. The original dimension of the label is only two after one-hot encoding of the non-interaction and interaction classes, which is much smaller than the combined feature vector; the embedding balances the sizes of the label vector and the combined feature vector. The specific dimensionalities are shown in Fig.~\ref{fig:pipelineInteractionDetection} and are hyper-parameters that can be changed in the experimental settings.
The Gaussian latent variables $\mathbf{z}$ are modeled from the encoded feature vector and the label vector of the X-Encoder and the Y-Encoder, respectively. First, the outputs of the X-Encoder and Y-Encoder are concatenated along the time axis. Then, the concatenated features are passed to an FC layer followed by a self-attention layer~\cite{vaswani2017attention}. The self-attention layer takes all the features along the time axis at the same time and attentively learns their interconnections globally. After that, an LSTM with two stacked hidden layers is used to learn the temporal features into a hidden state. In the end, the hidden state is fully connected by an FC layer and then split by two FC layers side by side, which are trained to learn the mean and the variance of the distribution of the latent variables $\mathbf{z}$, respectively.
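A sketch of this latent-space module follows; the input feature dimension, the number of attention heads and the FC widths are assumptions, while the LSTM hidden sizes (64, 32) and the latent size (64) follow Sec.~\ref{subsec:settings}.
\begin{verbatim}
import torch.nn as nn

class LatentEncoder(nn.Module):
    def __init__(self, feat_dim=128, z_dim=64, num_heads=4):
        super().__init__()
        self.fc = nn.Linear(feat_dim, feat_dim)
        self.attn = nn.MultiheadAttention(feat_dim, num_heads,
                                          batch_first=True)
        self.lstm1 = nn.LSTM(feat_dim, 64, batch_first=True)
        self.lstm2 = nn.LSTM(64, 32, batch_first=True)
        self.fc_hidden = nn.Linear(32, 32)
        self.fc_mean = nn.Linear(32, z_dim)     # mean of z
        self.fc_logvar = nn.Linear(32, z_dim)   # log-variance of z

    def forward(self, xy):  # xy: (batch, time, feat_dim)
        h = self.fc(xy)
        h, _ = self.attn(h, h, h)  # global links along time
        h, _ = self.lstm1(h)
        h, _ = self.lstm2(h)
        h = self.fc_hidden(h[:, -1])  # last hidden state
        return self.fc_mean(h), self.fc_logvar(h)
\end{verbatim}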
The Decoder is trained conditioned on the encoded feature vector from the X-Encoder and the latent variables. First, the encoded feature vector is concatenated with the latent variables and passed to an FC layer. Then, an LSTM with two stacked layers is used to learn the temporal dynamics. After that, two FC layers are used for fusion and dimension reduction. The Softmax activation function is added to the last FC layer for generating the probability of the interaction class at each frame. The outputs of the Decoder are the frame--wise predictions of the interaction class. In the end, the average voting scheme is used to summarize the frame--wise predictions into the sequence--wise prediction of the interaction class.
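A corresponding sketch of the Decoder, with hidden sizes that are assumptions consistent with the sketches above:
\begin{verbatim}
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, feat_dim=128, z_dim=64):
        super().__init__()
        self.fc_in = nn.Linear(feat_dim + z_dim, 64)
        self.lstm = nn.LSTM(64, 64, num_layers=2, batch_first=True)
        self.fc1 = nn.Linear(64, 32)   # fusion
        self.fc2 = nn.Linear(32, 2)    # dimension reduction

    def forward(self, x_feat, z):  # x_feat: (batch, T, feat_dim)
        # Broadcast z over time and condition each frame on it.
        z = z.unsqueeze(1).expand(-1, x_feat.size(1), -1)
        h = self.fc_in(torch.cat([x_feat, z], dim=-1))
        h, _ = self.lstm(h)
        h = self.fc2(torch.relu(self.fc1(h)))
        return torch.softmax(h, dim=-1)  # frame-wise probabilities
\end{verbatim}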
At inference time, the interactions for unseen vehicle turning sequences are classified using the trained CVAE model. First, the object and optical--flow information is encoded by the X-Encoder. A latent variable is sampled from the Gaussian distribution. Then, the Decoder generates the probabilities of the interaction class for each sequence conditioned on the output of the X-Encoder and the sampled latent variable. The sampling is repeated multiple times at each step so that the Decoder generates diverse probabilities of the interaction class.
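This multi-sampling inference step can be sketched as follows, with $N=100$ as in Sec.~\ref{subsec:settings}; the function interface follows the Decoder sketch above.
\begin{verbatim}
import torch

@torch.no_grad()
def predict_with_sampling(decoder, x_feat, n_samples=100, z_dim=64):
    # Sample z from the prior N(0, I) repeatedly and collect the
    # decoder's frame-wise interaction probabilities.
    preds = [decoder(x_feat, torch.randn(x_feat.size(0), z_dim))
             for _ in range(n_samples)]
    return torch.stack(preds)  # (n_samples, batch, T, 2)
\end{verbatim}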
\subsection{Estimation of uncertainty}
\label{subsec:interactionuncertainty}
Kernel density estimation (KDE)~\cite{parzen1962estimation,loftsgaarden1965nonparametric} is used to measure the uncertainty of the diverse predictions generated by the above multi-sampling process.
At each frame, the predictions $\{Y_{i, 1}^t,\,\cdots,\,Y_{i, N}^t\}$ are assumed to be i.i.d. samples drawn from an unknown density function $g(Y)$, where $N$ is the total number of predictions, $t\leq T-1$, and $T$ is the total number of steps of the given sequence $i$. The KDE is calculated as:
\begin{equation}
\hat{g}^t(Y) = \frac{1}{N}\sum_{n=1}^{N}K_h(Y-Y_{i,n}^t)=\frac{1}{Nh}\sum_{n=1}^{N}K\left(\frac{Y-Y_{i,n}^t}{h}\right),
\end{equation}
where $K(\cdot)$ is the Gaussian kernel function and $h$ is the smoothing parameter (also called \textit{bandwidth}). The log-likelihood of the average prediction at step $t$ is given by $\mathcal{L}(\log(\hat{g}),\, \bar{Y}_i)^t$, where $\bar{Y}_i$ is the average prediction. The uncertainty is defined as the residual of the normalized log-likelihood averaged over all the steps of the given sequence $i$, as denoted by Eq.~\eqref{eq:uncertainty}:
\begin{equation}
\label{eq:uncertainty}
\Gamma_i = 1- \frac{1}{T}\sum_{t=0}^{T-1}\omega\mathcal{L}(\log(\hat{g}),\, \bar{Y_i})^t,
\end{equation}
where $\omega$ is the normalization parameter that scales the values to $[0, 1]$ and $\Gamma_i$ stands for the degree of uncertainty for the prediction over the sequence $i$.
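A sketch of this uncertainty estimate with SciPy is given below; SciPy's default bandwidth selection stands in for $h$, a fixed $\omega$ is assumed, and degenerate frames (identical samples) are not handled.
\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

def sequence_uncertainty(frame_predictions, omega=1.0):
    # frame_predictions: array of shape (T, N); for each frame,
    # fit a Gaussian KDE to the N sampled predictions, evaluate
    # the log-likelihood of the average prediction, and average
    # the scaled values over all frames.
    scores = []
    for samples in frame_predictions:
        kde = gaussian_kde(samples)
        scores.append(omega * kde.logpdf(samples.mean())[0])
    return 1.0 - float(np.mean(scores))
\end{verbatim}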
\section{Data Acquisition and Pre-processing}
\label{sec:InteDetcData}
\begin{figure}[t!]
\centering
\subfloat[The KoW left--hand intersection in Germany]{
\label{subfig:KoWscreenshot}
\begin{minipage}{3.5in}
\centering
\includegraphics[trim=0in 0in 0in 2.5in, clip=true, width=3.45in]{figs/KoW_intersection_annotation_yellow.png}
\end{minipage}
}%
\subfloat[The NGY right--hand intersection in Japan]{
\label{subfig:NGYscreenshot}
\begin{minipage}{3.5in}
\centering
\includegraphics[trim=0in 2in 0in 2.8in, clip=true, width=3.45in]{figs/NGY_intersection_annotation_yellow.png}
\end{minipage}
}%
\caption[Screenshots of two intersections]{The screenshots of KoW and NGY intersections. Vehicle turning sequences are constrained in the yellow contours.}
\label{fig:intersections}
\end{figure}
Real--world datasets were acquired to test the performance of the proposed model for interaction detection. Fig.~\ref{fig:intersections} shows the screenshots of the two intersections where various traffic scenes were recorded. The KoW dataset was acquired by \cite{koetsier2019trajectory} from a very busy right--turn intersection in Hannover, Germany. The videos recorded traffic conditions from 00:02~a.\,m. to 11:58~p.\,m. on November 8th and 9th, 2019. They were recorded in $1280 \times 720$ pixels at \SI{25}{fps} by a camera module (Raspberry Pi Camera Module v2) installed inside a building (ca. \SI{20}{m} ground elevation) facing the intersection and stored in .h264 format. We use an approximately 14-hour sub-footage from two seven--hour segments (8~a.\,m. to 3~p.\,m. on both the 8th and 9th), when there was enough traffic and adequate ambient light to perform stable image processing for feature extraction. The NGY dataset was provided by Nagoya Toyopet Corporation. It was acquired from an extremely busy left--turn intersection in Nagoya, Japan. In total, approximately 24 hours of traffic footage from an oblique view at one of the major intersections were recorded from 11~a.\,m. to 11~a.\,m. on April 23rd and 24th, 2019. The videos were recorded in $1600\times1200$ pixels at \SI{30}{fps} using a camera (Panasonic WV-SF781L) installed inside a building (ca. \SI{3}{m} ground elevation) adjacent to the intersection and stored in .mp4 format. Similarly, we use a twelve-hour sub-footage recorded from 11~a.\,m. to 6~p.\,m. on the 23rd and from 6~a.\,m. to 11~a.\,m. on the 24th.
Both datasets were pre-processed for later usage. Due to missing camera intrinsic and extrinsic parameters, no projection was done for extracting trajectory data. The pre-processing aimed to identify vehicle turning sequences and extract all the road users' type, position and motion information. First, two annotators for each dataset manually detected the scenes where a vehicle turned right at the KoW intersection or left at the NGY intersection, and extracted the time intervals of the vehicle staying within the yellow contours (see Fig.~\ref{fig:intersections}).
The annotators independently determined whether or not interactions occurred in each scene; afterwards they revised their annotations and reached agreement\footnote{Less than \SI{1}{\percent} of the sequences were initially annotated differently.}, labeling each scene as ``non-interaction'' or ``interaction''. Then, YOLOv3~\cite{redmon2016you} and M2Det~\cite{zhao2019m2det} were used to detect all the traffic--related objects at the original frame rate of the KoW (\SI{25}{fps}) and NGY (\SI{30}{fps}) datasets, respectively. Note that these two sources of data were from different providers, so the camera settings and the object detection algorithms were not unified. Considering that the change between two consecutive frames is small, a failed detection in the current frame was supplemented by the detection in the previous or the next frame if either of them was available; otherwise, sequences with failed detections and no supplementation were discarded.
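A sketch of this supplementation rule follows; the per-frame data structure (with None marking a failed detection) is an illustrative assumption.
\begin{verbatim}
def supplement_detections(per_frame):
    # per_frame: list of detection lists, None = failed detection.
    # A failed frame borrows the detections of its previous or
    # next neighbor, if one of them is available.
    fixed = list(per_frame)
    for t, det in enumerate(per_frame):
        if det is None:
            prev = per_frame[t - 1] if t > 0 else None
            nxt = (per_frame[t + 1]
                   if t + 1 < len(per_frame) else None)
            fixed[t] = prev if prev is not None else nxt
    return fixed
\end{verbatim}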
In addition, the dense optical--flow algorithm~\cite{farneback2003two} was used to extract the optical--flow information from the sequences. Different from the object detection, the frame rate was down--sampled to half of the original rate, i.\,e.,~\SI{12.5}{fps} for KoW and \SI{15}{fps} for NGY.
This reduces the computational cost and increases the offset of moving pixels between two consecutive frames, which improves the extraction performance of optical flow~\cite{farneback2003two}. In the end, both the object and the optical--flow sequences were aligned with the down--sampled frame rate of each dataset, which is used as the time step for interaction detection.
\begin{figure}[t!]
\centering
\subfloat[]{
\label{subfig:seq-nonvsin}
\begin{minipage}{0.236\textwidth}
\centering
\includegraphics[trim=0in -0.1in 0in 0in, width=\textwidth]{figs/seqdif_second_font.pdf}
\end{minipage}%
}%
\subfloat[]{
\label{fig:seq-frame-overdata}
\begin{minipage}{0.236\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, clip=true, width=\textwidth]{figs/seqlengthdis_frames_font.pdf}
\end{minipage}%
}
\caption[Sequence lengths in the KoW and NGY datasets]{Sequence lengths in the KoW and NGY datasets. (a) Standard deviation is denoted by the red error bar. (b) Sequence length measured by the number of frames and $T^*$ is the length threshold for the padding method.}
\label{fig:seqdistrubitions}
\end{figure}
The data processing yields over 2000 vehicle turning sequences with varying lengths, as denoted by Fig.~\ref{fig:seqdistrubitions}.
Within each dataset, sequence lengths measured in seconds vary considerably.
The non-interaction sequences are significantly shorter than the interaction ones (U-test, $U=382142$, $p\ll0.01$ for KoW and $U=67199$,~$p\ll0.01$ for NGY), and the standard deviation of each class in both datasets spans a large range.
This indicates that the duration of a sequence is not an accurate feature for the detection task; a short sequence duration does not necessarily imply no interaction. In addition, the sequence lengths over each dataset follow a very uneven, long--tailed distribution, especially for the NGY dataset, see Fig.~\ref{fig:seq-frame-overdata}.
Across the datasets, the sequences in the KoW and NGY datasets are different in terms of not only travel direction but also frame size and rate, as well as sequence length in general.
Though non-interaction sequences from both datasets have similar lengths (on average \SI{5.2}{s} in KoW and \SI{5.3}{s} in NGY), interaction sequences in NGY have a longer average length (\SI{10.8}{s}) than those in KoW (\SI{7.2}{s}).
Due to the higher traffic density at the NGY intersection compared to the KoW intersection, vehicles often had to wait for more crossing pedestrians and cyclists. The above differences make cross--dataset validation very difficult (more details in Sec.~\ref{subsec:crossdatavalidation}).
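The length comparison above uses the Mann-Whitney U-test, which can be reproduced along these lines; the synthetic lengths below are illustrative placeholders, not the real data.
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
non_interaction = rng.normal(5.2, 1.5, 642)  # lengths in seconds
interaction = rng.normal(7.2, 2.5, 642)
u_stat, p_value = mannwhitneyu(non_interaction, interaction,
                               alternative='two-sided')
\end{verbatim}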
\begin{table}[hbpt!]
\caption[Video frame sequences used for interaction detection]{Video frame sequences used for interaction detection. Sequence length is measured by the number of frames and the sample sizes of the classes were balanced for each set.}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{ccccccc}
\toprule
Name & Input form & Max. length$^{*}$ & Training & Validation & Test & Total \\ \hhline{=======}
KoW & sliding & 500 & 360/360 & 90/90 & 192/192 & 642/642 \\
KoW & padding & 100 & 352/352 & 88/88 & 188/188 & 628/628 \\
NGY & sliding & 500 & 291/291 & 74/74 & 159/159 & 530/530 \\
NGY & padding & 100 & 132/132 & 33/33 & 70/70 & 235/235 \\ \bottomrule
\end{tabular}
\label{tb:interactiondatapartition}
\end{table}
The acquired datasets were further prepared for training the detection models, which involves sample balancing, sequence padding and dataset partitioning.
The number of samples in each class was balanced to perform unbiased training. For both datasets, the maximum number of sequences in each class was set to a value close to the capacity of the smaller class. Note that the small number of very long sequences (i.\,e.,~$>500$ frames, see Fig.~\ref{fig:seq-frame-overdata}) was not used for the experiments. All such sequences are from the interaction class, i.\,e.,~vehicles had to wait a long time to let other road users cross the intersection. The removal of these sequences balances sample size and length in both classes, in order to prevent the model from being biased towards the interaction class.
Sequences with a smaller number of frames than the threshold $T^\ast$ are padded with zeros for the padding method (see Sec.~\ref{subsec:interactionproblemformulation}).
However, if $T^\ast$ is too large most of the sequences will be padded with zeros and this will lead to noisy samples; if $T^\ast$ is too small many long sequences will be excluded.
To balance this trade-off, based on the sequence length distributions over the datasets (Fig.~\ref{fig:seq-frame-overdata}), $T^\ast$ was set to 100 frames for both datasets so that the majority of all sequences was included. Sequences shorter than or equal to $T^\ast$ were preserved for the experiments of the models that use the padding method. Sequences longer than $T^\ast$ exceed the maximum input length the models can handle and were therefore discarded.
Under the balanced criteria above for each class, both datasets were then randomly split into training and test sets at a ratio of $70:30$. Additionally, $20\,\%$ of the training data was separated as an independent validation set to monitor the training process. Table~\ref{tb:interactiondatapartition} lists the statistics of the final data used for the experiments after these preparation steps.
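The split can be sketched as follows; the placeholder list and random seed are illustrative assumptions.
\begin{verbatim}
from sklearn.model_selection import train_test_split

sequences = list(range(1284))  # placeholder for balanced sequences
train_val, test = train_test_split(sequences, test_size=0.3,
                                   random_state=0)
train, val = train_test_split(train_val, test_size=0.2,
                              random_state=0)
\end{verbatim}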
\section{Experiments}
\label{sec:InteDetcExperimentSettings}
\subsection{Baseline and ablative models}
\label{subsec:interactionbaseline}
To evaluate the performance of the proposed CVAE model, it is compared with a baseline model: a sequence-to-sequence encoder--decoder model that uses the same input features from the object and motion information for interaction detection~\cite{cheng2020automatic}. It has the same structure as the X-Encoder and the Decoder implemented in the CVAE model (Fig.~\ref{fig:pipelineInteractionDetection}). The difference between the two models is the sample generation process: the baseline model is a discriminative model and uses neither the class label information nor the conditional information for learning latent variables that mimic the stochastic behavior in vehicle--VRU interactions. Without random sampling from Gaussian latent variables, the output of the sequence-to-sequence encoder--decoder model is deterministic.
A series of ablative models are designed to analyze the contribution of the object information (\textit{ob}), the optical--flow information (\textit{op}), and the self-attention mechanism (\textit{att}). The ablative models are trained by removing one of the aforementioned parts, as denoted in Table~\ref{tb:interactionclassifiers}.
\begin{table}[hbpt!]
\caption[The models with different input structures]{The models with different input structures.}
\label{tb:interactionclassifiers}
\centering
\begin{tabular}{lcccc}
\toprule
Model name & (\textit{ob}) & (\textit{op}) & (\textit{att}) & Sample generation \\ \hhline{=====}
\textit{[S+ob+op+att]}$^1$ & $\surd$ & $\surd$ & $\surd$ & - \\
\textit{[C+op+att]} & - & $\surd$ & $\surd$ & $\surd$ \\
\textit{[C+ob+att]} & $\surd$ & - & $\surd$ & $\surd$ \\
\textit{[C+ob+op]} & $\surd$ & $\surd$ & - & $\surd$ \\
\textit{[C+ob+op+att]}$^2$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\
\bottomrule
\end{tabular}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{\textwidth}}{$^{1}$the baseline model; $^{2}$the complete CVAE model} \\
\end{tabular}
\end{table}
\subsection{Evaluation metrics}
\label{subsec:interactionevaluationmetrics}
Tested samples are categorized according to the comparison between their ground truth and the predicted labels, as listed in Table~\ref{tb:evaluationcategory}. Accuracy, Precision, Recall and F1-score are applied to measure the performance of interaction detection on the test data from both the KoW and NGY intersections.
\begin{table}[hbpt!]
\caption[Categories of tested samples]{Categories of tested samples}
\centering
\begin{tabular}{lll}
\toprule
Category name & Ground truth & Prediction \\ \hhline{===}
TP: true positive & interaction & interaction \\
TN: true negative & non-interaction & non-interaction \\
FP: false positive & non-interaction & interaction \\
FN: false negative & interaction & non-interaction \\ \bottomrule
\end{tabular}
\label{tb:evaluationcategory}
\end{table}
Accuracy is the fraction of correctly predicted samples over the total number of samples.
\begin{equation*}
\text{Accuracy}=(\text{TP}+\text{TN})/({\text{TP}+\text{TN}+\text{FP}+\text{FN}})\,.
\end{equation*}
Precision is the fraction of TP samples over the number of predicted positive samples.
\begin{equation*}
\text{Precision}={\text{TP}}/({\text{TP}+\text{FP}})\,.
\end{equation*}
Recall is the fraction of TP samples over the number of actual positive samples in the whole dataset.
\begin{equation*}
\text{Recall}={\text{TP}}/({\text{TP}+\text{FN}})\,.
\end{equation*}
The F1-score provides a measure of the overall performance of a model. It is defined as the harmonic mean of precision and recall.
\begin{equation*}
\label{eq:f1score}
\text{F1-score} = 2\times({\text{Precision}\cdot\text{Recall}})/({\text{Precision}+\text{Recall}})\,.
\end{equation*}
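For reference, the four metrics can be computed from the counts in Table~\ref{tb:evaluationcategory} as in the following sketch.
\begin{verbatim}
def classification_metrics(tp, tn, fp, fn):
    # Accuracy, Precision, Recall and F1-score from the counts
    # of the four categories (assumes non-zero denominators).
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
\end{verbatim}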
\subsection{Experimental Settings}
\label{subsec:settings}
The kernel size of the CNNs in each layer is set to 8, 4, and 2, respectively, with a stride of 2 and same padding for the borders.
The size of the first hidden layer of the LSTM is set to 64 and the second hidden layer is 32.
The size of the latent variables is set to 64.
All the models are trained with a constant learning rate of $10^{-4}$ (zero decay) using the Adam optimizer ($\beta_1=0.9$ and $\beta_2=0.999$)~\cite{DBLP:journals/corr/KingmaB14}.
The batch size is set to 32, and all the models were trained for 50 epochs on an NVIDIA Quadro T2000 GPU.
At inference time, the number of samples $N$ is set to 100 for all the CVAE--based models.
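In PyTorch terms, the optimizer configuration corresponds to the following sketch; the placeholder module stands in for the full model.
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Linear(8, 2)  # placeholder for the CVAE model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), weight_decay=0.0)
\end{verbatim}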
\section{Results}
\label{sec:InteDetcResults}
This section presents the quantitative and qualitative results for each intersection, as well as the discussion of the results.
\subsection{Quantitative results}
\label{sec:InteDetcResults-quantitative}
The quantitative results are summarized in Table~\ref{tb:KoW-results} and \ref{tb:NGY-results} for the right--turn KoW intersection and the left--turn NGY intersection, respectively. Due to the multi--sampling process the results of the CVAE--based models are not deterministic, hence the corresponding standard deviations are provided.
Table~\ref{tb:KoW-results} shows the results of the interaction detection at the right--turn intersection. (1) Both the sliding window and padding methods yield similar and very accurate results for interaction detection using the combined information from object detection and optical flow; the accuracy and F1-score are both above 0.95. (2) Compared to the baseline model \textit{[S+ob+op+att]}, the proposed model \textit{[C+ob+op+att]} performs slightly better using the sliding window method and comparably using the padding method. (3) Compared to the ablative models, the combined information improves the performance using the sliding window method. However, the improvement is not obvious using the padding method, especially compared to the ablative model that only uses the object information. On the other hand, regardless of the sliding window or padding method, the ablative models that merely use optical--flow information only achieve an accuracy below 0.70. (4) The self-attention mechanism does not lead to an obviously better or worse performance using either the sliding window or padding method.
\begin{table}[hbpt!]
\caption[Detection results of a right--turn intersection]{Detection results of the right--turn intersection on the KoW dataset. Best values are highlighted in boldface.}
\centering
\setlength{\tabcolsep}{1.4pt}
\begin{tabular}{llllll}
\toprule
Model & shape & Accuracy & Precision & Recall & F1-score \\ \hhline{======}
\textit{[S+ob+op+att]} & sli. & 0.951 & 0.935 & 0.969 & 0.951 \\
\textit{[C+op+att]} & sli. & 0.692$_{\pm0.006}$ & 0.717$_{\pm0.007}$ & 0.635$_{\pm0.011}$ & 0.673$_{\pm0.008}$ \\
\textit{[C+ob+att]} & sli. & 0.952$_{\pm0.002}$ & 0.934$_{\pm0.002}$ & \textbf{0.973$_{\pm0.004}$} & 0.953$_{\pm0.002}$ \\
\textit{[C+ob+op]} & sli. & \textbf{0.965$_{\pm0.001}$} & \textbf{0.976$_{\pm0.003}$} & 0.953$_{\pm0.0}$ & \textbf{0.964$_{\pm0.001}$} \\
\textit{[C+ob+op+att]} & sli. & 0.961$_{\pm0.002}$ & 0.969$_{\pm0.004}$ & 0.953$_{\pm0.0}$ & 0.961$_{\pm0.002}$ \\ \midrule
\textit{[S+ob+op+att]} & pad. & 0.963 & 0.944 & 0.984 & 0.964 \\
\textit{[C+op+att]} & pad. & 0.610$_{\pm0.008}$ & 0.649$_{\pm0.011}$ & 0.479$_{\pm0.012}$ & 0.551$_{\pm0.010}$ \\
\textit{[C+ob+att]} & pad. & \textbf{0.967$_{\pm0.002}$} & \textbf{0.955$_{\pm0.003}$} & 0.980$_{\pm0.003}$ & \textbf{0.967$_{\pm0.002}$} \\
\textit{[C+ob+op]} & pad. & 0.966$_{\pm0.003}$ & 0.946$_{\pm0.004}$ & \textbf{0.989$_{\pm0.001}$} & \textbf{0.967$_{\pm0.002}$} \\
\textit{[C+ob+op+att]} & pad. & 0.962$_{\pm0.002}$ & 0.952$_{\pm0.003}$ & 0.973$_{\pm0.0}$ & 0.963$_{\pm0.002}$ \\ \bottomrule
\end{tabular}
\label{tb:KoW-results}
\end{table}
Table~\ref{tb:NGY-results} shows the results of the interaction detection at the left--turn intersection. (1) Both the sliding window and padding methods yield reasonable results for interaction detection using the combined information. However, the predictions of the sliding window method are more accurate than those of the padding method. (2) Compared to the baseline models, the CVAE models using the combined information achieve better performance, especially with the sliding window method (e.\,g.,~about a 0.05 increment in F1-score). (3) Compared to the ablative models, the improvement from the combined information can be found in both the sliding window and padding methods. (4) The best performance, especially measured by recall (0.916) and F1-score (0.892) on the NGY dataset, is achieved by the proposed CVAE model using the sliding window method with the self-attention mechanism.
\begin{table}[hbpt!]
\caption[Detection results of a left--turn intersection]{Detection results of the left--turn intersection on the NGY dataset. Best values are highlighted in boldface.}
\centering
\setlength{\tabcolsep}{1.4pt}
\begin{tabular}{llllll}
\toprule
Model & Shape & Accuracy & Precision & Recall & F1-score \\ \hhline{======}
\textit{[S+ob+op+att]} & sli. & 0.849 & 0.878 & 0.811 & 0.843 \\
\textit{[C+op+att]} & sli. & 0.878$_{\pm0.004}$ & 0.854$_{\pm0.004}$ & 0.912$_{\pm0.006}$ & 0.882$_{\pm0.004}$ \\
\textit{[C+ob+att]} & sli. & 0.734$_{\pm0.008}$ & 0.698$_{\pm0.007}$ & 0.824$_{\pm0.009}$ & 0.756$_{\pm0.007}$ \\
\textit{[C+ob+op]} & sli. & 0.882$_{\pm0.006}$ & \textbf{0.915$_{\pm0.004}$} & 0.842$_{\pm0.009}$ & 0.887$_{\pm0.006}$ \\
\textit{[C+ob+op+att]} & sli. & \textbf{0.889$_{\pm0.005}$} & 0.869$_{\pm0.005}$ & \textbf{0.916$_{\pm0.007}$} & \textbf{0.892$_{\pm0.004}$} \\ \midrule
\textit{[S+ob+op+att]} & pad. & 0.721 & 0.712 & 0.743 & 0.727 \\
\textit{[C+op+att]} & pad. & 0.764$_{\pm0.010}$ & 0.756$_{\pm0.011}$ & 0.808$_{\pm0.013}$ & 0.781$_{\pm0.009}$ \\
\textit{[C+ob+att]} & pad. & 0.683$_{\pm0.012}$ & 0.661$_{\pm0.011}$ & 0.751$_{\pm0.019}$ & 0.703$_{\pm0.013}$ \\
\textit{[C+ob+op]} & pad. & \textbf{0.782$_{\pm0.007}$} & \textbf{0.763$_{\pm0.010}$} & \textbf{0.819$_{\pm0.014}$} & \textbf{0.790$_{\pm0.009}$} \\
\textit{[C+ob+op+att]} & pad. & 0.742$_{\pm0.007}$ & 0.750$_{\pm0.009}$ & 0.728$_{\pm0.010}$ & 0.739$_{\pm0.007}$ \\ \bottomrule
\end{tabular}
\label{tb:NGY-results}
\end{table}
The Kernel Density Estimation (KDE) function (see Sec.~\ref{subsec:interactionuncertainty}) is used to measure the uncertainty levels of the CVAE--based models with different input structures. The uncertainties of the CVAE--based models are plotted in Fig.~\ref{fig:uncertainty-cvae} and compared by the Mann-Whitney U-test. Fig.~\ref{subfig:uncertainty-Kow-sliding} and \ref{subfig:uncertainty-Kow-pad} demonstrate that the models \textit{[C+op+att]} using only the optical--flow information generate significantly more uncertain predictions than the other models tested on the KoW dataset.
This pattern is consistent with the prediction performance: they also yield less accurate predictions.
A similar pattern can be observed for the models \textit{[C+ob+att]} using only the object information (Fig.~\ref{subfig:uncertainty-NGY-sliding} and \ref{subfig:uncertainty-NGY-pad}) tested on the NGY dataset. When the uncertainty level in the predictions is high, the accuracy level also drops.
\begin{figure}[t!]
\centering
\subfloat[KoW sliding window]{
\label{subfig:uncertainty-Kow-sliding}
\hspace{-0.2cm}\begin{minipage}{0.25\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/Kow_sliding_uncertainty_font.pdf}
\end{minipage}%
}%
\subfloat[KoW padding]{
\label{subfig:uncertainty-Kow-pad}
\hspace{-0.2cm}\begin{minipage}{0.25\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/Kow_padding_uncertainty_font.pdf}
\end{minipage}%
}%
\\
\subfloat[NGY sliding window]{
\label{subfig:uncertainty-NGY-sliding}
\hspace{-0.2cm}\begin{minipage}{0.25\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/NGY_sliding_uncertainty_font.pdf}
\end{minipage}
}%
\subfloat[NGY padding]{
\label{subfig:uncertainty-NGY-pad}
\hspace{-0.2cm}\begin{minipage}{0.25\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/NGY_padding_uncertainty_font.pdf}
\end{minipage}%
}%
\caption[Uncertainty measurement of the CVAE--based models]{Uncertainty measure of the CVAE--based models tested on the KoW and NGY datasets. The mean value is denoted by the yellow square in each box-plot. The uncertainty levels across the models are compared using the Mann-Whitney U-test. p-values are annotated using * or ns (not significant), where ns: \small{$0.05 < p \leq 1.00$, *: $10^{-2} < p \leq 0.05$, **: $10^{-3} < p \leq 10^{-2}$, ***: $10^{-4} < p \leq 10^{-3}$, and ****: $p \leq 10^{-4}$}.}
\label{fig:uncertainty-cvae}
\end{figure}
The confusion matrices for the proposed CVAE model using both the object and the optical--flow information are presented in Fig.~\ref{fig:cm-cvae}. It can be seen that the model using either the sliding window (true negative rate 0.970 and true positive rate 0.953) or the padding method (true negative rate 0.951 and true positive rate 0.973) achieves high performance for interaction detection tested on the KoW dataset. The proposed model achieves good performance using the sliding window method (true negative rate 0.861 and true positive rate 0.916) on the NGY dataset and maintains a relatively low false negative rate (0.084). Nevertheless, the performance of the proposed CVAE model tested on the NGY dataset is inferior to the one on the KoW dataset.
In contrast, the padding method only achieves mediocre performance (true negative rate 0.757 and true positive rate 0.728) tested on the NGY dataset.
\begin{figure}[t!]
\centering
\subfloat[KoW sliding window]{
\label{subfig:cm-Kow-sliding}
\hspace{-0.2cm}\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/KoW_cvae_sliding_cm_its.pdf}
\end{minipage}
}
\subfloat[KoW padding]{
\label{subfig:cm-Kow-pad}
\hspace{-0.2cm}\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/KoW_cvae_pad_cm_its.pdf}
\end{minipage}
}
\\
\subfloat[NGY sliding window]{
\label{subfig:cm-NGY-sliding}
\hspace{-0.2cm}\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/NGY_cvae_sliding_cm_its.pdf}
\end{minipage}
}
\subfloat[NGY padding]{
\label{subfig:cm-NGY-pad}
\hspace{-0.2cm}\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/NGY_cvae_pad_cm_its.pdf}
\end{minipage}
}
\caption[Confusion matrices of CVAE model]{Confusion matrices for the proposed CVAE model tested on the KoW/NGY dataset using the sliding window (a)/(c) and padding (b)/(d) methods. The confusion matrices are normalized so that they can be compared across sliding window and padding methods, as well as across the datasets.}
\label{fig:cm-cvae}
\end{figure}
\subsection{Qualitative results}
\label{sec:InteDetcResults-qualitative}
The qualitative results intuitively showcase the process of interaction detection of the models. The fine--grained probability of the predicted interaction at each frame provides a clue of how the interaction intensity evolves over time.
Fig.~\ref{fig:results-KoW} demonstrates a non-interaction scenario at the KoW intersection between the right--turning target vehicle (in the blue bounding box) and the standstill pedestrian (in the red bounding box). There was no explicit interaction between them, as the continuity of their behavior was not affected when the gap between them closed; hence, the sequence was annotated as non-interaction.
The sequence-level prediction is the average vote of all the frame-level predictions.
At the sequence level, all the models correctly predict this scenario as non-interaction using both the sliding window (Fig.~\ref{subfig:result-KoW-sliding}) and padding (Fig.~\ref{subfig:result-KoW-pad}) methods, except the ablative model \textit{[C+op+att]} (in cyan) that only uses the optical--flow information. However, all the models predict a high probability of interaction when the vehicle approached the pedestrian.
Also, the variance of the CVAE--based models in Fig.~\ref{subfig:result-KoW-pad} increases when the probability of interaction rises from below 0.5 to a higher value.
The baseline model \textit{[S+ob+op+att]} (in black) generates a similar pattern in the frame--wise predictions, but it is deterministic at each frame and has no mechanism to represent the uncertainty of the predictions.
\begin{figure}[t!]
\centering
\subfloat[Sliding window method.]{
\label{subfig:result-KoW-sliding}
\begin{minipage}{0.235\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/True_stream_2019_11_09_10_23_30_0_sliding.pdf}
\end{minipage}%
}%
\subfloat[Padding Method. ]{
\label{subfig:result-KoW-pad}
\begin{minipage}{0.235\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/True_stream_2019_11_09_10_23_30_0_pad.pdf}
\end{minipage}%
}%
\captionsetup[subfigure]{labelformat=empty}
\subfloat[$\text{Time step}=0$]{
\label{subfig:result-KoW-sliding-00}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_00.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=8$]{
\label{subfig:result-KoW-pad-08}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_08.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=16$]{
\label{subfig:result-KoW-sliding-16}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_016.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=24$]{
\label{subfig:result-KoW-pad-24}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_024.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=32$]{
\label{subfig:result-KoW-sliding-32}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_032.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=40$]{
\label{subfig:result-KoW-pad-40}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_040.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=48$]{
\label{subfig:result-KoW-sliding-48}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_048.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=56$]{
\label{subfig:result-KoW-pad-56}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_10_23_30_056.png}
\end{minipage}%
}%
\caption[Examples of interaction probability for a right--hand intersection]{Examples of interaction probability at the frame level using the sliding window (a) and padding (b) methods, tested on the KoW dataset. The variance of the probabilities is visualized by the marginal shadow for the CVAE--based models. The corresponding video screenshots are aligned from upper left to the lower right at the bottom with a time interval of eight frames. The target vehicle is highlighted by the blue bounding box and the standstill pedestrian involved in the turning sequence is highlighted by the red bounding box.}
\label{fig:results-KoW}
\end{figure}
Fig.~\ref{fig:results-NGY} demonstrates an interaction scenario at the NGY intersection between the left--turning target vehicle (in the blue bounding box) and the crossing cyclist (in the red bounding box). Interaction was required between them as the vehicle had to decelerate or even briefly stop, yielding the way to the cyclist. All the models correctly predict this sequence as interaction using both the sliding window (Fig.~\ref{subfig:result-NGY-sliding}) and padding (Fig.~\ref{subfig:result-NGY-pad}) methods. Similar to the scenario above, the variance of the probabilities for the CVAE--based models using the sliding window method changes with the distance between the target vehicle and the cyclist: as the distance decreases, the probability of interaction increases and its variance shrinks, and vice versa.
The ablative model based only on the object information using the padding method shows higher uncertainty levels in the frame--wise predictions than the other models.
\begin{figure}[t!]
\centering
\subfloat[Sliding window method.]{
\label{subfig:result-NGY-sliding}
\begin{minipage}{0.235\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/True_190423170541_Camera1-082_3_sliding.pdf}
\end{minipage}%
}%
\subfloat[Padding Method.]{
\label{subfig:result-NGY-pad}
\begin{minipage}{0.235\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/True_190423170541_Camera1-082_3_pad.pdf}
\end{minipage}%
}%
\captionsetup[subfigure]{labelformat=empty}
\subfloat[$\text{Time step}=0$]{
\label{subfig:result-NGY-sliding-300}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_30.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=8$]{
\label{subfig:result-NGY-pad-308}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_38.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=16$]{
\label{subfig:result-NGY-sliding-316}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_316.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=24$]{
\label{subfig:result-NGY-pad-324}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_324.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=32$]{
\label{subfig:result-NGY-sliding-332}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_332.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=40$]{
\label{subfig:result-NGY-pad-340}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_340.png}
\end{minipage}%
}%
\subfloat[$\text{Time step}=48$]{
\label{subfig:result-NGY-sliding-348}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-082_348.png}
\end{minipage}
}%
\caption[Examples of interaction probability for a left--hand intersection]{Examples of interaction probability at the frame level using the sliding window (a) and padding (b) methods, tested on the NGY dataset. The variance of the probabilities is visualized by the marginal shadow for the CVAE--based models. The corresponding video screenshots are aligned from upper left to the lower middle at the bottom with a time interval of eight frames. The target vehicle is highlighted by the blue bounding box and the passing cyclist involved in the turning sequence is highlighted by the red bounding box.}
\label{fig:results-NGY}
\end{figure}
\subsection{Analysis of the results}
\label{subsec:interactionresultsanalysis}
The results shown above are analyzed with respect to: (\RNum{1}) the pros and cons between the sliding window and padding methods; (\RNum{2}) the performance between the proposed CVAE model and the baseline model; (\RNum{3}) the contribution of the object information and the optical--flow information via the ablative models; (\RNum{4}) the impact of the self-attention mechanism.
The performances of the sliding window and padding methods are influenced not only by the size of the training data, but also by the zero--padded values. The sliding window method does not depend on the sequence length, which makes it more flexible in dealing with sequences of various lengths. Hence, the number of training samples was not compromised in the experiments. The padding method, on the other hand, requires a pre-defined fixed sequence length and is unable to deal with longer sequences. Hence, the number of training samples was reduced by excluding longer sequences. The impact of the training data size is visible in the performance difference across the KoW and NGY datasets. The numbers of training samples of KoW for the sliding window and padding methods are similar (see Table~\ref{tb:interactiondatapartition}), and their performances for interaction detection are comparable (see Table~\ref{tb:KoW-results}). On the contrary, the number of training samples of NGY for the sliding window method is much larger than for the padding method (see Table~\ref{tb:interactiondatapartition}), and the prediction by the sliding window method is accordingly more accurate (see Table~\ref{tb:NGY-results}). In addition, the shorter sequences were padded with zeros. This is problematic for the information extracted by optical flow, because zero values in the optical--flow feature vector also represent the background of the intersection or static road users. Even though a padding mask is incorporated into the sequence to indicate the actual sequence length, the negative impact cannot be fully remedied due to the complex learning process in training. The negative impact of padded zeros is reflected in the impaired performance of the ablative model \textit{[CVAE+op+att]} with the padding method compared to the sliding window method.
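For concreteness, the following minimal sketch (not the implementation used in this work) contrasts the two sequence-preparation schemes; the window length, feature dimension and stride are illustrative assumptions.
\begin{verbatim}
import numpy as np

def sliding_windows(seq, window=16, stride=1):
    # cut a (T, F) feature sequence into fixed-size overlapping windows,
    # independent of the total sequence length T (assumes T >= window)
    return np.stack([seq[t:t + window]
                     for t in range(0, len(seq) - window + 1, stride)])

def pad_sequence(seq, max_len=64):
    # zero-pad a (T, F) sequence to max_len; longer sequences must be
    # excluded, and the zeros are indistinguishable from background or
    # static optical flow, hence the accompanying mask
    T, F = seq.shape
    assert T <= max_len, "longer sequences are excluded"
    padded = np.zeros((max_len, F), dtype=seq.dtype)
    padded[:T] = seq
    mask = np.zeros(max_len, dtype=bool)
    mask[:T] = True                # True marks real frames
    return padded, mask
\end{verbatim}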
In general, the proposed CVAE model \textit{[CVAE+ob+op+att]} outperforms the baseline model \textit{[S2S+ob+op+att]} quantitatively (see Tables~\ref{tb:KoW-results} and \ref{tb:NGY-results}) and qualitatively (see Figs.~\ref{fig:results-KoW} and \ref{fig:results-NGY}). In the CVAE model, the latent variables $\mathbf{z}$ are trained to capture the stochastic attributes of road users' behavior in various traffic situations, optimized by a Kullback--Leibler divergence loss against a Gaussian prior. In addition, a reconstruction loss minimizes the cross-entropy between ground truth and prediction.
Optimizing these two losses together enables the CVAE model to generate diverse predictions. With the multi--sampling of the latent variables, the predicted probabilities of interaction at each frame vary, especially when the probabilities change over time (see Figs.~\ref{fig:results-KoW} and \ref{fig:results-NGY}); the variance of the probabilities indicates the uncertainty of the predictions. In contrast, the baseline model is trained by optimizing the reconstruction loss alone; it tends to learn the ``average'' behavior of road users, its predictions are rather deterministic, and there is no mechanism to interpret their uncertainty.
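The two objectives can be summarized by the following hedged NumPy sketch; the shapes, binary label encoding and equal loss weighting are our assumptions rather than the exact training code.
\begin{verbatim}
import numpy as np

def cvae_loss(y_true, y_prob, mu, log_var, kl_weight=1.0):
    eps = 1e-7
    # reconstruction: cross-entropy between ground truth and prediction
    recon = -np.mean(y_true * np.log(y_prob + eps)
                     + (1.0 - y_true) * np.log(1.0 - y_prob + eps))
    # closed-form KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch
    kl = -0.5 * np.mean(np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var),
                               axis=-1))
    return recon + kl_weight * kl
\end{verbatim}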
The combined information of object detection and optical flow shows a stable performance for the interaction detection task. The performance of interaction detection highly depends on the quality of the input information extracted from videos, which is often impaired for many reasons. A single type of information may not be sufficient for this task.
As indicated by the limited performance of the ablative models that use only optical--flow information on the KoW dataset, without the object information the noisy optical--flow information from the through lane or the padded zeros may impair the detection performance. Similarly, the ablative models that use only object information achieved limited performance on the NGY dataset. The distorted object information, especially for road users close to the camera at the NGY intersection, can lead to wrong interaction detection.
The combination of the extraction techniques increases the possibility to maintain a good quality of the input information, so as to achieve a stable performance of interaction detection.
The self-attention mechanism does not show a consistent benefit across the datasets. The CVAE models with and without the self-attention mechanism yield very similar results for interaction detection using both the sliding window and padding methods on the KoW dataset, and using the padding method on the NGY dataset. An improvement with the self-attention mechanism is only found for the sliding window method on the NGY dataset. A possible explanation is that the self-attention layer is followed by an LSTM (Fig.~\ref{fig:pipelineInteractionDetection}), which may be redundant for learning the interconnections along the time axis. The self-attention layer is likely under-trained due to the small dataset size or the redundant layers, whereas the LSTM is already sufficient for learning the temporal patterns of the sequence data from the KoW intersection. On the other hand, the sequence data from the NGY intersection is more complex, e.\,g.,~longer and more varied sequence lengths (see Fig.~\ref{fig:seqdistrubitions}) and higher traffic density. On top of the LSTM, the self-attention mechanism turns out to be beneficial for further learning the temporal patterns.
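The stack in question can be sketched in Keras as follows; the layer sizes and the single attention head are illustrative assumptions, not the exact architecture.
\begin{verbatim}
import tensorflow as tf

def attention_lstm_encoder(seq_len, feat_dim, units=64):
    inp = tf.keras.Input(shape=(seq_len, feat_dim))
    # self-attention over the time axis, followed by an LSTM
    att = tf.keras.layers.MultiHeadAttention(
        num_heads=1, key_dim=units)(inp, inp)
    out = tf.keras.layers.LSTM(units, return_sequences=True)(att)
    return tf.keras.Model(inp, out)
\end{verbatim}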
In summary, the sliding window method is more flexible than the padding method in dealing with various sequence lengths. The CVAE models using the combined information of object detection and optical flow achieve a more stable performance than those using a single type of information. The multi--sampling process enables the CVAE--based models to mimic the uncertainty of road users' behavior, and the self--attention mechanism is only beneficial for learning temporal patterns from complex data. Overall, the proposed model \textit{[CVAE+ob+op+att]} using the sliding window method achieves the most desirable performance across the datasets.
\section{Discussion}
\label{sec:InteDetcDiscussion}
Here we discuss the failed detections by the proposed CVAE model using the sliding window method, and the challenges of transferring the model from one intersection to another.
\subsection{Failed detection}
\label{subsec:faileddetection}
Various reasons can lead to a wrong interaction classification. Table~\ref{tb:intersection-wrongdetection} categorizes the wrongly detected scenarios, i.\,e.,~false negatives (FN) and false positives (FP), tested on both the KoW and NGY datasets.
The false negative examples are visualized in Fig.~\ref{fig:KoW-falsenegatives} for KoW and in Fig.~\ref{fig:NGY-falsenegatives} for NGY, and the false positive examples are visualized in Fig.~\ref{fig:KoW-falsepositives} for KoW and in Fig.~\ref{fig:NGY-falsepositives} for NGY.
\begin{table}[hbpt!]
\caption[Categories of wrongly detected scenarios]{Categories of the wrongly detected scenarios by \textit{[CVAE+ob+op+att]} using the sliding window method.}
\setlength{\tabcolsep}{2.5pt}
\centering
\begin{tabular}{lllll}
\\ \toprule
Errors & Scenario description & Category & KoW & NGY \\ \hhline{=====}
& pedestrian entering the intersection & (FN-\RNum{1}) & 8 & 7 \\
FN & pedestrian leaving the intersection & (FN-\RNum{2}) & 1 & 4 \\
& cyclist entering the intersection & (FN-\RNum{3}) & - & 2 \\ \midrule
& car following & (FP-\RNum{1}) & 4 & 17 \\
FP & pedestrian standing near the intersection & (FP-\RNum{2}) & - & 3 \\
& pedestrian approaching from the sidewalk & (FP-\RNum{3}) & - & 1 \\
& pedestrian finishing crossing & (FP-\RNum{4}) & 1 & 1 \\ \midrule
Total$^*$ & & & 14 & 35 \\ \bottomrule
\end{tabular}
\label{tb:intersection-wrongdetection}
\begin{tabular}{@{}c@{}}
\multicolumn{1}{p{3.3in}}{$^{*}$The total number of wrongly detected scenarios listed here differs slightly from the above confusion matrices due to the multi--sampling of the CVAE model.}
\end{tabular}
\end{table}
The FN scenarios are associated with VRUs entering (FN-\RNum{1} and FN-\RNum{3}) or leaving (FN-\RNum{2}) the intersection space. Due to their relatively long distance to the target vehicle, despite their fast travel speed, they are erroneously classified as non-interaction.
\begin{figure}[t!]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[FN-\RNum{1}]{
\label{subfig:KoW-FN-a}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_08_08_22_54_248.png}
\end{minipage}
}
\subfloat[FN-\RNum{2}]{
\label{subfig:KoW-FN-b}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_08_11_32_58_140.png}
\end{minipage}
}
\caption[Examples of false negative detection on the KoW dataset]{Examples of the false negative detection on the KoW dataset. The right--turning target vehicles are denoted by the blue bounding boxes and the involved VRUs are denoted by the red bounding boxes.}
\label{fig:KoW-falsenegatives}
\end{figure}
\begin{figure}[t!]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[FN-\RNum{1}]{
\label{subfig:NGY-FN-a}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-004_224.png}
\end{minipage}%
}%
\subfloat[FN-\RNum{2}]{
\label{subfig:NGY-FN-b}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-102_456.png}
\end{minipage}%
}%
\subfloat[FN-\RNum{3}]{
\label{subfig:NGY-FN-c}
\begin{minipage}{0.23\textwidth}
\includegraphics[trim=0in 1.3in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-084_456.png}
\end{minipage}%
}%
\caption[Examples of false negative detection on the NGY dataset]{Examples of the false negative detection on the NGY dataset. The left--turning target vehicles are denoted by the blue bounding boxes and the involved VRUs are denoted by the red bounding boxes.}
\label{fig:NGY-falsenegatives}
\end{figure}
Most of the FP scenarios are associated with the target vehicle following a leading vehicle. As exemplified by FP-\RNum{1} in Figs.~\ref{fig:KoW-falsepositives}~and~\ref{fig:NGY-falsepositives}, only the leading vehicle (in the yellow bounding box) required direct interactions with the involved VRUs. After the leading vehicle finished turning, the pedestrian (in the red bounding box) also completed crossing; afterwards, no interaction was required from the target vehicle (in the blue bounding box) with the VRUs. However, the CVAE model has limited performance in handling this type of situation, because the current model has no explicit information to differentiate the leading and target vehicles and is not specifically trained for car--following situations.
In addition, a short distance from the VRUs to the intersection, e.\,g.,~standing on the sidewalk close to the intersection (FP-\RNum{2}, Fig.~\ref{fig:NGY-falsepositives}) or just finishing crossing (FP-\RNum{4}, Fig.~\ref{fig:KoW-falsepositives}~and~\ref{fig:NGY-falsepositives}), can also lead to an FP case.
Distance distortion may lead to an FP case as well. For example, in FP-\RNum{3} in Fig.~\ref{fig:NGY-falsepositives}, even though the pedestrian on the sidewalk was relatively far from the turning vehicle, the scenario was still classified as an interaction by the model due to the distorted distance at the NGY intersection. In contrast, the camera at the KoW intersection was installed at a higher elevation than the camera at the NGY intersection, so the distortion is less harmful for the horizontal distance. Among other reasons, this might have contributed to the better performance of the model on the KoW dataset than on the NGY dataset.
\begin{figure}[t!]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[FP-\RNum{1}]{
\label{subfig:KoW-FP-a}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_11_23_31_632.png}
\end{minipage}
}
\subfloat[FP-\RNum{4}]{
\label{subfig:KoW-FP-d}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 0in 0in 0in, width=\textwidth]{figs/stream_2019_11_09_12_33_32_048.png}
\end{minipage}
}
\caption[Examples of the false positive detection on the KoW dataset]{Examples of the false positive detection on the KoW dataset. The right--turning target vehicles are denoted by the blue bounding boxes, the leading (but not target) vehicle is denoted by the yellow bounding box, and the involved VRUs are denoted by the red bounding boxes.}
\label{fig:KoW-falsepositives}
\end{figure}
\begin{figure}[t!]
\captionsetup[subfigure]{labelformat=empty}
\centering
\subfloat[FP-\RNum{1}]{
\label{subfig:NGY-FP-a}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.1in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-091_9192.png}
\end{minipage}%
}%
\subfloat[FP-\RNum{2}]{
\label{subfig:NGY-FP-b}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.1in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-094_464.png}
\end{minipage}%
}%
\subfloat[FP-\RNum{3}]{
\label{subfig:NGY-FP-c}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.1in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-102_856.png}
\end{minipage}%
}%
\subfloat[FP-\RNum{4}]{
\label{subfig:NGY-FP-d}
\begin{minipage}{0.23\textwidth}
\centering
\includegraphics[trim=0in 1.1in 0in 0in, clip=true, width=\textwidth]{figs/190423170541_Camera1-101_040.png}
\end{minipage}%
}%
\caption[Examples of false positive detection on the NGY dataset]{Examples of false positive detection on the NGY dataset. The left--turning target vehicles are denoted by the blue bounding boxes, the leading (but not target) vehicle is denoted by the yellow bounding box, and the involved VRUs are denoted by the red bounding boxes.}
\label{fig:NGY-falsepositives}
\end{figure}
Based on the discussion of the failed detection scenarios, the limitations of the proposed model are summarized as follows:
1) The definition of interaction only considers the relationship between the target turning vehicle and the involved VRUs; the car--following relationship is not included. Omitting this relationship often leads to false positive detections between the following car and the VRUs.
2) The crossing directions of VRUs are not used as a factor to differentiate interaction types. For example, interactions between a turning vehicle and pedestrians or cyclists approaching the crossing area from the near side and from the far side are labeled as the same interaction type. However, the discussion above indicates that the moving directions of VRUs are important for estimating the interactions between the turning vehicle and VRUs, especially when the VRUs are leaving the intersection.
3) The exact distances between a turning vehicle and the involved VRUs are not measured, thus the change of the distances between them cannot be correctly quantified. Without the measurement of distance, it is difficult for a model to distinguish the subtle difference between interaction and non-interaction. Especially, when the image distance is distorted by the camera's perspective or when an occlusion happens, the model's performance will be impaired.
\subsection{Challenges of cross dataset generalization}
\label{subsec:crossdatavalidation}
To analyze the generalizability of the models proposed in this chapter, they are adapted for interaction detection at different intersections.
In the previous experiment setting, all the models were trained and tested using data from the same intersection. In this section, the models trained on the KoW dataset are tested on the NGY dataset, and vice versa. Frames from the test set were resized to the same size and mirrored into the same direction as the training set, so that the trained models could be tested on both datasets without changing the input size setting.
Table~\ref{tb:crossvalidation-results} lists the results of the cross dataset validation. It can be seen that neither the CVAE--based nor the sequence-to-sequence encoder--decoder models achieve good performance with either the sliding window or the padding method. This could be because the two datasets (Sec.~\ref{sec:InteDetcData}) are very different in terms of, e.\,g.,~the vehicle's travel direction, the camera's perspective, frame size and rate, sequence length, intersection layout, traffic density, and cultural factors (Germany vs. Japan). Note that, because the camera parameters and reference coordinates were not available for the datasets,
no projection is applied in this paper to transform the perspective to a bird's-eye view. Under the cross dataset validation setting, the resized frames distort the motion and position of the dynamic objects and confuse the models when predicting the interactions between vehicles and VRUs. This raises a future research question: how to generalize the models to different intersections, traffic conditions, and even different cultures?
\begin{table}[hbpt!]
\caption[Performance of cross dataset validation]{Performance of cross dataset validation for interaction detection on the KoW and NGY datasets.}
\centering
\setlength{\tabcolsep}{1.4pt}
\begin{tabular}{llllll}
\toprule
Model & Shape & Accuracy & Precision & Recall & F1-score \\ \hhline{======}
\multicolumn{6}{c}{{Trained on the NGY dataset and tested on the KoW dataset}} \\ \midrule
\textit{[S+ob+op+att]} & sli. & 0.490 & 0.430 & 0.490 & 0.350 \\
\textit{[C+ob+op+att]} & sli. & 0.490{$_{\pm0.003}$} & 0.495{$_{\pm0.001}$} & 0.934{$_{\pm0.005}$} & 0.647{$_{\pm0.002}$} \\
\textit{[S+ob+op+att]} & pad. & 0.473 & 0.450 & 0.470 & 0.400 \\
\textit{[C+ob+op+att]} & pad. & 0.485{$_{\pm0.002}$} & 0.491{$_{\pm0.001}$} & 0.804{$_{\pm0.004}$} & 0.609{$_{\pm0.001}$} \\ \hhline{======}
\multicolumn{6}{c}{{Trained on the KoW dataset and tested on the NGY dataset}} \\ \midrule
\textit{[S+ob+op+att]} & sli. & 0.535 & 0.526 & 0.704 & 0.602 \\
\textit{[C+ob+op+att]} & sli. & 0.541{$_{\pm0.005}$} & 0.540{$_{\pm0.005}$} & 0.557{$_{\pm0.007}$} & 0.548{$_{\pm0.006}$} \\
\textit{[S+ob+op+att]} & pad. & 0.464 & 0.420 & 0.460 & 0.380 \\
\textit{[C+ob+op+att]} & pad. & 0.490{$_{\pm0.002}$} & 0.475$_{\pm0.052}$ & 0.174{$_{\pm0.010}$} & 0.255{$_{\pm0.003}$} \\ \bottomrule
\end{tabular}
\label{tb:crossvalidation-results}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
In this paper, an end-to-end sequence-to-sequence generative model based on CVAE has been proposed to automatically detect interactions between vehicles and VRUs at intersections using video data.
All the road users that appear during a vehicle's turning time are detected by a deep learning object detector, and their motion information is simultaneously captured by optical flow. The sequences of object detection and optical--flow information together provide rich information for interaction detection.
Both sliding window and padding methods are explored to learn dynamic patterns from turning sequences of varying lengths.
The proposed model predicts a fine--grained interaction class label at each frame of less than \SI{0.1}{s}, which indicates how the intensity of an interaction between a turning vehicle and VRUs evolves as time unfolds.
The average voting scheme summarizes the frame--wise predictions to accurately obtain a class label for the overall sequence.
Besides, the multi--sampling process generates diverse predictions, and the kernel density estimation function is used to measure the uncertainty level.
The efficacy of the model was validated at a right--turn intersection in Germany and a left--turn intersection in Japan. It achieved an F1-score above 0.96 at the right--turn intersection and 0.89 at the left--turn intersection, and outperformed a sequence-to-sequence encoder--decoder model quantitatively and qualitatively.
Furthermore, a series of ablation studies investigated the effectiveness of the combined information from object detection and optical flow, and the self-attention mechanism for learning temporal patterns from complex sequences.
The comparison between the sliding window and padding methods showed that the former method is more flexible in coping with sequences of varying sequence length---the number of samples is not restricted to the maximum sequence length that a model can handle, which stands in contrast to the padding method. The self-attention mechanism has only shown a clear positive effect for interaction detection on the complex NGY dataset.
In future work, several improvements can be made to reduce the limitations of the detection model. First, the dichotomous classification of interaction should be extended to multi--class classification, e.\,g.,~taking the confrontation direction and car--following relationship into consideration. Second, the accuracy of feature extraction can be enhanced by using multiple cameras or even tracking.
Third, projective transformation techniques or data recorded by drones with a bird's-eye view can be explored to reduce the distortion caused by the camera's perspective and filter the noisy optical--flow information captured from the through lane next to the turning lane. Last but not least, the generalizability of the model for interaction detection at different intersections needs to be further studied.
\section*{ACKNOWLEDGMENTS}
The project is funded by the German Research Foundation (DFG) through the Research Training Group SocialCars (227198829/GRK1931). This work is a collaboration with Murase Lab at Nagoya University and supported by Nagoya Toyopet Corporation with the Nagoya intersection dataset.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
Various types of indexes play an indispensable role in modern computational systems to enable fast information retrieval. As a traditional one,
inverted index~\cite{dean2009challenges} has been widely used in web search, e-commerce search, recommendation and advertising in the past few decades. Recently, with the advent of the deep learning era, embedding indexes~\cite{huang2013learning,youtube2016}, which embed user/query and item in a latent vector space, show excellent performance in many industrial retrieval systems~\cite{huang2020embedding,zhang2020towards,mind2019}.
Embedding index enjoys several appealing advantages: a) the embeddings can be learned to optimize downstream retrieval task of interests, and b) efficient algorithms for maximum inner product
search (MIPS) or approximate nearest neighbors (ANN), such as LSH~\cite{datar2004locality}, Annoy~\cite{Github:annoy} and state-of-the-art product quantization (PQ) based approaches~\cite{jegou2010product,johnson2019billion,guo2020accelerating}, can be leveraged to retrieve items in a few milliseconds.
Embedding indexes, however, also suffer from a few drawbacks. The major one is the separation between model training and index building, which results in extra index building time and decayed retrieval accuracy.
Thus, there is a recent trend of abandoning separately built embedding indexes in favor of jointly learned structural indexes, which have shown improved performance over the former. In general, the approaches with a jointly learned structure can be summarized into two types: tree based ones~\cite{zhu2018learning,tdm2} and PQ based ones~\cite{yu2018product,cao2016deep,klein2017defense}. The tree based approaches normally require special approximate training techniques, whose complexity slows down their wide adoption.
The existing PQ based approaches are designed only for small computer vision tasks, such as retrieval from tens of thousands of images, and are thus inapplicable to large scale information retrieval tasks with at least millions of items, such as what we have in a real-world industrial retrieval system.
In this paper, we advance the approach of product quantization based embedding index jointly trained with deep retrieval model.
This is not trivial, and we have to overcome a few hurdles with appropriate techniques:
1) The quantization steps, as the core of PQ based embedding indexes, contain non-differentiable operations, such as $\arg\min$, which disable standard back propagation training. Thus, we leverage the gradient straight-through estimator~\cite{bengio2013estimating} to bypass the non-differentiability and achieve end-to-end training.
2) Randomly initialized quantization centroids lead to very sparse centroid assignments, low parameter utilization and, consequently, higher quantization distortion. Thus, we introduce a warm start strategy to achieve more uniformly distributed centroid assignments.
3) The standard optimized product quantization (OPQ)~\cite{ge2013optimized} algorithm, which rotates the space by an orthonormal matrix to further reduce PQ distortion, cannot run iteratively together with the joint model training. Thus, we develop a steepest block coordinate descent algorithm with Givens rotations~\cite{matrix_computations} to learn the orthonormal matrix in end-to-end training.
As a result, our proposed method \emph{{Poeem}}, which stands for \textbf{p}roduct quantizati\textbf{o}n based \textbf{e}mbedding index jointly trained with deep r\textbf{e}trieval \textbf{m}odel,
enjoys advantages of almost no index building time and no decayed retrieval accuracy. Aiming at a more practical approach ready for wide adoptions, our method is capsulated in a standalone indexing layer, which can be easily plugged into any embedding retrieval models.
\section{Method}
\label{sec:method}
\subsection{Revisiting Retrieval Model}
\label{sec:retrieval_model}
A standard embedding retrieval model, as shown at the left side of Figure~\ref{fig:overview}, is composed of a query tower $Q$ and an item tower $S$. Thus, for a given query $q$ and an item $s$, the scoring output of the model is
\begin{equation}
f(q, s) = F(Q(q), S(s)) \label{eq:scoring}
\end{equation}
where $Q(q) \in \mathbb{R}^{d}$ denotes query tower output embeddings in $d$-dimensional space. Similarly, $S(s) \in \mathbb{R}^{d}$ denotes an item tower output in the same dimensional space. The scoring function $F(.,.)$, usually inner product or cosine value, computes the final score between the query and item, and has been proven to be successful in many applications~\cite{youtube2016,huang2020embedding,zhang2020towards}.
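As a toy illustration of Eq. (\ref{eq:scoring}) with $F$ being cosine similarity, consider the following sketch; the linear ``towers'' and all dimensions are our own placeholder assumptions, not the production model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 8                             # embedding dimension (illustrative)
Wq = rng.normal(size=(d, 16))     # placeholder query tower Q: a linear map
Ws = rng.normal(size=(d, 16))     # placeholder item tower S

def score(q_feat, s_feat):
    # f(q, s) = F(Q(q), S(s)) with F = cosine similarity
    q_emb, s_emb = Wq @ q_feat, Ws @ s_feat
    return float(q_emb @ s_emb /
                 (np.linalg.norm(q_emb) * np.linalg.norm(s_emb)))
\end{verbatim}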
After the model is trained, traditional approaches still require an additional step of computing item embeddings and building an embedding index for them before the model is ready for online serving.
However, this additional step not only costs extra index building time, but also degrades the recall rate~\cite{jegou2010product}.
In the following sections, we present our approach to remedying these two shortcomings.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{figures/pqknn.pdf}
\caption{Illustration of a retrieval model with an embedding indexing layer, which is composed of four steps: the coarse quantization function $\psi$ (yellow), the product quantization function $\phi$ (blue), the decoder function $\rho$ (green), and the Givens rotation (pink). }
\label{fig:overview}
\vspace{-0.1in}
\end{figure*}
\subsection{Embedding Indexing Layer}
\label{sec:indexing_layer}
Figure~\ref{fig:overview} illustrates an overview of the proposed embedding indexing layer inside a typical deep retrieval model. Formally, the indexing layer defines a \emph{full quantization function} $\mathcal{T}: \mathbb{R}^d \xrightarrow{}\mathbb{R}^d$ that maps an input embedding $\mathbf{x}$ to an output embedding $\mathcal{T}(\mathbf{x})$, and it can be decomposed into four functions: a \emph{coarse quantization} function $\psi$, a \emph{product quantization} function $\phi$, a \emph{decoder} function $\rho$, and a \emph{rotation} by an orthonormal matrix $R$.
Note that we use the terms orthonormal matrix and rotation matrix interchangeably. We now explain these operations in detail in the following sections.
The \textbf{coarse quantization} function $\psi: \mathbb{R}^d \xrightarrow{} \{1,\cdots,J\}$, which maps a continuous vector $\mathbf{x}$ into a $J$-way discrete \emph{coarse code} $r$ by a coarse centroid matrix $\v \in \mathbb{R}^{J \times d}$, can be defined as follows
\begin{equation}
\psi(\mathbf{x}) = r = \arg \min_k \textrm{dist}(\mathbf{x}, \v_k)
\label{eq:coarse_argmin}
\end{equation}
where $\textrm{dist}(.,.)$ stands for a distance measure, typically L2 distance, between two vectors, and the vector $\v_k \in \mathbb{R}^d$ stands for the $k$-th centroid, \textit{i.e.}, $k$-th row in $\v$. It is not hard to see that this function just finds the nearest centroid for a given input vector $\mathbf{x}$, and outputs the centroid index. Thus, a coarse quantization residual exists, formally as follows,
\[
\mathbf{y} = \mathbf{x} - \v_r.
\]
We will further deal with it by product quantization.
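A minimal NumPy sketch of $\psi$ in Eq. (\ref{eq:coarse_argmin}) and of the residual, assuming L2 distance; here \texttt{V} is the $(J, d)$ coarse centroid matrix.
\begin{verbatim}
import numpy as np

def coarse_quantize(x, V):
    # nearest coarse centroid under L2 distance
    r = int(np.argmin(np.sum((V - x) ** 2, axis=1)))
    residual = x - V[r]        # y = x - v_r, handled next by PQ
    return r, residual
\end{verbatim}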
The \textbf{product quantization (PQ)} function $\phi: \mathbb{R}^d \xrightarrow{} \{1,\cdots,K\}^D$, which maps the above residual vector $\mathbf{y}$ into a $K$-way $D$-dimensional \emph{PQ code} $\c$, can be defined as follows.
First, we denote the vector $\mathbf{y}$ as concatenation of $D$ subvectors:
\begin{equation}
\mathbf{y}=[\mathbf{y}^1, \ldots, \mathbf{y}^D]. \label{eq:decomp}
\end{equation}
For simplicity, it is a common practice to divide the original dimension $d$ evenly. In other words, each subvector is $\mathbf{y}^j \in \mathbb{R}^{d/D}$. The Cartesian product $\mathcal{C} = \mathcal{C}^1 \times \ldots \times \mathcal{C}^{D}$ is the set in which a \emph{PQ code} $\c\in \mathcal{C}$ is formed by concatenating the $D$ sub-codewords: $\c=[c^1, \ldots, c^D ]$, with each $c^j \in \mathcal{C}^j$. Note that the $D$ codes can be computed independently as follows
\begin{equation}
c^j = \arg \min_k \textrm{dist}(\mathbf{y}^j, \v_k^j)
\label{eq:argmin}
\end{equation}
where $\v_k^j$ stands for the $k$-th centroid for the $j$-th subvectors. Finally, we can define the function $\phi$ as follows
\begin{equation}
\phi(\mathbf{y}) = [c^1, \ldots, c^D].
\label{eq:pqcode}
\end{equation}
The \textbf{decoder} function
\[\rho: \left(\{1,\cdots,J\}, \{1,\cdots,K\}^D\right) \xrightarrow{} \mathbb{R}^d\]
that maps the discrete codes back to a continuous vector, can be formally defined as follows,
\begin{equation*}
\rho(r, [c^1, \ldots, c^D]) = \v_r + [\v^1_{c^{1}}, \ldots, \v^D_{c^{D}}]
\end{equation*}
which sums the coarse quantized vector $\v_r$ and the concatenation of the product quantized vectors $\v^j_{c^j}$.
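Continuing the sketch above, the PQ encoder $\phi$ of Eq. (\ref{eq:pqcode}) and the decoder $\rho$ can be illustrated as follows; \texttt{codebooks} is a list of $D$ arrays of shape $(K, d/D)$, and even divisibility of $d$ by $D$ is assumed.
\begin{verbatim}
import numpy as np

def pq_encode(y, codebooks):
    # split the residual into D subvectors; assign each to its
    # nearest sub-codebook centroid independently
    subs = np.split(y, len(codebooks))
    return [int(np.argmin(np.sum((cb - s) ** 2, axis=1)))
            for s, cb in zip(subs, codebooks)]

def pq_decode(r, codes, V, codebooks):
    # coarse centroid plus the concatenated PQ centroids
    return V[r] + np.concatenate([cb[c]
                                  for cb, c in zip(codebooks, codes)])
\end{verbatim}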
The \textbf{rotation} function by an orthonormal matrix $R$, also known as optimized product quantization (OPQ)~\cite{ge2013optimized}, rotates the input embedding $\mathbf{x}$ by
\begin{align*}
\mathbf{x}' &= R \mathbf{x}
\end{align*}
to further reduce the quantization distortion. The rotation allows product quantization to operate in a transformed space, which relaxes the constraints on the PQ centroids; note, for example, that any reordering of the dimensions can be represented by a rotation matrix.
After the above decoder function, we ``rotate back'' to the original space by $R^{-1}$, so that the rotation is fully encapsulated inside the standalone indexing layer. Also note that the inverse of an orthonormal matrix is just its transpose~\cite{matrix_computations}, \textit{i.e.}, $R^{-1} = R^\top$, which is very cheap to compute.
Finally, with the above four functions, we can now define the \textbf{full quantization} function $\mathcal{T}$ as follows
\begin{equation}
\mathcal{T}(\mathbf{x}) = R^\top \rho (\psi(\mathbf{x}'), \phi(\mathbf{x}' - \v_{\psi(\mathbf{x}')}))
\label{eq:tau}
\end{equation}
where $\mathbf{x}'=R\mathbf{x}$.
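Reusing the helper functions sketched above, the full quantization function of Eq. (\ref{eq:tau}) then reads, in illustrative form:
\begin{verbatim}
import numpy as np

def full_quantize(x, R, V, codebooks):
    # T(x) = R^T rho(psi(x'), phi(x' - v_{psi(x')})) with x' = R x;
    # coarse_quantize, pq_encode and pq_decode are the sketches above
    xr = R @ x
    r, residual = coarse_quantize(xr, V)
    codes = pq_encode(residual, codebooks)
    return R.T @ pq_decode(r, codes, V, codebooks)
\end{verbatim}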
\subsection{Training Algorithm}
\label{sec:algorithm}
\subsubsection{Loss Function}
Straightforward optimization of the above embedding indexing layer by the standard back propagation algorithm is infeasible, as the $\arg\min$ operation in Eqs. (\ref{eq:coarse_argmin}) and (\ref{eq:argmin}) is non-differentiable.
In fact, this is the difficulty that prevented previous researchers from training embedding indexes jointly with the retrieval model.
Here we propose to leverage the gradient straight-through estimator~\cite{bengio2013estimating} by adjusting the original quantization function $\mathcal{T}$ in Eq. (\ref{eq:tau}) as follows
\begin{equation}
\H (\mathbf{x}) = \mathbf{x} - \textrm{sg}(\mathbf{x} - \mathcal{T}(\mathbf{x}))
\label{eq:approx_t}
\end{equation}
where $\textrm{sg}$ is the \emph{stop gradient} operation.
During the forward pass, the quantized embedding $\mathcal{T}(\mathbf{x})$ is emitted; during the backward pass, the gradient is passed directly to the original embedding $\mathbf{x}$, bypassing the entire quantization function $\mathcal{T}$. Note that Eq. (\ref{eq:approx_t}) only approximates the gradient for the original embedding $\mathbf{x}$; it does not update the quantization parameters (centroids) in the back propagation. Similar to previous works~\cite{van2017neural,chen2020differentiable}, we add a regularization term to the loss function to minimize the quantization distortion as follows,
\begin{equation*}
\mathcal{L}_{reg} = \sum_\mathbf{x}||\mathcal{T}(\mathbf{x}) - \textrm{sg}(\mathbf{x})||^2
\end{equation*}
which essentially updates the coarse and PQ centroids to be arithmetic mean of their members.
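A minimal TensorFlow-style rendering of Eq. (\ref{eq:approx_t}) and the distortion regularizer, as we read them (not the released implementation), is:
\begin{verbatim}
import tensorflow as tf

def straight_through(x, t_x):
    # forward pass emits t_x = T(x); in the backward pass the gradient
    # flows to x unchanged, since tf.stop_gradient blocks the path
    # through the quantizer
    return x - tf.stop_gradient(x - t_x)

def distortion_loss(x, t_x):
    # pulls the centroids toward the mean of their assigned embeddings
    return tf.reduce_sum(tf.square(t_x - tf.stop_gradient(x)))
\end{verbatim}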
\subsubsection{Warm Start Centroids}
In practice, we find that random initialization of the quantization parameters (the centroids $\v_k$ and $\v_k^j$ in Eqs. (\ref{eq:coarse_argmin}) and (\ref{eq:argmin})) results in very sparse centroid assignments, which consequently hurts the retrieval quality (see experiments, Section~\ref{sec:warm_start}). Fortunately, we are able to overcome this hurdle by simply warm starting the centroids. In detail, we train the model without the above indexing layer for a number of warm-start steps, and then plug in the indexing layer with centroids initialized by a standard k-means clustering.
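A hedged sketch of this warm start, with scikit-learn's k-means standing in for whatever clustering routine is used in practice:
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def warm_start_centroids(item_embeddings, J=1024, seed=0):
    # item_embeddings: (N, d) array taken from the partially trained
    # model; returns a (J, d) array of initial coarse centroids
    km = KMeans(n_clusters=J, n_init=1, random_state=seed)
    km.fit(item_embeddings)
    return km.cluster_centers_
\end{verbatim}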
\subsubsection{Givens Rotation}
\label{sec:givens_rotation}
Learning the optimal rotation matrix with fixed embeddings $\mathbf{x}$, previously formulated as the Orthogonal Procrustes problem~\cite{ge2013optimized,procrustes1966}, is incompatible with our end-to-end training, since the embeddings $\mathbf{x}$ are not fixed.
Building on the classical result~\cite{hurwitz1963ueber} that any rotation matrix can be represented by a product of Givens rotations~\cite{matrix_computations}, we jointly learn the rotation matrix by a steepest block coordinate descent algorithm~\cite{wright2015coordinate, beck2013convergence}, with each coordinate being a specific Givens rotation within two axes.
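For reference, a Givens rotation acts on a single pair of axes $(i,j)$; sweeping over such pairs and optimizing one angle at a time gives the block coordinate structure referred to above. A minimal construction:
\begin{verbatim}
import numpy as np

def givens(d, i, j, theta):
    # identity except in the (i, j) plane, where it rotates by theta
    G = np.eye(d)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G
\end{verbatim}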
\section{Experiment}
\label{sec:experiment}
\subsection{Setup}
\label{sec:setup}
We evaluate our methods on the three datasets in Table~\ref{tab:dataset}: a JD.com search click-log dataset, where a user input query is used to retrieve items, and the two public datasets MovieLens~\cite{harper2015movielens} and Amazon Books~\cite{he2016ups}, where user historical behavior is used to retrieve the next behavior.
We evaluate retrieval accuracy by precision@$k$ (p@$k$) and recall@$k$ (r@$k$) metrics, which are standard retrieval quality metrics.
We implement {Poeem} in Tensorflow 1.15, train models on a single machine with 4 Nvidia V100 GPU cards, and evaluate all methods on a 48-core CPU machine. A typical set of parameters is as follows: a two-tower retrieval model with cosine scoring and a hinge loss of margin $0.1$, embedding size $512$ for the private dataset and $128$ for both MovieLens and Amazon Books, the Adagrad optimizer with learning rate $0.01$, and batch size $1024$.
\begin{table}[t]
\centering
\caption{Dataset statistics.}
\small
\begin{tabular}{c|ccc}
\hline
Dataset & \# Examples & \# Users & \# Items \\ \hline
Private & 9,989,135 & 1,031,583 & 1,541,673 \\
MovieLens & 9,939,873 & 129,797 & 20,709 \\
Amazon Books & 8,654,619 & 294,739 & 1,477,922 \\ \hline
\end{tabular}
\vspace{-4mm}
\label{tab:dataset}
\end{table}
\subsection{Comparison with Offline Indexing}
\label{sec:comparison}
Table~\ref{tab:comparison} presents comparison results with the baseline methods, offline indexing with LSH~\cite{datar2004locality}, Annoy~\cite{Github:annoy}, ScaNN~\cite{guo2020accelerating} and Faiss~\cite{johnson2019billion}.
To conduct fair comparisons, Faiss uses the IVFPQ index type, which shares the same product quantization parameters as {Poeem}. Since the other baselines do not have a similar product quantization structure, we choose parameters that give the same retrieval time cost as {Poeem}.
We can observe that the proposed {Poeem} outperforms all the baseline methods by precision@100 and recall@100.
\begin{table}[tb]
\centering
\caption{Comparison between the baseline methods and {Poeem}.}
\label{tab:comparison}
\small
\setlength{\tabcolsep}{1mm}
\begin{tabular}{c|cc|cc|cc}
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Private} & \multicolumn{2}{c|}{MovieLens} & \multicolumn{2}{c}{Amazon Books} \\ \cline{2-7}
& p@100 & r@100 & p@100 & r@100 & p@100 & r@100 \\ \hline
LSH & $1.28\%$ & $25.64\%$ & $7.48\%$ & $34.50\%$ & $0.51\%$ & $4.11\%$ \\
Annoy & $1.24\%$ & $23.42\%$ & $7.85\%$ & $35.53\%$ & $0.72\%$ & $5.71\%$ \\
ScaNN & $2.25\%$ & $47.30\%$ & $7.91\%$ & $36.50\%$ & $0.72\%$ & $5.71\%$ \\
Faiss & $2.37\%$ & $49.54\%$ & $8.02\%$ & $36.72\%$ & $0.68\%$ & $5.46\%$ \\
\hline
\multirow{2}{*}{Poeem} & $\mathbf{2.42\%}$ & $\mathbf{51.13\%}$ & $\mathbf{8.22\%}$ & $\mathbf{37.48\%}$ & $\mathbf{0.73\%}$ & $\mathbf{5.90\%}$ \\
& $(+0.05\%)$ & $(+1.59\%)$ & $(+0.20\%)$ & $(+0.76\%)$ & $(+0.05\%)$ & $(+0.44\%)$ \\
\hline
\end{tabular}
\vspace{-4mm}
\end{table}
Figure~\ref{fig:comparison} shows the trend of recall@$100$ with varying values of $J$, $K$ and $D$, with or without $R$. It is clear that {Poeem} outperforms the baseline for all parameter values. In detail, firstly, the rotation matrix has a positive impact on recall@$100$. Secondly, we can see a trend that, with smaller parameters, the gap between the proposed method and the baseline is enlarged. This indicates that the best scenario for the proposed method is indexing large datasets where a large compression ratio is required. Thirdly, the performance gap almost diminishes when $D$ approaches $d$ ($512$ in this case), which is actually impractical and shown just for reference: if $D=d$, product quantization reduces to numerical quantization, whose low compression ratio is infeasible for large data.
Note that a proper choice of $J$ is necessary to prevent an imbalanced distribution of coarse codes and consequently unstable retrieval time.
\iffalse
\subsection{Effects of Reducing Quantization Distortion}
\label{sec:distortion}
Figure~\ref{fig:coarse} illustrates the effects on quantization distortion and recall@$100$, for varying indexing layer parameters: $J$, $K$ and $D$.
The curves clearly show the same trend: lower quantization distortion leads to higher retrieval quality that measured by recall@$100$. Moreover, we can observe that the parameter $D$ has the largest effect on quantization distortion, though one needs to be aware that larger $D$ is also more expensive in space than the other two parameters. Since the $D$ dimensional item PQ codes compose the major part of embedding indexes. In contrast, the coarse quantization is much cheaper since it adds only one coarse code to each item. Thus, the curve of $J$ shows the retrieval quality gain with marginal space cost of coarse quantization.
\fi
\begin{figure}
\centering
\begin{subfigure}[b]{0.153\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/warm_start_steps.pdf}
\vspace{-4mm}
\caption{}
\label{fig:warmstart}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/code_assign_hist.pdf}
\vspace{-4mm}
\caption{}
\label{fig:histogram}
\end{subfigure}
\caption{Effects of warm start centroids. (a) recall@100 within different warm-start steps, (b) histogram of coarse centroid assignments with cold start (upper) and warm start (lower).}
\label{fig:distortion}
\vspace{-2mm}
\end{figure}
\subsection{Effects of Warm Start Centroids}
\label{sec:warm_start}
Figure~\ref{fig:warmstart} illustrates the effect of warm-start centroids: the retrieval quality significantly improves with warm-started centroids over cold-started centroids, but only slightly improves with more warm-start steps. Note that warm-start step $0$ corresponds to the cold start, where the centroids are randomly initialized, which results in sparse utilization of both coarse centroids and PQ centroids. In a typical run, as shown in Figure~\ref{fig:histogram}, the coarse centroid assignments with warm start are distributed much more uniformly than those with cold start. In more detail, only $67$ out of $1024$ cold start centroids are actually used by coarse quantization, compared to $1004$ out of $1024$ for warm start centroids.
\subsection{Computational Cost}
\label{sec:computational_cost}
Extensive experiments show that the training time of the proposed method increases only slightly, by around $1\%$. We do not observe any significant memory increase, since the coarse and PQ centroids are negligible compared to the rest of the retrieval model. As for the indexing time, the proposed method only needs to save the items' coarse codes and PQ codes into the index file. Thus, for 1 million 512-dimensional embeddings, {Poeem} needs only 5 seconds of indexing time, compared to 641 seconds for Faiss, 101 seconds for ScaNN, 93 seconds for Annoy, and 4.6 seconds for LSH.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/org.pdf}
\caption{Raw}
\label{fig:raw}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/faiss.64.pdf}
\caption{Faiss D64}
\label{fig:faiss_d64}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/faiss.32.pdf}
\caption{Faiss D32}
\label{fig:faiss_d32}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/faiss.16.pdf}
\caption{Faiss D16}
\label{fig:faiss_d16}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/org.pdf}
\caption{Raw}
\label{fig:raw2}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/pq10.64.pdf}
\caption{Poeem D64}
\label{fig:poeem_d64}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/pq10.32.pdf}
\caption{Poeem D32}
\label{fig:poeem_d32}
\end{subfigure}
\begin{subfigure}[b]{0.24\columnwidth}
\centering
\includegraphics[width=\textwidth]{figures/tsne/pq10.16.pdf}
\caption{Poeem D16}
\label{fig:poeem_d16}
\end{subfigure}
\caption{t-SNE visualizations of Faiss and {Poeem} item embeddings with varying parameter $D$.
}
\label{fig:t_sne}
\vspace{-2mm}
\end{figure}
\subsection{Ablation Study}
\label{sec:ablation}
To gain an intuitive understanding of how {Poeem} works, Figure~\ref{fig:t_sne} shows 2-D t-SNE~\cite{tsne2008} visualizations of randomly chosen items from the top 15 popular categories in our private dataset. Figures~\ref{fig:raw} and~\ref{fig:raw2} show the raw item embedding distribution from the two-tower retrieval model, which serves as the best-case scenario since there is no quantization distortion. Figures~\ref{fig:faiss_d64} to~\ref{fig:faiss_d16} illustrate the progressive quantization distortion with decreasing parameter $D$ for Faiss, and Figures~\ref{fig:poeem_d64} to~\ref{fig:poeem_d16} illustrate the same for {Poeem}.
For both Faiss and {Poeem}, we observe that product quantization ``shrinks'' and ``collapses'' the normally distributed clusters: a well distributed cluster is first divided into sub-clusters, which then further shrink into much smaller ones.
This effect makes sense if we consider that product quantization forces the embeddings to share the same set of subvector centroids; nearby embeddings are thus ``pulled'' closer whenever they are assigned to the same subvector centroid.
With this observation, we can now see that the proposed method {Poeem} improves on the baseline Faiss by slowing down this ``shrinking'' and ``collapsing'' of clusters. As the parameter $D$ decreases, {Poeem} maintains the clusters' normally distributed shape much better than Faiss, especially for the outskirt clusters, where the comparison is clearest.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed a novel method called {Poeem} to learn embedding indexes jointly with any deep retrieval models. We introduce an end-to-end trainable indexing layer composed of space rotation, coarse quantization and product quantization operations. Experimental results show that the proposed method significantly improves retrieval metrics over traditional offline indexing methods, and reduces the index building time from hours to seconds.
\section{ Introduction}
Let $\Ga=(V,E)$ be a graph with vertex set $V$ and edge set $E$, which is finite, simple and undirected.
The number of vertices $|V|$ is called the {\it order} of the graph.
A $2$-{\it arc} in $\Ga$ is a triple of distinct vertices $(\a,\b,\g)$
such that $\b$ is adjacent to both $\a$ and $\g$.
In general, for an integer $s\geqslant 1$, an $s$-{\it arc} is a sequence of $s+1$ vertices with any two consecutive vertices adjacent and any three consecutive vertices distinct.
A graph $\Ga$ is said to be {\it $(G,s)$-arc-transitive} if $G\leqslant\Aut\Ga$ is transitive on both the vertex set and the set of $s$-arcs of $\Ga$; such a graph is simply called {\it $s$-arc-transitive}.
By the definition,
an $s$-arc-transitive graph is also $t$-arc-transitive for $1\leqslant t<s$.
The class of $s$-arc-transitive graphs has been one of the central topics in algebraic graph theory since Tutte's seminal result \cite{Tutte} that there is no 6-arc-transitive cubic graph;
refer to \cite{Trofimov, Weiss}, \cite{Baddeley,FP2,FP1,HNP,IP,Li3,LSS,Li4,Prag-o'Nan} and the references therein.
A great achievement in the area was due to Weiss \cite{Weiss} who proved that there is no 8-arc-transitive graph of valency at least 3.
Later in \cite{Li1}, the first named author proved that there is no 4-arc-transitive graph of odd order.
Moreover, it was shown in \cite{Li1} that an $s$-arc-transitive graph of odd order with $s=2$ or $3$ is
a normal cover of some $(G,2)$-arc-transitive graph where $G$ is an almost simple group,
leading to the following problem:
\ \ \ \ {\em Classify $(G,2)$-arc-transitive graphs of odd order with $G$ almost simple}.
This is one of a series of papers aiming to solve this problem; the present paper treats the alternating and symmetric groups.
The first paper \cite{odd-excep} of the series solves the problem for the exceptional groups of Lie type, and the sequels will treat the other families of almost simple groups.
Let $\Ga=(V,E)$ be a connected $(G,2)$-arc-transitive graph of odd order, where $G$ is an almost simple group whose socle is an alternating group. For the case where $G$ is primitive on $V$, it is easily deduced from \cite{P-W} that $\Ga$ is a complete graph or an odd graph. The main result of this paper shows that these are indeed all the graphs that arise.
\begin{theorem}\label{zthm1}
Let $G$ be an almost simple group with socle being an alternating group $\A_n$, and let $\Ga$ be a connected $(G,2)$-arc-transitive graph of odd order.
Then either
\begin{itemize}
\item[(i)] $\Ga$ is the complete graph $\K_n$, and $n$ is odd; or
\item[(ii)] $\Ga$ is the odd graph $\O_{2^e-1}$, and $n={2^{e+1}-1\choose 2^e-1}$ for some integer $e\geqslant 2$.
\end{itemize}
\end{theorem}
\begin{remark}
{\rm It would be infeasible to extend the classification in Theorem \ref{zthm1} to graphs of even order.
This is demonstrated by the work of Praeger and Wang \cite{P-W}, which presents a description of the $(G,2)$-arc-transitive and $G$-vertex-primitive graphs for which the socle of $G$ is an alternating group.
}
\end{remark}
As a byproduct,
the following result shows that the subgroups of odd index in alternating and symmetric groups are very restricted: every insoluble composition factor is an alternating group, apart from three small exceptions.
\begin{theorem}\label{Alt-subgps}
Let $G$ be an almost simple group with socle $\A_n$, and let $H$ be an insoluble proper subgroup of $G$ of odd index.
Then $G\in \{\A_n,\S_n\}$ and either
\begin{itemize}
\item[(i)] every insoluble composition factor of $H$ is an alternating group; or
\item[(ii)] $(G,H)=(\A_7,\GL(3,2))$, $(\A_8,\AGL(3,2))$ or $(\A_9,\AGL(3,2))$.
\end{itemize}
\end{theorem}
The notation used in the paper is standard, see for example the Atlas \cite{Atlas}.
In particular, a positive integer $n$ sometimes denotes a cyclic group of order $n$, and
for a prime $p$, the symbol $p^n$ sometimes denotes an elementary abelian $p$-group. For groups $A$ and $B$, an upward extension of $A$ by $B$ is denoted by $A{.}B$, and a semi-direct product of $A$ by $B$ is denoted by $A{:}B$.
For a positive integer $n$ and a prime $p$, let $n_p$ denote the $p$-part of $n$, that is, $n=n_pn'$ such that $n_p$ is a power of $p$ and $\gcd(n_p,n')=1$.
For a subgroup $H$ of a group $G$, let $|G:H|=|G|/|H|$, the index of $H$ in $G$, and denote by $\N_G(H)$ and $\C_G(H)$ the normalizer and the centralizer of $H$ in $G$, respectively.
\vskip 20pt
\section{Examples}\label{exam-sec}
We study the graphs which appear in our classification.
It is easily shown that, for an integer $n\geqslant 3$, the complete graph $\K_n$ is $(G,2)$-arc-transitive if and only if $G$ is a $3$-transitive permutation group of degree $n$.
Thus, if $n\geqslant 5$ is odd then $\K_n$ is one of the desired graphs.
The second type of example is the odd graph, defined below.
\begin{example}\label{exam-odd}
{\rm
Let $\Ome=\{1,2,\dots,2m+1\}$, and let $\Ome^{\{m\}}$ consist of $m$-subsets of $\Ome$.
Define a graph $(V,E)$ with vertex set and edge set
\[
V=\Ome^{\{m\}},\,
E=\{(\a,\b)\mid \a\cap \b=\emptyset\},
\]
respectively, which is called an {\it odd graph} and denoted by $\O_m$.}
\end{example}
The graph $\O_m$ has valency $m+1$ and automorphism group $\Sym(\Ome)=\S_{2m+1}$, see \cite[pp. 147, Corollary 7.8.2]{Godsil-Royle}.
The order of $\O_m$ is given by
\[|V|=|\Ome^{\{m\}}|={2m+1\choose m}={(2m+1)!\over m!(m+1)!}.\]
For example, the Petersen graph is $\O_2$, which has order ${5\choose2}=10$ and valency 3;
$\O_3$ has order ${7\choose 3}=35$ and valency 4.
The former has even order, and the latter has odd order.
We next give a necessary and sufficient condition for ${2m+1\choose m}$ to be odd.
For a positive integer $n$, letting $2^{t+1}>n\geqslant 2^t$ for some integer $t\geqslant 0$, set
\[s(n)=\left[\frac{n}{2}\right]+\left[\frac{n}{2^2}\right]+\cdots+\left[\frac{n}{2^i}\right]+\cdots+\left[{n\over2^t}\right],\]
where $[x]$ is the largest integer which is not larger than $x$.
Then $[{n\over 2^i}]$ is the number of integers in $\{1,2,\dots,n\}$ which are divisible by $2^i$, and it follows that the 2-part of $n!$ is equal to $2^{s(n)}$.
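For instance, $s(5)=\left[\frac{5}{2}\right]+\left[\frac{5}{4}\right]=2+1=3$, and indeed $5!=120=2^3\cdot 15$.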
Clearly, $2^{s(n)}=2^{s(n-1)}n_2$ if $n\geqslant 2$, where $n_2$ is the $2$-part of $n$.
We observe that $[\frac{m}{2^i}]+[\frac{n}{2^i}]\leqslant [\frac{m+n}{2^i}]$ for all positive integers $i$.
It follows that
\begin{equation}\label{s(n)}
s(m)+s(n)\leqslant s(m+n),
\end{equation}
and
\begin{equation}\label{s(n)-1}
s(m)+s(n)=s(m+n) \Longleftrightarrow
\left[\frac{m}{2^i}\right]+\left[\frac{n}{2^i}\right]=\left[\frac{m+n}{2^i}\right] \mbox{ for all $i\geqslant 1$}.
\end{equation}
Further, if $s(m)+s(n)=s(m+n)$ then at least one of $n$ and $m$ is even.
Let $1\leqslant m\leqslant n$ and suppose that $\left[\frac{m}{2^i}\right]+\left[\frac{n}{2^i}\right]=\left[\frac{m+n}{2^i}\right]$ for some $i\geqslant 1$.
Suppose further that $a:=\left[\frac{m}{2^i}\right]\ne 0$. Then $b:=\left[\frac{n}{2^i}\right]\geqslant a$.
Write $m=a2^i+c$ and $n=b2^i+d$ with $0\leqslant c,d<2^i$.
We have
\[\left[\frac{m+n}{2^{i+1}}\right]=\left[{a+b\over 2}+{c+d\over 2^{i+1}}\right]\geqslant \left[{a+b\over 2}\right]\geqslant \left[{a\over 2}\right]+\left[{b\over 2}\right]=\left[\frac{m}{2^{i+1}}\right]+\left[\frac{n}{2^{i+1}}\right].\]
Noting that $\left[{a+b\over 2}\right]\geqslant 1$, if $\left[\frac{m+n}{2^{i+1}}\right]=\left[\frac{m}{2^{i+1}}\right]+\left[\frac{n}{2^{i+1}}\right]$
then $b\geqslant 2$, and so $\left[\frac{n}{2^{i+1}}\right]\ne 0$. Then, using (\ref{s(n)}) and (\ref{s(n)-1}), we have the following lemma.
\begin{lemma}\label{s(n)-2}
Assume that
$s(m+n)=s(m)+s(n)$. If $m\leqslant n$ and $\left[\frac{m}{2^i}\right]\ne 0$ then $\left[\frac{n}{2^{i+1}}\right]\ne 0$;
in particular, $m<n$, and $n\geqslant 2^t$ if $\left[\frac{m+n}{2^{t}}\right]\ne 0$.
\end{lemma}
The following is a criterion for ${2m+1\choose m}$ to be odd.
\begin{lem} \label{s(t)}
The number ${2m+1\choose m}={(2m+1)!\over m!(m+1)!}$ is odd if and only if $m+1$ is a $2$-power.
\end{lem}
\proof
Suppose that ${2m+1\choose m}$ is odd. Then $s(2m+1)=s(m)+s(m+1)$. Write $2^k\leqslant m<2^{k+1}$. By Lemma
\ref{s(n)-2}, $\left[\frac{m+1}{2^{k+1}}\right]\ne 0$,
yielding $m+1\geqslant 2^{k+1}$, and so $m+1= 2^{k+1}$.
Conversely, we assume $m+1=2^\ell$ for some positive integer $\ell$.
Since $m=2^\ell-1$ and $2m+1=2^{\ell+1}-1$, we obtain
\[\left[{m\over2^i}\right]=\left[{2^\ell-1\over 2^i}\right]=\left\{
\begin{array}{ll}
2^{\ell-i}-1, &\mbox{for $1\leqslant i\leqslant \ell-1$,}\\
0, &\mbox{for $i\geqslant \ell$.}
\end{array}\right.\]
\[\left[{2m+1\over2^i}\right]=\left[{2^{\ell+1}-1\over 2^i}\right]=\left\{
\begin{array}{ll}
2^{\ell+1-i}-1, &\mbox{for $1\leqslant i\leqslant \ell$,}\\
0, &\mbox{for $i\geqslant \ell+1$.}
\end{array}\right.\]
Therefore, we have
\[\begin{array}{rll}
s(m)&=&(2^{\ell-1}-1)+(2^{\ell-2}-1)+\dots+(2-1),\\
s(m+1)&=&2^{\ell-1}+2^{\ell-2}+\dots+2+1,\\
s(2m+1)&=&(2^{\ell+1-1}-1)+(2^{\ell+1-2}-1)+\dots+(2-1).\\
\end{array}\]
Then $s(m)+s(m+1)=s(2m+1)$, and ${2m+1\choose m}$ is odd.
\qed
By the above lemma, we get the following consequence.
\begin{corollary}\label{symmetric index odd}
The odd graph $\O_m$ is of odd order if and only if $m+1$ is a $2$-power.
\end{corollary}
\vskip 30pt
\section{Subgroups with odd index in $\A_n$ or $\S_n$}\label{proof-th1}
Let $G$ be an almost simple group with socle $\A_n$. Then either $G\in \{\A_n,\S_n\}$ or $n=6$ and $G\in\{\PGL(2,9),\M_{10},\PGammaL(2,9)\}$.
In this section, we shall determine the insoluble composition factors of subgroups of $G$ of odd index.
For the natural action of $\S_n$ on $\Ome=\{1,2, \ldots, n\}$ and a subset $\Del\subseteq \Ome$,
the symmetric group $\Sym(\Del)$ is sometimes identified with a subgroup of $\S_n$.
Thus we write the set-stabilizer $G_\Del$ as $(\Sym(\Del)\times \Sym(\Ome\setminus \Del))\cap G$ or simply, $G_\Del=(\S_m\times \S_{n-m})\cap G$ if $|\Del|=m$.
Also, $(\S_{m}\wr \S_k )\cap G$ stands for the stabilizer in $G$ of some
partition of $\Ome$ into $k$ parts with equal size $m$.
Based on the O'Nan-Scott theorem, the following lemma was first obtained by Liebeck and Saxl \cite{LS}.
\begin{lemma}[\cite{LS}]\label{max-subgps}
Let $G$ have socle $T=\A_n$ with $n\geqslant 5$ and have a maximal subgroup $M$ of odd index.
Then one of the following holds:
\begin{itemize}
\item[(1)] $M=(\S_m\times\S_{n-m})\cap G$ with $1\leqslant m<{n\over 2}$; or
\item[(2)] $M=(\S_m\wr\S_k)\cap G$, where $n=mk$ and $m,k>1$; or
\item[(3)] $G=\A_7$ and $M\cong \SL(3,2)$, or $G=\A_8$ and $M\cong \AGL(3,2)$; or
\item[(4)] $G=\PGL(2,9)$, $\M_{10}$ or $\PGammaL(2,9)$, and $M$ is a Sylow
$2$-subgroup of $G$.
\end{itemize}
In particular, if $G\ne \A_7$ or $\A_8$, then each insoluble composition factor of $M$ is an alternating group.
\end{lemma}
For a subgroup $X\leqslant \S_n$ fixing a subset $\Del\subseteq\Ome$, denote by $X^\Del$ the permutation group induced by $X$ on $\Del$.
\begin{lemma}\label{tech}
Let $G=\S_n$ or $\A_n$ with $n\geqslant 5$, and let $H$ be a subgroup of $G$ with odd index $|G:H|>1$.
Suppose that $H$ normalizes a subgroup $L=\Sym({\Del_1})\times \cdots\times\Sym({\Del_t})$ of $\S_n$, where $t\geqslant 2$ and
$\Ome=\cup_{i=1}^t\Del_i$. Then
\begin{enumerate}
\item[(1)] $|(L\cap G):(L\cap H)|$ and $|(L\cap G)^{\Del_i}:(L\cap H)^{\Del_i}|$ are odd, where
$1\leqslant i\leqslant t$;
\item[(2)] each composition factor of $L\cap H$ is a composition factor of
some $(L\cap H)^{\Del_i}$.
\end{enumerate}
\end{lemma}
\proof
Since $H$ normalizes $L$, the product $LH$ is a subgroup of $\S_n$, and so $H\leqslant LH\cap G=(L\cap G)H\leqslant G$.
Thus $|(L\cap G)H:H|$ is odd. Then $|(L\cap G):(L\cap H)|$ is odd as $|(L\cap G)H:H|={|L\cap G|\over |L\cap H|}$.
Let $L_i$ be the kernel of $L\cap G$ acting on $\Del_i$, where
$1\leqslant i\leqslant t$. Then $L^{\Del_i}\cong L/L_i$, $(L\cap G)^{\Del_i}\cong (L\cap G)/(L_i\cap G)$ and $(L\cap H)^{\Del_i}\cong (L\cap H)(L_i\cap G)/(L_i\cap G)$.
Since $|(L\cap G):(L\cap H)|$ is odd, $|(L\cap G):(L\cap H)(L_i\cap G)|$ is odd, and so is $|(L\cap G)^{\Del_i}:(L\cap H)^{\Del_i}|$, as in part~(1).
Let $S$ be a composition factor of $L\cap H$. Since $(L\cap H)^{\Del_t}\cong (L\cap H)(L_t\cap G)/(L_t\cap G)\cong (L\cap H)/ (L_t\cap H)$, it follows that
$S$ is a composition factor of one of $(L\cap H)^{\Del_t}$ and $L_t\cap H$. If $S$ is a composition factor of $(L\cap H)^{\Del_t}$, then part (2) holds by taking $i=t$.
Now let $S$ be a composition factor of $L_t\cap H$, and consider the triple $(L_t, L_t\cap G, L_t\cap H)$.
By induction, we may assume that $S$ is a composition factor of $(L_t\cap H)^{\Del_i}$ for some $i\leqslant t-1$.
Since $L_t\cap H\unlhd L\cap H$, we have $(L_t\cap H)^{\Del_i}\unlhd (L\cap H)^{\Del_i}$, and
thus $S$ is a composition factor of $(L\cap H)^{\Del_i}$. Then part (2) follows.
\qed
Now we prove Theorem \ref{Alt-subgps} for $G=\S_n$.
\begin{lemma}\label{Sym}
Let $G=\S_n$ with $n\geqslant 5$, and let $H$ be an insoluble subgroup of $G$ with odd index $|G:H|>1$.
Then each insoluble composition factor of $H$ is an alternating group.
\end{lemma}
\proof
We prove this lemma by induction on $n$.
Let $S$ be an insoluble composition factor of $H$.
Take a maximal subgroup $M$ of $G$ with $H\leqslant M$.
By Lemma~\ref{max-subgps}, either $M=\S_m\times\S_{n-m}$ with $1\leqslant m<n/2$, or $M=\S_m\wr\S_k$ with $mk=n$ and $m,k>1$.
For $M=\S_m\times\S_{n-m}$, Lemma \ref{tech} works for $H$ and $M$,
which yields that $S$ is a composition factor of a subgroup with odd index in $\S_k$ for some $k<n$,
and the lemma holds by induction.
Thus, let $M=\S_m\wr\S_k$ with $mk=n$ and $m,k>1$ in the following.
Let $L$ be the base subgroup of the wreath product $\S_m\wr\S_k$. Then Lemma \ref{tech}
works for the triple $(L,H, L\cap H)$, and hence the lemma holds by induction if $S$ is a composition factor of
$L\cap H$.
Assume that $S$ is not a composition factor of
$L\cap H$. Then $S$ is a composition factor of $H/(L\cap H)$. Since $HL/L\cong H/(L\cap H)$, it follows that
$S$ is a composition factor of $HL/L$. Consider the pair $M/L$ and $HL/L$.
Since $|G:H|$ is odd, $|M:HL|$ is odd, and hence so is $|(M/L):(HL/L)|$. Further, $M/L\cong \S_k$. Then, since $k<n$, the lemma holds by induction.
\qed
Now we handle the case $G=\A_n$.
\begin{lemma}\label{Alt}
Let $G=\A_n$ with $n\geqslant 5$.
Let $H$ be an insoluble subgroup of $G$ with odd index $|G:H|>1$.
Then either
\begin{itemize}
\item[(i)] $(G,H)$ is one of $(\A_7,\GL(3,2))$, $(\A_8,\AGL(3,2))$ and $(\A_9,\AGL(3,2))$; or
\item[(ii)] every insoluble composition factor of $H$ is an alternating group.
\end{itemize}
\end{lemma}
\proof
If $n\leqslant 9$ then the lemma is easily shown by checking the subgroups of $\A_n$.
In the following, by induction on $n$, we show (ii) of this lemma always holds for $n\geqslant 10$.
Let $n\geqslant 10$, and let $S$ be an insoluble composition factor of $H$.
Take a maximal subgroup $M$ of $\A_n$ with $H\leqslant M$. By Lemma \ref{max-subgps}, $M=(\S_m\times\S_{n-m})\cap \A_n$ with $1\leqslant m<n/2$, or $M=(\S_m\wr\S_k)\cap \A_n$ with $mk=n$ and $m,k>1$.
Suppose that $n=10$. Then $M\cong \S_8$ or $2^4{:}\S_5$.
By the Atlas \cite{Atlas}, $\S_8$ has no insoluble subgroup of odd index. Then $M\cong 2^4{:}\S_5$, and we have $S=\A_5$.
Thus, in the following, we let $n\geqslant 11$, and proceed in two cases.
{\bf Case 1}. Let $M=(\S_m\times\S_{n-m})\cap \A_n$.
If $m=1$ then $M=\A_{n-1}$ and, since $10\leqslant n-1<n$, $S$ is alternating by induction.
Now let
$m\geqslant 2$. Writing $M=(\Sym(\Del)\times \Sym(\Ome\setminus\Del))\cap \A_n$ with $|\Del|=m$,
we have $M=(\Alt(\Del)\times \Alt(\Ome\setminus\Del))\l \s_1\s_2\r$, where $\s_1\in \Sym(\Del)$ and
$\s_2\in \Sym(\Ome\setminus \Del)$ are transpositions. Then $M^\Del\cong \S_m$ and $M^{\Ome\setminus \Del}\cong \S_{n-m}$.
By Lemma \ref{tech},
$S$ is a composition factor of a subgroup with odd index in either $\S_m$ or $\S_{n-m}$. Then $S$ is alternating by Lemma \ref{Sym}.
{\bf Case 2}.
Let $M=(\S_m\wr\S_k)\cap \A_n$. Let $L=\S_m^k$ be the base group of the wreath product $\S_m\wr\S_k$.
Note that $S$ is a composition factor of one of $H/(L\cap H)$ and $L\cap H$.
Assume that $S$ is a composition factor of $H/(L\cap H)$.
Then $S$ is a composition factor of $HL/L$ as
$HL/L\cong H/(L\cap H)$.
It is easily shown that $|(M/L):(HL/L)|$ is odd. Further, since $M/L\cong \S_k$, we know that $S$ is alternating by Lemma \ref{Sym}.
Now let $S$ be a composition factor of $L\cap H$.
Write $L=\Sym(\Del_1)\times\cdots\times \Sym(\Del_k)$, where $|\Del_i|=m$.
Then $L\cap \A_n=(\Alt(\Del_1)\times\cdots\times \Alt(\Del_k))\l\s_1\s_k,\s_2\s_k,\ldots,\s_{k-1}\s_k\r$, where $\s_i\in \Sym(\Del_i)$ are transpositions. It follows that $(L\cap \A_n)^{\Del_i}\cong \S_m$ for $1\leqslant i\leqslant k$.
Thus, using Lemmas \ref{tech} and \ref{Sym}, $S$ is an alternating group.
\qed
Finally, if $n=6$ and $G=\PGL(2,9)$, $\M_{10}$ or $\PGammaL(2,9)$ then, by Lemma \ref{max-subgps}, $G$ has no insoluble proper subgroup of odd index.
The proof of Theorem~\ref{Alt-subgps} now follows from Lemmas~\ref{Sym} and \ref{Alt}.
\vskip 20pt
\section{$2$-Arc-transitive graphs}\label{sect=proof-th3}
In this section, we assume that $\Ga=(V,E)$ is a connected $(G,2)$-arc-transitive graph of odd order and valency at least $3$, where $G\leqslant \Aut\Ga$.
\subsection{Stabilizers}\label{sub=stab}
Fix a 2-arc $(\a,\b,\g)$ of $\Ga$.
Let $G_\a$ be the stabilizer of $\a$ in $G$.
Then $G_\a$ acts 2-transitively on the neighborhood $\Ga(\a)$ of $\a$ in $\Ga$.
Let $G_\a^{[1]}$ be the kernel of $G_\a$ on $\Ga(\a)$, and let $G_\a^{\Ga(\a)}$ be the 2-transitive permutation group induced by $G_\a$ on $\Ga(\a)$.
Then $G_\a^{\Ga(\a)}\cong G_\a/G_\a^{[1]}$.
Clearly, $G_\a^{[1]}\unlhd G_{\a\b}$, and
\begin{equation}\label{exten}
(G_\a^{[1]})^{\Ga(\b)}\unlhd G_{\a\b}^{\Ga(\b)}\cong G_{\a\b}^{\Ga(\a)}.
\end{equation}
Let $G_{\a\b}^{[1]}=G_\a^{[1]}\cap G_\b^{[1]}$, the point-wise stabilizer of the `double star' $\Ga(\a)\cup\Ga(\b)$.
A fundamental result about 2-arc-transitive graphs characterizes $G_{\a\b}^{[1]}$.
\begin{theorem}\label{double-star}{\rm (Thompson-Wielandt Theorem)}
$G_{\a\b}^{[1]}$ is a $p$-group with $p$ prime.
\end{theorem}
By definition, we have $G_{\a\b}^{[1]}\unlhd G_\b^{[1]}\unlhd G_{\b\g}$, and so
\[(G_{\a\b}^{[1]})^{\Ga(\g)}\unlhd (G_\b^{[1]})^{\Ga(\g)}\unlhd G_{\b\g}^{\Ga(\g)}.\]
Let $O_p((G_\b^{[1]})^{\Ga(\g)})$ and $O_p(G_{\b\g}^{\Ga(\g)})$ be the maximal normal $p$-subgroups of $(G_\b^{[1]})^{\Ga(\g)}$ and $G_{\b\g}^{\Ga(\g)}$, respectively. Then
\[(G_{\a\b}^{[1]})^{\Ga(\g)}\unlhd O_p((G_\b^{[1]})^{\Ga(\g)})\unlhd O_p(G_{\b\g}^{\Ga(\g)}).\]
Suppose that $(G_{\a\b}^{[1]})^{\Ga(\g)}=1$. Then $G_{\a\b}^{[1]}\leqslant G_\g^{[1]}$, and so $G_{\a\b}^{[1]}\leqslant G_{\b\g}^{[1]}$. Noting that
$G_{\a\b}^{[1]}\cong G_{\b\g}^{[1]}$, we have $G_{\a\b}^{[1]}= G_{\b\g}^{[1]}$. Then the connectedness of $\Ga$ yields that $G_{\a\b}^{[1]}=G_{\a'\b'}^{[1]}$ for each arc $(\a',\b')$ of $\Ga$, and hence $G_{\a\b}^{[1]}=1$.
Thus, if $G_{\a\b}^{[1]}$ is a non-trivial $p$-group, then so is $(G_{\a\b}^{[1]})^{\Ga(\g)}$, and then $O_p(G_{\b\g}^{\Ga(\g)})\ne 1$.
Noting that $G_{\a\b}^{\Ga(\a)}\cong G_{\b\g}^{\Ga(\g)}$, we have a useful conclusion.
\begin{lemma}\label{p-local}
Let $\{\a,\b\}\in E$.
If $G_{\a\b}^{[1]}$ is a nontrivial $p$-subgroup, then $G_{\a\b}^{\Ga(\a)}$ has a nontrivial normal $p$-subgroup, where $p$ is a prime.
\end{lemma}
Recall that $G_\a^{\Ga(\a)}$ is 2-transitive on $\Ga(\a)$.
Inspecting the 2-transitive permutation groups (see \cite[pages 194--197, Tables 7.3 and 7.4]{Cameron}), we obtain the following result.
\begin{lemma}\label{stab-1}
Let $G$ be an almost simple group with socle $\A_n$,
and $\{\a,\b\}\in E$.
Then either $G_\a$ is soluble, or $G\in\{\A_n,\S_n\}$ and one of the following holds.
\begin{itemize}
\item[(1)] $\soc(G_\a^{\Ga(\a)})\cong \A_m$ for some $m\geqslant 5$, and one of the following holds:
\begin{enumerate}
\item[(i)] $G_\a^{\Ga(\a)}\cong \A_m$ or $\S_m$ for even $m\geqslant 6$, and $G_{\a\b}^{\Ga(\a)}\cong \A_{m-1}$ or $\S_{m-1}$, respectively;
\item[(ii)] $G_\a^{\Ga(\a)}\cong \PSL(2,5)$ or $\PGL(2,5)$, and $G_{\a\b}^{\Ga(\a)}\cong \D_{10}$ or $5{:}4$, respectively;
\item[(iii)] $G_\a^{\Ga(\a)}\cong \PSL(2,9).\calO$, and $G_{\a\b}^{\Ga(\a)}\cong 3^2{:}(4.\calO)$, where $\calO\leqslant 2^2$.
\end{enumerate}
\item[(2)] $G_\a^{\Ga(\a)}\cong 2^4{:}H$, where $H=G_{\a\b}^{\Ga(\a)}\cong \A_5$, $\S_5$, $3\times \A_5$, $(3\times \A_5).2$, $\A_6$, $\S_6$, $\A_7$ or $\A_8$; in particular, $G_{\a\b}^{[1]}=1$.
\end{itemize}
\end{lemma}
\proof
Note that \begin{equation}\label{extension} G_\a=G_\a^{[1]}.G_\a^{\Ga(\a)}=(G_{\a\b}^{[1]}.(G_\a^{[1]})^{\Ga(\b)}).G_\a^{\Ga(\a)}.
\end{equation}
Clearly, if $G_\a^{\Ga(\a)}$ is insoluble then $G_\a$ is insoluble.
If $G_\a^{\Ga(\a)}$ is soluble then, by (\ref{exten}), $(G_\a^{[1]})^{\Ga(\b)}$ is soluble, and so $G_\a$ is soluble by (\ref{extension}).
Thus $G_\a$ is soluble if and only if $G_\a^{\Ga(\a)}$ is soluble.
To finish the proof of this lemma, we assume that $G_\a$ is insoluble in the following; in particular, $G\in\{\A_n,\S_n\}$ by Theorem \ref{Alt-subgps}.
Since $\Ga$ is $(G,2)$-arc-transitive,
$G_\a^{\Ga(\a)}$ is an insoluble $2$-transitive permutation group.
As $|V|$ is odd, the valency $|\Ga(\a)|$ is even, and so $G_\a^{\Ga(\a)}$ is of even degree.
{\bf Case 1}. First assume that $G_\a^{\Ga(\a)}$ is an almost simple $2$-transitive permutation group with socle $S$ say.
By Theorem~\ref{Alt-subgps}, either $S\cong \A_m$ for some $m\geqslant 5$, or one of the following cases occurs:
\begin{enumerate}
\item[(a)] $G=\A_7$, $G_\a=\SL(3,2)$;
\item[(b)] $G=\A_8$, $G_\a=\AGL(3,2)$;
\item[(c)] $G=\A_9$, $G_\a=\AGL(3,2)$.
\end{enumerate}
For (a) and (b), we have that $|V|=15$, and $G$ is $2$-transitive on $V$, yielding $\Ga\cong \K_{15}$.
Noting that $\Ga$ is $(G,2)$-arc-transitive, it follows that $G=\A_7$ or $\A_8$ is $3$-transitive on the $15$ vertices of $\Ga$, which is impossible.
Suppose that (c) occurs.
Let $G_\a^{\Ga(\a)}$ be of affine type. Then $G_{\a\b}=\SL(3,2)$; in this case, the subgroup $\SL(3,2)$ is self-normalizing in $\A_9$.
Thus there is no element in $G$ interchanging $\a$ and $\b$, which contradicts the arc-transitivity of $G$ on $\Ga$.
Thus $G_\a^{\Ga(\a)}$ is almost simple. Then $G_\a^{[1]}=\ZZ_2^3$ and $G_\a^{\Ga(\a)}\cong \SL(3,2)\cong \PSL(2,7)$. Since $\Ga$ has even valency, considering the $2$-transitive permutation representations of $\SL(3,2)$, we have $|\Ga(\a)|=8$.
Then $G_\a^{[1]}$ is not faithful on $\Ga(\b)\setminus\{\a\}$, and so $G_{\a\b}^{[1]}$ is a non-trivial $2$-group. By Lemma \ref{p-local},
$G_{\a\b}^{\Ga(\a)}$ has a non-trivial normal $2$-subgroup; however,
$G_{\a\b}^{\Ga(\a)}\cong \ZZ_7{:}\ZZ_3$, a contradiction.
Let $S\cong \A_m$.
Note that $\A_5\cong \PSL(2,5)$ and $\A_6\cong \PSL(2,9)$.
By the classification of 2-transitive permutation groups (refer to \cite[page 197, Table 7.4]{Cameron}), since $|\Ga(\a)|$ is even, either $|\Ga(\a)|=m$ with $m$ even,
or $(S,|\Ga(\a)|)$ is one of $(\PSL(2,5),6)$ and $(\PSL(2,9),10)$.
Then part~(1) follows.
{\bf Case 2}. Now suppose that $G_\a^{\Ga(\a)}$ is an insoluble affine group.
Then $|\Ga(\a)|=2^d$ for some positive integer $d\geqslant 3$, and $G_{\a\b}^{\Ga(\a)}\leqslant \GL(d,2)$. In particular, by \cite{Weiss}, we have $G_{\a\b}^{[1]}=1$.
Since each insoluble composition factor of $G_\a^{\Ga(\a)}$ is alternating,
by the classification of affine 2-transitive permutation groups (see \cite[page 195, Table 7.3]{Cameron}), we conclude that $d=4$ and
$G_{\a\b}^{\Ga(\a)}$ is isomorphic to one of $\A_5$ (isomorphic to
$\SL(2,4)$), $\S_5$ (isomorphic to
$\SigmaL(2,4)$), $\ZZ_3\times \A_5$ (isomorphic to
$\GL(2,4)$), $(\ZZ_3\times \A_5).2$ (isomorphic to
$\GammaL(2,4)$), $\A_6$ (isomorphic to $\Sp(4,2)'$), $\S_6$ (isomorphic to $\Sp(4,2)$), $\A_7$ and $\A_8$
(isomorphic to $\GL(4,2)$).
This gives rise to the candidates in part~(2).
\qed
Let $G$ be an almost simple group with socle $\A_n$. We next organize our analysis of the candidates for $G_\a$ according to the description in Lemma~\ref{stab-1}.
Note that $G\in\{\A_n,\S_n\}$ if $G_\a$ is insoluble.
\subsection{Almost simple stabilizers}
Assume that $G_\a^{\Ga(\a)}$ is almost simple, where $\a\in V$.
First we consider the candidates in Lemma~\ref{stab-1}\,(1)(i).
\begin{lemma}\label{main-case}
Let $\{\a,\b\}\in E$.
Assume $G_\a^{\Ga(\a)}\cong \A_m$ or $\S_m$, and $G_{\a\b}^{\Ga(\a)}\cong \A_{m-1}$ or $\S_{m-1}$, respectively, where $|\Ga(\a)|=m\geqslant6$ is even.
Then one of the following holds:
\begin{itemize}
\item[(i)] $(G_\a,G)=(\A_m,\A_{m+1})$ or $(\S_m,\S_{m+1})$, and $\Ga=\K_{m+1}$, where $m$ is even;
\item[(ii)] $G_\a=(\S_m\times\S_{m-1})\cap G$, $G=\A_{2m-1}$ or $\S_{2m-1}$, respectively, and $\Ga=\O_{m-1}$, where $m$ is a power of $2$.
\end{itemize}
\end{lemma}
\proof
Since $G_{\a\b}^{\Ga(\a)}$ is almost simple, $G_{\a\b}^{[1]}=1$ by Lemma~\ref{p-local}, and so
\begin{equation}\label{double-star=1} G_\a=G_\a^{[1]}.G_\a^{\Ga(\a)}=(G_{\a\b}^{[1]}.(G_\a^{[1]})^{\Ga(\b)}).G_\a^{\Ga(\a)}=(G_\a^{[1]})^{\Ga(\b)}.G_\a^{\Ga(\a)}.
\end{equation}
Since $(G_\a^{[1]})^{\Ga(\b)}$ is isomorphic to a normal subgroup of $G_{\a\b}^{\Ga(\a)}$, we have $(G_\a^{[1]})^{\Ga(\b)}=1$, or $(G_\a^{[1]})^{\Ga(\b)}\cong \A_{m-1}$ or $\S_{m-1}$.
It follows that $G_\a\cong \A_m$, $\S_m$, $\A_{m-1}\times \A_m$, $(\A_{m-1}\times \A_m).2$ or $\S_{m-1}\times \S_m$.
{\bf Case 1.} Assume first that $G_\a\cong \A_m$ or $\S_m$, where $m$ is even.
Since $G=\A_n$ or $\S_n$ and $|G:G_\a|$ is odd, it follows that
either $n=m+1$ and $G_\a=\S_m\cap G$, or $n=m+k$, $G=\A_{m+k}$ and $G_\a\cong \S_m$ for $k\in \{2,3\}$.
Suppose that $n=m+k$, $G=\A_{m+k}$ and $G_\a\cong \S_m$, where $k=2$ or $3$.
Then $G_{\a\b}\cong \S_{m-1}$ since $\Ga$ is of valency $m$.
Consider the maximal subgroups of $G=\A_{m+k}$ which contain $G_\a$.
By Lemma \ref{max-subgps}, we conclude that $G_\a$ is contained in the stabilizer of an $m$-subset of $\Ome=\{1,2,\dots,m+k\}$, say $\Del=\{1,2,\dots,m\}$.
Thus we may let
$G_\a=\Alt(\Del).\l\s\r$, where $\s=(1\,\,2)(m+1\,\,m+k)$.
Without loss of generality, we may assume that $G_{\a\b}=\Alt(\Del\setminus\{m\}).\l\s\r$.
Let $g\in G$ interchange $\a$ and $\b$.
Then $g$ normalizes $G_{\a\b}$, and hence $g$ fixes $\Del\setminus\{m\}$ setwise, and $\s^g=(i\,\,j)(m+1\,\,m+k)$.
It follows that $\Del$ and $\{m+1,m+k\}$ are two orbits of $\l G_\a,g\r$, which is a contradiction since $\l G_\a,g\r$ should be equal to $G$.
Thus $(G_\a,G)=(\A_m,\A_{m+1})$ or $(\S_m,\S_{m+1})$.
It then follows that $\Ga=\K_{m+1}$, as in part~(i).
{\bf Case 2.} Now assume that $G_\a$ has a subgroup isomorphic to $\A_m\times\A_{m-1}$.
Clearly, $n\geqslant 2m-1$.
Recall that $2^{s(l)}$ is the 2-part of $l!$, see Section \ref{exam-sec}.
Then $|G|_2\geqslant 2^{s(n)-1}$ and $|G_\a|_2\leqslant 2^{s(m)+s(m-1)}$.
Since $|G:G_\a|$ is odd, $s(m)+s(m-1)\geqslant s(n)-1\geqslant s(2m-1)-1$.
By (\ref{s(n)}) given in Section \ref{exam-sec},
$s(2m-1)\geqslant s(m)+s(m-1)$, and so
\[s(2m-1)\geqslant s(m)+s(m-1)\geqslant s(n)-1\geqslant s(2m-1)-1.\]
Since $m$ is even, $2m$ is divisible by $2^2$, and hence $s(2m)\geqslant s(2m-1)+2$.
It follows that $n<2m$. Therefore, we have
\[n=2m-1\]
and $s(2m-1)=s(m)+s(m-1)$. Then $m$ is a power of $2$ by Lemma \ref{s(t)}.
Since $|G:G_\a|$ is odd, either $G=\A_{2m-1}$ and $G_\a=(\A_m\times\A_{m-1}).2$, or
$G=\S_{2m-1}$ and $G_\a=\S_m\times\S_{m-1}$.
That is to say, $G_\a$ is the stabilizer in $G$ of an $(m-1)$-subset of $\{1,2,\dots,2m-1\}$.
It follows since $\Ga$ is $(G,2)$-arc-transitive that $\Ga=\O_{m-1}$ is an odd graph, as in part~(ii).
\qed
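For instance, the smallest case in part~(ii) is $m=8$: here $n=15$, $G_\a=(\S_8\times\S_7)\cap G$, and $\Ga=\O_7$ is the odd graph on the ${15\choose 7}=6435$ $7$-subsets of a $15$-set. Note that $s(15)=11=s(8)+s(7)$, so the index $|G:G_\a|$ is indeed odd, and $\Ga$ has odd order, in accordance with Corollary \ref{symmetric index odd}.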
Next, we handle the candidates in part~(1)(ii-iii) of Lemma~\ref{stab-1}.
\begin{lemma}\label{PSL(2,5)}
There is no $2$-arc-transitive graph corresponding to part~{\rm (1)(ii)} of Lemma~{\rm \ref{stab-1}}.
\end{lemma}
\proof
Suppose that $G_\a^{\Ga(\a)}\cong \PSL(2,5)$ or $\PGL(2,5)$, and $G_{\a\b}^{\Ga(\a)}\cong \D_{10}$ or $5{:}4$.
By Lemma~\ref{p-local}, $G_{\a\b}^{[1]}$ is a 5-group, and so $|G_\a^{[1]}|_2=|(G_\a^{[1]})^{\Ga(\b)}|_2$ divides $|G_{\a\b}^{\Ga(\b)}|_2$.
Thus
\[|G_\a|_2=|G_\a^{[1]}|_2|G_\a^{\Ga(\a)}|_2\leqslant 2^5,\]
that is, a Sylow $2$-subgroup of $G_\a$ has order a divisor of $2^5$.
It follows that $G\leqslant\S_7$.
Since $G_\a^{\Ga(\a)}\cong \PSL(2,5)$ or $\PGL(2,5)$, we conclude that either $G=\A_7$ and $G_\a\cong \S_5$, or $G=\S_7$ and $G_\a=\S_2\times\S_5$.
Then $\Ga$ is an orbital graph of $G$ acting on the $2$-subsets of $\{1,2,\dots,7\}$, which is not $2$-arc-transitive.
\qed
\begin{lemma}\label{PSL(2,9)}
There is no $2$-arc-transitive graph corresponding to part~{\rm (1)(iii)} of Lemma~{\rm \ref{stab-1}}.
\end{lemma}
\proof
Suppose that $G_\a^{\Ga(\a)}\cong \PSL(2,9).\calO$, and $G_{\a\b}^{\Ga(\a)}\cong 3^2{:}(4.\calO)$, where $\calO\leqslant 2^2$.
By Lemma~\ref{p-local}, $G_{\a\b}^{[1]}$ is a 3-group, and so $|G_\a^{[1]}|_2=|(G_\a^{[1]})^{\Ga(\b)}|_2$ divides $|G_{\a\b}^{\Ga(\b)}|_2$.
We have
\[|G_\a|_2=|G_\a^{[1]}|_2|G_\a^{\Ga(\a)}|_2\leqslant 2^9,\]
that is, a Sylow $2$-subgroup of $G_\a$ is of order dividing $2^9$.
It follows that $G\leqslant\A_{13}$, and further, either $G\leqslant\S_{11}$, or $G$ is one of $\A_{12}$ and $\A_{13}$.
Suppose $|G|_2=2^9$.
Then $G=\S_{11}$, $\A_{12}$ or $\A_{13}$, and moreover,
$G_\a^{\Ga(\a)}\cong \PSL(2,9).2^2$ and $G_\a^{[1]}\cong 3^2{:}[2^4]$, and hence
\[G_\a=(\PSL(2,9)\times (3^2{:}4)).[2^4].\]
By the Atlas \cite{Atlas}, $G$ does not have a subgroup of odd index which contains a normal subgroup $\PSL(2,9)\times (3^2{:}4)$, which is a contradiction.
Thus $|G|_2\leqslant 2^8$, and then $G\leqslant\A_{11}$ or $\S_{10}$.
Checking the subgroups of $G$ with odd index, we conclude that $\A_7\leqslant G\leqslant \S_7$ and $\A_6\leqslant G_\a\leqslant \S_6$.
It follows that $\Ga=\K_7$, which is not possible since $\Ga$ should have valency 10.
\qed
\subsection{Affine stabilizers} Let $\{\a,\b\}\in E$. Assume that $G_\a^{\Ga(\a)}$ is an affine $2$-transitive permutation group.
Now consider the case where $G_\a$ is soluble.
By \cite{odd-excep}, Theorem \ref{zthm1} holds for the case where $G_\a$ is soluble.
\begin{lemma}\label{soluble-case}
If $G_\a$ is soluble, then $\Ga$ has valency $4$, and either
\begin{itemize}
\item[(i)] $n=5$ and $\Ga$ is the complete graph $\K_5$, or
\item[(ii)]
$n=7$ and $\Ga$ is the odd graph $\O_3$ of order $35$.
\end{itemize}
\end{lemma}
We now consider the candidates for $G_\a^{\Ga(\a)}$ in part~(2) of Lemma~\ref{stab-1}.
\begin{lemma}\label{stab-2}
There is no $2$-arc-transitive graph corresponding to part~{\rm {(2)}} of Lemma~{\rm \ref{stab-1}}.
\end{lemma}
\proof
Suppose that $G_\a^{\Ga(\a)}\cong 2^4{:}H$ is affine and described as in part~(2) of Lemma~\ref{stab-1}.
Let $\{\a,\b\}\in E$.
Since $G_{\a\b}^{[1]}=1$, (\ref{exten}) yields that $G_\a^{[1]}$ is isomorphic to a normal subgroup of
$H=G_{\a\b}^{\Ga(\a)}$. Then the outer automorphism group of $G_\a^{[1]}$ has order at most $4$.
It follows that $G_\a$ has a (minimal) normal subgroup $N$ which is regular on $\Ga(\a)$, and thus
\[G_\a=N{:}G_{\a\b},\, \C_{G_\a}(N)=N\times G_\a^{[1]}.\]
Moreover, $|G_\a^{[1]}|_2$ is a divisor of $|G_{\a\b}^{\Ga(\b)}|_2=|H|_2$,
and then $|G|_2=|G_\a|_2$ is a divisor of $2^4|H|_2^2$. In particular, $2^6\leqslant |G|_2\leqslant 2^{16}$, and then $8\leqslant n\leqslant 19$.
Consider the natural action of $G_\a$ on $\Ome=\{1,2,\dots,n\}$, and choose a $G_\a$-orbit $\Del$ such that
$N$ is nontrivial on $\Del$. Let $|\Del|=m$. Then $m$ is even, and $|G_\a^{\Del}|_2=|\S_m|_2$ or $|\A_m|_2$ by Lemma \ref{tech}.
Let $K$ be the kernel of
$G_\a$ acting on $\Del$. Then $K\cap N=1$ as $N$ is a minimal normal subgroup of $G_\a$, and so $K\le \C_{G_\a}(N)=N\times G_\a^{[1]}$.
It follows that $K\le G_\a^{[1]}$, and hence $G_\a^{\Del}$ is insoluble. In particular, $m\geqslant 6$.
{\bf Case 1.} Suppose that $K$ is soluble. Then $|K|_2=1$, and $2^4|H|_2\,|G_\a^{[1]}|_2=|G_\a|_2=|G_\a^{\Del}|_2=|\S_m|_2$ or $|\A_m|_2$.
Recalling that $|G_\a|_2=|G|_2=|\S_n|_2$ or $|\A_n|_2$, we have $n\leqslant m+3$.
If $N$ is transitive on $\Del$, then $m=|N|=16$, yielding $|G_\a|_2=2^{15}$ or $2^{14}$, which is impossible.
Thus $N$ is intransitive on $\Del$, and then $G_\a^{\Del}\lesssim\S_\ell\wr\S_k$, where $\ell,k>1$, $m=\ell k$ and $\ell$ is the size of each $N$-orbit. In particular, $\ell=2$, $4$ or $8$.
For $\ell=4$ or $8$, since $m=\ell k\leqslant n\leqslant 19$, we have $m=16$, which yields a contradiction as
above. Therefore, $\ell=2$ and, since $G_\a^{\Del}$ is insoluble, $5\leqslant k\leqslant 9$. Then
$G_\a$ has exactly one insoluble composition factor, and thus $|G_\a|_2=|G_\a^{\Del}|_2=2^4|H|_2$. This implies that $k=5$, $m=10$, and
$|G_\a|_2=2^7$ or $2^8$. Then $G=\A_{11}$ or $\A_{10}$, and $G_\a=2^4{:}\S_5$ which is faithful on $\Del$.
Thus $G_{\a\b}\cong \S_5$, which has two orbits on $\Del$ of equal size $5$.
Let $g\in G$ with $(\a,\b)^g=(\b,\a)$. Then $g$ normalizes $G_{\a\b}$, fixes $\Ome\setminus\Del$ and either interchanges or fixes those two $G_{\a\b}$-orbits on $\Del$. It follows that $g\in G_\a$, a contradiction.
{\bf Case 2.} Suppose that $K$ is insoluble. In this case, $G_\a$ is intransitive on $\Ome$, and
$K$ has a normal subgroup $L$ isomorphic to $\A_r$, where $r\in \{5,6,7,8\}$.
Choose a $G_\a$-orbit $\Del'$ such that $L$ is faithful on $\Del'$. Then $m':=|\Del'|\geqslant r$, and $19\geqslant n\geqslant m+m'\geqslant m+r$.
Note that $2^4|H|_2\leqslant |G_\a^{\Del}|_2\leqslant 2^5|H|_2$, and
$|G_\a^{\Del}|_2=|\S_m|_2$ or $|\A_m|_2$.
If $r=8$ then $m\geqslant 12$, and so $n\ge m+r\geqslant 20$, a contradiction.
Suppose $r=7$. Then $m\geqslant 8$ and $n\geqslant 15$, and so $|G|_2\geqslant 2^{10}$.
It follows that $|G|_2=2^{10}$ and $m=8$; however, in this case, $G_\a^{\Del}\cong 2^4{:}\A_7$, which
cannot be contained in a group isomorphic to $\S_8$.
For $r=6$ and $H\cong \A_6$, we get a similar contradiction as above.
Suppose that
$r=6$ and $H\cong \S_6$. Then $2^8\leqslant |G_\a^{\Del}|_2\leqslant 2^9$,
and thus $10\leqslant m \leqslant 13$, yielding $n\geqslant 16$. This leads to
$|G_\a|_2\geqslant 2^{14}$, which is impossible.
By the above argument, we have $r=5$ and $|G_\a|_2=2^{8}$, $2^9$ or $2^{10}$, and then $n\leqslant 15$.
On the other hand, since $2^6\leqslant |G_\a^{\Del}|_2\leqslant 2^8$, we have $m\leqslant 11$, yielding $m=10$ and $n=15$.
It follows that $G=\A_{15}$ and $G_\a=(\Alt(\Del')\times 2^4{:}\S_5)\l\s\t\r$, where
$\s$ is a transposition in $\Sym(\Del')$ and $\t$ is a product of five disjoint transpositions in $\Sym(\Del)$.
Then both $G_\a$ and $G_{\a\b}$ have two orbits $\Del'$ and $\Del$ on $\Ome$.
Thus there is no element $g\in \N_G(G_{\a\b})$ such that $\l G_\a,g\r$ is transitive on $\Ome$, a contradiction.
\qed
\subsection{Proof of Theorem~\ref{zthm1}}
Let $G$ be an almost simple group with socle $\A_n$, and let $\Ga$ be a connected $(G,2)$-arc-transitive graph of odd order and valency at least $3$.
The sufficiency is obvious since the complete graphs $\K_n$ and the odd graphs are clearly 2-arc-transitive under the action of $\A_n$.
The necessity has been established in the preceding lemmas, as explained below.
By Lemma~\ref{stab-1}, the vertex stabilizer $G_\a$ is either soluble, or falls into one of the parts~(1) and (2) of that lemma, according to whether $G_\a^{\Ga(\a)}$ is almost simple or affine.
For the case where $G_\a^{\Ga(\a)}$ is almost simple, Lemmas~\ref{main-case}-\ref{PSL(2,9)} show that $\Ga$ is a complete graph or an odd graph.
For the affine case, Lemmas~\ref{soluble-case}-\ref{stab-2} verify the theorem.
\qed
\vskip 30pt
\section{Introduction}
We consider the inhomogeneous Landau equation:
\begin{equation}\label{e:mainL}
\partial_t f + v\cdot \nabla_x f = Q_L(f,f) := \mbox{tr}( \bar a^f D_v^2 f) + \bar c^f f,
\end{equation}
where, for $f:\mathbb R_+\times \mathbb R^3 \times \mathbb R^3\to \mathbb R$,
\begin{equation}\label{e:coeffs}
\begin{split}
\bar a^f(t,x,v) &:= a_{\gamma}\int_{\mathbb R^3} \Pi(v_*) |v_*|^{\gamma+2} f(t,x,v-v_*)\, \mathrm{d} v_*,\\
\bar c^f(t,x,v) &:= \begin{cases} c_{\gamma} \int_{\mathbb R^3} |v_*|^{\gamma} f(t,x,v-v_*)\, \mathrm{d} v_*, &\gamma >-3,\\
f, &\gamma = -3,\end{cases}
\end{split}
\end{equation}
and $\Pi(z) := \left(Id - \dfrac {z\otimes z}{|z|^2}\right)$. The constants $a_\gamma$ and $c_\gamma$ are positive and only depend on $\gamma$. The constant $\gamma$ belongs to the range of very soft potentials, i.e. $\gamma \in [-3,-2]$. For $\gamma\le -2$ the Landau collision operator $Q_L$ shares several similarities with the semilinear operator $\Delta f + f^2$ and a question naturally arises: do smooth solutions to (\ref{e:mainL}) stay bounded for all times or do they become unbounded after a finite time? We say that a solution $f$ blows up at a time $T<+\infty$ if it is well defined for all $0<t<T$, and if
$$
\lim_{t\to T^-} \|f(t,x,v)\|_{L^{\infty}( \mathbb R^3 \times \mathbb R^3)}=+\infty.
$$
We call $T$ the blow-up time for $f$. This question of regularity versus singularity formation for (\ref{e:mainL}) is, to the present day, still unanswered.
The existence of smooth solutions to the inhomogeneous Landau equation (\ref{e:mainL}) for very soft potentials is known for short times \cite{he2014boltzmannlandau, HST2019rough, HST2018landau}, and for long times under
simplifying assumptions on the initial data. For example, when the initial data is sufficiently close to
a Maxwellian equilibrium state, solutions exist globally and converge to equilibrium \cite{guo2002periodic}.
Solutions are also known to exist when initial data are near vacuum in the cases of moderately soft potentials \cite{luk2019vacuum} and hard potentials \cite{chaturvedi2020vacuum}. Recently, several new studies concerning regularity and continuation criteria have appeared; these results are based on {\em conditional} assumptions on the hydrodynamic quantities, see \cite{golse2016, cameron2017landau, henderson2017smoothing, chen2009smoothing, liu2014regularization}.
The situation for Vlasov-Poisson-Landau is less well-studied; see \cite{Guo2012,strain2013vlasov,chaturvedi2021vlasov} for results near global Maxwellians and \cite{duan2020vlasov} for results near some local Maxwellian data.
The available literature on the homogeneous version of (\ref{e:mainL}) for very soft potentials is larger. In \cite{arsenev-peskov, villani1996global, alexandre2004landau} and, later, in \cite{desvillettes2015landau}, the authors show global existence of weak solutions. Recently, it was proven that weak solutions become instantaneously regular and smooth for short times, see \cite{silvestre2015landau} and \cite{gualdani2017landau}. Whether they stay smooth for all time or become unbounded after a finite time is, however, also in this case still open. Recent research has also produced several conditional results. These include uniqueness results in \cite{Fournier2010} for solutions that belong to the space $L^1(0,T,L^\infty(\mathbb{R}^3))$ and in \cite{ChGu20} for solutions in $L^\infty(0,T,L^p(\mathbb{R}^3))$ with $p>\frac{3}{2}$, as well as regularity results for solutions in $L^\infty(0,T,L^p(\mathbb{R}^d))$ with $p>\frac{d}{2}$ \cite{silvestre2015landau, gualdani2017landau}. We also mention the long-time asymptotic results for weak solutions from \cite{CM2017verysoft} and \cite{CDH15}.
In the very recent manuscript \cite{Desvillettes-He-Jiang-2021} the authors studied the behavior of solutions in the space $L^\infty(0,T,\dot{H}^1(\mathbb{R}^3))$. They show that for general initial data there exists a time $T^*$ after which the weak solution belongs to $L^\infty((T^*, +\infty), {H^1}(\mathbb{R}^3))$. This result is in accordance with the one in \cite{GGIV2019partial}, where the authors showed that the set of singular times for weak solutions has Hausdorff dimension at most $\frac{1}{2}$. On the other hand, global existence of bounded smooth solutions has been shown for an {\em{isotropic}} modification of the Landau equation, $\partial_t f= \mbox{tr}(\bar a^f)\, \Delta f + f^2$, in \cite{gualdani2014radial}.
For the non-cutoff Boltzmann equation, the existence theory is at a roughly similar stage as for the Landau equation. Global existence is known for initial data close to equilibrium \cite{gressman2011boltzmann}.
In the space homogeneous case, solutions are known to exist globally when $\gamma +2s\geq 0$ \cite{he2012homogeneous-boltzmann}. (The parameter $s$ will be defined in Section \ref{s:boltzmann} below.) See also
\cite{lu2012measure, morimoto2016measure} for global existence of measure-valued solutions in the homogeneous setting, which regularize in some cases. Short-time existence for the inhomogeneous equation was established in various regimes in, e.g. \cite{amuxy2010regularizing, amuxy2011bounded, HST2019boltzmann}. There is also a program of conditional regularity (see \cite{silvestre2016boltzmann,imbert2016weak, imbert2018decay, imbert2019smooth}) that gives $C^\infty$ smoothness in the case $\gamma + 2s \in [0,2]$, provided the mass, energy, and entropy densities remain under control.
As the question of whether or not solutions to the Landau and Boltzmann equations exhibit finite-time singularities remains an open challenge, it is natural to narrow down the search to certain kinds of singularities. Our goal is to investigate, and {\em eliminate the existence of}, one particular breakdown mechanism, which is usually called \emph{approximately self-similar blowup}.
Self-similar singularities are very common in nonlinear partial differential equations and can come in many different forms; see \cite{eggers2015singularities,barenblatt1996scaling} for many examples and detailed discussions.
Here we consider a singularity to be (approximately) \emph{self-similar} if the solution is of the following form
\begin{align}
f(t,x) = \frac{1}{\mu(t)} g\left(\frac{x}{\lambda(t)} \right) + \mathcal{E}(t,x),
\end{align}
where $\mathcal{E}$ is some error (possibly zero) which is less singular than the self-similar term and where $\mu(t),\lambda(t)$ are rates such that $\lim_{t \nearrow T} \mu(t) = 0$
and $\lim_{t \nearrow T} \lambda(t) = 0$. The function $g$ is called the ``(inner) profile''.
In the literature, self-similar singularities are roughly divided into two kinds (terminology dating from at least \cite{BZ72}): Type I self-similar, in which the blow-up rate is determined by dimensional analysis (i.e. the scaling symmetry of the equations); and Type II self-similar, in which the rate is determined also by other additional effects, for example by an eigenvalue problem associated to the inner blow-up profile.
In this context, two well-studied equations are the semilinear heat equation and the Keller-Segel system.
The semilinear heat equation, despite its simplicity, displays both types of singularities; see reviews in \cite{matano2004nonexistence,quittner2019superlinear,C17} and the references therein.
The Keller-Segel equations displays type II self-similar finite time and infinite time singularities \cite{CGMN19,GM16,RS14}, and with nonlinear diffusion, can display type I self-similar singularities \cite{BL09}.
Another semilinear parabolic system studied in this context is the incompressible Navier-Stokes equations: finite-energy type I self-similar solutions were ruled out in \cite{NRS96,Tsai1998}; see also \cite{chae2007,SS09}. Self-similar singularities are also intensely studied in the setting of dispersive equations, such as, for example, the nonlinear Schr\"odinger equations \cite{MR05} and wave equations.
One significant difference between the Landau and Boltzmann equations and the semilinear equations discussed above (and many quasilinear problems too) is a two-parameter scaling symmetry.
That is, if $f$ is a solution to either (\ref{e:mainL}), or (\ref{e:mainB}), then for any $\alpha \in \mathbb R$ and $\lambda>0$, so is
\[f_{\lambda,\alpha}(t,x,v) := \lambda^{\alpha + 3+\gamma} f(\lambda^\alpha t, \lambda^{1+\alpha}x, \lambda v).\]
This provides a much wider and more subtle class of potential type I singularities (and likely type II as well) than in equations with a one-parameter scaling symmetry.
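This two-parameter invariance can be checked directly from \eqref{e:coeffs}: the substitution $u=\lambda v_*$ in the convolutions gives
\[
\bar a^{f_{\lambda,\alpha}}(t,x,v) = \lambda^{\alpha-2}\, \bar a^{f}(\lambda^\alpha t,\lambda^{1+\alpha}x,\lambda v), \qquad
\bar c^{f_{\lambda,\alpha}}(t,x,v) = \lambda^{\alpha}\, \bar c^{f}(\lambda^\alpha t,\lambda^{1+\alpha}x,\lambda v),
\]
so that each of the terms $\partial_t f_{\lambda,\alpha}$, $v\cdot\nabla_x f_{\lambda,\alpha}$, $\mbox{tr}(\bar a^{f_{\lambda,\alpha}} D_v^2 f_{\lambda,\alpha})$ and $\bar c^{f_{\lambda,\alpha}} f_{\lambda,\alpha}$ equals $\lambda^{2\alpha+3+\gamma}$ times the corresponding term for $f$, evaluated at $(\lambda^\alpha t, \lambda^{1+\alpha}x,\lambda v)$.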
These kinds of two-parameter symmetry groups are common in fluid mechanics and kinetic theory. Some examples include the Burgers equation, which undergoes type I self-similar shock formation \cite{CGM18,eggers2015singularities}, and the isentropic, compressible Euler equations, for which there are many self-similar finite-time singularities, including implosions \cite{MRRS19I,MRRS19II} and shocks \cite{BSV19,buckmaster2019formation,christodoulou2007formation,christodoulou2014compressible}.
Another example is given by the incompressible Euler equations, for which the existence of smooth finite-time singular solutions remains open.
Type I self-similar singularities have been ruled out under a variety of decay and/or integrability conditions on the profile \cite{chae2007,CS13,CW18,ChaeTsai15}; nevertheless, there is strong numerical evidence that type I self-similar singularity formation is possible, at least along the boundary \cite{luo2014potentially}.
Moreover, in H\"older regularity classes (as opposed to smooth ones), there do exist type I self-similar finite-time blow-up solutions \cite{elgindi2019finite,EJ19}. Smooth type I self-similar blow-up solutions have also been constructed for some toy models of the Euler equations, such as the Choi-Kiselev-Yao (CKY) model \cite{hou2015self} and the de Gregorio model \cite{CHD19}.
Finally, the four-dimensional gravitational Vlasov-Poisson equations have a family of type I self-similar finite-time singularities \cite{LMR08I,LMR08II}.
One significant feature that distinguishes the Landau and Boltzmann equations (with singular, non-cutoff collision kernel) from all of the examples just discussed is the presence of hypoelliptic (or parabolic, if homogeneous) smoothing.
For example, this is likely to rule out the kind of regularity-dependent blow-up dynamics observed in Burgers and Euler \cite{CGM18,elgindi2019finite,EJ19}.
In light of the rich number and types of blow-up profiles found in similar equations, it makes sense to narrow down the search for potential singularities by eliminating them one at a time. This work can be considered a first study in this direction, endeavoring to rule out as many kinds of Type I singularities as possible.
Our first main result is summarized in the following statement, which will be presented and discussed in detail in the next section:
{\bf{Main Theorem Summary.}} {\em{Let $\gamma \in [-3,-2]$ and let $f$ be a smooth solution to \eqref{e:mainL} with mass and kinetic energy locally bounded, namely
\begin{align*}
f \geq 0,\;f \in C^\infty((0,T) \times \mathbb R^3_x \times \mathbb R^3_v), \quad \forall R>0, \quad \sup_{0 < t < T} \int_{\abs{x} \leq R}\int \left(1 + \abs{v}^2\right) f \, \mathrm{d} v \, \mathrm{d} x < \infty,
\end{align*}
for any $T>0$.
Then, if $f$ has the form
\begin{equation}\label{e:ansatz}
f(t,x,v) = \phi(t,x,v) + \frac 1 {(T-t)^{1+\theta(3+\gamma)}}\, g\left( \frac x {(T-t)^{1+\theta}}, \frac v {(T-t)^{\theta}}\right),
\end{equation}
with $-1 < \theta < 1/2$, $g$ smooth, $\phi$ not too singular as $t \nearrow T$, and $g$ bounded and satisfying mild decay conditions, then we must have $g\equiv 0$. }}\\
{\bf{Corollary.}} {\em{Let $\gamma \in [-3,-2]$ and let $f$ be a smooth solution to the homogeneous Landau equation
$$
\partial_t f =\mbox{tr}( \bar a^f D_v^2 f) + \bar c^f f.
$$
Then, if $f$ has the form
\begin{equation*}
f(t,v) = \phi(t,v) + \frac 1 {(T-t)^{1+\theta(3+\gamma)}}\, g\left(\frac v {(T-t)^{\theta}}\right),
\end{equation*}
with $1/|\gamma| < \theta < 1/2$, $g$ smooth, $\phi$ not too singular as $t \nearrow T$, and $g$ bounded and satisfying mild decay conditions, then we must have $g\equiv 0$. }}\\
In the second part of our manuscript we extend our blow-up analysis to the Vlasov-Poisson-Landau system ($\gamma =-3$):
\begin{equation}\label{e:LCP}
\begin{split}
&\partial_t f + v\cdot \nabla_x f -\nabla_x E \cdot\nabla_v f = Q_L(f,f),\\
& -\Delta_x E = \pm 4\pi \int_{\mathbb R^3} f(t,x,v) \, \mathrm{d} v,
\end{split}
\end{equation}
and to the non-cutoff Boltzmann equation:
\begin{equation}\label{e:mainB}
\partial_t f + v\cdot \nabla_x f = Q_B(f,f) := \int_{\mathbb R^3} \int_{\mathbb S^{2}} B(v-v_*,\sigma) [f(v_*')f(v') - f(v_*)f(v)]\, \mathrm{d} \sigma \, \mathrm{d} v_*,\\
\end{equation}
(See Section \ref{s:boltzmann} for the definitions of $B(v-v_*,\sigma)$, $v'$, and $v_*'$.) For both models we similarly rule out existence of solutions of the form (\ref{e:ansatz}), see Theorem \ref{t:boltzmann} and Theorem \ref{main_VLP}. \\
Let us briefly comment on the admissible values of $\theta$. Define the self-similar variables
\begin{equation*}
y = \frac x {(T-t)^{1+\theta}}, \quad w = \frac v {(T-t)^\theta}.
\end{equation*}
In these variables, our ansatz becomes
\begin{equation*}
f(t,x,v) = \phi\left(t,(T-t)^{1+\theta}y, (T-t)^\theta w\right) + \frac 1 {(T-t)^{1+\theta(3+\gamma)}} g(y,w).
\end{equation*}
We consider all $\theta$ that satisfy simultaneously $1+\theta >0$ and $1+\theta(3+\gamma) > 0$ for all $\gamma \in [-3,-2]$, i.e. we want a solution that forms a singularity at a point in space. This implies $\theta >-1$.
In the case of the homogeneous Landau equations we additionally have the requirement $1/\abs{\gamma} < \theta$ because otherwise, our ansatz violates conservation of mass and is therefore not an admissible solution.
To motivate the upper bound on $\theta$ that appears in our results, we recall from \cite{gualdani2017landau, silvestre2015landau} that if $f$ is a solution to the homogeneous Landau equation which belongs to $L^\infty(0,T,L_v^q(\mathbb{R}^3))$ for some $q > 3/(5+\gamma)$, then $f$ is uniformly bounded.
Hence, it is natural to require blow up in all of $L_v^q(\mathbb{R}^3)$ with $ 3/(5+\gamma) < q \le +\infty$ at $x = 0$. This motivates the requirement $\theta <\frac{1}{2}$, which also appears in the proof in order to control error terms coming from the interaction of $\phi$ and $g$ near the singularity.
For the Boltzmann equation, we will take the same ansatz, with $\theta > -1$ for the same reasons mentioned above. The upper restriction on $\theta$ that arises from our proof in the Boltzmann case is $1/(2s)$ rather than $1/2$. Note that $2s$ is the order of the diffusion generated by the collision operator, whereas the Landau collision operator gives rise to diffusion of order $2$. For homogeneous Boltzmann, conservation of mass also holds, which rules out values of $\theta$ smaller than $1/|\gamma|$ in our ansatz.
\begin{remark}
Note that $\theta < 0$ and $\theta > 0$ correspond to qualitatively very different blow-up scenarios. For $\theta > 0$ the distribution function forms a singularity at $v=0$; that is, many particles are slowing to a halt near the singularity. For $\theta < 0$, many particles are accelerated to unbounded velocities near the point of singularity. Due to the conservation of energy, the latter kind of singularity cannot occur in the homogeneous equations. However, there is, a priori, no reason why such a singularity cannot occur in the inhomogeneous equations, such as (\ref{e:LCP}) or (\ref{e:mainL}). In fact, precisely this kind of approximately Type I self-similar singularity with accelerating particles occurs in the $4$-dimensional gravitational Vlasov equation \cite{LMR08I,LMR08II}.
\end{remark}
\section{Main results on the Landau equation}\label{Main section}
To formulate our results properly, we need to specify an appropriate class of solutions. In that class, we will show that breakdown mechanisms of the form \eqref{e:ansatz} cannot occur.
The conditions we impose on $\phi$ and $g$ in \eqref{e:ansatz} are mild, but somewhat tedious to state. For convenience, by shifting time, we will take $t=0$ to be the potential blowup time, and assume that $f$ is defined for $(t,x,v) \in(-T,0)\times \mathbb R^3 \times \mathbb R^3$ for some $T>0$. Hence, we write
\begin{equation}\label{e:ansatzII}
f(t,x,v) = \phi(t,x,v) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}\, g\left( \frac x {(-t)^{1+\theta}}, \frac v {(-t)^{\theta}}\right),
\end{equation}
or
\begin{equation*}
f(t,x,v) = \phi\left(t,(-t)^{1+\theta}y, (-t)^\theta w\right) + \frac 1 {(-t)^{1+\theta(3+\gamma)}} g(y,w),
\end{equation*}
in the self-similar variables
\begin{equation}\label{e:self}
y := \frac x {(-t)^{1+\theta}}, \quad w := \frac v {(-t)^\theta}.
\end{equation}
The first condition is that the mass and kinetic energy of $f$ are locally bounded,
\begin{align}\label{e:f-condition}
f \geq 0, \quad \forall R > 0, \quad \sup_{-T < t < 0} \int_{\abs{x} < R} \int \left(1 + \abs{v}^2\right) f \, \mathrm{d} v \, \mathrm{d} x < \infty,
\end{align}
in particular, we do not require the solution to decay as $x \to \infty$. Our analysis is, therefore, valid also for periodic domains and homogeneous solutions (that is, $x$-independent solutions).
The second condition is that the singularity occurs only at the blow-up space-time point $(t,x)=(0,0)$:
\begin{align}\label{e:singularity}
f \in C^\infty\left( \left((-T,0] \times \mathbb R^3 \times \mathbb R^3\right) \setminus \left(\set{0} \times \set{0}\times \mathbb R^3\right) \right).
\end{align}
The third condition concerns the inner profile $g$. For all $1 \leq p \leq \infty$, $0\leq j\leq 2$, $0\leq \ell \leq 1$, we require
\begin{eqnarray}\label{e:g-smooth}
\begin{array}{cc}
g \in C^\infty(\mathbb R^3 \times \mathbb R^3), \quad
D_w^jD_y^\ell g \in L^{\infty}_{y, \rm loc} L^{p}_{w}.
\end{array}
\end{eqnarray}
\begin{remark}
In fact, one can use the slightly weaker condition $g \in L^\infty_{y,\rm loc} L^1_w \cap L^\infty_{y,\rm loc} L^p_w \cap C^\infty$ for some $p > \frac{3}{\gamma + 5}$; however, for simplicity of exposition we will use the stronger assumption \eqref{e:g-smooth}.
\end{remark}
The fourth condition ensures that, near the singularity, the contribution of $\phi$ is small compared to that of $g$ in the natural self-similar frame. In this regard, the function $\phi$ is allowed to form a singularity at a rate that is `sub-critical' with respect to the scaling.
First note that
$$
D_v^jD_x^{\ell} f (t,x,v) = D_v^jD_x^{\ell} \phi + (-t)^{-1-\theta(3+\gamma)-\ell(1+\theta)-j\theta}D_w^jD_y^{\ell} g(y,w).
$$
We would like to compare
$$\sup_{|y|\le R} \| D_v^jD_x^{\ell} \phi \|_{L^p_v}$$
with (using the definition of the self-similar coordinates \eqref{e:self})
$$ (-t)^{-1-\theta(3+\gamma)-\ell(1+\theta)-j\theta}\sup_{|y|\le R} \| D_w^jD_y^{\ell} g \|_{L^p_v} = (-t)^{-1-\theta(3+\gamma)+3\theta /p-\ell(1+\theta)-j\theta}\sup_{|y|\le R} \| D_w^jD_y^{\ell} g \|_{L^p_w}.
$$
Let's consider first $\ell=j=0$;
we compare the terms
$$\sup_{|y|\le R} \| \phi \|_{L^p_v} \quad \textrm{vs}\quad (-t)^{-1-\theta(3+\gamma)+3\theta /p}\sup_{|y|\le R} \| g \|_{L^p_w}.$$
If $\theta$ and $p$ are such that $-1-\theta(3+\gamma)+3\theta /p <0$, we require $\phi$ to satisfy
\begin{align}\label{case2}
\lim_{t \to 0} (-t)^{1+\theta(3+\gamma)-3\theta /p} \sup_{|y|\le R} \| \phi \|_{L^p_v}=0.
\end{align}
In this case, $\sup_{|y|\le 1} \| f \|_{L^p_v}$ blows up with a rate $(-t)^{-1-\theta(3+\gamma)+3\theta /p}$ dictated by $g$ and the $\phi$ contribution is at least slightly less singular.
This happens for all $p\ge 1$ when $\theta < \frac{1}{|\gamma|}$, and for $p>\frac{3\theta}{1+\theta(3+\gamma)}$ when $\theta \ge\frac{1}{|\gamma|}$; see the sketch below. Note that this condition with $p=\infty$ implies $g\geq 0$, by taking $t \to 0$.
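These thresholds are simply sign conditions on the exponent $e(p):=-1-\theta(3+\gamma)+3\theta/p$. The following Python sketch (the helper names are ours) illustrates them in the case $\gamma=-3$, where $1/|\gamma|=1/3$.
\begin{verbatim}
# Sign of the blow-up exponent e(p) = -1 - theta*(3+gamma) + 3*theta/p
def exponent(p, theta, gamma):
    return -1.0 - theta * (3.0 + gamma) + 3.0 * theta / p

gamma = -3.0

theta = 0.25   # theta < 1/|gamma|: sub-critical for every p >= 1
assert all(exponent(p, theta, gamma) < 0 for p in (1.0, 1.5, 3.0, 1e9))

theta = 0.45   # theta >= 1/|gamma|: need p above the threshold
p_star = 3.0 * theta / (1.0 + theta * (3.0 + gamma))  # = 1.35 here
assert exponent(2.0 * p_star, theta, gamma) < 0   # sub-critical
assert exponent(1.0, theta, gamma) > 0            # super-critical below p_star
\end{verbatim}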
Generalizing to derivatives, we assume that for all $1 \leq p \leq \infty$ such that $1 + \theta(3+\gamma) - \tfrac{3\theta}{p} \geq 0$, for every $R > 0$, $0\leq i\leq 1$, $0 \leq j \leq 2$, $0 \leq \ell \leq 1$, the function $\phi$ satisfies
\begin{equation} \label{e:lim-phi-t1}
\lim_{t \to 0} (-t)^{1+ \theta(3+\gamma) - \tfrac{3\theta}{p} + i + (1+\theta)\ell + \theta j} \sup_{\abs{y} \leq R} \norm{\partial_t^i D_v^j D_x^\ell \phi(t,(-t)^{1+\theta}y,\cdot)}_{L^p_v} = 0.
\end{equation}
Our main result for the Landau equation is summarized in the following theorem:
\begin{theorem}\label{t:landau}
Let $\gamma \in [-3,-2]$, {$-1 < \theta < 1/2$}. Let $f$ be a smooth solution of the Landau equation \eqref{e:mainL} that satisfies \eqref{e:f-condition} and \eqref{e:singularity}.
Assume that $\phi$ satisfies \eqref{e:lim-phi-t1}.
For $g$, assume it satisfies \eqref{e:g-smooth} and that there exist $h$ and $q$ such that
\begin{align*}
g(y,w) = q(w) + h(y,w),
\end{align*}
with
$$(1+ \abs{y} + |w|)h\in L_{y,w}^1(\mathbb R^6)\quad \textrm{and} \quad q \in L_w^1(\mathbb R^3).$$
Finally, if $\theta = \pm1/3$ we additionally assume that
$$(1+ \abs{y}\abs{w}^2 + |w|^3) h \in L^1_{y,w}(\mathbb R^6) \quad \textrm{and} \quad (1+ |w|^2) q \in L^1_w(\mathbb R^3).$$
Then, for any solution to the Landau equation \eqref{e:mainL} of the form
\begin{equation*}
f(t,x,v) = \phi(t,x,v) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}\, g\left( \frac x {(-t)^{1+\theta}}, \frac v {(-t)^{\theta}}\right),
\end{equation*}
we must have $g\equiv 0$ and hence no approximate self-similar singularity of this type can occur.
\end{theorem}
\begin{remark}
Note that the inhomogeneous problem could have a self-similar profile with $q \neq 0$.
In other equations, there are type I self-similar singularities with inner profiles that do not decay at infinity (although the solution does), such as in the semilinear heat equation \cite{GigaKohn85}, and even profiles that grow at infinity, such as in shock formation in Burgers \cite{CGM18,eggers2015singularities} and in the CKY model \cite{hou2015self}. Numerical evidence suggests that such singularities exist also in the incompressible Euler equations \cite{luo2014potentially}.
At the current moment, we do not know how to classify potential singularities with inner profiles that grow at infinity.
\end{remark}
\begin{remark}
If one knows a priori that $g \in L^{1}_{y,w}$, then it suffices to assume $(1 + |w|)g\in L_{y,w}^1(\mathbb R^6)$ (and $(1+ |w|^3) h \in L^1_{y,w}$ if $\theta = \pm 1/3$).
\end{remark}
Next, we specialize our analysis to the homogeneous Landau equation. Our next theorem shows, essentially, that if a solution to the homogeneous Landau equation develops a singularity, then that singularity either (i) is not Type I self-similar, or (ii) is Type I self-similar with a profile $g \not\in L^1(\mathbb R^3)$.
\begin{theorem}\label{theom:Landau_Hom}
Let $\gamma \in [-3,-2)$ and $1/\abs{\gamma} \leq \theta < 1/2$. Assume that $f = f(v,t)$ has finite mass and second moment and satisfies
\begin{align*}
f \in C^\infty((-T,0) \times \mathbb R^3), f \in C^\infty((-T,0] \times \mathbb R^3 ).
\end{align*}
Assume that $\phi$ satisfies \eqref{e:lim-phi-t1}, and that $g = g(v)$ is such that
$$
g\in C^{\infty}(\mathbb R^3), \quad g \in L^1_w.
$$
If $\theta = 1/3$, we additionally assume that $g$ satisfies $(1+|w|^2)g \in L^1_w$.
Then, for any solution to the homogeneous Landau equation of the form
\begin{equation*}
f(t,v) = \phi(t,v) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}\, g\left( \frac v {(-t)^{\theta}}\right),
\end{equation*}
we must have $g \equiv 0$, and hence no approximate self-similar singularity of this type can occur.
\end{theorem}
The outline of the paper is as follows: in Section \ref{s:landau} we prove Theorem \ref{t:landau} and Theorem \ref{theom:Landau_Hom}, in Section \ref{s:landau-poisson} we investigate the Vlasov-Landau-Poisson system and in Section \ref{s:boltzmann} the Boltzmann equation.
\subsection{Notation}
We will employ the notation $\langle \cdot \rangle = \sqrt{1+|\cdot|^2}$ throughout. When we say $t\to 0$, we always mean that $t$ increases to $0$ through negative values. We will write $A\lesssim B$ when $A\leq CB$ for some universal constant $C$. When integrals appear with no domain, it is assumed that the domain of integration is $\mathbb R^3$. Similarly, norms such as $\|\cdot\|_{L^p}$ are over $\mathbb R^3$, unless stated otherwise.
\section{Proof of Theorem \ref{t:landau}} \label{s:landau}
\subsection{Preliminary lemmas}
First, we recall important global estimates on the coefficients in \eqref{e:mainL}. The proof is standard, but we include a sketch for the readers' convenience.
\begin{lemma}\label{l:coeffs}
Let $\gamma \in [-3,-2]$. With $\bar a^h$ and $\bar c^h$ defined as in \eqref{e:coeffs}, for any $1 \leq p < \frac{-3}{\gamma + 2}$, there exists $C > 0$ such that
\begin{align}
|\bar a^h(v)| &\leq C\|h\|_{L^1(\mathbb R^3)}^{1 + \frac{p}{3}(\gamma+2)}\|h\|_{L^{\frac{p}{p-1}}(\mathbb R^3)}^{-(\gamma+2)p/3}. \label{ineq:a1hi}
\end{align}
Moreover, for any $1 \leq q < \frac{3}{\gamma + 5}$, there exists $C > 0$ such that
\begin{align}
|\bar a^h(v)| &\leq C\|h\|_{L^q(\mathbb R^3)}^{\frac{q}{3}(\gamma+5)}\|h\|_{L^{\infty}(\mathbb R^3)}^{1-(\gamma+5)\frac{q}{3}}.\label{ineq:aloinf}
\end{align}
For any $1 \leq p < \frac{-3}{\gamma + 1}$, there exists $C > 0$ such that
\begin{align}
|\partial_{v_i} \bar a^h (v)| &\leq C\|h\|_{L^1(\mathbb R^3)}^{1 + \frac{p}{3}(\gamma +1)}\|h\|_{L^{\frac{p}{p-1}}(\mathbb R^3)}^{-(\gamma+1)p/3}. \label{ineq:agrad}
\end{align}
Finally, for $\gamma \in (-3,-2]$ and any $1 \leq p < \frac{-3}{\gamma}$, there exists $C > 0$ such that
\begin{align}
\abs{\bar c^h(v)} &\leq C\|h\|_{L^1(\mathbb R^3)}^{1 + \frac{p}{3}\gamma} \|h\|_{L^{\frac{p}{p-1}}(\mathbb R^3)}^{-\gamma p/3}. \label{ineq:c}
\end{align}
\end{lemma}
\begin{proof}
For $s\in (-3,0)$, splitting the integral into $\abs{v_*} \leq R$ and $\abs{v_*} > R$ and applying H\"older's inequality on each piece, we have, for $1 \leq p < \frac{3}{3+s} < r \leq \infty$,
\begin{align}
\left| \int_{\mathbb R^3} h(v-v_*) \abs{v_*}^s \, \mathrm{d} v_* \right| \lesssim R^{s +3 - 3/r} \norm{h}_{L^r} + R^{s + 3 - 3/p}\norm{h}_{L^p}. \label{ineq:absvstarin}
\end{align}
Optimizing in $R$ gives the estimates \eqref{ineq:a1hi}, \eqref{ineq:aloinf}, and \eqref{ineq:c} (using also $|\Pi(v_*)| \leq 1$).
For \eqref{ineq:agrad}, one first integrates by parts and uses $|\partial_{v_i}(\Pi(v_*) |v_*|^{\gamma+2})|\lesssim |v_*|^{\gamma+1}$ before applying \eqref{ineq:absvstarin} again.
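For instance, \eqref{ineq:a1hi} follows from \eqref{ineq:absvstarin} with $s=\gamma+2$, taking the lower exponent equal to $1$ and the higher one equal to $\frac{p}{p-1}$ (recall $|\Pi(v_*)|\leq 1$): this gives
\[
|\bar a^h(v)| \lesssim R^{\gamma+2+\frac{3}{p}}\,\|h\|_{L^{\frac{p}{p-1}}(\mathbb R^3)} + R^{\gamma+2}\,\|h\|_{L^{1}(\mathbb R^3)};
\]
choosing $R = \left(\|h\|_{L^1}/\|h\|_{L^{\frac{p}{p-1}}}\right)^{p/3}$ balances the two terms and yields the stated bound.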
\end{proof}
The next lemma ensures that the formal identity $\int_{\mathbb R^3}Q_{L}(g,g) (1 + |w|^2) \, \mathrm{d} w = 0$ and entropy dissipation inequality $\int_{\mathbb R^3}\log g Q_{L}(g,g) \, \mathrm{d} w \leq 0$ are valid under our assumptions on $g$.
\begin{lemma}\label{l:invariants}
Let $\chi \in C^\infty(B(0,2))$ be a smooth cut-off function such that $\chi = 1$ on $B(0,1)$. With $g \in L_w^1 \cap L_w^\infty(\mathbb R^3) \cap C^\infty$, we have
\begin{itemize}
\item
$$\lim_{R \to \infty}\int_{\mathbb R^3} \chi\left( \frac{w}{R} \right) Q_{L}(g,g) \, \mathrm{d} w = 0.$$
\item If, in addition, $g$ satisfies $|w|^2 g \in L_w^1 $ we have
$$\lim_{R \to \infty}\int_{\mathbb R^3} \chi\left( \frac{w}{R} \right)|w|^2 Q_{L}(g,g) \, \mathrm{d} w = 0.$$
\item If, in addition, $g$ satisfies
\begin{eqnarray}\label{e:entropy}
& g \log g \in L_w^1, \quad \nabla \sqrt{g} \in L_w^2,
\end{eqnarray}
then we have
$$\lim_{R \to \infty}\int_{\mathbb R^3} \chi\left( \frac{w}{R} \right) \log g Q_{L}(g,g) \, \mathrm{d} w \leq 0.$$
\end{itemize}
\end{lemma}
\begin{proof}
Define
\begin{align*}
\chi_R(w) : = \chi(w/R);
\end{align*}
notice that
\begin{align}
\abs{\nabla^j \chi_R} \lesssim R^{-j}, \label{ineq:gradchi}
\end{align}
and moreover, the derivatives are only supported in the region $w \approx R$.
Since $Q_L$ can be written as
\[Q_L(g,g) = \nabla_v \cdot \left(\int_{\mathbb R^3} \Pi(v_*) |v_*|^{\gamma+2} [g(v-v_*)\nabla_v g(v) - g(v)\nabla_v g(v-v_*)] \, \mathrm{d} v_* \right),\]
integration by parts gives
\begin{align*}
\int_{\mathbb R^3} \chi_R Q_{L}(g,g) \, \mathrm{d} w& = - \int_{\mathbb R^3} \nabla\chi_R\\
& \quad \cdot \left(\int_{\mathbb R^3} \Pi(w_*) |w_*|^{\gamma+2} [g(w-w_*)\nabla_w g(w) - g(w)\nabla_w g(w-w_*)] \, \mathrm{d} w_* \right) dw \\
& = 2\int_{\mathbb R^3} g \nabla \chi_R \cdot \textrm{div}_w \bar a^g (w)\;dw \\
& \quad + \int_{\mathbb R^3} g \nabla^2 \chi_R : \bar a^g (w)\;dw.
\end{align*}
Since $g \in L_w^p$ for every $p\ge 1$, the integrals above are bounded uniformly in $R$ thanks to \eqref{ineq:aloinf} and \eqref{ineq:agrad}; hence, by \eqref{ineq:gradchi}, letting $R\to +\infty$ we obtain $\lim_{R \to \infty}\int_{\mathbb R^3} \chi\left( \frac{w}{R} \right) Q_{L}(g,g) \, \mathrm{d} w = 0$.
Similarly, integration by parts yields
\begin{align*}
\int_{\mathbb R^3} \chi_R |w|^2 Q_{L}(g,g) \, \mathrm{d} w = \;& 2\int_{\mathbb R^3} g |w|^2\nabla \chi_R \cdot \textrm{div}_w \bar a^g (w)\;dw \\
& + 4 \int_{\mathbb R^3} g \chi_R \textrm{div}_w \bar a^g (w) \cdot w \;dw\\
& + \int_{\mathbb R^3} g |w|^2 \nabla^2 \chi_R : \bar a^g (w)\;dw + 2 \int_{\mathbb R^3} g \chi_R Tr[\bar a^g]\;dw \\
& + \int_{\mathbb R^3} g \sum_{i,j} \bar a_{i,j}^g \left( 2 w_j \partial_i \chi_R + 2 w_i \partial_j \chi_R\right) \;dw.
\end{align*}
All of the terms involving derivatives of the cutoff function
vanish as $R \to \infty$ by the same arguments as used in the previous case.
Since
$$
2 g \textrm{div}_w \bar a^g (w) \cdot w + g Tr[\bar a^g] = 2\frac{g(w)}{|z-w|} \nabla_z g(z) \cdot w + \frac{g(w)g(z)}{|w-z|},
$$
integration by parts yields
\begin{align*}
\int_{\mathbb R^3}2 g \chi_R \left( \textrm{div}_w \bar a^g (w) \cdot w + g Tr[\bar a^g] \right) \;dw & = \int_{\mathbb R^6}\chi_R\frac{g(w)g(z)}{|w-z|} \left[ \frac{1}{|z-w|} - \frac{|z-w|^2}{|z-w|^3}\right] \;dzdw \\
& \quad - 2\int_{\mathbb R^3} \nabla \chi_R \cdot w g \bar{a}^g \; dw \\
& = - 2\int_{\mathbb R^3} \nabla \chi_R \cdot w g \bar{a}^g \; dw.
\end{align*}
Hence, by the assumptions on $g$ and \eqref{ineq:gradchi}, we can pass to the limit $R\to +\infty$ and get
\begin{align*}
\lim_{R \to \infty}\int_{\mathbb R^3} \chi\left( \frac{w}{R} \right) |w|^2 Q_{L}(g,g) \, \mathrm{d} w = 0.
\end{align*}
For the entropy inequality, we begin with
\begin{align*}
\int_{\mathbb R^3} \chi_R \log g Q_{L}(g,g) \, \mathrm{d} w = \;& \int_{\mathbb R^3} (2g\ln g -g) \nabla \chi_R \cdot \textrm{div}_w \bar a^g (w)\;dw \\% \left(\int_{\mathbb R^3} \Pi(v_*) |v_*|^{\gamma+2} [ g(v)\nabla_{v_*} g(v-v_*)] \, \mathrm{d} v_* \right) dv \\
& + \int_{\mathbb R^3} (g\ln g -g) \nabla^2 \chi_R : \bar a^g (w)\;dw \\
& - \int_{\mathbb R^3} \chi_R \left [ \left \langle \bar a^g (w) \frac{ \nabla g }{\sqrt{g}}, \frac{ \nabla g }{\sqrt{g}}\right \rangle - \nabla g \cdot \textrm{div}_w \bar a^g (w)\right] \;dw\\
= \;& \int_{\mathbb R^3} (2g\ln g -2g) \nabla \chi_R \cdot \textrm{div}_w \bar a^g (w)\;dw \\% \left(\int_{\mathbb R^3} \Pi(v_*) |v_*|^{\gamma+2} [ g(v)\nabla_{v_*} g(v-v_*)] \, \mathrm{d} v_* \right) dv \\
& + \int_{\mathbb R^3} (g\ln g -g) \nabla^2 \chi_R : \bar a^g (w)\;dw \\
& - \int_{\mathbb R^3} \chi_R \left [ \left \langle \bar a^g (w) \frac{ \nabla g }{\sqrt{g}}, \frac{ \nabla g }{\sqrt{g}}\right \rangle - g \bar c^g(w) \right] \;dw,
\end{align*}
using the identity
\begin{align*}
\int_{\mathbb R^3} \chi_R \nabla g \cdot \textrm{div}_w \bar a^g (w) \;dw = & -\int_{\mathbb R^3} g \nabla \chi_R \cdot \textrm{div}_w \bar a^g (w)\;dw \\
& + \int_{\mathbb R^3} \chi_R g \bar c^g(w) \;dw.
\end{align*}
Using, once more, \eqref{ineq:aloinf}, \eqref{ineq:agrad}, the fact that $g \in L^p$ for every $p\ge 1$, \eqref{ineq:gradchi}, and this time also \eqref{e:entropy}, we conclude that the first two integrals vanish as $R\to +\infty$.
Moreover,
$$
\left \langle \bar a^g (w) \frac{ \nabla g }{\sqrt{g}}, \frac{ \nabla g }{\sqrt{g}}\right \rangle - g \bar c^g(w)
$$
is an $L^1$ function, thanks to \eqref{ineq:aloinf} and \eqref{e:entropy} for the first term, and \eqref{ineq:c} for the second. Hence, the dominated convergence theorem allows us to pass to the limit:
$$
\lim_{R \to \infty} \int_{\mathbb R^3} \chi_R \log g Q_{L}(g,g) \, \mathrm{d} w = - \int_{\mathbb R^3} \left [ \left \langle \bar a^g (w) \frac{ \nabla g }{\sqrt{g}}, \frac{ \nabla g }{\sqrt{g}}\right \rangle - g \bar c^g(w) \right] \;dw.
$$
The claim follows by noticing that the integral on the right-hand side can be rewritten as
\begin{align*}
\int_{\mathbb R^3} & \left [ \left \langle \bar a^g (w) \frac{ \nabla g }{\sqrt{g}}, \frac{ \nabla g }{\sqrt{g}}\right \rangle - g \bar c^g(w) \right] \;dw \\
&= \int_{\mathbb R^3}\int_{\mathbb R^3} g (w) g (w_*)\, |w-w_*|^{\gamma+2} \left\langle \Pi(w-w_*)\left( \frac{\nabla g}{g}-\frac{\nabla_* g}{g_*}\right), \left( \frac{\nabla g}{g}-\frac{\nabla_* g}{g_*}\right) \right \rangle\;dw\,dw_* \ge 0.
\end{align*}
\end{proof}
\subsection{Proof of Theorem \ref{t:landau}}
As a first step, we plug ansatz \eqref{e:ansatzII} into \eqref{e:mainL} and change to the self-similar variables $y = x/(-t)^{1+\theta}$ and $w = v/(-t)^{\theta}$. The left-hand side of \eqref{e:mainL} transforms as follows:
\begin{align*}
\partial_t f + v\cdot \nabla_x f &= \partial_t \phi + v\cdot \nabla_x \phi + \frac 1 {(-t)^{2+\theta(3+\gamma)}} g + \frac {1} {(-t)^{1+\theta(3+\gamma)}}\left( \frac{\partial y}{\partial t}\cdot \nabla_y g + \frac{\partial w}{\partial t}\cdot\nabla_w g + v \cdot \nabla_x g\right)\\
&= \partial_t \phi + v\cdot \nabla_x \phi + \frac 1 {(-t)^{2+\theta(3+\gamma)}} \left(g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot\nabla_w g + w\cdot \nabla_y g\right).
\end{align*}
Here, and throughout the proof, all terms involving $g$ are evaluated at $(y,w)$, and terms involving $\phi$ are evaluated at $(t,(-t)^{1+\theta}y,(-t)^\theta w)$, unless otherwise noted.
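For the reader's convenience, we record the elementary computations behind the second equality: from $y = x/(-t)^{1+\theta}$ and $w = v/(-t)^{\theta}$,
$$
\frac{\partial y}{\partial t} = \frac{1+\theta}{(-t)}\, y, \qquad \frac{\partial w}{\partial t} = \frac{\theta}{(-t)}\, w, \qquad v\cdot \nabla_x g = \frac{1}{(-t)}\, w\cdot \nabla_y g,
$$
the last identity following from $\nabla_x = (-t)^{-(1+\theta)}\nabla_y$ and $v = (-t)^{\theta} w$.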
Moreover, we have
\begin{align*}
\bar a^f &= \bar a^\phi + \frac 1 {(-t)^{1+\theta(3+\gamma)}} \int_{\mathbb R^3} \Pi(v-v_*) |v-v_*|^{\gamma+2} g\left(\frac x {(-t)^{1+\theta}}, \frac {v_*} {(-t)^\theta}\right) \, \mathrm{d} v_*\\
&= \bar a^\phi + (-t)^{-\gamma\theta-1} \int_{\mathbb R^3} \Pi((-t)^\theta(w- w_*)) |(-t)^\theta(w- w_*)|^{\gamma+2} g(y,w_*) \, \mathrm{d} w_*\\
&= \bar a^\phi + (-t)^{2\theta-1} \int_{\mathbb R^3} \Pi(w- w_*) |w- w_*|^{\gamma+2} g(y,w_*) \, \mathrm{d} w_*\\
&= \bar a^\phi + (-t)^{2\theta-1} \bar a^g,
\end{align*}
Here we used that the projection matrix $\Pi$ is homogeneous of degree zero, so that $\Pi((-t)^\theta(w-w_*)) = \Pi(w-w_*)$, together with the change of variables $v_* = (-t)^\theta w_*$. Similarly, for $\gamma > -3$,
\begin{align*}
\bar c^f &= \bar c^\phi + \frac 1 {(-t)^{1+\theta(3+\gamma)}} c_\gamma\int_{\mathbb R^3} |v-v_*|^\gamma g\left(\frac x {(-t)^{1+\theta}}, \frac {v_*} {(-t)^\theta}\right) \, \mathrm{d} v_*\\
&= \bar c^\phi + \frac 1 {(-t)} \bar c^g.
\end{align*}
Taking into account that $D_v^2 f = D_v^2 \phi + (-t)^{-(1+(5+\gamma)\theta)} D_w^2 g$, the right-hand side of \eqref{e:mainL} becomes
\begin{align*}
Q_L(f,f) &= Q_L(\phi,\phi) + \frac 1 {(-t)^{1+(5+\gamma)\theta}}\mbox{tr}(\bar a^\phi D_w^2 g) + (-t)^{2\theta-1}\mbox{tr} (\bar a^g D_v^2 \phi) + \frac 1 {(-t)^{1+\theta(3+\gamma)}} \bar c^\phi g\\
&\quad + \frac 1 {(-t)} \bar c^g \phi + \frac 1 {(-t)^{2+(3+\gamma)\theta}} \mbox{tr}(\bar a^g D_w^2 g) + \frac 1 {(-t)^{2+\theta(3+\gamma)}}\bar c^g g.
\end{align*}
Multiplying through by $(-t)^{2+\theta(3+\gamma)}$ and rearranging the terms, we have
\begin{equation}\label{e:expansion-L}
\begin{split}
0 = & g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{L,w}(g,g)\\
& - (-t)^{1-2\theta} \mbox{tr}(\bar a^\phi D_w^2 g) - (-t)\bar c^\phi g - (-t)^{1+\theta(3+\gamma)}\bar c^g \phi \\
& - (-t)^{1+\theta(5+\gamma)} \mbox{tr}(\bar a^g D_v^2\phi) + (-t)^{2+\theta(3+\gamma)} (\partial_t \phi + v\cdot \nabla_x \phi - Q_L(\phi,\phi)),
\end{split}
\end{equation}
where
$$
Q_{L,w}(g,g) = \mbox{tr}(\bar a^g D_w^2 g) + \bar c^g g.
$$
As $t\to 0$, we expect the terms $g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{L,w}(g,g)$ to dominate. To show this, we prove in the next two lemmas that the two error functions
\[\begin{split}
\mathcal E_1(\phi,g) &: = - (-t)^{1-2\theta} \mbox{tr}(\bar a^\phi D_w^2 g) - (-t)\bar c^\phi g - (-t)^{1+\theta(3+\gamma)}\bar c^g \phi\\
\mathcal{E}_2(\phi,g) &: = - (-t)^{1+\theta(5+\gamma)} \mbox{tr}(\bar a^g D_v^2\phi) + (-t)^{2+\theta(3+\gamma)} (\partial_t \phi + v\cdot \nabla_x \phi - Q_L(\phi,\phi)),
\end{split}
\]
decay to zero as $t\to 0$. More precisely:
\begin{lemma}\label{lemma_E_1}
For all $R> 0$ we have
\begin{equation}\label{e:E1}
\lim_{t \to 0} \;\sup_{\abs{y} \leq R} \;\sup_{v \in \mathbb R^3} \abs{\mathcal E_1} = 0.
\end{equation}
\end{lemma}
\begin{proof}
Using \eqref{ineq:aloinf} and assumption \eqref{e:g-smooth}, for all $1 \leq p < 3/(\gamma + 5)$, we estimate the first term in $\mathcal E_1$ as
\begin{align}
|(-t)^{1-2\theta}\mbox{tr}(\bar a^\phi D_w^2g)| & \lesssim (-t)^{1-2\theta} \norm{\phi}_{L^p_v}^{\frac{p}{3}(\gamma + 5)}\norm{\phi}_{L^\infty_v}^{1- \frac{p}{3}(\gamma + 5)} \nonumber \\
& = \left( (-t)^{1 + \theta(\gamma+ 3) - \frac{3\theta}{p}} \norm{\phi}_{L^p_v}\right)^{\frac{p}{3}(\gamma + 5)} \left( (-t)^{1 + \theta(\gamma+ 3)} \norm{\phi}_{L^\infty_v}\right)^{1- \frac{p}{3}(\gamma + 5)}.\label{3.11}
\end{align}
The second factor vanishes for any $\gamma$ and $\theta$, thanks to \eqref{case2} with $p=\infty$. For the first factor, we can still use \eqref{case2} provided $ p > \frac{3\theta}{1+\theta(3+\gamma)}$. Hence, we need
\begin{align*}
\frac{3\theta}{1 + \theta(3+\gamma)} < \frac{3}{\gamma + 5}
\end{align*}
which is fulfilled if $\theta < 1/2$.
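For completeness, here is the algebra behind this condition: when $\theta > 0$ (the only case in which the lower bound is active), both denominators are positive, and cross-multiplying gives
$$
\theta(\gamma + 5) < 1 + \theta(3+\gamma) \iff 2\theta < 1.
$$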
Now let us turn to the term involving $\bar c^\phi$.
By \eqref{ineq:c}, for any $1 \leq p < 3/(3+\gamma)$ we have
\begin{align}\label{c_inf}
\abs{\bar c^\phi} & \lesssim \norm{\phi}_{L^p_v}^{\frac{p}{3}(\gamma+3)} \norm{\phi}_{L^\infty_v}^{1- {\frac{p}{3}(\gamma+3)}}.
\end{align}
Hence, by assumption \eqref{e:g-smooth},
\begin{align*}
(-t) \abs{\bar c^\phi g} \lesssim \left((-t)^{1 + \theta(\gamma +3) - \frac{3\theta}{p}}\norm{\phi}_{L^p_v}\right)^{\frac{p}{3}(\gamma+3)} \left((-t)^{1 + \theta(\gamma +3)}\norm{\phi}_{L^\infty_v}\right)^{1- {\frac{p}{3}(\gamma+3)}}.
\end{align*}
Analogously to the above, the second factor vanishes for any $\gamma$ and $\theta$, thanks to \eqref{case2} with $p=\infty$. For the first one, we still use \eqref{case2} if $ p > \frac{3\theta}{1+\theta(3+\gamma)}$. Hence, we need once more
\begin{align*}
\frac{3\theta}{1 + \theta(3+\gamma)} < \frac{3}{\gamma + 3},
\end{align*}
which is satisfied for any $\gamma$ and $\theta$, since it reduces to $\theta(3+\gamma) < 1 + \theta(3+\gamma)$.
Finally, by assumption \eqref{e:g-smooth} and \eqref{ineq:c} for $g$, we get
\begin{align*}
(-t)^{1+\theta(3+\gamma)}\abs{\bar c^g \phi} \lesssim (-t)^{1+\theta(3+\gamma)}\abs{\phi},
\end{align*}
which vanishes for any $\gamma$ and $\theta$, thanks again to \eqref{case2} with $p=\infty$.
This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{lemma_E_2}
For any $R_1,R_2> 0$ we have
\begin{equation}\label{e:E2}
\lim_{t \to 0}\; \sup_{\abs{y} \leq R_1}\; \sup_{\abs{w} \leq R_2} \;\abs{\mathcal E_2} = 0.
\end{equation}
\end{lemma}
\begin{proof}
First, since $\abs{\bar c^g} + \abs{\bar{a}^g} \lesssim 1$ by assumption \eqref{e:g-smooth},
\begin{align*}
\abs{\mathcal E_2} & \lesssim (-t)^{1+\theta(3+\gamma)}\abs{\phi} + (-t)^{1+\theta(5+\gamma)}\abs{D_v^2\phi} \\
& \quad + (-t)^{1+\theta(3+\gamma) + (1+\theta)} \abs{ (-t)^{-\theta}\partial_t \phi + w\cdot \nabla_x \phi} + |(-t)^{2+\theta(3+\gamma)} Q_L(\phi,\phi)|.
\end{align*}
The first three terms on the right-hand side converge to zero in $L^\infty_{loc}( \mathbb R^6)$ directly by assumption \eqref{e:lim-phi-t1}.
We now look at the collision term
\[ (-t)^{2+\theta(3+\gamma)}Q_L(\phi,\phi) = (-t)^{2+\theta(3+\gamma)} [\mbox{tr}(\bar a^\phi D_v^2\phi)+\bar c^\phi \phi] .\]
Analogously to the previous lemma, we write
\begin{align*}
|(-t)^{2 + \theta(3+\gamma)} \mbox{tr}(\bar a^\phi D_v^2\phi)| \lesssim &\left( (-t)^{1 + \theta(3+\gamma) - 3\theta/p}\|\phi \|_{L^p_v(\mathbb{R}{^3})} \right)^{(\gamma + 5)\frac{p}{3}} \\
&\cdot\left( (-t)^{1 + \theta(3+\gamma)}\|\phi \|_{L^\infty_v(\mathbb{R}{^3})}\right)^{1-(\gamma + 5)\frac{p}{3}} \left( (-t)^{1 + \theta(3+\gamma) + 2\theta}\| D^2_v \phi \|_{L^\infty_v(\mathbb{R}{^3})} \right).
\end{align*}
The second and third factors vanish for any $\gamma$ and $\theta$, thanks to \eqref{case2} with $p=\infty$. The first factor is identical to the one in \eqref{3.11} and vanishes for $p<\frac{3}{5+\gamma}$ and $\theta < \frac{1}{2}$.
To estimate the last term $\bar c^\phi \phi$, we use \eqref{c_inf} and get
\begin{align*}
(-t)^{2+\theta(3+\gamma)} \abs{\bar c^\phi \phi} & \lesssim (-t)^{2+\theta(3+\gamma)} \norm{\phi}_{L^p}^{\frac{p}{3}(\gamma+3)} \norm{\phi}_{L^\infty}^{2 - \frac{p}{3}(\gamma+3)} \\
& \lesssim \left( (-t)^{1 + \theta (3+\gamma) - \frac{3\theta}{p}} \norm{\phi}_{L^p}\right)^{\frac{p}{3}(\gamma+3)} \left( (-t)^{1 + \theta (3+\gamma)} \norm{\phi}_{L^\infty}\right)^{2 - \frac{p}{3}(\gamma+3)},
\end{align*}
with $1 \leq p < 3/(\gamma+3)$. The second factor vanishes for any $\gamma$ and $\theta$, thanks to \eqref{case2} with $p=\infty$. For the first factor we use \eqref{case2} with $\frac{3\theta}{1+\theta(3+\gamma)} < p < \frac{3}{\gamma+3}$. This completes the proof of the lemma.
\end{proof}
Having shown that the dominant terms in (\ref{e:expansion-L}) are
\begin{equation*}
g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{L,w}(g,g),
\end{equation*}
our next step is to show that the only admissible solution to
$$
0 = g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{L,w}(g,g),
$$
is the trivial one $g \equiv 0$.
\begin{proof}[Proof of Theorem \ref{t:landau}]
We multiply \eqref{e:expansion-L} by a general smooth test function $\psi(y,w)$ with compact support in $\mathbb R^6$, and take the limit $t\to 0$. Thanks to \eqref{e:E1} and \eqref{e:E2}, we conclude that $g$ satisfies
\begin{equation}\label{e:yw}
g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{L,w}(g,g) = 0,
\end{equation}
in the sense of distributions. Moreover, using the regularity and decay assumptions \eqref{e:g-smooth} for $g$, we conclude that \eqref{e:yw} holds pointwise for all $(y,w)\in \mathbb R^6$.
The rest of the proof is devoted to showing that the only solution to \eqref{e:yw} is the trivial one, $g \equiv 0$. The argument varies depending on the value of $\theta$; we distinguish three cases: $\theta \neq \pm \frac{1}{3}$, $\theta = 1/3$, and $\theta = -1/3$.
\begin{itemize}
\item Let { \em{$\theta \neq 1/3$, $\theta \neq -1/3$.}} Let $\chi_R(w) = \chi(w/R)$ be a cutoff function and $\varphi \in C^\infty_0(B(0,1))$ a smooth function such that $\int_{\mathbb R^3} \varphi(y) dy = 1$.
For $y_0 \in \mathbb R^3$ define $\varphi_{y_0}(y) := \varphi(y+y_0)$. We multiply \eqref{e:yw} by $\chi_R(w) \varphi_{y_0}(y)$ and integrate in $\mathbb R^6$. Recall the decomposition $g = h(y,w) + q(w)$. Integration by parts yields
\begin{align*}
(1-3\theta)\int_{\mathbb R^6} \chi_R g \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y & - \theta \int_{\mathbb R^6} g \varphi_{y_0} w \cdot \nabla_w \chi_R \;dw \;dy - (1+\theta)\int_{\mathbb R^6} h \chi_R y\cdot \nabla_y \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y \\
&- \int_{\mathbb R^6} h\chi_R w\cdot \nabla_y \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y -3(1+\theta) \int_{\mathbb R^6} h\chi_R \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y \\
= & \int_{\mathbb R^6} \varphi_{y_0} \chi_R Q_{L,w}(g,g) \, \mathrm{d} w \, \mathrm{d} y .
\end{align*}
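The coefficient $(1-3\theta)$ arises from the following integrations by parts, which we record for the reader's convenience (in the second identity we use that $q$ does not depend on $y$, so $\nabla_y g = \nabla_y h$):
\begin{align*}
\theta\int_{\mathbb R^6} \chi_R \varphi_{y_0}\, w\cdot \nabla_w g \, \mathrm{d} w \, \mathrm{d} y &= -3\theta \int_{\mathbb R^6} \chi_R \varphi_{y_0}\, g \, \mathrm{d} w \, \mathrm{d} y - \theta \int_{\mathbb R^6} \varphi_{y_0}\, g\, w\cdot\nabla_w \chi_R \, \mathrm{d} w \, \mathrm{d} y,\\
(1+\theta)\int_{\mathbb R^6} \chi_R \varphi_{y_0}\, y\cdot \nabla_y g \, \mathrm{d} w \, \mathrm{d} y &= -3(1+\theta) \int_{\mathbb R^6} \chi_R \varphi_{y_0}\, h \, \mathrm{d} w \, \mathrm{d} y - (1+\theta) \int_{\mathbb R^6} \chi_R\, h\, y\cdot\nabla_y \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y.
\end{align*}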
We first take the limit $R\to +\infty$.
Note that $\abs{w \cdot \nabla_w \chi_R} \lesssim 1$ by \eqref{ineq:gradchi} and $w \cdot \nabla_w \chi_R$ converges to zero pointwise. Therefore, by the dominated convergence theorem, the second term vanishes. For the collision term, we use Lemma \ref{l:invariants}.
The remaining terms converge by the dominated convergence theorem and the assumptions on $h$ and $q$.
Therefore, we obtain
\begin{align*}
(1-3\theta)\int_{\mathbb R^6} (q+h) \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y& - 3(1+\theta) \int_{\mathbb R^6} h \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y \\
& - \int_{\mathbb R^6} h w\cdot \nabla_y \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y- (1+\theta)\int_{\mathbb R^6} h y\cdot \nabla_y \varphi_{y_0} \, \mathrm{d} w \, \mathrm{d} y =0.
\end{align*}
Next, we take the limit $\abs{y_0} \to \infty$. Thanks to the assumption $(1+\abs{y} + \abs{w})h \in L^1(\mathbb R^6)$, all of the terms involving $h$ vanish as $\abs{y_0} \to \infty$ by the dominated convergence theorem. Hence, the above identity reduces to
\begin{align*}
(1-3\theta)\int_{\mathbb R^3} q(w) \, \mathrm{d} w = 0.
\end{align*}
Since $q \geq 0$ and $\theta \neq 1/3$, we conclude that $q \equiv 0$. Since $g\ge 0$ and $q=0$, we also have $h\ge 0$. We now go back to \eqref{e:yw} with $q=0$, multiply it by $\chi_{R_1}(y) \chi_{R_2}(w)$ with $\chi_{R_1}(y) = \chi(y/R_1)$ and $\chi_{R_2}(w) = \chi(w/R_2)$, and integrate in $\mathbb R^6$. Similarly as above, we first take the limit $R_2 \to +\infty$ and get
\begin{align*}
-2(1+3\theta)\int_{\mathbb R^6} \chi_{R_1} h \, \mathrm{d} w \, \mathrm{d} y & - (1+\theta)\int_{\mathbb R^6} h\, y\cdot \nabla_y \chi_{R_1} \, \mathrm{d} w \, \mathrm{d} y \\
&- \int_{\mathbb R^6} h\, w\cdot \nabla_y \chi_{R_1} \, \mathrm{d} w\, \mathrm{d} y = 0.
\end{align*}
Using the assumption that $(1+\abs{w})h \in L^1(\mathbb R^6)$, we can pass to the limit $R_1 \to +\infty$ in the above equation by the dominated convergence theorem, arguing as above (in particular, $y \cdot \nabla_y \chi_{R_1}$ is uniformly bounded and converges pointwise to zero), and obtain
$$
-2(1+3\theta)\int_{\mathbb R^6} h \, \mathrm{d} w \, \mathrm{d} y =0,
$$
which, since $h \geq 0$ and $\theta \neq -1/3$, implies $h \equiv 0$.
\item Let {\em{$\theta = 1/3.$}} As before, let $\chi_R(w) = \chi(w/R)$ be a cutoff function and $\varphi \in C^\infty_0(B(0,1))$ a smooth function such that $\int_{\mathbb R^3} \varphi(y) dy = 1$, and take $\varphi_{y_0}(y) = \varphi(y + y_0)$. This time we multiply \eqref{e:yw} by $\chi_R(w) |w|^2 \varphi_{y_0}$ for some $y_0 \in \mathbb R^3$ and integrate in $\mathbb R^6$. We obtain
\begin{align*}
-\frac{2}{3} \int_{\mathbb R^6} g \varphi_{y_0} |w|^2 \chi_R \;dwdy &- \frac{1}{3} \int_{\mathbb R^6} g \varphi_{y_0} |w|^2 w \cdot \nabla_w \chi_R \;dwdy \\
-4 \int_{\mathbb R^6} h \chi_R \varphi_{y_0} |w|^2 \;dwdy &- \frac{4}{3} \int_{\mathbb R^6} |w|^2 h \chi_R\, y \cdot \nabla_y \varphi_{y_0} \;dwdy \\
- \int_{\mathbb R^6} |w|^2 \chi_R h\, w \cdot \nabla_y \varphi_{y_0} \;dwdy &= \int_{\mathbb R^6} \varphi_{y_0} \chi_R(w) |w|^2 Q_{L,w}(g,g)\;dwdy.
\end{align*}
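Here the coefficient $-\frac{2}{3} = 1 - 5\theta$ (with $\theta = 1/3$) comes from the elementary identity $\mathrm{div}_w(w\,|w|^2) = 5|w|^2$, which upon integration by parts gives
$$
\theta \int_{\mathbb R^6} \chi_R \varphi_{y_0} |w|^2\, w\cdot\nabla_w g \;dwdy = -5\theta \int_{\mathbb R^6} \chi_R \varphi_{y_0} |w|^2 g \;dwdy - \theta \int_{\mathbb R^6} \varphi_{y_0} |w|^2 g\, w\cdot\nabla_w\chi_R \;dwdy.
$$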
Thanks to the conditions $(1+ |w|^2)q \in L^1_w$ and $(1 + \abs{y}\abs{w}^2 + |w|^3)h \in L^1(\mathbb R^6)$, together with Lemma \ref{l:invariants} for the collision term, we can pass to the limit $R\to +\infty$ using the dominated convergence theorem as above, and we get
\begin{align*}
-\frac{2}{3} \int_{\mathbb R^6} (q+h) \varphi_{y_0} |w|^2 \;dwdy -4 \int_{\mathbb R^6} h \varphi_{y_0} |w|^2 \;dwdy & \\
- \frac{4}{3} \int_{\mathbb R^6} |w|^2 h\, y \cdot \nabla_y \varphi_{y_0} \;dwdy - \int_{\mathbb R^6} |w|^2 h\, w \cdot \nabla_y \varphi_{y_0} \;dwdy &= 0.
\end{align*}
The limit $\abs{y_0} \to +\infty$, using again $(1 + \abs{y}\abs{w}^2 + |w|^3)h \in L^1_{y,w}$, gives
$$
-\frac{2}{3} \int_{\mathbb R^3} q|w|^2 \;dw =0,
$$
which implies $q\equiv 0$. To show that also $h \equiv 0$, we multiply (\ref{e:yw}) with $q=0$ by $\chi_{R_1}(w)\chi_{R_2}(y) |w|^2$ and integrate in $\mathbb R^6$. After taking the limit $R_1 \to +\infty$ we obtain
\begin{align*}
-\frac{2}{3} \int_{\mathbb R^6} h \chi_{R_2}|w|^2 \;dwdy -4 \int_{\mathbb R^6} h \chi_{R_2} |w|^2 \;dwdy & \\
- \frac{4}{3} \int_{\mathbb R^6} |w|^2 h\, y \cdot \nabla_y\chi_{R_2} \;dwdy - \int_{\mathbb R^6} |w|^2 h\, w \cdot \nabla_y \chi_{R_2}\;dwdy &= 0.
\end{align*}
The limit $R_2 \to +\infty$ yields
$$
-\frac{14}{3} \int_{\mathbb R^6} h |w|^2 \;dwdy =0,
$$
which implies, since $h\ge 0$, that $h \equiv 0$.
\item Let {\em{$\theta = -1/3.$}} Mimicking the calculation for the case $\theta =1/3$, we multiply \eqref{e:yw} by $\chi_R(w) |w|^2 \varphi_{y_0}$, integrate over $\mathbb R^6$, and take the limit $R \to +\infty$. We get
\begin{align*}
\frac{8}{3} \int_{\mathbb R^6} (q+h) \varphi_{y_0} |w|^2 \;dwdy -2 \int_{\mathbb R^6} h \varphi_{y_0} |w|^2 \;dwdy & \\
- \frac{2}{3} \int_{\mathbb R^6} |w|^2 h\, y \cdot \nabla_y \varphi_{y_0} \;dwdy - \int_{\mathbb R^6} |w|^2 h\, w \cdot \nabla_y \varphi_{y_0} \;dwdy &= 0.
\end{align*}
The limit $\abs{y_0} \to +\infty$, using again $(1+|w|^3)h \in L^1(\mathbb R^6)$, gives
$$
\frac{8}{3} \int_{\mathbb R^3} q|w|^2 \;dw =0,
$$
which implies $q\equiv 0$. To show that also $h \equiv 0$, we multiply (\ref{e:yw}) with $q=0$ by $\chi_{R_1}(w)\chi_{R_2}(y) |w|^2$ and integrate in $\mathbb R^6$. The limits $R_1, R_2 \to +\infty$ yield
$$
\frac{2}{3} \int_{\mathbb R^6} h |w|^2 \;dwdy =0,
$$
which implies, since $h\ge 0$, that $h \equiv 0$.
\end{itemize}
This finishes the proof of the theorem.
\end{proof}
\section{The Vlasov-Poisson-Landau system} \label{s:landau-poisson}
In this section we analyze the following system:
\[
\partial_t f + v\cdot \nabla_x f + F[f]\cdot \nabla_v f =Q_L(f,f),
\]
with
\[
F[f]=C\int_{\mathbb R^3} \frac{x-z}{|x-z|^3}\left[\int_{\mathbb R^3} f(z,v) \, \mathrm{d} v - n_0 (z)\right] \, \mathrm{d} z,
\]
where $n_0(x) \geq 0$ is a fixed function that models a neutralizing background.
If $C > 0$ we are in the repulsive interaction case; if $C < 0$ we are in the attractive case.
Unlike the Landau and Boltzmann equations, the Vlasov-Poisson-Landau equation has only a one-parameter scaling symmetry, and hence there is only one case to consider: $\gamma = -3$ and $\theta = -\frac 1 3$.
Therefore, our ansatz becomes
\begin{equation}\label{e:ansatz-LCP}
f(t,x,v) = \phi(t,x,v) + \frac{1}{(-t)}g\left(\frac{x}{(-t)^{2/3}}, (-t)^{1/3}v\right).
\end{equation}
For the analysis of the Vlasov-Poisson-Landau system, our proof requires the use of $\log g$ as a test function, which in turn requires the following additional assumptions on the profile:
\begin{eqnarray}\label{e:sec_moment}
(1 + \abs{w})(g \ln g - g) \in L^1_{y,w}, \quad \nabla \sqrt{g} \in L^\infty_y L_w^2, \quad g\in L^1_{y,w}.
\end{eqnarray}
One new detail must be addressed: due to the non-locality in $x$ introduced by the interaction term, we must be more specific about the global structure of the solution.
Our methods can handle any of the following three cases, each of which is physically relevant:
\begin{itemize}
\item[(a)] $n_0 = 0$ and $f \in L^1(\mathbb R^6)$. This case is natural for studying gravitational interactions, where $f$ models the density of stars or galaxies, and hence the interaction is attractive.
\item[(b)] The physical domain is $\mathbb{T}^3_x$ and we take $n_0(x) = n_0 = \frac{1}{( 2\pi )^3} \int_{\mathbb T^3 \times \mathbb R^3} f(x,v)\, dx\, dv$.
This case is most natural for studying periodic perturbations arising in the kinetic theory of plasmas (here $f$ models the density of electrons in a plasma and $n_0$ models a homogeneous background of ions, hence the interactions are repulsive).
\item[(c)] The solution is given by $f(t,x,v) = \mu(v) + h(t,x,v)$, where $\mu$ is a Maxwellian with fixed density, momentum, and temperature, $h \in L^1(\mathbb R^6)$ has average zero, and $n_0 = \int_{\mathbb R^3} \mu(v) dv$. This case is most natural for studying localized perturbations of a homogeneous plasma (here $f$ models the density of electrons in a plasma and $n_0$ models a homogeneous background of ions, hence the interactions are repulsive).
\end{itemize}
Our proof adapts easily to any of these three cases, so we focus on the simplest one, case (a); the extension to cases (b) and (c) is straightforward.
As in the previous section, we will assume $\phi$ satisfies \eqref{e:lim-phi-t1}, which, for $i=0$ and $\theta=\frac{1}{\gamma}=-\frac{1}{3}$, reads
\begin{equation}\label{e:lim-phi-2}
\lim_{t \to 0} (-t)^{1+ \tfrac{1}{p} + \tfrac 2 3\ell -\tfrac 1 3 j} \sup_{\abs{y} \leq R} \norm{D_v^j D_x^\ell \phi(t,(-t)^{2/3}y,\cdot)}_{L^p_v} = 0,
\end{equation}
for all $1 \leq p \leq \infty$, $0\leq j\leq 1$, $0\leq \ell \leq 2$, $R>0$.
Due to the nonlocality in $x$ of the interaction force, we also enforce the condition that the density $\rho_\phi = \int_{\mathbb R^3} \phi(t,x,v) dv$ satisfies
\begin{equation}\label{e:lim-rho}
\lim_{t \to 0} (-t)^{1+ \tfrac{1}{p}} \norm{\rho_\phi(t)}_{L^p_x} = 0,
\end{equation}
for some $p <3$ and also for $p=\infty$ (and hence everything in between by interpolation).
\begin{theorem}\label{main_VLP}
Let $f$ satisfy \eqref{e:f-condition} and \eqref{e:singularity}, $\phi$ satisfy \eqref{e:lim-phi-t1} and \eqref{e:lim-rho}, and $g$ satisfy \eqref{e:g-smooth} and \eqref{e:sec_moment}.
Then for any solution to the Vlasov-Poisson-Landau system of the form
\begin{equation*}
f(t,x,v) = \phi(t,x,v) + \frac{1}{(-t)}g\left(\frac{x}{(-t)^{2/3}}, (-t)^{1/3}v\right),
\end{equation*}
\end{equation*}
we must have $g\equiv 0$.
\end{theorem}
\begin{proof}
Define the self-similar variables
$$
y:= \frac{x}{(-t)^{2/3}}, \quad w := v(-t)^{1/3}.
$$
A brief computation shows that $F[f]$ transforms as
\[
F\left[\phi + \frac{1}{(-t)}g\right]\left( (-t)^{2/3}y \right) = F[\phi ]\left( (-t)^{2/3} y \right) + \frac{1}{(-t)^{4/3}}F[g]\left( y \right) .
\]
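For the reader's convenience, here is the computation behind this identity (the neutralizing background $n_0$ is attributed to the $F[\phi]$ term). The spatial density of the self-similar part is
\[
\int_{\mathbb R^3} \frac{1}{(-t)}\, g\left(\frac{z}{(-t)^{2/3}}, (-t)^{1/3}v\right) \mathrm{d} v = \frac{1}{(-t)^{2}}\, \rho_g\!\left(\frac{z}{(-t)^{2/3}}\right), \qquad \rho_g(\zeta) := \int_{\mathbb R^3} g(\zeta,w)\, \mathrm{d} w,
\]
using $\mathrm{d} v = (-t)^{-1} \mathrm{d} w$. Substituting $z = (-t)^{2/3}\zeta$ in the force integral, we have $\mathrm{d} z = (-t)^{2}\mathrm{d} \zeta$ and $\frac{x-z}{|x-z|^3} = (-t)^{-4/3}\frac{y-\zeta}{|y-\zeta|^3}$ with $x = (-t)^{2/3}y$, so the three factors combine to $(-t)^{-2}\,(-t)^{2}\,(-t)^{-4/3} = (-t)^{-4/3}$.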
We now plug \eqref{e:ansatz-LCP} into the system \eqref{e:LCP}. The resulting equation, after multiplying by $(-t)^2$, reads
\begin{align*}
&(-t)^2[\partial_t\phi + v\cdot \nabla_x \phi + F[\phi]\cdot \nabla_v \phi - Q_L(\phi,\phi)] \\
&+ {(-t)^{4/3}} F[\phi]\cdot \nabla_w g + (-t)^{2/3} F[g]\cdot \nabla_v \phi \\
& - (-t) [Q_{L,w}(\phi,g) + Q_{L,w}(g,\phi)]\\
&+ g + \frac{2}{3}y \cdot \nabla_y g -\frac{1}{3} w \cdot \nabla_w g + w \cdot \nabla_y g + F[g]\cdot \nabla_w g - Q_{L,w}(g,g) = 0.
\end{align*}
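As a sample of the exponent bookkeeping: since $\nabla_v = (-t)^{1/3}\nabla_w$, the force term acting on the self-similar part contributes
\[
(-t)^2\, F[\phi]\cdot \nabla_v\left[\frac{1}{(-t)}\, g\right] = (-t)^{2}\, \frac{(-t)^{1/3}}{(-t)}\, F[\phi]\cdot \nabla_w g = (-t)^{4/3}\, F[\phi]\cdot \nabla_w g,
\]
which is the second term above; the remaining powers of $(-t)$ are obtained in the same way.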
We now define the error as
\[\begin{split}
\mathcal E(\phi,g) := \;& (-t)^2[\partial_t\phi + v\cdot \nabla_x \phi + F[\phi]\cdot \nabla_v \phi - Q_L(\phi,\phi)] + (-t)^{4/3} F[\phi]\cdot \nabla_w g\\
& + (-t)^{2/3} F[g]\cdot \nabla_v \phi - (-t) [Q_{L,w}(\phi,g) + Q_{L,w}(g,\phi)].
\end{split} \]
We claim that $\mathcal E(\phi,g) \to 0$ as $t\to 0$, uniformly on compact sets of $\mathbb R^3_y\times \mathbb R^3_w$. All the terms, except the ones involving $F[\cdot]$, already appeared in $\mathcal E_1$ and $\mathcal E_2$ and converge to zero, as proven in Lemmas \ref{lemma_E_1} and \ref{lemma_E_2}. We start by analyzing
$$(-t)^2F[\phi]\cdot \nabla_v \phi. $$ We have
\begin{align*}
\lim_{t \to 0}\; \sup_{\abs{y} \leq R}\; \sup_{\abs{w} \leq R} |(-t)^2F[\phi]\cdot \nabla_v \phi | \le \lim_{t \to 0} \;\sup_{\abs{y} \leq R} (-t)^{4/3}|F[\phi]|\, (-t)^{2/3} \| \nabla_v \phi\|_{L^\infty_v} =0,
\end{align*}
thanks to \eqref{e:lim-phi-2} with $p = \infty$, $\ell = 0$, $j=1$ for the factor $(-t)^{2/3}\|\nabla_v\phi\|_{L^\infty_v}$, and to the bound on $(-t)^{4/3}|F[\phi]|$ established below.
Note that $\sup_{\abs{y} \leq R} |F[g]|$ is bounded thanks to our assumptions $g\in L^\infty_{y,loc}L^1_w$ and $g\in L^1_{y,w}$, and therefore we similarly have
\begin{align*}
\lim_{t \to 0}\; \sup_{\abs{y} \leq R}\; \sup_{\abs{w} \leq R} |(-t)^{2/3}F[g]\cdot \nabla_v \phi | \le \lim_{t \to 0} \sup_{\abs{y} \leq R} \;|F[g]|\, (-t)^{2/3} \| \nabla_v \phi\|_{L^\infty_v} =0.
\end{align*}
We turn next to the term $(-t)^{4/3}F[\phi] \cdot \nabla_w g$. For this, we use the interpolation \eqref{ineq:absvstarin} with $s=-2$, $r = \infty$, and $1 \leq p < 3$ to obtain
\begin{align*}
(-t)^{4/3}\abs{F[\phi]} & \lesssim (-t)^{4/3}\norm{\rho_{\phi}}_{L^p}^{p/3} \norm{\rho_{\phi}}_{L^\infty}^{1-p/3} \\
& \lesssim \left( (-t)^{1 + \frac{1}{p}} \norm{\rho_{\phi}}_{L^p} \right)^{p/3} \left( (-t)\norm{\rho_{\phi}}_{L^\infty} \right)^{1-p/3},
\end{align*}
and, hence, the associated term vanishes by \eqref{e:lim-rho}.
Thus, in the limit as $t\to 0$, we obtain
\begin{align}\label{g_LCP}
g + \frac 2 3 y \cdot \nabla_y g - \frac 1 3 w \cdot \nabla_w g + w \cdot \nabla_y g + F[g]\cdot \nabla_w g = Q_{L,w}(g,g) .
\end{align}
Next, we multiply \eqref{g_LCP} by $ \chi_{R_1}(w) \chi_{R_2}(y) \log g$ and integrate in both variables; after integration by parts we get
\begin{align*}
\int_{\mathbb R^6} g \chi_{R_2}(y) \chi_{R_1}(w) \;dwdy& -\frac{2}{3} \int_{\mathbb R^6} (g \ln g - g)\chi_{R_1}(w) y \cdot \nabla_y \chi_{R_2}(y) \;dwdy \\
+ \frac{1}{3} \int_{\mathbb R^6} (g \ln g - g)\chi_{R_2}(y) w \cdot \nabla_w \chi_{R_1}(w) \;dwdy &- \int_{\mathbb R^6} (g \ln g - g)\chi_{R_1}(w) w \cdot \nabla_y \chi_{R_2}(y) \;dwdy\\
&- \int_{\mathbb R^6} (g \ln g - g)\chi_{R_2}(y) F[g] \cdot \nabla_w \chi_{R_1}(w)\;dwdy \\
&= \int_{\mathbb R^6}\chi_{R_2}(y) \chi_{R_1}(w) Q_{L,w}(g,g)\;dwdy.
\end{align*}
With the assumptions on $g$, we can pass to the limit $R_1\to +\infty$ by the dominated convergence theorem and get
\begin{align*}
\int_{\mathbb R^6} g \chi_{R_2}(y) \;dwdy& -\frac{2}{3} \int_{\mathbb R^6} (g \ln g - g)y \cdot \nabla_y \chi_{R_2}(y) \;dwdy \\
&- \int_{\mathbb R^6} (g \ln g - g) w \cdot \nabla_y \chi_{R_2}(y) \;dwdy\le 0,
\end{align*}
using Lemma \ref{l:invariants} for the right-hand side. Thanks to the assumption
\begin{align*}
&(1+ \abs{w})(g \ln g - g) \in L^1_{y,w},
\end{align*}
we take the limit $R_2\to +\infty$ and obtain
\[
\int_{\mathbb R^6} g \, \mathrm{d} w\, \mathrm{d} y \leq 0,
\]
which implies $g\equiv 0$.
\end{proof}
\section{The Boltzmann equation} \label{s:boltzmann}
We recall the Boltzmann equation
\begin{equation*}
\partial_t f + v\cdot \nabla_x f = Q_B(f,f) := \int_{\mathbb R^3} \int_{\mathbb S^{2}} B(v-v_*, \sigma) [f(v_*')f(v') - f(v_*)f(v)]\, \mathrm{d} \sigma \, \mathrm{d} v_*.
\end{equation*}
The velocities are related by the formulas
\begin{align}
v' &= \frac{v+v_*} 2 + \frac{|v-v_*|} 2 \sigma ,\\
v_*' &= \frac{v+v_*} 2 - \frac{|v-v_*|} 2 \sigma,
\end{align}
and the pre-post collisional angle $\eta$ (usually denoted $\theta$ in the literature) is defined by
\[\cos \eta = \left\langle \frac{v-v_*}{|v-v_*|}, \sigma\right\rangle.\]
We take the standard non-cutoff collision kernel described by
\[ B(v-v_*,\sigma) := |v-v_*|^\gamma b(\cos \eta),\]
for some $\gamma \in (-3,1]$, with the angular cross-section $b$ satisfying the asymptotics
\begin{align}
b(\cos\eta) &\approx \eta^{-2-2s} \quad \mbox{ as } \eta\to 0,
\end{align}
for some $s\in (0,1)$. We assume $\gamma + 2s < 0$ for ease of presentation. Results similar to Theorem \ref{t:boltzmann} should also be available when $\gamma + 2s \geq 0$.
As mentioned above, the Boltzmann equation obeys the same family of scaling laws as the Landau equation, so the approximately self-similar ansatz \eqref{e:ansatz} takes the same form.
In our main result for the Boltzmann equation, we derive the same conclusion as Theorem \ref{t:landau}, under similar hypotheses:
\begin{theorem}\label{t:boltzmann}
Let $\gamma > -3$ and $s\in (0,1)$ be such that $\gamma+2s < 0$, and assume $-1< \theta < 1/(2s)$. Let $f$ be a smooth solution of the Boltzmann equation \eqref{e:mainB} that satisfies \eqref{e:f-condition} and \eqref{e:singularity}.
Assume that $\phi$ satisfies \eqref{e:lim-phi-t1}.
For $g$, assume it satisfies \eqref{e:g-smooth} as well as $(1+|w|^{2+\gamma}) g(y,\cdot) \in L^1_w(\mathbb R^3)$ for all $y\in \mathbb R^3$, and that there exist $h$ and $q$ such that
\begin{align*}
g(y,w) = q(w) + h(y,w),
\end{align*}
with
$$(1+ \abs{y} + |w|)h\in L_{y,w}^1(\mathbb R^6)\quad \textrm{and} \quad q \in L_w^1(\mathbb R^3).$$
Finally, if $\theta = \pm1/3$ we additionally assume that
$$(1+ \abs{y}\abs{w}^2 + |w|^3) h \in L^1_{y,w}(\mathbb R^6) \quad \textrm{and} \quad (1+ |w|^2) q \in L^1_w(\mathbb R^3).$$
Then, for any solution to the Boltzmann equation \eqref{e:mainB} of the form
\begin{equation*}
f(t,x,v) = \phi(t,x,v) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}\, g\left( \frac x {(-t)^{1+\theta}}, \frac v {(-t)^{\theta}}\right),
\end{equation*}
we must have $g\equiv 0$.
\end{theorem}
\begin{remark}
As for Landau, if $g \in L^1_{y,w}$, then it suffices to assume $(1+ |w|)g \in L^1_{y,w}$ (and, when $\theta = \pm 1/3$, $(1+ |w|^3) g \in L^1_{y,w}(\mathbb R^6)$).
\end{remark}
Specializing to the homogeneous case as above, we have the following result:
\begin{theorem}\label{theom:Boltzmann_Hom}
With $\gamma$ and $s$ as in Theorem \ref{t:boltzmann}, and $\frac 1 {|\gamma|} < \theta < \frac 1 {2s}$, assume that $f = f(t,v)$ has finite mass and second moment and satisfies
\begin{align*}
f \in C^\infty((-T,0) \times \mathbb R^3), \qquad f \in C^\infty((-T,0] \times \mathbb R^3 ).
\end{align*}
Assume that $\phi$ satisfies \eqref{e:lim-phi-t1}, and that $g = g(v)$ is such that
$$
g\in C^{\infty}(\mathbb R^3), \quad (1+|w|^{2+\gamma})\, g \in L^1_w.
$$
If $\theta = 1/3$, then additionally assume that $g$ satisfies $ (1+|w|^2)g \in L^1_w$.
Then, for any solution to the homogeneous Boltzmann equation of the form
\begin{equation*}
f(t,v) = \phi(t,v) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}\, g\left( \frac v {(-t)^{\theta}}\right),
\end{equation*}
we must have $g \equiv 0$, and hence no approximate self-similar singularity of this type can occur.
\end{theorem}
To prove Theorem \ref{t:boltzmann}, we need the following decomposition of the collision operator $Q_B(f_1,f_2)$ into two terms: by adding and subtracting $f_1(v_*')f_2(v)$ inside the integral, we write $Q_B(f_1,f_2) = Q_1(f_1,f_2) + Q_2(f_1,f_2)$, with
\begin{equation}\label{e:decomposition}
\begin{split}
Q_1(f_1,f_2) &= p.v. \int_{\mathbb R^3} \int_{\mathbb S^2} b(\cos\eta) |v-v_*|^\gamma(f_2(v') - f_2(v)) f_1(v_*') \, \mathrm{d} \sigma \, \mathrm{d} v_*,\\
Q_2(f_1,f_2) &= f_2(v)\int_{\mathbb R^3} \int_{\mathbb S^2} b(\cos\eta) |v-v_*|^\gamma(f_1(v_*') - f_1(v_*)) \, \mathrm{d} \sigma \, \mathrm{d} v_*,
\end{split}
\end{equation}
for functions $f_1$ and $f_2$ defined on $\mathbb R^3$.
The term $Q_1(f_1,f_2)$ acts as a fractional differential operator of order $2s$, and can roughly be thought of as analogous to the term $\mbox{tr}(\bar a^{f_1} D_v^2 f_2)$ from the Landau collision operator. The following lemma, quoted from \cite{silvestre2016boltzmann}, makes this point of view clearer:
\begin{lemma}{\cite[Section 4]{silvestre2016boltzmann}}\label{l:Q1}
There holds
\[ Q_1(f_1,f_2) = \int_{\mathbb R^3} [f_2(v+h) - f_2(v)]K_{f_1}(v,h) \, \mathrm{d} h,\]
where
\[K_{f_1}(v,h) \approx |h|^{-3-2s} \int_{\{z:z\cdot h = 0\}} f_1(v+z) |z|^{\gamma+2s+1} \, \mathrm{d} z,\]
with implied constants depending only on $\gamma$, $s$, and the angular cross-section $b$. The kernel $K_{f_1}$ is symmetric ($K_{f_1}(v,-h) = K_{f_1}(v,h)$) and satisfies the following bounds: for any $r>0$,
\[ \int_{B_{2r}\setminus B_r} K_{f_1} (v,h) \, \mathrm{d} h \leq C\left(\int_{\mathbb R^3} |z|^{\gamma+2s} f_1(v+z) \, \mathrm{d} z \right) r^{-2s},\]
whenever the right-hand side is finite.
\end{lemma}
Estimating the convolution as in the proof of Lemma \ref{l:coeffs}, we see that Lemma \ref{l:Q1} implies
\begin{equation}\label{e:kernel-bound}
\int_{B_{2r}\setminus B_r} K_{f_1} (v,h) \, \mathrm{d} h \leq C \|f_1\|_{L^p_v}^{(\gamma+3+2s)p/3} \|f_1\|_{L^{\infty}_v}^{1-(\gamma+3+2s)p/3} r^{-2s},
\end{equation}
for all $r>0$ and $1\leq p < 3/(\gamma+3+2s)$. For $f_1$ depending on $(x,v)$ or $(t,x,v)$, we will write $K_{f_1}(x,v,h)$ or $K_{f_1}(t,x,v,h)$. Note that the ``p.v.'' in $Q_1$ is only necessary when $s>1/2$. We omit the ``p.v.'' from now on, since our functions are smooth enough ($C^2$ in $v$) that the value of the integral is well-defined.
For $Q_2(f_1,f_2)$, symmetry effects for grazing collisions (see \cite{alexandre2000entropy}) imply the following representation:
\begin{lemma}{\cite[Lemmas 5.1 and 5.2]{silvestre2016boltzmann}}\label{l:Q2}
The integral in $Q_2(f_1,f_2)$ satisfies
\[ Q_2(f_1,f_2) = f_2(v) \int_{\mathbb R^3} f_1(v+z) (C|z|^{\gamma}) \, \mathrm{d} z,\]
where the constant $C$ depends only on $\gamma$ and $s$.
\end{lemma}
In other words, surprisingly, $Q_2(f_1,f_2)$ is equal up to a constant to $\bar c^{f_1} f_2$ in the notation of the Landau equation.
Lemmas \ref{l:Q1} and \ref{l:Q2} imply in particular that $Q_B(f_1,f_2)$ is well-defined in a pointwise sense whenever $f_1 \in L^1_v \cap L^\infty_v$ and $f_2 \in L^\infty_v \cap C^2_v$. As in the previous sections (see Lemma \ref{l:invariants}), we need to use a form of the identities $\int Q_B(g,g) \, \mathrm{d} w = \int |w|^2 Q_B(g,g) \, \mathrm{d} w = 0$:
\begin{lemma}\label{l:Binvariants}
With $\chi\in C^\infty(B(0,2))$ a smooth cutoff with $\chi(|x|) = 1$ for $|x|\leq 1$, and with $g\in L^1_w\cap L^\infty_w (\mathbb R^3)$ satisfying \eqref{e:g-smooth} as well as $(1+|w|^{2+\gamma}) g \in L^1$, we have
\[ \lim_{R\to \infty} \int_{\mathbb R^3} \chi\left(\frac w R\right) Q_B(g,g) \, \mathrm{d} w = 0,\]
and
\[ \lim_{R\to \infty} \int_{\mathbb R^3} \chi\left(\frac w R\right)|w|^2 Q_B(g,g) \, \mathrm{d} w = 0.\]
\end{lemma}
This lemma is more or less standard in the literature on the Boltzmann equation. We give a proof for the convenience of the reader, and because we could not find a reference that applies directly in our setting.
\begin{proof}
The well-known weak formulation of the Boltzmann collision operator allows one to make sense of integrals of the form $\int_{\mathbb R^3} \varphi Q_B (g,g) \, \mathrm{d} w$ using smoothness of $\varphi$. For any function $f$, let us introduce the abbreviations $f = f(w)$, $f_* = f(w_*)$, $f' = f(w')$, and $f_*' = f(w_*')$. Applying the pre-post collisional change of variables $(\sigma, w,w_*)\leftrightarrow (\sigma,w',w_*')$ (with unit Jacobian) one has
\begin{equation*}
\int_{\mathbb R^3} \varphi Q_B(g,g) \, \mathrm{d} w = \int_{\mathbb R^3} \int_{\mathbb R^3}\int_{\mathbb S^2} B(w-w_*, \sigma) g g_* [\varphi' - \varphi] \, \mathrm{d} \sigma \, \mathrm{d} w_* \, \mathrm{d} w.
\end{equation*}
Symmetrizing further with the change of variables $w \leftrightarrow w_*$, which also exchanges $w'$ and $w_*'$, one has
\begin{equation}\label{e:weak-form}
\int_{\mathbb R^3} \varphi Q_B(g,g) \, \mathrm{d} w = \frac 1 2 \int_{\mathbb R^3} \int_{\mathbb R^3} \int_{\mathbb S^2} B(w-w_*,\sigma) g g_* [\varphi_*' + \varphi' - \varphi_* - \varphi] \, \mathrm{d} \sigma \, \mathrm{d} w_* \, \mathrm{d} w.
\end{equation}
These formal calculations can be justified rigorously under our assumption that $g$ is smooth, provided that $\varphi$ is (say) $C^2$ and compactly supported.
The expression $\varphi_*' + \varphi' - \varphi_* - \varphi$ is equal to zero for the following three cases: $\varphi = 1$, $\varphi = w$, and $\varphi = |w|^2$. This reflects the conservation of mass, momentum, and energy during collisions.
Taylor expanding $\varphi$ and using $w_*' + w' = w_* + w$ and $|w_*' - w_*| = |w' - w|$, we have
\[ \begin{split}
\varphi_*' + \varphi' - \varphi_* - \varphi &= \nabla \varphi_* \cdot (w_*' - w_*) + \nabla \varphi \cdot (w' - w) + O(\|D^2\varphi\|_{L^\infty} |w' - w|^2)\\
&= (\nabla \varphi - \nabla \varphi_* )\cdot (w' - w) + O(\|D^2\varphi\|_{L^\infty}|w' - w|^2).
\end{split}\]
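For later use, we record the elementary identity behind the estimates that follow: from the formulas for $v'$ and $v_*'$ (written in the $w$ variables),
\[ |w'-w|^2 = \frac{|w-w_*|^2}{2}\,(1-\cos\eta) = |w-w_*|^2 \sin^2(\eta/2).\]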
By this identity, $|w'-w| \approx |w-w_*|\eta$ as $\eta \to 0$. Therefore, the second term in the last expression is of order $\eta^2|w-w_*|^2$, which is good enough to cancel the angular singularity $\eta^{-2-2s}$, but the first term is only of order $\eta |w-w_*|^2$. We get around this problem in a standard way, by parametrizing $\mathbb S^2$ in spherical coordinates $(\phi,\eta) \in [0,2\pi]\times [0,\pi]$ (where $\eta = 0$ corresponds to $w = w'$) and noticing that $\left|\int_0^{2\pi} (w' - w) \, \mathrm{d} \phi\right| \lesssim |w-w_*|\eta^2$. We now have
\[ \left|\int_0^{2\pi}\left[\varphi_*' + \varphi' - \varphi_* - \varphi\right]\, \mathrm{d} \phi\right| \lesssim \|D^2\varphi\|_{L^\infty} \eta^2 |w-w_*|^2,\]
and
\begin{equation}
\begin{split}
\left| \int_{\mathbb R^3} \varphi Q_B(g,g) \, \mathrm{d} w \right| &\lesssim \int_{\mathbb R^3}\int_{\mathbb R^3} g g_* |w-w_*|^{\gamma+2} \int_0^\pi \eta^{-2-2s}\|D^2\varphi\|_{L^\infty} \eta^2 \sin \eta \, \mathrm{d} \eta \, \mathrm{d} w_* \, \mathrm{d} w\\
&\lesssim \|D^2\varphi\|_{L^\infty} \int_{\mathbb R^3}\int_{\mathbb R^3} g g_* |w-w_*|^{\gamma+2} \, \mathrm{d} w_* \, \mathrm{d} w.
\end{split}
\end{equation}
The last integral is convergent by our assumption that $(1+|w|^{\gamma+2}) g \in L^1$.
Now, with the choice $\varphi(w) = \chi(|w|/R)$, since $\|D^2\varphi\|_{L^\infty} \lesssim R^{-2}$, we see directly that $\int \chi(|w|/R) Q_B(g,g) \, \mathrm{d} w \to 0$ as $R\to \infty$.
If we choose $\varphi(w) = |w|^2\chi(|w|/R)$, then $\|D^2\varphi\|_{L^\infty}$ is bounded independently of $R$. Writing \eqref{e:weak-form} as the integral over $\mathbb R^3\times\mathbb R^3 \times [0,\pi]$ of
\[F_R(w,w_*,\eta) := |w-w_*|^\gamma b(\cos\eta) g g_* \int_0^{2\pi}[ \varphi_*' + \varphi' - \varphi_* - \varphi] \, \mathrm{d} \phi,\]
then $F_R$ converges to 0 pointwise as $R\to \infty$, and by the above integrability estimates, we may apply Dominated Convergence to conclude $\int |w|^2 \chi(|w|/R) Q_B(g,g) \, \mathrm{d} w \to 0$.
\end{proof}
Now we are ready to prove our main result for the Boltzmann equation.
\begin{proof}[Proof of Theorem \ref{t:boltzmann}]
Proceeding as in the proof of Theorem \ref{t:landau}, we plug the ansatz \eqref{e:ansatz} into the Boltzmann equation \eqref{e:mainB}, and change variables to $y$ and $w$. The left-hand side transforms in the same way as before. For the right-hand side,
\begin{align*}
Q_B(f,f) = Q_B(\phi,\phi) + \frac 1 {(-t)^{1+\theta(3+\gamma)}}[Q_B(\phi,g) + Q_B(g,\phi)] + \frac 1 {(-t)^{2+2\theta(3+\gamma)}} Q_B(g,g),
\end{align*}
where $g$ is evaluated at $(x/(-t)^{1+\theta}, v/(-t)^\theta)$. Applying the decomposition $Q_B = Q_1+Q_2$ and changing variables appropriately, we have
\[ \begin{split}
Q_1(\phi,g) &\approx \int_{\mathbb R^3} [g((v+h)/(-t)^\theta) - g(v/(-t)^\theta)] |h|^{-3-2s} \int_{z\perp h} \phi(v+z) |z|^{\gamma+2s+1} \, \mathrm{d} z \, \mathrm{d} h\\
&= (-t)^{-2s\theta} \int_{\mathbb R^3} [g(w+\tilde h) - g(w)] |\tilde h|^{-3-2s} \int_{z\perp\tilde h} \phi((-t)^\theta w + z) |z|^{\gamma+2s+1} \, \mathrm{d} z \, \mathrm{d} \tilde h\\
&=: (-t)^{-2s\theta} \tilde Q_{1}(\phi,g),
\end{split}
\]
and
\[ \begin{split}
Q_1(g,\phi) &\approx \int_{\mathbb R^3} [\phi(v+h) - \phi(v)] |h|^{-3-2s} \int_{z\perp h} g((v+z)/(-t)^\theta) |z|^{\gamma+2s+1} \, \mathrm{d} z \, \mathrm{d} h\\
&= (-t)^{(\gamma+2s+3)\theta} \int_{\mathbb R^3} [\phi((-t)^\theta w+ h) - \phi((-t)^{\theta}w)] | h|^{-3-2s} \int_{\tilde z\perp h} g( w + \tilde z) |\tilde z|^{\gamma+2s+1} \, \mathrm{d} \tilde z \, \mathrm{d} h\\
&=: (-t)^{(\gamma+2s+3)\theta} \tilde Q_{1}(g,\phi).
\end{split}
\]
(Note that $\{z\perp h\}$ is a two-dimensional subspace.) By similar calculations, we have
\[ \begin{split}
Q_1(g,g) &= (-t)^{(\gamma+3)\theta} \int_{\mathbb R^3} [g(w+\tilde h) - g(w)]|\tilde h|^{-3-2s} \int_{\tilde z\perp \tilde h} g(w+\tilde z)|\tilde z|^{\gamma+2s+1} \, \mathrm{d} \tilde z \, \mathrm{d} \tilde h\\
&=: (-t)^{(\gamma+3)\theta} Q_{1,w}(g,g).
\end{split}\]
Since $Q_2(h_1,h_2) \approx \bar c^{h_1}h_2$, calculations from the proof of Theorem \ref{t:landau} imply
\[ Q_2(\phi,g) \approx \bar c^\phi g, \quad Q_2(g,\phi) \approx (-t)^{\theta(\gamma+3)} \bar c^g \phi, \]
and we abuse notation by writing $=$ instead of $\approx$ (which amounts to a change of constants).
Making these substitutions in the right-hand side of \eqref{e:mainB}, multiplying through by $(-t)^{2+\theta(3+\gamma)}$, and grouping terms, we have
\begin{equation}\label{e:expansion}
\begin{split}
0 &= g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{B,w}(g,g) - (-t)^{1-2s\theta} \tilde Q_{1}(\phi,g)\\
&\quad - (-t)^{1+(\gamma+2s+3)\theta} \tilde Q_{1}(g,\phi) - (-t)\bar c^\phi g - (-t)^{1+\theta(3+\gamma)} \bar c^g \phi\\
&\quad + (-t)^{2+\theta(3+\gamma)} (\partial_t \phi + v\cdot \nabla_x \phi - Q_B(\phi,\phi)),
\end{split}
\end{equation}
where, as above, $\phi$ is understood to stand for $\phi(t,(-t)^{1+\theta}y,(-t)^\theta w)$, $g = g(y,w)$, and $Q_{B,w}(g,g) := Q_{1,w}(g,g) + \bar c^g g$. We remark that, as $s\to 1$, all exponents in this expansion converge to the exponents of the corresponding terms in \eqref{e:expansion-L} in the proof of Theorem \ref{t:landau}.
The error is defined as
\[\begin{split}
\mathcal E(\phi,g) &= - (-t)^{1-2s\theta} \tilde Q_{1}(\phi,g) - (-t)^{1+(\gamma+2s+3)\theta} \tilde Q_{1}(g,\phi)\\
&\quad - (-t)\bar c^\phi g - (-t)^{1+\theta(3+\gamma)} \bar c^g \phi + (-t)^{2+\theta(3+\gamma)} (\partial_t \phi + v\cdot \nabla_x \phi - Q_1(\phi,\phi) - \bar c^\phi \phi).\end{split}\]
We claim that for all $R_1,R_2>0$,
\begin{equation}\label{e:E-Boltz}
\lim_{t \to 0} \sup_{\abs{y} \leq R_1} \sup_{\abs{w}\leq R_2} \abs{\mathcal E(\phi,g)} = 0.
\end{equation}
First, all terms in $\mathcal E(\phi,g)$ that do not involve $Q_1$ or $\tilde Q_1$ are equal to corresponding terms in the proof of Theorem \ref{t:landau}, so the same arguments (which do not require any restriction on $\theta$ from above) imply convergence to zero in the sense of \eqref{e:E-Boltz} for those terms.
Now we address the singular integral terms.\footnote{The following calculation is similar to the proof of \cite[Lemma 2.3]{imbert2019lowerbounds}.} For any integer $k$, let $A_k$ denote the annulus $\{2^k \leq |\tilde h| < 2^{k+1} \}$. Splitting $\tilde Q_1(\phi,g)$ into integrals over $|\tilde h| <1$ and $|\tilde h| \geq 1$, we have, using \eqref{e:kernel-bound},
\begin{equation}\label{e:annulus1}
\begin{split}
&(-t)^{1-2s\theta}\int_{|\tilde h|\geq 1} [g(w+\tilde h) - g(w)] K_\phi(t,(-t)^{1+\theta}y, (-t)^\theta w, \tilde h) \, \mathrm{d} \tilde h\\
& = (-t)^{1-2s\theta} \sum_{k\geq 0} \int_{A_k} [g(w+\tilde h) - g(w)] K_\phi(t,(-t)^{1+\theta}y, (-t)^\theta w, \tilde h) \, \mathrm{d} \tilde h\\
&\lesssim (-t)^{1-2s\theta} \|g\|_{L^\infty_w} \sum_{k\geq 0} 2^{-2sk} \|\phi(t,(-t)^{1+\theta}y, \cdot)\|_{L^p_v}^{(\gamma+2s+3)p/3} \|\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v}^{1-(\gamma+2s+3)p/3}\\
&\lesssim \left[ (-t)^{1+\theta(3+\gamma)-3\theta/p} \|\phi(t,(-t)^{1+\theta}y, \cdot)\|_{L^p_v}\right]^{(\gamma+2s+3)p/3}\\
&\qquad \cdot \left[ (-t)^{1+\theta(3+\gamma)} \|\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v}\right]^{1-(\gamma+2s+3)p/3},
\end{split}
\end{equation}
with $1\leq p < 3/(\gamma+2s+3)$. The second factor converges to 0 by \eqref{e:lim-phi-t1}. For the first factor, we use \eqref{e:lim-phi-t1} again, which requires $p> 3\theta/(1+\theta(3+\gamma))$. Therefore, an admissible $p$ satisfies
\[ \frac {3\theta}{1+\theta(3+\gamma)} < p < \frac 3 {\gamma+ 2s + 3},\]
which is possible since $\theta < 1/(2s)$.
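Explicitly, for $\theta > 0$ both denominators are positive and the compatibility of the two bounds reduces to
\[ \theta(\gamma+2s+3) < 1 + \theta(3+\gamma), \qquad \text{i.e.,} \qquad 2s\theta < 1.\]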
For the integral over $|\tilde h|< 1$, we write
\[ g(w+\tilde h) - g(w) = \nabla_w g(w)\cdot \tilde h + E(w,\tilde h)|\tilde h|^2,\]
with $|E(w,\tilde h)| \lesssim \|D_w^2 g\|_{L^\infty_w}$. By the symmetry of the kernel $K_\phi$, the term with $\nabla_w g(w)$ vanishes, and we have, with $p$ as in the previous paragraph,
\[\begin{split}
&(-t)^{1-2s\theta} \int_{|\tilde h| < 1} [g(w+\tilde h) - g(w)] K_\phi(t,(-t)^{1+\theta}y, (-t)^\theta w, \tilde h) \, \mathrm{d} \tilde h\\
& = (-t)^{1-2s\theta} \sum_{k<0} \int_{A_k} E(w,\tilde h)|\tilde h|^2 K_\phi(t,(-t)^{1+\theta}y, (-t)^\theta w, \tilde h) \, \mathrm{d} \tilde h\\
&\lesssim (-t)^{1-2s\theta} \|D_w^2 g\|_{L^\infty_w} \sum_{k<0} 2^{(2-2s)k} \|\phi(t,(-t)^{1+\theta}y, \cdot)\|_{L^p_v}^{(\gamma+2s+3)p/3} \|\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v}^{1-(\gamma+2s+3)p/3},
\end{split}\]
which converges to zero using \eqref{e:lim-phi-t1}, as above. For $\tilde Q_1(g,\phi)$, we divide the $h$ integral into the annuli $\tilde A_k = \{(-t)^\theta 2^k < |h| \le (-t)^\theta 2^{k+1}\}$ (which are the same as the $A_k$, read in the $h$ variable rather than $\tilde h$). By a similar Taylor expansion on the region $|h| < (-t)^\theta$ (i.e., over the annuli with $k<0$), we have, with $p$ as above,
\[\begin{split}
&(-t)^{1+\theta(\gamma + 2s + 3)} \left( \sum_{k\geq 0}\int_{\tilde A_k} [\phi((-t)^\theta w + h) - \phi((-t)^\theta w)] K_g(y,w,h) \, \mathrm{d} h\right.\\
&\left.\qquad + \sum_{k<0} \int_{\tilde A_k} [\phi((-t)^\theta w + h) - \phi((-t)^\theta w)] K_g(y,w,h) \, \mathrm{d} h\right)\\
&\lesssim (-t)^{1+\theta(\gamma + 2s + 3)}\left( \|\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v} \sum_{k\geq 0} (-t)^{-2s\theta} 2^{-2sk} \right.\\
&\left.\qquad + \|D_v^2 \phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v} \sum_{k<0} (-t)^{(2-2s)\theta} 2^{(2-2s)k}\right) \|g\|_{L^1_w}^{(\gamma+2s+3)/3}\|g\|_{L^\infty_w}^{-(\gamma+2s)/3}\\
&\lesssim (-t)^{1+\theta(\gamma+3)}\|\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v} + (-t)^{1+\theta(\gamma+5)}\|D_v^2\phi(t,(-t)^{1+\theta}y,\cdot)\|_{L^\infty_v},
\end{split}\]
which converges to 0 by \eqref{e:lim-phi-t1}.
For the term $Q_1(\phi,\phi)$, we apply \cite[Lemma 2.3]{imbert2019lowerbounds} directly to obtain, with $p$ as above and $\phi = \phi(t,(-t)^{(1+\theta)}y,(-t)^\theta w)$,
\[ \begin{split}
(-t)^{2 + \theta(3+\gamma)} Q_1(\phi,\phi) &\lesssim (-t)^{2 + \theta(3+\gamma)} \|D_v^2\phi\|_{L^\infty_w}^s \|\phi\|_{L^\infty_w}^{1-s} \int_{\mathbb R^3} \phi(t,(-t)^{(1+\theta)}y,(-t)^{\theta} w - z)|z|^{\gamma+2s} \, \mathrm{d} z\\
&\lesssim (-t)^{2 + \theta(3+\gamma)}\|D_v^2\phi\|_{L^\infty_v}^s \|\phi\|_{L^\infty_v}^{2-s - (\gamma+2s+3)p/3}\|\phi\|_{L^p_v}^{(\gamma+2s+3)p/3}\\
&= \left( (-t)^{1+\theta(5+\gamma)} \|D_v^2 \phi\|_{L^\infty_v}\right)^s \left( (-t)^{1+\theta(3+\gamma)}\|\phi\|_{L^\infty_v}\right)^{2-s-(\gamma+2s+3)p/3}\\
&\quad \cdot \left( (-t)^{1+\theta(3+\gamma)-3\theta/p}\|\phi\|_{L^p_v}\right)^{(\gamma+2s+3)p/3},
\end{split}\]
which also converges to 0 by \eqref{e:lim-phi-t1}. In the second line, we performed a convolution estimate as in \eqref{e:kernel-bound}. We could apply \eqref{e:lim-phi-t1} because $1+\theta(3+\gamma) - 3\theta/p \geq 0$.
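For the record, the exponents in the last display balance: writing $X = (\gamma+2s+3)p/3$, one checks that
\[ s\bigl(1+\theta(5+\gamma)\bigr) + (2-s-X)\bigl(1+\theta(3+\gamma)\bigr) + X\Bigl(1+\theta(3+\gamma)-\frac{3\theta}{p}\Bigr) = 2+\theta(3+\gamma),\]
using $X\cdot \frac{3\theta}{p} = \theta(\gamma+2s+3)$.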
Multiplying \eqref{e:expansion} by any smooth, compactly supported test function and sending $t\to 0$, we conclude
\begin{equation}\label{e:t-zero}
g + (1+\theta)y\cdot \nabla_y g + \theta w\cdot \nabla_w g + w\cdot \nabla_y g - Q_{B,w}(g,g) = 0,
\end{equation}
in the sense of distributions. As above, the regularity assumptions \eqref{e:g-smooth} for $g$ imply \eqref{e:t-zero} holds pointwise.
From this point on, the proof is the same as for the Landau equation (the proof of Theorem \ref{t:landau}), since $\lim_{R\to\infty}\int_{\mathbb R^3}\chi(|w|/R) Q_{B,w}(g,g) \, \mathrm{d} w = 0$ holds thanks to Lemma \ref{l:Binvariants}, as well as $\lim_{R\to\infty}\int_{\mathbb R^3}\chi(|w|/R) |w|^2 Q_{B,w}(g,g) \, \mathrm{d} w =0$ in the distinguished cases $\theta = \pm 1/3$. (We can apply Lemma \ref{l:Binvariants} because $|w|^{2+\gamma} g \in L^1_w$, by assumption.) Applying the same argument as above, we conclude $g\equiv 0$ in all cases.
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
Tableau models provide an indispensable framework for giving explicit positive combinatorial formulas for important families of polynomials and their relationships to one another. The celebrated \emph{Schur polynomials}, which form a basis for the ring $\ensuremath{\mathrm{Sym}}_n$ of symmetric polynomials in $n$ variables, are famously realized as the weight generating functions of \emph{semistandard Young tableaux}: tableaux of partition shape whose entries weakly increase from left to right in each row and strictly increase from bottom to top in each column. In fact, this definition may be reversed and Schur polynomials may alternatively be realized as the weight generating functions of \emph{semistandard reverse tableaux}, whose entries weakly decrease from left to right along rows and strictly decrease from bottom to top in each column.
$\ensuremath{\mathrm{Sym}}_n$ is a subring of the ring $\ensuremath{\mathrm{QSym}}_n$ of quasisymmetric polynomials. Basis elements of $\ensuremath{\mathrm{QSym}}_n$ are indexed by \emph{compositions} (sequences of positive integers) with at most $n$ parts. The semistandard reverse tableau model used in $\ensuremath{\mathrm{Sym}}_n$ naturally extends to produce tableaux of composition shape. The \emph{diagram} $D(\alpha)$ of a composition $\alpha$, written in French notation, is the diagram consisting of left-justified rows of boxes whose $i^{th}$ row from the bottom contains $\alpha_i$ boxes. A \emph{tableau} (of shape $\alpha$) is a filling of $D(\alpha)$ with positive integers. A \emph{reverse composition tableau} is a tableau with entries no larger than $n$, such that entries \emph{weakly decrease} from left to right along rows.
Imposing different choices of further restrictions on the entries produces collections of reverse composition tableaux whose weight generating functions are, for example, the \emph{quasisymmetric Schur polynomial} \cite{HLMvW09}, the \emph{fundamental quasisymmetric polynomial} \cite{Ges84}, or the \emph{monomial quasisymmetric polynomial} \cite{Ges84} corresponding to $\alpha$. On the other hand, certain other bases of $\ensuremath{\mathrm{QSym}}_n$ are naturally described instead by restrictions of \emph{Young} composition tableaux, where entries \emph{weakly increase} from left to right along rows. Examples include the \emph{dual immaculate polynomials} \cite{BerBerSalSerZab14}, the \emph{Young quasisymmetric Schur polynomials} \cite{LMvWbook}, and the \emph{extended Schur polynomials} \cite{Assaf.Searles:3}.
Extending further, \emph{reverse fillings} provide a combinatorial framework that naturally generalizes the model of reverse composition tableaux to the ring $\ensuremath{\mathrm{Poly}}_n$ of polynomials in $n$ variables. Basis elements of $\ensuremath{\mathrm{Poly}}_n$ are indexed by \emph{weak compositions}: sequences of nonnegative integers. The diagram $D(a)$ of a weak composition $a$ is the diagram in $\mathbb{N}\times \mathbb{N}$ having $a_i$ boxes in row $i$, left-justified. A \emph{filling} (of shape $a$) is an assignment of positive integers, no larger than $n$, to the boxes of $D(a)$. A reverse filling is a filling in which entries weakly decrease from left to right along each row.
By imposing further restrictions on the entries, one can obtain a set of reverse fillings of $D(a)$ whose weight generating function is, for example, the \emph{key polynomial}~\cite{RS95}, the \emph{quasi-key polynomial}~\cite{Assaf.Searles:2}, the \emph{Demazure atom}~\cite{Mas09}, or the \emph{fundamental slide polynomial}~\cite{Sea20} corresponding to $a$. At present, the majority of well-studied bases for $\ensuremath{\mathrm{Poly}}_n$ are described in terms of reverse fillings, i.e., with decreasing rows.
As noted earlier, Schur polynomials may be realized in terms of either semistandard Young tableaux or semistandard reverse tableaux. This coincidence can be understood in terms of an involution on tableaux whose entries are at most $n$, namely, replacing each entry $i$ with $n+1-i$. This bijectively maps semistandard Young tableaux to semistandard reverse tableaux and vice versa.
While this map is weight-reversing rather than preserving, the fact that Schur polynomials are symmetric means that the multiset of weights of semistandard Young tableaux is equal to the multiset of weights of semistandard reverse tableaux.
This map inspires a closely-related \emph{flip-and-reverse} map on composition tableaux, defined by reversing the order of the rows (\emph{reverse}) and replacing every entry $i$ with $n+1-i$ (\emph{flip}). This weight-reversing map changes decreasing rows to increasing rows and vice versa. As is the case for Schur polynomials, the flip-and-reverse map preserves both the monomial and fundamental bases of $\ensuremath{\mathrm{QSym}}_n$. However, bases of $\ensuremath{\mathrm{QSym}}_n$ are not preserved in general. In particular, the reverse composition tableaux that generate the quasisymmetric Schur polynomial corresponding to $\alpha$ are mapped to precisely the Young composition tableaux that generate the Young quasisymmetric Schur polynomial corresponding to ${\rm rev}(\alpha)$, the composition obtained by reading $\alpha$ in reverse. Typically a Young quasisymmetric Schur polynomial is not also a quasisymmetric Schur polynomial; we characterize their coincidences in Section~\ref{sec:background}.
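To illustrate with a small example, take $n=3$: the flip-and-reverse map sends the reverse composition tableau of shape $(1,3)$ on the left below to the Young composition tableau of shape ${\rm rev}((1,3)) = (3,1)$ on the right, replacing each entry $i$ by $4-i$ and reversing the order of the rows; the weight $(0,1,3)$ is reversed to $(3,1,0)$.
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c}
\tableau{ 3 & 3 & 3 \\ 2 } & \tableau{ 2 \\ 1 & 1 & 1 }
\end{array}
\end{displaymath}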
The flip-and-reverse map extends naturally to fillings of weak composition diagrams, giving two parallel constructions of bases for $\ensuremath{\mathrm{Poly}}_n$, one (\emph{reverse}) defined by reverse fillings and one (\emph{Young}) defined by \emph{Young fillings}, i.e., fillings in which entries increase from left to right along rows. The fillings obtained by applying the flip-and-reverse map to those reverse fillings that generate a particular basis of $\ensuremath{\mathrm{Poly}}_n$ generate a Young analogue of that basis.
Young analogues of the quasi-key and fundamental slide bases and a reverse analogue of the dual immaculate functions were introduced in~\cite{MasSea20} and properties of these bases were developed including a number of useful applications. In particular, these analogues were used to extend a result of \cite{AHM18} on positive expansions of dual immaculate functions to the full polynomial ring, to establish properties of stable limits of these polynomials and their expansions, and to uncover a previously-unknown connection between dual immaculate functions and Demazure atoms. These results necessitated repeated passage between reverse and Young analogues. In particular, reverse analogues were needed to study stable limits for a polynomial ring analogue of the dual immaculate functions, whereas Young analogues were needed to connect to established results in $\ensuremath{\mathrm{QSym}}_n$ from \cite{AHM18}. In a similar vein, Young analogues of pre-existing reverse bases of $\ensuremath{\mathrm{QSym}}_n$ were applied in the study of $q$-analogues of combinatorial Hopf algebras \cite{Li15} and skew variants of quasisymmetric bases \cite{MN-SkewRS} to take advantage of classical combinatorics in $\ensuremath{\mathrm{Sym}}_n$ concerning Schur functions and Young tableaux.
This type of relabelling is also used in~\cite{PreRic21} (there called ``shifting'') to simplify arguments relating to the equivariant cohomology of Springer fibers for $GL_n(\mathbb{C})$.
We are motivated by the utility of the flip-and-reverse perspective to explore and develop further Young analogues of bases of $\ensuremath{\mathrm{Poly}}_n$ and establish structural results. The Young analogue of the key polynomials is of particular interest and forms a primary focus. In fact, this Young basis has already found application: this variant of the key polynomials is used in~\cite{HRS18} to obtain the Hilbert series of a generalization of the coinvariant algebra. In Section 3 we establish a connection with left and right \emph{keys} of semistandard Young tableaux, proving in Theorem~\ref{thm:leftkeygen} that the Young key polynomials are in fact a generating function for semistandard Young tableaux whose left key is greater than a fixed key. We establish an analogous result for the Young analogue of the Demazure atom basis. We also provide a representation-theoretic construction for the Young key polynomials as traces of the action of a diagonal matrix on certain modules. Moreover, in addition to the Young skyline filling model arising from the flip-and-reverse map, we detail several other constructions and interpretations of the Young key polynomials and Young atoms, including divided difference operators, crystal graphs, and compatible sequences.
In Section~\ref{Sec:others} we provide a new formula for the expansion of a key polynomial into fundamental slide polynomials as well as a new combinatorial construction of the \emph{fundamental particle} basis for polynomials \cite{Sea20} in terms of \emph{flag-compatible sequences}. We describe Young analogues for additional families of polynomials, classify which of these Young bases expand positively in one another, and explain different behaviour exhibited by Young and reverse versions including stable limits and embedding into larger polynomial rings.
We also completely determine the intersection of the Young and reverse versions of all bases we consider. As a result, we find that when the Young and reverse versions of such a basis of $\ensuremath{\mathrm{Poly}}_n$ extend a given basis of $\ensuremath{\mathrm{Sym}}_n$ or $\ensuremath{\mathrm{QSym}}_n$, the intersection of the Young and reverse basis of $\ensuremath{\mathrm{Poly}}_n$ is exactly the original basis. For example, we show that the intersection of the Young key polynomials and the key polynomials is exactly the Schur polynomials, and the intersection of the fundamental slide and Young fundamental slide polynomials is exactly the fundamental quasisymmetric polynomials.
Finally in Section~\ref{Sec:Schubert}, we introduce a Young analogue of the famous Schubert polynomials, extending this perspective further. We describe how to generate the Young Schubert polynomials using pipe dreams and divided difference operators and detail how Young Schubert polynomials expand into Young key polynomials. Interestingly, unlike the case for Young analogues of other polynomial bases, there is no basis of $\ensuremath{\mathrm{Poly}}_n$ consisting of Young Schubert polynomials. We also describe the crystal graph structure for Young Schubert polynomials (analogous to the crystal graph structure for Young key polynomials), as Demazure subcrystals of the crystal on \emph{reduced factorizations} introduced in \cite{MorSch16}, using methods that were developed on a flipped and reversed version of this crystal in \cite{AssSch18}.
\section{Background}\label{sec:background}
Throughout the following, we denote permutations in one-line notation and allow the transposition $s_i$ to act on the right by swapping the entries in the $i$th and $(i+1)$th positions. For a weak composition $a$, let ${\rm sort}(a)$ denote the partition obtained by recording the entries of $a$ in weakly decreasing order. We refer to assignments of integers to diagrams of compositions as \emph{tableaux} and assignments of integers to diagrams of weak compositions as \emph{fillings}. For any tableau or filling $T$, the \emph{weight} ${\rm wt}(T)$ denotes the weak composition whose $i$th entry is the number of occurrences of $i$ in $T$.
\subsection{Quasisymmetric polynomials}~\label{sec:qsymintro}
Let $\alpha$ be a composition with at most $n$ parts. The \emph{fundamental quasisymmetric polynomial} $F_\alpha(x_1,\ldots , x_n)$ was originally introduced through the enumeration of $P$-partitions~\cite{Ges84}. Although there are several different ways to generate the fundamental quasisymmetric polynomials, we describe them as generating functions for certain tableau-like objects which we call \emph{fundamental reverse composition tableaux}, to align with other definitions to follow. Fundamental reverse composition tableaux are those reverse composition tableaux (i.e., entries weakly decrease from left to right in each row) satisfying the additional condition that if $i<j$, then every entry in row $i$ is strictly smaller than every entry in row $j$. It is straightforward to check that this definition is equivalent to the definition of the fundamental quasisymmetric polynomials as generating functions of ribbon tableaux (see for example~\cite[Section 4.1]{Hua16}).
In this way, $F_\alpha(x_1, \ldots , x_n)$ is the sum of all monomials $x^{{\rm wt}(T)}$, where $T$ ranges over fundamental reverse composition tableaux of shape $\alpha$ and largest entry at most $n$.
\begin{ex}
We have $F_{13}(x_1,x_2,x_3) = x^{013}+x^{103}+x^{112} + x^{121}+x^{130}$, as witnessed by the following fundamental reverse composition tableaux.
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\tableau{ 3 & 3 & 3 \\ 2 } & \tableau{ 3 & 3 & 3 \\ 1 } & \tableau{ 3 & 3 & 2 \\ 1 } & \tableau{ 3 & 2 & 2 \\ 1 } & \tableau{ 2 & 2 & 2 \\ 1 }
\end{array}
\end{displaymath}
\end{ex}
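All of the tableau families in this section are finite once the shape and the bound $n$ on the entries are fixed, so generating-function definitions like the one above can be checked by brute force. The following Python sketch (the helper names are ours, and only the standard library is used) enumerates fundamental reverse composition tableaux row by row and recovers the five weights in the example above.
\begin{verbatim}
from itertools import combinations_with_replacement as cwr
from itertools import product

def fundamental_rct(alpha, n):
    """Fundamental reverse composition tableaux of shape alpha with
    entries at most n: rows weakly decrease from left to right, and
    every entry of row i is strictly smaller than every entry of
    row i+1."""
    row_choices = [
        [tuple(sorted(c, reverse=True)) for c in cwr(range(1, n + 1), k)]
        for k in alpha
    ]
    for rows in product(*row_choices):
        if all(max(r) < min(s) for r, s in zip(rows, rows[1:])):
            yield rows

def weight(rows, n):
    flat = [e for row in rows for e in row]
    return tuple(flat.count(i) for i in range(1, n + 1))

# weights of F_{13}(x1,x2,x3): (0,1,3), (1,0,3), (1,1,2), (1,2,1), (1,3,0)
print(sorted(weight(T, 3) for T in fundamental_rct((1, 3), 3)))
\end{verbatim}
Restricting the same enumeration to constant rows recovers the monomial quasisymmetric polynomials defined next.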
The \emph{monomial quasisymmetric polynomial} $M_\alpha(x_1,\ldots , x_n)$ is the generating function of what we call monomial reverse composition tableaux, which are those fundamental reverse composition tableaux in which all entries in the same row are equal.
\begin{ex}
We have $M_{13}(x_1,x_2,x_3) = x^{013}+x^{103}+x^{130}$, as witnessed by the following monomial reverse composition tableaux.
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\tableau{ 3 & 3 & 3 \\ 2 } & \tableau{ 3 & 3 & 3 \\ 1 } & \tableau{ 2 & 2 & 2 \\ 1 }
\end{array}
\end{displaymath}
\end{ex}
One may also define fundamental Young composition tableaux and monomial Young composition tableaux, by replacing the decreasing row condition with the corresponding increasing row condition in the definitions of fundamental (respectively, monomial) reverse composition tableaux. One could then define Young fundamental quasisymmetric polynomials and Young monomial quasisymmetric polynomials to be the generating functions of fundamental (respectively, monomial) Young composition tableaux. In this case, however, the polynomials remain the same.
\begin{proposition}\label{prop:YoungFisF}
The generating function of the fundamental Young composition tableaux of shape $\alpha$ is $F_\alpha(x_1, \ldots , x_n)$ and the generating function of the monomial Young composition tableaux of shape $\alpha$ is $M_\alpha(x_1, \ldots , x_n)$.
\end{proposition}
\begin{proof}
By definition, the monomial reverse composition tableaux are exactly the monomial Young composition tableaux. Since every entry in any row of a fundamental reverse composition tableau is strictly smaller than every entry in the row above, reversing the entries of every row is a weight-preserving bijection between fundamental reverse composition tableaux and fundamental Young composition tableaux of the same shape.
\end{proof}
We turn our attention to the quasisymmetric Schur polynomials $\qs_\alpha$ and the Young quasisymmetric Schur polynomials $\yqs_\alpha$, where we will see a distinction between the reverse and the Young models. To define quasisymmetric Schur polynomials, we first define \emph{triples} in reverse composition tableaux. These are collections of three boxes in $D(\alpha)$ with two adjacent in a row and either (Type A) the third box above the right box with the lower row weakly longer, or (Type B) the third box below the left box with the higher row strictly longer; the boxes are labeled $x$, $y$, $z$ as shown in Figure~\ref{fig:reversetriples}. A triple of either type is said to be an \emph{inversion triple} if it is not the case that $z\ge y\ge x$.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{l}
\tableau{ & y } \\ \\ \tableau{ z & x } \\ \mbox{Type A} \\ \hspace{-3\cellsize} \mbox{lower row weakly longer}
\end{array}
\hspace{3\cellsize}
\begin{array}{l}
\hspace{3\cellsize} \tableau{ z & x } \\ \\ \hspace{3\cellsize} \tableau{ y & } \\ \hspace{3\cellsize}\mbox{Type B} \\ \mbox{higher row strictly longer}
\end{array}
\end{displaymath}
\caption{Triples for reverse composition tableaux.}\label{fig:reversetriples}
\end{figure}
Define the \emph{semistandard reverse composition tableaux} $\mathrm{RCT}(\alpha)$ for $\alpha$ to be the fillings of $D(\alpha)$ satisfying the following conditions.
\begin{enumerate}
\item Entries in each row weakly decrease from left to right.
\item Entries strictly increase from bottom to top in the first column.
\item All type A and type B triples are inversion triples.
\end{enumerate}
Then $\qs_\alpha(x_1, \ldots , x_n)$ is the generating function of $\mathrm{RCT}(\alpha)$ \cite{HLMvW09}.
\begin{ex}\label{ex:qs}
We have $\qs_{13}(x_1,x_2,x_3) = x^{013} + x^{022} + 2x^{112} + x^{103} + x^{202} + x^{121} + x^{211}+x^{130}+x^{220}$, as witnessed by the semistandard reverse composition tableaux in Figure~\ref{fig:QS13}.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c}
\tableau{ 3 & 3 & 3 \\ 2 } & \tableau{ 3 & 3 & 2 \\ 2 } & \tableau{ 3 & 3 & 1 \\ 2 } & \tableau{ 3 & 3 & 3 \\ 1 } & \tableau{ 3 & 3 & 2 \\ 1 } \\ \\ \tableau{ 3 & 3 & 1 \\ 1 } & \tableau{ 3 & 2 & 2 \\ 1 } & \tableau{ 3 & 2 & 1 \\ 1 } & \tableau{ 2 & 2 & 2 \\ 1 } & \tableau{ 2 & 2 & 1 \\ 1 }
\end{array}
\end{displaymath}
\caption{The ten elements of $\mathrm{RCT}(13)$ with entries at most $3$.}\label{fig:QS13}
\end{figure}
\end{ex}
A \emph{Young triple} is a collection of three boxes with two adjacent in a row such that either (Type I) the third box is below the right box and the higher row is weakly longer, or (Type II) the third box is above the left box and the lower row is strictly longer (Figure~\ref{fig:Youngtriples}). A Young triple of either type is said to be a \emph{Young inversion triple} if it is not the case that $x\ge y\ge z$.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{l}
\tableau{ z & x \\ & & \\ & y} \\ \mbox{Type I} \\ \hspace{-3\cellsize} \mbox{higher row weakly longer}
\end{array}
\hspace{3\cellsize}
\begin{array}{l}
\hspace{3\cellsize} \tableau{ y \\ & & \\ z & x} \\ \hspace{3\cellsize}\mbox{Type II} \\ \mbox{lower row strictly longer}
\end{array}
\end{displaymath}
\caption{Young triples for Young composition tableaux.}\label{fig:Youngtriples}
\end{figure}
Define the \emph{semistandard Young composition tableaux} $\mathrm{YCT}(\alpha)$ for $\alpha$ to be the fillings of $D(\alpha)$ satisfying the following conditions.
\begin{enumerate}
\item Entries in each row weakly increase from left to right.
\item Entries strictly increase from bottom to top in the first column.
\item All type I and type II Young triples are Young inversion triples.
\end{enumerate}
Then the Young quasisymmetric Schur polynomial $\yqs_\alpha(x_1, \ldots , x_n)$ is the generating function of $\mathrm{YCT}(\alpha)$ \cite{LMvWbook}.
\begin{remark}
Young quasisymmetric Schur polynomials are most often defined in terms of a single triple condition; see, e.g.,~\cite{LMvWbook, AHM18}. While this is more compact, it does not extend appropriately to define a Young analogue of key polynomials. The proof that these definitions are equivalent is analogous to the corresponding proof for reverse composition tableaux given in~\cite{HLMvW09}.
\end{remark}
\begin{ex}
We have $\yqs_{13}(x_1,x_2,x_3) = x^{130} + x^{121} + x^{112} + x^{103} + x^{013}$, as witnessed by the semistandard Young composition tableaux in Figure~\ref{fig:YQS103}.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\tableau{ 2 & 2 & 2 \\ 1 } & \tableau{ 2 & 2 & 3 \\ 1 } & \tableau{ 2 & 3 & 3 \\ 1 } & \tableau{ 3 & 3 & 3 \\ 1 } & \tableau{ 3 & 3 & 3 \\ 2 }
\end{array}
\end{displaymath}
\caption{The five elements of $\mathrm{YCT}(13)$ with entries at most $3$.}\label{fig:YQS103}
\end{figure}
\end{ex}
Notice that $\yqs_{13}(x_1,x_2,x_3) \neq \qs_{13}(x_1,x_2,x_3)$; indeed, they have a different number of terms. However, quasisymmetric Schur and Young quasisymmetric Schur polynomials are related by the following formula.
\begin{proposition}\label{prop:yqstoqs}~\cite{LMvWbook}
Let $\alpha$ be a composition with at most $n$ parts. Then
\[\yqs_\alpha(x_1,\ldots , x_n) = \qs_{{\rm rev}(\alpha)}(x_n,\ldots , x_1).\]
\end{proposition}
\begin{remark}
As mentioned in the introduction, the \emph{flip-and-reverse} map on composition tableaux which reverses the order of the rows and exchanges entries $i \leftrightarrow (n+1-i)$ is a weight-reversing bijection between $\mathrm{YCT}(\alpha)$ and $\mathrm{RCT}({\rm rev}(\alpha))$, implying Proposition~\ref{prop:yqstoqs}. In particular, reversing the order of the rows ensures the increasing first column condition is preserved.
\end{remark}
To illustrate this, we compute the Young quasisymmetric Schur polynomial $\yqs_{31}(x_1,x_2,x_3)$; compare this to the computation of $\qs_{13}(x_1,x_2,x_3)$ in Example~\ref{ex:qs}.
\begin{ex}
We have $\yqs_{31}(x_1,x_2,x_3) = x^{310} + x^{220} + 2x^{211} + x^{301} + x^{202} + x^{121} + x^{112} + x^{031}+x^{022}$, as witnessed by the semistandard Young composition tableaux in Figure~\ref{fig:YQS31}.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\tableau{ 2 \\ 1 & 1 & 1} & \tableau{ 2 \\ 1 & 1 & 2 } & \tableau{ 2 \\ 1 & 1 & 3 } & \tableau{3 \\ 1 & 1 & 1 } & \tableau{3 \\ 1 & 1 & 2 } \\ \\ \tableau{3 \\ 1 & 1 & 3 } & \tableau{ 3\\ 1 & 2 & 2 } & \tableau{ 3\\ 1 & 2 & 3 } & \tableau{ 3 \\ 2 & 2 & 2 } & \tableau{ 3 \\ 2 & 2 & 3 }
\end{array}
\end{displaymath}
\caption{The ten elements of $\mathrm{YCT}(31)$ with entries at most $3$.}\label{fig:YQS31}
\end{figure}
\end{ex}
Notice this involution preserves monomial and fundamental quasisymmetric polynomials: $M_{\alpha}(x_1,x_2, \hdots , x_n) = M_{{\rm rev}(\alpha)} (x_n, \hdots , x_2, x_1)$ and $F_{\alpha}(x_1,x_2, \hdots , x_n) = F_{{\rm rev}(\alpha)} (x_n, \hdots , x_2, x_1)$.
\begin{proposition}\label{prop:yqsqsexpansion}~\cite{HLMvW09,LMvWbook}
Quasisymmetric Schur and Young quasisymmetric Schur polynomials expand positively in the fundamental quasisymmetric basis, and
\[\yqs_\alpha(x_1,\ldots , x_n) = \sum_\beta c_\beta^\alpha F_\beta(x_1,\ldots , x_n)\]
if and only if
\[\qs_{{\rm rev}(\alpha)}(x_1,\ldots , x_n) = \sum_\beta c_\beta^\alpha F_{{\rm rev}(\beta)}(x_1,\ldots , x_n) .\]
\end{proposition}
For example,
$\yqs_{31}(x_1,x_2,x_3) = F_{31}(x_1,x_2,x_3)+F_{22}(x_1,x_2,x_3)$, whereas
$\qs_{13}(x_1,x_2,x_3) = F_{13}(x_1,x_2,x_3)+F_{22}(x_1,x_2,x_3).$
A remarkable property of the quasisymmetric Schur and Young quasisymmetric Schur polynomials is that they both positively refine Schur polynomials:
\begin{proposition}\label{prop:schurexpansion}\cite{LMvWbook}
\[s_\lambda(x_1,\ldots , x_n) = \sum_{{\rm sort}(\alpha) = \lambda}\qs_\alpha(x_1,\ldots , x_n) = \sum_{{\rm sort}(\alpha) = \lambda}\yqs_\alpha(x_1,\ldots , x_n)\]
\end{proposition}
\begin{remark}
As noted in the introduction, Schur polynomials may be described in terms of either decreasing or increasing semistandard tableaux. Therefore Schur polynomials and ``Young Schur polynomials'' are the same (provided we consider a partition and its reversal to be the same), so from this perspective it makes sense that Schur polynomials expand positively into both the quasisymmetric Schur and Young quasisymmetric Schur bases. Similarly, the fact that both quasisymmetric Schur and Young quasisymmetric Schur polynomials expand positively in fundamental quasisymmetric polynomials (Proposition~\ref{prop:yqsqsexpansion}) makes sense due to the fact that fundamental quasisymmetric polynomials may also be described in terms of either increasing or decreasing tableaux (Proposition~\ref{prop:YoungFisF}), and thus are the same as ``Young fundamental quasisymmetric polynomials''.
\end{remark}
Typically a Young quasisymmetric Schur polynomial is not equal to any quasisymmetric Schur polynomial. However, we can classify their coincidences. We delay the proof to the appendix.
\begin{theorem}\label{thm:yqsqs}
$\yqs_\alpha(x_1, \ldots , x_n) = \qs_\beta(x_1, \ldots , x_n)$ if and only if $\alpha=\beta$ and either $\alpha$ has all parts the same, or all parts of $\alpha$ are $1$ or $2$, or $n=\ell(\alpha)$ and consecutive parts of $\alpha$ differ by at most $1$.
\end{theorem}
\subsection{Key polynomials and Demazure atoms}
We now shift our attention to the ring $\ensuremath{\mathrm{Poly}}_n=\mathbb{Z}[x_1,\ldots , x_n]$ of all polynomials in $n$ variables. This ring possesses a variety of bases important in geometry and representation theory. A principal example is the basis of \emph{key polynomials}, which are characters of (type A) Demazure modules \cite{Dem74a, LasSch90, RS95} and which also arise as specializations of nonsymmetric Macdonald polynomials. Closely related is the basis of \emph{Demazure atoms}, originally introduced as \emph{standard bases} in \cite{LasSch90}. Demazure atoms were shown in~\cite{Mas09} to also be a specialization of nonsymmetric Macdonald polynomials. They are equal to the smallest non-intersecting pieces of type $A$ Demazure characters and can be obtained through a truncated application of \emph{divided difference operators}. Intuitively, one can build the Demazure atoms by starting with a monomial and partially symmetrizing, keeping only the monomials not appearing in the previous iteration of this process.
\subsubsection{Semi-skyline fillings}
Both key polynomials and Demazure atoms are defined in terms of reverse fillings that are often referred to as semi-skyline fillings. To define the key polynomial corresponding to a weak composition $a$ of length $n$, first note that the definition of type A and B triples extends verbatim from composition diagrams to weak composition diagrams. We need to include a \emph{basement column}, an extra $0$th column in the diagram: for our purposes the basement entry of row $i$ is $n+1-i$. Basement entries do not contribute to the weight of a filling. Define the \emph{key fillings} $\mathrm{KSSF}(a)$ for $a$ to be the fillings of $D({\rm rev}(a))$ (note the reversal) satisfying the following conditions.
\begin{enumerate}
\item Entries in each row, including basement entries, weakly decrease from left to right.
\item Entries do not repeat in any column.
\item All type A and type B triples, including triples containing basement entries, are inversion triples.
\end{enumerate}
We use the following as definitional for key polynomials.
\begin{theorem}\label{thm:keydefinition}\cite{HHL08,Mas09}
Let $a$ be a weak composition of length $n$. Then
\[\key_a = \sum_{T\in \mathrm{KSSF}(a)}x^{{\rm wt}(T)},\]
where only the non-basement entries contribute to the weight.
\end{theorem}
For example, we have $\key_{032} = x^{032} + x^{122} + x^{212} + x^{302} + x^{311} + x^{320} + x^{131} + x^{221} + x^{230}$,
which is computed using the elements of $\mathrm{KSSF}(032)$ shown in Figure~\ref{fig:key032} below.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c}
\tableau{ \bf{1} \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} & 3 & 3 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 2 & 2 & 1 \\ \bf{3} & 3 & 3 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 2 & 1 & 1 \\ \bf{3} & 3 & 3 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 1 & 1 & 1 \\ \bf{3} & 3 & 3 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 1 & 1 & 1 \\ \bf{3} & 3 & 2 \\ \hline } \\ \\ \tableau{ \bf{1} \\ \bf{2} & 1 & 1 & 1 \\ \bf{3} & 2 & 2 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} & 3 & 1 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 2 & 1 & 1 \\ \bf{3} & 3 & 2 \\ \hline } & \tableau{ \bf{1} \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} & 1 & 1 \\ \hline }
\end{array}
\end{displaymath}
\caption{The 9 key fillings of shape $032$. (Basement entries in bold.)}\label{fig:key032}
\end{figure}
The definition of the Demazure atoms in terms of semi-skyline fillings comes from specializing the diagram fillings used to generate the nonsymmetric Macdonald polynomials~\cite{HHL08}. Define the \emph{atom fillings} $\mathrm{ASSF}(a)$ for $a$ to be the fillings of $D(a)$ (no basement) satisfying the following conditions.
\begin{enumerate}
\item Entries weakly decrease from left to right in each row.
\item Entries do not repeat in any column.
\item The first entry of each row is equal to its row index.
\item All type A and type B triples are inversion triples.
\end{enumerate}
We use the following as definitional for Demazure atoms.
\begin{theorem}\label{thm:atomdefinition}\cite{Mas09}
Let $a$ be a weak composition of length $n$. Then
\[\atom_a = \sum_{T\in \mathrm{ASSF}(a)}x^{{\rm wt}(T)}.\]
\end{theorem}
\subsubsection{Left and right keys}
The eponymous formula for the key polynomial $\key_a$ is given in terms of right keys. A \emph{semistandard Young tableau} (or $\mathrm{SSYT}$) $T$ is a tableau of partition shape such that entries weakly increase along rows and strictly increase up columns. For a partition $\lambda$, let $\mathrm{SSYT}(\lambda)$ denote the set of all $\mathrm{SSYT}$ of shape $\lambda$, and $\mathrm{SSYT}_n(\lambda)$ the subset of $\mathrm{SSYT}(\lambda)$ whose entries are at most $n$.
A semistandard Young tableau $T$ is a \emph{key} if the entries appearing in the $(i+1)$th column of $T$ are a subset of the entries appearing in the $i$th column of $T$, for all $i$. For a weak composition $a$, define $\mathrm{key}(a)$ to be the unique key of weight $a$. For any semistandard Young tableau $T$, there are two keys of the same shape as $T$ associated to $T$, called the \emph{right key} of $T$, denoted $K_+(T)$, and the \emph{left key} of $T$, denoted $K_-(T)$.
We now describe procedures for computing right and left keys, which will be illustrated in Example~\ref{ex:leftrightkey} below. There are several different methods for computing keys (see, for example~\cite{Mas09},~\cite{Wil13}) but we use the classical method presented in \cite{RS95} as it involves several tools we will need later.
Two words ${\bf b}$ and ${\bf c}$ in $\{1,2,\ldots, n\}$ are said to be \emph{Knuth-equivalent}, written ${\bf b}\sim {\bf c}$, if one can be obtained from the other by a series of the following local moves:
\begin{align*}
{\bf d}xzy{\bf e} \sim {\bf d}zxy{\bf e} & \quad \mbox{ for } \quad x\le y < z \\
{\bf d}yxz{\bf e} \sim {\bf d}yzx{\bf e} & \quad \mbox{ for } \quad x< y \le z
\end{align*}
for words ${\bf d}$ and ${\bf e}$ and letters $x,y,z$.
Define the \emph{column word factorization} of a word $v$ to be the decomposition of $v$ into subwords $v=v^{(1)} v^{(2)} \cdots$ by starting a new subword between every weak ascent. Then the \emph{column form} of $v$ (denoted $\colform(v)$) is the composition whose parts are the lengths of the subwords appearing in the column word factorization. Let $\lambda$ be the shape of the $\mathrm{SSYT}$ obtained when Schensted insertion (see, e.g.,~\cite{Ful97,Sag13,Sta99}) is applied to $v$. The word $v$ is said to be \emph{column-frank} if $\colform(v)$ is a rearrangement of the nonzero parts of $\lambda'$, where $\lambda'$ denotes the conjugate shape of $\lambda$ obtained by reflecting the diagram of $\lambda$ across the line $y=x$. Let $T\in \mathrm{SSYT}(\lambda)$, and let the \emph{column word} $\col(T)$ of $T$ be the word obtained by reading the entries of $T$ down each column, proceeding from the leftmost column to the rightmost. Then the $j$th column of the right key (respectively, left key) of $T$ is the last (respectively, first) subword in any column-frank word that is Knuth-equivalent to $\col(T)$ and whose last (respectively, first) subword has length $\lambda_j'$.
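These notions are easy to experiment with computationally. The sketch below (helper names ours, conventions as just defined) closes a word under the two elementary Knuth moves, computes column forms, and tests column-frankness; applied to the word $21322$, it produces the data used in Example~\ref{ex:leftrightkey} below.
\begin{verbatim}
def knuth_class(word):
    """All words Knuth-equivalent to `word`, obtained by closing
    under the two elementary Knuth moves in both directions."""
    seen, stack = {tuple(word)}, [tuple(word)]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 2):
            a, b, c = w[i:i + 3]
            moves = []
            if a <= c < b or b <= c < a:    # xzy <-> zxy for x <= y < z
                moves.append(w[:i] + (b, a, c) + w[i + 3:])
            if b < a <= c or c < a <= b:    # yxz <-> yzx for x < y <= z
                moves.append(w[:i] + (a, c, b) + w[i + 3:])
            for v in moves:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return sorted(seen)

def colform(w):
    """Lengths of the subwords in the column word factorization:
    a new subword starts at every weak ascent."""
    parts, run = [], 1
    for p, q in zip(w, w[1:]):
        if q >= p:
            parts.append(run)
            run = 1
        else:
            run += 1
    parts.append(run)
    return parts

def column_frank(w, conj):
    """Test whether colform(w) rearranges the partition `conj`."""
    return sorted(colform(w)) == sorted(conj)

for w in knuth_class((2, 1, 3, 2, 2)):
    print(w, colform(w), column_frank(w, (2, 2, 1)))
\end{verbatim}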
Notice the difference in the construction of left and right keys. The weight of the left key is usually not a reversal of the weight of the right key; the subtle connection between left and right keys is explored in Section~\ref{sec:leftkeys}, wherein we also define polynomials naturally associated to left keys.
\begin{theorem}\label{thm:rightkey}~\cite{LasSch90,RS95}
Let $a$ be a weak composition of length $n$. Then
$$\key_{a} = \sum_{\substack{ T \in \mathrm{SSYT}_n({\rm sort}(a)) \\ K_+(T) \le \mathrm{key}(a)}} x^{{\rm wt}(T)} ,$$
where $K_+(T)\le \mathrm{key}(a)$ if each entry of $K_+(T)$ is weakly smaller than the corresponding entry of $\mathrm{key}(a)$.
\end{theorem}
\begin{ex}\label{ex:leftrightkey}
Let $a=032$. Then $\mathrm{key}(a) = \tableau{ 3 & 3 \\ 2 & 2 & 2 }$, which is a tableau of shape $\lambda = 32$. The nine tableaux whose right keys are smaller than or equal to $\mathrm{key}(a)$ are
\[
\begin{array}{c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c}
\tableau{ 3 & 3 \\ 2 & 2 & 2 } & \tableau{ 3 & 3 \\ 1 & 2 & 2 } & \tableau{ 3 & 3 \\ 1 & 1 & 2 } & \tableau{ 3 & 3 \\ 1 & 1 & 1 } & \tableau{ 2 & 3 \\ 1 & 1 & 1 } &
\tableau{ 2 & 2 \\ 1 & 1 & 1 } & \tableau{ 2 & 2 \\ 1 & 1 & 2 } & \tableau{ 2 & 3 \\ 1 & 1 & 2 } & \tableau{ 2 & 3 \\ 1 & 2 & 2 }
\end{array}
\]
To illustrate the process of finding right (and left) keys, let $T$ be the last tableau in the list above. Then $\col(T) = 21322$. The words that are Knuth-equivalent to $21322$ (listed with vertical bars indicating the column word factorizations) are $\{21|32|2, \; 21|2|32, \; 2|21|32, \; 2|2|31|2, \; 2|31|2|2\}$. The column form of each of the first three words is a rearrangement of $\lambda' = 221$, so these three words are column-frank. The last two are not column-frank, so we ignore them. Looking at the rightmost subword in each column-frank word, the first of these words tells us that the column of $K_+(T)$ of length $1$ consists of a single $2$, and the second (or third) word tells us that the columns of $K_+(T)$ of length $2$ each contain a $2$ and a $3$.
Thus $K_+(T) = \tableau{ 3 & 3 \\ 2 & 2 & 2 }$. Similarly, via leftmost subwords, we obtain $K_-(T) = \tableau{ 2 & 2 \\ 1 & 1 & 2 }$.
\end{ex}
One may also use right keys to define the Demazure atoms. Given a weak composition $a$ of length $n$, the Demazure atom $\atom_a$ can also be given by \begin{align}\label{atomrtkey} \atom_{a} = \sum_{\substack{ T \in \mathrm{SSYT}_n(\lambda(a)) \\ K_+(T) = \mathrm{key}(a)}} x^{{\rm wt}(T)}. \end{align}
From this construction and Theorem~\ref{thm:rightkey}, it is apparent that key polynomials expand positively in Demazure atoms. In particular,
\begin{align}\label{keysintoatoms} \key_a = \sum_{b \le a} \atom_b,\end{align}
where $b \le a$ if and only if ${\rm sort}(b)={\rm sort}(a)$ and the permutation $w$ such that $w({\rm sort}(b))=b$ is less than or equal to the permutation $v$ such that $v({\rm sort}(a))=a$ in the Bruhat order.
\subsubsection{Divided differences and crystal graphs}{\label{sec:keycrystal}}
Key polynomials can be defined in terms of \emph{divided difference operators}. Given a positive integer $i$, where $1\le i <n$, define an operator $\partial_i$ on $\mathbb{Z}[x_1,\ldots , x_n]$ by
\[\partial_i(f) = \frac{f-s_i(f)}{x_i-x_{i+1}}\]
where $s_i$ exchanges $x_i$ and $x_{i+1}$. Now define another operator $\pi_i$ on $\mathbb{Z}[x_1,\ldots , x_n]$ by
\[\pi_i(f) = \partial_i(x_if).\]
For a permutation $w$, define $\pi_w = \pi_{i_1}\cdots \pi_{i_r}$, where $s_{i_1}\cdots s_{i_r}$ is any reduced word for $w$. (This definition is independent of the choice of reduced word because the $\pi_i$ satisfy the commutation and braid relations for the symmetric group.) Recall that ${\rm sort}(a)$ is the rearrangement of the entries of $a$ into decreasing order. For a weak composition $a$ let $w_a$ be the minimal length permutation that sends $a$ to ${\rm sort}(a)$ acting on the right. Then the key polynomial is given by
\[\key_a = \pi_{w_a}x^{{\rm sort}(a)}.\]
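These operators are convenient to implement in a computer algebra system. The sketch below uses the third-party sympy library (the helper names are ours) and recomputes the example that follows.
\begin{verbatim}
import sympy as sp

n = 3
x = sp.symbols(f"x1:{n + 1}")        # the tuple (x1, x2, x3)

def swap(f, i):
    """Exchange the variables x_i and x_{i+1} in f."""
    return f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)

def partial(i, f):
    """Divided difference: (f - s_i(f)) / (x_i - x_{i+1})."""
    return sp.cancel((f - swap(f, i)) / (x[i - 1] - x[i]))

def pi(i, f):
    """Demazure operator: pi_i(f) = partial_i(x_i * f)."""
    return partial(i, sp.expand(x[i - 1] * f))

# key_{032} = pi_1 pi_2 (x^{sort(032)}) = pi_1 pi_2 (x1^3 x2^2)
print(sp.expand(pi(1, pi(2, x[0] ** 3 * x[1] ** 2))))
\end{verbatim}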
\begin{ex}
Let $a=032$. Then the minimal length permutation taking $a$ to ${\rm sort}(a) = 320$ is $s_1 s_2$. We compute
\begin{align*}
\pi_1\pi_2 (x_1^3x_2^2) & =\pi_1 \frac{x_1^3x_2^3 - x_1^3x_3^3}{x_2-x_3} \\
& = \pi_1(x_1^3x_2^2+x_1^3x_2x_3+x_1^3x_3^2) \\
& = \frac{(x_1^4x_2^2 - x_1^2x_2^4) + (x_1^4x_2x_3 - x_1x_2^4x_3) +(x_1^4x_3^2-x_2^4x_3^2)}{x_1-x_2} \\
& = x_1^3x_2^2+x_1^2x_2^3+x_1^3x_2x_3+x_1^2x_2^2x_3+x_1x_2^3x_3+x_1^3x_3^2+x_1^2x_2x_3^2+x_1x_2^2x_3^2+x_2^3x_3^2 \\
& = \key_{032}.
\end{align*}
\end{ex}
Demazure atoms can also be described in terms of divided difference operators. In particular, let $\overline{\pi}_i = \pi_i -1$. Then (see~\cite{Mas09}) $$\atom_a=\overline{\pi}_{w_a} x^{{\rm sort}(a)},$$ where $\overline{\pi}_{w_a} = \overline{\pi}_{i_1}\cdots\overline{\pi}_{i_r}$ for any reduced word $s_{i_1}\cdots s_{i_r}$ of $w_a$.
The action of the divided difference operators can be realised in terms of \emph{Demazure crystals}. A \emph{crystal graph} is a directed and colored graph whose edges are defined by \emph{Kashiwara operators}~\cite{Kas91,Kas93,Kas95} $e_i$ and $f_i$. See~\cite{HonKan02} for a detailed introduction to the theory of quantum groups and crystal bases and~\cite{BS17} for a more combinatorial exploration of crystals.
For a partition $\lambda$, the type $A_n$ highest weight crystal $B(\lambda)$ of highest weight $\lambda$ has vertices indexed by $\mathrm{SSYT}_n(\lambda)$. The \emph{character} of $B(\lambda)$ is
\[\ch(B(\lambda)) = \sum_{T\in B(\lambda)}x^{{\rm wt}(T)},\]
which is equal to the Schur polynomial $s_{\lambda}(x_1, \ldots , x_n)$, reflecting the fact that Schur polynomials are characters for irreducible highest weight modules for $GL_n$. See Figure~\ref{fig:Youngcrystalkey} below for $B(21)$ when $n=3$, in which the arrows index the Kashiwara operators $f_1$ and $f_2$. Precise definitions of the $f_i$ can be found in e.g.~\cite{BS17}; in particular we note that $f_i(b)=0$ if there is no $i$-arrow emanating from vertex $b$, and the $e_i$ are defined by $e_i(b)=b'$ if $f_i(b')=b$, and $e_i(b)=0$ otherwise.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[xscale=1.5,yscale=1.2]
\node at (2,4) (T112) {$\tableau{2 \\ 1 & 1}$};
\node at (0,3) (T113) {$\tableau{3 \\ 1 & 1}$};
\node at (4,3) (T122) {$\tableau{2 \\ 1 & 2}$};
\node at (1,2) (T132) {$\tableau{2 \\ 1 & 3}$};
\node at (3,2) (T123) {$\tableau{3 \\ 1 & 2}$};
\node at (0,1) (T133) {$\tableau{3 \\ 1 & 3}$};
\node at (4,1) (T223) {$\tableau{3 \\ 2 & 2}$};
\node at (2,0) (T233) {$\tableau{3 \\ 2 & 3}$};
\draw[thick,->,blue ] (T112) -- (T122) node[midway,above] {$1$};
\draw[thick,->,blue ] (T113) -- (T123) node[midway,above] {$1$};
\draw[thick,->,blue ] (T123) -- (T223) node[midway,above] {$1$};
\draw[thick,->,blue ] (T133) -- (T233) node[midway,above] {$1$};
\draw[thick,->,red ] (T112) -- (T113) node[midway,above] {$2$};
\draw[thick,->,red ] (T122) -- (T132) node[midway,above] {$2$};
\draw[thick,->,red ] (T132) -- (T133) node[midway,above] {$2$};
\draw[thick,->,red ] (T223) -- (T233) node[midway,above] {$2$};
\end{tikzpicture}
\caption{\label{fig:Youngcrystalkey} Crystal graph $B(21)$ for $n=3$.}
\end{center}
\end{figure}
A Demazure crystal is a subset of $B(\lambda)$ whose character is a key polynomial~\cite{Lit95, Kas93}, obtained by a truncated action of the Kashiwara operators. Specifically, given a subset $X$ of $B(\lambda)$, define operators $\mathfrak{D}_i$ for $1 \le i < n$ by
$$\mathfrak{D}_i X = \{ b \in B(\lambda) | e_i^r(b) \in X \textrm{ for some } r \ge 0 \}.$$
Given a permutation $w$ with reduced word $w = s_{i_1} s_{i_2} \cdots s_{i_k}$, define
$$B_w(\lambda) = \mathfrak{D}_{i_1} \mathfrak{D}_{i_2} \cdots \mathfrak{D}_{i_k} \{ u_\lambda \},$$
where $u_\lambda$ is the highest weight element in $B(\lambda)$, i.e., $e_i(u_{\lambda}) = 0$ for all $1 \le i < n$. If $b,b' \in B_w(\lambda) \subseteq B(\lambda)$ and $f_i(b)=b'$ in $B(\lambda)$, then the crystal operator $f_i$ is also defined in $B_w(\lambda)$. The character of a Demazure crystal $B_w(\lambda)$ is defined as $$\ch B_w (\lambda) = \sum_{b \in B_w(\lambda)} x_1^{{\rm wt}(b)_1} \cdots x_n^{{\rm wt}(b)_n},$$ which is equal to $\key_a$ when $w$ is of shortest length such that $w(a)=\lambda$~\cite{Lit95,Kas93}. The repeated actions of the $\mathfrak{D}_i$ starting with $u_\lambda$ precisely mirror the repeated action of the divided difference operators $\pi_i$ starting with the monomial $x^\lambda$.
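Since the operators $\mathfrak{D}_i$ only require the $f_i$-edges of $B(\lambda)$, the truncation is easy to simulate once a crystal graph is stored as data. In the sketch below (an assumed encoding: each vertex of Figure~\ref{fig:Youngcrystalkey} is named by its multiset of entries, and edges[(b, i)] records $f_i(b)$ when the $i$-arrow exists), the computation reproduces Example~\ref{ex:crystalkey} below.
\begin{verbatim}
def demazure(edges, start, word):
    """Apply D_{i_1} ... D_{i_k} to {start}, rightmost operator
    first, where edges[(b, i)] = f_i(b)."""
    current = {start}
    for i in reversed(word):
        frontier = set(current)
        while frontier:
            nxt = {edges[(b, i)] for b in frontier if (b, i) in edges}
            frontier = nxt - current
            current |= nxt
    return current

# B(21) for n = 3; "112" is the highest weight element
edges = {("112", 1): "122", ("113", 1): "123", ("123", 1): "223",
         ("133", 1): "233", ("112", 2): "113", ("122", 2): "132",
         ("132", 2): "133", ("223", 2): "233"}

# key_{102}: w = s_2 s_1, so compute D_2 D_1 {u}; expect five vertices
print(sorted(demazure(edges, "112", [2, 1])))
\end{verbatim}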
\begin{ex}\label{ex:crystalkey}
Let $a=102$. Then the shortest length $w$ such that $w(a)={\rm sort}(a) = 210$ is $w=s_2s_1$. Therefore, the crystal graph for $\key_{102}$ is the subgraph of $B(21)$ consisting of all vertices that can be obtained from the highest weight $\tableau{2 \\ 1 & 1}$ by first applying a sequence of $f_1$'s and then a sequence of $f_2$'s. In Figure~\ref{fig:Youngcrystalkey}, these are the tableaux of weight $210$, $201$, $120$, $111$ (the leftmost such) and $102$. Hence $\key_{102} = x_1^2 x_2 + x_1x_2^2 + x_1^2 x_3 + x_1 x_2 x_3 + x_1 x_3^2$.
\end{ex}
\subsubsection{Compatible Sequences}{\label{sec:CompatibleSequences}}
Key polynomials can also be constructed using \emph{compatible sequences} as follows. Let ${\bf b} = b_1 b_2 \cdots b_p$ be a word in the alphabet $\{1,2,\ldots, n\}$. A word ${\bf w}=w_1 w_2 \cdots w_p$ is \emph{${\bf b}$-compatible} if
\begin{enumerate}
\item $1 \le w_1 \le w_2 \le \cdots \le w_p\le n$,
\item $w_k < w_{k+1}$ whenever $b_k < b_{k+1},$ for all $1 \le k <p$, and
\item $w_k \le b_k$ for all $1 \le k \le p$ (flag condition).
\end{enumerate}
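Each condition is a local constraint, so the ${\bf b}$-compatible words are easy to enumerate by brute force, as in the following sketch (helper names ours); it recovers the six compatible sequences for the word $22233$ in Example~\ref{ex:keycompatible} below.
\begin{verbatim}
from itertools import product

def compatible(b, n):
    """All b-compatible words, by brute force over [n]^len(b)."""
    p = len(b)
    result = []
    for w in product(range(1, n + 1), repeat=p):
        weakly_increasing = all(w[k] <= w[k + 1] for k in range(p - 1))
        strict_at_ascents = all(w[k] < w[k + 1]
                                for k in range(p - 1) if b[k] < b[k + 1])
        flagged = all(w[k] <= b[k] for k in range(p))
        if weakly_increasing and strict_at_ascents and flagged:
            result.append(w)
    return result

print(compatible((2, 2, 2, 3, 3), 3))
# [(1,1,1,2,2), (1,1,1,2,3), (1,1,1,3,3),
#  (1,1,2,3,3), (1,2,2,3,3), (2,2,2,3,3)]
\end{verbatim}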
\begin{theorem}\cite{RS95}\label{thm:keycompatible}
Let $a$ be a weak composition of length $n$. Then
$$\key_a = \sum_{{\rm rev}(b) \sim \col(\mathrm{key}(a)), \; w \textrm{ is $b$-compatible}} x^{\mathrm{comp}(w)},$$
where $\mathrm{comp}(w)$ is the weak composition whose $i$th entry counts the occurrences of $i$ in $w$.
\end{theorem}
\begin{ex}\label{ex:keycompatible}
Let $a=032$. We have ${\rm key}(032)=\tableau{3 & 3 \\ 2 & 2 & 2 }$, and $\col({\rm key}(032)) = 32322$. The set of words Knuth-equivalent to $32322$ is $\{32322, 33222, 32232, 23232, 23322\}$. Reversing these gives the set
$\{22323, 22233, 23223, 23232, 22332\}.$ We compute the set of compatible sequences for each of these:
\begin{figure}[h]
\begin{tabular}{l | l}
{\bf Word} & {\bf Compatible sequences} \\\hline
22323 & 11223 \\\hline
22233 & 22233 12233 11233 11133 11123 11122 \\\hline
23223 & 12223 \\\hline
23232 & \\\hline
22332 & 11222 \\
\end{tabular}
\caption{Compatible sequences. \label{fig:compseq}}
\end{figure}
\end{ex}
Observe that there are $9$ compatible sequences in total, each contributing the monomial $x^{\mathrm{comp}(w)}$ to $\key_{032}$. In Proposition~\ref{prop:keytoslidecompatible}, we interpret the \emph{fundamental slide} expansion of a key polynomial in terms of Knuth equivalence classes.
\section{Young key polynomials}{\label{Sec:Youngkeys}}
We now introduce the \emph{Young key polynomial} basis for polynomials. This basis has proved useful in computing the Hilbert series of a generalization of the coinvariant algebra, specifically, in constructing a Gr\"{o}bner basis for the ideal $I_{n,k} = \langle x_1^k ,x_2^k, \hdots , x_n^k , e_n, e_{n-1} , \hdots , e_{n-k+1} \rangle$ \cite{HRS18}. However, the combinatorial and representation-theoretic properties of the Young key polynomials have not, to our knowledge, been explored previously, nor has the connection to the overall flip-and-reverse perspective. We begin by providing a combinatorial description of the Young key polynomial basis analogous to that of the Young version of the quasisymmetric Schur polynomials.
Note that the definition of Young triples extends verbatim to weak composition diagrams. As in the definition of key polynomials, we append a \emph{basement column} to diagrams. Given a weak composition $a$ of length $n$, define the \emph{Young key fillings} $\mathrm{YKSSF}(a)$ for $a$ to be the fillings of $D({\rm rev}(a))$ (note the reversal) with entries from $\{1, \ldots , n\}$ satisfying the following conditions.
\begin{enumerate}
\item Entries in each row, including basement entries, weakly increase from left to right.
\item Entries do not repeat in any column.
\item All type I and type II Young triples, including triples using basement entries, are Young inversion triples.
\end{enumerate}
Define the \emph{Young key polynomial} $\ykey_a$ by
\[\ykey_a = \sum_{T\in \mathrm{YKSSF}(a)}x^{{\rm wt}(T)},\]
where only the non-basement entries contribute to the weight.
For example, we have $\ykey_{230} = x^{230} + x^{221} + x^{212} + x^{203} + x^{113} + x^{023} + x^{131} + x^{122} + x^{032}$, which is computed by the elements of $\mathrm{YKSSF}(230)$ shown in Figure~\ref{fig:ykey230}.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c}
\tableau{ \bf{1} & 1 & 1 \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 1 \\ \bf{2} & 2 & 2 & 3 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 1 \\ \bf{2} & 2 & 3 & 3 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 1 \\ \bf{2} & 3 & 3 & 3 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 2 \\ \bf{2} & 3 & 3 & 3 \\ \bf{3} \\ \hline } \\ \\ \tableau{ \bf{1} & 2 & 2 \\ \bf{2} & 3 & 3 & 3 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 3 \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 1 & 2 \\ \bf{2} & 2 & 3 & 3 \\ \bf{3} \\ \hline } & \tableau{ \bf{1} & 3 & 3 \\ \bf{2} & 2 & 2 & 2 \\ \bf{3} \\ \hline }
\end{array}
\end{displaymath}
\caption{The 9 Young key fillings of shape $230$. (Basement entries are in bold.)}\label{fig:ykey230}
\end{figure}
Note that the definition immediately implies that
\begin{equation}\label{eqn:keyyoungkey}
\ykey_a(x_1, x_2 , \hdots , x_n) = \key_{{\rm rev}(a)}(x_n, x_{n-1} , \hdots , x_1).
\end{equation}
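Continuing the sympy sketch from Section~\ref{sec:keycrystal}, Equation~(\ref{eqn:keyyoungkey}) can be checked directly for $a=230$ and $n=3$: reversing the variables of $\key_{032}$ yields the nine monomials of $\ykey_{230}$ displayed above.
\begin{verbatim}
# reuse pi and x from the earlier sympy sketch (n = 3)
key_032 = sp.expand(pi(1, pi(2, x[0] ** 3 * x[1] ** 2)))
ykey_230 = sp.expand(key_032.subs({x[0]: x[2], x[2]: x[0]},
                                  simultaneous=True))
print(ykey_230)    # the expansion of ykey_{230} given above
\end{verbatim}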
\begin{proposition}\label{prop:youngkeybasis}
The Young key polynomials are a basis for $\ensuremath{\mathrm{Poly}}_n$, containing the Schur polynomials. In particular, if $a$ is decreasing then
\[\ykey_a = s_\lambda(x_1,\ldots , x_n),\]
where $\lambda$ is $a$ with trailing zeros removed.
\end{proposition}
\begin{proof}
The Young key polynomials are equinumerous with the key polynomials. Since the key polynomials are a basis of $\ensuremath{\mathrm{Poly}}_n$ and reversing the variables $x_j \leftrightarrow x_{n+1-j}$ is a ring automorphism of $\ensuremath{\mathrm{Poly}}_n$, it follows from (\ref{eqn:keyyoungkey}) that every polynomial can be expressed uniquely as a linear combination of Young key polynomials: expand the variable-reversal of the polynomial in key polynomials, and then reverse the variables throughout that expansion. Hence the Young key polynomials are a basis of $\ensuremath{\mathrm{Poly}}_n$.
If $a$ is decreasing, then ${\rm rev}(a)$ is increasing, and thus $\key_{{\rm rev}(a)} = s_{\lambda}(x_1,\ldots , x_n)$ \cite{Mac91}. Hence $\ykey_a = s_\lambda(x_1,\ldots , x_n)$ by (\ref{eqn:keyyoungkey}), because Schur polynomials are symmetric and therefore invariant under reversing the variables.
\end{proof}
In this way, both the key and Young key polynomials extend the Schur polynomials to $\ensuremath{\mathrm{Poly}}_n$. This is in fact their only coincidence.
\begin{theorem}\label{thm:keyyoungkeyintersect}
The polynomials that are both key polynomials and Young key polynomials are exactly the Schur polynomials.
\end{theorem}
\begin{proof}
Suppose $s_{\lambda}(x_1,\ldots , x_n)$ is a Schur polynomial in $n$ variables. Then
\[s_{\lambda}(x_1,\ldots , x_n) = \key_{0^{n-\ell(\lambda)}\times {\rm rev}(\lambda)} = \ykey_{\lambda\times 0^{n-\ell(\lambda)}},\]
where $0^m\times b$ (respectively, $b\times 0^m$) denotes $b$ with $m$ zeros prepended (respectively, appended).
For the converse, note that for any weak composition $a$, the key polynomial $\key_a$ has the monomial $x^{{\rm sort}(a)}$ as a term; this follows from the divided difference definition. But the only Young key polynomial containing $x^{{\rm sort}(a)}$ as a term is $\ykey_{{\rm sort}(a)}$ itself, which is a Schur polynomial. So if $\key_a$ is not a Schur polynomial it cannot be equal to any Young key polynomial.
\end{proof}
We also define a Young analogue of the Demazure atoms. Let $a$ be a weak composition of length $n$. Define the \emph{Young atom fillings} $\mathrm{YASSF}(a)$ for $a$ to be the fillings of $D(a)$ (no basement) with entries from $\{1, \ldots , n\}$ satisfying the following conditions.
\begin{enumerate}
\item Entries weakly increase from left to right in each row.
\item Entries do not repeat in any column.
\item All type I and type II Young triples are Young inversion triples.
\item The first entry of each row is equal to its row index.
\end{enumerate}
Define the \emph{Young atom} $\ya_a$ by
\[\ya_a = \sum_{T\in \mathrm{YASSF}(a)}x^{{\rm wt}(T)}.\]
The definition immediately implies that $\ya_a(x_1, x_2 , \hdots , x_n) = \atom_{{\rm rev}(a)}(x_n, x_{n-1} , \hdots , x_1).$ As in Proposition~\ref{prop:youngkeybasis}, the Young atoms form a basis of $\ensuremath{\mathrm{Poly}}_n$. We can establish the coincidences between Demazure atoms and Young atoms, as we did in Theorem~\ref{thm:keyyoungkeyintersect} for keys and Young keys. Note the condition for coincidence is less restrictive than that for coincidence of quasisymmetric Schur and Young quasisymmetric Schur polynomials (Theorem~\ref{thm:yqsqs}), since elements of $\mathrm{YASSF}(a)$ and $\mathrm{ASSF}(a)$ necessarily have identical first columns.
\begin{theorem}\label{thm:atomyatom}
The polynomials that are both Demazure atoms and Young atoms are precisely the $\ya_a$ such that $|a_i-a_{i+1}|\le 1$ for all $1\le i < n$.
\end{theorem}
\begin{proof}
First we show that if $\ya_a = \atom_b$ then $a=b$. Suppose $\max(a) > \max(b)$, where $\max(a)$ is the largest part of $a$. Then since entries cannot repeat in any column for either $\mathrm{YASSF}$ or $\mathrm{ASSF}$, $\ya_a$ has terms where some $x_i$ has degree $\max(a)$, but $\atom_b$ cannot have any such term. Hence if $\ya_a=\atom_b$, the longest row(s) in $D(a)$ and $D(b)$ must have the same length. By a similar argument, the next-longest rows must then have the same length, etc.
Thus if $\ya_a = \atom_b$, then $b$ must be a rearrangement of $a$.
Now suppose $b$ rearranges $a$. Let $T\in \mathrm{YASSF}(a)$ be such that all entries in the $j$th row (for each $j$) are equal to $j$, and suppose there exists $S\in \mathrm{ASSF}(b)$ with the same weight as $T$. By definition, the first entry in each row $j$ of $S$ is $j$. Because the rows of $b$ rearrange those of $a$, the number of boxes in each column of $D(b)$ is the same as that for each column of $D(a)$. It follows that the set of entries in each column of $S$ must be the same as that in the corresponding column of $T$, since $T$ has $a_j$ instances of each entry $j$, and entries cannot repeat in any column of $T$ or $S$.
Now consider the entries in the second column of $S$, which are a subset of the entries in the first column for both $S$ and $T$. None of these entries can go in a row above the row that contains that entry in the first column, else the two copies of that entry would violate one of the triple conditions. Nor can an entry go in a lower row, since each row of $S$ begins with its row index and entries weakly decrease along each row. So each entry must go immediately adjacent to the same entry in the first column of $S$. Continuing thus, we obtain $S=T$, so in particular $a=b$.
Now suppose $a_{i} - a_{i+1} \ge 2$ for some $i$. Let $T\in \mathrm{YASSF}(a)$ be such that all entries in each row $j$ are $j$, and let $T'$ be obtained by changing the rightmost $i$ in $T$ to $i+1$. Since $a_{i} - a_{i+1} \ge 2$, this new $i+1$ is not in the first column, and is at least two columns to the right of any other $i+1$, so no $\mathrm{YASSF}$ properties are affected by this change and $T'\in \mathrm{YASSF}(a)$. But there is no $S\in \mathrm{ASSF}(a)$ with weight equal to $T'$: in rows $i+1$ and above, entries in $S$ must agree with entries in $T'$, and then there is nowhere the new $i+1$ could be placed in $S$. Hence $\ya_a\neq \atom_a$. A similar argument shows that if $a_{i+1} - a_{i} \ge 2$, then $\atom_a\neq \ya_a$.
Conversely, it is straightforward to observe that if $|a_i-a_{i+1}|\le 1$ for all $1\le i < n$, then both $\ya_a$ and $\atom_a$ are equal to the single monomial $x^a$.
\end{proof}
\subsection{Compatible sequences}{\label{sec:compatibleseqs}}
The Young key polynomials may also be described in terms of compatible sequences.
For a word $w$ in $\{1,2,\ldots , n\}$ define the \emph{flip} of $w$ to be the word $f(w)$ in $\{1,2,\ldots , n\}$ obtained by replacing each entry $w_i$ with $n+1-w_i$. Also define the \emph{flip-reverse} of $w$, denoted $\mathrm{frev}(w)$, to be the word $f({\rm rev}(w))$, or equivalently ${\rm rev}(f(w))$.
\begin{ex}
If $n=6$ and $w=2446154$, then $f(w)=5331623$ and $\mathrm{frev}(w)=3261335$.
\end{ex}
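Both maps are one-line operations on words; a sketch consistent with this example (function names ours):
\begin{verbatim}
def flip(w, n):
    """Replace each letter a of w by n + 1 - a."""
    return [n + 1 - a for a in w]

def frev(w, n):
    """Flip-reverse of w; flip and reversal commute."""
    return flip(w, n)[::-1]

w = [2, 4, 4, 6, 1, 5, 4]
print(flip(w, 6))    # [5, 3, 3, 1, 6, 2, 3], i.e. 5331623
print(frev(w, 6))    # [3, 2, 6, 1, 3, 3, 5], i.e. 3261335
\end{verbatim}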
Let $T$ be an $\mathrm{SSYT}$. Define the \emph{right-to-left column reading word} $\col_R(T)$ to be the word obtained by reading the entries in each column of $T$ from top to bottom starting with the rightmost column and moving from right to left.
\begin{lemma}\label{lem:frevrighttoleft}
Let $a$ be a weak composition. Then $\mathrm{frev}(\col(\mathrm{key}(a)))=\col_R(\mathrm{key}({\rm rev}(a)))$.
\end{lemma}
\begin{proof}
First of all, $\mathrm{key}(a)$ and $\mathrm{key}({\rm rev}(a))$ have the same shape. To see this, note that the height of the $i^{th}$ column of $\mathrm{key}(a)$ is equal to the number of entries $a_j$ in $a$ such that $a_j \ge i$. This number is the same for $a$ and ${\rm rev}(a)$.
This also shows that, for any given column, the entries of that column in $\mathrm{key}(a)$ are the flips of the entries of that column in $\mathrm{key}({\rm rev}(a))$: the entries of the $i$th column of $\mathrm{key}(a)$ are precisely the indices $j$ such that $a_j\ge i$, and replacing $a$ by ${\rm rev}(a)$ replaces each such index $j$ by $n+1-j$, where $n=\ell(a)$. Hence when the word for $\mathrm{key}({\rm rev}(a))$ is reversed, the column breaks line up and the word in each column is the flip-reverse of the word in that column of $\mathrm{key}(a)$. The statement follows.
\end{proof}
\begin{ex}
Let $a=(2,4,0,3)$. We have
\[\mathrm{key}(a) = \tableau{ 4 & 4 \\ 2 & 2 & 4 \\ 1 & 1 & 2 & 2 } \qquad \mbox{ and } \qquad \mathrm{key}({\rm rev}(a))= \tableau{ 4 & 4 \\ 3 & 3 & 3 \\ 1 & 1 & 1 & 3 }.\]
Here $\col(\mathrm{key}(a))$ is $421|421|42|2$ and $\col_R(\mathrm{key}({\rm rev}(a)))$ is $3|31|431|431$, which is indeed equal to $\mathrm{frev}(\col(\mathrm{key}(a)))$ (column-breaks included for emphasis).
\end{ex}
The following lemma is fairly well-known~\cite[Appendix A.1]{Ful97}; we include a proof here for completeness and to illustrate the flip-and-reverse procedure.
\begin{lemma}\label{lem:knuthfrev}
Let $w,w'$ be words in $\{1,\ldots , n\}$. Then $w\sim w'$ if and only if $\mathrm{frev}(w)\sim \mathrm{frev}(w')$.
\end{lemma}
\begin{proof}
It is enough to show this for the case that $w$ and $w'$ are related by a single Knuth move. For $x$ a letter in $w$, let $\overline{x}$ denote $n+1-x$. Suppose $w$ contains the sequence $\ldots xzy \ldots$ where $x\le y < z$. Then one may perform a Knuth move to obtain $w' = \ldots zxy \ldots$. In $\mathrm{frev}(w)$ we have $\ldots \overline{y}\overline{z}\overline{x} \ldots$ where $\overline{z}<\overline{y}\le \overline{x}$. Then one may perform a Knuth move to obtain the word $\ldots \overline{y}\overline{x}\overline{z} \ldots$, which is indeed $\mathrm{frev}(w')$. Now suppose $w$ contains the sequence $\ldots yxz \ldots$ where $x< y \le z$. Then one may perform a Knuth move to obtain $w' = \ldots yzx \ldots$. In $\mathrm{frev}(w)$ we have $\ldots \overline{x}\overline{z}\overline{y} \ldots$ where $\overline{z}\le\overline{y}< \overline{x}$. Then one may perform a Knuth move to obtain the word $\ldots \overline{z}\overline{x}\overline{y} \ldots$, which is indeed $\mathrm{frev}(w')$.
Therefore, $\mathrm{frev}(w)\sim \mathrm{frev}(w')$ whenever $w\sim w'$. The converse direction is immediate from the fact that $\mathrm{frev}$ is an involution.
\end{proof}
\begin{proposition}\label{prop:Knutheq}
Let $a$ be a weak composition. Then $\col(\mathrm{key}(a))$ is Knuth-equivalent to $\col_R(\mathrm{key}(a))$.
\end{proposition}
\begin{proof}
It suffices to show that the word $\col_R(\mathrm{key}(a))$ inserts to $\mathrm{key}(a)$.
Suppose $T$ is a key. Let $w$ be the word consisting of the entries in the leftmost column of $T$, listed in decreasing order, and let $T'$ be the key obtained by removing the leftmost column of $T$. We will show that inserting the entries of $w$ into $T'$ in order from largest to smallest yields another key, namely $T'$ with the column whose entries are the entries of $w$ adjoined on the left. This is the key $T$, and the conclusion then follows by induction, the base case where $T$ is empty being trivial.
We will establish that insertion of the $i$th entry of $w$ causes (a copy of) the $(i-1)$th entry of $w$ to be bumped from the first into the second row, the $(i-2)$th entry of $w$ to be bumped from the second to the third row, and so on, culminating in the first entry of $w$ arriving at the end of the $i$th row. This is clearly true for $i=1$, as the largest entry of $w$ is weakly larger than any entry of $T$ (due to the key condition), so it is inserted at the end of the first row. Suppose this is true for all entries up to the $(i-1)$th entry of $w$. Now, when the $i$th entry of $w$ is inserted, it bumps (a copy of) the $(i-1)$th entry of $w$ from row $1$, since there is no entry $x$ in the tableau such that $w_i<x<w_{i-1}$ by the key condition. Then the $(i-1)$th entry of $w$ must bump (a copy of) the $(i-2)$th entry of $w$ (which is in row $2$ by the inductive hypothesis), since again there is no entry $y$ in the tableau such that $w_{i-1}<y<w_{i-2}$ by the key condition. Continuing thus, $w_1$ is eventually bumped into row $i$, and comes to rest at the end of row $i$ since it is weakly larger than any other entry in the tableau.
Hence the insertion process results in a new entry $w_i$ in each row $|w|+1-i$. There is a unique such semistandard Young tableau, and by the key condition each entry $w_i$ (or a copy of this entry) must appear as the first entry of row $|w|+1-i$ for every $i$. Therefore the result is $T'$ with the column determined by $w$ appended, as required.
\end{proof}
We now give a formula for Young key polynomials in terms of compatible sequences.
\begin{theorem}\label{thm:ykeycompatible}
Let $a$ be a weak composition of length $n$. Then
$$\ykey_a = \sum_{f(c) \sim \col(\mathrm{key}(a)), \, w \textrm{ is $c$-compatible}} x^{\mathrm{comp}(f(w))}.$$
\end{theorem}
\begin{proof}
The set $X$ of words Knuth-equivalent to $\col(\mathrm{key}({\rm rev}(a)))$ is equal to the set of words Knuth-equivalent to $\col_R(\mathrm{key}({\rm rev}(a)))$ by Proposition~\ref{prop:Knutheq}, which is equal to the set of words Knuth-equivalent to $\mathrm{frev}(\col(\mathrm{key}(a)))$ by Lemma~\ref{lem:frevrighttoleft}. Then by Lemma~\ref{lem:knuthfrev}, the flip-reverses of the words in $X$ form the set $Y$ of words Knuth-equivalent to $\col(\mathrm{key}(a))$. Since $Y = \{\mathrm{frev}(x) : x\in X\}$, we have $\{f(y):y\in Y\} = \{{\rm rev}(x): x\in X\}$. By Theorem~\ref{thm:keycompatible}, $\key_{{\rm rev}(a)}(x_1,\ldots , x_n)$ is generated by the compatible sequences for $\{{\rm rev}(x): x\in X\}$, and thus also generated by the compatible sequences for $\{f(y):y\in Y\}$. Since $\ykey_a(x_n, \ldots , x_1) = \key_{{\rm rev}(a)}(x_1, \ldots , x_n)$, the compatible sequences for $\{f(y):y\in Y\}$ generate $\ykey_a(x_n, \ldots , x_1)$, i.e.,
$$\ykey_a(x_n, \ldots , x_1) = \sum_{f(c) \sim \col(\mathrm{key}(a)), \, w \textrm{ is $c$-compatible}} x^{\mathrm{comp}(w)}.$$
Finally, flipping each compatible sequence in the formula above yields $\ykey_a(x_1, \ldots , x_n)$.
\end{proof}
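Reusing the earlier sketches (knuth_class, compatible, and flip), Theorem~\ref{thm:ykeycompatible} can be verified directly for $a=230$ and $n=3$; the loop below reproduces the computation carried out by hand in Example~\ref{ex:ykeyrevcompatible}.
\begin{verbatim}
from collections import Counter

# col(key(230)) = 21212; c ranges over flips of its Knuth class,
# so that f(c) is Knuth-equivalent to col(key(a))
weights = Counter()
for y in knuth_class((2, 1, 2, 1, 2)):
    c = flip(y, 3)
    for w in compatible(tuple(c), 3):
        fw = flip(w, 3)
        weights[tuple(fw.count(i) for i in (1, 2, 3))] += 1
print(sorted(weights.items()))
# nine weights, each with coefficient 1: the monomials of ykey_{230}
\end{verbatim}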
\begin{ex}\label{ex:ykeyrevcompatible}
Let $a=230$. Then $\mathrm{key}(a) = \tableau{ 2 & 2 \\ 1 & 1 & 2 }$; its column word is $21212$. The set of words Knuth-equivalent to $21212$ is $\{22121, 22211, 21221, 21212, 22112\}$.
We compute the set of compatible sequences for the flips of each of these words.
\begin{figure}[h]
\begin{tabular}{l | l | l | l}
{\bf Word} & {\bf flip} & {\bf Compatible sequences} & {\bf Flips of Compatible sequences} \\\hline
22121 & 22323 & 11223 & 33221 \\\hline
22211 & 22233 & 11122 \, 11123 \, 11133 & 33322 \, 33321 \, 33311 \\
& & 11233 \, 12233 \, 22233 & 33211 \, 32211 \, 22211 \\\hline
21221 & 23223 & 12223 & 32221 \\\hline
21212 & 23232 & & \\\hline
22112 & 22332 & 11222 & 33222 \\
\end{tabular}
\caption{Compatible sequences and their flips \label{fig:revcompseq}}
\end{figure}
\end{ex}
The corresponding monomials indeed sum up to $\ykey_{230}$; compare this example to Example~\ref{ex:keycompatible} computing $\key_{032}$ in terms of compatible sequences.
\subsection{Divided differences and Demazure crystals}
Young key polynomials may also be described in terms of divided difference operators. Given a weak composition $a$, let ${\rm revsort}(a)$ be the rearrangement of $a$ into increasing order. Let $\hat{w}_a$ be the permutation of shortest length rearranging $a$ to ${\rm revsort}(a)$. For $1 \le i < n$ define an operator
\[\hat{\pi}_i = -\partial_i x_{i+1},\]
and for a permutation $w$, define $\hat{\pi}_w = \hat{\pi}_{i_1}\cdots \hat{\pi}_{i_r}$, where $s_{i_1}\cdots s_{i_r}$ is any reduced word for $w$.
\begin{lemma}\label{lem:reversediff}
Let $f$ be a polynomial in $\mathbb{Z}[x_1,\ldots , x_n]$. We have
\[I({\pi}_i f) = \hat{\pi}_{n-i} I(f)\]
where $I(f)$ is the polynomial obtained by exchanging variables $x_j\leftrightarrow x_{n+1-j}$.
\end{lemma}
\begin{proof}
By linearity, it suffices to show this is true for a monomial $f=x^b$, where $b$ is a weak composition of length $n$. We compute
\begin{align*}
I(\pi_ix^b) & = I\left(\frac{x_1^{b_1} \cdots x_i^{b_i+1} x_{i+1}^{b_{i+1}} \cdots x_n^{b_n} - x_1^{b_1} \cdots x_i^{b_{i+1}} x_{i+1}^{b_{i}+1} \cdots x_n^{b_n}}{x_i-x_{i+1}}\right) \\
& = \frac{x_n^{b_1} \cdots x_{n+1-i}^{b_i+1} x_{n-i}^{b_{i+1}} \cdots x_1^{b_n} - x_n^{b_1} \cdots x_{n+1-i}^{b_{i+1}} x_{n-i}^{b_{i}+1} \cdots x_1^{b_n}}{x_{n+1-i}-x_{n-i}}
\end{align*}
and
\begin{align*}
\hat{\pi}_{n-i} I (x^b) & = \hat{\pi}_{n-i} (x_1^{b_n} \cdots x_{n-i}^{b_{i+1}} x_{n+1-i}^{b_i} \cdots x_n^{b_1}) \\
& = \frac{x_1^{b_n} \cdots x_{n-i}^{b_{i+1}} x_{n+1-i}^{b_i+1} \cdots x_n^{b_1} -x_1^{b_n} \cdots x_{n-i}^{b_i+1} x_{n+1-i}^{b_{i+1}} \cdots x_n^{b_1}}{x_{n+1-i}-x_{n-i}}
\end{align*}
as required.
\end{proof}
\begin{lemma}
$\hat{\pi}_w$ is well-defined.
\end{lemma}
\begin{proof}
Since the $\pi_i$'s satisfy the commutation and braid relations of $S_n$, it follows from Lemma~\ref{lem:reversediff} that the $\hat{\pi}_i$'s do as well.
\end{proof}
\begin{theorem}\label{thm:Youngkeydivideddifference}
Let $a$ be a weak composition of length $n$. Then
$\ykey_a = \hat{\pi}_{\hat{w}_a}x^{{\rm revsort}(a)}.$
\end{theorem}
\begin{proof}
First observe that if $w_a=s_{i_1}\cdots s_{i_k}$ is the minimal length permutation sending $a$ to ${\rm sort}(a)$, then $s_{n-i_1}\cdots s_{n-i_k}$ is the minimal length permutation sending ${\rm rev}(a)$ to ${\rm revsort}({\rm rev}(a))$, i.e., is $\hat{w}_{{\rm rev}(a)}$.
Therefore, by Lemma~\ref{lem:reversediff} and the fact that $I(x^{{\rm sort}(a)}) = x^{{\rm revsort}(a)}=x^{{\rm revsort}({\rm rev}(a))}$, we have
\[\hat{\pi}_{\hat{w}_{{\rm rev}(a)}}x^{{\rm revsort}({\rm rev}(a))} = \hat{\pi}_{{\hat{w}_{{\rm rev}(a)}}}I(x^{{\rm sort}(a)}) = I(\pi_{w_a}(x^{{\rm sort}(a)})) = I(\key_a) = \ykey_{{\rm rev}(a)}.\qedhere \]
\end{proof}
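In the sympy sketch from earlier, $\hat{\pi}_i$ is a one-line variation on $\pi_i$, so Theorem~\ref{thm:Youngkeydivideddifference} can be tested directly; the computation below reproduces the example that follows.
\begin{verbatim}
def pihat(i, f):
    """Young Demazure operator: pihat_i(f) = -partial_i(x_{i+1} * f)."""
    return sp.cancel(-partial(i, sp.expand(x[i] * f)))

# ykey_{230} = pihat_2 pihat_1 (x^{revsort(230)})
print(sp.expand(pihat(2, pihat(1, x[1] ** 2 * x[2] ** 3))))
\end{verbatim}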
\begin{ex}
Let $a=230$. Then the minimal length permutation taking $a$ to ${\rm revsort}(a) = 023$ is $s_2 s_1$. We compute
\begin{align*}
\hat{\pi}_2\hat{\pi}_1 (x_2^2x_3^3) & = \hat{\pi}_2\frac{x_2^3x_3^3 - x_1^3x_3^3}{x_2-x_1} \\
& = \hat{\pi}_2(x_2^2x_3^3+x_1x_2x_3^3 + x_1^2x_3^3) \\
& = \frac{(x_2^2 x_3^4 - x_2^4x_3^2) + (x_1 x_2 x_3^4 - x_1 x_2^4 x_3) + (x_1^2 x_3^4 - x_1^2 x_2^4)}{x_3-x_2} \\
& = x_2^2 x_3^3 + x_2^3 x_3^2 + x_1 x_2 x_3^3 + x_1 x_2^2 x_3^2 + x_1 x_2^3 x_3 + x_1^2 x_3^3 + x_1^2 x_2 x_3^2 + x_1^2 x_2^2 x_3 + x_1^2 x_2^3 \\
& = \ykey_{230}.
\end{align*}
\end{ex}
Recall the Demazure crystal structure for key polynomials described in Section~\ref{sec:keycrystal}. The Young key polynomials may be realized as characters of crystals that are obtained via Demazure truncations beginning from the \emph{lowest} weight of the crystal $B(\lambda)$ rather than the highest. For a subset $X$ of $B(\lambda)$, define $\hat{\mathfrak{D}}_iX = \{ b \in B(\lambda) | f_i^r(b) \in X \textrm{ for some } r \ge 0 \}.$
\begin{theorem}
Let $a$ be a weak composition of length $n$ and let $w$ be of shortest length such that $w(a) = {\rm revsort}(a)$. Then the Young key polynomial $\ykey_a$ is the character of the subcrystal of $B({\rm sort}(a))$ obtained by
\[\hat{\mathfrak{D}}_{i_1}\cdots \hat{\mathfrak{D}}_{i_k}\{\hat{u}_\lambda\},\]
where $s_{i_1}\cdots s_{i_k}$ is a reduced word for $w$, $\lambda = {\rm sort}(a)$, and $\hat{u}_\lambda$ is the lowest weight element of $B(\lambda)$.
\end{theorem}
\begin{proof}
Recall that the shortest permutation sending ${\rm rev}(a)$ to ${\rm sort}({\rm rev}(a))$ is $s_{n-i_1}\cdots s_{n-i_k}$. Performing the \emph{Lusztig involution} \cite{Lus10} $\star$ on $B(\lambda)$ exchanges each $f_i$ with $e_{n-i}$ and $e_i$ with $f_{n-i}$, and reverses the weight of each vertex. Hence, applying a Demazure truncation with $s_{n-i_1}\cdots s_{n-i_k}$ from the highest weight of $B(\lambda)^\star$ yields $\key_{{\rm rev}(a)}$ with variables reversed, which is equal to $\ykey_a$ by (\ref{eqn:keyyoungkey}). The statement follows.
\end{proof}
Observe that the repeated actions of the $\hat{\mathfrak{D}}_i$ starting with $\hat{u}_\lambda$ precisely mirror the repeated action of the divided difference operators $\hat{\pi}_i$ starting with the monomial $x^{0^{n-\ell(\lambda)}\times {\rm rev}(\lambda)}$.
\begin{ex}\label{ex:youngcrystalkey}
Let $a=201$, and recall $B(21)$ from Figure~\ref{fig:Youngcrystalkey}. The shortest-length $w$ such that $w(a)={\rm revsort}(a)$ is $w=s_1s_2$. Therefore, the crystal graph for $\ykey_{201}$ is the subgraph of $B(21)$ consisting of all vertices that can be obtained from the lowest weight $\tableau{3 \\ 2 & 3}$ by first applying a sequence of $e_2$'s and then a sequence of $e_1$'s. Hence $\ykey_{201} = x_2 x_3^2 + x_2^2 x_3 + x_1 x_3^2 + x_1 x_2 x_3 + x_1^2 x_3$.
\end{ex}
\subsection{Young key polynomials as generators for left keys}{\label{sec:leftkeys}}
Recall Theorem~\ref{thm:rightkey} states that a key polynomial can be described as the generating function for the set of all $\mathrm{SSYT}$ with bounded right key. In this section we provide an analogous description of Young key polynomials as well as the corresponding description of Young Demazure atoms.
Given a semistandard Young tableau $T$, let $\mathrm{frev}(T)$ denote the filling obtained by flipping all entries in $T$ and reversing the order of the resulting column entries. Compare this to the definition of $\mathrm{frev}$ applied to a word at the beginning of Section~\ref{sec:compatibleseqs}. It is a straightforward observation that when $T$ is a key, $\mathrm{frev}(T)$ is the key whose entries in each column are the flip-reverses of the entries in the corresponding column of $T$. (However, if $T$ is not a key then $\mathrm{frev}(T)$ is not necessarily even a semistandard Young tableau.)
We need to establish a weight-reversing bijection between semistandard Young tableaux with a given right key $U$ and semistandard Young tableaux with left key $\mathrm{frev}(U)$. This is done in the following lemma, which can also be understood in terms of the \emph{evacuation} operation on semistandard Young tableaux. Recall that a word $w$ is Knuth equivalent to a semistandard Young tableau $T$ if and only if Schensted insertion of the word $w$ produces the tableau $T$.
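For completeness, here is a short sketch of Schensted row insertion (rows are stored bottom row first, matching the convention that entries increase up columns in our diagrams); it allows the claims in Example~\ref{ex:frev} below to be checked directly.
\begin{verbatim}
def schensted(word):
    """Schensted row insertion; rows[0] is the bottom row."""
    rows = []
    for a in word:
        for row in rows:
            # leftmost entry strictly greater than a, if any
            j = next((k for k, b in enumerate(row) if b > a), None)
            if j is None:
                row.append(a)        # a rests at the end of this row
                break
            row[j], a = a, row[j]    # bump; carry the old entry up
        else:
            rows.append([a])         # start a new row on top
    return rows

print(schensted([3, 4, 1, 4, 2]))    # [[1, 2, 4], [3, 4]]
\end{verbatim}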
\begin{lemma}\label{lem:rightkeyleftkey}
Let $T$ be a semistandard Young tableau. Then the left key of the tableau obtained via Schensted insertion of $\mathrm{frev}(\col(T))$ is $\mathrm{frev}(K_+(T))$.
\end{lemma}
\begin{proof}
Let $T$ have shape $\lambda$ and let $U$ be the semistandard Young tableau obtained by Schensted insertion of $\mathrm{frev}(\col(T))$. Fix a column index $j$, and consider any word $w'$ that is Knuth equivalent to $\col(T)$, has column form a rearrangement of $\lambda'$, and whose rightmost maximal decreasing subsequence has length $\lambda_j'$. Then the entries in column $j$ of $K_+(T)$ are the entries of the rightmost maximal decreasing subsequence of $w'$. Now, the column form of $\mathrm{frev}(w')$ is the reversal of the column form of $w'$ (thus also a rearrangement of $\lambda'$), and Lemma~\ref{lem:knuthfrev} implies that $\mathrm{frev}(\col(T))$ is Knuth equivalent to $\mathrm{frev}(w')$. Therefore the leftmost maximal decreasing subsequence of $\mathrm{frev}(w')$ is the flip-reverse of the rightmost maximal decreasing subsequence of $w'$, and hence the entries in the $j$th column of the left key of $U$ are precisely the flip-reverses of the entries in the $j$th column of the right key of $T$.
\end{proof}
\begin{theorem}{\label{thm:leftkeygen}}
The Young Demazure atoms and Young key polynomials are generated by the left keys of semistandard Young tableaux as follows:
$$\ya_a= \sum_{\substack{T \in \mathrm{SSYT}_n( \lambda (a)) \\ K_-(T) = \mathrm{key}(a)}} x^{{\rm wt}(T)} \qquad \mbox{ and }\qquad \ykey_{a} = \sum_{\substack{T \in \mathrm{SSYT}_n( \lambda (a)) \\ K_-(T) \ge \mathrm{key}(a)}} x^{{\rm wt}(T)},$$
where $\ge$ means entrywise comparison and $n=\ell(a)$.
\end{theorem}
\begin{proof}
Consider the first expansion. Recall that $\ya_a(x_1,\ldots , x_n) = \atom_{{\rm rev}(a)}(x_n,\ldots , x_1)$ and that (by Equation~\ref{atomrtkey}) $\atom_{{\rm rev}(a)}$ is generated by the set of all semistandard Young tableaux whose right key equals $\mathrm{key}({\rm rev}(a))$. It is therefore enough to exhibit a weight-reversing bijection between the set of all semistandard Young tableaux whose right key equals $\mathrm{key}({\rm rev}(a))$ and the set of all semistandard Young tableaux whose left key is $\mathrm{key}(a)$.
We know from Lemma~\ref{lem:rightkeyleftkey} that if $T$ is a semistandard Young tableau such that $K_+(T)=\mathrm{key}({\rm rev}(a))$, then the semistandard Young tableau $S$ obtained via Schensted insertion of $\mathrm{frev}(\col(T))$ has $K_-(S) = \mathrm{frev}(K_+(T)) = \mathrm{frev}(\mathrm{key}({\rm rev}(a))) = \mathrm{key}(a)$. This process is clearly invertible, hence bijective, and the application of $\mathrm{frev}$ to $\col(T)$ ensures it is weight-reversing.
For the second expansion, we recall that $\ykey_a(x_1,\ldots , x_n) = \key_{{\rm rev}(a)}(x_n, \ldots , x_1)$ and that by Theorem~\ref{thm:rightkey} $\key_{{\rm rev}(a)}$ is generated by the set of all semistandard Young tableaux whose right key is less than or equal to $\mathrm{key}({\rm rev}(a))$. It is straightforward to check that if $K_+(T)\le \mathrm{key}({\rm rev}(a))$, then the semistandard Young tableau $S$ obtained via Schensted insertion of $\mathrm{frev}(\col(T))$ has $K_-(S) \ge \mathrm{frev}(K_+(T)) = \mathrm{key}(a)$. The second expansion then follows by applying the same argument used to prove the first expansion.
\end{proof}
\begin{ex}{\label{ex:frev}}
Let $T = \tableau{ 3 & 4 \\1 & 1 & 2 }$, which has right key $K_+(T) = \tableau{ 4 & 4 \\ 2 & 2 & 2 }$. We have $\col(T) = 3 1 4 1 2$. Schensted insertion of $\mathrm{frev}(\col(T))=3 4 1 4 2$ produces the semistandard Young tableau $\tableau{3 & 4 \\ 1 & 2 & 4}$ which indeed has left key $\tableau{3 & 3 \\ 1 & 1 & 3} = \mathrm{frev}(K_+(T))$.
\end{ex}
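The computations in Example~\ref{ex:frev} are easy to mechanize. The following minimal Python sketch (ours, for experimentation only; the function names are not from the literature) implements $\mathrm{frev}$ on words together with Schensted row insertion, storing the rows of a tableau bottom-to-top, and reproduces the example with $n=4$.
\begin{verbatim}
def frev(word, n):
    # flip each letter b to n+1-b, then reverse the word
    return [n + 1 - b for b in reversed(word)]

def insert(P, x):
    # Schensted row insertion of x into P
    # (P is a list of weakly increasing rows, bottom row first)
    P = [row[:] for row in P]
    for row in P:
        bigger = [j for j, y in enumerate(row) if y > x]
        if not bigger:              # x fits at the end of this row
            row.append(x)
            return P
        j = bigger[0]               # bump the leftmost entry greater than x
        row[j], x = x, row[j]
    P.append([x])                   # bumped entry starts a new row on top
    return P

P = []
for x in frev([3, 1, 4, 1, 2], 4):  # frev(col(T)) = 3 4 1 4 2
    P = insert(P, x)
print(P)                            # [[1, 2, 4], [3, 4]], as in the example
\end{verbatim}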
\subsection{Row-frank words}
Our next aim is to realize Young key polynomials as traces on modules. For this, we first adapt a formula of \cite{LasSch90} expressing key polynomials in terms of \emph{row-frank} words. The first condition below is equivalent to the condition of being row-frank; see~\cite{RS95} for details. The \emph{standardization} of a semistandard Young tableau $T$, denoted $\std(T)$, is the standard Young tableau obtained by replacing the $1$'s in $T$ from left to right by $1,2, \hdots , \gamma_1$, the $2$'s by $\gamma_1 +1 , \gamma_1 +2, \hdots , \gamma_1 + \gamma_2$, and so on, where $\gamma_i$ equals the number of times the entry $i$ appears in $T$. Given a word $u$ in positive integers, its \emph{row-word factorization} is $\cdots u^{(2)} u^{(1)}$, where each \emph{row-word} $u^{(i)}$ is a weakly increasing subsequence of maximal length.
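Both operations just defined are algorithmic, and the following small Python sketch (ours) may help fix the conventions: standardization relabels equal letters from left to right, and a word factors into row-words by splitting at its descents, the factors then being indexed from the right (with empty row-words interleaved as dictated by the prescribed block sizes below).
\begin{verbatim}
def standardize(word):
    # relabel letters by 1, ..., len(word): smaller letters first,
    # and equal letters from left to right
    order = sorted(range(len(word)), key=lambda i: (word[i], i))
    std = [0] * len(word)
    for rank, i in enumerate(order, start=1):
        std[i] = rank
    return std

def row_words(word):
    # maximal weakly increasing factors, listed left to right
    factors = [[word[0]]]
    for prev, cur in zip(word, word[1:]):
        if cur >= prev:
            factors[-1].append(cur)
        else:
            factors.append([cur])
    return factors

print(standardize([1, 2, 1, 3]))    # [1, 3, 2, 4]
print(row_words([3, 3, 2, 2, 2]))   # [[3, 3], [2, 2, 2]]
\end{verbatim}
(For a semistandard Young tableau, $\std(T)$ is obtained by applying the same relabelling to the entries of $T$, reading equal entries from left to right.)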
For a weak composition $a$, let $\w(a)$ be the set of all words $u = \cdots u^{(2)} u^{(1)}$ with each $u^{(i)}$ having $a_i$ letters, satisfying the following conditions.
\begin{enumerate}
\item The word $u$ maps to a pair $(P,\std(\mathrm{key}(a)))$ under the \emph{column insertion} described in \cite{RS95}.
\item No letter of $u^{(i)}$ exceeds $i$.
\end{enumerate}
\begin{theorem}{\label{thm:frankkey}}\cite{LasSch90}
The key polynomials are generated using words in $\w(a)$ as follows:
$$\key_a = \sum_{u \in \w(a)} x_u.$$
\end{theorem}
We now provide the analogue of this generating function for Young key polynomials.
For a weak composition $a$, let $\yw(a)$ be the set of all words $u = \cdots u^{(2)} u^{(1)}$ with each $u^{(i)}$ having $a_i$ letters, satisfying the following conditions.
\begin{enumerate}
\item The word $\mathrm{frev}(u)$ maps to a pair $(P,\std(\mathrm{key}({\rm rev}(a))))$ under column insertion.
\item For each letter $j$ of $u^{(i)}$, we have $i \le j \le \ell(a)$.
\end{enumerate}
\begin{ex}\label{ex:WandYW}
We have
\[\w(032)=\{33| 222 |, 33| 122 |, 33| 112 |, 33| 111|, 23| 111|, 23| 112|, 23| 122|, 22|111|, 22|112| \}\]
and
\[\yw(230)=\{| 222| 11, | 223| 11, | 233| 11, | 333| 11, | 333| 12, | 233 | 12, | 223 | 12, |333|22, |233|22 \},\]
where the vertical bars denote the row-word factorization (including empty row-words).
\end{ex}
\begin{theorem}{\label{thm:frankyoungkey}}
The Young key polynomials are generated using the words in $\yw(a)$ as follows:
$$\ykey_a = \sum_{w \in \yw(a)} x_w.$$
\end{theorem}
\begin{proof}
Consider a word $u$ in $\w({\rm rev}(a))$ and let $w=\mathrm{frev}(u)$. Then $w$ satisfies condition (1) for $\yw(a)$ by construction. Consider a letter $b$ in $u^{(i)}$. By definition, $b \le i$. The flip $n-b+1$ of $b$ appears in the $(n-i+1)^{th}$ row-word of $w$, and $b \le i$ implies $n-b+1 \ge n-i+1$. So $w$ satisfies both conditions for membership in $\yw(a)$. Since flipping and reversing is an invertible process, the words in $\yw(a)$ are exactly the flip-reverses of the words in $\w({\rm rev}(a))$. Then, since the monomials appearing in $\ykey_a(x_1,\ldots , x_n)$ are the flips of those appearing in $\key_{{\rm rev}(a)}(x_n,\ldots , x_1)$, it follows from Theorem~\ref{thm:frankkey} that $\yw(a)$ generates $\ykey_a$.
\end{proof}
\subsection{Young key polynomials as traces on modules}
In \cite{RS95}, \emph{generalized flagged Schur modules} and \emph{key modules} are defined. The key polynomials are realized as traces on key modules, which are a special case of generalized flagged Schur modules. In this section we modify the Reiner-Shimozono approach to construct modules so that the Young key polynomials are realized as traces on these modules.
As in~\cite{RS95}, a \emph{diagram} $D$ is a finite subset of the Cartesian product $\mathbb{P} \times \mathbb{P}$ of the positive integers with itself, where every element of $\mathbb{P} \times \mathbb{P}$ in $D$ is thought of as being a box. A \emph{filling of shape $D$} is a map $T: D \rightarrow \mathbb{P}$ assigning a positive integer to each box in $D$ (note this is called a \emph{tableau of shape $D$} in \cite{RS95}).
Let $\mathbb{F}$ be a field of characteristic $0$, and let $\mathcal{T}^{n}_D$ be the vector space over $\mathbb{F}$ with basis the set of all fillings $T$ of shape $D$ whose largest entry does not exceed $n$. Fix an order $\mathfrak{b}_1, \mathfrak{b}_2, \ldots$ on the boxes of $D$, and identify the filling $T$ with the tensor product $\epsilon_{T(\mathfrak{b}_1)}\otimes \epsilon_{T(\mathfrak{b}_2)}\otimes \cdots$, where $\epsilon_i$ is the $i$th standard basis vector. Then an action of $GL_n(\mathbb{F})$ on $\mathcal{T}^{n}_D$ is defined by letting $GL_n(\mathbb{F})$ act on each $\epsilon_i$ as usual and extending this action linearly.
The \emph{row group} $R(D)$ (respectively \emph{column group} $C(D)$) is the set of all permutations of the boxes of $D$ which fix the rows (resp. columns) in which the boxes appear. These groups act on $\mathcal{T}^{n}_D$ by permuting the positions of the entries within a filling. As in~\cite{RS95}, define
\[e_T = \sum_{\alpha \in R(D), \,\, \beta \in C(D)} {\rm sgn}(\beta) T\alpha \beta,\]
where $T\alpha \beta$ is the filling obtained by acting first by $\alpha$ and then by $\beta$.
Define the \emph{Young generalized flagged Schur module $\yflagschur^{n}_{D}$} for an arbitrary diagram $D$ (with $n$ at least the maximum row index of $D$) to be the subspace of $\mathcal{T}^{n}_D$ spanned by the set $\{ e_T \}$ as $T$ runs over all fillings of shape $D$ whose entries in row $i$ are not smaller than $i$. It is straightforward that $\yflagschur^{n}_{D}$ is a $B$-module, where $B$ is the Borel subgroup of $GL_n(\mathbb{F})$ consisting of lower-triangular matrices.
\begin{remark}\label{rmk:youngvsreverse}
The construction of the \emph{generalized flagged Schur module $\flagschur_{D}$} described in~\cite{RS95} is similar, but serves to illustrate an important difference in the behaviors of Young and reverse families of polynomials. In \cite{RS95} $\mathcal{T}_D$ is defined to be the vector space with basis consisting of all fillings of shape $D$, with no restriction on the size of the entries. In this way, $\mathcal{T}_D$ is a $GL_\infty(\mathbb{F})$-module. Then $\flagschur_{D}$ is spanned by the set $\{ e_T \}$ as $T$ runs over all fillings of shape $D$ whose entries in row $i$ are not \emph{larger} than $i$, which is finite even though $\mathcal{T}_D$ is infinite-dimensional. In this way, $\flagschur_{D}$ is a module for the opposite Borel subgroup $B_-$ consisting of upper-triangular elements of $GL_\infty(\mathbb{F})$. The dependence on $n$ in the Young case is reflected in the fact that appending zeros to a weak composition does not change the corresponding key polynomial, but does change the Young key polynomial.
\end{remark}
\begin{ex}
Let $a=032$. Then if $T \mbox{ $=$ } \vline \tableau{ 2 & 3 \\ 1 & 2 & 2 \\ \\ \hline}$, applying elements of the row group to $T$ yields the following:
\begin{displaymath}
\begin{array}{c@{\hskip1.5\cellsize}c@{\hskip1\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c@{\hskip1.5\cellsize}c}
2 \,\,\vline \tableau{ 2 & 3 \\ 1 & 2 & 2 \\ \\ \hline} & 2 \,\,\vline \tableau{ 2 & 3 \\ 2 & 1 & 2 \\ \\ \hline } & 2 \,\,\vline \tableau{ 2 & 3 \\ 2 & 2 & 1 \\ \\ \hline } & 2 \,\,\vline \tableau{ 3 & 2 \\ 1 & 2 & 2 \\ \\ \hline } & 2 \,\, \vline \tableau{ 3 & 2 \\ 2 & 1 & 2 \\ \\ \hline } & 2 \,\, \vline \tableau{ 3 & 2 \\ 2 & 2 & 1 \\ \\ \hline }
\end{array}
\end{displaymath}
where the coefficients are $2$ because there are two distinct permutations yielding each ordering of $1,2,2$. It is easy to see that for any filling $S$ with repeated entries in any column, we have $\sum_{ \beta \in C(D)} {\rm sgn}(\beta) S \beta = 0$, hence only the first and fifth fillings above contribute to $e_T$. Applying the column group to each of these and summing the resulting fillings yields
\[
e_T = 2 \,\, \vline \tableau{ 2 & 3 \\ 1 & 2 & 2 \\ \\ \hline} - 2 \,\, \vline \tableau{ 1 & 3 \\ 2 & 2 & 2 \\ \\ \hline } - 2 \,\, \vline \tableau{ 2 & 2 \\ 1 & 3 & 2 \\ \\ \hline } + 2 \,\, \vline \tableau{ 1 & 2 \\ 2 & 3 & 2 \\ \\ \hline } + 2 \,\, \vline \tableau{ 3 & 2 \\ 2 & 1 & 2 \\ \\ \hline } - 2 \,\,\vline \tableau{ 2 & 2 \\ 3 & 1 & 2 \\ \\ \hline } - 2 \,\, \vline \tableau{ 3 & 1 \\ 2 & 2 & 2 \\ \\ \hline } + 2 \,\, \vline \tableau{ 2 & 1 \\ 3 & 2 & 2 \\ \\ \hline }
\]
\end{ex}
Define the \emph{key module} $\keymod_a$ for the weak composition $a$ to be the $B_-$-module $\flagschur_{D(a)}$.
\begin{theorem}{\cite{RS95}}{\label{thm:keymod}}
For $u = \cdots u^{(2)} u^{(1)}$ in $\w(a),$ let $T(u)$ be the filling of shape $D(a)$ obtained by placing $u^{(j)}$ in row $j$. Then $\{ e_{T(u)} : u \in \w(a)\}$ is a basis for the key module $\keymod_a$.
\end{theorem}
We now describe the variation on the Reiner-Shimozono construction that is needed to describe the Young key polynomials as characters.
Let $a$ be a weak composition of length $n$, and define the \emph{Young key module} $\ykeymod_a$ for the weak composition $a$ to be the $B$-module $\yflagschur_{D(a)}$. Here we may drop $n$ from the notation, since $n$ is determined by the weak composition $a$.
\begin{cor}
For $u = \cdots u^{(2)} u^{(1)}$ in $\yw(a)$, let $T(u)$ be the filling of shape $D(a)$ obtained by placing $u^{(j)}$ in row $j$. Then $\{ e_{T(u)} : u \in \yw(a)\}$ is a basis for the Young key module $\ykeymod_a$.
\end{cor}
\begin{proof}
The flip-and-reverse map on fillings extends linearly to an involution $\psi$, hence an isomorphism, on $\mathcal{T}^{n}_{D(a)}$. Moreover, $\psi$ sends a filling whose entries are at least their row index to a filling whose entries are at most their row index, and vice versa. In particular, by the proof of Theorem~\ref{thm:frankyoungkey}, $\psi$ carries $\{ e_{T(u)} : u \in \yw(a)\}$ to the basis $\{ e_{T(\mathrm{frev}(u))} : \mathrm{frev}(u) \in \w({\rm rev}(a))\}$ of $\keymod_{{\rm rev}(a)}$ given by Theorem~\ref{thm:keymod}.
Therefore, $\{ e_{T(u)} : u \in \yw(a)\}$ is a linearly independent set, since any linear dependence in this set would imply, via $\psi$, a linear dependence in the linearly independent set $\{ e_{T(\mathrm{frev}(u))} : \mathrm{frev}(u) \in \w({\rm rev}(a))\}$. Similarly $\{ e_{T(u)} : u \in \yw(a)\}$ is spanning: suppose $e_T\in \ykeymod_a$. Then $\psi(e_T)\in \keymod_{{\rm rev}(a)}$, hence is in the span of the spanning set $\{ e_{T(\mathrm{frev}(u))} : \mathrm{frev}(u) \in \w({\rm rev}(a))\}$ of $\keymod_{{\rm rev}(a)}$, and applying $\psi$ again yields $e_T$ as a linear combination of $\{ e_{T(u)} : u \in \yw(a)\}$.
\end{proof}
\begin{remark}
The order in which entries of $u^{(j)}$ are placed in row $j$ does not matter, since fillings with any given ordering of $u^{(j)}$ in each row $j$ appear in $e_T$ due to the action of the row group. In Example~\ref{ex:ykeymod}, we represent $e_T$ by the filling $T$ with entries increasing from left to right in each row, which agrees with the choices of representatives for key modules in \cite{RS95}.
\end{remark}
Let $x$ be the diagonal matrix whose diagonal entries are $x_1, x_2, \ldots , x_n$. We immediately obtain the following (compare to Corollary 14 in~\cite{RS95}).
\begin{cor}
The Young key polynomial $\ykey_a$ is the trace of $x$ acting on the Young key module $\ykeymod_a$.
\end{cor}
\begin{ex}\label{ex:ykeymod}
The Young key module $\ykeymod_{230}$ has basis $\{ e_T \}$ for the following fillings $T$.
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\vline \tableau{ \\ 2 & 2 & 2 \\ 1 & 1 \\ \hline} & \vline \tableau{ \\ 2 & 2 & 3 \\ 1 & 1 \\ \hline} &\vline \tableau{ \\ 2 & 3 & 3 \\ 1 & 1 \\ \hline} &\vline \tableau{ \\ 3 & 3 & 3 \\ 1 & 1 \\ \hline} &\vline \tableau{ \\ 2 & 3 & 3 \\ 1 & 2 \\ \hline} &
\vline \tableau{ \\ 2 & 2 & 3 \\ 1 & 2 \\ \hline} &\vline \tableau{ \\ 3 & 3 & 3 \\ 2 & 2 \\ \hline} &\vline \tableau{ \\ 2 & 3 & 3 \\ 2 & 2 \\ \hline}.
\end{array}
\end{displaymath}
\end{ex}
\section{Other polynomial families and intersections}{\label{Sec:others}}
In this section, we provide a new formula in terms of Knuth equivalence for the \emph{fundamental slide} expansion of a key polynomial, and interpret compatible sequences in terms of the \emph{fundamental particle} basis, introduced in \cite{Sea20} as a common refinement of the fundamental slide and Demazure atom bases. As we did for Young key polynomials and Young atoms, we also determine the intersections of further reverse bases and their Young analogues.
\subsection{The fundamental and monomial slide bases}
For a weak composition $a$, define the \emph{fundamental fillings} $\mathrm{FF}(a)$ for $a$ \cite{Sea20} to be the (reverse) fillings of $D(a)$ satisfying the following conditions.
\begin{enumerate}
\item Entries weakly decrease from left to right in each row.
\item No entry in row $i$ is greater than $i$.
\item If a box with label $b$ is in a lower row than a box with label $c$, then $b<c$.
\end{enumerate}
The \emph{fundamental slide polynomial} $\fs_a$ \cite{Assaf.Searles} is the generating function of $\mathrm{FF}(a)$:
\[\fs_a = \sum_{T\in \mathrm{FF}(a)}x^{{\rm wt}(T)}.\]
For example, $\fs_{103} = x^{103}+x^{112}+x^{121}+x^{130}$, computed by $\mathrm{FF}(103)$ below.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c}
\vline \tableau{ 3 & 3 & 3 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 3 & 2 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 2 & 2 \\ \\ 1 \\ \hline } & \vline \tableau{ 2 & 2 & 2 \\ \\ 1 \\ \hline } \end{array}
\end{displaymath}
\end{figure}
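Since $\mathrm{FF}(a)$ is cut out by simple local conditions, it can be enumerated by brute force for small $a$; the following Python sketch (ours, unoptimized, purely for illustration) recovers the expansion of $\fs_{103}$ above.
\begin{verbatim}
from itertools import product
from collections import Counter

def fundamental_fillings(a):
    n = len(a)
    rows = []
    for i, ai in enumerate(a, start=1):
        # row i: weakly decreasing, entries at most i (conditions (1), (2))
        rows.append([c for c in product(range(1, i + 1), repeat=ai)
                     if all(c[j] >= c[j + 1] for j in range(ai - 1))])
    # condition (3): entries in lower rows are strictly smaller
    return [f for f in product(*rows)
            if all(b < c for i in range(n) for b in f[i]
                   for j in range(i + 1, n) for c in f[j])]

def weight(f, n):
    cnt = Counter(x for row in f for x in row)
    return tuple(cnt[i] for i in range(1, n + 1))

print(sorted(weight(f, 3) for f in fundamental_fillings((1, 0, 3))))
# [(1, 0, 3), (1, 1, 2), (1, 2, 1), (1, 3, 0)] -- the exponents in fs_103
\end{verbatim}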
The \emph{monomial slide basis} can also be described using reverse fillings. Given a weak composition $a$, the \emph{monomial fillings} $\mathrm{MF}(a)$ \cite{Sea20} are the subset of $\mathrm{FF}(a)$ for which all entries in the same row are equal. The \emph{monomial slide polynomial} $\msp_a$ \cite{Assaf.Searles} is $$\msp_a = \sum_{T\in \mathrm{MF}(a)} x^{{\rm wt}(T)}.$$
For example, $\msp_{103} = x^{103}+x^{130}$.
Various formulas have been given \cite{Assaf.Searles:2}, \cite{Assaf:comb}, \cite{MPS18} for the fundamental slide expansion of a key polynomial. Here we provide another, more in keeping with the theme of the previous section.
\begin{proposition}\label{prop:keytoslidecompatible}
$$\key_a = \sum_{{\rm rev}(b) \sim \col(\mathrm{key}(a))} \fs_{\mathrm{maxcomp}(b)},$$
where $\mathrm{maxcomp}(b)$ is the weak composition associated to the compatible sequence for $b$ whose entries are maximum possible. (If $b$ has no compatible sequences, we declare $\fs_{\mathrm{maxcomp}(b)}=0$.)
\end{proposition}
\begin{proof}
We need to establish $\displaystyle{\fs_{\mathrm{maxcomp}(b)} = \sum_{\textrm{$w$ is $b$-compatible}} x^w}$; the statement then follows from Theorem~\ref{thm:keycompatible}. The compatible sequence for a word $b$ whose entries are maximum possible is found as follows. First, partition $b$ into (weakly) decreasing runs $b=(r_1 | r_2 | \ldots | r_k)$. Let $b^{(i)}$ denote the rightmost (i.e. smallest) entry of $b$ in the $i^{th}$ run $r_i$. We proceed right-to-left, at each step replacing every entry in a run $r_i$ with a certain number $c_i$. To begin, replace every element in $r_k$ with $b^{(k)}$, i.e., we set $c_k=b^{(k)}$. Proceeding leftwards, replace every entry in $r_i$ with $c_i\coloneqq \min\{b^{(i)}, c_{i+1}-1\}$. This process is a variant of the construction of the \emph{weak descent composition} of a word in \cite{Assaf:comb}, \cite{MasSea20}.
Every compatible sequence $w$ for $b$ can be obtained from the maximal one by decrementing parts as long as we still have $w_i<w_{i+1}$ whenever $b_i<b_{i+1}$. In exactly the same way, every fundamental filling for $\mathrm{maxcomp}(b)$ can be obtained from the filling that has every entry equal to its row index by decrementing entries as long as entries in a given row remain strictly larger than entries in any lower row. This gives a weight-preserving bijection between the compatible sequences for $b$ and the fundamental fillings for $\mathrm{maxcomp}(b)$.
\end{proof}
For example, suppose $b=435254$. Then the partition into weakly decreasing runs gives $43|52|54$. We replace each entry in the last run with $4$, obtaining $43|52|{\bf 44}$. Next, we replace each entry in the next run with $\min\{2,4-1\} = 2$, obtaining $43|{\bf 22}|{\bf 44}$. Finally, we replace each entry in the first run with $\min\{3, 2-1\} = 1$, obtaining ${\bf 11}|{\bf 22}|{\bf 44}$. The largest compatible sequence for $b$ is thus $112244$.
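This run-by-run procedure is a few lines of Python. In the sketch below (ours), the function returns the entrywise-maximal compatible sequence itself; $\mathrm{maxcomp}(b)$ is then the weak composition recording how many letters equal each value.
\begin{verbatim}
def max_compatible(b):
    # split b into maximal weakly decreasing runs
    runs = [[b[0]]]
    for prev, cur in zip(b, b[1:]):
        if cur <= prev:
            runs[-1].append(cur)
        else:
            runs.append([cur])
    # right to left: c_k = b^(k), then c_i = min(b^(i), c_{i+1} - 1)
    c = [0] * len(runs)
    c[-1] = runs[-1][-1]
    for i in range(len(runs) - 2, -1, -1):
        c[i] = min(runs[i][-1], c[i + 1] - 1)
    if c[0] < 1:
        return None   # b has no compatible sequence
    return [ci for ci, run in zip(c, runs) for _ in run]

print(max_compatible([4, 3, 5, 2, 5, 4]))   # [1, 1, 2, 2, 4, 4]
print(max_compatible([2, 3, 2, 2, 3]))      # [1, 2, 2, 2, 3], so maxcomp = 131
\end{verbatim}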
\begin{ex}
From the table in Figure~\ref{fig:compseq}, we compute $\key_{032} = \fs_{221} + \fs_{032} + \fs_{131} + \, 0 + \fs_{230}$.
The only compatible sequence for $b=23223$ is $12223$, so $\mathrm{maxcomp}(23223)=131$.
\end{ex}
This yields a formula for the Young fundamental slide expansion of Young key polynomials, proved similarly to Theorem~\ref{thm:ykeycompatible}.
\begin{proposition}\label{prop:ykeytoyslidecompatible}
$$\ykey_a = \sum_{f(b) \sim \col(\mathrm{key}(a))} \yfs_{{\rm rev}(\mathrm{maxcomp}(b))}.$$
\end{proposition}
\subsection{Quasi-key polynomials and fundamental particles}{\label{sec:SSF}}
For a weak composition $a$, define the \emph{quasi-key fillings} $\mathrm{QF}(a)$ to be the (reverse) fillings of $D(a)$ satisfying the following conditions.
\begin{enumerate}
\item Entries weakly decrease from left to right in each row.
\item No entry in row $i$ is greater than $i$.
\item Entries strictly increase up the first column, and entries in any column are distinct.
\item All type A and type B triples are inversion triples.
\end{enumerate}
The \emph{quasi-key polynomial} is
\[\qk_a = \sum_{T\in \mathrm{QF}(a)}x^{{\rm wt}(T)}.\]
Quasi-key polynomials were first defined in \cite{Assaf.Searles:2} as a lift of the quasisymmetric Schur functions to a basis of $\ensuremath{\mathrm{Poly}}_n$. The above formula is due to \cite{MPS18}.
For example, we have $\qk_{103} = x^{103} +x^{112} +x^{202} +x^{121} +x^{211} +x^{130} +x^{220}$ which is computed by the quasi-key fillings shown in Figure~\ref{fig:QK103} below.
\begin{figure}[ht]
\begin{displaymath}
\begin{array}{c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c@{\hskip2\cellsize}c}
\vline \tableau{ 3 & 3 & 3 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 3 & 2 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 3 & 1 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 2 & 2 \\ \\ 1 \\ \hline } & \vline \tableau{ 3 & 2 & 1 \\ \\ 1 \\ \hline } & \vline \tableau{ 2 & 2 & 2 \\ \\ 1 \\ \hline } & \vline \tableau{ 2 & 2 & 1 \\ \\ 1 \\ \hline } \end{array}
\end{displaymath}
\caption{The 7 quasi-key fillings of shape $103$.}\label{fig:QK103}
\end{figure}
The set of fillings $\mathrm{ASSF}(a)$ generating Demazure atoms is exactly the subset of $\mathrm{QF}(a)$ consisting of those fillings whose entries in the leftmost column are equal to their row index. For example,
$\atom_{103} = x^{103} + x^{112} + x^{202} + x^{121} + x^{211}$,
which is computed by those fillings in Figure~\ref{fig:QK103} whose leftmost column entries are $1$ and $3$.
Finally, define the \emph{particle fillings} $\mathrm{LF}(a)$ to be the subset of $\mathrm{ASSF}(a)$ consisting of those fillings such that whenever $i<j$, all entries in row $i$ are strictly smaller than all entries in row $j$. Then the \emph{fundamental particle $\fp_a$} \cite{Sea20} is defined to be
\[\fp_a = \sum_{T\in \mathrm{LF}(a)}x^{{\rm wt}(T)}.\]
For example, $\fp_{103} = x^{103} + x^{112} + x^{121}$, by the 1st, 2nd, and 4th fillings in Figure~\ref{fig:QK103}.
We give a new formula for $\fp_a$ in terms of compatible sequences. Let $S=\{p_1, p_2, \hdots , p_k \}$ be the set of partial sums of the entries of $a$, with duplicate values (which arise when an entry of $a$ is $0$) removed. Then we say that a compatible sequence $w$ for the word formed by writing $a_i$ instances of $i$ consecutively is \emph{$a$-flag compatible} if for all $p_i \in S$, the letter in position $p_i$ of $w$ is equal to the row index of the $i^{th}$ nonzero entry in $a$.
\begin{theorem}
Let $a$ be a weak composition of length $n$. Then $$\fp_a = \sum_{w \textrm{ is $a$-flag compatible}} x^{\mathrm{comp}(w)}.$$
\end{theorem}
\begin{proof}
The statement follows from the fact that the $a$-flag compatible sequences correspond to $\mathrm{LF}(a)$ via the following bijection. Let $w$ be an $a$-flag compatible sequence and let $\tilde{w}^{(i)}$ be the subword of $w$ in the positions corresponding to the $a_i$ instances of $i$. Construct the $i^{th}$ row of an element of $\mathrm{LF}(a)$ by writing $\tilde{w}^{(i)}$ in weakly decreasing order. Conditions (1), (2), and (3) in the definition of a $\mathrm{QF}$ are satisfied by construction. Condition (4) is satisfied since the entries in a given row are all smaller than all of the entries in any higher row. The flag condition guarantees that these fillings are in $\mathrm{ASSF}(a)$, and further, the fact that the entries in a given row are all smaller than all of the entries in any higher row implies these fillings are in $\mathrm{LF}(a)$. To obtain an $a$-flag compatible sequence from an element of $\mathrm{LF}(a)$, record the entries in each row from right to left (to force them to be weakly increasing), reading rows from bottom to top.
\end{proof}
Figure~\ref{fig:poset} below shows how the bases discussed here expand into one another. An arrow indicates that the basis at the tail expands positively in the basis at the head. This figure is adapted from the corresponding figure in \cite{Sea20}.
\begin{figure}[ht]
\[
\begin{tikzcd}
\key_a \arrow[rr,"{\rm [AS18]}"] & & \qk_a \arrow[rr,"{\rm [AS18]}"] \arrow[d, "{\rm [Sea20]}"] & & \fs_a \arrow[rr,"{\rm [AS17]}"] \arrow[d, "{\rm [Sea20]}"] & & \msp_a \arrow[d] & \\
& & \atom_a \arrow [rr,"{\rm [Sea20]}"] & & \fp_a \arrow[rr] & & x^a &
\end{tikzcd}
\]
\caption{Positive expansions between bases defined by reverse fillings.}\label{fig:poset}
\end{figure}
\subsection{Young bases and intersections}
Young analogues may be defined for all the families described above. Indeed, Young analogues of the fundamental slide polynomials and the quasi-key polynomials were introduced and utilized in~\cite{MasSea20}. In addition to the Young key polynomials and Young Demazure atoms studied in Section 3, Young analogues of the monomial slide polynomials and fundamental particles may be defined similarly, and these families can be shown (by utilizing Lemma~\ref{lem:flipreversediagram} below) to exhibit positive expansions in Figure~\ref{fig:yposet} analogous to those shown in Figure~\ref{fig:poset}.
\begin{figure}[ht]
\[
\begin{tikzcd}
\ykey_a \arrow[rr] & & \yqk_a \arrow[rr] \arrow[d] & & \yfs_a \arrow[rr] \arrow[d] & & \yms_a \arrow[d] & \\
& & \ya_a \arrow [rr] & & \yfp_a \arrow[rr] & & x^a &
\end{tikzcd}
\]
\caption{Positive expansions between bases defined by Young fillings.}\label{fig:yposet}
\end{figure}
\begin{remark}\label{rmk:Youngbases}
All of the families of Young polynomials listed in Figure~\ref{fig:yposet} are bases for $\ensuremath{\mathrm{Poly}}_n$, since their reverse analogues are bases for $\ensuremath{\mathrm{Poly}}_n$ and the flip-and-reverse process is an involution on $\ensuremath{\mathrm{Poly}}_n$ that preserves both cardinality and linear independence, cf. Proposition~\ref{prop:youngkeybasis}.
\end{remark}
\begin{lemma}\label{lem:flipreversediagram}
Let $a$ be a weak composition of length $n$, and let $\mathrm{Fill}_a$ denote the set of all possible fillings of $D(a)$ with entries from $1,\ldots , n$, one entry per box. Define $\theta: \mathrm{Fill}_a \rightarrow \mathrm{Fill}_{{\rm rev}(a)}$ by letting $\theta(T)$ be the filling obtained by moving all boxes in row $i$ to row $n+1-i$ and replacing every entry $j$ with $n+1-j$, for all $1\le i,j \le n$. Then the following statements are true.
\begin{enumerate}
\item The map $\theta$ is an involution.
\item If $T$ has weight $b$ then $\theta(T)$ has weight ${\rm rev}(b)$.
\item The relative order of entries in row $i$ of $T$ is the reverse of the relative order of entries in row $i$ of $\theta(T)$.
\item The relative order of entries in any column of $T$ is the same as the relative order of entries in the same column of $\theta(T)$.
\item A triple of boxes in $T$ is an inversion triple if and only if the image of those boxes is a Young inversion triple in $\theta(T)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first four properties are immediate from the definition of $\theta$. Since the relative order of entries in the boxes of a triple in $T$ is the reverse of the relative order of entries in the images of those boxes in $\theta(T)$, it follows from the definition of inversion triples and Young inversion triples that the image of an inversion triple (of type A, respectively B) in $T$ must be a Young inversion triple (of type I, respectively II) in $\theta(T)$. Likewise, the images of non-inversion triples in $T$ are Young non-inversion triples in $\theta(T)$.
\end{proof}
Given a weak composition $a$ of length $n$, define the \emph{Young fundamental fillings} $\mathrm{YFF}(a)$ of $a$ to be the fillings of $D(a)$ with entries from $1, \ldots , n$ satisfying the following conditions.
\begin{enumerate}
\item Entries weakly increase from left to right in each row.
\item No entry in row $i$ is less than $i$.
\item If a box with label $b$ is in a lower row than a box with label $c$, then $b<c$.
\end{enumerate}
In particular, $\mathrm{YFF}(a)$ is the image of $\mathrm{FF}({\rm rev}(a))$ under $\theta$. The \emph{Young fundamental slide polynomial} $\yfs_a$~\cite{MasSea20} is the generating function of $\mathrm{YFF}(a)$:
\[\yfs_a = \sum_{T\in \mathrm{YFF}(a)}x^{{\rm wt}(T)}.\]
For example, we have $\yfs_{301} = x^{301} + x^{211} +x^{121} +x^{031}$, which is computed by the elements of $\mathrm{YFF}(301)$ shown below.
\begin{displaymath}
\begin{array}{c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c@{\hskip3\cellsize}c}
\vline \tableau{ 3 \\ \\ 1 & 1 & 1 \\ \hline } & \vline \tableau{ 3 \\ \\ 1 & 1 & 2 \\ \hline } & \vline \tableau{ 3 \\ \\ 1 & 2 & 2 \\ \hline } & \vline \tableau{ 3 \\ \\ 2 & 2 & 2 \\ \hline } \end{array}
\end{displaymath}
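As with the reverse fillings earlier, $\mathrm{YFF}(a)$ is amenable to brute-force enumeration for small $a$; the Python sketch below (ours, for illustration) recovers the expansion of $\yfs_{301}$.
\begin{verbatim}
from itertools import product
from collections import Counter

def young_fundamental_fillings(a):
    n = len(a)
    rows = []
    for i, ai in enumerate(a, start=1):
        # row i: weakly increasing, entries between i and n
        rows.append([c for c in product(range(i, n + 1), repeat=ai)
                     if all(c[j] <= c[j + 1] for j in range(ai - 1))])
    # entries in lower rows are strictly smaller than in higher rows
    return [f for f in product(*rows)
            if all(b < c for i in range(n) for b in f[i]
                   for j in range(i + 1, n) for c in f[j])]

for f in young_fundamental_fillings((3, 0, 1)):
    cnt = Counter(x for row in f for x in row)
    print(tuple(cnt[i] for i in range(1, 4)))
# (3, 0, 1), (2, 1, 1), (1, 2, 1), (0, 3, 1) -- the exponents in yfs_301
\end{verbatim}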
For a weak composition $a$ of length $n$, define the \emph{Young monomial fillings} $\mathrm{YMF}(a)$ to be the subset of $\mathrm{YFF}(a)$ for which all entries in any row are equal. Define the \emph{Young monomial slide polynomial} $\yms_a$ to be the generating function of $\mathrm{YMF}(a)$:
\[\yms_a = \sum_{T\in \mathrm{YMF}(a)}x^{{\rm wt}(T)}.\]
For example, we have $\yms_{301} = x^{301} +x^{031}$.
\begin{proposition}
The Young fundamental slide and the Young monomial slide bases of $\mathbb{Z}[x_1,\ldots , x_n]$ contain (respectively) the fundamental quasisymmetric and monomial quasisymmetric bases of quasisymmetric polynomials in $n$ variables. Specifically, if $a$ is a weak composition of length $n$ such that all zero entries are to the right of all nonzero entries, then
\[\yfs_a = F_{{\rm flat}(a)}(x_1,\ldots , x_n) \qquad \mbox{ and } \qquad \yms_a = M_{{\rm flat}(a)}(x_1,\ldots , x_n),\]
where ${\rm flat}(a)$ is the composition obtained by deleting all $0$ parts of $a$.
\end{proposition}
\begin{proof}
This is shown in \cite{MasSea20} for Young fundamental slides. For monomial slides, since all nonzero entries of $a$ occur before all zero entries, the flag condition on $\mathrm{YMF}$ is always satisfied whenever the other conditions are satisfied. Hence the $\mathrm{YMF}$ are exactly the monomial Young composition tableaux (Proposition~\ref{prop:YoungFisF}).
\end{proof}
\begin{theorem}{\label{thm:intersectionsFM}}
The polynomials in $\mathbb{Z}[x_1,\ldots , x_n]$ that are both a fundamental (respectively, monomial) slide polynomial and a Young fundamental (respectively, monomial) slide polynomial are exactly the fundamental (respectively, monomial) quasisymmetric polynomials in $n$ variables.
In other words, $\{\fs_a\}\cap \{\yfs_b\} = \{F_\alpha(x_1, \ldots , x_n)\}$ and $\{\msp_a\}\cap \{\yms_b\} = \{M_\alpha(x_1, \ldots , x_n)\}$.
\end{theorem}
\begin{proof}
We prove this in the fundamental case; the monomial case is completely analogous. First, let $\alpha$ be a composition of length $\ell(\alpha) \le n$. Then
\[F_\alpha(x_1, \ldots , x_n) = \fs_{0^{n-\ell(\alpha)}\times \alpha} = \yfs_{\alpha\times 0^{n-\ell(\alpha)}}.\]
For the other direction, let $\fs_a$ be a fundamental slide polynomial that is not equal to $F_\alpha(x_1,\ldots , x_n)$ for any composition $\alpha$. This implies $a$ has a zero entry to the right of a nonzero entry \cite{Assaf.Searles}. Let $a_j$ be the earliest such zero entry, so $a_{j-1}$ is nonzero. Let $\overline{a}$ denote the weak composition obtained by exchanging the entries $a_{j-1}$ and $a_j$. Then $x^a$ appears in $\fs_a$ but $x^{\overline{a}}$ does not. However, if a Young fundamental slide polynomial contains $x^a$, it must also contain $x^{\overline{a}}$. Hence $\fs_a$ is not equal to any Young fundamental slide polynomial.
\end{proof}
For a weak composition $a$ of length $n$, define the \emph{Young quasi-key fillings} $\mathrm{YQF}(a)$ to be the (Young) fillings of $D(a)$ obtained by applying $\theta$ to $\mathrm{QF}({\rm rev}(a))$. Specifically, these are the fillings such that entries weakly increase along rows, entries are at least their row index, entries strictly increase up the first column and entries in any column are distinct, and all type I and II Young triples are Young inversion triples. These generate the \emph{Young quasi-key polynomial} $\yqk_a$ \cite{MasSea20}. Unsurprisingly, the conditions governing the intersections of quasi-key and Young quasi-key polynomials are similar to those governing the intersections of the quasisymmetric bases that they extend (Theorem~\ref{thm:yqsqs}).
\begin{theorem}\label{thm:intersectionsQKYQK}
The polynomials that are both quasi-key and Young quasi-key polynomials are precisely the $\yqk_a$ such that $a$ consists of a sequence of equal parts followed by zeros, or a sequence of $1$'s and $2$'s followed by zeros, or $a$ has no zero parts and consecutive parts differ by at most $1$.
\end{theorem}
\begin{proof}
For any $a$, the polynomial $\yqk_a$ contains the monomial $x^a$, realised by $T\in \mathrm{YQF}(a)$ whose entries in row $j$ are all $j$. Suppose a quasi-key polynomial $\qk_b$ contains $x^a$, realised by some $S\in \mathrm{QF}(b)$. Suppose $a$ has a zero entry preceding a nonzero entry, e.g., $a_i=0$ but $a_{i+1}$ is nonzero. Create $S'$ by changing the rightmost $i+1$ in $S$ to an $i$. Since we change the rightmost $i+1$, entries of $S'$ still decrease along rows, and since no other $i$'s exist in $S$, entries still strictly increase up the first column of $S'$ and do not repeat in any column of $S'$, and the relative order of the entries in any triple in $S$ remains unchanged. Hence $S'\in \mathrm{QF}(b)$, but there is no element of $\mathrm{YQF}(a)$ that has this weight since all entries of $T$ are already minimal possible. Therefore $\yqk_a\neq \qk_b$ for any $b$.
It follows that for $\yqk_a$ to be equal to $\qk_b$, $a$ must consist of an interval of nonzero entries, followed by zero entries. But then $\yqk_a = \yqs_\alpha(x_1, \ldots , x_n)$, where $\alpha = {\rm flat}(a)$, by \cite{MasSea20}. The quasi-key polynomials that are quasisymmetric are exactly the quasisymmetric Schur polynomials: $\qk_b = \qs_\beta(x_1, \ldots , x_n)$ where $b$ is an interval of zero entries followed by an interval $\beta$ of nonzero entries \cite{Assaf.Searles:2}. Then, by Theorem~\ref{thm:yqsqs}, $\yqs_\alpha(x_1, \ldots , x_n)$ is equal to $\qs_\beta(x_1, \ldots , x_n)$ exactly when $\alpha$ has all parts the same, or all parts of $\alpha$ are $1$ or $2$, or $\ell(\alpha)=n$ (so $a=\alpha$ has no zero parts) and consecutive parts differ by at most $1$.
\end{proof}
Similarly, define the \emph{Young particle fillings} $\mathrm{YLF}(a)$ to be the image of $\mathrm{LF}({\rm rev}(a))$ under $\theta$. These Young fillings, which are the $\mathrm{YASSF}(a)$ such that any entry in a lower row is strictly smaller than any entry in a higher row, generate the \emph{Young fundamental particle} $\yfp_a$.
\begin{theorem}
The polynomials that are both fundamental particles and Young fundamental particles are precisely the $\yfp_a$ such that $a$ has no zero part adjacent to a part of size at least $2$.
\end{theorem}
\begin{proof}
The $\mathrm{LF}$ (respectively, $\mathrm{YLF}$) obey all the conditions on $\mathrm{ASSF}$ (respectively $\mathrm{YASSF}$), hence the same argument used in the proof of Theorem~\ref{thm:atomyatom} shows that if $\yfp_a = \fp_b$ then $a=b$.
If $a_{i+1}=0$ and $a_i\ge 2$ for some $i$, then let $T\in \mathrm{YLF}(a)$ be the filling whose entries in each row $j$ are all $j$. Let also $T'\in \mathrm{YLF}(a)$ be obtained from $T$ by changing the rightmost $i$ to $i+1$. Then there is no $S\in \mathrm{LF}(a)$ with the same weight as $T'$, since the entries in $S$ above row $i$ must agree with those in $T'$ above row $i$, and then there is nowhere the new $i+1$ could be placed in $S$. Hence $\yfp_a\neq \fp_a$. A similar argument shows that if $a_{i+1}\ge 2$ and $a_{i}=0$ then $\fp_a \neq \yfp_a$.
Straightforwardly, $\yfp_a = \fp_a = x^a$ if $a$ has no zero part next to a part of size at least $2$.
\end{proof}
\begin{remark}\label{rmk:noembed}
While the Young and reverse analogues of a given basis have similar definitions, they have important structural differences. Unlike the reverse families, for each family of Young polynomials, the basis of Young polynomials of $\ensuremath{\mathrm{Poly}}_n$ does not embed into $\ensuremath{\mathrm{Poly}}_{n+1}$. For example, $\yfs_{0101} = x_2x_4+x_3x_4 \in \ensuremath{\mathrm{Poly}}_4$ is not a Young fundamental slide polynomial in $\ensuremath{\mathrm{Poly}}_{5}$.
Because of this, we cannot use the typical definition of a weak composition as an infinite sequence of nonnegative integers (almost all zero); the number of entries in the sequence matters and the value of $n$ must be specified.
\end{remark}
\begin{remark}\label{rmk:stablelimit}
Stable limits are obtained for all families in the top row of Figure~\ref{fig:poset} by prepending zeros to the weak composition $a$; they are equal to the appropriate quasisymmetric function for ${\rm flat}(a)$ (\cite[Theorem D]{Sea20}). While stable limits for the Young analogues of these families can be defined (by appending zeros to the weak composition $a$), they are not equal to the quasisymmetric functions for ${\rm flat}(a)$, except in the case that all nonzero entries of $a$ are to the left of all zero entries of $a$. A polynomial satisfying this condition is in fact already quasisymmetric, and indeed limits to the expected quasisymmetric function.
\end{remark}
For example, the stable limit of the Young fundamental slide polynomial $\yfs_{230}$ (which is equal to $F_{23}(x_1,x_2,x_3)$) is the (Young) fundamental quasisymmetric function $F_{23}$. However, the stable limit of $\yfs_{203}$ is not $F_{23}$.
\section{Young Schubert polynomials}{\label{Sec:Schubert}}
Schubert polynomials were first introduced in \cite{LasSch82} to represent Schubert classes in the cohomology of the flag manifold. Schubert polynomials are typically indexed by permutations. However, every permutation corresponds to a weak composition called a \emph{Lehmer code}, which may also be used to index the Schubert polynomial. For each $n$ there is a $\mathbb{Z}$-basis for $\ensuremath{\mathrm{Poly}}_n$ consisting of Schubert polynomials; however, unlike the previously discussed bases of $\ensuremath{\mathrm{Poly}}_n$, the indexing compositions of the Schubert basis elements are not compositions of length $n$ but of arbitrary length. It is a long-standing open problem to find a positive combinatorial formula for the structure constants of the Schubert basis. See~\cite{Mac91,Man98} for more details about the geometry, algebra, and combinatorics of Schubert polynomials.
We will take the combinatorial ``pipe dreams'' model introduced in \cite{BerBil93} as our definition of Schubert polynomials. Consider a permutation $w \in S_n$. The \emph{Lehmer code} of $w$ is the weak composition $L(w)$ of length $n$ whose $i^{th}$ term equals the number of indices $j>i$ such that $w_i > w_j$. For example, if $w=31254$ then $L(w)=(2,0,0,1,0)$. A \emph{(reduced) pipe dream} is a tiling of the first quadrant of $\mathbb{Z} \times \mathbb{Z}$ with \emph{elbows} and \emph{crosses} so that any two of the resulting strands (called \emph{pipes}) cross at most once. The associated permutation
can be read from the diagram by following the pipes from the $y$-axis to the $x$-axis. Let $\mathrm{PD}(w)$ denote the set of pipe dreams for $w$. The five pipe dreams in $\mathrm{PD}(31524)$ are shown in Figure~\ref{fig:pipedreams}.
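The Lehmer code is immediate to compute from the one-line notation; the following short Python sketch (ours) may be a convenient companion when experimenting with pipe dreams.
\begin{verbatim}
def lehmer_code(w):
    # L(w)_i counts indices j > i with w_j < w_i (w in one-line notation)
    return [sum(wj < wi for wj in w[i + 1:]) for i, wi in enumerate(w)]

print(lehmer_code([3, 1, 2, 5, 4]))   # [2, 0, 0, 1, 0]
\end{verbatim}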
\begin{figure}[ht]
$
\pipes{
5 & \upelb \\
4 & \elbow & \upelb \\
3 & \elbow & \elbow & \upelb \\
2 & \elbow & \cross & \elbow & \upelb \\
1 & \cross & \cross & \elbow & \cross & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
5 & \upelb \\
4 & \elbow & \upelb \\
3 & \elbow & \elbow & \upelb \\
2 & \elbow & \cross & \cross & \upelb \\
1 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
5 & \upelb \\
4 & \elbow & \upelb \\
3 & \cross & \elbow & \upelb \\
2 & \elbow & \elbow & \elbow & \upelb \\
1 & \cross & \cross & \elbow & \cross & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
5 & \upelb \\
4 & \elbow & \upelb \\
3 & \cross & \elbow & \upelb \\
2 & \elbow & \elbow & \cross & \upelb \\
1 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
5 & \upelb \\
4 & \elbow & \upelb \\
3 & \cross & \cross & \upelb \\
2 & \elbow & \elbow & \elbow & \upelb \\
1 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
}
$
\caption{The $5$ pipe dreams associated to the permutation $31524$.}{\label{fig:pipedreams}}
\end{figure}
Let $w \in S_n$. The \emph{Schubert polynomial} $\sch_w = \sch_w(x_1, \hdots , x_n)$ is given by $$\sch_w = \sum_{P \in \mathrm{PD}(w)} x^{{\rm wt}(P)},$$ where ${\rm wt}(P)$ is the weak composition whose $i^{th}$ term counts the crosses in the $i^{th}$ row of $P$.
For example, by Figure~\ref{fig:pipedreams} the Schubert polynomial indexed by the permutation $31524$ is $$\sch_{31524} = x_1^3x_2 + x_1^2 x_2^2 + x_1^3 x_3 + x_1^2 x_2 x_3 + x_1^2 x_3^2.$$
Let $\mathrm{Red}(w)$ denote the set of reduced words for a permutation $w$. Every Schubert polynomial can be written as a positive sum of key polynomials according to the following theorem.
\begin{theorem}[\cite{RS95, LasSch89}]{\label{thm:schubintokeys}}
$$\sch_w = \sum_{\col(T) \in \mathrm{Red}(w^{-1})} \key_{{\rm wt}(K^0_{-}(T))},$$ where the sum is over semistandard Young tableaux $T$, and $K^0_{-}(T)$ is the \emph{left nil key} of $T$, obtained via a modification of Knuth equivalence called \emph{nilplactic} equivalence.
\end{theorem}
\subsection{Young pipe dreams}
Towards giving a combinatorial construction of the Young analogue of Schubert polynomials, we define a Young analogue of pipe dreams. Relabel the row indices (on the $y$-axis) with $n$ as the bottom row, $n-1$ as the second row, and so on.
Then read the ``reversal'' of the permutation by following the pipes from the $y$-axis to the $x$-axis. This reversal is the permutation $w$ read from right to left (in one-line notation), which we denote ${\rm rev}(w)$. This new diagram is called the \emph{Young pipe dream} corresponding to the permutation obtained by reading the pipes in this manner, and the set of all Young pipe dreams for a permutation $w$ is denoted $\mathrm{YPD}(w)$.
Let the \emph{Young Lehmer code} of a permutation $w\in S_n$, denoted $\mathcal{L}(w)$, be the weak composition of length $n$ whose $i^{th}$ term is the number of indices $j<i$ such that $w_j<w_i$. The \emph{Young weight} ${\rm ywt}(P)$ of a Young pipe dream $P$ is the weak composition whose $i^{th}$ part is the number of crosses in the $i^{th}$ row from the top.
\begin{figure}[ht]
$
\pipes{
1 & \upelb \\
2 & \elbow & \upelb \\
3 & \elbow & \elbow & \upelb \\
4 & \elbow & \cross & \elbow & \upelb \\
5 & \cross & \cross & \elbow & \cross & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
1 & \upelb \\
2 & \elbow & \upelb \\
3 & \elbow & \elbow & \upelb \\
4 & \elbow & \cross & \cross & \upelb \\
5 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
1 & \upelb \\
2 & \elbow & \upelb \\
3 & \cross & \elbow & \upelb \\
4 & \elbow & \elbow & \elbow & \upelb \\
5 & \cross & \cross & \elbow & \cross & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
1 & \upelb \\
2 & \elbow & \upelb \\
3 & \cross & \elbow & \upelb \\
4 & \elbow & \elbow & \cross & \upelb \\
5 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
} \qquad
\pipes{
1 & \upelb \\
2 & \elbow & \upelb \\
3 & \cross & \cross & \upelb \\
4 & \elbow & \elbow & \elbow & \upelb \\
5 & \cross & \cross & \elbow & \elbow & \upelb \\
& 1 & 2 & 3 & 4 & 5
}
$
\caption{The $5$ elements of $\mathrm{YPD}(42513)$.}{\label{fig:youngpipedreams}}
\end{figure}
Let $w\in S_n$. Then the \emph{Young Schubert polynomial} $\ysch_w = \ysch_w(x_1, \hdots , x_n)$ is given by
$$\ysch_w = \sum_{P \in \mathrm{YPD}(w)} x^{{\rm ywt}(P)}.$$
For example, the Young Schubert polynomial associated to the permutation $42513$ can be calculated by reading the Young weights of the Young pipe dreams in Figure~\ref{fig:youngpipedreams} as follows:
$$\ysch_{42513} = x_4 x_5^3 + x_4^2x_5^2 + x_3 x_5^3 + x_3 x_4 x_5^2 + x_3^2 x_5^2.$$
It is straightforward to check that $\mathcal{L}({\rm rev}(w)) = {\rm rev}(L(w))$. It follows that
\begin{equation}\label{eqn:SchubertYoungSchubert}
\ysch_w(x_1, \ldots , x_n)=\sch_{{\rm rev}(w)}(x_n, \ldots , x_1).
\end{equation}
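Such identities are convenient to spot-check by computer; below is a minimal Python sketch (ours), with both codes computed directly from their definitions.
\begin{verbatim}
def lehmer_code(w):
    return [sum(wj < wi for wj in w[i + 1:]) for i, wi in enumerate(w)]

def young_lehmer_code(w):
    # i-th term counts indices j < i with w_j < w_i
    return [sum(wj < wi for wj in w[:i]) for i, wi in enumerate(w)]

w = [3, 1, 5, 2, 4]
assert young_lehmer_code(w[::-1]) == lehmer_code(w)[::-1]  # both [0, 0, 2, 0, 2]
\end{verbatim}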
\begin{remark}
Schubert polynomials have a well-known stability property, namely, for $w\in S_n$, $\sch_w = \sch_{i_n(w)}$, where $i_n:S_n\rightarrow S_{n+1}$ is the embedding in which $S_n$ acts on the first $n$ letters. The same is not true for Young Schubert polynomials, for example $\ysch_{132} = x_2x_3$ but $\ysch_{1324} = x_2x_3x_4^3$. Analogous to Remark~\ref{rmk:noembed}, a Young Schubert polynomial in $\ensuremath{\mathrm{Poly}}_n$ is not a Young Schubert polynomial in $\ensuremath{\mathrm{Poly}}_{n+1}$. Similarly, the stable limit of a Schubert polynomial (on prepending zeros to the Lehmer code) exists, and was shown in \cite{Mac91} to be a \emph{Stanley symmetric function}. Analogous to Remark~\ref{rmk:stablelimit}, there is no corresponding stable limit for Young Schubert polynomials.
\end{remark}
\begin{remark}
The analogue of Remark~\ref{rmk:Youngbases} fails in this case: despite the fact that Schubert polynomials form a basis for $\ensuremath{\mathrm{Poly}}_n$, no collection of Young Schubert polynomials forms a basis for $\ensuremath{\mathrm{Poly}}_n$. This is due to the fact that the exponent of $x_i$ in a monomial in a Young Schubert polynomial is bounded by $i-1$. For Schubert polynomials this ``staircase'' condition goes the opposite way: the exponent of $x_i$ is bounded by $m-i$ when the indexing permutation is in $S_m$. Hence, by increasing $m$ as needed, one can find a Schubert polynomial in $\ensuremath{\mathrm{Poly}}_n$ containing any given monomial in $\ensuremath{\mathrm{Poly}}_n$.
\end{remark}
\begin{remark}
For completeness, we note that no polynomial is both a Schubert and a Young Schubert polynomial. All Schubert polynomials have at least one monomial divisible by $x_1$, but no Young Schubert polynomials do.
\end{remark}
A permutation $w$ is said to be \emph{vexillary} if for every sequence $a<b<c<d$ of indices, one never has $w_b < w_a<w_d<w_c$. That is, $w$ is \emph{vexillary} if and only if $w$ avoids the pattern $2143$. For $w$ vexillary, we have~\cite{LasSch90} $$\sch_w = \key_{L(w)}.$$ Thus the Young Schubert polynomials indexed by permutations whose reversal is vexillary are the Young key polynomials indexed by Young Lehmer codes of $3412$-avoiding permutations.
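Pattern avoidance of this kind is straightforward to test by brute force; a short Python sketch (ours):
\begin{verbatim}
from itertools import combinations

def is_vexillary(w):
    # w is vexillary iff no indices a < b < c < d have w_b < w_a < w_d < w_c
    return not any(w[b] < w[a] < w[d] < w[c]
                   for a, b, c, d in combinations(range(len(w)), 4))

print(is_vexillary([3, 1, 5, 2, 4]))  # False: positions 1,2,3,5 form a 2143
\end{verbatim}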
Theorem~\ref{thm:schubintokeys} and (\ref{eqn:keyyoungkey}) yield the following formula for writing any Young Schubert polynomial as a positive sum of Young key polynomials.
$$\ysch_w = \sum_{\col(T) \in {\rm \mathrm{Red}}(({\rm rev}(w))^{-1})} \ykey_{{\rm rev}({\rm wt}(K^0_{-}(T)))}.$$
Other combinatorial descriptions of Schubert polynomials can similarly be translated into descriptions of Young Schubert polynomials.
Schubert polynomials were initially defined in terms of divided difference operators so that $$\sch_w(x_1, x_2, \hdots , x_n) = \partial_{w^{-1} w_0} (x_1^{n-1} x_2^{n-2} \cdots x_{n-1}),$$ where $w_0=n \; n-1 \; \cdots 2 \; 1$ is the longest permutation of an $n$-element set and $\partial_i (f) = \frac{f-s_i(f)}{x_i - x_{i+1}}$. There is a natural way to describe Young Schubert polynomials in terms of divided difference operators, which we establish below.
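Divided difference operators are also easy to experiment with in a computer algebra system. The following minimal SymPy sketch (ours; \verb|cancel| clears the denominator, which always divides exactly) implements $\partial_i$, and iterating it reproduces, for instance, the computation of $\ysch_{2314}$ carried out in the example below.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1:5')   # x[0], ..., x[3] stand for x_1, ..., x_4

def divided_difference(i, f):
    # partial_i(f) = (f - s_i(f)) / (x_i - x_{i+1}), with i 1-indexed
    si_f = f.subs({x[i - 1]: x[i], x[i]: x[i - 1]}, simultaneous=True)
    return sp.cancel((f - si_f) / (x[i - 1] - x[i]))

f = x[1] * x[2]**2 * x[3]**3                         # x_2 x_3^2 x_4^3
g = divided_difference(2, divided_difference(1, f))  # partial_2 partial_1
print(sp.expand(g))                                  # x2*x4**3 + x3*x4**3
\end{verbatim}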
For $w\in S_n$, let $\mathrm{frev}(w)$ be the permutation $w_0ww_0$. It is straightforward to see that in one-line notation, $\mathrm{frev}(w)$ is obtained from $w$ by reversing the entries of $w$ and replacing each entry $i$ with $n+1-i$, e.g. $\mathrm{frev}(31542) = 42153$.
\begin{lemma}\label{lem:reducedwordfrev}
Let $s_{i_1}\cdots s_{i_r}$ be a reduced word for $w\in S_n$. Then $\mathrm{frev}(w) = s_{n-i_1}\cdots s_{n-i_r}$.
\end{lemma}
\begin{proof}
We induct on the length of $w$. If $w$ has length $0$, then $w = \mathrm{frev}(w) = id$ and the statement holds. Now suppose the statement holds for all $w$ of length $r$, for some $r\ge 0$. Suppose $w$ has an ascent in position $j$, i.e. $w(j)<w(j+1)$. Then $w s_j$ has length $r+1$, and is obtained by exchanging the $j$th and $(j+1)$th entries of $w$. We have $w s_j = s_{i_1}\cdots s_{i_r} s_j$; we need to show $\mathrm{frev}(w s_j) = s_{n-i_1}\cdots s_{n-i_r} s_{n-j}$. But $s_{n-i_1}\cdots s_{n-i_r}$ is equal to $\mathrm{frev}(w)$ by the inductive hypothesis, and therefore $s_{n-i_1}\cdots s_{n-i_r} s_{n-j}$ is obtained from $\mathrm{frev}(w)$ by exchanging the entries in the $(n-j)$th and $(n-j+1)$th positions. This permutation is exactly $\mathrm{frev}(w s_j)$.
\end{proof}
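Lemma~\ref{lem:reducedwordfrev} is also easy to sanity-check by composing transpositions; in the Python sketch below (ours), right multiplication by $s_j$ swaps positions $j$ and $j+1$ of the one-line notation.
\begin{verbatim}
def times_s(w, j):
    # right-multiply the permutation w (one-line notation) by s_j
    w = list(w)
    w[j - 1], w[j] = w[j], w[j - 1]
    return tuple(w)

def from_word(word, n):
    w = tuple(range(1, n + 1))
    for j in word:
        w = times_s(w, j)
    return w

def frev(w):
    # frev(w) = w0 w w0: reverse the one-line notation and flip entries
    n = len(w)
    return tuple(n + 1 - b for b in reversed(w))

n, word = 4, [1, 2]                                  # w = s1 s2 = 2314
assert frev(from_word(word, n)) == from_word([n - j for j in word], n)
# frev(2314) = 1423 = s3 s2
\end{verbatim}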
\begin{lemma}\label{lem:Ipartial}
Let $f$ be a polynomial in $x_1, \ldots , x_n$, and let $I(f)$ be defined as in Lemma~\ref{lem:reversediff}. Then $I ( \partial_{i_1} \cdots \partial_{i_r} (f)) = (-1)^r \partial_{n-i_1} \cdots \partial_{n-i_r} (I(f))$.
\end{lemma}
\begin{proof}
We show that $I(\partial_i (f)) = -\partial_{n-i} (I(f))$; iterating this establishes the result. To see this, consider the monomial $x_i^a x_{i+1}^b$ where $a>b$. (The case where $a<b$ is similar, and if $a=b$ then $\partial_i (x_i^a x_{i+1}^b) = 0$.)
\begin{align*}
I(\partial_i (x_i^a x_{i+1}^b)) = I \left(\frac{x_i^{a} x_{i+1}^{b} - x_i^b x_{i+1}^a}{x_i-x_{i+1}}\right) & = \frac{x_{n+1-i}^a x_{n-i}^b - x_{n+1-i}^b x_{n-i}^a}{x_{n+1-i}-x_{n-i}} \\
& = - \partial_{n-i}( x_{n-i}^{b} x_{n+1-i}^a)
= - \partial_{n-i} (I(x_i^a x_{i+1}^b)). \qedhere
\end{align*}
\end{proof}
We are now ready to establish a divided difference formula for $\ysch_w$. The power of $-1$ appearing in the formula below is due solely to the fact that, since we begin with $x_2 x_3^2 \cdots x_n^{n-1}$, each application of $\partial_i$ is to a polynomial in which the power of $x_i$ is smaller than the power of $x_{i+1}$ in every monomial. The power of $-1$ could be defined away by replacing the denominator with $x_{i+1} - x_i$ in the definition of $\partial_i$.
\begin{theorem}
Let $w\in S_n$. Then $\ysch_{w} (x_1, x_2, \hdots , x_n)= (-1)^{\ell(w)}\partial_{w^{-1}} (x_2 x_3^2 \cdots x_n^{n-1} ).$
\end{theorem}
\begin{proof}
Let $s_{i_1}\cdots s_{i_r}$ be a reduced word for $w^{-1}$. Combining Lemmas~\ref{lem:reducedwordfrev} and \ref{lem:Ipartial}, we have
\[I (\partial_{\mathrm{frev}(w^{-1})} (\sch_{w_0})) = I ( \partial_{n-i_1} \cdots \partial_{n-i_r} (\sch_{w_0})) = (-1)^r \partial_{i_1} \cdots \partial_{i_r} (I(\sch_{w_0})) = (-1)^r\partial_{w^{-1}}(I(\sch_{w_0})).\]
Recall that $\ysch_w=I(\sch_{{\rm rev}(w)})$, and in particular $\ysch_{id}=I(\sch_{w_0}) = I(x_1^{n-1}x_2^{n-2}\cdots x_{n-1})$. Note also that $w_0^{-1} = w_0$, that $\ell(w) = \ell(w^{-1})$, and that $ww_0 = {\rm rev}(w)$. We therefore have
\begin{align*}
\ysch_w = I(\sch_{{\rm rev}(w)}) & = I(\partial_{({\rm rev}(w))^{-1}w_0}(\sch_{w_0})) \\
& = I(\partial_{(ww_0)^{-1}w_0}(\sch_{w_0})) \\
& = I(\partial_{w_0w^{-1}w_0}(\sch_{w_0})) \\
& = I(\partial_{\mathrm{frev}(w^{-1})}(\sch_{w_0})) \\
& = (-1)^{\ell(w)}\partial_{w^{-1}}(I(\sch_{w_0})) \\
& = (-1)^{\ell(w)}\partial_{w^{-1}}(\ysch_{id})
= (-1)^{\ell(w)}\partial_{w^{-1}}(x_2x_3^2\cdots x_n^{n-1}). \qedhere
\end{align*} \end{proof}
\begin{ex}
Let $w=2314 = s_1s_2$. Then $w^{-1} = 3124 = s_2s_1$ and we have
\begin{align*}
\ysch_{2314} = (-1)^{\ell(2314)}\partial_{(2314)^{-1}} (x_2x_3^2x_4^3)
& = (-1)^2\partial_{(3124)} (x_2x_3^2x_4^3) \\
& = \partial_2\partial_1 (x_2x_3^2x_4^3) \\
& = \partial_2 \left(\frac{x_2x_3^2x_4^3 - x_1x_3^2x_4^3}{x_1-x_2}\right) \\
& = \partial_2(-x_3^2x_4^3) \\
& = - \frac{x_3^2x_4^3 - x_2^2x_4^3}{x_2-x_3}
= -(-(x_3+x_2)x_4^3)
= x_3x_4^3 + x_2x_4^3 .
\end{align*}
Compare this to $\sch_{{\rm rev}(2314)} = \sch_{4132}$, which is equal to $x_1^3x_2 + x_1^3x_3$.
\end{ex}
\subsection{Demazure crystal structure}
We use the recently developed crystal structure for Stanley symmetric functions~\cite{MorSch16} and the Demazure crystal structure for Schubert polynomials~\cite{AssSch18} to generate the Demazure crystal structure for Young Schubert polynomials.
Let $w\in S_n$. Following \cite{MorSch16}, a \emph{reduced factorisation} for $w$ is a partition of a reduced word for $w$ into blocks (possibly empty) of consecutive entries such that entries decrease from left to right within each block; let $\mathrm{RF}^\ell(w)$ denote the set of all reduced factorisations of $w$ with $\ell$ blocks. In \cite{MorSch16}, a crystal structure is defined on $\mathrm{RF}^\ell(w)$. Precise definitions of the $e_i$ and $f_i$ operators may be found in \cite[Section 3.2]{MorSch16}. See Figure~\ref{fig:YoungDemazureCrystal} for the crystal structure on $\mathrm{RF}^3(21534)$, with arrows $f_i$ labelled. For our purposes, we need to define the weight ${\rm wt}(r)$ of $r\in \mathrm{RF}^{\ell}(w)$ to be the weak composition of length $n$ given by $(0, \ldots , 0, |r^\ell|, |r^{\ell-1}|, \ldots , |r^1|)$ (as opposed to $(|r^\ell|, |r^{\ell-1}|, \ldots , |r^1|)$ used in \cite{MorSch16}). In particular we define ${\rm wt}(r)$ to begin with $n-\ell$ zeros, e.g., for $(41)()(3)\in \mathrm{RF}^3(21534)$, we have $n=5$, $\ell=3$ and ${\rm wt}((41)()(3)) = 00102$.
Let $\ell$ be the position of the rightmost descent in $w$. Define the \emph{reduced factorisations with Young cutoff} for $w$, denoted $\mathrm{RFYC}(w)$, to be those elements of $\mathrm{RF}^\ell(w)$ such that the smallest entry in the $i^{th}$ block from the left is at least $i$. See Figure~\ref{fig:YoungDemazureCrystal}, in which the elements of $\mathrm{RFYC}(21534)$ are bolded. Compare this to the \emph{reduced factorisations with cutoff} defined in \cite{AssSch18}.
\begin{theorem}
The Young Schubert polynomial $\ysch_w$ is equal to $\sum_{r\in \mathrm{RFYC}({\rm rev}(w))}x^{{\rm wt}(r)}$. Moreover, $\mathrm{RFYC}(w)$ is a union of Demazure crystals, under the convention that the Demazure truncation is generated from the lowest weight by the $e_i$ operators rather than from the highest weight by the $f_i$ operators.
\end{theorem}
\begin{proof}
In \cite{AssSch18}, a crystal structure isomorphic to that of \cite{MorSch16} is obtained by reversing each reduced factorisation for $w$ (thus obtaining reduced factorisations for $w^{-1}$ partitioned into increasing blocks), and exchanging the roles of $f_i$ with $e_{n-i}$ and $e_i$ with $f_{n-i}$. Restricting this isomorphism to $\mathrm{RFYC}(w)$ gives the set of reduced factorisations with cutoff for $w^{-1}$, of which the weight generating function is $\sch_w$ \cite{AssSch18}. Since this isomorphism is weight-reversing, it follows from (\ref{eqn:SchubertYoungSchubert}) that the weight generating function of $\mathrm{RFYC}(w)$ is $\ysch_{{\rm rev}(w)}$. By \cite[Theorem 5.11]{AssSch18}, reduced factorisations with cutoff have a Demazure crystal structure, and the isomorphism implies $\mathrm{RFYC}(w)$ is a union of Demazure truncations of the components of $\mathrm{RF}(w)$, starting with the lowest weight.
\end{proof}
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[xscale=1.5,yscale=1.35]
\node at (0,6) (L6) {$()()(431)$};
\node at (1,5) (L5) {$()(4)(31)$};
\node at (0,4) (L4a) {$(4)()(31)$};
\node at (2,4) (L4b) {$()(43)(1)$};
\node at (1,3) (L3a) {$(4)(3)(1)$};
\node at (3,3) (L3b) {$()(431)()$};
\node at (0,2) (L2a) {$(43)()(1)$};
\node at (2,2) (L2b) {$(4)(31)()$};
\node at (1,1) (L1) {$(43)(1)()$};
\node at (0,0) (L0) {${\bf (431)()()}$};
\node at (7,4) (R4) {$()(1)(43)$};
\node at (5,3) (R3a) {${\bf (1)()(43)}$};
\node at (9,3) (R3b) {$()(41)(3)$};
\node at (6,2) (R2a) {${\bf (1)(4)(3)}$};
\node at (8,2) (R2b) {$(4)(1)(3)$};
\node at (5,1) (R1a) {${\bf (41)()(3)}$};
\node at (9,1) (R1b) {${\bf (1)(43)()}$};
\node at (7,0) (R0) {${\bf (41)(3)()}$};
\draw[thick,->,blue ] (L6) -- (L5) node[midway,above] {$1$};
\draw[thick,->,blue ] (L5) -- (L4b) node[midway,above] {$1$};
\draw[thick,->,blue ] (L4b) -- (L3b) node[midway,above] {$1$};
\draw[thick,->,blue ] (L4a) -- (L3a) node[midway,above] {$1$};
\draw[thick,->,blue ] (L3a) -- (L2b) node[midway,above] {$1$};
\draw[thick,->,blue ] (L2a) -- (L1) node[midway,above] {$1$};
\draw[thick,->,blue ] (R4) -- (R3b) node[midway,above] {$1$};
\draw[thick,->,blue ] (R3a) -- (R2a) node[midway,above] {$1$};
\draw[thick,->,blue ] (R2a) -- (R1b) node[midway,above] {$1$};
\draw[thick,->,blue ] (R1a) -- (R0) node[midway,above] {$1$};
\draw[thick,->,red ] (L5) -- (L4a) node[midway,above] {$2$};
\draw[thick,->,red ] (L4b) -- (L3a) node[midway,above] {$2$};
\draw[thick,->,red ] (L3a) -- (L2a) node[midway,above] {$2$};
\draw[thick,->,red ] (L3b) -- (L2b) node[midway,above] {$2$};
\draw[thick,->,red ] (L2b) -- (L1) node[midway,above] {$2$};
\draw[thick,->,red ] (L1) -- (L0) node[midway,above] {$2$};
\draw[thick,->,red ] (R4) -- (R3a) node[midway,above] {$2$};
\draw[thick,->,red ] (R3b) -- (R2b) node[midway,above] {$2$};
\draw[thick,->,red ] (R2b) -- (R1a) node[midway,above] {$2$};
\draw[thick,->,red ] (R1b) -- (R0) node[midway,above] {$2$};
\end{tikzpicture}
\caption{\label{fig:YoungDemazureCrystal} The crystal on $\mathrm{RF}^3(21534)$ and the subcrystal $\mathrm{RFYC}(21534)$ ({\bf bold}).}
\end{center}
\end{figure}
The Demazure crystal structure provides another method for expanding Young Schubert polynomials in Young key polynomials, cf. \cite[Corollary 5.12]{AssSch18}.
\begin{ex}
Figure~\ref{fig:YoungDemazureCrystal} demonstrates that $\ysch_{43512} = \ykey_{00003} + \ykey_{00201}$, where $\ykey_{00003} = x_5^3$ is the bolded Demazure truncation of the left component and $\ykey_{00201} = x_4x_5^2+x_4^2x_5+x_3x_5^2 + x_3x_4x_5 + x_3^2x_5$ is the bolded Demazure truncation of the right component.
\end{ex}
\section*{Acknowledgements}
We thank Sami Assaf and Anne Schilling for suggesting a connection with the Demazure crystal structure for Schubert polynomials, and Martha Precup and Brendon Rhoades for pointing out further recent appearances of the Young/reverse dichotomy for polynomials and tableaux. We also thank Vic Reiner for pointing out a connection to evacuation in Section 3.
\bibliographystyle{alpha}
|
{
"timestamp": "2021-05-11T02:19:12",
"yymm": "2105",
"arxiv_id": "2105.03895",
"language": "en",
"url": "https://arxiv.org/abs/2105.03895"
}
|
\section{Introduction}
\PARstart{F}{or} more than a decade, graphics processing units (GPUs) have established themselves as massively parallel computing devices and have been adopted in many areas of computing, from embedded systems to large high-performance data centers. This success has resulted in a large number of general-purpose applications programmed and optimized for execution on a GPU (GPGPU applications). Among them, emerging applications such as those related to deep learning, analytics, and data mining push toward GPU architectures with larger compute and storage resources while keeping energy consumption under control. Consequently, energy efficiency has become one of the main design concerns of modern GPU architectures.
The register file is one of the most energy-consuming memory structures in a GPU, being responsible for about 20\% of the total energy consumption of the device~\cite{access}, and its consumption grows generation after generation. For instance, the 20 MB register file of the NVIDIA Tesla V100 is 5 times larger than its counterpart in the Tesla K40~\cite{Volta2017}. Many proposals have focused on the energy efficiency of the register file, from traditional techniques such as clock and power gating to recent proposals including the exploitation of data access patterns~\cite{energy2}, data lifetime analysis~\cite{energy}, prefetching techniques~\cite{ltrf}, data redundancy~\cite{wctc}, register sharing~\cite{virtual}, and coalescing techniques~\cite{corf}.
On the other hand, like any other digital circuit, the GPU register file is affected by static and dynamic variation effects. Static variations are a consequence of the chip fabrication process, whereas dynamic variations arise from circuit operation (e.g., voltage noise and aging effects). Process variations impose a minimum safe supply voltage ($V_{min}$) on each memory cell to guarantee its reliability; in turn, for an entire register file, $V_{min}$ is set by the highest $V_{min}$, corresponding to the worst cell.
A supply voltage ($V_{dd}$) above $V_{min}$ provides a sufficient guardband for safer operation in the face of a sudden $V_{dd}$ droop but, in exchange, accelerates circuit aging. Moreover, a high $V_{dd}$ wastes energy, since energy scales quadratically with $V_{dd}$ and significant supply voltage noise is an infrequent event~\cite{dvfs_gpu2}.
In this context, some prior works have proposed voltage-noise smoothing schemes for the GPU register file, which allow the guardband to be relaxed by pushing $V_{dd}$ toward $V_{min}$ at a fixed frequency~\cite{dvfs_gpu,dvfs_gpu2}. Lowering $V_{dd}$ below $V_{min}$ is a challenging task due to the large number of permanent faults that can appear as a result of exceeding the $V_{min}$ of multiple cells~\cite{salami}. Unlike the isolated faults induced by dynamic variations or particle strikes, the large number of permanent faults arising when $V_{dd}< V_{min}$ is far beyond the capabilities of conventional error-correcting codes (ECC), which would require not only larger storage capacity and energy consumption but also slow and complex (de)coders to guarantee safe operation of the register file~\cite{ecc2,ecc}.
With the aim of further improving energy efficiency without resorting to costly ECC, content redirection is being explored as a solution to tolerate permanent faults due to process variations in a GPU register file operating at a voltage below $V_{min}$. This approach disables faulty register entries and provides alternative, reliable entries to which accesses to faulty registers are redirected. In this context, the prior work GR-Guard identifies reliable entries containing dead data at compile time and redirects faulty accesses to those entries at run time thanks to modifications of the instruction set architecture (ISA)~\cite{Tan2016}. However, modifying the ISA has the following drawbacks: i) it exposes unnecessary implementation details to the programmer, ii) it requires software changes to exploit the mechanism, and iii) it increases ISA complexity (backward compatibility and future extensions). In contrast, this work presents a novel microarchitecture-level redirection technique, RRCD, which enables faulty entries for redirection purposes by exploiting the inherent data compression of GPGPU applications at run time.
For data compression to be effective, regular patterns must be present in the data stream. Fortunately, many GPU programming languages generate regular memory access patterns and avoid control-flow divergence, storing regular data patterns in GPU registers~\cite{xiang2013}. These regular patterns can be compressed using variations of the \emph{Base-Delta-Immediate} (BDI) algorithm, originally proposed to compress CPU cache lines~\cite{bdi}. These compression algorithms can be applied to any NVIDIA or AMD GPU architecture, since the exploited data patterns come exclusively from the single-instruction, multiple-thread programming models provided by CUDA and OpenCL.
Some recent works exploit data compression in the GPU register file to reduce energy~\cite{wctc,ereer,sttcomp}, to mitigate transistor aging effects~\cite{valerotc}, or to deal with transient faults~\cite{mittal}. However, to the best of our knowledge, compression has not previously been used to bypass permanent faults in GPU register files. Moreover, this is the first work proposing a redirection mechanism that introduces no complexity or overhead for the software or the programmer.
Experimental results show that RRCD guarantees reliable operation of a register file with 39\% faulty register entries, reducing the average energy consumption by 47\% and 21\% with respect to a conventional register file operating at nominal $V_{dd}$ and at a safe $V_{min}$, respectively, while the impact on performance and area is below 2\% and 6\%, respectively.
\section{Background}
\label{bckgnd}
This section summarizes the GPU register file architecture, the reliability model and scenarios used to evaluate the proposal, and the data compression strategy used to combat process variations.
\subsection{GPU Register File}
Current GPUs consist of tens of in-order processors, known as \emph{Streaming Multiprocessors} (SMs) and \emph{Compute Units} (CUs) in NVIDIA and AMD GPUs, respectively. Without loss of generality, this work uses the AMD Graphics Core Next (GCN) family of GPUs as an example~\cite{amdwhite}. Hence, the rest of the paper uses AMD terminology.
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\columnwidth]{./figures/rf_pipeline_spa.eps}
\caption{Pipeline of a SIMD unit after the fetch and decode stages.}
\label{pipe}
\end{figure}
A CU consists of 4 \emph{single-instruction, multiple-data} (SIMD) units. Each SIMD unit is associated with a 64 KB slice of the register file. Figure~\ref{pipe} shows the pipeline stages associated with a SIMD unit and how 256 vector register entries make up a slice. In turn, each entry consists of 64 components of 4 bytes. To access the register entries, threads are organized in groups of up to 64 threads called \emph{wavefronts}. All the threads belonging to the same wavefront access the same register entry, but with a component offset based on the thread identifier within the wavefront. In this way, although each thread works with a different component of the same entry, referring to each individual component in the ISA is avoided.
Since a SIMD unit consists of 16 lanes (64 B), the threads of a wavefront execute an instruction as 4 packets of 16 adjacent threads, referred to as \emph{blocks} ($blq_i$) in the figure. These blocks are accessed one after another during four successive cycles. Since AMD instructions usually have two source operands ($fnt0$ and $fnt1$) and one destination operand ($dest$), a slice incorporates two read ports and one write port.
The register entries of a slice are statically distributed among the wavefronts running on the corresponding SIMD unit. To this end, when a wavefront is assigned to a SIMD unit, it is given an identifier ($WF_{id}$), the physical address of its base register, and a number of contiguous register entries, the latter being a constant value for all the wavefronts of an application~\cite{amd2015}. The set of entries of a wavefront is called a register window. For instance, Figure~\ref{pipe} shows 3-register windows in the slice, highlighted with increasing shades of gray for wavefronts $WF_{0}$, $WF_{1}$, and $WF_{2}$.
As shown in the register translation stage, the instructions of each wavefront refer to logical registers within their window through indices ($idc$). These indices are added to the base register to obtain the physical register entries. The base register entry is obtained from a register base table indexed with the wavefront id.
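For clarity, the translation step can be summarized with the following minimal sketch (a functional model with illustrative names, not the actual hardware description):
\begin{verbatim}
# Minimal sketch of the register translation stage.
NUM_ENTRIES = 256                 # vector entries per slice

# Register base table, indexed by wavefront id; filled when
# a wavefront is assigned to the SIMD unit.
base_table = {0: 0, 1: 3, 2: 6}   # e.g., 3-entry windows

def translate(wf_id, idx, window_size=3):
    """Map a logical index within a wavefront's register
    window to a physical slice entry."""
    assert 0 <= idx < window_size  # stay inside the window
    return (base_table[wf_id] + idx) % NUM_ENTRIES

print(translate(1, 2))  # register 2 of WF_1 -> entry 5
\end{verbatim}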
\subsection{Reliability Scenarios}
\label{rel}
The distribution of permanent faults in an SRAM memory depends on the relative impact of systematic variations (neighboring transistors with similar variations) and random variations (uniform variations across the chip). The proposal is evaluated under three reliability scenarios: \emph{Common}, \emph{Clustered}, and \emph{Scattered}. The \emph{Common} scenario refers to a widely used scenario where systematic and random variations are treated equally~\cite{variusnvt}, whereas \emph{Clustered} and \emph{Scattered} refer to scenarios with a higher impact of systematic and random variations, respectively.
\begin{table}[t!]
\renewcommand{\baselinestretch}{1}
\small
\centering
\caption{Percentage of register entries with different numbers of faulty bits for each reliability scenario~\protect\cite{Tan2016}.}
{
\begin{tabular}{|c||c|c|c|c|c|} \hline
Reliability & \multicolumn{2}{c|}{Reliable entries} & \multicolumn{3}{c|}{Faulty entries}\\
scenario & 0-bit & 1-bit & 2-bit & 3-bit & $\geq$4-bit\\\hline\hline
\emph{Common} & 34 & 33 & 20 & 10 & 3 \\\hline
\emph{Clustered} & 43 & 20 & 12 & 10 & 15 \\\hline
\emph{Scattered} & 26 & 35 & 23 & 12 & 4 \\\hline
\end{tabular}}
\label{fop}
\end{table}
Table~\ref{fop} shows the distribution of the register entries of a slice according to their number of faulty bits for each reliability scenario. This work assumes the 28 nm fault model proposed in~\cite{Tan2016} and obtained with VARIUS~\cite{iccd16}.
The assumed reliability model focuses on the register file, assuming an implementation with dedicated voltage domains for logic and memory, keeping the logic at a high $V_{dd}$ to avoid faults as described in~\cite{vdomains,vdomains2}.
$V_{dd}$ is fixed for the entire execution of an application at 419, 497, and 371 mV for the \emph{Common}, \emph{Clustered}, and \emph{Scattered} scenarios, respectively, all of them below a $V_{min}$ of 600 mV~\cite{vmin,Tan2016,Ipatch}. For the sake of clarity, entries with four or more faulty bits have been grouped together.
\emph{Error-Correcting Pointer} (ECP) is an approach that corrects permanent faults by encoding the locations of the faulty bits in a table and provisioning additional replacement bits to replace the faulty ones~\cite{ecp}. In this work, ECP is employed at a reasonable register-entry granularity with one replacement bit per entry. Therefore, entries with fewer than two faulty bits are considered reliable. Conservatively, we assume that faulty bits are uniformly distributed among the four blocks of a register entry. That is, faulty entries with $i$ faulty bits, $i\geq2$, have $i$ faulty blocks. Of course, when $i\geq4$, all the blocks are faulty and the entry is considered completely useless.
The \emph{Common} scenario shows a register distribution where the percentage of entries decreases with the number of faulty bits per entry. In contrast, the \emph{Clustered} scenario, where the systematic effect dominates over the random effect, shows a higher percentage of entries at the extremes of zero and four or more faulty bits. The \emph{Scattered} scenario, where faults are randomly distributed, presents a higher percentage of entries with at least one faulty bit, but does not show as many completely faulty entries as the \emph{Clustered} scenario. Overall, the percentage of faulty entries in a slice is 33, 37, and 39\% for the \emph{Common}, \emph{Clustered}, and \emph{Scattered} scenarios, respectively.
Finally, each reliability scenario has $256\times4$-bit fault maps per slice according to the location of the faulty blocks. These maps are determined during a post-fabrication test~\cite{Ipatch,Tan2016,tanmap} and are used as an input to RRCD to distinguish between reliable and faulty register blocks (and entries).
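The following minimal sketch illustrates, under the uniform-distribution assumption above, how such a fault map can be derived from per-entry faulty-bit counts and how entries are classified (illustrative code, not the post-fabrication test itself):
\begin{verbatim}
import random

# Minimal sketch: derive the 256 x 4 fault map consumed by
# RRCD. ECP repairs one faulty bit per entry, so entries
# with fewer than two faulty bits are reliable; an entry
# with i >= 2 faulty bits has i faulty blocks (capped at 4).

def build_fault_map(faulty_bits_per_entry):
    fault_map = []
    for bits in faulty_bits_per_entry:
        n_faulty = 0 if bits < 2 else min(bits, 4)
        blocks = [True] * n_faulty + [False] * (4 - n_faulty)
        random.shuffle(blocks)    # uniform placement of faults
        fault_map.append(blocks)  # True marks a faulty block
    return fault_map

def classify(fault_map):
    reliable = sum(not any(e) for e in fault_map)
    useless = sum(all(e) for e in fault_map)
    partial = len(fault_map) - reliable - useless
    return reliable, partial, useless
\end{verbatim}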
\subsection{Data Compression}
\label{datac}
This paper exploits the data compression mechanism proposed in~\cite{valerotc}, where a number of regular data patterns were identified in the GPU register file. These patterns appear when all the components of a register store the same scalar value due to divergence control~\cite{xiang2013}; when a register stores a sequence of values where the difference between consecutive components is constant, due to thread identifiers or vector addresses; and when, in addition to the aforementioned difference, a second value difference appears between other adjacent components, which is the case of registers that linearly store the addresses of a matrix. This last pattern also appears when programming techniques such as \emph{tiling} or sliding windows are used.
These patterns offer a very high compression ratio, since only 4.88 B are needed to (de)compress a 256 B register entry. The hardware units required to (de)compress a register at run time consist of a set of adders, subtractors, comparators, and small memory buffers to cope with the four-cycle read and write operations of a slice, according to the number of blocks in a register entry. In this regard, a decompression unit receives one block with the compressed data of a register and delivers each uncompressed block in a successive cycle, completing the process after four cycles. Similarly, a compression unit receives each uncompressed block one after another, obtaining the potential compression in the first cycle and determining whether the entire register can be compressed in the fourth cycle, once all the blocks have been examined.
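Functionally, the compressibility test amounts to inspecting the differences between consecutive components, as in the following minimal sketch (a behavioral model only; it does not reproduce the bit-level 4.88 B encoding or the pipelined hardware):
\begin{verbatim}
# Minimal sketch of the BDI-like patterns: a register is
# compressible when the differences between consecutive
# components take at most two distinct values (one value
# covers the scalar case, delta = 0, and the thread-id /
# vector-address case; two values cover matrix addresses).

def compress(components):            # 64 components of 4 B
    diffs = [b - a for a, b in zip(components, components[1:])]
    if len(set(diffs)) <= 2:
        return components[0], diffs  # base + cheap deltas
    return None                      # incompressible

def decompress(base, diffs):
    out = [base]
    for d in diffs:
        out.append(out[-1] + d)
    return out

reg = [4 * i for i in range(64)]     # thread-id-like pattern
assert decompress(*compress(reg)) == reg
\end{verbatim}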
\section{Motivation}
\label{mot}
This section explores the potential of a register redirection technique based on data compression to tolerate permanent faults in the GPU register file. To this end, register entries are classified according to the compression status of their contents (compressible or incompressible) and the reliability status of the entry (reliable or faulty).
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\columnwidth]{./figures/mot_spa.eps}
\caption{Average percentage of register entries per cycle according to their reliability status and content compression.}
\label{motfig}
\end{figure}
Figure~\ref{motfig} shows the four resulting categories. Each bar shows the average percentage of occupied entries per cycle. The results are limited to the analysis of the \emph{Common} reliability scenario, and a representative subset of applications from the OpenCL SDK 2.5 suite is evaluated, with diverse characteristics such as memory requirements, register file pressure, and data compression opportunities~\cite{opencl}. Refer to Section~\ref{exp} for further details on the experimental environment.
Almost 30\% of the entries are reliable and store a compressed register by default in every cycle. These entries are prime candidates for redirection, since the RRCD technique tries to force the redirection of an incompressible register to a reliable entry. The results also show that 22\% of the entries are reliable and store an uncompressed register; hence, they are not suitable entries for redirection. A small percentage, 10\%, of the entries are faulty and store an uncompressed register. A register in a faulty entry requires redirection to a reliable entry; otherwise, data integrity would be compromised by the presence of faults. Finally, 14\% of the entries are faulty but store a compressed register. In this case, no redirection would be needed as long as the compressed register is stored in a reliable block within the faulty entry. The remaining reliable blocks (if any) of such an entry can store other compressed registers, resulting in a faulty entry storing multiple compressed registers.
Overall, data compression offers promising opportunities for the redirection technique, since all the applications except \emph{BlackS} show a higher percentage of reliable entries storing compressed registers than of faulty entries storing uncompressed registers. Note that unassigned entries also offer redirection opportunities for compressed or uncompressed registers. Register file utilization depends on: i) the GPU architecture, including the number of slice entries, the maximum number of concurrent wavefronts assigned to a slice, and the register window size~\cite{amd2015}; and ii) application optimizations~\cite{abdel}. In Figure~\ref{motfig}, the sum of all categories for a given application constitutes its register file utilization percentage. Depending on the application, these percentages range from 54\% (\emph{QRandS}) to 93\% (\emph{SConv}), with an average of 74\%, which is higher than in previous studies~\cite{abdel}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.99\textwidth]{./figures/dc_patch2_spa.eps}
\caption{Pipeline stages of a SIMD unit, including the main components of the RRCD design shaded in gray.}
\label{dcpatch}
\end{figure*}
\section{The RRCD Proposal}
\label{prop}
This section presents the RRCD proposal. First, an overview of RRCD is given, discussing how the redirection selection algorithm works and identifying the main components of the design. Next, these components are described in detail, including the operations involved in each of them. Finally, the timing, energy, power, and area costs are analyzed.
\subsection{Overview}
RRCD dynamically redirects registers to slice entries depending on the compression status of the register. A redirection is required when a register is written for the first time, as well as when subsequent writes change its compression status. This means that the register's previous entry is released and a new faulty or reliable entry is assigned to it according to its new status. Uncompressed registers are always redirected to reliable entries, whereas compressed registers can be redirected to either reliable or faulty entries. In this regard, RRCD prioritizes assigning compressed registers to faulty entries; otherwise, reliable entries would be wasted.
The proposed approach takes advantage of the pipelined access to a 256-byte register entry in 64-byte blocks. That is, a reliable entry can hold up to four compressed registers, each in a different block. In contrast, faulty entries can only hold compressed registers in reliable blocks, while faulty blocks remain disabled. Thus, unlike prior work~\cite{Tan2016}, data compression makes it possible to exploit faulty entries. Note that a compressed register (4.88 bytes) occupies a small fraction of a 64-byte block, which leaves room to redirect more than one compressed register into a block. However, this would entail a more complex design with higher energy and area costs.
Figure~\ref{dcpatch} shows the pipeline stages of a SIMD unit after instruction fetch and decode, including the main components of the RRCD design shaded in gray. At a first glance, the Redirection Table (TR) sits after register translation and sends to the next stage the entry where each register is to be found.
The (de)compression units (Com/Des) have dedicated pipeline stages. The decompressors are placed after the slice read ports and guarantee that the SIMD unit operates on uncompressed blocks in every cycle. After examining a destination block, the compression unit sends the compressed data (when applicable) to the slice write port. Finally, the Redirection Selection Unit (USR) is located in the writeback (WB) stage. Taking the fault map of the reliability scenario into account, the USR issues a new redirection whenever the compression status of a destination register changes. Each of these components and the operations involved are described in detail below.
\subsection{Redirection Table (TR)}
The TR is a 416-byte memory indexed by the physical source and destination registers of an instruction. The number of TR rows equals the number of slice entries (i.e., 256), and they are also arranged in register windows. In fact, the TR completely decouples the register windows from the slice to maximize the chance of finding a redirection. In other words, redirections allow the registers of a wavefront to be dynamically assigned to any available slice entry.
The contents of a TR row are updated by the USR when a new redirection is required. Each row contains the entry ($entrada_{tr}$) where a register resides in the slice. For a compressed register, the $blq_{tr}$ bits indicate in which block within the entry the register is located. These bits are also used at the slice read ports ($blq_{tr \; fnt_{i}}$ bits) to send a source block to the next stage in a single cycle. Likewise, when the status of a destination register remains compressed (i.e., no new redirection from the USR is required), the $blq_{tr}$ bits of that register drive the write port ($blq_{tr \; dest}$ bits) to store the compressed result from the SIMD unit in the proper slice block in a single cycle. In the case of an uncompressed register, the $blq_{tr}$ bits are not used to drive the slice ports. Instead, all the blocks of the register are accessed in four consecutive cycles.
The valid ($v$) and compression ($c_{tr}$) bits define the status of a register, encoding a valid and compressed register with a logic `1' in the bits. The USR uses these bits to obtain the redirection of a destination register. In addition, the $c_{tr}$ bit drives the 2:1 muxes of the decompression stage to select between uncompressed blocks from a read port and decompressed blocks from a Des unit.
Note that if no redirection can be found for a compressed or uncompressed register, a backup redirection is assigned to the register (\emph{spill}). That is, an additional memory structure stores the register contents. In this work, instead of polluting or reducing the effective storage capacity of the memory hierarchy~\cite{Tan2016}, the Local Data Share (LDS) cache~\cite{amdwhite} is exploited as backup storage\footnote{The LDS is shared among the SIMD units of a CU. For GPGPU applications, the use of the LDS is up to the programmer. Applications can load or store data in the LDS to amplify cache bandwidth, avoid polluting the memory hierarchy with \emph{scatter} and \emph{gather} operations, or perform atomic operations at the \emph{work-group} level~\cite{amdwhite}.}.
The $m$ bit indicates whether the redirection of a register is located in the backup store ($m=1$) or in the slice ($m=0$). If $m=1$, instead of accessing the slice, the LDS cache is accessed to obtain the requested data. To this end, the LDS is split into two halves, one for the regular data of the LDS itself and the other for the backup registers. These registers are organized as a contiguous array in the LDS. In the TR, when $m=1$, the $entrada_{tr}$ bits refer to an offset from the base address of the backup partition to the address where the register is located.
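Putting the fields together, a TR row and the lookup it supports can be sketched as follows (field names mirror the text; the code is a behavioral model, not the hardware):
\begin{verbatim}
from dataclasses import dataclass

# Minimal sketch of one TR row: v (valid), c (compressed),
# m (spilled to the LDS backup partition), the slice entry
# (or LDS offset when m = 1), and the block within the entry.

@dataclass
class TRRow:
    v: bool = False
    c: bool = False
    m: bool = False
    entry: int = 0
    blk: int = 0

def lookup(tr, phys_reg):
    row = tr[phys_reg]
    assert row.v, "read of an unwritten register"
    if row.m:
        return ("LDS", row.entry)             # backup access
    if row.c:
        return ("slice", row.entry, row.blk)  # one block
    return ("slice", row.entry, None)         # four blocks
\end{verbatim}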
\subsection{(De)compression Units (Com/Des)}
According to the number of slice ports, RRCD requires one Com unit and two Des units. Unlike the TR, the latency of these units forces the pipeline depth to be extended by two additional stages.
The input of a Des unit comes from a read port and refers to a block containing the compressed data of a source register. In these cases, the unit unrolls the four blocks of the register and sends them one after another to the SIMD unit in successive cycles. In contrast, blocks coming from uncompressed registers simply bypass the Des units.
The Com unit receives one uncompressed block from the SIMD unit in every cycle. When this unit receives the first block of a register, it determines the compression status of the block and notifies the USR of that status through the $c_{compr}$ bit, set to `1' or `0' when the contents are compressed or uncompressed, respectively. If $c_{compr}=1$, the compressor also sends the compressed data to the write port; otherwise, the four blocks of a register are sent to that port. The $c_{compr}$ bit also drives a 2:1 mux to send the compressed or uncompressed data to the write port.
The compression of a register is speculative, since it is unknown whether the register can be compressed until the fourth block is examined. For this reason, we propose the use of a Destination Register Buffer (BRD), as large as a slice entry, which stores the four blocks from the SIMD unit while the compressibility of the current register is being determined. When compression is not possible, the USR is notified by flipping the $c_{compr}$ bit, and the pipeline stalls for four cycles until the complete register in the BRD is written to the corresponding slice entry.
\subsection{Redirection Selection Unit (USR)}
The USR consists of a $256\times 4$ bitmap referring to every slice block and two priority encoders with 1024 and 256 inputs, respectively, to select faulty and reliable entries. The bitmap is preloaded with the fault map of the chosen reliability scenario and is updated throughout the execution of an application with the occupied and released redirections. Of course, faulty blocks are permanently marked as occupied in the bitmap and cannot be used. According to the state of the bitmap, the 1024-input encoder selects a free block of a faulty entry to redirect a compressed register, whereas the 256-input encoder selects a reliable register entry, that is, four free blocks, to redirect an incompressible register. The USR chooses the type of redirection according to the compression status of the destination register ($c_{compr}$ bits). The remaining inputs of the USR are the TR contents of the destination register.
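The selection step resolved by the two encoders can be summarized with the following minimal sketch (sequential code for what the priority encoders compute combinationally; \texttt{bitmap} marks busy blocks and \texttt{fault\_map} comes from Section~\ref{rel}):
\begin{verbatim}
# Minimal sketch of the USR selection. Faulty blocks are
# permanently busy in bitmap, so they are never selected.

def select_redirection(bitmap, fault_map, compressed):
    if compressed:
        # 1024-input encoder: a free block, preferring
        # faulty entries so reliable entries are kept for
        # uncompressed registers.
        for prefer_faulty in (True, False):
            for e in range(256):
                if any(fault_map[e]) == prefer_faulty:
                    for b in range(4):
                        if not bitmap[e][b]:
                            return ("block", e, b)
    else:
        # 256-input encoder: a reliable entry whose four
        # blocks are all free.
        for e in range(256):
            if not any(fault_map[e]) and not any(bitmap[e]):
                return ("entry", e)
    return None  # no candidate: spill to the LDS partition
\end{verbatim}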
\begin{table*}[t!]
\renewcommand{\baselinestretch}{1}
\small
\centering
\caption{Timing, energy, power, and area for a 28 nm fabrication node and a 1 GHz clock frequency. N/A: not applicable.}
\begin{tabular}{|c|c|c|c|c||c|c||c|c|} \cline{2-9}
\multicolumn{1}{c|}{} & Slice & Slice & Slice & Slice & Com & Des & TR & USR\\
\multicolumn{1}{c|}{} & @Conv & @Comm & @Clust & @Scatt & & & & \\\hline\hline
Access time (ns) & \multicolumn{4}{c||}{0.85} & 0.95 & 0.95 & 0.09 & 0.11 \\\hline
Read energy (pJ) & 247.38 & 84.38 & 84.90 & 68.25 & N/A & 0.62 & 0.54 & N/A\\\hline
Write energy (pJ) & 302.23 & 97.68 & 99.76 & 78.33 & 0.72 & N/A & 0.50 & 0.22\\\hline
Static power (mW) & 58.58 & 30.79 & 35.18 & 27.73 & 6.67 & 7.14 & 0.41 & 15.41\\\hline
Area ($mm^{2}$) & \multicolumn{4}{c||}{0.680} & 0.005 & 0.007 & 0.005 & 0.014\\\hline
\end{tabular}
\label{tvalues}
\end{table*}
For every instruction with a destination register, the USR lets one of the encoders preemptively obtain a redirection while the instruction traverses the pipeline. This preemptive redirection is complementary to the current register status. For instance, if the current redirection refers to a reliable entry ($c_{tr}=0$), the USR preemptively selects a reliable block of a faulty entry, assuming that the compression status will change. In this way, when the first block of the register reaches the WB stage, the USR compares the $c_{tr}$ and $c_{compr}$ bits and drives the write port accordingly ($rsel$ bit), selecting either the new redirection ($entrada_{usr \; {dest}}$ and $blq_{usr \; {dest}}$) or the previous redirection from the TR.
Preemptive redirections make it possible to place the USR in the WB stage without increasing the pipeline depth. If a register is written for the first time ($v_{dest}=0$), the USR lets both encoders obtain preemptive redirections. Once the instruction leaves the pipeline, the USR releases the unused redirection and updates the TR if necessary.
In the case of a compression misprediction, the USR selects the redirection corresponding to a reliable entry, and the uncompressed blocks of the BRD are written to the slice one after another ($misp=1$).
Also, when an instruction writes to a backup register ($m_{dest}=1$), the USR tries to find a new redirection in the slice for the register in order to mitigate the latency penalties of accessing the LDS, obtaining two preemptive redirections just as in the first write to a register. On the other hand, when no new redirection can be found in the slice, the USR sets the $m_{dest}$ bit to `1' and updates $entrada_{tr \; {dest}}$ with the offset of the LDS partition where the register is allocated.
Finally, when a wavefront releases its register window upon finishing execution, the USR accesses the wavefront's window in the TR, invalidating all the rows ($v=0$) of the wavefront and releasing all the corresponding redirections in the USR bitmap.
\subsection{Delay, Energy, Power, and Area}
\label{over}
This section estimates the delay, energy, power, and area of the main RRCD components. Memory structures such as the slice and the TR have been modeled with CACTI 7.0~\cite{cactip}, whereas the combinational logic and flip-flops of the Com/Des units and the USR have been synthesized with Synopsys Design Compiler and simulated with Mentor Graphics ModelSim. The library corresponds to a low-$V_{th}$ 28 nm technology available to academia. The 1 GHz clock frequency of the AMD GCN HD 7770 GPU is assumed; this is the GPU considered in all the experiments.
Table~\ref{tvalues} shows the results. The access time and dynamic energy (reads and writes) of the slice refer to the access to a 64-byte block. For the Com/Des units, TR, and USR, these parameters refer to the evaluation of a block, the access to an entry, and the selection of a new redirection, respectively. However, in the case of the USR, the access time refers exclusively to sending the new redirection to the write port in its own pipeline stage, since preemptive redirections are obtained in earlier stages. Note that the dynamic consumption is not applicable to some RRCD components because they do not take part in a read or write access. The slice results are shown for the three reliability scenarios studied plus a conventional scenario (Conv) where the GPU works at nominal $V_{dd}$ to avoid faults. Like the conventional GPU, all the RRCD components are kept at nominal $V_{dd}$ for the same reason.
The slice access time is assumed to remain constant regardless of the $V_{dd}$ value. However, lowering $V_{dd}$ can increase the switching delay of the transistors~\cite{energy2}. In this regard, we measured the average performance loss when pipelining the slice access and increasing the latency by 1 to 3 additional cycles. Similarly to~\cite{wctc,sttcomp}, the performance impact ranges from 0.6 to 1.5\%.
These results indicate that extending the pipeline depth has a relatively low impact on performance, particularly if there is enough thread-level parallelism among the wavefronts assigned to the slice.
The dynamic energy and static power of the slice decrease with $V_{dd}$. The largest reductions are observed in dynamic energy, since it grows quadratically with $V_{dd}$.
Compared with the slice, the RRCD components require much less energy, power, and area. Power is the parameter with the largest relative overhead: note that the power of all the RRCD components amounts to 36.77 mW, almost two thirds of the slice power when operating at nominal $V_{dd}$. On the other hand, the estimated aggregate area of all the proposed components is 0.038 $mm^2$, which corresponds to 5.6\% of the slice.
Recall that the access time of the Com/Des units (0.95 ns) forces the pipeline to be extended by two additional stages. In contrast, the access time of both the TR (0.09 ns) and the USR (0.11 ns) is small enough to fit them within the original translation and writeback stages. Moreover, the impact of the additional 2:1 muxes is minimal, since the delay of this component is 13 ps according to Synopsys DC Ultra.
\section{Experimental Evaluation}
\label{exp}
\begin{table}[t!]
\renewcommand{\baselinestretch}{1}
\small
\centering
\caption{GPU configuration and memory hierarchy.\label{param}}
{
\begin{tabular}{|l|l|} \hline
Clock frequency & 1 GHz \\
CUs & 10 \\
Slice & 64 KB, 4/CU, 4-1-1-1 cycles/instr. \\
Max. WFs/CU & 16 \\
Threads/WF & 64\\
\hline\hline
All caches & LRU, 64 B/line \\
Scalar L1 caches & 16 KB, 4-way, 1/CU, 1 cycle \\
Texture L1 caches & 16 KB, 4-way, 1/CU, 1 cycle \\
LDS caches & 64 KB scratchpad, 1/CU, 1 cycle \\
2$\times$ L2 caches & 128 KB, 16-way/module, 10 cycles \\
Main memory & 2 channels/L2 module, 100 cycles \\
\hline
\end{tabular}}
\end{table}
RRCD has been modeled with the cycle-accurate Multi2Sim simulator~\cite{m2sgpu}. The results include the execution time of the applications and additional processor statistics required to estimate energy consumption. Total energy has been computed by combining Multi2Sim statistics with energy numbers from CACTI and Synopsys. Table~\ref{param} shows the configuration parameters and memory hierarchy of the modeled GPU. All the benchmarks run to completion.
\subsection{Performance Impact}
\label{expperf}
To better understand the main sources of performance degradation, slice writes are first classified into regular accesses, writes requiring a new redirection, and writes triggering an access to the LDS due to backup registers. Then, the impact on system performance is analyzed.
\subsubsection{\emph{Slice Write Breakdown}}
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\columnwidth]{./figures/patches_spa.eps}
\caption{Percentage of register writes requiring a regular access with no redirection, a redirection to a reliable or faulty entry, or additional LDS accesses.}
\label{exppatch}
\end{figure}
Figure~\ref{exppatch} illustrates a breakdown of the write types in the slice. Note that the first write of a register is always counted as a redirection to a reliable or faulty entry, or as an LDS access.
The results show that, in applications such as \emph{QRandS} and \emph{MatrixM}, the percentage of regular writes with no redirection reaches 90\%. This percentage is 70\% on average, which means that most registers do not change their compression status during their entire lifetime. In other words, once RRCD selects an entry to which a register is redirected, that entry usually holds the same logical register for the entire execution of a wavefront. The fact that the results are fairly homogeneous for a given application across the different reliability scenarios also confirms this reasoning.
RRCD provides enough entries to redirect registers in all the reliability scenarios studied, since writes to the LDS are only noticeable in \emph{BlackS}, where the percentage ranges between 1 and 2\%. The combination of two key factors causes the presence of LDS writes for this application. First, the percentage of compressed registers is quite low, which means that a large number of faulty entries cannot be exploited for redirection purposes. Second, register file utilization is very high (see Figure~\ref{motfig}), which limits the opportunities to find a redirectable entry.
\subsubsection{\emph{System Performance Impact}}
\label{systperf}
Figure~\ref{slowd} illustrates the system performance degradation of the RRCD technique with respect to a conventional register file operating at nominal voltage. Note that, apart from the additional LDS accesses, the increased pipeline depth as well as mispredictions in the compression operations can also affect performance. In this regard, the prediction of the compression mechanism is very accurate, since the percentage of writes with an incorrect speculation is 2.4\% on average.
The performance impact varies widely across applications. \emph{BlackS} shows the largest performance impact, since this benchmark suffers from all three of the effects mentioned above. Even so, the performance loss does not exceed 4.5\% with respect to the conventional design. In \emph{DCT}, \emph{Histog}, and \emph{QRandS}, the performance impact is fairly uniform regardless of the reliability scenario. These losses are attributed to the combined effect of misspeculation and the additional pipeline stages. The remaining applications are mainly affected by the longer pipeline. However, the results confirm that the scheduler is able to frequently dispatch independent instructions from different wavefronts, masking the performance effects of a deeper pipeline.
\begin{figure}[t!]
\centering
\includegraphics[width=0.99\columnwidth]{./figures/slowdownred_spa.eps}
\caption{Performance loss of RRCD with respect to a conventional register file.
}
\label{slowd}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.9\textwidth]{./figures/energy_spa.eps}
\caption{Normalized energy consumption of RRCD and a voltage-noise smoothing scheme (Suav) with respect to a conventional register file.}
\label{energy}
\end{figure*}
In summary, the performance impact of the RRCD mechanism is on average 1.53, 1.60, and 1.76\% for \emph{Common}, \emph{Clustered}, and \emph{Scattered}, respectively, with respect to a conventional register file.
\subsection{Energy Savings}
\label{expenergy}
Figure~\ref{energy} shows the normalized energy consumption of the slice with respect to the conventional design. For comparison purposes, a voltage-noise smoothing scheme (Suav) operating at $V_{dd}=600$ mV is also included. Energy is split into static and dynamic costs. In turn, dynamic energy is classified into the cost of read and write operations in the slice. The energy overhead of mispredictions is added to the write category. The \emph{Com/Des} label refers to the static and dynamic consumption of the (de)compression units, whereas the static and dynamic consumption of the TR, USR, and BRD is accumulated in the \emph{Redirections} category. Finally, the dynamic energy overhead of LDS accesses due to backup registers is also quantified.
The contribution of the static cost to the total energy varies widely across applications. For instance, applications with relatively low register file usage show a smaller contribution of dynamic consumption, but they still accumulate static energy in every cycle. This is the case of \emph{RadixS}, where a large number of wavefronts are assigned during the execution of the application, but the number of slice accesses is relatively low. Since static consumption grows linearly with $V_{dd}$, Suav and RRCD significantly reduce this consumption compared to the conventional scheme. RRCD achieves such energy savings despite the static energy overhead due to a longer execution time compared to Conv and Suav. As expected, the \emph{Scattered} scenario, with the lowest $V_{dd}$, yields the largest static energy savings, followed by \emph{Common} and \emph{Clustered}.
Lowering $V_{dd}$ has a larger impact on the dynamic cost, since this parameter has a quadratic effect on this type of consumption. Furthermore, RRCD also contributes to further reducing the dynamic consumption compared to Conv and Suav. This is because, in a read/write access to a compressed register in the slice, only the block containing the compressed data is accessed instead of the entire register. Applications with high register file usage and high compression capability, such as \emph{QRandS} and \emph{SConv}, show large dynamic energy savings.
Recall that the RRCD components operate at nominal $V_{dd}$ to avoid faults. This imposes an energy overhead, but it does not prevent RRCD from greatly reducing the total energy consumption thanks to: i) an aggressive $V_{dd}$ reduction in the slice and ii) a mitigation of the dynamic cost of the slice when accessing compressed registers. Compared to Suav, only \emph{Histog} and \emph{RadixS} show higher consumption. This is mainly because these applications make relatively little use of the slice and offer few data compression opportunities.
Note that the cost of accessing the LDS backup partition is only slightly noticeable in \emph{BlackS}, confirming that the number of accesses to backup registers is much lower than the number of accesses to registers redirected within the slice itself.
In summary, the total energy savings of RRCD under the \emph{Common}, \emph{Clustered}, and \emph{Scattered} scenarios are 39, 43, and 47\%, respectively, with respect to the conventional scheme. Compared to Suav, these percentages are 10, 14, and 21\%.
\section{Related Work}
\label{related}
This section describes recent works that use data compression in the GPU register file, as well as register redirection techniques to tolerate permanent faults in this memory structure.
\subsection{Data Compression in Register Files}
Zhang \emph{et al}. implement a register file with spin-transfer torque magnetic RAM technology~\cite{sttcomp}. With the aim of reducing the dynamic energy and the long latencies of write operations, the authors use the BDI algorithm to compress registers.
Warped-Compression saves static energy in the register file by leveraging BDI compression~\cite{wctc}. For those registers identified as compressible, this approach keeps the compressed data in the least significant bits of the registers, while the cells holding the remaining bits, which contain no useful data, are switched off through power gating.
The EREER proposal removes adjacent duplicated components of a register, keeping the non-duplicated components and the control bits required to decompress the register in the least significant bits of the entry~\cite{ereer}. The cells storing the unused components are switched off to reduce energy consumption.
Power gating helps not only to reduce static consumption but also to mitigate the \emph{Negative Bias Temperature Instability} (NBTI) aging effect. Unlike Warped-Compression and EREER, RC+RAR is a technique that switches off entire registers, storing the compressed data in a small auxiliary memory that is NBTI-free by design~\cite{valerotc}.
\subsection{Redirection Techniques}
GR-Guard is a redirection technique that leverages reliable entries containing dead registers to store live registers from faulty entries, avoiding the use of those faulty entries~\cite{Tan2016}. A register entry stores dead data during the period from its last read until the next write operation. Since this information is unknown at run time, GR-Guard relies on the compiler and modifies the instruction set to identify these register entries at run time. Specifically, the instruction format includes an additional bit per operand, distinguishing whether the associated register entry is live or not. Unlike GR-Guard, the present work exploits neither the compiler nor modifications to the instruction set. RRCD relies on the observation that register contents can be easily compressed at run time, which allows faulty entries to be treated as redirection targets themselves, increasing the redirection opportunities with respect to GR-Guard.
iPatch tolerates faults in CPU L1 caches by exploiting the inherent replication of cache contents in other pipeline structures, such as the trace cache, the MSHRs, and the store queue~\cite{Ipatch}. The proposal stores faulty L1 cache lines in those pipeline structures. This technique requires modifications to processor components, including the handling of memory consistency in the structures used as backup, which complicates processor design and verification. Moreover, fault coverage is limited to the L1 caches due to the relatively small size of the backup structures.
\section{Conclusions}
\label{conclus}
The main goal of this work has been to ensure tolerance to permanent faults in a GPU register file operating below the safe supply voltage limit. To this end, this work has proposed a redirection technique, RRCD, which leverages the inherent data redundancy of GPGPU applications to compress register entries at run time. With compression enabled, faulty register entries become useful again because they can store multiple compressed registers in their remaining reliable cells, effectively improving the storage capacity of the register file.
Experimental results have shown that RRCD guarantees reliable operation of a register file with 39\% faulty entries. For this fault rate, energy consumption is reduced by 47\% and 21\% compared to a fault-free register file operating at nominal voltage and at the safe supply voltage limit, respectively, while the impact on performance and area remains below 2\% and 6\%.
\section*{Acknowledgements}
This work has been funded by Universidad de Zaragoza under grant JIUZ-2019-TEC-08, by MINECO/AEI/FEDER (EU) under grants TIN2016-76635-C2-1-R and PID2019-105660RB-C21, by Gobierno de Aragón (T58\_20R research group), and by FEDER 2014-2020 \emph{Construyendo Europa desde Aragón}.
\bibliographystyle{abbrv}
|
{
"timestamp": "2021-05-11T02:18:17",
"yymm": "2105",
"arxiv_id": "2105.03859",
"language": "es",
"url": "https://arxiv.org/abs/2105.03859"
}
|
\section{Introduction} \label{sec:intro}
Photoacoustic tomography (PAT) is a technique which detects optical contrast acoustically.
As photons travel through an absorbing medium like tissue, most, if not all, of the light is converted into heat.
This causes a temperature increase which induces a pressure increase through thermo-elastic expansion.
This pressure then dissipates through the tissue as a wideband acoustic signal.
An ultrasound transducer array records these signals which can be used to form an image.
This method has found several uses; one of the key advantages of PAT over purely optical methods is the increased
imaging depth within tissue, since acoustic signals scatter less than light.
A review of PAT imaging methods can be found in \cite{yao2013}.
In the particular imaging method that is the focus of this paper, the illumination is generally a diffuse beam, and a single ultrasound transducer or a transducer array is scanned to obtain an image.
This is illustrated in Fig.\ \ref{fig:pat-illustration}.
The resolution is limited by the focal volume of the transducer which is at least two orders of magnitude greater than the wavelength of light.
This resolution degrades as the imaging depth increases.
For this reason,
there is a need for techniques that can improve the imaging capability at increased depths.
\begin{figure}[ht!]
\centering
\includegraphics[width=.6\textwidth]{pat-illustration.pdf}
\caption{The PAT setting with different speckle illuminations.}
\label{fig:pat-illustration}
\end{figure}
When light enters a scattering medium like tissue, the photons are scattered from their initial paths and form random speckle patterns \cite{sebbah2001waves}.
By leveraging the recordings generated by many such random illumination patterns, the resolution can be improved in both microscopy and PAT \cite{chaigne2016, hojman2017a, murray2017, mudry2012b, min2013a}.
Our paper is inspired by the work done by Idier et al.\ \cite{idier2018} in the microscopy setting.
In that setting, the recorded image $\ybf$ can be modeled as
\begin{equation} \label{eq:idier-model}
\ybf = h * (\rhobf \odot \ebf) + \varepsilonbf,
\end{equation}
where $h$ is a point spread function (PSF), $\rhobf$ is the object, $\ebf$ is a random speckle, $\varepsilonbf$ is noise, $*$ denotes convolution, and $\odot$ denotes Hadamard (pointwise) product.
Under this model, they demonstrate that random speckle illumination and the second order moments of the speckle and noise can be used to improve resolution in certain regimes---without any assumptions on the sparsity of the imaged object.
In particular, they show that decreasing the size of the speckle patterns can lead to improved resolution.
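To make the model concrete, the following minimal one-dimensional simulation of (\ref{eq:idier-model}) generates a stack of speckle recordings; the Gaussian PSF, grid size, and exponential speckle statistics are illustrative choices rather than those of \cite{idier2018}.
\begin{verbatim}
import numpy as np

# Minimal 1-D sketch of y = h * (rho . e) + noise.
rng = np.random.default_rng(0)
N, K = 128, 200
x = np.arange(N)
psf = np.exp(-0.5 * ((x - N // 2) / 4.0) ** 2)
psf /= psf.sum()                    # normalized Gaussian PSF

rho = np.zeros(N)
rho[[40, 48, 90]] = 1.0             # a simple test object

def record(rho, sigma=0.01):
    e = rng.exponential(1.0, N)     # speckle, E[e] = 1
    y = np.convolve(psf, rho * e, mode="same")
    return y + sigma * rng.standard_normal(N)

ys = np.stack([record(rho) for _ in range(K)])
\end{verbatim}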
We make the following contributions in this paper:
\begin{itemize}
\item We provide evidence that the ideas in \cite{idier2018} can be extended to PAT, and that random speckle illumination combined with second order moment information can lead to enhanced resolution in PAT.
\item We represent the imaged object using Dirac delta expansion functions.
This representation allows us to use a coarser temporal discretization than e.g.\ spherical expansion functions, leading to faster computation and less memory usage.
\item We propose a simple algorithm for recovering the object from the empirical transducer recording covariance matrix.
Unlike the iterative algorithm in \cite{idier2018}, our method requires a small number of simple linear algebra steps and therefore runs much faster (minutes instead of hours).
\end{itemize}
\section{Related work} \label{sec:related-work}
In numerous recent works in PAT, the conventional resolution limits are surpassed via
methods using speckle illumination \cite{gateau2013, chaigne2016, hojman2017a, murray2017, liu2019a}, SOFI \cite{dertinger2009} inspired fluctuation imaging \cite{chaigne2017, vilov2020, vilov2020a}, and localization-based methods \cite{vilov2017, dean-ben2018}.
The class of methods \cite{gateau2013, chaigne2016, hojman2017a, murray2017, liu2019a} that has taken inspiration from blind structured illumination microscopy \cite{mudry2012b, min2013a} has achieved a resolution improvement by a factor of two.
These techniques illuminate the sample with unknown speckle patterns instead of a uniform beam.
This has the effect of frequency shifting the acoustic signals into a frequency band which is detectable by the transducer.
Thus, the grain size of the speckle and the transducer properties have a significant impact on the achievable resolution.
Previous approaches \cite{chaigne2016, murray2017}
estimate the object as the solution to a regularized optimization problem, solved via iterative methods.
The regularizer is chosen to take advantage of the fact that the object is usually sparse.
There has been other work \cite{yeh2017a} that also uses second order speckle information.
In this purely optical method, the illumination patterns are initially estimated after calculating a widefield low-resolution image.
The covariance of these recovered patterns is then used to reconstruct a covariance image that estimates the object.
Many optical imaging techniques extend to PAT, but there are key differences.
For example, PAT has a non-uniform point response, unlike optical microscopy, so the resolution differs along the axial and lateral directions. Moreover, the point response is uniform only within a small region of the imaging aperture, so it is desirable to have a general forward model that takes this into account,
in contrast to optical microscopy, where a point spread function can be defined for the entire field of view.
\section{Reconstruction via second order moments} \label{sec:method}
Suppose there are $M$ transducers, each recording a time series consisting of $T$ time points for each of $K$ different random speckle patterns.
Let $\rhobf \in \Rb^N$ be a vector representing the object we are trying to reconstruct.
For example, on a two-dimensional grid of size $n \times n$ we have $N = n^2$.
Moreover, let $\ebf \in \Rb^N$ be a vector describing the speckle pattern illuminating the object.
The recorded signal for a given speckle pattern $\ebf$ may now be modeled as
\begin{equation} \label{eq:model}
\ybf = \Abf (\rhobf \odot \ebf) + \varepsilonbf = \Abf \Rbf \ebf + \varepsilonbf,
\end{equation}
where $\Rbf \defeq \diag(\rhobf)$, $\ybf \in \Rb^{TM}$ contains the length $T$ time series recording for each transducer concatenated into a single vector, $\varepsilonbf \in \Rb^{TM}$ is a random noise vector, and $\Abf \in \Rb^{TM \times N}$ is a linear operator.
The form of $\Abf$ depends on how the imaged object is represented.
$\Abf$ can either be derived analytically from the photoacoustic wave equation and properties of the transducers \cite{wang2008, wang2011} or empirically by observing the recorded signal from a known point absorber \cite{egolf2018, vilov2020a}.
We use a variant of the forward operator derived in \cite{wang2011} which we describe in Section~\ref{sec:forward-model}.
However, our reconstruction method should work well with any reasonable forward model.
The goal of the reconstruction problem is to recover $\rhobf$.
Let $\ybf^{(1)}, \ldots, \ybf^{(K)}$ denote recorded signals corresponding to $K$ different speckle patterns.
A standard assumption is that the speckle pattern intensity, on average, is the same for each point of the object and that the noise is centered around zero.
Mathematically, these assumptions can be written as $\Eb[e_n] = \mu$ for each $n \in \{1,\ldots,N\}$ where $\mu$ is some fixed number, and $\Eb[\varepsilonbf] = \zerobf$.
Under these assumptions, $\bar{\ybf} \defeq K^{-1} \sum_{k=1}^K \ybf^{(k)} \approx \Abf \rhobf$ if the number of speckles $K$ is sufficiently large.
One may then estimate $\rhobf$ by solving the following least squares problem:
\begin{equation} \label{eq:rho-hat-1}
\hat{\rhobf}_1 = \argmin_{\rhobf \in \Rb^{N}} \| \Abf \rhobf - \bar{\ybf} \|_2^2 + \lambda \|\rhobf\|_2^2.
\end{equation}
The subscript on $\hat{\rhobf}_1$ indicates that this estimate uses assumptions on the mean, or first order moment, of the speckles and noise.
Similar assumptions are made in e.g.\ \cite{mudry2012b, murray2017}.
The added Tikhonov regularizer is necessary since the inversion is ill-posed.
Similarly to \cite{idier2018}, we make the additional assumption that we know the speckle covariance matrix $\Gammabf_{\ebf} \defeq \Eb[(\ebf - \mubf_{\ebf}) (\ebf - \mubf_{\ebf})^\top]$, where $\mubf_{\ebf} \defeq \Eb[\ebf]$, and the noise covariance matrix $\Gammabf_{\varepsilonbf} \defeq \Eb[\varepsilonbf \varepsilonbf^\top]$ (recall that $\Eb[\varepsilonbf] = \zerobf$).
This amounts to an assumption on the \emph{second order moments}.
Additionally, we assume that the random speckle pattern $\ebf$ and the noise $\varepsilonbf$ are independent.
It is then easy to show that the signal covariance matrix $\Gammabf_{\ybf}$ satisfies
\begin{equation} \label{eq:signal-covariance}
\Gammabf_{\ybf} \defeq \Eb[(\ybf - \mubf_{\ybf}) (\ybf - \mubf_{\ybf})^\top] = \Abf \Rbf \Gammabf_{\ebf} \Rbf \Abf^\top + \Gammabf_{\varepsilonbf},
\end{equation}
where $\mubf_{\ybf} \defeq \Eb[\ybf]$.
We compute the empirical covariance matrix $\hat{\Gammabf}_{\ybf}$ via
\begin{equation}
\hat{\Gammabf}_{\ybf} = \frac{1}{K} \sum_{k=1}^K \ybf^{(k)} \ybf^{(k) \top} - \hat{\mubf}_{\ybf} \hat{\mubf}_{\ybf}^\top, \;\;\;\; \hat{\mubf}_{\ybf} = \frac{1}{K} \sum_{k=1}^K \ybf^{(k)}.
\end{equation}
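A minimal NumPy sketch of this estimator, assuming the $K$ recordings are stacked as the rows of a $K \times TM$ array:
\begin{verbatim}
import numpy as np

def empirical_covariance(ys):
    # ys: (K, T*M) array whose rows are the vectorized recordings y^(k)
    K = ys.shape[0]
    mu_hat = ys.mean(axis=0)
    gamma_hat = ys.T @ ys / K - np.outer(mu_hat, mu_hat)
    return gamma_hat, mu_hat
\end{verbatim}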
After replacing the unknown $\Gammabf_{\ybf}$ in \eqref{eq:signal-covariance} with $\hat{\Gammabf}_{\ybf}$, we ``solve'' that equation for $\Rbf$ which contains the sought object $\rhobf$ on the diagonal.
Recovering $\Rbf$ from \eqref{eq:signal-covariance} is a nontrivial problem.
Idier et al.\ \cite{idier2018} propose using an iterative nonlinear conjugate gradient (CG) method to do this.
It requires inverting an estimate of $\Gammabf_{\ybf}$ which costs $O(N^3)$ \emph{per iteration}.
Nonlinear CG methods usually require hundreds of iterations, which makes the method very expensive.
In Section~\ref{sec:algorithm}, we provide details on our simple noniterative method for estimating $\Rbf$ from \eqref{eq:signal-covariance} which costs $O(N^3)$ \emph{in total}.
It only requires basic linear algebra computations which are easy to implement.
\subsection{The forward operator model} \label{sec:forward-model}
For the forward operator $\Abf$, we use a variant of the discrete-to-discrete operator proposed in \cite{wang2011} which incorporates a model for the acousto-electric impulse response (EIR) of ultrasound transducers.
We use the EIR model in equation (6) of \cite{liu2012}.
Applying $\Abf$ na\"{i}vely to a vector costs $O(TMN)$ operations.
However, $\Abf$ can be split into two parts, $\Abf = \Abf_\text{EIR} \Abf_0$, where $\Abf_0$ gives the transducer recordings without the EIR and $\Abf_\text{EIR}$ then applies a convolution with the EIR \cite{wang2011}.
The benefit of this approach is that $\Abf_0$ usually is sparse and $\Abf_\text{EIR}$ can be applied implicitly in time $O(TM \log (TM))$ by using the FFT and the convolution theorem.
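The following sketch illustrates the implicit application of the EIR convolution via the FFT; the array names \texttt{y0} and \texttt{eir} are hypothetical placeholders for the output of $\Abf_0$ and the sampled impulse response.
\begin{verbatim}
import numpy as np

def apply_eir(y0, eir):
    # y0: (T, M) output of A_0; eir: length-T impulse response
    T = y0.shape[0]
    L = 2*T                              # zero-pad to avoid circular wrap
    Y = np.fft.rfft(y0, n=L, axis=0)
    H = np.fft.rfft(eir, n=L)[:, None]
    return np.fft.irfft(Y*H, n=L, axis=0)[:T]
\end{verbatim}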
Our operator differs from that in \cite{wang2011} in the choice of expansion functions used to represent the object.
We may represent an object $f : \Rb^3 \rightarrow \Rb$ on grid points $\{\xbf^{(n)}\}_{n=1}^N \subset \Rb^3$ via
\begin{equation} \label{eq:f-approx}
f(\xbf) \approx \hat{f}(\xbf) \defeq \sum_{n = 1}^N \rho_n \phi_n(\xbf),
\end{equation}
where $\{\phi_n\}_{n=1}^N$ is a family of expansion functions and $\rhobf = (\rho_n)_{n=1}^N$ is a vector of coefficients.
Wang et al.\ \cite{wang2011} choose each $\phi_n$ to be a spherical expansion function centered at $\xbf^{(n)}$.
We found that this choice works poorly when $\Abf$ is split up into two separate operators $\Abf = \Abf_\text{EIR} \Abf_0$.
The reason is that the ``N'' shaped signal that results after computing $\Abf_0 \rhobf$ requires a very fine temporal grid (i.e., large $T$) for accurate representation.
To address this, we instead use expansion functions $\phi_n(\xbf) \defeq \delta(\xbf - \xbf^{(n)})$.
Define
\begin{equation} \label{eq:photoacoustics-model}
s(\xbf, t) \defeq \frac{\beta}{4 \pi C_p} \sum_{n=1}^N \frac{\rho_n}{\|\xbf - \xbf^{(n)}\|} \delta \Big(t - \frac{\|\xbf - \xbf^{(n)}\|}{c_0}\Big),
\end{equation}
where $\beta$ is the thermal coefficient of volume expansion, $C_p$ is the specific heat capacity of the medium at constant pressure, and $c_0$ is the speed of sound in the object and background medium.
The signal generated at position $\xbf$ by the approximation $\hat{f}$ in \eqref{eq:f-approx} at time $t$ when $\phi_n(\xbf) = \delta(\xbf - \xbf^{(n)})$ is then $\partial s(\xbf,t)/\partial t$.
Convolving with the EIR and using properties of the Dirac delta function, we get $\EIR * (\partial s /\partial t) = \EIR' * s$, where $\EIR' \defeq d \EIR / dt$.
We therefore split our forward operator into two parts $\Abf = \Abf_{\EIR'} \Abf_s$ where $\Abf_s$ transforms a discretized object $\rhobf$ to a discretized signal and $\Abf_{\EIR'}$ applies convolution with $\EIR'$.
$\Abf_s$ is sparse and $\Abf_{\EIR'}$ can be applied implicitly via the FFT, so $\Abf \rhobf$ can be computed efficiently.
Moreover, the representation $\Abf_s \rhobf$ performs well even on relatively coarse temporal grids.
\subsection{Reconstruction algorithm} \label{sec:algorithm}
Our reconstruction method is presented in Alg.~\ref{alg:recovery}.
After subtracting the noise covariance on line~\ref{line:sub-mean} and solving the systems on lines~\ref{line:solve-1} and \ref{line:solve-2}, $\Mbf_3$ approximates $\Rbf \Gammabf_{\ebf} \Rbf$.
After multiplying $\Mbf_3$ on each side by $\sqrt{\Gammabf_{\ebf}}$ on line~\ref{line:add-sqrt} and subsequently taking the square root in line~\ref{line:take-sqrt} (we discuss the symmetrization and projection steps below), $\Mbf_5$ approximates $\sqrt{\Gammabf_{\ebf}} \Rbf \sqrt{\Gammabf_{\ebf}}$.
After the two solves on lines~\ref{line:solve-3} and \ref{line:solve-4}, $\hat{\Rbf}$ approximates $\Rbf$.
Finally, on line~\ref{line:diag} the diagonal $\hat{\rhobf}$ estimating $\rhobf$ is extracted.
\begin{algorithm} \label{alg:recovery}
\DontPrintSemicolon
\SetAlgoNoLine
\KwIn{Estimate $\hat{\Gammabf}_{\ybf}$; known $\Gammabf_{\ebf}$, $\Gammabf_{\varepsilonbf}$, $\Abf$; constants $\lambda_1, \lambda_2$}
\KwOut{Object estimate $\hat{\rhobf}$}
$\Mbf_1 = \hat{\Gammabf}_{\ybf} - \Gammabf_{\varepsilonbf}$\label{line:sub-mean}\;
$\Mbf_2 = \argmin_{\Mbf} \| \Abf \Mbf - \Mbf_1 \|_\F^2 + \lambda_1 \|\Mbf\|_\F^2$\label{line:solve-1}\;
$\Mbf_3 = \argmin_{\Mbf} \| \Mbf \Abf^\top - \Mbf_2 \|_\F^2 + \lambda_1 \|\Mbf\|_\F^2$\label{line:solve-2}\;
$\Mbf_4 = \sqrt{\Gammabf_{\ebf}} \Mbf_3 \sqrt{\Gammabf_{\ebf}}$\label{line:add-sqrt}\;
$\Mbf_5 = \sqrt{\proj_{\Sb_+^N} (\sym(\Mbf_4))}$\label{line:take-sqrt} \tcp*{Compute via \eqref{eq:projection-computation}}
$\Mbf_6 = \argmin_{\Mbf} \| \sqrt{\Gammabf_{\ebf}} \Mbf - \Mbf_5 \|_\F^2 + \lambda_2 \| \Mbf \|_\F^2$\label{line:solve-3}\;
$\hat{\Rbf} = \argmin_{\Mbf} \| \Mbf \sqrt{\Gammabf_{\ebf}} - \Mbf_6 \|_\F^2 + \lambda_2 \| \Mbf \|_\F^2$\label{line:solve-4}\;
Set $\hat{\rhobf}$ to diagonal of $\hat{\Rbf}$ \label{line:diag}\;
\Return{$\hat{\rhobf}$}\;
\caption{Efficient reconstruction of $\hat{\rhobf}$ from \eqref{eq:signal-covariance}}
\end{algorithm}
Idier et al.\ \cite{idier2018} point out that since $\hat{\Gammabf}_{\ybf}$ is an empirical covariance matrix, it may not be positive semidefinite.
Consequently, the matrix $\Mbf_4$ may not be positive semidefinite and its square root may not exist.
Idier et al.\ therefore propose using a Kullback--Leibler divergence based dissimilarity measure between the empirical and true distributions, and then find an estimate $\hat{\rhobf}$ via a nonlinear CG method.
Additionally, due to the regularizers on lines \ref{line:solve-1} and \ref{line:solve-2}, $\Mbf_4$ may not be exactly symmetric.
We propose a very simple solution to address these challenges:
We symmetrize and then project $\Mbf_4$ onto the set of positive semidefinite $N \times N$ matrices before taking the square root on line~\ref{line:take-sqrt}.
The symmetrization can be done via $\sym(\Mbf_4) = (\Mbf_4 + \Mbf_4^\top)/2$.
Let $\Sb^N$ and $\Sb_+^N$ denote the symmetric and positive semidefinite matrices of size $N \times N$, respectively.
The projection operator $\proj_{\Sb_+^N} : \Sb^N \rightarrow \Sb_+^N$ is defined as
\begin{equation} \label{eq:projection}
\proj_{\Sb_+^N}(\Mbf) \defeq \min_{\Mbf' \in \Sb_+^N} \|\Mbf' - \Mbf\|_\F.
\end{equation}
This projection is easy to compute via
\begin{equation} \label{eq:projection-computation}
\proj_{\Sb_+^N}(\Mbf) = \Qbf \max(\Lambdabf, 0) \Qbf^\top,
\end{equation}
where $\Mbf = \Qbf \Lambdabf \Qbf^\top$ is the eigendecomposition of $\Mbf$, and the $\max(\cdot, 0)$ operator is applied elementwise.
The projection is the same if spectral norm is used instead of Frobenius norm in \eqref{eq:projection}; see Section~8.1.1 of \cite{boyd2004} for details.
The matrix $\proj_{\Sb_+^N}(\sym(\Mbf_4))$ is positive semidefinite and therefore guaranteed to have a square root; see Theorem~7.2.6 in \cite{horn2012} for details.
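The symmetrization, projection \eqref{eq:projection-computation}, and square root on line~\ref{line:take-sqrt} can all be obtained from a single eigendecomposition, as in this sketch:
\begin{verbatim}
import numpy as np

def sqrt_psd_projection(M4):
    S = (M4 + M4.T)/2                 # sym(M4)
    lam, Q = np.linalg.eigh(S)        # S = Q diag(lam) Q^T
    lam = np.maximum(lam, 0.0)        # projection onto the PSD cone
    return (Q*np.sqrt(lam)) @ Q.T     # Q diag(sqrt(lam)) Q^T
\end{verbatim}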
In practice, we find that the square root of $\Mbf_4$ usually exists, in which case the symmetrization and projection steps can be skipped.
We found that the Tikhonov regularization in lines~\ref{line:solve-1}, \ref{line:solve-2}, \ref{line:solve-3} and \ref{line:solve-4} of Alg.\ \ref{alg:recovery} with a careful choice of $\lambda_1$ and $\lambda_2$ is essential for the reconstruction.
The regularization terms can easily be incorporated into the design matrix.
For example, the problem in line~\ref{line:solve-1} can be written as
\begin{equation} \label{eq:large-system}
\Mbf_2 = \argmin_{\Mbf} \left\| \begin{bmatrix}\Abf \\ \sqrt{\lambda_1} \Ibf \end{bmatrix} \Mbf - \begin{bmatrix} \Mbf_1 \\ \zerobf \end{bmatrix} \right\|_\F^2.
\end{equation}
If the problem in line~\ref{line:solve-2} is transposed and rewritten in a similar fashion, it will have the same design matrix.
The leading order cost of solving these problems is decomposing the design matrix (e.g.\ via the QR decomposition; see Section~5.3.3 of \cite{golub2013} for details), and this therefore only has to be done once for both lines.
In fact, since $\Abf$ remains fixed for a certain imaging setup, the decomposition only needs to be computed once for that setup.
Similar cost savings are possible for the lines~\ref{line:solve-3} and \ref{line:solve-4}.
The leading order cost of our algorithm is decomposing the design matrix in \eqref{eq:large-system}, which costs $O(\max(TM,N) N^2)$.
If this has been done ahead of time for the particular imaging setup, the leading order cost is reduced to $O(N^3)$.
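The following sketch illustrates this reuse for the solves on lines~\ref{line:solve-1} and \ref{line:solve-2}; it is a schematic NumPy implementation under the assumptions above, not necessarily the exact code used in our experiments.
\begin{verbatim}
import numpy as np

def tikhonov_factor(A, lam):
    # factor the stacked design matrix [A; sqrt(lam) I] once per setup
    N = A.shape[1]
    D = np.vstack([A, np.sqrt(lam)*np.eye(N)])
    return np.linalg.qr(D)

def tikhonov_solve(Q, R, B):
    # minimize ||A M - B||_F^2 + lam ||M||_F^2, column by column
    N = R.shape[1]
    rhs = Q.T @ np.vstack([B, np.zeros((N, B.shape[1]))])
    return np.linalg.solve(R, rhs)

# Q, R = tikhonov_factor(A, lam1)
# M2 = tikhonov_solve(Q, R, M1)        # line solve-1
# M3 = tikhonov_solve(Q, R, M2.T).T    # line solve-2, transposed
\end{verbatim}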
\section{Experiments} \label{sec:experiments}
We compare our method to the first order reconstruction estimate $\hat{\rhobf}_1$ in \eqref{eq:rho-hat-1}.
Additionally, we also compare to time reversal image reconstruction in k-Wave \cite{treeby2010} based on the average signal $\bar{\ybf}$.
In our experiments, we use the star-shaped object of size 160 \textmu m by 160 \textmu m shown in Fig.~\ref{fig:square-array}~(a).
In order to avoid inverse crime, we use different grids to represent the object when we generate the data and when we do the reconstruction.
For data generation we use a 101 by 101 grid ($N=101^2$) and for reconstruction we use an 81 by 81 grid ($N=81^2$).
We use $M=64$ transducers arranged in two different geometries shown in Fig.~\ref{fig:transducer-geometries}.
In the first geometry, the transducers are arranged into a square array positioned at a distance of 30 \textmu m above the object.
In the second geometry, the transducers are positioned in a circle of radius 160 \textmu m around the object and in the same plane as the object.
The transducers have a center frequency $f_0 = \text{50 MHz}$ and full width at half-maximum $\text{FWHM} = \text{25 MHz}$.
Fig.~\ref{fig:transducer-geometries} shows examples of transducer recordings.
The recordings are 199 ns long and discretized into $T=200$ time points.
In our simulations, we add i.i.d.\ Gaussian noise with standard deviation equal to 1\% of the maximum signal amplitude to all recordings.
Consequently, $\Gammabf_{\varepsilonbf}$ is an identity matrix rescaled by the noise variance.
For each experiment we generate $K=1000$ random speckles using a discretized variant of a speckle model from \cite{dainty2013}.
From this model, we also compute the speckle covariance matrix $\Gammabf_{\ebf}$.
We use speckles of three different sizes in the experiments to demonstrate how finer speckles lead to resolution improvement.
Examples of speckles of each size are given in Fig.\ \ref{fig:speckle}.
We choose the parameters in \eqref{eq:photoacoustics-model} to correspond to an experiment in water.
\begin{figure}[ht!]
\centering
\includegraphics[width=.6\textwidth]{{drawing_2}.pdf}
\caption{Left: The two transducer geometries.
Right: Examples of recordings from four different transducers in the circular geometry.}
\label{fig:transducer-geometries}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=.6\textwidth]{speckle-fig.pdf}
\caption{Examples of speckle patterns of different size.}
\label{fig:speckle}
\end{figure}
Figs.~\ref{fig:square-array} and \ref{fig:circ-array} show the experiment results for the square and circular transducer array geometries, respectively.
Since both $\hat{\rhobf}_1$ and the time reversal solution are computed from the mean signal, they are not impacted by the speckle size.
In both figures, subplots (b) and (c) show the reconstructions by time reversal in k-Wave and via the first order method in \eqref{eq:rho-hat-1}, respectively.
Subplots (d)--(f) show how the resolution for reconstruction via Alg.\ \ref{alg:recovery} improves as the speckle size is reduced.
The speckle sizes are those specified in Fig.\ \ref{fig:speckle}.
These experiments indicate that combining random speckle illumination and second order statistics allows us to outperform the first order methods.
In particular, finer speckles allow us to recover finer details.
We found that using speckles finer than those shown in Fig.\ \ref{fig:speckle} (c) did not lead to any further improvement in resolution.
\begin{figure}[ht!]
\centering
\includegraphics[width=.6\textwidth]{surface-array-fig.pdf}
\caption{Reconstruction with the \emph{square} transducer array.
(a) Original object.
(b) Reconstruction using time reversal in k-Wave.
(c) Object reconstructed via \eqref{eq:rho-hat-1}.
(d)--(f) Object reconstructed using Alg.\ \ref{alg:recovery} for the different speckle sizes illustrated in Fig.\ \ref{fig:speckle}.}
\label{fig:square-array}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[width=.6\textwidth]{circ-array-fig.pdf}
\caption{Reconstruction with the \emph{circular} transducer array.
The different subplot descriptions are the same as in Fig.\ \ref{fig:square-array}.}
\label{fig:circ-array}
\end{figure}
Our algorithm also works well in the original microscopy setting considered in \cite{idier2018}, in which case $\Abf$ just represents convolution with the PSF $h$ in \eqref{eq:idier-model}.
Indeed, we are able to achieve the same results as in \cite{idier2018} by using our algorithm at a fraction of the cost.
Due to space constraints, we do not include those results here.
\section{Conclusion} \label{sec:conclusion}
We have shown in experiments that the ideas by Idier et al.\ \cite{idier2018} in the microscopy setting can be extended to the more general PAT setting.
We also proposed a simple algorithm for computing the object which is much faster to run and easier to implement than the iterative method in \cite{idier2018}.
Despite the speedup achieved by our algorithm, it still remains quite expensive at a cost of $O(N^3)$ where $N$ is the number of pixels.
Another factor that will impact the performance of the method is how well we are able to model or estimate the true speckle covariance $\Gammabf_{\ebf}$.
Addressing these issues is an interesting direction for future research.
Other interesting directions include trying to reconstruct three-dimensional objects, and modifying Alg.~\ref{alg:recovery} to leverage prior knowledge about object sparsity.
It may also be possible to combine the least squares solves in Alg.~\ref{alg:recovery} and use e.g.\ LSQR \cite{paige1982} to achieve further speedups.
\section{Acknowledgment}
This material is based upon work supported by the National Science Foundation under Grant No.\ 1810314.
\bibliographystyle{plain}
\section{Introduction}
The backflow effect is a striking universal quantum phenomenon revealed by Allcock \cite{alc} a long time ago (for a recent brief review see \cite{intro}) while considering the arrival time problem in quantum theory. The arrival time connection was further pursued in \cite{muga}. It was explicitly shown that a free particle with a wave function centred in $x < 0$, constructed out of only positive momenta, can still possess a non-vanishing probability of remaining in $x < 0$, and furthermore this probability can increase
with time, albeit for a finite period of time. This clearly indicates that the quantum-mechanical current at the origin can be negative, with the
probability flowing {\it in the opposite direction to the momenta}. Subsequently, various aspects of the effect were studied by Bracken and Melloy \cite{bm}, who revealed two remarkable features of Quantum Back Flow (QBF): (i) the total amount of QBF is bounded (by a dimensionless number computed
numerically to be approximately 0.04) and (ii) the bound is a universal numerical fraction, independent of the time duration, particle mass and $\hbar$.{\footnote{This has led to claims that this number should be considered as a new independent quantum number.}} This indicates that there is a subtlety involved in taking the classical limit, since the effect apparently survives in the naive $\hbar \rightarrow 0$ limit. Subsequently, the proper limiting procedure showing the vanishing of the effect in the classical limit appeared in \cite{years}. A numerical study \cite{penz} revealed the structure of the wave function, and a corresponding approximate analytic form appeared in
\cite{intro,years,sol} that yields a modest backflow. Its relativistic extension was studied in \cite{rel,ash,qdir,qpot} and its interpretation in terms of pilot wave theory was given in \cite{relpi}. Furthermore, QBF for a Dirac particle
\cite{qdir} and the effect of a linear potential \cite{qpot} were investigated. QBF has been related to the negativity of Wigner function \cite{wig} thus showing its non-classicality. Connection between QBF and superoscillations was noted in \cite{ber}.
Despite these theoretical developments, to date there is very little experimental evidence of QBF. There are proposals for experimental observation of QBF with Bose-Einstein condensates \cite{29, 30}. Recently, QBF has been experimentally observed in an optical counterpart \cite{31}. A proposal, more conducive to experiment, appeared in \cite{mil}, where the solution consists of both
positive and negative momenta. The recent work \cite{expt} compares the above scheme with the conventional one where the particle solution is made up of positive momenta states only. After these brief introductory remarks, let us come to our main concern - possible relevance of QBF across a Black Hole (BH) horizon.
Classical BH physics was revolutionised by the work of Hawking \cite{haw,haw2}, who demonstrated that a BH can radiate ($\sim$ Hawking radiation) with a characteristic temperature ($\sim$ Hawking temperature), ushering in BH thermodynamics. This strengthened the BH area-entropy connection, an idea proposed by Bekenstein \cite{bek}. However, this brought in a new puzzle, in the form of the information paradox. In a semi-classical framework, BHs can radiate, similar to a black body, with a temperature that depends only on a few macroscopic BH parameters. The process of black hole formation and subsequent evaporation is then a non-unitary process, in which a mixed state results from a pure state. Loosely speaking, after the BH has completely evaporated what one is left with is only black body radiation, and so the information about the content of the BH is lost. Recent works are revealing how a BH can evaporate without losing unitarity. To keep unitarity intact the BH entropy has to follow the Page curve \cite{page}, and only recently has it become clear \cite{info,info2,info3} how quantum entanglement can remove the information paradox.
It is well known that the BH horizon is responsible for the above features: classically, once matter crosses the BH horizon it inexorably moves towards the BH singularity, and nothing comes out of the horizon, although in a quantum mechanical framework Hawking radiation can escape. However, the Hawking radiation, being thermal in nature, does not carry any information about the BH interior, hence the information paradox. Curiously enough, since the singularity at the horizon is a coordinate singularity, removable by a coordinate transformation (unlike the singularity at the BH origin), a material particle will smoothly cross the horizon on its way towards the centre.
In this perspective we tentatively put forward the question: can QBF play a relevant role in the above topical issues? Specifically, if QBF across the BH horizon is non-zero (which we will exhibit), this is significant since the process is unitary, as conventional quantum mechanics has been used. However, there are open questions regarding QBF: can the backflow be directly interpreted as particles, or at least can it transfer information across the horizon from inside, and if so, can there be some form of correlation between the ingoing wave and its QBF component?
In the present work we consider a simplified scenario, partly due to computational convenience. In particular, following earlier works \cite{bm}, we study QBF pertaining to a superposition of two solutions of the Schr\"odinger equation near the BH horizon. (Indeed, it would have been more realistic to consider a wave packet with only ingoing momenta. We leave this problem for a future publication.) Since the problem is time dependent but {\it stationary}, with no fall off in time, we cannot use the conventional quantitative measures for QBF and can only establish its presence conclusively. We believe that these technical problems (such as wave packet construction) can be addressed straightforwardly in a more detailed analysis.
The paper is organized as follows: in Sec. \ref{sec:schroprob} we set up the Schr\"odinger problem of the particle by constructing the Hamiltonian in a BH background. Sec. \ref{sec:psixt} provides an explicit structure of the wave function for the QBF investigation. Sec. \ref{sec:observables} gives a general discussion of the QBF observables for our case, and in Sec. \ref{sec:results} we discuss our results. Sec. \ref{sec:conc} is devoted to discussion and future problems.
\section{Setting up the Schr\"odinger equation}\label{sec:schroprob}
In order to study QBF we need to cast the particle-in-BH-background system in a Schr\"odinger equation framework. We exploit the formalism used in \cite{hertz} to write the Hamiltonian in a curved background, where the BH metric in Cartesian coordinates reads,
\begin{equation}
g^{00}=\frac{1}{U};~~g^{ij}=-\left[\eta^{ij}+(U-1)\frac{x^ix^j}{r^2}\right];~~r^2=(x_1)^2+(x_2)^2+(x_3)^2
\label{c1}
\end{equation}
\begin{equation}
g_{00}=U;~~g_{ij}=-\left[\eta_{ij}+(\frac{1}{U}-1)\frac{x_ix_j}{r^2}\right]
\label{c2}
\end{equation}
with $\sqrt{-\det g_{\mu\nu}}=1$ and $U=(1-\lambda/r)$ for the Schwarzschild metric, where $\lambda=2GM/c^2$ is the Schwarzschild radius. Following \cite{hertz}, $H$ is given by
\begin{equation}
H=\left[\frac{1}{\sqrt{-g}g^{\eta\eta}}{\cal A}+\sqrt{-g}g^{ij}\partial_i\partial_j\frac{1}{{\cal A}}+m^2\sqrt{-g}\frac{1}{{\cal A}}\right]
=U{\cal A}-\left[\eta^{ij}+(U-1)\frac{x^ix^j}{r^2}\right]\partial_i\partial_j\frac{1}{\cal A}+ m^2\frac{1}{\cal A},
\label{c3}
\end{equation}
where ${\cal A} =\sqrt{-\nabla^2+m^2}=\sqrt{-\eta_{ij}\partial^i\partial^j+m^2} $. We reduce $H$ to the form
\begin{equation}
H=H_0+V;~H_0={\cal A},~~V=-\frac{\lambda}{r}{\cal A}-\nabla^2 \frac{1}{\cal A}+\frac{\lambda}{r^3}x^ix^j\partial_i\partial_j \frac{1}{\cal A} +m^2\frac{1}{\cal A}
\label{c5}
\end{equation}
leading to the Schr\"odinger equation, $i\partial_t \psi -H_0\psi = -V\psi$, which reads in detail
\begin{equation}
i\partial_t \psi -{\cal A}\psi =-\left[-\frac{\lambda}{r}{\cal A}-\nabla^2 \frac{1}{\cal A}+\frac{\lambda}{r^3}x^ix^j\partial_i\partial_j \frac{1}{\cal A} +m^2\frac{1}{\cal A}\right]\psi .
\label{c6}
\end{equation}
Putting back the fundamental constants, the above equation takes the form:
\begin{equation}
\begin{split}
&i\hbar\partial_t \psi -\sqrt{-c^2\hbar^2\nabla^2 +c^4m^2}\psi \\
=-[\frac{2GM}{c^2r}&\sqrt{-c^2\hbar^2\nabla^2+c^4m^2}
-\hbar^2\nabla^2\frac{1}{\sqrt{-c^2\hbar^2\nabla^2 +c^4m^2}}+\\
\frac{2\hbar^2GM}{c^2r^3}x^ix^j&\partial_i\partial_j\frac{1}{\sqrt{-c^2\hbar^2\nabla^2 +c^4m^2}}+m^2c^4\frac{1}{\sqrt{-c^2\hbar^2\nabla^2 +c^4m^2}}]\psi .
\end{split}
\label{c61}
\end{equation}
We now restrict ourselves to ingoing solutions along the $X$-axis, as shown in Figure \ref{fig:diagram}, where the observer is just outside the horizon ($r=\lambda$) on the $X$-axis. Let us isolate the free plane wave part $\psi_u$ from $\psi$, writing $\psi =\psi_u +\psi_s$ with $\psi_u=Ae^{i(-Et + \vec k.\vec x)}$, so that
\begin{equation}
[i\partial_t \psi_u -H_0\psi_u +V\psi_u] + [i\partial_t \psi_s -H_0\psi_s +V\psi_s]
=0 .
\label{c7}
\end{equation}
Using the free particle dispersion relation
\begin{equation}
E-\sqrt{k^2+m^2} =0
\label{c8}
\end{equation}
the equation for $\psi_s $ is given by
\begin{equation}
[i\partial_t \psi_s -H_0\psi_s +V\psi_s] + V\psi_u =0 .
\label{c8b}
\end{equation}
Considering a first order potential correction for $\psi_s$, we drop the $V\psi_s$ term,
\begin{equation}
i\partial_t \psi_s -H_0\psi_s + V\psi_u =0 .
\label{c9}
\end{equation}
Note that the present work pertains to QBF in the presence of an effective potential. Using the notation of \cite{hertz}, the first order Green's function solution is
\begin{equation}
\psi^{(\vec k)}_s(x,t)=\int d^4x'~G_4(t-t'; x-x') V(t',x')\psi^{(\vec k)}_u(t',x')
\label{c10}
\end{equation}
which, after simplification \cite{hertz}, yields
\begin{equation}
\begin{split}
\psi^{(k)}_s(x,t)&=e^{-iE_kt}\int d^3x'~G_3( x-x') V(x')\psi^{(\vec k)}_u(x') \\
=e^{-iE_kt}\int d^3x'~\frac{-E_k}{2\pi |x-x'|}&e^{ik|x-x'|} \left[-\frac{\lambda}{|x'|}{\cal A}-\nabla^2 \frac{1}{\cal A}+\frac{\lambda}{|x'|^3}x^{'i}x^{'j}\partial'_i\partial'_j \frac{1}{\cal A} +m^2\frac{1}{\cal A} \right](Ae^{ik.x'})
\end{split}
\label{c11}
\end{equation}
where $E_k^2=k^2+m^2$. In the above, $G_4(t-t'; x-x')$ and $G_3(x-x')$ refer to the four- and three-dimensional Green's functions, respectively. This yields
\begin{equation}
\begin{split}
\psi^{(k)}_s(x,t)=\frac{-AE_k}{2\pi}e^{-iE_kt}\int &d^3x'~\frac{e^{ik|x-x'|}}{|x-x'|}\left[-\frac{\lambda}{|x'|}{{\sqrt {k^2+m^2}}+k^2} \frac{1}{\sqrt { k^2+m^2}}+m^2\frac{1}{\sqrt {k^2+m^2}} \right.\\
-& \left.\frac{\lambda}{|x'|^3}x^{'i}x^{'j}k_i k _j \frac{1}{\sqrt {k^2+m^2}} \right](e^{ik.x'})
\label{c12}
\end{split}
\end{equation}
After simplification we find
\begin{equation}
\psi^{(k)}_s(x,t)=\frac{-AE_k}{2\pi}e^{-iE_kt}\int d^3x'~\frac{e^{ik|x-x'|}}{|x-x'|}\left[\left(1-\frac{\lambda}{|x'|}\right)E_k-\frac{\lambda}{E_k}k_i k_j \frac{x^{'i}x^{'j}}{|x'|^3} \right](e^{ik.x'}).
\label{c13}
\end{equation}
We concentrate on the region near the horizon, $|\vec x|=\lambda \gg|\vec x'|$, and expand
$$|\vec x-\vec x'| \approx |\vec x|-\frac{\vec x.\vec x'}{|\vec x|} = |\vec x|-\hat x .\vec x' \approx \lambda -\hat x .\vec x'$$
so that in a standard approximation scheme in (\ref{c13}),
\begin{equation}
\frac{e^{ik|x-x'|}}{|x-x'|}e^{ik.x'} \approx \frac{e^{ik(|x|-\hat x.\vec x')}}{|x|-\hat x.\vec x'}e^{ik.x'}\approx \frac{e^{ik|x|}}{|x|}e^{i(\vec k-k\hat x).\vec x'}=\frac{e^{ik|x|}}{|x|}e^{i\vec q.\vec x'};~\vec q=\vec k-k\hat x .
\label{c14}
\end{equation}
Note that we are restricting ourselves to a supermassive black hole, whose Schwarzschild radius exceeds its physical radius. The scattering part of $\psi$ turns out to be
\begin{equation}
\psi^{(k)}_s(x,t)=\frac{-AE_k}{2\pi}e^{-iE_kt}\left[E_k\frac{e^{ik|x|}}{|x|}\int d^3x'~ \left(1-\frac{\lambda}{|x'|}\right) e^{i\vec q.\vec x'} - \frac{\lambda}{E_k}k_i k_j \frac{e^{ik|x|}}{|x|}\int d^3x'~\frac{x^{'i}x^{'j}}{|x'|^3} e^{i\vec q.\vec x'}\right].
\label{c15}
\end{equation}
Hence we will work with the form of $\psi^{(\vec k)} =\psi^{(\vec k)}_u +\psi^{(\vec k)}_s$ with $\psi^{(\vec k)}_u=Ae^{i(-Et + \vec k.\vec x)}$ and (\ref{c15}) for $\psi^{(\vec k)}_s $.
\section{Working form of the wave function $\psi^{(\vec k)}(\vec x,t)$}\label{sec:psixt}
\label{sec: psi}
Expressing (\ref{c15}) in the following form
\begin{equation}\label{psis}
\begin{split}
\psi_s^{(k)}(&\vec x,t) = -\frac{A E_k}{2\pi}e^{-iE_k t} \frac{e^{i k |\vec x|}}{|\vec x|}\left[E_k \int d^3 \vec x' \left(1 - \frac{\lambda}{|\vec x'|}\right)e^{i\vec q \cdot \vec x'} + \frac{\lambda}{E_k}k_i k_j \partial_{q_i} \partial_{q_j} \int d^3\vec x' \frac{1}{|\vec x'|^3}e^{i\vec q \cdot \vec x'}\right] \\
&=-\frac{A E_k}{2\pi}e^{-iE_k t} \frac{e^{i k |\vec x|}}{|\vec x|}\left[E_k \left(F_1(\vec q) - \lambda F_2(\vec q)\right) + \frac{\lambda}{E_k}k_i k_j \partial_{q_i} \partial_{q_j} F_3(\vec q)\right],
\end{split}
\end{equation}
reveals that we require three Fourier transforms
\begin{equation}
\begin{split}
F_1(q)&=\int d^3x'~1~ e^{i\vec q.\vec x'}=(2\pi)^3\delta (\vec q), \\
F_2(q)=\int d^3x'&~\frac{1}{|x'|} e^{i\vec q.\vec x'} , ~~
F_3(q)=\int d^3x'~\frac{1}{|x'|^3} e^{i\vec q.\vec x'} .
\label{c17}
\end{split}
\end{equation}
Exploiting spherical symmetry for a generic case yields,
\begin{equation}
F(q)=\frac{4\pi}{q}\int_0^\infty dr~V(r)~ r~ \sin(qr).
\label{c18}
\end{equation}
For $V=1/r$, a regularization (in the form of a mass scale $\mu$) is needed;
\begin{equation}
F_2(q;\mu)=\frac{4\pi}{q}\int_0^\infty dr~\frac{e^{-\mu r}}{r}~ r~ \sin(qr) =\frac{4\pi}{\mu^2+q^2}
\label{c19}
\end{equation}
where $\vec q=\vec k-k\hat r;~~q=2k\sin(\theta/2)$. Finally, taking $\mu \rightarrow 0$ we find
\begin{equation}
F_2(q)=\frac{4\pi}{q^2}.
\label{c20}
\end{equation}
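The regularized integral (\ref{c19}) can be checked symbolically; the following SymPy snippet (purely illustrative) reproduces $4\pi/(\mu^2+q^2)$:
\begin{verbatim}
import sympy as sp

r, q, mu = sp.symbols('r q mu', positive=True)
I = sp.integrate(sp.exp(-mu*r) * sp.sin(q*r), (r, 0, sp.oo))
print(sp.simplify(4*sp.pi/q * I))   # -> 4*pi/(mu**2 + q**2)
\end{verbatim}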
On the other hand,
for $F_3(q)$ we find, using the same prescription as above, that,
\[F_3(q) = \frac{4\pi}{q} \int_0^\infty \frac{\sin(qr)}{r^2} dr,\]
diverges due to the singularity at $r=0$. Therefore, we perform the integral,
\begin{equation}\label{epsint}
\bar{F}_3(q) = \frac{4\pi}{q} \int_\epsilon^\infty \frac{\sin(qr)}{r^2} dr.
\end{equation}
for some $\epsilon \in \mathbb{R}^+$.
Then we have,
\begin{equation}\label{ansgen}
\bar{F}_3(q) = 2\pi \left(-\textit{Ci}(q\epsilon)+ 2 \ln(q) - \ln(q^2) + 2 \frac{\sin(q\epsilon)}{q\epsilon}\right),
\end{equation}
where, $\textit{Ci}(z)$ is the Cosine Integral function defined as,
\[\textit{Ci}(z) = - \int_z^\infty \frac{\cos t}{t} dt,\]
whose series expansion about $0$ is given by
$\textit{Ci}(x) = \gamma + \ln(x) + \sum_{k=1}^\infty \frac{(-x^2)^k}{2k\,(2k)!}$, where $\gamma$ is the Euler-Mascheroni constant.
The value of $2 \ln(q)-\ln(q^2)$ is $2 \pi i $ if $q<0$ and $0$ if $q>0$. Thus, more compactly,
\begin{equation}\label{F3ans}
\bar{F}_3(q) = 2\pi\left(-\textit{Ci}(q\epsilon) + 2\pi i \Theta(-q) + 2\frac{\sin(q\epsilon)}{q\epsilon}\right),
\end{equation}
where $\Theta(x)$ denotes the Heaviside-Theta function.
In (\ref{psis}), $F_3(\vec q)$ will be approximated by $\bar{F}_3(\vec q)$ and the limit $\epsilon \rightarrow 0$ will be taken after differentiating $\bar{F}_3(\vec q)$ as in (\ref{psis}).
Finally, we substitute these in (\ref{psis}) to get $\psi^{(k)}_s$, the $k^{\rm th}$ mode of the full wave function (in first order perturbation) as
\begin{equation}
\psi^{(k)}(x,t)=\psi^{(k)}_u(x,t)+\psi^{(k)}_s(x,t)
=Ae^{-iE_kt}e^{i( \vec k.\vec x)} + \psi^{(k)}_s(x,t).
\label{psik}
\end{equation}
This is one of our important results, which we will use subsequently to construct the superposition of two waves (with momenta in the same direction) to study QBF. This form ought also to be used when considering a wave packet with momenta only in one direction. We consider the wave just inside the horizon, $|x|=\lambda -h(x)$, comprising the ingoing modes, as is necessary for a BH, and try to ascertain the QBF outside the horizon, $|x|> \lambda$. The situation is depicted schematically in Figure \ref{fig:diagram}. The system reduces to an effectively one-dimensional (spatial) problem, with QBF observed at the point $P$ on the $X$-axis outside the horizon, $x=\lambda$.
\section{QBF observables}\label{sec:observables}
A simple theoretical model to observe QBF was suggested in \cite{bm}, where two plane waves with appropriate mixing coefficients and positive momenta were superposed and the associated probability current $J(x,t)$ studied. Conventionally, one computes the current $J(x,t) = \frac{dP}{dt}$, with $P(t)$ denoting the probability of observing the particle in $x>0$ at time $t$. The sign of $J(x,t)$ indicates the presence of QBF: for a superposition of negative momenta, the current will also be negative, at least classically, so a positive current indicates the presence of QBF, a strictly quantum effect. In our case, it is more convenient to calculate the current directly, following \cite{bm},
\begin{equation}\label{jzt}
J(x,t)=-i\frac{\hbar}{2m}\left(\psi^*(x,t)\frac{\partial \psi(x,t)}{\partial x}-\frac{\partial \psi^*(x,t) }{\partial x}\psi(x,t)\right)
\end{equation}
using the form of $\psi(x,t)$ we have obtained. We should mention that, in our simplified model, since the time dependence is stationary, the sum or integral occurring in the total probability $P(t)$ will not converge.
\section{QBF arising from Scattering off a Schwarzschild Black Hole}\label{sec:results}
Let us explicitly consider
\begin{figure}
\centering
\includegraphics[scale=0.8]{diagram.png}
\caption{Schematic diagram of the problem. $P$ denotes the observer and $\lambda$ is the radius of the event horizon. $\Psi_T(\vec x,t)$ denotes the probe superposed wavefunction going into the black hole.}
\label{fig:diagram}
\end{figure}
$ \psi^{(\vec k)}(\vec x,t) = \psi^{(\vec k)}_u(\vec x,t) + \psi_s^{(\vec k)}(\vec x,t)$
with $\psi_u^{(k)}(x,t) = A \exp\left(i\vec k \cdot \vec x -i E_k t\right)$, where, in the effective one-dimensional problem illustrated in Fig. \ref{fig:diagram}, $\vec{k} = -k \hat{x}$ and $\vec{x} = x\hat{x}$ (the position vector of $P$ in Fig. \ref{fig:diagram}), leading to $\vec{q} = \vec{k} - |\vec{k}|\hat{x} = - 2 k \hat{x}$.
As discussed in Sec. \ref{sec: psi}, the perturbation part,
\begin{equation}\label{psiS}
\begin{split}
\psi^{(k)}_s &= -A \frac{E_k}{2\pi}e^{-i E_k t} \frac{e ^{ik |\vec{x}|}}{|\vec{x}|} \left[E_k (2\pi)^3 \delta(\vec{q}) - E_k \frac{4\pi\lambda}{q^2} + \frac{2\pi\lambda}{E_k} k^2 \left(\lim_{\epsilon\rightarrow 0}\partial^2_{q_3} (-Ci(q\epsilon) + \frac{2\sin q\epsilon}{q\epsilon})\right)\right] \\
&= -A \frac{E_k}{2\pi}e^{-i E_k t} \frac{e ^{ik x}}{x} \left[- E_k \frac{4\pi\lambda}{4k^2} + \frac{2\pi\lambda}{E_k} k^2 \frac{q_3^2}{q^4}\right]
\end{split}
\end{equation}
with the momentum along the line of sight, $\vec{q} = q_3 \hat{x}$ (so that $q_1 = q_2 = 0$), $q^2 = q_1^2 + q_2^2 + q_3^2$, and $\lim_{\epsilon\rightarrow 0}\left[\partial^2_{q_3} \left(-Ci(q\epsilon) + \frac{2\sin q\epsilon}{q\epsilon}\right)\right] = -\frac{q_1^2 + q_2^2 - q_3^2}{q^4}$
after simplification leads to
\begin{equation}\label{psiSfin}
\begin{split}
\psi^{(k)}_s (x,t) &= -\frac{A}{2\pi}e^{-i E_k t} \frac{e^{ik x}}{x} \left[-(k^2 + m^2) \frac{\pi\lambda}{k^2} + \frac{\pi\lambda}{2}\right] \\
&= \frac{A}{2\pi x}e^{-(i E_k t - k x)} \left[\pi\lambda\left(\frac{1}{2} + \frac{m^2}{k^2}\right)\right].
\end{split}
\end{equation}
We now construct the all-important two-wave superposition as,
\begin{equation}\label{psiT}
\begin{split}
\Psi_T (x, &t) = \psi^{(-\hat{x})}(x,t) - 3\psi^{(-4\hat{x})}(x,t)
=\left[e^{-i(x - \sqrt{1+m^2}t)} - 3e^{-4i(x - \sqrt{16+m^2}t)}\right]-\\
&\frac{\pi\lambda}{2\pi x}\left[e^{-i(x - \sqrt{1+m^2}t)}\left(\frac{1}{2}+m^2\right) - 3e^{-4i(x - \sqrt{16+m^2}t)}\left(\frac{1}{2}+\frac{m^2}{16}\right)\right]
\end{split}
\end{equation}
where $\psi^{(\vec k)}(x,t)$ is defined in (\ref{psik}) and we have used the calculated form of $\psi^{(\vec k)}_u(x,t)$ and $\psi^{(\vec k)}_s(x,t)$.
\subsection{Numerical Analysis}
In this section, we use Wolfram Mathematica for the calculations and plots.
We start by deriving the form of the density function $\zeta (x,t) =\Psi_T^*(x,t)\Psi_T (x,t)$,
\begin{equation}
\begin{split}
\zeta(x,t)&= \frac{1}{1024 x^2}[5\left(2048 x^2 -
64 (16 + 5 m^2) x \lambda + (128 + 80 m^2 + 53 m^4) \lambda^2\right)-\\
&48 (128 x^2 -
4 (16 + 17 m^2) x \lambda + (8 + 17 m^2 + 2 m^4) \lambda^2)\cos((\sqrt{1 + m^2} - \sqrt{16 + m^2})t - 3 x)]
\end{split}
\end{equation}
In Fig. \ref{fig:zetaXt10t100}, we show snapshots of the density profile at two different times. Notice that the density is greater inside the horizon and, as expected, falls off away from the horizon. The time-stationary behaviour is also readily observable. In Fig. \ref{fig:zetaXt0}, we provide density plots for four values of $\lambda$. As expected, the density is greater for $x<\lambda$ and falls off away from the horizon. There appears to be a qualitative change in the behaviour beyond the horizon, which is more pronounced for smaller $\lambda$.
Subsequently, we compute the current (\ref{jzt}) for the two-wave superposition $\Psi_T(x,t)$ defined in (\ref{psiT}). The analytical form of the current is
\begin{equation}\label{J}
\begin{split}
J(x,t) &= \frac{\hbar}{256\, m x^2} [-9472 x^2 + 4736 x \lambda +
832 m^2 x \lambda - 592 \lambda^2 - 208 m^2 \lambda^2 -
73 m^4 \lambda^2 + \\
&30 \left(128 x^2 -
4 (16 + 17 m^2) x \lambda + (8 + 17 m^2 +
2 m^4) \lambda^2\right) \cos[(\sqrt{1 + m^2} - \sqrt{16 + m^2}) t -
3 x] + \\
&360 m^2 \lambda \sin[
\sqrt{1 + m^2} t - \sqrt{16 + m^2} t - 3 x]].
\end{split}
\end{equation}
The form of $J(x,t)$ given above is analytic, $\forall x \neq 0, \forall t \ge 0$ - i.e. it is non-analytic only at the curvature singularity at $x=0$ and not at the coordinate singularity at $x=\lambda$.
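As an illustrative numerical check, the following Python sketch evaluates (\ref{J}) at $t=0$ for $\hbar=1$, $m=3$ (the values used in our plots below) and scans for points of positive current near the horizon; the scan range and grid are arbitrary choices:
\begin{verbatim}
import numpy as np

hbar, m = 1.0, 3.0
w = np.sqrt(1 + m**2) - np.sqrt(16 + m**2)

def J(x, t, lam):
    c = np.cos(w*t - 3*x)
    s = np.sin(w*t - 3*x)
    poly = (-9472*x**2 + (4736 + 832*m**2)*x*lam
            - (592 + 208*m**2 + 73*m**4)*lam**2)
    osc = 30*(128*x**2 - 4*(16 + 17*m**2)*x*lam
              + (8 + 17*m**2 + 2*m**4)*lam**2)*c
    return hbar/(256*m*x**2) * (poly + osc + 360*m**2*lam*s)

lam = 60.0
x = np.linspace(0.5*lam, 1.5*lam, 4001)
print(x[J(x, 0.0, lam) > 0])   # locations near the horizon with J > 0
\end{verbatim}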
\begin{figure}
\centering
\includegraphics[scale=0.8]{zetat10100L60.png}
\caption{Plot of $\zeta(x,t)$ at two different times, $t=10$ and $t=100$, for $\lambda=60$.}
\label{fig:zetaXt10t100}
\end{figure}
Our most decisive results appear in Fig. \ref{fig:JXt}, where we plot the current $J(x,t)$ for $\hbar=1, m=3$ and various values of $\lambda$. In Fig. \ref{fig:JXt}, we see that there is a small QBF region near the horizon $x=\lambda$ for $t=0$ (and in fact for all values of $t$), shaded red in the figure. It can be shown numerically that $J(x,t)$ is strictly negative (as naively expected) both for $x \ll \lambda$ and for $x \gg \lambda$. This can also be seen from the time slices in Fig. \ref{fig:JXt}. The envelope of the oscillatory plots in Fig. \ref{fig:JXt} is positive only in a finite region surrounding $x=\lambda$, as shown. This suggests that QBF is observed only in a finite region around the Schwarzschild BH event horizon.
\begin{figure}
\centering
\includegraphics[scale=0.6]{zetaL20.png}
\includegraphics[scale=0.6]{zetaL40.png}
\includegraphics[scale=0.6]{zetaL60.png}
\includegraphics[scale=0.6]{zetaL80.png}
\caption{Plots of the density function $\zeta(x,t=0)$ for various values of $\lambda$.}
\label{fig:zetaXt0}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.6]{JL20.png}
\includegraphics[scale=0.6]{JL40.png}
\includegraphics[scale=0.6]{JL60.png}
\includegraphics[scale=0.6]{JL80.png}
\caption{QBF (red region) observed from the plots of $J(x,t=0)$ for various values of $\lambda$. The size of the QBF region increases with $\lambda$. }
\label{fig:JXt}
\end{figure}
\section{Discussion}\label{sec:conc}
Let us summarize our results. We have studied Quantum Back Flow across the event horizon of a Schwarzschild Black Hole. We perturbatively solved the Schr\"odinger equation for a particle in the BH background near the horizon. For simplicity, we have considered a superposition of two ingoing modes and have observed its backflow just outside the horizon. We find that the QBF persists over a finite spatial range. Since our model is stationary in time, the overall time dependence and the related behaviours of the charge and current densities are uniform (without any decay). This is a drawback of our simplified model, since we cannot provide quantitative estimates in terms of a conventional observable, the total probability (integrated over position) of QBF. We believe this weakness can be overcome by considering wave packets instead of the simple superposition used here.
The present work has probably raised more questions than answers, such as: how should one interpret the QBF across the BH horizon? Can the QBF correspond to outgoing modes? Does any form of correlation exist between the ingoing and outgoing (QBF) parts of the wavefunction? Is this a poor man's form of information leakage? Can this type of QBF be realized in analogue gravity models? We conclude by noting several major differences between QBF (in the present BH case) and the Hawking effect. QBF is a generic quantum mechanical process present in wave function evolution, while the Hawking effect is a semi-classical field-theoretic model applicable near the event horizon of a BH. QBF is intimately connected to the external matter wave function moving across the BH horizon, whereas no external matter degrees of freedom are involved in the Hawking effect. Moreover, there is no possibility of the BH mass decreasing due to QBF (the BH can only acquire the mass of the ingoing part of the wave function), and hence BH evaporation due to QBF cannot occur. Also, QBF is active for a limited period of time and is restricted to a spatial domain, whereas the BH mass decreases via Hawking radiation, resulting in BH evaporation; the latter is present throughout the time the BH is alive.
An immediate pending problem is to consider more realistic ingoing modes in terms of normalizable wave packets so that quantitative estimates of the QBF are possible. Work is in progress along these lines. Beyond that, a deeper analysis of the physical interpretation of the QBF part is needed since, apart from an intriguing remark by Berry \cite{rel} regarding the possible particle-like nature of the QBF sector, few observations are present in the literature. A promising future direction is to investigate QBF in analogue gravity models.
\section{Acknowledgements} We thank Bibhas Ranjan Majhi for suggestions in the early stage of the work. We are also grateful to Professor Michael Berry and Professor Arseni Goussev for correspondence.
\section{Introduction}
Ultracold molecules are today one of the physical systems most used to study a variety of physical phenomena,
ranging from quantum information \cite{soderberg2009NJP,carr2009NJP,park2017Sci,ni2018CS,anderegg2019Sci},
to ultracold chemistry \cite{krems2008PCCP,quemener2012CR,hu2019Sci,valtolina2020NT}, to exploration of novel dipolar
phases of matter \cite{ni2008Sci,danzl2010NP,ospelkaus2010Sci,ni2010NT,wang2011PRLa,wang2011PRLb}, to tests
of variations of fundamental constants \cite{chin2006PRL,chin2009NJP,borschevsky2011PRA}.
As a result, developing efficient techniques to produce such molecules is a highly sought-after goal
\cite{kohler2006RMP,hanna2007PRA,tscerbul2010PRA,owens2016PRA,ding2017PRA,dincao2017PRA,giannakeas2019PRL}.
Since most experiments using such molecules start from a gas of ultracold
atoms, the central question is how to efficiently produce a dense sample of molecules while still keeping them
at ultracold temperatures. Magneto-association provides such a path and has been routinely implemented in a broad range
of molecular experiments to date
\cite{cubizolles2003PRL,herbig2003Sci,regal2003NT,strecker2003PRL,durr2004PRL,mark2005EPL}.
In this scheme, atoms are exposed to a magnetic field, $B$, tuned near a Feshbach
resonance, causing the $s$-wave scattering length, $a$, to go through a pole, and causing interactions
to become extremely strong \cite{chin2010RMP}. Sweeping the magnitude of the $B$-field across the
resonance adiabatically converts atoms into weakly bound Feshbach molecules, which exist for $a>0$.
Feshbach molecules can then be further used to explore a variety of phenomena or can be used as an intermediate
state to form more deeply bound molecular species by using other association schemes like
STIRAP~\cite{bergmann2015JCP}.
The efficiency of magneto-association is fundamentally controlled by the $B$-field sweeping rate,
but it also depends on the initial atomic densities and temperatures
\cite{cubizolles2003PRL,herbig2003Sci,regal2003NT,strecker2003PRL,durr2004PRL,mark2005EPL},
as well as, non-trivially, on the microscopic details characterizing the interatomic interactions
\cite{kohler2006RMP,hanna2007PRA}.
For NASA’s Cold Atom Laboratory (NASA-CAL), a multi-user facility
aboard the International Space Station \cite{elliott2018MC,aveline2020NT},
the microgravity environment will provide experimental conditions vastly different
than those achievable in ground-based experiments exploring magneto-association.
Here, we will show that although the unique experimental conditions available at NASA-CAL favor high phase-space
density, long interrogation times, and suppression of gravitation-sag,
the ultralow atomic densities ($n=10^8$-$10^{11}$/cm$^{3}$) desired for various proposed experiments
will drastically affect the efficiency of magneto-association.
At such low densities, the required $B$-field sweeps to obtain a satisfactory efficiency are simply too slow, compromising the stability of the molecular sample against three-body losses.
This issue is of particular importance for studies of dual-species atom interferometry, where the formation of heteronuclear Feshbach molecules \cite{dincao2017PRA} is of fundamental interest to mitigate
some of the major sources of systematic errors for high-precision tests of fundamental physics \cite{williams2016NJP,PhysRevLett.125.191101,RevModPhys.90.025008}.
In order to optimize the formation of Feshbach molecules, we modify the traditional magneto-association (tMA)
scheme. Our scheme adds a preparatory stage where the $B$-field is changed abruptly (quenched) from off-resonance
to on-resonance, and then is allowed to dwell in this regime while developing correlations.
This strongly correlated state will now serve as the initial state for magneto-association, providing a much higher
overlap with the desired final molecular states.
This scheme is similar to the one used for $^{85}$Rb in Ref.~\cite{klauss2017PRL} which not only provided
association of dimers but also Efimov trimers \cite{dincao2018PRL}.
We show that this scheme, which we define as quenched magneto-association (qMA),
substantially improves the efficiency of molecule formation in the ultralow density regime,
while still allowing it to be performed within time scales much shorter than those associated with atomic and molecular
losses. Keeping in mind the relevant case of $^{87}$Rb-$^{41}$K mixtures available at NASA-CAL,
our manuscript is organized as follows. In Section \ref{sec:theory} we describe our theoretical model and
emphasize its main assumptions and approximations. Section \ref{sec:mag} details both tMA and qMA schemes
and analyzes the important time scales associated with the atomic and molecular losses. In Section \ref{sec:results} we
present our results for association of $^{87}$Rb$^{41}$K Feshbach molecules
and discuss the main advantages of qMA while verifying its fundamental differences
to tMA schemes.
\section{Theoretical Model}{\label{sec:theory}}
In our present study we adopt two major assumptions that allow for a qualitative description
of magneto-association while still providing a clear physical picture of
how the medium (density) affects the association process.
For the interatomic interactions we assume a single channel interaction model between
$^{87}$Rb and $^{41}$K atoms, given by a Lennard-Jones potential
\begin{align}
v(r)=-\frac{C_6}{r^6}\left(1-\frac{\lambda^6}{r^6}\right),\label{Vpot}
\end{align}
where $C_6\approx4274$a$_0^6E_{\rm h}$ \cite{derevianko2001PRA}
is the van der Waals' dispersion coefficient and $\lambda$ is a tunable parameter adjusted to provide
the desired value of the scattering length. A more realistic description of the interactions between alkali
atoms, however, is multichannel in nature and includes the hyperfine interactions responsible for
the $B$-field dependence of the scattering length used in experiments with Feshbach resonances. A single
channel description of this phenomenon is supported by the universal properties of the system \cite{chin2010RMP}
whenever $|a|\gg r_{\rm vdW}$, where $r_{\rm vdW}=({2\mu C_6}/{\hbar^2})^{1/4}/2$ is the van der Waals length
and $\mu$ the two-body reduced mass.
In our present study we model the $^{87}$Rb-$^{41}$K interactions near the well-known Feshbach resonance
for atoms in the $\ket{f=1,m_f=1}$ hyperfine state (see Fig. \ref{fig:aVSb}) by adjusting the values of
$\lambda$ in Eq.~(\ref{Vpot}) for each value of $B$ to produce the same value of $a$. As usual, for $B$-fields
near a Feshbach resonance the scattering length is well represented by
\begin{equation}{\label{eq:aofB}}
a(B) = a_{\rm bg}\left(1-\frac{\Delta B}{B - B_0}\right),
\end{equation}
where the position of the resonance is $B_0\approx39.4$G, its width $\Delta B\approx37$G, and background scattering length
$a_{\rm bg}\approx284$a$_0$ \cite{simoni2008PRA}.
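For illustration, Eq.~(\ref{eq:aofB}) with these parameters can be evaluated with a small helper such as the following (the sampled $B$ values are arbitrary):
\begin{verbatim}
import numpy as np

a_bg, B0, dB = 284.0, 39.4, 37.0     # units: a0, G, G

def a_of_B(B):
    return a_bg * (1.0 - dB / (B - B0))

print(a_of_B(np.array([30.0, 38.0, 45.0, 80.0])))  # in a0; diverges at B0
\end{verbatim}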
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{ScaLvsB.eps}
\caption{Scattering length as a function of applied $B$-field near the
$^{87}$Rb-$^{41}$K Feshbach resonance at $B_0\approx39.4$G.
This resonance is characterized by a width $\Delta B\approx37$G and background scattering length
$a_{\rm bg}\approx284$a$_0$~\cite{simoni2008PRA}.
Both $^{87}$Rb and $^{41}$K atoms are in the $\ket{f=1,m_f=1}$ hyperfine state. $B_i$ and $B_f$ indicate, respectively,
the initial and final values of the $B$-field in a hypothetical magneto-association scheme.}
\label{fig:aVSb}
\end{figure}
In order to incorporate density effects to properly describe magneto-association in ultracold quantum gases,
we have employed the local density model
\cite{borca2003NJP,goral2004JPB,stecher2007PRL,sykes2014PRA,corson2015PRA,colussi2018PRL},
allowing for a physically meaningful way to qualitatively describe the density dependence of various few-body
observables. This model introduces a harmonic confinement to the few-body Hamiltonian,
whose strength is adjusted to produce a few-body ``density'' that matches that of the
experiment. In our current study, the two-atom Hamiltonian is then written as
\begin{equation}
\hat{H} = -\frac{\hbar^2}{2\mu}\nabla^2+\frac{\hbar^2}{8 \mu a_{\rm ho}^4} r^2 +v(r),\label{Ham}
\end{equation}
where $a_{\rm ho}$ is the harmonic oscillator length. We assume that the number
densities of Rb and K are equal, i.e., $n_{\rm Rb} = n_{\rm K} = n$, and relate the oscillator length
to the number density as \cite{sykes2014PRA}
\begin{equation}
a_{\rm ho} = \sqrt{\frac{\pi}{8}}\left( \frac{4\pi n}{3} \right)^{-1/3}.
\end{equation}
This relation allows us to connect our few-body analysis to the macroscopic system through the relevant
energy, momentum, and time scales, given respectively by
\begin{align}
E_n = \frac{(6 \pi^2 n)^{2/3}}{2\mu}\hbar^2,~~
k_n = \frac{\sqrt{2\mu E_n}}{\hbar},~{\rm and}~
t_n = \frac{\hbar}{E_n}.\label{nUnits}
\end{align}
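As a rough numerical illustration (in SI units, assuming the $^{87}$Rb-$^{41}$K reduced mass), these scales and the oscillator length can be evaluated across the NASA-CAL density range:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34                  # J s
u = 1.66053906660e-27                   # kg
mu = (86.909*40.962)/(86.909+40.962)*u  # Rb-87/K-41 reduced mass

for n_cm3 in (1e8, 1e11):
    n = n_cm3 * 1e6                     # convert cm^-3 to m^-3
    En = (6*np.pi**2*n)**(2/3) * hbar**2 / (2*mu)
    tn = hbar / En
    aho = np.sqrt(np.pi/8) * (4*np.pi*n/3)**(-1/3)
    print(f"n = {n_cm3:.0e} cm^-3: t_n = {tn*1e3:.2f} ms, "
          f"a_ho = {aho*1e6:.2f} um")
\end{verbatim}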
We note that our model does not take into account quantum degeneracy and phase-space density
effects for association of Feshbach resonances as those experimentally observed in Refs.
\cite{hodby2005PRL,thompson2005PRL,papp2006PRL}.
Nevertheless, the qualitative aspects of our analysis (in particular the comparison
between tMA and qMA protocols) still persist and should be observed in more
elaborate models in which such collective effects are properly accounted for.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{EnergySpectrum.eps}
\caption{The energy levels of two atoms in a harmonic trap parameterized by $1/k_na$ and orbital angular momentum
$l=0$ (see text). In the limit of large ${1}/|{k_na}|$, the spectrum consists of pure harmonic oscillator levels,
whose energies are plotted as dashed lines. Positive energies correspond to atomic states, and negative energies
represent molecular states. Note that the expected Feshbach molecular state for large positive~$a$, having binding energy
${E_b={\hbar^2}/{2\mu a^2}}$, is shifted slightly due to the oscillator potential. The gray region indicate the values
in which the system is found in the unitary regime, $1/|k_na|<1$.}
\label{fig:eigenSpectrum}
\end{figure}
As a result, within our model magneto-association can be easily visualized through the two-atom energy spectrum
as shown in Fig.~\ref{fig:eigenSpectrum}. The horizontal energy levels in the $1/|k_na|\gg1$ regime correspond to harmonic
oscillator states and represent non-interacting atomic states. The desired target Feshbach molecular state is the one with
energy $-E_b=-\hbar^2/2\mu a^2$ for $a>0$ and is also indicated in Fig.~\ref{fig:eigenSpectrum}.
[The energy spectrum is obtained by solving the two-body Schr\"odinger equation for the Hamiltonian in
Eq.~(\ref{Ham}) for the lowest angular momentum state, $l=0$.]
The effect of sweeping the $B$-field from high to low (from $B_i$ to $B_f$ in Fig.~\ref{fig:aVSb})
corresponds to sweeping $1/k_na$ from left to right (from $1/k_na_i$ to $1/k_na_f$) in Fig.~\ref{fig:eigenSpectrum}.
Transitions from atomic to molecular states are stronger in the {\em interaction region} $1/|k_na|\leq1$, i.e.,
when interactions are unitary, $n|a|^3\geq1$.
Therefore, within our physical picture, magneto-association reduces to the problem of non-adiabatic crossing of
energy levels (Landau-Zener) \cite{clark1979PLA}. In our case, multiple levels can participate in the
process. Nevertheless, the Landau-Zener (two-level) model still provides a qualitative interpretation of the
phenomena and can serve as a guide for understanding the important parameters controlling molecular
association. For instance, within the Landau-Zener model \cite{clark1979PLA} the probability of
transitioning from an energy level $\epsilon_1$ to $\epsilon_2$ via a
linear sweep, $\epsilon_1-\epsilon_2=\alpha_\epsilon t$, where $\alpha_\epsilon$ is the sweeping
rate, is given by
\begin{align}
P_{LZ}=1-e^{-2\pi\Gamma}.\label{PLZ}
\end{align}
Here, $\Gamma=\epsilon_{12}^2/\hbar\alpha_\epsilon$ with $\epsilon_{12}$ being the coupling between states $\epsilon_1$ and
$\epsilon_2$. Applying this picture to our case in Fig.~\ref{fig:eigenSpectrum}, where energies are given in units of
$E_n$ (\ref{nUnits}), allows us to access important information.
For instance, since efficient association is obtained for $\Gamma\gg1$,
the required sweeping rates scale as $\alpha_\epsilon\propto n^{4/3}$, which can
be too slow in the ultralow-density regime of NASA-CAL.
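A minimal Python sketch of the Landau-Zener estimate in Eq.~(\ref{PLZ}) is given below; the coupling and sweep rates are illustrative inputs in units where $\hbar=1$.
\begin{verbatim}
import numpy as np

def p_landau_zener(eps12, alpha_eps, hbar=1.0):
    """P_LZ = 1 - exp(-2*pi*Gamma), with Gamma = eps12^2 / (hbar*alpha_eps)."""
    gamma = eps12**2 / (hbar * abs(alpha_eps))
    return 1.0 - np.exp(-2.0 * np.pi * gamma)

# Efficient association (P_LZ -> 1) requires Gamma >> 1, i.e., slow sweeps:
for alpha in (10.0, 1.0, 0.1):
    print(f"alpha_eps = {alpha:5.1f}: P_LZ = {p_landau_zener(1.0, alpha):.3f}")
\end{verbatim}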
We will explore these issues next and provide an alternative approach
to circumvent this limitation.
\section{Quenched Magneto-association}{\label{sec:mag}}
As anticipated by the discussion in the previous section, tMA schemes, i.e.,
applying a linear $B$-field sweep across a Feshbach resonance, might be inefficient in the low-density regime relevant to a microgravity environment. Here we detail tMA and discuss an alternative scheme that overcomes its
limitations but can also be applied to ground-based experiments. We also discuss and characterize
atomic and molecular losses.
\subsection{Sweeps and Quenches}{\label{ssec:schemes}}
Within our model, the key physical aspect that makes tMA inefficient at low
densities is that during the $B$-field sweep the system remains in the interaction region, $1/|k_na|\leq1$,
for too short a time, thus requiring slow sweeps.
The tMA scheme is illustrated in Fig.~\ref{fig:boft}(a), where the
$B$-field is linearly swept from $B_i$ to $B_f$ during a time $t_{\rm sw}$.
In order to determine the interaction time, $t_{\rm u}$, we assume $B(t)=B_i-\alpha_B t$,
where $\alpha_B=|B_f-B_i|/t_{\rm sw}\approx26.03{\rm G}/t_{\rm sw}$ is the sweep rate,
and determine the values of $B$ for which, using Eq.~(\ref{eq:aofB}), the condition $1/|k_na|\leq1$ is satisfied.
After some algebra one arrives at the interaction time given by
\begin{align}
t_{\rm u} \approx 2\left|\frac{(k_na_{\rm bg})\Delta B}{\alpha_B}\right|\propto n^{1/3}t_{\rm sw}.\label{tu}
\end{align}
As a result, during a given sweep, atoms in the relevant states interact for an increasingly reduced
amount of time as $n\rightarrow0$.
\begin{figure}[htb!]
\centering
\includegraphics[width=3.0in]{RampingSchemes.eps}
\caption{Schematic of magnetic field vs time in the traditional and quenched magneto-association schemes.
(a) In traditional magneto-association (tMA) the $B$-field is swept from $B_i$ to $B_f$ at a constant rate during a
time $t_{\rm sw}$. (b) In quenched magneto-association (qMA) the $B$-field is instantaneously quenched
from $B_i$ to $B_0$, remaining at $1/|k_na|=0$ for a dwell time, $t_{\rm dw}$,
followed by a linear sweep from $B_0$ to $B_f$. This schematic figure is not to scale, and $B_i$ and $B_f$
will not, in general, be equidistant from $B_0$, nor will $t_{\rm sw}$ and $t_{\rm dw}$ bear any particular
relationship to each other. In the figure, $t_{\rm u}$ and $t^*_{\rm u}$ represent the interaction time, i.e.,
the time during which the system experiences $1/|k_na|<1$, for tMA and qMA, respectively.}
\label{fig:boft}
\end{figure}
In order to improve on the interaction time, and consequently the association efficiency, we propose the scheme illustrated
in Fig.~\ref{fig:boft}(b). In such a scheme, the system is first quenched to $1/|k_na|=0$, corresponding to changing
the $B$-field from $B_i$ to $B_0$ within time scales much shorter than $t_n$.
We note that at low densities the technical aspects of quenching the $B$-field become
increasingly easier, since $t_n$ increases as $n$ decreases. For our studies we assume that the quench is
performed instantaneously. After the quench, we allow the system to dwell for a time $t_{\rm dw}$ at $1/|k_na|=0$,
thus letting interactions evolve before finally sweeping the field to its final value, $B_f$,
according to $B(t)=B_0-\alpha^*_{B}t$, where $\alpha^*_{B}=|B_f-B_0|/t_{\rm sw}\approx1.51{\rm G}/t_{\rm sw}$.
As a result, in quenched magneto-association (qMA) the interaction time is now given by
\begin{align}
t_{\rm u}^*=t_{\rm dw}+\frac{t_{\rm u}}{2}=t_{\rm dw}+\underbrace{\left|\frac{(k_na_{\rm bg})\Delta B}{\alpha_B^*}\right|}_{\propto n^{1/3}t_{\rm sw}},\label{tuS}
\end{align}
which can be substantially enhanced by controlling $t_{\rm dw}$.
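The two interaction times can be compared with a short sketch of Eqs.~(\ref{tu}) and (\ref{tuS}); the field excursions are those quoted in the text, while the value of $k_na_{\rm bg}$ and the dwell time below are illustrative.
\begin{verbatim}
def t_u_tMA(kn_abg, dB, alpha_B):
    """Eq. (tu): time spent in the interaction region during a tMA sweep."""
    return 2.0 * abs(kn_abg * dB / alpha_B)

def t_u_qMA(t_dw, kn_abg, dB, alpha_B_star):
    """Eq. (tuS): dwell time plus half of a tMA-like sweep contribution."""
    return t_dw + abs(kn_abg * dB / alpha_B_star)

# Illustrative numbers: t_sw = 1 (in units of t_n), k_n*a_bg = 0.1, dB = 37 G
t_sw = 1.0
print(t_u_tMA(0.1, 37.0, 26.03 / t_sw))      # ~0.28 t_n
print(t_u_qMA(5.0, 0.1, 37.0, 1.51 / t_sw))  # = t_dw + ~2.45 t_n
\end{verbatim}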
We note that this scheme is similar to the one explored in Ref.~\cite{mark2005EPL}, which effectively assumed
$t_{\rm dw}\approx0$ and obtained an efficiency of about 30\% at densities of $10^{12}$/cm$^{3}$.
Numerically, we study both tMA and qMA using the time propagation
methodology developed in Refs.~\cite{dincao2018PRL,tolstikin2014}, with a few caveats introduced by the quench.
For tMA the initial state for propagation is a pure state, i.e.,
it is given by
\begin{align}
\Psi_{i}\equiv\psi^{a=a_i}_{A}(\vec{r}),
\end{align}
where $\psi_{A}$ is an eigenstate of energy $E_A$ for $a=a_i$ (see Fig.~\ref{fig:eigenSpectrum}).
In the quenched case, however, the initial state for propagation is instead a superposition of states given by
\begin{align}
\Psi_{i}\equiv\sum_{\beta}c_{\beta}\:{\rm exp}\left[-\frac{iE_\beta t_{\rm dw}}{\hbar}\right]\:\psi^{a=\pm\infty}_{\beta}(\vec{r}),\label{PsiQ}
\end{align}
where $c_\beta=\langle\psi_A|\psi_\beta\rangle$, with $\psi_{\beta}$ and $E_\beta$ being the eigenstates
and energies of the system at $a=\pm\infty$. As we will see in Section \ref{sec:results},
the dependence of this state on $t_{\rm dw}$ plays a crucial role in improving the efficiency of
magneto-association by letting interactions evolve at $1/k_na=0$.
Note also that for $t_{\rm dw}\gg t_n$ we expect truly many-body effects to take place,
potentially playing a role in qMA. Current models do not capture this physics properly, so we will
keep our study within modest values of $t_{\rm dw}$.
Before comparing in detail the tMA and qMA schemes of Fig.~\ref{fig:boft},
we must first analyze the stability of the system with respect to losses, as
done in the next section.
\subsection{Atomic and molecular losses}{\label{ssec:losses}}
Regardless of the particular magneto-association scheme adopted, few-body losses
can drastically reduce the efficiency of molecule formation. Although such loss processes are in general
well understood in ultracold atomic and molecular gases \cite{dincao2018JPB}, magneto-association is a dynamical
process, and a full understanding of how losses occur as the interactions evolve is nontrivial.
In this section, however, we present an analysis that offers a qualitative understanding of the
major loss processes, thus helping us to characterize and identify experimental regimes that are
likely to mitigate their harmful effects.
In magneto-association the two major few-body loss processes are three-body recombination,
in which three free atoms collide to produce an atom and a diatomic molecule,
and atom-molecule vibrational relaxation, which causes a de-excitation of the molecular state.
Both processes release enough kinetic energy to make their products escape from typical traps \cite{dincao2018JPB}.
In order to gain some insight into the time scales of the loss rates and their dependence on the experimentally
relevant parameters, we will consider the loss rates only in the regime where they are maximal, i.e., $1/|k_na|<1$,
which is the regime most relevant to magneto-association.
This analysis should provide an upper limit for the lifetime of the sample during the magneto-association
process.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{TimeScaleLosses.eps}
\caption{Analysis of the relevant time scales for magneto-association.
For tMA, the values of $\tau_{\rm u}/t_{\rm u}$ are calculated from the lifetimes due to
RbRbK losses at $T=100$pK (black dashed curves) for various densities and a broad range of sweeping times.
As $n$ increases, the range of $t_{\rm sw}$ for which $\tau_{\rm u}/t_{\rm u}>1$ quickly becomes more
restrictive. For qMA we display the values of $\tau^*_{\rm u}/t^*_{\rm u}$ (solid curves) for various values
of $t_{\rm dw}$. Regardless of the density, a broad range of $t_{\rm sw}$ satisfies the favorable condition for
magneto-association, $\tau^*_{\rm u}/t^*_{\rm u}>1$.}
\label{fig:losses}
\end{figure}
It is well known that the three-body recombination rate in the regime $1/|k_na|<1$ becomes independent of the
scattering length and can be estimated as \cite{dincao2018JPB}
\begin{align}
L_{3}^{\rm u}(T)=\frac{4\pi^2\hbar^5}{\mu_{\rm 3b}^3(k_B T)^2}(1-e^{-4\eta}),\label{L3u}
\end{align}
where $\mu_{\rm 3b}^2=m_1m_2m_3/(m_1+m_2+m_3)$ is the three-body reduced mass, $T$ is the temperature, and
$\eta$ is the three-body inelasticity parameter, which provides a measure of the probability for inelastic
transitions. The parameter $\eta$ depends on the details of the interactions and is in general obtained experimentally.
For the Rb-K mixture we are interested in here, Ref.~\cite{barontini2009PRL} determined $\eta\approx0.12$ for
collisions involving
two Rb atoms and one K (RbRbK), and $\eta\approx0.02$ for collisions involving two K atoms and one Rb (KKRb).
Now, assuming that the $B$-field sweep in tMA is performed at constant temperature,
the atomic lifetime is given by
\begin{align}
\tau_A^{\rm u}&=\frac{1}{n^2L_{3}^{\rm u}(T)}\nonumber\\
&=\frac{9\pi^2(k_BT)^2\mu_{\rm 3b}^3}{8(1-e^{-4\eta})\mu^3\hbar^2}t_n^3\propto \frac{(k_BT)^2}{n^2},\label{TauT}
\end{align}
thus becoming shorter as the density increases and/or the temperature decreases.
As a practical example, the lifetimes due to RbRbK losses at $T=100$pK and
densities of 10$^8$/cm$^{3}$, 10$^9$/cm$^{3}$, 10$^{10}$/cm$^{3}$, and 10$^{11}$/cm$^{3}$,
are, respectively, $37t_n$, $1.7t_n$, $0.08t_n$ and $0.0037t_n$, or, equivalently, 1s, 10ms,
0.1ms and 1$\mu$s. In order to qualitatively understand what these lifetimes mean, the time scale of losses
needs to be compared to that of the interaction time [Eq.~(\ref{tu})],
\begin{align}
{\tau_A^{\rm u}}/{t_{\rm u}}\propto\frac{(k_BT)^2}{n^{7/3}t_{\rm sw}}.
\end{align}
This indicates that an increase in the density, or a decrease in temperature, must be accompanied by a decrease
of the sweeping time $t_{\rm sw}$ or, equivalently, an increase of the sweep rate $\alpha_B$, in order to compensate
for the increase of atomic losses. As a result, since faster $B$\nobreakdash-field sweeps reduce efficiency, obtaining a good balance
between losses and interaction time, $\tau_{\rm u}/t_{\rm u}>1$, comes at the risk of compromising efficiency.
This makes evident that finding the best regime for tMA is dependent upon a balance of various factors.
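The scalings above can be reproduced with a short sketch of Eqs.~(\ref{L3u}) and (\ref{TauT}). Note that the exact numerical prefactors depend on conventions entering $\mu_{\rm 3b}$ and on the value of $\eta$, so the printout should be read as reproducing the $\tau_A^{\rm u}\propto(k_BT)^2/n^2$ scaling rather than exact values.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34; kB = 1.380649e-23
amu  = 1.66053906660e-27
m_Rb, m_K = 86.909 * amu, 40.962 * amu
# Three-body reduced mass for RbRbK, from mu_3b^2 = m1*m2*m3/(m1+m2+m3)
mu3b = np.sqrt(m_Rb * m_Rb * m_K / (2.0 * m_Rb + m_K))

def L3_unitary(T, eta=0.12):
    """Eq. (L3u): unitarity-limited three-body loss rate [m^6/s]."""
    return (4.0 * np.pi**2 * hbar**5 / (mu3b**3 * (kB * T)**2)
            * (1.0 - np.exp(-4.0 * eta)))

def tau_atomic(n, T, eta=0.12):
    """Eq. (TauT): atomic lifetime at density n [1/m^3], temperature T [K]."""
    return 1.0 / (n**2 * L3_unitary(T, eta))

for n_cm3 in (1e8, 1e9, 1e10, 1e11):
    print(f"n = {n_cm3:.0e}/cm^3: tau_A ~ {tau_atomic(n_cm3*1e6, 100e-12):.1e} s")
\end{verbatim}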
Figure \ref{fig:losses} shows the values for $\tau_{\rm u}/t_{\rm u}$ calculated from the lifetimes due to
RbRbK losses at $T=100$pK (black-dashed curves) for various densities and a broad range of sweeping times.
Note that as $n$ increases, the range of $t_{\rm sw}$ for which $\tau_{\rm u}/t_{\rm u}>1$ quickly becomes more
restrictive. For each density, we have indicated the corresponding values for $E_n$ and $t_n$ in relevant units.
In the case of qMA the key difference that improves the time scales for losses is that the quench
itself increases the gas temperature and, according to Eq.~(\ref{L3u}), reduces the loss rates. Assuming
that the initial temperature is smaller than $E_n/k_B$, the quench sets the temperature to $k_BT=E_n$ \cite{makotyn2014NP},
thus leading to an atomic lifetime determined by
\begin{align}
\tau_A^{\rm u*}&=\frac{1}{n^2L_{3}^{\rm u}(E_n/k_B)}\nonumber\\
&=\frac{9\pi^2\mu_{\rm 3b}^3}{8(1-e^{-4\eta})\mu^3}t_n\propto \frac{1}{n^{2/3}}.\label{TauQ}
\end{align}
Interestingly, the lifetime is now linearly proportional to $t_n$ and consequently
rescales automatically for different densities, providing a weaker dependence on density than
Eq.~(\ref{TauT}).
For instance, the lifetime due to RbRbK losses is now 295$t_n$, which for
densities of 10$^8$/cm$^{3}$, 10$^9$/cm$^{3}$, 10$^{10}$/cm$^{3}$, and 10$^{11}$/cm$^{3}$,
becomes 8s, 1.7s, 370ms, and 80ms, respectively.
Comparing this lifetime to the interaction time in Eq.~(\ref{tuS}), in the
limit of long dwell times, $t_{\rm dw}\gg t_{\rm u}$,
the interaction time is $t^*_{\rm u}\approx t_{\rm dw}\,(\propto t_n)$, and
\begin{align}
\tau_A^{\rm u*}/t^*_{\rm u}\propto const.
\end{align}
On the other hand, in the limit of short dwell times, $t_{\rm dw}\ll t_{\rm u}$,
the interaction time is $t^*_{\rm u}\approx t_{\rm u}/2~(\propto t_n)$, and
\begin{align}
\tau_A^{\rm u*}/t^*_{\rm u}\propto \frac{1}{n^{2/3}t_{\rm sw}}.
\end{align}
In either case, the above analysis indicates that qMA provides a substantially more favorable regime with
respect to the scaling of losses with density and sweeping times.
Figure \ref{fig:losses} demonstrates this by displaying the values of $\tau^*_{\rm u}/t^*_{\rm u}$, also
calculated from the lifetimes due to RbRbK losses (solid curves), for various values of $t_{\rm dw}$.
Note that regardless of the density, a broad range of $t_{\rm sw}$ satisfies the favorable condition for association,
$\tau^*_{\rm u}/t^*_{\rm u}>1$, a result much superior to that for tMA.
A similar analysis of the lifetime can also be made once molecular association takes place and
the relevant loss process is atom-molecule relaxation. Here, too, the loss rate in the regime $1/|k_na|<1$
becomes independent of the scattering length and can be estimated as \cite{dincao2018JPB}
\begin{align}
\beta_{\rm u}(T)=\frac{2^{1/2}\pi^{1/2}\hbar^2}{\mu_{\rm ad}^{3/2}(k_B T)^{1/2}}(1-e^{-4\eta}),
\end{align}
where $\mu_{\rm ad}=(m_1+m_2)m_3/(m_1+m_2+m_3)$ is the atom-molecule reduced mass.
In this case, assuming that the $B$-field sweep in tMA is
performed at constant temperature, the molecular lifetime is simply given by
\begin{align}
\tau_M^{\rm u}&=\frac{1}{n\beta_{\rm u}(T)}\nonumber\\
&=\frac{3\pi^{3/2}\mu_{\rm ad}^{3/2}(k_BT)^{1/2}}{2\mu^{3/2}(1-e^{-4\eta})\hbar^{1/2}}t_{n}^{3/2}
\propto \frac{(k_BT)^{1/2}}{n},
\end{align}
providing a weaker dependence on temperature and density than those for their atomic
counterparts in Eq.~(\ref{TauT}).
From the above equation, the molecular lifetimes due to RbRbK losses at $T=100$pK and
densities of 10$^8$/cm$^{3}$, 10$^9$/cm$^{3}$, 10$^{10}$/cm$^{3}$, and 10$^{11}$/cm$^{3}$,
are, respectively, $29t_n$, $14t_n$, $6.3t_n$ and $2.9t_n$, or, equivalently, 0.8s, 80ms,
8ms and 0.8ms. These lifetimes are to be compared to that of the interaction time [Eq.~(\ref{tu})],
\begin{align}
{\tau_M^{\rm u}}/{t_{\rm u}}\propto\frac{(k_BT)^{1/2}}{n^{4/3}t_{\rm sw}}.
\end{align}
In the case of qMA, assuming that the initial temperature is smaller than
$E_n/k_B$, the molecular lifetime is determined by
\begin{align}
\tau_M^{\rm u*}&=\frac{1}{n \beta_{\rm u}(E_n/k_B)}\nonumber\\
&=\frac{3\pi^{3/2}\mu_{\rm ad}^{3/2}}{2\mu^{3/2}(1-e^{-4\eta})}t_{n}
\propto \frac{1}{n^{2/3}}.
\end{align}
Here, the molecular lifetime due to RbRbK losses is 49$t_n$, which for
densities of 10$^8$/cm$^{3}$, 10$^9$/cm$^{3}$, 10$^{10}$/cm$^{3}$, and 10$^{11}$/cm$^{3}$,
becomes 1.3s, 0.3s, 62ms, and 13ms, respectively.
Comparing this lifetime to the interaction time in Eq.~(\ref{tuS}), in the
limit of long dwell times, $t_{\rm dw}\gg t_{\rm u}$, we obtain
\begin{align}
\tau_M^{\rm u*}/t^*_{\rm u}\propto const,
\end{align}
while in the limit of short dwell times, $t_{\rm dw}\ll t_{\rm u}$,
\begin{align}
\tau_M^{\rm u*}/t^*_{\rm u}\propto \frac{1}{n^{2/3}t_{\rm sw}}.
\end{align}
The results above lead to conclusions similar to those reached when analyzing the atomic lifetimes.
Note that for the particular case of RbK mixtures, the typical molecular lifetimes $\tau^{\rm u}_M$
and $\tau^{\rm u*}_M$ are longer than the atomic lifetimes analyzed above.
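A corresponding sketch for the molecular channel follows from the expression for $\beta_{\rm u}$ above; the identification of the colliding pair as a RbK molecule plus a Rb atom is an assumption made here for illustration.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34; kB = 1.380649e-23
amu  = 1.66053906660e-27
m_Rb, m_K = 86.909 * amu, 40.962 * amu
# Atom-molecule reduced mass, assuming a RbK molecule colliding with a Rb atom
mu_ad = (m_Rb + m_K) * m_Rb / (2.0 * m_Rb + m_K)

def beta_unitary(T, eta=0.12):
    """Unitarity-limited atom-molecule relaxation rate [m^3/s]."""
    return (np.sqrt(2.0 * np.pi) * hbar**2
            / (mu_ad**1.5 * (kB * T)**0.5) * (1.0 - np.exp(-4.0 * eta)))

def tau_molecular(n, T, eta=0.12):
    """Molecular lifetime 1/(n*beta_u) at density n [1/m^3], temperature T [K]."""
    return 1.0 / (n * beta_unitary(T, eta))

# ~0.9 s at n = 1e8/cm^3 and T = 100 pK, cf. the 0.8 s quoted above
print(f"{tau_molecular(1e14, 100e-12):.2f} s")
\end{verbatim}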
\begin{figure*}[htbp]
\centering
\includegraphics[width=6.8in]{MolecularFraction.eps}
\caption{Fraction of molecules produced as a function of $t_{\rm sw}$ for various densities.
Dashed curves represent results for tMA while solid curves are those for qMA at different dwell times, $t_{\rm dw}$.
In all cases the initial and final states are characterized by $a_i=-2r_{\rm vdW}$ and $a_f=100r_{\rm vdW}$, respectively.
Note that at $t_{\rm sw} = 0$, tMA results are identical to those from qMA at $t_{\rm sw}=t_{\rm dw} = 0$. (See text for more discussion of these results and comparisons.) The figure also displays the corresponding values of $E_n$ and $t_{n}$.}
\label{fig:resultsRealTime}
\end{figure*}
Overall, the analysis above indicates that the detrimental effects of atomic and molecular losses on
magneto-association are in general mitigated in the low-density regime, regardless of the magneto-association
scheme.
NASA-CAL fully takes advantage of this property due to its unique capability to provide ultralow-density
samples.
The qMA scheme, however, seems to be more stable, in particular at higher densities,
due to its independence of temperature and the much more favorable ratio between lifetime and interaction
times, $\tau^{\rm u*}_A/t^*_{\rm u}$. The caveat of using quenches is that the final temperature of the
molecular sample will directly depend on the density (via its dependence on $E_n$),
an effect that can once again be mitigated at low densities. Although such prospects are positive with
respect to losses in the low-density regime, the question we now focus on is how the actual efficiencies of
the two magneto-association schemes compare in this regime and whether association
occurs on time scales shorter than those of the atomic and molecular losses. We provide such an analysis
in the next section.
\section{Molecular Association}\label{sec:results}
In this section we present our numerical simulation for both tMA and qMA near the $^{87}$Rb-$^{41}$K Feshbach
resonance at $B_0=39.4$~G~\cite{simoni2008PRA} (Fig. \ref{fig:aVSb}).
In each case, we select $a_i=-2r_{\rm vdW}$ ($\approx-144$a$_0$) and $a_f=100r_{\rm vdW}$ ($\approx7230$a$_0$),
corresponding to $B_i= 63.92$~G and $B_f= 37.89$~G. Therefore, we are assuming bosonic heteronuclear Feshbach
molecules which are substantially larger than previously studied \cite{klempt2008PRA,weber2008PRA}.
We study the molecular fraction in terms of the sweeping time, $t_{\rm sw}$, dwell time, $t_{\rm dw}$,
and atomic densities.
We note that the qualitative trends in our results do not depend on the particular choices of $B_i$ and $B_f$,
so long as they are chosen off resonance, i.e., $1/|k_na|\gg1$.
Here, we chose to compute the molecular fraction for four different atomic densities, $n=$
$10^8$/cm$^{3}$, $10^9$/cm$^{3}$, $10^{10}$/cm$^{3}$, and $10^{11}$/cm$^{3}$, thereby covering the
density regime available in the NASA-CAL environment. Results are displayed in Fig.~\ref{fig:resultsRealTime},
where dashed lines represent the molecular fractions obtained via tMA,
and solid lines the ones obtained via qMA for different values of $t_{\rm dw}$
[see the color-coded legend in the inset of Fig.~\ref{fig:resultsRealTime}(d)]. For each density, we have also indicated
the corresponding values of $E_n$ and $t_n$. These calculations yield a few crucial observations, valid for
all densities considered.
First, as shown in Fig.~\ref{fig:resultsRealTime}, in the low-density regime qMA
produces a higher molecular fraction than tMA for the same value of $t_{\rm sw}$.
This immediately implies the most significant result of this study:
within a given $t_{\rm sw}$, a quench with finite $t_{\rm dw}$ followed by a $B$-field sweep away from unitarity will in general
produce a larger molecular fraction than a pure $B$-field sweep in the same amount of time. Therefore, in order
to produce a larger fraction of molecules in the shortest amount of time, qMA is clearly the optimal choice.
The faster molecular formation within qMA is crucially important since it ensures the mitigation of atomic
and molecular losses, in particular at higher densities, as discussed in Section~\ref{ssec:losses}
(see Fig.~\ref{fig:losses}). Figure \ref{fig:resultsRealTime} also indicates that qMA reaches an efficiency of
about $80\%$, reaching this maximum at smaller values of $t_{\rm sw}/t_n$ as $n$ increases.
The finite efficiency of qMA is likely associated with the nature of the quenched state in
Eq.~(\ref{PsiQ}): population in excited states is more likely to remain atomic than to follow the target molecular
state when the $B$-field is swept away from the interaction region $1/|k_na|<1$. A more complete analysis of this
topic is beyond the scope of our present study.
For tMA, although one would in theory expect a nearly $100\%$ efficiency as $t_{\rm sw}\rightarrow\infty$, Fig.~\ref{fig:resultsRealTime} clearly shows that atomic losses (see Fig.~\ref{fig:losses}) will take place well before
this efficiency can be reached.
In order to better understand the physical aspects behind the superiority of qMA over tMA,
we look at the values of the molecular fraction at $t_{\rm sw}=0$ in Fig.~\ref{fig:resultsRealTime}.
This value indicates the quality of the overlap between the initial state (prior to the $B$-field sweep) and the
final molecular state. As shown in Fig.~\ref{fig:resultsRealTime}, while for tMA and qMA ($t_{\rm dw}=0$)
the molecular fractions are nearly identical, for qMA the fraction increases with $t_{\rm dw}$. This is consistent with the
experimental observations in Ref.~\cite{mark2005EPL} and demonstrates that the dwell time plays an important role
in the state preparation by letting correlations evolve in the interaction region $1/|k_na|<1$.
We note, however, that for $t_{\rm sw}>0$ the sweeping rates implied in Fig.~\ref{fig:resultsRealTime} are,
in fact, different. While for tMA the sweep rate is $\alpha_{B}=|B_f-B_i|/t_{\rm sw}\approx26.03{\rm G}/t_{\rm sw}$,
in qMA the rate is $\alpha^*_{B}=|B_f-B_0|/t_{\rm sw}\approx1.51{\rm G}/t_{\rm sw}$.
The slower values of $\alpha^*_{B}$ thus partially explain the higher molecular fraction obtained via qMA for a
given value of $t_{\rm sw}>0$.
In order to make a more direct comparison between the efficiencies of the two schemes we should compare, for instance,
the molecular fractions obtained for sweep rates that lead to the same time spent within the interaction
region, $1/|k_na|\leq1$. Considering that in tMA one spends twice as much time sweeping through this region as in
qMA, we seek values of $t_{\rm sw}$ for which $\alpha^*_B=\alpha_{B}/2$. We note, however, that since in tMA and qMA
the system experiences interactions differently during the time spent sweeping across $1/|k_na|<1$, our comparison
here can only provide a rough analysis. The comparison is shown in Fig.~\ref{fig:resultsRealTime}. We indicate
the molecular fraction for tMA at $t_{\rm sw}=25t_n$ by a solid circle, corresponding to the rate $\alpha_B=1.04{\rm G}/t_n$.
This value should then be compared to those obtained via qMA at $t_{\rm sw}=2.90t_n$, as indicated by the open circles
in Fig.~\ref{fig:resultsRealTime}. This analysis again demonstrates that for low densities, as $t_{\rm dw}$ increases,
the efficiency of qMA surpasses that of tMA, while still providing a much faster scheme to associate atoms
into molecules.
\section{Conclusion}
In this manuscript we have investigated methods to produce molecules via magneto-association near a Feshbach
resonance, focusing on the low-density regime relevant to the microgravity environment of NASA-CAL.
Based on the trends discovered in our computations, we conclude that qMA can generally be made superior to tMA.
Within qMA, we found that the dwell time in the interaction region, $1/|k_na|<1$, allows correlations
to develop, providing a much more efficient scheme for associating atoms into molecules. Our results
show that qMA allows for a much higher association efficiency ($\sim80\%$) within considerably faster
time scales than tMA. This allows for further mitigation of atomic and molecular losses, regardless of the
density regime.
In further studies of molecular production, several more complicated aspects of the system could be
investigated. For instance, one could dynamically introduce the various scattering-length-dependent loss
processes and attempt to optimize molecular production constrained by the loss time scales in a more rigorous way than
presented here. Also, while the analysis in this study considered purely two-body interactions, one could extend
the analysis by incorporating three-body effects, which would then have to consider both Rb-Rb-K and Rb-K-K
interactions, as well as the formation of triatomic Efimov states existing in the system \cite{klauss2017PRL}.
\acknowledgments
This research was carried out under a contract with the National Aeronautics and Space
Administration. KW acknowledges support provided by the Undergraduate Research Opportunities Program
(UROP) at the University of Colorado Boulder. JPD also acknowledges partial support from the U. S.
National Science Foundation, grant number PHY-2012125.
\section{Introduction}
Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples that are intentionally crafted by replacing, scrambling, and erasing characters \cite{gao2018black,ebrahimi-etal-2018-hotflip} or words \cite{alzantot2018generating,ren-etal-2019-generating, zheng2020evaluating, Jin_Jin_Zhou_Szolovits_2020, li2020bert} under certain semantic and syntactic constraints.
These adversarial examples are imperceptible to humans but can easily fool deep neural networks.
The existence of adversarial examples has raised serious concerns, especially when deploying such NLP models to security-sensitive tasks.
Many methods have been proposed to defend against adversarial attacks for neural NLP models. Most of them are evaluated empirically, including adversarial data augmentation \cite{Jin_Jin_Zhou_Szolovits_2020,zheng2020evaluating}, adversarial training \cite{madry2018towards, zhu2019freelb}, Dirichlet Neighborhood Ensemble (DNE) \cite{zhou2020defense} and so on.
Although the above-mentioned methods can empirically defend against the attack algorithms used during training, the trained models often cannot survive other, stronger attacks.
Certified defense methods have recently been proposed~\cite{jia-etal-2019-certified,huang-etal-2019-achieving} by certifying the performance within the convex hull formed by the embeddings of a word and its synonyms.
However, due to the difficulty of propagating convex hull through deep neural networks, they compute a loose outer bound using Interval Bound Propagation (IBP).
As a result, IBP-based certified defense methods are hard to scale to large architectures such as BERT \cite{devlin2018bert}.
\begin{figure*}
\centering
\small
\includegraphics[width = 16.0cm]{figures/Mask.pdf}
\caption{\label{fig:mask} Given the original sentence at the top, we assume that an adversarial example is created by replacing the word ``grades'' with ``scores'' and ``football'' with ``fo0tba1l''. Taking the adversarial example as input, we randomly mask three words (indicated by {\fontfamily{qcr}\selectfont [MASK]}) in the input and generate a set of masked copies. A base classifier is then used to label each of these masked copies (only five are shown here for clarity), and the prediction scores on the masked texts are ensembled to get a robust output.}
\end{figure*}
To achieve certified robustness on large architectures,
\citet{ye-etal-2020-safer} proposed SAFER,
a randomized smoothing method that can provably ensure that the prediction cannot be altered by any possible synonymous word substitution.
However, existing certified defense methods assume that the defenders know how the adversaries generate synonyms, which is not a realistic scenario since we cannot impose a limitation on the synonym table used by the attackers.
Actually, existing adversarial attack algorithms for NLP models may use a synonym table in which a single word can have many (up to $50$) synonyms \cite{Jin_Jin_Zhou_Szolovits_2020}, generate synonyms dynamically by using BERT \cite{li2020bert}, or perform character-level perturbations
\cite{gao2018black,li2019textbugger} to launch attacks.
In this paper, we propose \textbf{RanMASK}, a certifiably robust defense method against text adversarial attacks based on a new randomized smoothing technique for NLP models.
The proposed method works by repeatedly performing random ``mask'' operations on an input text, in order to generate a large set of masked copies of the text.
A base classifier is then used to classify each of these masked texts, and the final robust classification is made by ``majority vote'' (see Figure \ref{fig:mask}).
At training time, the base classifier is also trained on similarly masked text samples.
The key idea is that, if a sufficient number of words are randomly masked from a text before the text is given to the base classifier and a relatively small number of words have been intentionally perturbed, then it is highly unlikely that all of the perturbed words (adversarially chosen) are present in a given masked text.
Note that the perturbed words that do remain are often not enough to fool the base classifier.
Given a text $\boldsymbol{x}$ and a potentially adversarial text $\boldsymbol{x}'$, if we use a statistically sufficient number of randomly masked samples, and if the observed ``gap'' between the number of ``votes'' for the top class and the number of ``votes'' for any other class at $\boldsymbol{x}$ is sufficiently large, then we can guarantee with high probability that the robust classification at $\boldsymbol{x}'$ will be the same as it is at $\boldsymbol{x}$.
Therefore, we can prove that with high probability, the smoothed classifier will label $\boldsymbol{x}$ robustly against any text adversarial attack which is allowed to perturb a certain number of words in an input text at both word-level and character-level.
The major advantage of our method over existing certified defense methods is that our certified robustness is not based on the assumption that the defenders know how the adversaries generate synonyms.
Given a text, the adversaries are allowed to replace a few words with their synonyms (word-level perturbation) or deliberately misspell a few of them (character-level perturbation).
Through extensive experiments on multiple datasets, we show that RanMASK achieves better performance on adversarial examples than existing defense methods.
Experimentally, we can certify the classifications of over $50\%$ of sentences to be robust to any perturbation of $5$ words on the AGNEWS dataset, and of $2$ words on SST2.
Furthermore, unlike most certified defenses (except SAFER), the proposed method is easy to implement and can be integrated into any existing neural network including those with large architecture such as BERT \cite{devlin2018bert}.
\section{Related Work}
Many defense methods have been proposed to defend against text adversarial attacks, which can roughly be divided into two categories: \emph{empirical} \cite{miyato2016adversarial,sato2018interpretable,zhou2020defense,iclr-21:Dong} and \emph{certified} \cite{jia-etal-2019-certified,huang-etal-2019-achieving,ye-etal-2020-safer} methods.
Adversarial data augmentation is one of the most effective empirical defenses \cite{ren2019generating, Jin_Jin_Zhou_Szolovits_2020, li2020bert} for NLP models.
At training time, these methods replace words with their synonyms to create adversarial examples.
By augmenting the original training data with these adversarial examples, the model becomes more robust to such perturbations.
However, the number of possible perturbations scales exponentially with sentence length, so data augmentation cannot cover all perturbations of an input text.
\citet{zhou2020defense} and \citet{iclr-21:Dong} use a convex hull formed by a word and its synonyms to capture word substitution-based perturbations, and they guarantee with high probability that the model is robust at any point within the convex hull.
Adversarial training \cite{miyato2016adversarial, zhu2019freelb} is another successful empirical defense method, which adds norm-bounded adversarial perturbations to word embeddings and minimizes the resulting adversarial loss.
However, such methods focus on the effects on generalization rather than on robustness against adversarial attacks.
Although the above empirical methods can experimentally defend against the attack algorithms used during training,
the downside of such methods is that failure to discover an adversarial example does not mean that a more sophisticated attack could not find one.
Recently, a set of certified defenses has been introduced, which guarantees the robustness to some specific types of attacks.
For example, Jia et al. \shortcite{jia-etal-2019-certified} and Huang et al. \shortcite{huang-etal-2019-achieving} use a bounding technique, Interval Bound Propagation \cite{arxiv-18:Gowal,arxiv-18:Dvijotham},
to formally verify a model's robustness against word substitution-based perturbations.
However, these certified defense methods are usually hard to scale to large networks, such as BERT.
\citet{ye-etal-2020-safer} proposed a certified robust method, called SAFER, which is structure-free and can be applied to arbitrary large architectures.
All existing certified defense methods make an unrealistic assumption that the defenders can access the synonyms used by the adversaries.
They could be broken by more sophisticated attacks by using synonym sets with large size \cite{Jin_Jin_Zhou_Szolovits_2020}, generating synonyms dynamically with BERT \cite{li2020bert}, or perturbing the inputs at the character-level
\cite{gao2018black,li2019textbugger}.
In this paper, we show that randomized smoothing can be integrated with a random masking strategy to boost the certified robust accuracy.
In contrast to existing certified robust methods, the above unrealistic assumption is no longer required.
Furthermore, NLP models trained with our defense method can defend against both word substitution-based attacks and character-level perturbations.
\section{Preliminaries and Notation}
For text classification, a neural network-based classifier $f(\boldsymbol{x})$ maps an input text $\boldsymbol{x} \in \mathcal{X}$ to a label $y \in \mathcal{Y}$, where $\boldsymbol{x} = x_1, \dots, x_h$ is a text consisting of $h$ words and $\mathcal{Y}$ is a set of discrete categories.
Given an original input $\boldsymbol{x}$, adversarial examples are created to cause a model to make mistakes.
The adversaries may create an adversarial example $\boldsymbol{x}' = x_1', \dots, x_h'$ by perturbing at most $d \leq h$ words in $\boldsymbol{x}$.
We say $\boldsymbol{x}'$ is a good adversarial example of $\boldsymbol{x}$ for untargeted attack if:
\begin{equation}
\small
f(\boldsymbol{x}') \neq y, \quad
\norm{\boldsymbol{x} - \boldsymbol{x}'}_0 \leq d
\end{equation}
\noindent where $y$ is the true label of $\boldsymbol{x}$, and $\norm{\boldsymbol{x} - \boldsymbol{x}'}_0 = \sum_{i=1}^h{\mathbb{I}\{x_i \neq x_i'\}}$ is the Hamming distance, with $\mathbb{I}\{ \cdot \}$ the indicator function.
For character-level perturbations, $x'_i$ is a visually similar misspelling or typo of $x_i$, and for word-level substitutions, $x_i'$ is any of $x_i$'s synonyms, where the synonym sets are chosen by the adversaries, which may not be known by the defenders.
A model $f$ is said to be \emph{certified robust} against adversarial attacks on an input $\boldsymbol{x}$ if it is able to give consistently correct predictions for all the possible adversarial examples $\boldsymbol{x}'$ under the constraint of $\norm{\boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$.
Let $\boldsymbol{x} \ominus \boldsymbol{x}'$ denote the set of word indices at which $\boldsymbol{x}$ and $\boldsymbol{x}'$ differ, so that $|\boldsymbol{x} \ominus \boldsymbol{x}'| = \norm{\boldsymbol{x} - \boldsymbol{x}'}_0$.
For example, if $\boldsymbol{x} = $ ``{\fontfamily{qcr}\selectfont A B C E D}'' and $\boldsymbol{x}' = $ ``{\fontfamily{qcr}\selectfont A F C G D}'', $\boldsymbol{x} \ominus \boldsymbol{x}' = \{2, 4\}$ and $|\boldsymbol{x} \ominus \boldsymbol{x}'| = 2$.
Also, let $\mathcal{S}$ denote the set of indices $\{1, \dots, h\}$, $\mathcal{I}(h, k) \subseteq \mathcal{P}(\mathcal{S}) $ all sets of $k$ unique indices in $\mathcal{S}$, where $\mathcal{P}(\mathcal{S})$ is the power set of $\mathcal{S}$, and
$\mathcal{U}(h, k)$ the uniform distribution over $\mathcal{I}(h, k)$.
Note that to sample from $\mathcal{U}(h, k)$ is to sample $k$ out of $h$ indices uniformly without replacement.
For instance, an element sampled from $\mathcal{U}(5, 3)$ might be $\{1, 3, 5\}$.
We define a \emph{mask} operation $\mathcal{M}: \mathcal{X} \times \mathcal{I}(h, k) \rightarrow \mathcal{X}_{\text{mask}}$, where $\mathcal{X}_{\text{mask}}$ is a set of texts in which some words may be masked.
This operation takes a text of length $h$ and a set of indices as inputs and outputs the text with all words except those at the given indices replaced by a special token {\fontfamily{qcr}\selectfont [MASK]}.
For example, $\mathcal{M}(\text{``{\fontfamily{qcr}\selectfont A F C G D}''}, \{1, 3, 5\}) = $ ``{\fontfamily{qcr}\selectfont A [MASK] C [MASK] D}''.
Following \citet{devlin2018bert}, we use {\fontfamily{qcr}\selectfont [MASK]} to replace the words masked.
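To make the notation concrete, a minimal Python sketch of the sampling and mask operations is given below (the token string and the 1-based indexing follow the definitions above).
\begin{verbatim}
import random

MASK = "[MASK]"

def sample_indices(h, k):
    """Draw an element of U(h, k): k distinct indices from {1, ..., h}."""
    return set(random.sample(range(1, h + 1), k))

def mask(words, keep):
    """The mask operation M: keep words whose (1-based) index is in `keep`
    and replace every other word with the [MASK] token."""
    return [w if i in keep else MASK for i, w in enumerate(words, start=1)]

# Example from the text: M("A F C G D", {1, 3, 5}) = "A [MASK] C [MASK] D"
print(" ".join(mask("A F C G D".split(), {1, 3, 5})))
\end{verbatim}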
\section{RanMASK: Certified Defense Method}
Inspired by the works of \citet{pmlr-v97-cohen19c} and \citet{levine2020robustness} in the image domain, our idea is to replace the base model $f$ with a smoothed model that is easier to verify, obtained by ensembling the outputs of a number of randomly masked inputs.
In particular, let $f: \mathcal{X}_{\text{mask}} \rightarrow \mathcal{Y}$ be a base classifier, which is trained to classify texts with some words masked, and we define a smoothed classifier $g(\boldsymbol{x})$:
\begin{equation}
\small
g(\boldsymbol{x}) = \argmax_{c \in \mathcal{Y}} \left[ \underset{\mathcal{H} \sim \mathcal{U}(h_{\boldsymbol{x}}, k_{\boldsymbol{x}})}{\mathbb{P}} (f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c) \right]
\end{equation}
\noindent where $h_{\boldsymbol{x}}$ is the length of $\boldsymbol{x}$, and $k_{\boldsymbol{x}}$ is the number of words retained (not masked) from $\boldsymbol{x}$, which is calculated by $\lfloor h_{\boldsymbol{x}} - \rho \times h_{\boldsymbol{x}} \rceil$, where $\rho$ is the percentage of words that are masked.
To simplify notation, we let $p_c(\boldsymbol{x})$ denote the probability that, after randomly masking, $f$ returns the class $c$:
\begin{equation}
\label{equ-pcx}
\small
p_c(\boldsymbol{x}) = \underset{\mathcal{H} \sim \mathcal{U}(h_{\boldsymbol{x}}, k_{\boldsymbol{x}})}{\mathbb{P}} (f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c).
\end{equation}
Then, $g(\boldsymbol{x})$ can be defined as $\argmax_{c \in \mathcal{Y}} p_c(\boldsymbol{x})$.
In other words, $g(\boldsymbol{x})$ denotes the class most likely to be returned if we first randomly mask all but $k_{\boldsymbol{x}}$ words from $\boldsymbol{x}$ and then classify the resulting text with the base classifier.
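A Monte Carlo sketch of this smoothed classifier is shown below; it reuses the \texttt{mask} and \texttt{sample\_indices} helpers above, and \texttt{base\_classifier} stands for any function mapping a token list to a label.
\begin{verbatim}
from collections import Counter

def smoothed_classify(x_words, base_classifier, rho=0.9, n=1000):
    """Estimate g(x): classify n randomly masked copies of x and return
    the majority class together with its empirical vote share p_c(x)."""
    h = len(x_words)
    k = max(round(h - rho * h), 1)   # words kept, k_x = round(h - rho*h)
    votes = Counter()
    for _ in range(n):
        votes[base_classifier(mask(x_words, sample_indices(h, k)))] += 1
    c, n_c = votes.most_common(1)[0]
    return c, n_c / n
\end{verbatim}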
\subsection{Certified Robustness}
We are ready to prove the following theorem, which can be used to develop a robustness certificate.
\begin{theorem}
\label{theorem-pcx-pcxpai-diff}
Given texts $\boldsymbol{x}$ and $\boldsymbol{x}'$ with $\norm{ \boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$, for every class $c \in \mathcal{Y}$ we have:
\begin{equation}
\small
p_c(\boldsymbol{x}) - p_c(\boldsymbol{x}') \leq \beta \Delta
\end{equation}
where
\begin{equation}
\label{equ-beta-definition}
\small
\begin{gathered}
\Delta = 1 - \frac{\tbinom{h_{\boldsymbol{x}} - d}{k_{\boldsymbol{x}}}}{\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}},
\\
\beta = \mathbb{P}(f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c \mid \mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset).
\end{gathered}
\end{equation}
\end{theorem}
\begin{proof}
Recall that $\mathcal{H} \sim \mathcal{U}(h_{\boldsymbol{x}}, k_{\boldsymbol{x}})$; then:
\begin{equation}
\small
\begin{gathered}
p_c(\boldsymbol{x}) = \mathbb{P} (f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c),
\\
p_c(\boldsymbol{x}') = \mathbb{P} (f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c).
\end{gathered}
\end{equation}
\noindent By the law of total probability, we have:
\begin{equation}
\label{equ-pcx-pcxpai}
\small
\begin{aligned}
p_c(\boldsymbol{x}) = & \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset]) +
\\
& \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset])
\\
p_c(\boldsymbol{x}') = & \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset ]) +
\\
& \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset]).
\end{aligned}
\end{equation}
\noindent
Note that if $\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset$, then $\boldsymbol{x}$ and $\boldsymbol{x}'$ are identical at all indices in $\mathcal{H}$.
In this case, we have $\mathcal{M}(\boldsymbol{x},\mathcal{H}) = \mathcal{M}(\boldsymbol{x}',\mathcal{H})$, which implies:
\begin{equation}
\label{equ-equal-conditional-sample-empty}
\small
\begin{aligned}
\mathbb{P}(f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c \mid \mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset) & =
\\
\mathbb{P}(f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c \mid \mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset) &
\end{aligned}
\end{equation}
\noindent Multiplying both sides of (\ref{equ-equal-conditional-sample-empty}) by $\mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset)$, we obtain:
\begin{equation}
\label{equ-equal-allprob-sample-empty}
\small
\begin{aligned}
\mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset]) & =
\\
\mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset]) &
\end{aligned}
\end{equation}
\noindent By (\ref{equ-pcx-pcxpai}), (\ref{equ-equal-allprob-sample-empty}) and the non-negativity of probability, subtracting $p_c(\boldsymbol{x}')$ from $p_c(\boldsymbol{x})$ yields:
\begin{equation}
\small
\label{equ-subtraction}
\begin{aligned}
p_c(&\boldsymbol{x}) - p_c(\boldsymbol{x}') =
\\
& \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset]) -
\\
& \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}', \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset])
\\
\leq & \mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset])
\end{aligned}
\end{equation}
\noindent By the definition of $\beta$, we have
\begin{equation}
\small
\label{equ-beta}
\begin{aligned}
\mathbb{P} ([f(\mathcal{M}(\boldsymbol{x}, \mathcal{H})) = c] \wedge [\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset]) & = \\ \beta \times \mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset) &
\end{aligned}
\end{equation}
\noindent Substituting (\ref{equ-beta}) into (\ref{equ-subtraction}) gives:
\begin{equation}
\label{equ-pcx-pcxpai-inequality}
\small
p_c(\boldsymbol{x}) - p_c(\boldsymbol{x}') \leq \beta \times \mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset)
\end{equation}
Note that,
\begin{equation}
\small
\mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset) = \frac{\tbinom{h_{\boldsymbol{x}} - |\boldsymbol{x} \ominus \boldsymbol{x}'|}{k_{\boldsymbol{x}}}}{\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}} = \frac{\tbinom{h_{\boldsymbol{x}} - \norm{\boldsymbol{x} - \boldsymbol{x}'}_0}{k_{\boldsymbol{x}}}}{\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}}
\end{equation}
\noindent where the last equality follows since $\mathcal{H}$ is a uniform choice of $k_{\boldsymbol{x}}$ elements from $h_{\boldsymbol{x}}$: there are $\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}$ total ways to make this selection, of which $\tbinom{h_{\boldsymbol{x}} - |\boldsymbol{x} \ominus \boldsymbol{x}'|}{k_{\boldsymbol{x}}}$ do not contain any element from $\boldsymbol{x} \ominus \boldsymbol{x}'$.
\noindent By the constraint of $\norm{ \boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$, we have:
\begin{equation}
\label{equ-delta-inequality}
\small
\begin{aligned}
\mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') \neq \emptyset) & =
1 - \mathbb{P} (\mathcal{H} \cap (\boldsymbol{x} \ominus \boldsymbol{x}') = \emptyset)
\\
& = 1 -\frac{\tbinom{h_{\boldsymbol{x}} - \norm{\boldsymbol{x} - \boldsymbol{x}'}_0}{k_{\boldsymbol{x}}}}{\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}}
\\
& \leq 1 - \frac{\tbinom{h_{\boldsymbol{x}} - d}{k_{\boldsymbol{x}}}}{\tbinom{h_{\boldsymbol{x}}}{k_{\boldsymbol{x}}}} = \Delta
\end{aligned}
\end{equation}
\noindent Combining inequalities (\ref{equ-pcx-pcxpai-inequality}) and (\ref{equ-delta-inequality}) gives the statement of Theorem \ref{theorem-pcx-pcxpai-diff}.
\end{proof}
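The quantity $\Delta$ in Theorem \ref{theorem-pcx-pcxpai-diff} depends only on the text length, the number of retained words, and $d$, and can be computed exactly, as in the short sketch below.
\begin{verbatim}
from math import comb

def delta(h, k, d):
    """Delta of Eq. (equ-beta-definition): probability that a uniform choice
    of k retained indices out of h hits at least one of d perturbed ones."""
    if d > h - k:          # overlap is then certain
        return 1.0
    return 1.0 - comb(h - d, k) / comb(h, k)

# Example: a 20-word text with 90% masking (k = 2 kept) and d = 5
print(delta(20, 2, 5))     # ~0.45
\end{verbatim}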
It is impossible to exactly compute the probabilities with which $f$ classifies $\mathcal{M}(\boldsymbol{x}, \mathcal{H})$ as each class.
Therefore, it is also impossible to exactly evaluate $p_c(\boldsymbol{x})$ and $g(\boldsymbol{x})$ at any input $\boldsymbol{x}$.
Let $\underline{p_c(\boldsymbol{x})}$ denote a lower bound on $p_c(\boldsymbol{x})$ that holds with confidence $(1 - \alpha)$.
Following \citet{pmlr-v97-cohen19c} and \citet{jia-etal-2019-certified}, we estimate $\underline{p_c(\boldsymbol{x})}$ using the standard one-sided Clopper-Pearson method \cite{clopper1934use}. Specifically, we randomly construct $n$ masked copies of $\boldsymbol{x}$, then count the votes for class $c$ as $n_c = \sum_{i=1}^{n} {\mathbb{I}(f(\mathcal{M}(\boldsymbol{x}, \mathcal{H}_i)) = c)}$ according to the outputs of $f$ (see Algorithm \ref{alg_overall_conclusion} for details).
Assuming that $n_c$ follows a binomial distribution with parameters $n$ and $p_c$, i.e., $n_c \sim \text{B}(n, p_c)$, we have:
\begin{equation}
\label{equ-lower-bound}
\small
\underline{p_c(\boldsymbol{x})} = \text{Beta}(\alpha; n_c, n - n_c + 1),
\end{equation}
\noindent where $\text{Beta}(\alpha; u, v)$ is the $\alpha$-th quantile of a beta distribution with parameters $u$ and $v$, and $(1 - \alpha)$ is the confidence level.
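This lower bound is available in standard statistics libraries; a minimal sketch using SciPy's beta quantile function is given below.
\begin{verbatim}
from scipy.stats import beta as beta_dist

def lower_confidence_bound(n_c, n, alpha=0.05):
    """One-sided (1 - alpha) Clopper-Pearson lower bound on p_c(x):
    the alpha-quantile of Beta(n_c, n - n_c + 1)."""
    if n_c == 0:
        return 0.0
    return beta_dist.ppf(alpha, n_c, n - n_c + 1)

# Example: 4,600 out of 5,000 masked copies voted for class c
print(lower_confidence_bound(4600, 5000))   # ~0.913
\end{verbatim}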
\begin{corollary}
\label{corollary-certified}
For text $\boldsymbol{x}$, $\boldsymbol{x}'$, $\norm{ \boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$, if
\begin{equation}
\label{equ-certified-condition}
\small
\underline{p_c(\boldsymbol{x})} - \beta \Delta > 0.5
\end{equation}
then, with probability at least $(1 - \alpha)$:
\begin{equation}
\small
g(\boldsymbol{x}') = c
\end{equation}
\end{corollary}
\begin{proof}
With probability at least $(1 - \alpha)$ we have:
\begin{equation}
\small
0.5 < \underline{p_c(\boldsymbol{x})} - \beta \Delta \leq p_c(\boldsymbol{x}) - \beta \Delta \leq p_c(\boldsymbol{x}')
\end{equation}
\noindent where the last inequality is from Theorem \ref{theorem-pcx-pcxpai-diff}. Then $g(\boldsymbol{x}') = c$ follows from the definition of $g$.
If $c = y$ (the true label of $\boldsymbol{x}$) and $\underline{p_c(\boldsymbol{x})} - \beta \Delta > 0.5$,
the smoothed classifier $g$ is \emph{certified robust} at the input $\boldsymbol{x}$.
\end{proof}
\subsection{Estimating the Value of $\beta$}
We discuss how to estimate the value of $\beta$ defined in Theorem \ref{theorem-pcx-pcxpai-diff} here.
Recall that $\beta$ is the probability that $f$ labels the masked copies of $\boldsymbol{x}$ with the class $c$, given that the indices of the unmasked words overlap with $\boldsymbol{x} \ominus \boldsymbol{x}'$ (i.e., the set of word indices at which $\boldsymbol{x}$ and $\boldsymbol{x}'$ differ).
We use a Monte Carlo algorithm to evaluate $\beta$ by sampling a large number of elements from $\mathcal{U}(h_{\boldsymbol{x}}, k_{\boldsymbol{x}})$.
To simplify notation, we let $r$ denote the value of $|\boldsymbol{x} \ominus \boldsymbol{x}'|$.
\begin{algorithm}[h]
\small
\setstretch{1.0}
\caption{For estimating the value of $\beta$}
\label{alg_estimating_beta}
\begin{algorithmic}[1]
\Procedure{BetaEstimator}{$\boldsymbol{x},h_{\boldsymbol{x}},k_{\boldsymbol{x}},r,f,n_r,n_k$}
\State $\beta \gets 0$
\State $\mathcal{A}$ $\gets$ Sampling $n_r$ elements from $\mathcal{U}(h_{\boldsymbol{x}},r)$
\For{ each $a$ in $\mathcal{A}$ }
\State $\mathcal{B}$ $\gets$ Sampling $n_k$ elements from $\mathcal{U}(h_{\boldsymbol{x}},k_{\boldsymbol{x}})$
\For { each $b$ in $\mathcal{B}$ }
\If {$a \cap b = \emptyset$}
\State $\mathcal{B}.delete(b)$
\EndIf
\EndFor
\State $p_c \gets$ Using Eq. (\ref{equ-pcx}) with $f$ and $\mathcal{B}$.
\State $\beta \gets \beta + p_c$
\EndFor
\State $\beta \gets \beta / n_r$
\State \Return $\beta$
\EndProcedure
\end{algorithmic}
\end{algorithm}
The Monte Carlo-based algorithm used to evaluate $\beta$ is given in Algorithm \ref{alg_estimating_beta}.
We first sample $n_r$ elements from $\mathcal{U}(h_{\boldsymbol{x}}, r)$; each sampled element, denoted by $a$, is a set of indices at which the words are supposed to be perturbed.
For every $a$, we then sample $n_k$ elements from $\mathcal{U}(h_{\boldsymbol{x}}, k_{\boldsymbol{x}})$, each of which, denoted by $b$, is a set of indices at which the words are not masked.
We remove from these $n_k$ elements those whose intersection with $a$ is empty.
With the remaining elements and $f$, we can approximately compute the value of $\beta$.
\begin{figure}
\centering
\small
\includegraphics[width = 6.0cm]{figures/eslimating_beta_agnews.pdf}
\caption{\label{fig:js-divergence} The Jensen-Shannon divergence between the values of $\beta$ and $p_c(\boldsymbol{x})$ estimated on AGNEWS for different masking rates $\rho$.
Regardless of the value of $\rho$, all the divergence values are less than $2.5\times10^{-5}$.
Therefore, we can use $p_c(\boldsymbol{x})$ to approximate the value of $\beta$.}
\end{figure}
As the value of $r$ grows, for any $a$ it becomes more likely that $a$ overlaps with any sampled $b$, and the value of $\beta$ approaches $p_c(\boldsymbol{x})$.
To investigate how close the values of $\beta$ and $p_c(\boldsymbol{x})$ are to each other, we conducted an experiment on the test set of AGNEWS, in which $n_r = 200$ and $n_k = 10,000$, and used the Jensen-Shannon divergence
to calculate the distance between these two distributions.
As can be seen from Figure \ref{fig:js-divergence}, no matter what the value of $\rho$ is, all the Jensen-Shannon divergence values are very small, less than $2.5\times10^{-5}$.
Therefore, we use $p_c(\boldsymbol{x})$ to approximate the value of $\beta$, namely $\beta \approx p_c (\boldsymbol{x})$,
in all the following experiments.
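For completeness, a compact Monte Carlo sketch of Algorithm \ref{alg_estimating_beta} is shown below; it reuses the \texttt{mask} and \texttt{sample\_indices} helpers, and \texttt{indicator\_f} is assumed to return 1 when the base classifier outputs the target class $c$ and 0 otherwise.
\begin{verbatim}
def estimate_beta(x_words, indicator_f, k, r, n_r=200, n_k=10000):
    """Estimate beta: the chance the base classifier outputs class c on a
    masked copy, given the kept indices overlap the r perturbed positions."""
    h = len(x_words)
    total = 0.0
    for _ in range(n_r):
        a = sample_indices(h, r)        # hypothetical perturbed positions
        votes, kept = 0, 0
        for _ in range(n_k):
            b = sample_indices(h, k)    # retained (unmasked) positions
            if a & b:                   # keep only overlapping samples
                kept += 1
                votes += indicator_f(mask(x_words, b))
        total += votes / max(kept, 1)
    return total / n_r
\end{verbatim}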
\begin{algorithm}[h]
\small
\setstretch{1.0}
\caption{For prediction and certification}
\label{alg_overall_conclusion}
\begin{algorithmic}[1]
\Procedure {ClassifierG} {{$\boldsymbol{x}, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n$}}
\State $\mathcal{B} \gets$ sampling $n$ elements from $\mathcal{U}(h_{\boldsymbol{x}},k_{\boldsymbol{x}})$
\State $counts \gets 0$ for each label $c \in \mathcal{Y}$
\For { each $\mathcal{H}$ in $\mathcal{B}$ }
\State $\boldsymbol{x}_{\text{mask}} \gets \mathcal{M}(\boldsymbol{x}, \mathcal{H})$, $c \gets f(\boldsymbol{x}_{\text{mask}})$
\State $counts[c] \gets counts[c] + 1$
\EndFor
\State \Return $counts$
\EndProcedure
\Procedure {Predict} {{$\boldsymbol{x}, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n$}}
\State $counts \gets$ $\textsc{ClassifierG}(\boldsymbol{x}, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n)$
\State $\hat{c} \gets $ the top index with the maximum $counts$
\State $p_{\hat{c}} \gets counts[\hat{c}] / n$
\State \Return $\hat{c}, p_{\hat{c}} $
\EndProcedure
\Procedure {Certify} {{$\boldsymbol{x},y, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n, n', \alpha$}}
\State $\hat{c}, p_{\hat{c}} \gets \textsc{Predict}(\boldsymbol{x}, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n)$
\If {$\hat{c} \neq y $} \Return ``N/A''
\Else
\State $counts \gets \textsc{ClassifierG}(\boldsymbol{x}, h_{\boldsymbol{x}}, k_{\boldsymbol{x}},f, n')$
\State $\underline{p_y} \gets $ Using Eq. (\ref{equ-lower-bound}) with $counts[y], n'$ and $\alpha$
\State $\beta \gets counts[y] / n'$
\EndIf
\For {$d$ from $0$ to $h_{\boldsymbol{x}}$}
\State $\Delta \gets $ Using Eq. (\ref{equ-beta-definition}) with $h_{\boldsymbol{x}}, k_{\boldsymbol{x}}$ and $d$
\If {$\underline{p_y} - \beta \Delta > 0.5$} $d \gets d + 1$
\Else \hspace{0.5em} \textbf{break}
\EndIf
\EndFor
\State \Return $d$
\EndProcedure
\end{algorithmic}
\end{algorithm}
\subsection{Practical Algorithms}
In order for $g$ to classify the labeled examples correctly and robustly, the base classifier $f$ is trained to classify texts in which a fraction $\rho$ of the words are masked.
Specifically, at each training iteration, we first sample a mini-batch of examples and randomly perform the mask operation on them.
We then apply gradient descent to $f$ based on the masked mini-batch.
We present practical Monte Carlo algorithms for evaluating $g(\boldsymbol{x})$ and certifying the robustness of $g$ around $\boldsymbol{x}$ in Algorithm \ref{alg_overall_conclusion}.
Evaluating the smoothed classifier's prediction $g(\boldsymbol{x})$ requires identifying the class $c$ with maximal weight in the categorical distribution.
The procedure described in pseudocode as $\textsc{Predict}$ randomly draws $n$ masked copies of $\boldsymbol{x}$ and runs these $n$ copies through $f$.
If $c$ appears more often than any other class, then $\textsc{Predict}$ returns $c$.
Evaluating and certifying the robustness of $g$ around an input $\boldsymbol{x}$ requires not only identifying the class $c$ with maximal weight, but also estimating a lower bound $\underline{p_c(\boldsymbol{x})}$ and $\beta$.
In the procedure described as $\textsc{Certify}$, we first ensure that $g$ correctly classifies $\boldsymbol{x}$ as $y$, and then estimate the values of $\underline{p_c(\boldsymbol{x})}$ and $\beta$ by randomly generating $n'$ masked copies of $\boldsymbol{x}$, where $n'$ is much greater than $n$.
We gradually increase the value of $d$ (the number of words that can be perturbed) from $0$ by $1$, and compute $\Delta$ by Equation (\ref{equ-beta-definition}).
This process continues until $\underline{p_c(\boldsymbol{x})} - \beta \Delta \leq 0.5$ (see Corollary \ref{corollary-certified}), and when it stops $\textsc{Certify}$ returns $d$ as the maximum
\emph{certified robustness} for $\boldsymbol{x}$.
In this way, we can certify with ($1 - \alpha$) confidence that $g(\boldsymbol{x}')$ will return the label $y$ for any adversarial example $\boldsymbol{x}'$ if $\norm{\boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$.
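Putting the pieces together, the $\textsc{Certify}$ procedure can be sketched as follows, reusing \texttt{smoothed\_classify}, \texttt{lower\_confidence\_bound}, and \texttt{delta} from the earlier sketches together with the approximation $\beta \approx p_c(\boldsymbol{x})$.
\begin{verbatim}
def certify(x_words, y, base_classifier, rho=0.9, n_prime=5000, alpha=0.05):
    """Return the maximum certified robustness d for input x with label y,
    or None ("N/A") if the smoothed classifier misclassifies x."""
    h = len(x_words)
    k = max(round(h - rho * h), 1)
    c, p_hat = smoothed_classify(x_words, base_classifier, rho, n_prime)
    if c != y:
        return None
    p_low = lower_confidence_bound(round(p_hat * n_prime), n_prime, alpha)
    beta = p_hat                 # beta approximated by p_c(x), as above
    d = 0
    while d < h - k and p_low - beta * delta(h, k, d + 1) > 0.5:
        d += 1
    return d
\end{verbatim}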
\section{Experiments}
We first give the certified robustness of our RanMASK on AGNEWS \cite{zhang2015character} and SST2 \cite{socher2013recursive} datasets, and then report the empirical robustness on these datasets by comparing with other representative defense methods, including PGD-K \cite{madry2018towards}, FreeLB \cite{zhu2019freelb}, Adv-Hotflip \cite{ebrahimi-etal-2018-hotflip}, and adversarial data augmentation.
Finally, we empirically compare RanMASK with SAFER \cite{ye-etal-2020-safer}, a recently proposed certified defense that also can be applied to BERT.
We found that different randomized smoothing methods may behave quite differently when different ensemble methods are used.
We show that the ``majority-vote'' ensemble can sometimes fool the score-based attack algorithms that use the greedy search strategy.
\subsection{Implementation Details}
BERT \cite{devlin2018bert} was pretrained with a ``masked language model'' objective, and our mask operation is the same as that used in training BERT.
Therefore, we use BERT and its variant, RoBERTa \cite{liu2019roberta} as our base models.
Unless otherwise specified, all the models are trained with the AdamW optimizer \cite{loshchilov2017decoupled} with a weight decay of $10^{-6}$ and a learning rate of $5 \times 10^{-5}$, which is decayed by the cosine annealing method \cite{loshchilov2016sgdr}.
We randomly selected $1,000$ test examples for each dataset in both certified and empirical experiments.
When conducting the experiments of certified robustness, we set the uncertainty $\alpha$ to $0.05$, the number of samples $n$ for the $\textsc{Predict}$ procedure to $1,000$, and the number of samples $n'$ for $\textsc{Certify}$ to $5,000$.
To evaluate the empirical robustness of models, we set $n$ to $100$ to speed up the process.
\subsection{Results of the Certified Robustness}
We here provide the certified robustness of RanMASK on AGNEWS and SST2.
We refer to the following metrics when reporting certified results:
\begin{itemize}
\normalsize
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item The \emph{certified robustness} of a text $\boldsymbol{x}$ is the maximum $d$ for which we can certify that the smoothed classifier $g(\boldsymbol{x}')$ will return the \emph{correct} label $y$ where $\boldsymbol{x}'$ is any adversarial example of $\boldsymbol{x}$ such that $\norm{\boldsymbol{x} - \boldsymbol{x}'}_0 \leq d$.
If $g(\boldsymbol{x})$ labels $\boldsymbol{x}$ incorrectly, we define the certified robustness as ``N/A'' (see Algorithm \ref{alg_overall_conclusion}).
\item The \emph{certified rate} of a text $\boldsymbol{x}$ is the \emph{certified robustness} of $\boldsymbol{x}$ divided by $\boldsymbol{x}$'s length $h_{\boldsymbol{x}}$.
\item The \emph{median certified robustness} (MCB) on a dataset is the median value of the \emph{certified robustness} across the dataset.
It is the maximum $d$ for which the smoothed classifier $g$ can guarantee robustness for at least $50\%$ texts in the dataset.
In other words, we can certify the classifications of over $50\%$ texts to be robust to any perturbation of at most $d$ words.
When computing this median, the texts which $g$ misclassifies
are counted as having $-\infty$ certified robustness (a short sketch of this computation follows the list). For example, if the certified robustness of the texts in a dataset are $\{\text{N/A},\text{N/A},1,2,3\}$, the \emph{median certified robustness} is $1$, not $2$.
\item The \emph{median certified rate} (MCR) on a dataset is the median value of the \emph{certified rate} across the dataset, which is obtained in a similar way to MCB.
\end{itemize}
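The median computation referred to in the list above can be sketched as follows, with misclassified texts mapped to $-\infty$:
\begin{verbatim}
import statistics

def median_certified_robustness(certified):
    # Texts that g misclassifies ("N/A") count as -infinity, so they pull
    # the median down but can never be reported as a certified radius.
    values = [float("-inf") if c == "N/A" else c for c in certified]
    return statistics.median_low(values)

print(median_certified_robustness(["N/A", "N/A", 1, 2, 3]))  # 1, not 2
\end{verbatim}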
We first tested the robust classification on AGNEWS using RoBERTa \cite{liu2019roberta} as the base classifier.
As we can see from Table \ref{tb-agnews-certified}, the maximum MCB was achieved at $5$ when using the masking rate $\rho = 90\%$ or $\rho = 95\%$, indicating that we can certify the classifications of over $50\%$ sentences to be robust to any perturbation of at most $5$ words.
We chose to use the model ($\rho = 90\%$) to evaluate the empirical robustness on AGNEWS because it gives better classification accuracy.
\begin{table}[htbp]
\small
\setlength{\abovecaptionskip}{0.05cm}
\begin{center}
\setlength{\tabcolsep}{0.7mm}
\begin{tabular}{c|>{\centering\arraybackslash}p{1.8cm}|>{\centering\arraybackslash}p{1.2cm}|>{\centering\arraybackslash}p{1.8cm}}
\hline
\hline
{\bf Rate $\rho$\%} & {\bf Accuracy\%} & {\bf MCB} & {\bf MCR\%}
\\ \hline
$40$ & $96.2$ & $1$ & $2.6$ \\
$50$ & $95.7$ & $1$ & $2.7$ \\
$60$ & $95.7$ & $2$ & $5.0$ \\
$65$ & $95.0$ & $2$ & $5.0$ \\
$70$ & $94.5$ & $2$ & $5.0$ \\
$75$ & $93.9$ & $3$ & $7.0$ \\
$80$ & $92.0$ & $3$ & $7.5$ \\
$85$ & $92.2$ & $4$ & $8.8$ \\
$\bf 90$ & $\bf 91.1$ & $\bf 5$ & $\bf 11.4$ \\
$95$ & $85.8$ & $5$ & $11.8$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Robustness certificates on AGNEWS with different masking rates $\rho$.
The maximum median certified robustness (MCB) was achieved when using $\rho = 90\%$ or $95\%$.
We use the model (highlighted in bold) when evaluating the empirical robustness against adversarial attacks.
``MCR'' denotes the median certified rate.
}
\label{tb-agnews-certified}
\end{table}
We evaluated the robust classification on SST2 using RoBERTa as the base classifier.
As shown in Table \ref{tb-sst2-certified}, the maximum MCB was achieved at $2$ when $\rho = 70\%$ or $80\%$, indicating that over $50\%$ sentences are robust to any perturbation of $2$ words.
However, these two models achieve the maximum MCB at a higher cost in clean accuracy (about a $10\%$ drop compared with the best).
We chose to use the model ($\rho = 30\%$) to evaluate the empirical robustness on SST2 due to its higher classification accuracy on the clean data.
We found that it is impossible to train the models when $\rho \geq 90\%$.
Unlike AGNEWS (created for news topic classification), SST2 was constructed for sentiment analysis.
The sentiment of a text largely depends on whether a few specific sentiment words occur in the text.
All the sentiment words in a text would be masked with high probability when a large masking rate is applied, which makes it hard for any model to correctly predict the sentiment of the masked texts.
\begin{table}[htbp]
\small
\setlength{\abovecaptionskip}{0.05cm}
\begin{center}
\setlength{\tabcolsep}{0.7mm}
\begin{tabular}{c|>{\centering\arraybackslash}p{1.8cm}|>{\centering\arraybackslash}p{1.2cm}|>{\centering\arraybackslash}p{1.8cm}}
\hline
\hline
{\bf Rate $\rho$\%} & {\bf Accuracy\%} & {\bf MCB} & {\bf MCR\%}
\\
\hline
$20$ & $92.4$ & $1$ & $5.26$ \\
$\bf 30$ & $\bf 92.4$ & $\bf 1$ & $\bf 5.26$ \\
$40$ & $91.2$ & $1$ & $5.26$ \\
$50$ & $89.3$ & $1$ & $5.56$ \\
$60$ & $84.3$ & $1$ & $7.41$ \\
$70$ & $83.3$ & $2$ & $8.00$ \\
$80$ & $81.4$ & $2$ & $10.00$ \\
$90$ & $49.6$ & N/A & N/A \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Robustness certificates on SST2 with different masking rates $\rho$.
The maximum median certified robustness (MCB) was achieved when $\rho = 70\%$ or $80\%$.
We use the model (highlighted in bold) when evaluating the empirical robustness against adversarial attacks because $\rho = 30\%$ gives better classification accuracy on the clean data.}
\label{tb-sst2-certified}
\end{table}
\subsection{Results of the Empirical Robustness}
\begin{table*}[htbp]
\small
\setlength{\abovecaptionskip}{0.05cm}
\begin{center}
\setlength{\tabcolsep}{0.7mm}
\begin{tabular}{l|*{8}{>{\centering\arraybackslash}p{1.0cm}|}{>{\centering\arraybackslash}p{1.0cm}}}
\hline
\hline
\multirow{2}{*}{\bf Method} & \multicolumn{3}{c|}{\bf TextFooler} & \multicolumn{3}{c|}{\bf BERT-Attack} &\multicolumn{3}{c}{\bf DeepWordBug} \\
\cline{2-10}
&\bf Cln\% &\bf Boa\% &\bf Succ\% &\bf Cln\% &\bf Boa\% &\bf Succ\% &\bf Cln\% &\bf Boa\% &\bf Succ\% \\
\hline
Baseline (RoBERTa) & $93.9$ & $15.8$ & $83.2$ & $94.7$ & $26.7$ & $71.8$ & $94.2$ & $33.0$ & $65.0$ \\
PGD-10 \cite{madry2018towards} & $\bf 95.0$ & $22.3$ & $76.5$ & $\bf 95.3$ & $30.0$ & $68.5$ & $\bf 94.9$ & $38.8$ & $59.1$ \\
FreeLB \cite{zhu2019freelb} & $93.9$ & $24.6$ & $73.8$ & $\bf 95.3$ & $28.3$ & $70.3$ & $93.7$ & $44.0$ & $53.0$ \\
Adv-Hotflip \cite{ebrahimi-etal-2018-hotflip} & $93.4$ & $21.3$ & $77.2$ & $93.9$ & $26.8$ & $71.5$ & $94.6$ & $37.6$ & $60.3$ \\
Adversarial Data Augmentation & $93.3$ & $23.7$ & $74.6$ & $92.3$ & $39.1$ & $57.6$ & $93.8$ & $49.7$ & $47.0$ \\
RanMASK-$90\%$ (logit) & $89.1$ & $42.7$ & $52.1$ & $88.5$ & $30.0$ & $66.1$ & $89.8$ & $45.4$ & $45.4$ \\
RanMASK-$90\%$ (vote) & $91.2$ & $\bf 55.1$ & $\bf 39.6$ & $89.1$ & $\bf 41.1$ & $\bf 53.9$ & $90.3$ & $\bf 57.5$ & $\bf 36.0$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Empirical results on AGNEWS. RanMASK-$90 \%$ with the ``vote'' ensemble method achieved the best results on the robust accuracy under all three attack algorithms, indicating that RanMASK can defend against both word substitution-based attacks and character-level perturbations.
}
\label{tb-agnews-emperical}
\end{table*}
\begin{table*}[htbp]
\small
\setlength{\abovecaptionskip}{0.05cm}
\begin{center}
\setlength{\tabcolsep}{0.7mm}
\begin{tabular}{l|*{8}{>{\centering\arraybackslash}p{1.0cm}|}{>{\centering\arraybackslash}p{1.0cm}}}
\hline
\hline
\multirow{2}{*}{\bf Method} & \multicolumn{3}{c|}{\bf TextFooler} & \multicolumn{3}{c|}{\bf BERT-Attack} &\multicolumn{3}{c}{\bf DeepWordBug} \\
\cline{2-10}
&\bf Cln\% &\bf Boa\% &\bf Succ\% &\bf Cln\% &\bf Boa\% &\bf Succ\% &\bf Cln\% &\bf Boa\% &\bf Succ\% \\
\hline
Baseline (RoBERTa) & $\bf 94.3$ & $5.4$ & $94.3$ & $93.9$ & $6.2$ & $93.4$ & $\bf 94.7$ & $17.0$ & $82.1$ \\
PGD-10 \cite{madry2018towards} & $94.0$ & $5.6$ & $94.0$ & $\bf 94.4$ & $5.6$ & $94.1$ & $92.9$ & $18.3$ & $80.3$ \\
FreeLB \cite{zhu2019freelb} & $93.7$ & $13.9$ & $85.2$ & $93.8$ & $10.4$ & $89.0$ & $93.0$ & $23.7$ & $74.5$ \\
Adv-Hotflip \cite{ebrahimi-etal-2018-hotflip} & $\bf 94.3$ & $12.3$ & $87.0$ & $93.8$ & $11.4$ & $87.9$ & $93.3$ & $23.4$ & $74.9$ \\
Adversarial Data Augmentation & $91.0$ & $9.6$ & $89.5$ & $88.2$ & $16.9$ & $80.8$ & $91.8$ & $23.5$ & $74.4$ \\
RanMASK-$30\%$ (vote) & $92.7$ & $12.9$ & $86.1$ & $93.0$ & $11.4$ & $87.7$ & $92.7$ & $27.5$ & $70.3$ \\
RanMASK-$30\%$ (vote) + LM & $90.6$ & $\bf 23.4$ & $\bf 74.2$ & $90.4$ & $\bf 22.8$ & $\bf 74.8$ & $91.0$ & $\bf 41.7$ & $\bf 53.1$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Empirical results on SST2. RanMASK-$30 \%$ with the ``vote'' ensemble method and the LM-based sampling strategy achieved the best results on the robust accuracy under all three attack algorithms.
}
\label{tb-sst2-emperical}
\end{table*}
As mentioned in the introduction, if a relatively small proportion of words are perturbed (relative to the masking rate $\rho$), then it is highly unlikely that all of the perturbed words can survive the random masking.
As the value of $\rho$ decreases, we have to make sure that the following \emph{risk probability} remains below $0.5$ for any text $\boldsymbol{x}$; otherwise more than $50\%$ of the masked copies of $\boldsymbol{x}$ will contain all the perturbed words, which easily causes the base classifier $f$ to make mistakes, and hence $g$ as well.
\begin{equation}
\small
\label{eq:probability}
\mathbb{P}(\boldsymbol{x}, \rho, \gamma) = \tbinom{(1 - \gamma) h_{\boldsymbol{x}}}{(1 - \gamma - \rho) h_{ \boldsymbol{x}}} /
\tbinom{h_{ \boldsymbol{x}}}{\rho h_{ \boldsymbol{x}}}
\end{equation}
\noindent where $\gamma$ is the maximum percentage of words that can be perturbed.
We use the average length of the texts in a dataset (instead of the length of each individual text) to estimate the risk probability.
For the AGNEWS dataset, this risk probability is very close to zero when $\rho = 90\%$, no matter what value of $\gamma$ is applied.
For SST2, this risk probability is approximately equal to $0.5$ when $\rho = 30\%$.
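The following sketch evaluates Eq.~(\ref{eq:probability}); the average lengths used below ($43$ words for an AGNEWS-like text and $19$ for an SST2-like one, with $\gamma = 10\%$) are hypothetical round numbers chosen only to illustrate the two regimes.
\begin{verbatim}
from math import comb

def risk_probability(h, rho, gamma):
    # Probability that a masked copy retains all perturbed words, i.e. that
    # none of the gamma*h perturbed positions is among the rho*h masked ones.
    masked, perturbed = round(rho * h), round(gamma * h)
    return comb(h - perturbed, masked) / comb(h, masked)

print(risk_probability(43, 0.90, 0.10))  # ~8e-6 (AGNEWS-like, rho = 90%)
print(risk_probability(19, 0.30, 0.10))  # ~0.46 (SST2-like, rho = 30%)
\end{verbatim}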
To reduce this risk, we designed a new sampling strategy in which the probability of a word being masked is tied to the output probability assigned to it by a BERT-based language model (LM).
Generally, the higher the output probability of a word is, the lower the probability that this word is perturbed.
We would like to retain the words that have not been perturbed as much as possible.
This LM-based sampling, rather than sampling from the uniform distribution, was used to evaluate the empirical results on SST2.
Note that LM-based sampling is unnecessary when evaluating on AGNEWS.
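The text above does not pin down the exact weighting scheme, so the sketch below takes one plausible reading: the probability of masking a word decreases with its LM probability, so that words likely to be adversarial substitutions (low LM probability) are masked preferentially and unperturbed words tend to survive.
\begin{verbatim}
import numpy as np

def lm_weighted_mask_positions(lm_probs, rho, seed=0):
    # lm_probs[i]: LM output probability of word i; low-probability words
    # (more likely perturbed) receive a higher chance of being masked.
    rng = np.random.default_rng(seed)
    weights = 1.0 - np.asarray(lm_probs, dtype=float)
    weights /= weights.sum()
    k = int(rho * len(lm_probs))
    return rng.choice(len(lm_probs), size=k, replace=False, p=weights)

# Hypothetical per-word LM probabilities for a 6-word text:
print(lm_weighted_mask_positions([0.9, 0.1, 0.8, 0.7, 0.05, 0.6], rho=0.5))
\end{verbatim}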
In the following experiments, we consider two ensemble methods \cite{cheng2020voting}: \emph{logits-summed} ensemble (logit) and \emph{majority-vote} ensemble (vote).
In the ``logit'' method, we take the average of the logits produced by the base classifier $f$ over all the individual random samples as the final prediction.
In the ``vote'' strategy, we simply count the votes for each class label.
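A minimal sketch of the two ensembles, showing that they can disagree on the same set of base-classifier outputs:
\begin{verbatim}
import numpy as np

def logit_ensemble(all_logits):
    # "logit": average the logits over all masked copies, then take argmax.
    return int(np.mean(all_logits, axis=0).argmax())

def vote_ensemble(all_logits):
    # "vote": each masked copy casts one vote for its own argmax class.
    votes = np.argmax(all_logits, axis=1)
    return int(np.bincount(votes).argmax())

# all_logits has shape (n_masked_copies, n_classes); a toy example:
demo = np.array([[2.0, 1.0], [0.4, 0.6], [0.3, 0.7]])
print(logit_ensemble(demo), vote_ensemble(demo))  # 0 versus 1
\end{verbatim}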
The following metrics are used to report the empirical results:
\begin{itemize}
\normalsize
\setlength{\itemsep}{0pt}
\setlength{\parsep}{0pt}
\setlength{\parskip}{0pt}
\item The \emph{clean accuracy} (Cln) is the classification accuracy achieved on the clean texts $\boldsymbol{x}$.
\item The \emph{robust accuracy} (Boa) is the accuracy of a classifier achieved under a certain attack.
\item The \emph{success rate} (Succ) is the number of texts successfully perturbed by an attack algorithm (causing the model to make errors) divided by the total number of texts attempted.
\end{itemize}
We evaluate the empirical robustness under test-time attacks by using TextAttack\footnote{\url{https://github.com/QData/TextAttack}}
framework \cite{morris2020textattack}
with three black-box, score-based attack algorithms: TextFooler \cite{Jin_Jin_Zhou_Szolovits_2020}, BERT-Attack \cite{li2020bert}, and DeepWordBug \cite{gao2018black}.
TextFooler and BERT-Attack adversarially perturb the text inputs by the word-level substitutions, while DeepWordBug performs the character-level perturbations to the inputs.
The results of the empirical robustness on AGNEWS are shown in Table \ref{tb-agnews-emperical}.
RanMASK-$90\%$ consistently performs better than the competitors under all three attack algorithms in terms of robust accuracy, while suffering little performance drop on the clean data.
The empirical results on SST2 are reported in Table \ref{tb-sst2-emperical}, and we found similar trends as those on AGNEWS, especially for those when the LM-based sampling was applied.
The results for both datasets show that our RanMASK consistently achieves better robust accuracy while suffering little loss on the original clean data.
\subsection{Comparison with SAFER}
\label{sec:safer}
We report in Table \ref{tb-safer} the empirical robustness of RanMASK on AGNEWS compared with SAFER, a very recently proposed certified defense \cite{ye-etal-2020-safer}.
From these numbers, we found that RanMASK outperforms SAFER under the setting where the ``logit'' ensemble is used, while SAFER performs slightly better than RanMASK when using the ``vote'' ensemble for the predictions under the attack of TextFooler.
However, this comparison is neither direct nor fair.
First, SAFER makes use of the same synonym table used by TextFooler.
Second, we found that different smoothing defense methods behave quite differently as the ensemble method is changed from the ``vote'' to the ``logit.''
\begin{table}[t]
\small
\setlength{\abovecaptionskip}{0.05cm}
\begin{center}
\setlength{\tabcolsep}{0.7mm}
\begin{tabular}{l|*{3}{>{\centering\arraybackslash}p{0.8cm}|}{>{\centering\arraybackslash}p{0.7cm}}}
\hline
\hline
\multirow{2}{*}{\bf Method} & \multicolumn{2}{c|}{\bf TextFooler} & \multicolumn{2}{c}{\bf \footnotesize DWB} \\
\cline{2-5}
&\bf Cln\% &\bf Boa\% &\bf Cln\% &\bf Boa\% \\
\hline
Baseline (BERT) & $93.0$ & $5.6$ & $94.3$ & $16.6$ \\
SAFER (logit) & $94.6$ & $26.1$ & $95.1$ & $31.9$ \\
RanMASK-$5\%$ (logit) & $94.4$ & $13.2$ & $94.8$ & $23.4$ \\
RanMASK-$90\%$ (logit) & $91.3$ & $47.3$ & $89.2$ & $39.6$ \\
SAFER (vote) & $\bf 95.4$ & $\bf 78.6$ & $95.2$ & $78.4$ \\
RanMASK-$5\%$ (vote) & $93.9$ & $68.6$ & $\bf 95.3$ & $77.1$ \\
RanMASK-$5\%$ (vote) + LM & $94.8$ & $71.4$ & $93.8$ & $\bf 80.3$ \\
RanMASK-$90\%$ (vote) & $90.3$ & $51.9$ & $90.4$ & $51.8$ \\
\hline
\hline
\end{tabular}
\end{center}
\caption{Empirical results of RanMASK on AGNEWS compared with SAFER under two attack algorithms: TextFooler and DeepWordBug (denoted as ``DWB'').
}
\label{tb-safer}
\end{table}
Typical score-based attack algorithms, such as TextFooler and DeepWordBug, usually use two steps to craft adversarial examples: greedily identify the most vulnerable position to change, then modify it slightly to maximize the model's prediction error.
These two steps are repeated iteratively until the model's prediction changes.
If the ``majority-vote'' method is used, the class distributions produced by the models trained with SAFER would be quite sharp, even close to a one-hot categorical distribution\footnote{The class distributions produced by the models trained with RanMASK are relatively smoother than those produced with SAFER.
We estimated the average entropy of the distributions predicted by SAFER and RanMASK on $1,000$ test samples selected randomly from AGNEWS.
When TextFooler starts to attack, the average entropy of SAFER's predictions is $0.006$, while those of RanMASK's are $0.025$, $0.036$, $0.102$, and $0.587$ when $\rho = 5\%$, $10\%$, $50\%$, and $90\%$ respectively.
Note that the greater the entropy is, the smoother the distribution will be.}, which hinders the adversaries from observing the changes in the model's predictions caused by a small perturbation on the input, leaving them trapped in local minima.
This forces the adversaries to launch the decision-based attacks \cite{maheshwary2020generating} instead of the score-based ones, which can dramatically affect the resulting attack success rates.
If the ``logit'' ensemble method is used or the attack algorithm is designed to perturb more than one word at a time, the empirical robustness of SAFER will drop significantly. Therefore, it is unfair to compare ``majority vote''-based ensemble defense methods with others when conducting empirical experiments. We believe these methods can greatly improve a model's defense performance, but we recommend using ``logit'' ensemble methods if one needs to prove the effectiveness of a proposed algorithm against textual adversarial attacks in future research.
We also found the same phenomenon in experiments with a relatively low masking rate, e.g., RanMASK-$5\%$ (see Table \ref{tb-safer}). RanMASK-$5\%$ with the ``vote'' ensemble achieves outstanding defense performance against both TextFooler and DeepWordBug since it also forces a score-based adversary to launch decision-based attacks. However, if the ``logit'' ensemble method is used for RanMASK-$5\%$, the Boa\% drops considerably. In this case (``logit''), RanMASK-$90\%$ performs better than RanMASK-$5\%$ and SAFER, as expected.
\section{Conclusion}
In this study, we propose a smoothing-based certified defense method for NLP models to substantially improve the robust accuracy against different threat models, including synonym substitution-based transformations and character-level perturbations.
The main advantage of our method is that we do not base the certified robustness on the unrealistic assumption that the defenders know how the adversaries generate synonyms.
We demonstrated through extensive experimentation that our smoothed classifiers outperform existing empirical and certified defenses across different datasets.
\section*{Acknowledgements}
The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by
Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103), National Science Foundation of China (No. 62076068) and Zhangjiang Lab.
\bibliographystyle{acl_natbib}
\section{Introduction}
\label{sec:intro}
Several alternative theories modifying Einstein's general relativity have been proposed in the past decades as part of an effort to solve long-standing cosmological problems such as dark energy, dark matter, and inflation. One of these theories, based on the assumption that the graviton might have a nonzero mass and hence later named the theory of \textit{massive gravity}, was originally envisioned by M.~Fierz and W.~Pauli in 1939 \cite{fierz1939relativistic}. However, the theory did not have a continuous transition to general relativity in the limit of zero graviton mass, an issue known as the van Dam--Veltman--Zakharov discontinuity \cite{vandam1970massive,zakharov1970linearized}. This problem was remedied by Vainshtein's nonlinear mechanism in 1972 \cite{vainshtein1972problem}, but the nonlinear terms then gave rise to another problem called the Boulware--Deser ghost \cite{boulware1972can}. Building on several previous attempts \cite{arkanihamed2003effective,creminelli2005ghosts}, this problem was finally resolved in 2010 by C.~de Rham, G.~Gabadadze, and A.~Tolley (dRGT) \cite{derham2010generalization,derham2011resummation}, resulting in a Lorentz-invariant, ghost-free nonlinear theory of massive gravity \cite{hassan2012resolving,hassan2012ghost,derham2012ghost,derham2011helicity,hassan2012confirmation,mirbabayi2012proof,golovnev2012on,hassan2012proof,kluson2012nonlinear}. (See also Refs.~\cite{hinterbichler2012theoretical,derham2014massive} for reviews.)
Unfortunately, the dRGT theory also faced some serious challenges: there were no stable homogeneous and isotropic cosmological solutions \cite{damico2011massive,gumrukccuouglu2011open,gumrukcuoglu2012cosmological,defelice2012massive}, together with other pathologies such as the Higuchi bound \cite{higuchi1987forbidden,fasiello2012cosmological} and the positivity bound \cite{cheung2016positive,bellazzini2018beyond}. To overcome this, the \textit{minimal theory of massive gravity} (MTMG) was proposed by A.~De Felice and S.~Mukohyama in 2016 \cite{defelice2016minimal} by imposing constraints which suppress the five degrees of freedom of the original dRGT theory so that only two degrees of freedom remain, both of them tensor modes, as in general relativity; the price is that the theory is no longer Lorentz invariant. It has the same Friedmann--Lema\^{\i}tre--Robertson--Walker (FLRW) equations as the dRGT theory, but now the FLRW background is stable \cite{defelice2016phenomenology}. There are two branches of solutions. The first is the self-accelerating branch, which is phenomenologically the same as the $\Lambda$CDM cosmology, except that the accelerating expansion of the universe is now caused by the graviton mass term, not necessarily by the cosmological constant. The second is the normal branch, which is phenomenologically different from general relativity in the scalar and tensor sectors, leading to non-trivial dynamics which could be tested against the predictions of general relativity \cite{defelice2016phenomenology,defelice2017graviton}.
Another attempt to modify the dRGT theory was made by Q.-G.~Huang, Y.-S.~Piao, and S.-Y.~Zhou in 2012 by coupling the graviton potentials to a scalar field $\psi$, enabling the graviton to have a varying mass \cite{huang2012mass}. This theory of \textit{mass-varying massive gravity} (MVMG) is again free of the Boulware--Deser ghost and retains Lorentz invariance. A varying graviton mass can lead to interesting cosmological behaviors both in the inflationary and the late-time era. Specifically, the graviton mass will asymptotically approach zero at late times due to the dynamics of the theory, hence there is no need to fine-tune the graviton mass to a very small number to comply with the present-day cosmological bounds on the graviton mass \cite{saridakis2013phantom}. However, this may also be a disadvantage, since it means that the contribution of massive gravity to the cosmic expansion at late times is minimal \cite{tannukij2016mass}.
Another disadvantage of the MVMG theory is that it has many graviton degrees of freedom, which may lead to instabilities. Therefore, to suppress these degrees of freedom, we follow in this paper the method of Refs.~\cite{defelice2016minimal,defelice2016phenomenology}. First, we define the precursor theory by writing the MVMG action using the vielbein formulation in the Arnowitt--Deser--Misner (ADM) formalism, but here we generalize the theory to the case of higher dimensions. We also adopt the vielbein potential of Ref.~\cite{hinterbichler2012interacting} and couple it to the mass-like scalar potential $W(\psi)$, unlike in the previous literature where it is coupled to the metric. The purpose of this is that the number of graviton degrees of freedom in the theory is the same as in general relativity. We then perform the Legendre transformation on the precursor action to obtain the precursor Hamiltonian. After imposing the non-trivial constraints on the theory, we obtain the \textit{minimal theory of mass-varying massive gravity} (MTMVMG), where the number of graviton degrees of freedom becomes ${D (D - 3)}/2$, as in $D$-dimensional general relativity, where $D$ is the number of spacetime dimensions. For $D = 4$, the number of graviton degrees of freedom in the MTMVMG is two, in contrast to the MVMG theory where there are five graviton degrees of freedom. Therefore, in light of \textit{minimally modified gravity} (MMG), a modified theory of gravity with two local gravitational degrees of freedom discussed in Refs.~\cite{lin2017class,mukohyama2019minimally}, where \textit{type-I} MMG refers to theories in which there exists an Einstein frame \cite{aoki2019phenomenology} and \textit{type-II} MMG to those in which there is no Einstein frame \cite{defelice2020theory}, the MTMVMG can be viewed as an \textit{extended} type-II MMG theory.
To study the cosmological aspects of this theory, we can perform the Legendre transformation again to obtain the MTMVMG action, which can then be used to derive the Friedmann--Lema\^{\i}tre equations. We take both the scalar potential and the graviton mass coupling to have exponential forms, and find that there are eight critical points in the theory: five in the massless sector and three in the massive sector. Therefore, the MTMVMG theory can have both massless and massive sectors even in the late-time era, in contrast to the ordinary MVMG theory where, as mentioned previously, the dynamics of the theory leads the graviton mass to asymptotically approach zero at late times. This makes the MTMVMG a richer theory, which can then give good descriptions of both the inflationary and the late-time era. In particular, there are at least two interesting possible scenarios for the late-time cosmology: the dark energy is due either to a constant graviton mass arising from the scalar field frozen at $\psi_\infty$ after the reheating era, or to the quintessence paradigm where the scalar field $\psi$ remains dynamical. Therefore, if the accelerating expansion of the universe in the massless sector can be explained by the standard quintessence paradigm, in the massive sector it has to be explained by the nontrivial interplay between quintessence and massive gravity.
This paper is organized as follows: In Sec.~\ref{sec:precursor} we derive the precursor action by writing the MVMG theory using the vielbein formulation. In Sec.~\ref{sec:MVMGS} we then construct the minimal theory by imposing the $D$-constraints. In Sec.~\ref{sec:Friedmanneq} we derive the Friedmann--Lema\^{\i}tre equations for our model. We then perform in Sec.~\ref{sec:Dynas} the dynamical analysis around the critical points and check in Sec.~\ref{sec:locglobex} their local and global existence. In Sec.~\ref{sec:cosmologicalcons} we discuss the cosmological implications for the inflationary expansion and late-time acceleration. We then conclude the paper and make several remarks in Sec.~\ref{sec:conclusions}. Detailed calculations are presented in the Appendix.
\section{Precursor Theory}
\label{sec:precursor}
In this Section we will construct the MVMG action by replacing the graviton mass with the mass-like scalar potential $W(\psi)$. However, we will adopt the vielbein potential from Ref.~\cite{hinterbichler2012interacting} and couple it to $W(\psi)$, in contrast to Ref.~\cite{huang2012mass} where $W(\psi)$ is coupled to metric. We will then define the action for the precursor theory, which will be needed later to construct the MTMVMG action.
Let us first consider two $D$-dimensional Lorentzian manifolds $(\mathcal{M}, g)$ and $(\mathcal{M}_0, g_0)$ parametrized by the coordinate systems $x^\mu$ and $y^a$ with $\mu, a = 0, \ldots, D - 1$, respectively. Using the ADM formalism, the metrics $g$ and $g_0$ can be written as
\begin{eqnarray}
ds^2_{(d)} &=& -N^2 dt^2 + \gamma_{i j} (dx^i + N^i dt) (dx^j + N^j dt), \nonumber \\
%
ds^2_{(b)} &=& -N_0^2 d\tau^2 + \gamma_{0 i j} (dy^i + N_0^i d\tau) (dy^j + N_0^j d\tau), \label{eq:metfoliasi}
\end{eqnarray}
where $\gamma_{i j}$ and $\gamma_{0 i j}$ are the components of the induced spatial metrics, $N$ and $N_0$ are the lapse functions, $N^i$ and $N_0^i$ are the shift vectors, and $i, j = 1, \ldots, D - 1$. The subscripts $(d)$ and $(b)$ denote the dynamic and the background, respectively. Let us then introduce the vielbeins $\tensor{E}{^A_\mu}$ and $\tensor{E}{_0^A_a}$ such that the metrics in Eq.~\eqref{eq:metfoliasi} can be written as $g_{\mu \nu} = \eta_{A B} \tensor{E}{^A_\mu} \tensor{E}{^B_\nu}$ and $g_{0 a b} = \eta_{A B} \tensor{E}{_0^A_a} \tensor{E}{_0^B_b}$, where $A, B = 0, \ldots, D - 1$ and $\eta_{A B}$ is the flat Minkowski metric. Therefore, the vielbeins $\tensor{E}{^A_\mu}$ and $\tensor{E}{_0^A_a}$ have the form
\begin{equation}
\tensor{E}{^A_\mu} = \begin{pmatrix} N & 0 \\ N^k \tensor{e}{^I_k} & \tensor{e}{^I_i} \end{pmatrix}, \qquad \tensor{E}{_0^A_a} = \begin{pmatrix} N_0 & 0 \\ N_0^k \tensor{e}{_0^I_k} & \tensor{e}{_0^I_i} \end{pmatrix}, \label{eq:basis}
\end{equation}
whose duals are given by
\begin{equation}
\tensor{E}{_A^\mu} = \begin{pmatrix} \frac{1}{N} & -\frac{N^i}{N} \\ 0 & \tensor{e}{_I^i} \end{pmatrix}, \qquad \tensor{E}{_0_A^a} = \begin{pmatrix} \frac{1}{N_0} & -\frac{N_0^i}{N_0} \\ 0 & \tensor{e}{_0_I^i} \end{pmatrix}, \label{eq:dualbasis}
\end{equation}
such that they satisfy
\begin{eqnarray}
\tensor{E}{^A_\mu} \tensor{E}{_B^\mu} = \tensor{\delta}{^A_B}, &\qquad& \tensor{E}{_0^A_a} \tensor{E}{_0_B^a} = \tensor{\delta}{^A_B}, \\
\tensor{E}{^A_\mu} \tensor{E}{_A^\nu} = \tensor{\delta}{_\mu^\nu}, &\qquad& \tensor{E}{_0^A_a} \tensor{E}{_0_A^b} = \tensor{\delta}{_a^b}.
\end{eqnarray}
Here $\tensor{e}{^I_i}$ and $\tensor{e}{_0^I_i}$ are the spatial vielbeins, with $I, J = 1, \ldots, D - 1$.
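As a quick consistency check, the following \texttt{sympy} sketch verifies the duality relations above for the ADM vielbeins in the simplest case $D = 2$ (one spatial dimension, so the spatial vielbein is a single component $e$):
\begin{verbatim}
import sympy as sp

N, N1, e = sp.symbols('N N1 e', nonzero=True)

# E^A_mu (rows A, columns mu) and its dual E_A^mu for D = 2,
# specialising the block forms of Eqs. (basis) and (dualbasis):
E = sp.Matrix([[N, 0], [N1 * e, e]])
E_dual = sp.Matrix([[1 / N, -N1 / N], [0, 1 / e]])

print(sp.simplify(E.T * E_dual))   # E^A_mu E_A^nu = delta_mu^nu
print(sp.simplify(E * E_dual.T))   # E^A_mu E_B^mu = delta^A_B
\end{verbatim}
Both products simplify to the identity matrix, as required.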
Now, we consider a smooth embedding $\phi: \mathcal{M} \longrightarrow \mathcal{M}_0$ such that we can pull back the quantities
\begin{equation}
\tensor{\tilde{E}}{^A_\mu}(x) = \frac{\partial \phi^a}{\partial x^\mu} \tensor{E}{_0^A_a} (\phi(x)), \qquad f_{\mu \nu} (x) = \frac{\partial \phi^a}{\partial x^\mu} \frac{\partial \phi^b}{\partial x^\nu} g_{0 a b} (\phi(x)), \label{eq:transfgaugepre}
\end{equation}
so that $\mathcal{M}$ obeys diffeomorphism rules through a St\"uckelberg field $\phi^a$. In the unitary gauge where $\phi^a = \tensor{\delta}{^a_\mu }x^\mu$, we simply recover $\tensor{\tilde{E}}{^A_\mu} = \tensor{E}{_0^A_\mu}$ and $f_{\mu \nu} = g_{0 \mu \nu}$. Note that the original formulation of the dRGT theory has a Minkowski background, $g_{0 a b} = \eta_{a b}$. Using the vielbein formulation in Ref.~\cite{hinterbichler2012interacting}, the ghost-free potential related to the graviton mass has the form
\begin{equation}
\sum_{n = 0}^D \frac{c_n}{n! (D - n)!} \hat{\epsilon}_{A_1 A_2 \cdots A_D} \tensor{\bold{E}}{^{A_1}} \wedge \cdots \wedge \tensor{\bold{E}}{^{A_n}} \wedge \tensor{\bold{\tilde{E}}}{^{A_{n + 1}}} \wedge \cdots \wedge \tensor{\bold{\tilde{E}}}{^{A_D}} \label{eq:gfpot}
\end{equation}
where $\bold{E}^A = \tensor{E}{^A_\mu} dx^\mu$ and $\bm{\tilde{E}}^A = \tensor{\tilde{E}}{^A_\mu} dx^\mu$ are one-forms; this is the dRGT mass term for an arbitrary background and dimension. The quantity $\hat{\epsilon}$ denotes the Levi-Civita symbol in flat spacetime. Furthermore, inspired by Ref.~\cite{huang2012mass}, the graviton mass is replaced by a function $W(\psi)$ of a scalar field $\psi(x)$ which is well-defined on $\mathcal{M}$. Coupling $W(\psi)$ to the potential in Eq.~\eqref{eq:gfpot} and adding this coupling term to the Einstein--Klein--Gordon action, we obtain the ghost-free MVMG action as follows,
\begin{eqnarray}
&& S_\text{MVMG} = \int_{\mathcal{M}} d^Dx \, \det{(E)} \, \left(\frac{M_{Pl}^{D - 2}}{2} R(E) - \frac{1}{2} \partial^\mu \psi \partial_\mu \psi - V(\psi) \right) \nonumber \\
%
&& \qquad - \, \frac{1}{4} \int_{\mathcal{M}} {W(\psi)} \left( \sum_{n = 0}^D \frac{c_n}{n! (D - n)!} \hat{\epsilon}_{A_1 A_2 \cdots A_D} \tensor{\bold{E}}{^{A_1}} \wedge \cdots \wedge \tensor{\bold{E}}{^{A_n}} \wedge \tensor{\bold{\tilde{E}}}{^{A_{n + 1}}} \wedge \cdots \wedge \tensor{\bold{\tilde{E}}}{^{A_D}} \right), \nonumber \\
\label{eq:ev1}
\end{eqnarray}
where $V(\psi)$ is the scalar potential function.
The action for the precursor theory can be obtained by simply substituting the vielbeins in \eqref{eq:basis} and \eqref{eq:dualbasis} to the action \eqref{eq:ev1},
\begin{eqnarray}
S_{\text{pre}} &=& \int d^Dx \, N \det{(e)} \Bigg[ \frac{M_{\text{Pl}}^{D - 2}}{2} \left({}^{(D - 1)}R(e) + K^{i j} K_{i j} - K^2 \right) \nonumber \\
%
&& + \, \frac{1}{2 N^2} \dot{\psi}^2 - \frac{1}{2} \partial^i \psi \partial_i \psi + \frac{N^i N^j}{2N^2} \partial_i \psi \partial_j \psi - \frac{N^i}{N^2} \dot{\psi} \partial_i \psi - V(\psi) \nonumber\\
%
&& - \, W(\psi) \left( | \det{(X)} | \frac{M}{N} \sum_{n = 0}^{D - 1} c_n \mathcal{S}_n(Y) + \sum_{n = 0}^{D - 1} c_{D - n} \mathcal{S}_n(X) \right) \Bigg], \label{eq:Spre}
\end{eqnarray}
where ${}^{(D - 1)}R(e)$ is the spatial Ricci scalar, while $K^{i j}$ and $K$ are the second fundamental form and the mean curvature, respectively. For any function $\psi$, we define $\dot{\psi} \equiv {\partial \psi}/{\partial t}$ and $\partial_i \psi \equiv {\partial \psi}/{\partial x^i}$. Here, the quantities $\{M, M^i, \tensor{\tilde{e}}{^I_i}\}$ are the pullbacks of $\{N_0, N_0^i, \tensor{e}{_0^I_i}\}$, respectively, via the St\"uckelberg field $\phi$, i.e. the background images on $\mathcal{M}$ given by \eqref{eq:transfgaugepre}. The elements $\mathcal{S}_n$ are the $n$th order symmetric polynomials which depend on either $\tensor{Y}{_{I}^{J}} \equiv \tensor{\tilde{e}}{_{I}^k} \tensor{e}{^J_k}$ or $\tensor{X}{_{I}^{J}} \equiv \tensor{e}{_{I}^k} \tensor{\tilde{e}}{^J_k}$ (see Appendix \ref{sec:appendixA} for more discussion). Note that the action \eqref{eq:Spre} violates the local Lorentz symmetry because we have used the ADM vielbeins in Eqs.~\eqref{eq:basis} and \eqref{eq:dualbasis}.
\section{Minimal Theory of MVMG}
\label{sec:MVMGS}
The discussion in this Section is divided into two parts. Firstly, we construct the Hamiltonian for the MTMVMG and identify some constraints which restrict the graviton degrees of freedom. Secondly, we discuss the MTMVMG action, which is constructed from the precursor action discussed in the previous section but with some additional constraints.
\subsection{MTMVMG Hamiltonian and Some Constraints}
\label{sec:Hamil}
Let us first consider the spatial vielbeins $\tensor{e}{^I_i}$ and the scalar field $\psi$ as the canonical variables which correspond to the conjugate momenta defined as
\begin{eqnarray}
\tensor{\pi}{_I^i} &\equiv& \frac{\delta S_{\text{pre}}}{\delta \tensor{\dot{e}}{^I_i}} = \det{(e)} \, M_{Pl}^{D - 2} (K^{i j} - K \gamma^{ij}) \delta_{I J} \tensor{e}{^J_j}, \label{eq:pikon1} \\
%
\pi &\equiv& \frac{\delta S_{\text{pre}}}{\delta \dot{\psi}} = \det{(e)} \, \left( \frac{1}{N} \dot{\psi} - \frac{N^i}{N} \partial_i \psi \right). \label{eq:pikon2}
\end{eqnarray}
We can switch from the Lagrangian to the Hamiltonian by performing the Legendre transformation in order to exhibit some constraints of the theory. It is well known that, in the vielbein language, the lapse function $N$ and the shift vector $N^i$ appear as Lagrange multipliers enforcing the diffeomorphism constraints \cite{hinterbichler2012interacting}, namely $\mathcal{R}_0 \approx 0$ and $\mathcal{R}_i \approx 0$. These constraints are called the primary constraints of the first kind, which then enable us to construct another set of constraints called the secondary constraints of the first kind \cite{henneaux1994quantization}. Note that there are only $D - 2$ independent secondary constraints, since two of them can be obtained from the others. We denote them as $\tilde{\mathcal{C}}_\tau$ $(\tau = 1, \ldots, D - 2)$, together with their Lagrange multipliers $\lambda^\tau$.
Additionally, as studied in Refs.~\cite{defelice2016minimal,defelice2016phenomenology}, from Eq.~\eqref{eq:pikon1} we also have another set of primary constraints of the second kind $\mathcal{P}^{[M N]}$ that lead to the secondary constraints of the second kind $\mathcal{Z}^{[M N]}$ in the phase space, together with their Lagrange multipliers $\alpha_{M N}$ and $\beta_{M N}$, with $M, N = 1, \ldots, (D - 1) (D - 2)/2$. These secondary constraints are necessary since the primary constraints should be preserved with respect to the time evolution. Therefore, the precursor Hamiltonian can be written down as
\begin{eqnarray}
H_{\text{pre}} &=& \int d^{D - 1}x \Big( -N \mathcal{R}_0 - N^i \mathcal{R}_i + W(\psi) N \mathcal{H}_0 + W(\psi) M \mathcal{H}_1 + \tilde{\lambda}^{\tau} \tilde{\mathcal{C}}_\tau \nonumber \\
%
&& + \, \alpha_{M N} \mathcal{P}^{[M N]} + \beta_{M N} \mathcal{Z}^{[M N]} \Big), \label{eq:hampre}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{R}_0 &\equiv& \det{(e)} \, \frac{M_{Pl}^{D - 2}}{2} {}^{(D - 1)}R(e) - \frac{1}{2 \det{(e)} \, M_{Pl}^{D - 2}} \left[\tensor{\pi}{_I^i} \tensor{\pi}{^I_i} - \frac{1}{D - 2} (\tensor{\pi}{_I^i} \tensor{e}{^I_i})^2 \right], \nonumber \\
%
&& - \, \frac{1}{2 \det{(e)}} \pi^2 - \frac{\det{(e)}}{2} \partial_i \psi \partial^i \psi - \det{(e)} \, V(\psi), \\
%
\mathcal{R}_i &\equiv& \nabla_j (\tensor{\pi}{_I^j} \tensor{e}{^I_i}) - \pi \partial_i \psi, \\
%
\mathcal{H}_0 &\equiv& \det{(e)} \, \sum_{n = 0}^{D - 1} c_{D - n} \mathcal{S}_n(X), \\
%
\mathcal{H}_1 &\equiv& \det{(e)} \, | \det{(X)} | \sum_{n = 0}^{D - 1} c_n \mathcal{S}_n(Y),
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{P}^{[M N]} &\equiv& (\tensor{e}{^M_j} \delta^{K N} - \tensor{e}{^N_j} \delta^{K M}) \tensor{\pi}{_K^j}, \\
%
\mathcal{Z}^{[M N]} &\equiv& (\tensor{e}{^M_j} \delta^{K N} - \tensor{e}{^N_j} \delta^{K M}) \tensor{\tilde{e}}{_K^j}.
\end{eqnarray}
Since the constraints above remove some graviton degrees of freedom, we can construct a theory in which the spatial graviton degrees of freedom coincide with the standard general relativity. Inspired by Ref.~\cite{defelice2016minimal}, we can impose the $D$-constraints in unitary gauge given by
\begin{equation}
\mathcal{C}_0 \equiv \{\mathcal{R}_0, H_{1}\}_\text{PB} - W(\psi) \frac{\partial \mathcal{H}_0}{\partial t} \approx 0, \qquad \mathcal{C}_i \equiv \{\mathcal{R}_i, H_{1}\}_\text{PB} \approx 0, \label{Dconstraints}
\end{equation}
with
\begin{equation}
H_1 \equiv \int d^{D - 1} x \, W(\psi) M \mathcal{H}_1.
\end{equation}
Note that these constraints consist of two new constraints and the $D-2$ independent constraints $\tilde{\mathcal{C}}_\tau$ which already exist in the precursor theory. Moreover, the constraints in Eq.~\eqref{Dconstraints} imply that the theory admits Lorentz-symmetry violation. The Hamiltonian of the MTMVMG theory then reads
\begin{eqnarray}
H_\text{MTMVMG} &=& \int d^{D - 1} x \, \Big[ -N \mathcal{R}_0 - N^i \mathcal{R}_i + W(\psi) (N \mathcal{H}_0 + M \mathcal{H}_1) + \lambda \mathcal{C}_0 + \lambda^i \mathcal{C}_i \nonumber \\
%
&& + \, \alpha_{M N} \mathcal{P}^{[M N]} + \beta_{M N} \mathcal{Z}^{[M N]} \Big]. \label{eq:hmtmg}
\end{eqnarray}
Thus, in total we have $D^2 - D + 2$ constraints, which means that the number of spatial graviton degrees of freedom is $D (D - 3)/2$, as in $D$-dimensional general relativity.
\subsection{MTMVMG Action}
To construct the MTMVMG action, one has to employ the Legendre transformation on the Hamiltonian functional \eqref{eq:hmtmg}. It will be shortly discussed in this Subsection, but its detailed derivations will be presented in the Appendix \ref{sec:appendixB}.
As discussed in the previous subsection, we should have the nontrivial $D$-constraints \eqref{Dconstraints} in the MTMVMG theory. In order to have a consistent theory, we have to modify the conjugate momenta \eqref{eq:pikon1} and \eqref{eq:pikon2} to
\begin{eqnarray}
\frac{\tensor{\pi}{_I^i}}{\det{(e)}} &\equiv& M_{Pl}^{D - 2} (K^{i j} \delta_{I J} \tensor{e}{^J_j} - K \tensor{e}{_I^i}) - \lambda\frac{W(\psi)}{2} \frac{M}{N} \Theta^{i j} \delta_{I J} \tensor{e}{^J_j}, \label{eq:pikon3} \\
%
\frac{\pi}{\det{(e)}} &\equiv& \frac{\dot{\psi}}{N} - \frac{N^i}{N} \partial_i \psi - \lambda \frac{dW}{d\psi} \frac{M}{N} \Phi, \label{pikonpi}
\end{eqnarray}
with
\begin{eqnarray}
\Theta^{i j} &\equiv& - | \det{(X)} | \delta^{I K} (\tensor{e}{_K^i} \tensor{\tilde{e}}{_J^j} + \tensor{e}{_K^j} \tensor{\tilde{e}}{_J^i}) \sum_{n = 1}^{D - 1} \sum_{m = 1}^n (-1)^m c_n \tensor{\left(Y^{m - 1} \right)}{_I^J} \mathcal{S}_{n - m}(Y), \nonumber \\
%
\Phi &\equiv& | \det{(X)} | \sum_{n = 1}^{D - 1} c_n \mathcal{S}_n(Y).
\end{eqnarray}
The notation $\tensor{(M^m)}{_I^J}$ means
\begin{equation}
\tensor{(M^m)}{_I^J} \equiv \tensor{M}{_I^{K_1}} \tensor{M}{_{K_1}^{K_2}} \cdots \tensor{M}{_{K_{m - 1}}^J}.
\end{equation}
The modifications \eqref{eq:pikon3} and \eqref{pikonpi} imply that the MTMVMG theory modifies both the kinetic part and the mass term. As we will see later, this also provides a class of solutions which coincides with that of the dRGT theory in the FLRW background.
For the sake of convenience, let us first introduce the following tensors
\begin{equation}
\tensor{\mathcal{K}}{^i_j} \equiv \tensor{\tilde{e}}{_I^i} \tensor{e}{^I_j}, \qquad \tensor{\mathcal{\bar{K}}}{^i_j} \equiv \tensor{e}{_I^i} \tensor{\tilde{e}}{^I_j},
\end{equation}
satisfying $\tensor{\mathcal{K}}{^i_k} \tensor{\mathcal{\bar{K}}}{^k_j} = \delta^i_j$, and related to the spatial metrics by
\begin{equation}
\tensor{\mathcal{K}}{^i_k} \tensor{\mathcal{K}}{^k_j} = \tilde{\gamma}^{i l} \gamma_{l j}, \qquad \tensor{\mathcal{\bar{K}}}{^i_k} \tensor{\mathcal{\bar{K}}}{^k_j} = \gamma^{i l} \tilde{\gamma}_{l j},
\end{equation}
where $\tilde{\gamma}_{i j} = \delta_{I J} \tensor{\tilde{e}}{^I_i} \tensor{\tilde{e}}{^J_j}$ is the spatial metric on $\mathcal{M}$. Performing the Legendre transformation, we obtain the MTMVMG action,
\begin{equation}
S_{\text{MTMVMG}} = S_{\text{pre}} + S_\lambda, \label{eq:aksimtmg}
\end{equation}
where
\begin{eqnarray}
S_{\lambda} &=& \frac{2}{M_{Pl}^{D - 2}} \int d^D x \, N \sqrt{\gamma} \left( \lambda \frac{W(\psi)}{4} \frac{M}{N} \right)^2 \left( \gamma_{i k} \gamma_{j l} - \frac{1}{D - 2} \gamma_{i j} \gamma_{k l} \right) \Theta^{i j} \Theta^{k l} \nonumber \\
%
&& + \, \frac{1}{2} \int d^D x \, N \sqrt{\gamma} \left( \lambda \frac{dW}{d\psi} \frac{M}{N} \right)^2 \Phi^2 - \int d^D x \, \sqrt{\gamma} \left[ \lambda \mathcal{\bar{C}}_0 + \lambda^i \tensor{\mathcal{C}}{_i}\right],
\end{eqnarray}
and
\begin{eqnarray}
\mathcal{\bar{C}}_0 &=& \frac{1}{2} W(\psi) M \left( \gamma_{i k} \gamma_{j l} - \frac{1}{D - 2} \gamma_{i j} \gamma_{k l} \right) \Theta^{k l} (K^{i j} - K \gamma^{i j}) \nonumber \\
%
&& + \, W(\psi) | \det{(\mathcal{\bar{K}})} | \sum_{n = 1}^{D - 1} \sum_{m = 1}^n (-1)^m c_n \tensor{\left( \mathcal{K}^{m - 1} \right)}{^k_l} \tensor{\tilde{\zeta}}{^l_k} \mathcal{S}_{n - m}(\mathcal{K}) \nonumber \\
%
&& - \, M | \det{(\mathcal{\bar{K}})} | \frac{dW}{d\psi} \left( \frac{\dot{\psi}}{N} - \frac{N^i}{N} \partial_i \psi \right) \sum_{n = 1}^{D - 1} c_n \mathcal{S}_n(\mathcal{K}), \\
%
\tensor{\mathcal{C}}{_i} &=& - W(\psi) \nabla_k M | \det{(\mathcal{\bar{K}})} | \sum_{n = 1}^{D - 1} \sum_{m = 1}^n (-1)^m c_n \tensor{\left( \mathcal{K}^m \right)}{^k_i} \mathcal{S}_{n - m}(\mathcal{K}) \nonumber \\
%
&& - \, M | \det{(\mathcal{\bar{K}})} | \partial_i \psi \frac{dW}{d\psi} \sum_{n = 1}^{D - 1} c_n \mathcal{S}_n(\mathcal{K}),
\end{eqnarray}
with
\begin{eqnarray}
\Theta^{i j} &=& - 2 | \det{(\mathcal{\bar{K}})} | \gamma^{i l} \sum_{n = 1}^{D - 1} \sum_{m = 1}^n (-1)^m c_n \tensor{\left(\mathcal{K}^{m}\right)}{^j_l} \mathcal{S}_{n - m}(\mathcal{K}) \label{eq:thetai} \\
%
\Phi &=& | \det{(\mathcal{\bar{K}})} | \sum_{n = 1}^{D - 1} c_n \mathcal{S}_n(\mathcal{K}). \label{eq:phii}
\end{eqnarray}
It is worth mentioning that the additional term in \eqref{eq:pikon3} implies that we have to set $\alpha_{M N} = \beta_{M N} = 0$ \cite{defelice2016phenomenology,defelice2017minimal}. This is because the tensors $\mathcal{P}^{[M N]}$ and $\mathcal{Z}^{[M N]}$ are antisymmetric, while the tensor $\Theta^{i j}$ is symmetric.
We could extend the action \eqref{eq:aksimtmg} by adding the matter field,
\begin{equation}
S_{\text{MTMVMG-M}} = S_{\text{pre}} + S_\lambda + S_\text{matter}. \label{eq:aksimtmg1}
\end{equation}
Here, we consider the matter field part $S_\text{matter}$ to be the perfect fluid whose energy-momentum tensor has the form
\begin{equation}
T_{\mu \nu} = \frac{2}{\sqrt{-g}} \frac{\delta S_\text{matter}}{\delta g^{\mu \nu}} = \rho_m U_\mu U_\nu + P_m \left( g_{\mu \nu} + U_\mu U_\nu \right),
\end{equation}
where $U^\mu$ and $\rho_m$ are the unit velocity of the fluid and the energy density, respectively. The pressure $P_m$ is given by the equation of state of the matter fields,
\begin{equation}
P_m = w_m \rho_m, \label{eq:state_eq}
\end{equation}
where $w_m$ is a real constant \cite{akbar2019local}. In standard higher-dimensional cosmology, we have in particular $w_m = \frac{1}{D - 1}$ (radiation), $w_m = 0$ (dust), and $w_m = -1$ (vacuum) \cite{chatterjee1990homogeneous}.
\section{Friedmann-Lema\^{\i}tre Equations}
\label{sec:Friedmanneq}
In this Section we consider a cosmological model in the MTMVMG theory. Our starting point is to write down the metric ansatz for the dynamic and the background manifolds which are spatially flat,
\begin{eqnarray}
ds_{(d)}^2 &=& g_{\mu \nu} dx^\mu dx^\nu = -N^2(t) dt^2 + a^2(t) \delta_{i j} dx^i dx^j, \label{eq:flrwmetrik} \\
%
ds_{(b)}^2 &=& f_{\mu \nu} dx^\mu dx^\nu = -M^2(t) dt^2 + \tilde{a}^2(t) \delta_{i j} dx^i dx^j. \label{eq:flrwmetrik2}
\end{eqnarray}
In the case at hand, the action \eqref{eq:aksimtmg1} simplifies to
\begin{eqnarray}
S_\text{MTMVMG-M} &=& -\int d^D x \, a^{D - 1} \Bigg\{ (D - 1) M_{Pl}^{D - 2} N H^2 - \frac{1}{2 N} \dot{\psi}^2 + N V(\psi) \nonumber \\
%
&& - \, W(\psi) \bigg( M \sum_{n = 0}^{D - 1} c_n A_n u^{D - n - 1} + N \sum_{n = 0}^{D - 1} c_{D - n} A_n u^n \bigg) \nonumber \\
%
&& + \, \frac{\lambda^2 M^2}{N} \Bigg[ \frac{D - 1}{D - 2} \frac{2}{M_{Pl}^{D - 2}} \left( \frac{W(\psi)}{4} \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right)^2 \nonumber \\
%
&& + \, \frac{1}{2} \left(\frac{dW}{d\psi} \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right)^2 \Bigg] - \lambda \Bigg[ \frac{\dot{\psi}M}{N} \frac{dW}{d\psi} \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \nonumber \\
%
&& + \, (D - 1) W(\psi) M \left( H - u H_f \right) \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \Bigg] \Bigg\} + S_\text{matter}.
\end{eqnarray}
Here, we have introduced the quantities
\begin{equation}
H \equiv \frac{\dot{a}}{Na}, \qquad H_f \equiv \frac{\dot{\tilde{a}}}{M\tilde{a}}, \qquad u \equiv \frac{\tilde{a}}{a}, \label{eq:parameter}
\end{equation}
and parameters
\begin{equation}
A_n \equiv \frac{(D - 1)!}{(D - n - 1)! n!}, \quad B_n \equiv \sum_{m = 1}^n (-1)^m \frac{(D - 1)!}{(D - n + m - 1)! (n - m)!}.
\end{equation}
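Since $(D - n + m - 1) + (n - m) = D - 1$, each summand in $B_n$ is a signed binomial coefficient, so both parameters are integers; the short sketch below evaluates them (for $D = 4$ it gives $A_n = \{1, 3, 3, 1\}$ and $B_{1,2,3} = \{-1, -2, -1\}$):
\begin{verbatim}
from math import comb

def A(n, D):
    # A_n = (D-1)! / ((D-n-1)! n!) = C(D-1, n)
    return comb(D - 1, n)

def B(n, D):
    # B_n = sum_{m=1}^{n} (-1)^m (D-1)! / ((D-n+m-1)! (n-m)!)
    #     = sum_{m=1}^{n} (-1)^m C(D-1, n-m)
    return sum((-1) ** m * comb(D - 1, n - m) for m in range(1, n + 1))

D = 4
print([A(n, D) for n in range(D)])       # [1, 3, 3, 1]
print([B(n, D) for n in range(1, D)])    # [-1, -2, -1]
\end{verbatim}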
Then, the variation with respect to the lapse function $N(t)$ gives us the first Friedmann-Lema\^{\i}tre equation,
\begin{equation}
\frac{1}{2} (D - 1) (D - 2) H^2 = \frac{1}{M_{Pl}^{D - 2}} (\rho_m + \rho_\text{MG} + \rho_\lambda), \label{eq:Friedman1}
\end{equation}
where
\begin{eqnarray}
\rho_\text{MG} &\equiv& \frac{1}{2 N^2} \dot{\psi}^2 + V(\psi) + W(\psi) \sum_{n = 0}^{D - 1} c_{n + 1} A_n u^{D - n - 1}, \nonumber \\
%
\rho_\lambda &\equiv& \frac{\lambda \dot{\psi} M}{N^2} \frac{dW}{d\psi} \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} - \frac{D - 1}{D - 2} \frac{\lambda^2 M^2 W^2(\psi)}{2M_{Pl}^{D - 2} N^2} \left( \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right)^2 \nonumber \\
%
&& + \, \frac{(D - 1) \lambda H M W(\psi)}{N} \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} + \frac{\lambda^2 M^2}{2 N^2} \left(\frac{dW}{d\psi} \right)^2 \left( \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right)^2. \label{eq:rhograv} \nonumber \\
\end{eqnarray}
The variation with respect to the scale factor $a(t)$ gives us the second Friedmann-Lema\^{\i}tre equation,
\begin{equation}
(D - 2) \frac{\dot{H}}{N} + \frac{1}{2} (D - 1) (D - 2) H^2 = - \frac{1}{M_{Pl}^{D - 2}} (P_m + P_\text{MG} + P_\lambda), \label{eq:Friedman2}
\end{equation}
where
\begin{eqnarray}
P_\text{MG} &\equiv& \frac{1}{2 N^2} \dot{\psi}^2 - V(\psi) - \frac{W(\psi)}{N} \sum_{n = 1}^{D - 1} \left(M c_n + N c_{n + 1} \right) n A_n u^{D - n - 1}, \nonumber \\
%
P_\lambda &\equiv& \frac{\lambda^2 M^2 W^2(\psi)} {2 M_{Pl}^{D - 2} N^2} \left( \sum_{n = 1}^{D - 1} \frac{D - 2 n - 1}{D - 2} c_n B_n u^{D - n - 1} \right) \left( \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right) \nonumber \\
%
&& - \, \frac{\lambda^2 M^2}{2 N^2} \left(\frac{dW}{d\psi} \right)^2 \left( \sum_{n = 1}^{D - 1} \frac{D - 2 n - 1}{D - 1} c_n A_n u^{D - n - 1} \right) \left( \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right) \nonumber \\
%
&& - \, \frac{\lambda M W(\psi) H_f}{N} \sum_{n = 1}^{D - 1} \Bigg( (D - n - 1) \frac{M}{N} + (n - 1) u \Bigg) c_n B_n u^{D - n - 1} \nonumber \\
%
&& - \, \left( \frac{\dot{\lambda} M N + \lambda (\dot{M} N - M \dot{N})}{N^3} \right) W(\psi) \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \nonumber \\
%
&& + \, \frac{\lambda M \dot{\psi}}{N^2} \frac{dW}{d\psi} \sum_{n = 1}^{D - 1} c_n \left(\frac{n A_n}{D-1} + B_n \right) u^{D - n - 1}. \label{eq:Pgrav}
\end{eqnarray}
From the variation with respect to $\psi(t)$, we obtain the equation of motions,
\begin{eqnarray}
&& \frac{1}{N^2} \ddot{\psi} + \left( \frac{(D - 1) H}{N} - \frac{\dot{N}}{N^3} \right) \dot{\psi} + \frac{dV}{d\psi} + \frac{1}{N} \frac{dW}{d\psi} \Bigg\{ \sum_{n = 0}^{D - 1} \left(M c_n + N c_{n + 1} \right) A_n u^{D - n - 1} \nonumber \\
%
&& \quad + \, \frac{\lambda M}{N} \left( N H \sum_{n = 1}^{D - 1} n c_n A_n u^{D - n - 1} + M H_f \sum_{n = 1}^{D - 1} (D - n - 1) c_n A_n u^{D - n - 1} \right) \nonumber \\
%
&& \quad + \, \left( \frac{\dot{\lambda} M}{N} + \frac{\lambda (\dot{M} N - M \dot{N})}{N^2} \right) \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} - \lambda M \Bigg[ \frac{\lambda M}{N} \frac{d^2W}{d\psi^2} \left( \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right)^2 \nonumber \\
%
&& \quad - \, \frac{D - 1}{D - 2} \frac{\lambda M W(\psi)}{4 M_{Pl}^{D - 2} N} \left( \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right)^2 + (D - 1) \left( H -u H_f \right) \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \Bigg] \Bigg\} = 0. \nonumber \\ \label{eq:dinamikamedanskalar}
\end{eqnarray}
Performing the variation with respect to $\lambda$ will give us
\begin{eqnarray}
&& W(\psi) \left( u H_f - H \right) \left( \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right) + \frac{\lambda}{D - 2} \frac{W^2(\psi) M}{M_{Pl}^{D - 2} N} \left( \sum_{n = 1}^{D - 1} c_n B_n u^{D - n - 1} \right)^2 \nonumber \\
%
&& \quad - \, \frac{\dot{\psi}}{D - 1} \frac{dW}{d\psi} \left( \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right) - \frac{\lambda}{D - 1} \frac{M}{N} \left( \frac{dW}{d\psi} \right)^2 \left( \sum_{n = 1}^{D - 1} c_n A_n u^{D - n - 1} \right)^2 = 0, \nonumber \\ \label{eq:parhubblelatarbelakang}
\end{eqnarray}
which relates the Hubble rate of the background spacetime to the Hubble rate of the dynamical spacetime. In the simple case where $\psi$ is trivial and $D = 4$, Eq.~\eqref{eq:parhubblelatarbelakang} corresponds to the branches of solutions discussed in \cite{defelice2016phenomenology}. In the case of $D = 5, 6$ with trivial $\psi$, one has to solve cubic and quartic polynomials, respectively, while for $D \ge 7$ the solutions of the polynomials are not known in closed form. For nontrivial $\psi$, it is still unknown whether such branches exist. These aspects will be considered elsewhere.
\section{Dynamical System Analysis}
\label{sec:Dynas}
Let us consider a special case where the couplings $W(\psi)$ and $V(\psi)$ have the form
\begin{eqnarray}
W(\psi) = W_0 \exp \left( -\frac{\lambda_W \psi}{\sqrt{M_{Pl}^{D - 2}}} \right), \qquad V(\psi) = V_0 \exp \left( -\frac{\lambda_V \psi}{\sqrt{M_{Pl}^{D - 2}}} \right), \label{eq:VWpsi}
\end{eqnarray}
which has been considered in four-dimensional cases \cite{leon2013cosmological, wu2013dynamical}, where the constants $V_0, W_0 > 0$ and $\lambda_V, \lambda_W \geq 0$. Moreover, the form of the scalar potential $V(\psi)$ may drive the inflationary expansion in early-universe models and has been well studied in the context of dynamical systems in Ref.~\cite{copeland1998exponential}. Note that for $\lambda_W = 0$ we have the MTMVMG theory with an ordinary constant graviton mass \cite{defelice2016minimal,defelice2016phenomenology}.
For the rest of the paper we simply take a branch of solutions of Eq.~\eqref{eq:parhubblelatarbelakang} where $\lambda = 0$. Setting the lapse functions $N(t) = M(t) = 1$, we introduce the autonomous variables,
\begin{eqnarray}
x_\rho &\equiv& \left( \frac{2 \rho_m}{M_{Pl}^{D - 2} \lambda_D^2 H^2} \right)^{1/2}, \qquad x_\psi \equiv \frac{\dot{\psi}}{\sqrt{M_{Pl}^{D - 2}} \lambda_D H}, \nonumber \\
%
x_V &\equiv& \left( \frac{2 V(\psi)}{M_{Pl}^{D - 2} \lambda_D^2 H^2} \right)^{1/2}, \qquad x_W \equiv \left( \frac{2 W(\psi)}{M_{Pl}^{D - 2} \lambda_D^2 H^2} \right)^{1/2},
\end{eqnarray}
with $\lambda^2_D \equiv (D - 1) (D - 2)$ such that the equations of motion \eqref{eq:Friedman2}, \eqref{eq:dinamikamedanskalar}, and \eqref{eq:parhubblelatarbelakang} can be written down in terms of the autonomous variables,
\begin{eqnarray}
\frac{2}{D - 1} x_\psi' &=& (1 - w_m) x_\psi^3 - (1 + w_m) x_\psi x_V^2 - (f_2(u) + w_m f_1(u)) x_\psi x_W^2 \nonumber \\
%
&& - \, (1 - w_m) x_\psi + 2 \sqrt{\frac{D - 2}{D - 1}} \left( \lambda_V x_V^2 + \lambda_W f_1(u) x_W^2 \right), \label{eq:xpsiprime} \\
%
\frac{2}{D - 1} x_V' &=& (1 + w_m) x_V + (1 - w_m) x_\psi^2 x_V - (1 + w_m) x_V^3 - (f_2(u) + w_m f_1(u)) x_V x_W^2 \nonumber \\
%
&& - \, \sqrt{\frac{D - 2}{D - 1}} \lambda_V x_\psi x_V, \label{eq:xvprime} \\
%
\frac{2}{D - 1} x_W' &=& (1 + w_m) x_W + (1 - w_m) x_\psi^2 x_W - (1 + w_m) x_V^2 x_W - (f_2(u) + w_m f_1(u)) x_W^3 \nonumber \\
%
&& - \, \sqrt{\frac{D - 2}{D - 1}} \lambda_W x_\psi x_W, \label{eq:xwprime}
\end{eqnarray}
where we have used the constraint coming from Eq.~\eqref{eq:Friedman1},
\begin{equation}
x_\rho^2 + x_\psi^2 + x_V^2 + f_1(u) x_W^2 = 1. \label{eq:friedmanconstr}
\end{equation}
Note that here the prime symbol denotes the derivative with respect to $\ln{(a)}$. We also have defined
\begin{equation}
f_1(u) \equiv \sum_{n = 0}^{D - 1} c_{n + 1} A_n u^{D - n - 1}, \qquad f_2(u) \equiv \sum_{n = 1}^{D - 1} \left( c_n + c_{n + 1} \right) n A_n u^{D - n - 1}, \label{eq:f1f2}
\end{equation}
which are assumed to be bounded functions. As we have seen above, the scale factor $a(t)$ can be thought of as a parameter in this picture, and the fiducial scale $\tilde{a}(t)$ is only a background in this setup. Therefore, we conclude that the quantity $u$ in Eq.~\eqref{eq:parameter} is not a dynamical variable; it may be either a function of time, $u(t)$, or a constant. The first case is called the \textit{normal branch}, while the latter is called the \textit{self-accelerating branch}. Both have appeared in the context of four-dimensional MTMG \cite{defelice2016phenomenology}. A similar situation also occurs in the cosmological models of dRGT massive gravity (see, for example, Ref.~\cite{alatas2017parameter}).
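As a numerical illustration, the sketch below integrates Eqs.~(\ref{eq:xpsiprime})--(\ref{eq:xwprime}) for $D = 4$ on the self-accelerating branch (constant $u$); the coefficients $c_n$, the value of $u$, the exponents $\lambda_V, \lambda_W$, and the initial conditions are illustrative choices only, not values fixed by the theory.
\begin{verbatim}
import numpy as np
from math import comb
from scipy.integrate import solve_ivp

D, w_m, lam_V, lam_W = 4, 0.0, 1.0, 1.0   # dust; illustrative couplings
c = [1.0] * (D + 1)                        # illustrative c_0, ..., c_D
u = 1.0                                    # self-accelerating branch

A = lambda n: comb(D - 1, n)               # A_n = C(D-1, n)
f1 = sum(c[n + 1] * A(n) * u**(D - n - 1) for n in range(D))
f2 = sum((c[n] + c[n + 1]) * n * A(n) * u**(D - n - 1) for n in range(1, D))

def rhs(lna, x):
    # x = (x_psi, x_V, x_W); right-hand sides of Eqs. (xpsiprime)-(xwprime).
    xp, xv, xw = x
    k = np.sqrt((D - 2) / (D - 1))
    m = f2 + w_m * f1
    dxp = ((1 - w_m) * xp**3 - (1 + w_m) * xp * xv**2 - m * xp * xw**2
           - (1 - w_m) * xp + 2 * k * (lam_V * xv**2 + lam_W * f1 * xw**2))
    dxv = ((1 + w_m) * xv + (1 - w_m) * xp**2 * xv - (1 + w_m) * xv**3
           - m * xv * xw**2 - k * lam_V * xp * xv)
    dxw = ((1 + w_m) * xw + (1 - w_m) * xp**2 * xw - (1 + w_m) * xv**2 * xw
           - m * xw**3 - k * lam_W * xp * xw)
    return 0.5 * (D - 1) * np.array([dxp, dxv, dxw])

sol = solve_ivp(rhs, [0.0, 10.0], [0.1, 0.1, 0.05])
print(sol.y[:, -1])   # late-time values of (x_psi, x_V, x_W)
\end{verbatim}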
We can also introduce some higher-dimensional quantities which are analogous to the four-dimensional ones. First, the equation-of-state parameter and the density parameter in this theory are given by
\begin{eqnarray}
w_\text{MG} &\equiv& \frac{P_\text{MG}}{\rho_\text{MG}} = \frac{x_\psi^2 - x_V^2 - f_2(u) x_W^2}{x_\psi^2 + x_V^2 + f_1(u) x_W^2}, \\
%
\Omega_\text{MG} &\equiv& \frac{2 \rho_\text{MG}}{M_{Pl}^{D - 2} \lambda_D^2 H^2} = x_\psi^2 + x_V^2 + f_1(u) x_W^2, \label{eq:mgsector}
\end{eqnarray}
respectively. Then, the deceleration parameter has the form
\begin{eqnarray}
q &=& -1 - \frac{\dot{H}}{H^2} \nonumber \\
%
&=& -1 + \frac{(D - 1)}{2} \left[ 1 + w_m + (1 - w_m) x_\psi^2 - (1 + w_m) x_V^2 - (f_2(u) + w_m f_1(u)) x_W^2 \right], \nonumber \\
\end{eqnarray}
where for $q < 0$ we have an accelerating universe. There are five critical points in the massless sector and three critical points in the massive sector, which are listed in Tables~\ref{tab:massless} and \ref{tab:massive}, respectively, together with their properties. Note that the quantities $\mathcal{A}_\pm(D, w_m, \lambda_V)$ and $\mathcal{B}_\pm(D, w_m, \lambda_W)$ mentioned in these tables are defined as
\begin{eqnarray}
&& \mathcal{A}_\pm(D, w_m, \lambda_V) \equiv \frac{1 + w_m}{3 - w_m} \nonumber \\
%
&& \qquad + \, \frac{(D - 2) \lambda_V^2}{(D - 2) (3 - w_m)} \Bigg\{ 1 \pm \sqrt{1 - \frac{2 (D - 1) (2 - w_m) (1 + w_m)}{(D - 2) \lambda_V^2} + \left[ \frac{(D - 1)(1 + w_m)}{(D - 2) \lambda_V^2} \right]^2} \Bigg\} \nonumber \\ \label{eq:Apm} \\
%
&& \mathcal{B}_\pm(D, w_m, \lambda_W) \equiv \frac{1 + w_m}{3 - w_m} \nonumber \\
%
&& \qquad + \, \frac{(D - 2) \lambda_W^2}{(D - 1) (3 - w_m)} \Bigg\{ 1 \pm \sqrt{1 - \frac{2 (D - 1) (2 - w_m) (1 + w_m)}{(D - 2) \lambda_W^2} + \left[ \frac{(D - 1) (1 + w_m)}{(D - 2) \lambda_W^2} \right]^2} \Bigg\}. \nonumber \\ \label{eq:Bpm}
\end{eqnarray}
\begin{landscape}
\begin{table}[p]
\scriptsize
\begin{tabularx}{\columnwidth}{| >{\centering\arraybackslash}>{\hsize=0.4\hsize}X | >{\centering\arraybackslash}>{\hsize=0.7\hsize}X | >{\centering\arraybackslash}>{\hsize=1.4\hsize}X | >{\centering\arraybackslash}>{\hsize=0.3\hsize}X | >{\centering\arraybackslash}>{\hsize=0.9\hsize}X | >{\centering\arraybackslash}>{\hsize=0.6\hsize}X | >{\centering\arraybackslash}>{\hsize=1.0\hsize}X | >{\centering\arraybackslash}>{\hsize=0.9\hsize}X | >{\centering\arraybackslash}>{\hsize=2.8\hsize}X |}
\hline
Critical Points & $x_{\psi, c}$ & $x_{V, c}$ & $x_{W, c}$ & Existence & $w_{\text{MG}}$ & $\Omega_{\text{MG}}$ & $q$ & Stability \\
\hline
\hline
CP$_1$ & $0$ & $0$ & $0$ & always & undefined & $0$ & $\frac{D - 3 + w_m(D - 1)}{2}$ & stable node for $w_m < -1$, unstable node for $w_m > 1$, saddle point otherwise \\
\hline
CP$_2$ & $1$ & $0$ & $0$ & always & $1$ & $1$ & $D - 2$ & stable node for $w_m > 1$, $\lambda_V > 2 \sqrt{\frac{D - 1}{D - 2}}$, and $\lambda_W > 2 \sqrt{\frac{D - 1}{D - 2}}$, unstable node for $w_m < 1$, $\lambda_V < 2 \sqrt{\frac{D - 1}{D - 2}}$, and $\lambda_W < 2 \sqrt{\frac{D - 1}{D - 2}}$, saddle point otherwise \\
\hline
CP$_3$ & $-1$ & $0$ & $0$ & always & $1$ & $1$ & $D - 2$ & unstable node for $w_m < 1$, saddle point otherwise \\
\hline
CP$_4$ & $x_{\psi, c}$ & $0$ & $0$ & $w_m = 1$ and $0 < | x_{\psi, c} | < 1$ & $1$ & $x_{\psi, c}^2$ & $D - 2$ & unstable for $-1 < x_{\psi, c} < 0$, or $0 < x_{\psi, c} < 1$ with at least either $\lambda_V < \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$ or $\lambda_W < \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$, non-hyperbolic for $0 < x_{\psi, c} < 1$ with $\lambda_V > \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$ and $\lambda_W > \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$ \\
\hline
CP$_5$ & $\frac{\mathcal{A}_\pm}{\lambda_V} \sqrt{\frac{D - 1}{D - 2}}$ & $\frac{1}{\lambda_V} \sqrt{\frac{(D - 1) (2 - \mathcal{A}_\pm) \mathcal{A}_\pm}{2 (D - 2)}}$ & $0$ & $0 < \mathcal{A}_\pm < 2$ & $\frac{3 \mathcal{A}_\pm - 2}{\mathcal{A}_\pm + 2}$ & $\frac{(D - 1) (2 + \mathcal{A}_\pm) \mathcal{A}_\pm}{2 (D - 2) \lambda_V^2}$ & $\frac{(D - 1) \mathcal{A}_\pm}{2} - 1$ & see Fig.~\ref{fig:CP5} \\
\hline
\end{tabularx}
\caption{\label{tab:massless} The properties, the existence, the equation-of-state parameter $w_\text{MG}$, the density parameter $\Omega_\text{MG}$, the deceleration parameter $q$, and the stability conditions of the critical points of the autonomous system in the massless sector, $W(\psi) = 0$. Note that we have introduced the notation $\mathcal{A}_\pm$ in Eq.~\eqref{eq:Apm}.}
\end{table}
\begin{table}[p]
\scriptsize
\begin{tabularx}{\columnwidth}{| >{\centering\arraybackslash}>{\hsize=0.4\hsize}X | >{\centering\arraybackslash}>{\hsize=0.6\hsize}X | >{\centering\arraybackslash}>{\hsize=0.9\hsize}X | >{\centering\arraybackslash}>{\hsize=1.3\hsize}X | >{\centering\arraybackslash}>{\hsize=1.1\hsize}X | >{\centering\arraybackslash}>{\hsize=0.6\hsize}X | >{\centering\arraybackslash}>{\hsize=1.0\hsize}X | >{\centering\arraybackslash}>{\hsize=0.8\hsize}X | >{\centering\arraybackslash}>{\hsize=2.3\hsize}X |}
\hline
Critical Points & $x_{\psi, c}$ & $x_{V, c}$ & $x_{W, c}$ & Existence & $w_{\text{MG}}$ & $\Omega_{\text{MG}}$ & $q$ & Stability \\
\hline
\hline
CP$_6$ & $0$ & $\sqrt{\frac{\lambda_W}{\lambda_W - \lambda_V}}$ & $\sqrt{\frac{\lambda_V}{| f_1(u) | (\lambda_W - \lambda_V)}}$ & $\lambda_W > \lambda_V$ and $f_1(u) = f_2(u) < 0$ & $-1$ & $1$ & $-1$ & stable node for $w_m > -1$ and $\lambda_V \lambda_W < \frac{1}{4} \frac{D - 2}{D - 1}$, stable spiral for $w_m > -1$ and $\lambda_V \lambda_W > \frac{1}{4} \frac{D - 2}{D - 1}$, saddle point otherwise \\
\hline
CP$_7$ & $0$ & $\sqrt{1 - f_1(u) x_{W, c}^2}$ & $x_{W, c}$ & $0 \leq x_{W, c} \leq 1$, $\lambda_V = \lambda_W = 0$, and $f_1(u) = f_2(u) > 0$ & $-1$ & $1$ & $-1$ & unstable for $w_m < -1$, non-hyperbolic for $w_m > -1$ \\
\hline
CP$_8$ & $\frac{\mathcal{B}_\pm}{\lambda_W} \sqrt{\frac{D - 1}{D - 2}}$ & $0$ & $\frac{1}{\lambda_W} \sqrt{\frac{(D - 1) (2 - \mathcal{B}_\pm) \mathcal{B}_\pm}{2 (D - 2) f_1(u)}}$ & $0 < \mathcal{B}_\pm < 2$ & $\frac{3 \mathcal{B}_\pm - 2}{\mathcal{B}_\pm + 2}$ & $\frac{(D - 1) (2 + \mathcal{B}_\pm) \mathcal{B}_\pm}{2 (D - 2) \lambda_W^2}$ & $\frac{(D - 1) \mathcal{B}_\pm}{2} - 1$ & see Fig.~\ref{fig:CP8} \\
\hline
\end{tabularx}
\caption{\label{tab:massive} The properties, the existence, the equation-of-state parameter $w_\text{MG}$, the density parameter $\Omega_\text{MG}$, the deceleration parameter $q$, and the stability conditions of the critical points of the autonomous system in the massive sector, $W(\psi)\neq 0$. Note that we have introduced the notation $\mathcal{B}_\pm$ in Eq.~\eqref{eq:Bpm}.}
\end{table}
\end{landscape}
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.1]{m3positive.png}
\includegraphics[scale=0.1]{m3negative.png}
\caption{The stability conditions of the critical point CP$_5$, obtained by plotting $\mathcal{A}_\pm$ defined in Eq.~\eqref{eq:Apm} as a function of $\lambda_V$. The forbidden zone is the area with the unphysical property $\Omega_\text{MG} > 1$. The figure shows two possibilities for the parameter values $\lambda_V$ and $\lambda_W$: (a) $\lambda_V > \lambda_W$, and (b) $\lambda_V < \lambda_W$.}
\label{fig:CP5}
\end{figure}
\begin{figure}[hbtp]
\centering
\includegraphics[scale=0.1]{m3positive2.png}
\includegraphics[scale=0.1]{m3negative2.png}
\caption{The stability conditions of the critical point CP$_8$, obtained by plotting $\mathcal{B}_\pm$ defined in Eq.~\eqref{eq:Bpm} as a function of $\lambda_W$. The forbidden zone is the area with the unphysical property $\Omega_\text{MG} > 1$. The figure shows two possibilities for the parameter values $\lambda_V$ and $\lambda_W$: (a) $\lambda_V < \lambda_W$, and (b) $\lambda_V > \lambda_W$.}
\label{fig:CP8}
\end{figure}
Let us first discuss the massless sector, with trivial coupling $W_0 = 0$, in which there are five critical points, namely CP$_1$, CP$_2$, CP$_3$, CP$_4$, and CP$_5$. The point CP$_1$ describes a matter-dominated era whose stability depends on the equation-of-state parameter $w_m$: it admits three possible behaviors, namely a stable node (a late-time attractor), an unstable node (a past-time attractor), and a saddle point. The expansion of the universe is accelerating for $w_m < -{(D - 3)}/{(D - 1)}$ and non-accelerating for $w_m \geq -{(D - 3)}/{(D - 1)}$. The first case is, however, unphysical since it contains the vacuum with $w_m = -1$, while in the latter case the matter content of the universe could be dominated by dust ($w_m = 0$) or radiation ($w_m = \frac{1}{D - 1}$).
At CP$_2$ we also find late-time and past-time attractors according to the parameter values, again with three possible behaviors: stable node, unstable node, and saddle point. At this point, the gravitational sector can be thought of as non-phantom energy that dominates the universe ($\Omega_\text{MG} = 1$) with non-accelerating expansion. The point CP$_3$ has properties similar to CP$_2$ in terms of its existence and the values of $w_\text{MG}$, $\Omega_\text{MG}$, and $q$, but it does not admit a late-time attractor (a stable node).
The point CP$_4$ exists on the interval $0 < | x_{\psi, c} | < 1$. It is unstable on $-1 < x_{\psi, c} < 0$, or on $0 < x_{\psi, c} < 1$ with at least either $\lambda_W < \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$ or $\lambda_V < \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$, and it becomes non-hyperbolic on $0 < x_{\psi, c} < 1$ with $\lambda_V > \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$ and $\lambda_W > \frac{2}{x_{\psi, c}} \sqrt{\frac{D - 1}{D - 2}}$. At this point we have a class of universes with non-accelerating expansion, since $D \geq 4$. These universes are filled with matter and the scalar field, subject to the constraint $x_{\rho, c}^2 + x_{\psi, c}^2 = 1$ and with equation-of-state parameters $w_m = w_\text{MG} = 1$.
The point CP$_5$ exists on the interval $0 < \mathcal{A}_\pm < 2$ and could be either a late-time attractor, a past-time attractor, or a saddle point, depending on the parameters $\lambda_V$ and $\lambda_W$. For $\mathcal{A}_\pm(D, w_m, \lambda_V) < \frac{2}{D - 1}$, we have a class of universes that expand with acceleration, with the scalar field playing the role of a quintessence-like component. The density parameter lies in the interval $0 \leq \Omega_\text{MG} \leq 1$, so that $\lambda_V < \sqrt{\frac{(D - 1)(2 + \mathcal{A}_\pm) \mathcal{A}_\pm}{2 (D - 2)}}$ is not allowed since it would produce $\Omega_\text{MG} > 1$. Around $\lambda_V \approx \sqrt{\frac{(D - 1)(2 + \mathcal{A}_\pm) \mathcal{A}_\pm}{2(D - 2)}}$, the expansion of the universe is dominated by the scalar field ($\Omega_\text{MG} \approx 1$), which makes this point a good candidate for a saddle power-law inflation model. This will be discussed in detail in Sec.~\ref{sec:cosmologicalcons}.
Next, we consider the massive sector, which consists of the points CP$_6$, CP$_7$, and CP$_8$. The points CP$_6$ and CP$_7$ have a vanishing kinetic part of the scalar field, which implies that the scalar becomes very massive. On the other hand, the kinetic part of the scalar is non-zero at CP$_8$.
The point CP$_6$ could be either a late-time attractor for $w_m > -1$ or a saddle point for $w_m < -1$, where the graviton mass plays the role of a cosmological constant that drives the accelerating expansion of the universe. This cosmological constant has to be positive since $\lambda_W > \lambda_V$. These features make CP$_6$ a good candidate for a description compatible with the well-known observational results \cite{peebles2003cosmological}.
The point CP$_7$ is defined on the curve $x_{V, c}^2 + f_1(u) x_{W, c}^2 = 1$ and may be either unstable for $w_m < -1$ or non-hyperbolic for $w_m > -1$. At this point we have $\lambda_V = \lambda_W = 0$, showing that both the scalar potential and the graviton mass are constant. They behave like a cosmological constant that drives the accelerating expansion of the universe.
Finally, the point CP$_8$ could be either a late-time attractor or a saddle point depending on the values of $\lambda_V$ and $\lambda_W$. For $\mathcal{B}_\pm(D, w_m, \lambda_W) < \frac{2}{D - 1}$, the universe expands with acceleration and the mass-varying massive graviton plays the role of quintessence dark energy. There exists a forbidden zone for CP$_8$: it has $\Omega_\text{MG} > 1$ for $\lambda_W < \sqrt{\frac{(D - 1)(2 + \mathcal{B}_\pm) \mathcal{B}_\pm}{2 (D - 2)}}$.
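As a numerical illustration (with assumed parameter values, not results quoted from the paper), the following sketch transcribes Eq.~\eqref{eq:Apm} and the CP$_5$ columns of Table~\ref{tab:massless}, scanning the existence window $0 < \mathcal{A}_\pm < 2$:
\begin{verbatim}
# Evaluate A_pm of Eq. (Apm) and the CP5 entries of Table 1 for trial values
# of (D, w_m, lambda_V); the chosen numbers are hypothetical.
import numpy as np

def calA(D, w_m, lamV, sign):
    q0 = (D - 1) / ((D - 2) * lamV**2)
    disc = 1 - 2*q0*(2 - w_m)*(1 + w_m) + (q0*(1 + w_m))**2
    if disc < 0:
        return np.nan                 # no real root: CP5 absent
    # the (D - 2) factors in the prefactor of Eq. (Apm) cancel
    return (1 + w_m)/(3 - w_m) + lamV**2/(3 - w_m)*(1 + sign*np.sqrt(disc))

D, w_m, lamV = 4, 0.0, 3.0
for sign, name in [(+1, "A_+"), (-1, "A_-")]:
    A = calA(D, w_m, lamV, sign)
    if np.isnan(A) or not (0 < A < 2):
        print(f"{name}: CP5 does not exist")
        continue
    w_MG = (3*A - 2)/(A + 2)
    Omega = (D - 1)*(2 + A)*A/(2*(D - 2)*lamV**2)
    q = (D - 1)*A/2 - 1
    print(f"{name} = {A:.3f}: w_MG = {w_MG:.3f}, "
          f"Omega_MG = {Omega:.3f}, q = {q:.3f}")
\end{verbatim}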
\section{Local-Global Existence of Solutions}
\label{sec:locglobex}
In this Section we will prove the local-global existence and uniqueness of solutions of the evolution equations \eqref{eq:xpsiprime}, \eqref{eq:xvprime}, and \eqref{eq:xwprime} with the constraint \eqref{eq:friedmanconstr}, using Picard's iteration and contraction mapping properties. We first discuss the $f_1(u) > 0$ case and then continue with the $f_1(u) < 0$ case.
First of all, we introduce the dynamical variables
\begin{equation}
\bm{u} = \begin{pmatrix} x_\psi \\ x_V \\ x_W \end{pmatrix}, \label{dynvar}
\end{equation}
defined on an interval $I \equiv [s_0, s_0 + \epsilon]$, where $s \equiv \ln{a} \in \mathrm{I\hspace{-0.7mm}R}$ and $\epsilon$ is a small positive constant. The functions $f_1(u)$ and $f_2(u)$ in Eq.~\eqref{eq:f1f2} are bounded. In the first part of this Section we consider the case $f_1(u) > 0$, such that from the constraint \eqref{eq:friedmanconstr} we have
\begin{equation}
0 \le | x_\psi | \le 1, \qquad 0 \le | x_V | \le 1, \qquad 0 \le | f_1(u) |^{1/2} | x_W | \le 1. \label{eq:syaratvardin}
\end{equation}
In other words, the quantities $(x_\rho, x_\psi, x_V, f_1(u)^{1/2} x_W)$ lie on the unit 3-sphere $S^3$, and the dynamics takes place on an open set $U \subset S^3$. It is important to notice that the critical point CP$_6$ is excluded in this setup.
All of the evolution equations \eqref{eq:xpsiprime}, \eqref{eq:xvprime}, and \eqref{eq:xwprime} can be simply rewritten into
\begin{equation}
\frac{d\bm{u}}{ds} = \mathcal{J}(\bm{u}), \label{fungsiJ}
\end{equation}
with
\begin{eqnarray}\label{JY}
\mathcal{J}(\bm{u}) \equiv \frac{1}{2} (D-1) \left( \begin{array}{c}
(1 - w_m) x_\psi^3 - (1 + w_m) x_\psi x_V^2 - (f_2(u) + w_m f_1(u)) x_\psi x_W^2 \\
%
-(1 - w_m) x_\psi + 2 \sqrt{\frac{D - 2}{D - 1}} \left( \lambda_V x_V^2 + \lambda_W f_1(u) x_W^2 \right) \\ \\
%
(1 + w_m) x_V + (1 - w_m) x_\psi^2 x_V - (1 + w_m) x_V^3 \\
%
-(f_2(u) + w_m f_1(u)) x_V x_W^2 - \sqrt{\frac{D - 2}{D - 1}} \lambda_V x_\psi x_V \\ \\
%
(1 + w_m) x_W + (1 - w_m) x_\psi^2 x_W - (1 + w_m) x_V^2 x_W \\
%
-(f_2(u) + w_m f_1(u)) x_W^3 - \sqrt{\frac{D - 2}{D - 1}} \lambda_W x_\psi x_W
\end{array} \right).
\end{eqnarray}
\begin{lemma} \label{opJY}
The operator $ \mathcal{J}(\bm{u})$ in Eq.~\eqref{fungsiJ} is locally Lipschitz with respect to $\bm{u}$.
\end{lemma}
\begin{proof}
We have the following estimate
\begin{eqnarray}
| \mathcal{J} |_U &\le& \frac{1}{2} (D-1) \Bigg[ | 1 - w_m | | x_\psi |^3 + | 1 + w_m | | x_\psi | | x_V |^2 + | f_2(u) + w_m f_1(u) | | x_\psi | | x_W |^2 \nonumber \\
%
&& + \, | 1 - w_m | | x_\psi | + 2 \sqrt{\frac{D - 2}{D - 1}} \left( \lambda_V | x_V |^2 + \lambda_W | f_1(u) | | x_W |^2 \right) \nonumber \\
%
&& + \, | 1 + w_m | | x_V | + | 1 - w_m | | x_\psi |^2 | x_V | + | 1 + w_m | | x_V |^3 \nonumber \\
%
&& + \, | f_2(u) + w_m f_1(u) | | x_V | | x_W |^2 + \sqrt{\frac{D - 2}{D - 1}} \lambda_V | x_\psi | | x_V | \nonumber \\
%
&& + \, | 1 + w_m | | x_W | + | 1 - w_m | | x_\psi |^2 | x_W | + | 1 + w_m | | x_V |^2 | x_W | \nonumber \\
%
&& + \, | f_2(u) + w_m f_1(u) | | x_W |^3 + \sqrt{\frac{D - 2}{D - 1}} \lambda_W | x_\psi | | x_W | \Bigg]. \label{estJY}
\end{eqnarray}
Then, using Eq.~\eqref{eq:syaratvardin}, we can show that $| \mathcal{J}(\bm{u}) |_U$ is indeed bounded on $U$.
Moreover, for $\bm{u}, \hat{\bm{u}} \in U$ we have
\begin{eqnarray}\label{estJYLps}
| \mathcal{J}(\bm{u}) - \mathcal{J}(\hat{\bm{u}}) |_U &\le& \frac{1}{2} (D-1) \Bigg[ | 1 - w_m | | x_\psi^3 - \hat{x}_\psi^3 | + | 1 + w_m | | x_\psi x^2_V - \hat{x}_\psi \hat{x}^2_V | \nonumber \\
%
&& + \, | f_2(u) + w_m f_1(u) | | x_\psi x^2_W - \hat{x}_\psi \hat{x}^2_W | + | 1 - w_m | | x_\psi - \hat{x}_\psi | \nonumber \\
%
&& + \, 2 \sqrt{\frac{D - 2}{D - 1}} \left( \lambda_V | x^2_V - \hat{x}^2_V | + \lambda_W | f_1(u) | | x^2_W - \hat{x}^2_W | \right) \nonumber \\
%
&& + \, | 1 + w_m | | x_V - \hat{x}_V | + | 1 - w_m | | x^2_\psi x_V - \hat{x}^2_\psi \hat{x}_V | + | 1 + w_m | | x^3_V - \hat{x}^3_V | \nonumber \\
%
&& + \, | f_2(u) + w_m f_1(u) | | x_V x^2_W - \hat{x}_V \hat{x}^2_W | + \sqrt{\frac{D - 2}{D - 1}} \lambda_V | x_\psi x_V - \hat{x}_\psi \hat{x}_V | \nonumber \\
%
&& + \, | 1 + w_m | | x_W - \hat{x}_W | + | 1 - w_m | | x_\psi^2 x_W - \hat{x}_\psi^2 \hat{x}_W | \nonumber \\
%
&& + \, | 1 + w_m | | x_V^2 x_W - \hat{x}_V^2 \hat{x}_W | + | f_2(u) + w_m f_1(u) | | x^3_W - \hat{x}^3_W | \nonumber \\
%
&& + \, \sqrt{\frac{D - 2}{D - 1}} \lambda_W | x_\psi x_W - \hat{x}_\psi \hat{x}_W | \Bigg].
\end{eqnarray}
After some computations, we obtain
\begin{equation}
\left| \mathcal{J}(\bm{u}) - \mathcal{J}(\hat{\bm{u}}) \right|_U \le C_{\mathcal{J}}(| \bm{u} |, | \hat{\bm{u}} |) | \bm{u} - \hat{\bm{u}} |, \label{localLipshitzcon}
\end{equation}
showing that $\mathcal{J}$ is locally Lipschitz with respect to $\bm{u}$.
\end{proof}
Next, we rewrite Eq.~\eqref{fungsiJ} into the integral form
\begin{equation}
\bm{u}(s) = \bm{u}(s_0) + \int_{s_0}^s \, \mathcal{J} \left( \bm{u}(\hat{s}) \right) d\hat{s}. \label{IntegralEquation}
\end{equation}
We define a Banach space
\begin{equation}
X \equiv \{ \bm{u} \in C(I, \mathrm{I\hspace{-0.7mm}R}^3) : \, \bm{u}(s_0) = \bm{u}_{0}, \, \sup_{s \in I}{| \bm{u}(s) |} \leq L_0 \},
\end{equation}
endowed with the norm
\begin{equation}
| \bm{u} |_{X} = \sup_{s \in I}{|\bm{u}(s)|},
\end{equation}
where $L_0 > 0$. Introducing an operator $\mathcal{K}$
\begin{equation}
\mathcal{K}(\bm{u})(s) = \bm{u}_{0} + \int_{s_0}^s \mathcal{J} \left( \bm{u}(\hat{s}) \right) d\hat{s}, \label{OpKdefinition}
\end{equation}
and using Lemma \ref{opJY}, we have the following result \cite{akbar2015existence}:
\begin{lemma} \label{uniqueness}
Let $\mathcal{K}$ be the operator defined in Eq.~\eqref{OpKdefinition}. Suppose there exists a constant $\varepsilon > 0$ such that $\mathcal{K}$ is a mapping from $X$ to itself and $\mathcal{K}$ is a contraction mapping on $I = [s_0, s_0 + \varepsilon]$ with
\begin{equation}
\varepsilon \leq \min \left( \frac{1}{C_{L_0}}, \frac{1}{C_{L_0} L_0 + \| \mathcal{J}(\bm{u}_0) \|} \right).
\end{equation}
Then, the operator $\mathcal{K}$ is a contraction mapping on $X$.
\end{lemma}
\noindent The above lemma shows that there exists a unique fixed point of Eq.~\eqref{OpKdefinition}, ensuring a unique local solution of the differential equation \eqref{fungsiJ}. We can further construct a maximal solution by repeating the above local-existence argument with the initial condition $\bm{u}(s_n)$ for some $s_0 < s_n < s$, and by using the uniqueness property to glue the solutions.
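The contraction can also be seen numerically. The following schematic illustration (a generic Lipschitz right-hand side standing in for $\mathcal{J}$, not the system itself) shows the sup-norm distance between successive Picard iterates shrinking on a small interval:
\begin{verbatim}
# Picard iteration for du/ds = J(u), u(0) = 0, with an arbitrary locally
# Lipschitz J; successive applications of K contract in the sup norm.
import numpy as np

J = np.cos                       # stand-in right-hand side
s = np.linspace(0.0, 0.5, 501)   # small interval I = [s0, s0 + eps]
u = np.zeros_like(s)             # initial guess: the constant u0 = 0
for k in range(6):
    # K(u)(s) = u0 + int_{s0}^{s} J(u(t)) dt, via cumulative trapezoids
    Ku = np.concatenate(([0.0], np.cumsum(
        0.5*(J(u[1:]) + J(u[:-1]))*np.diff(s))))
    print(f"iteration {k}: sup |K(u) - u| = {np.max(np.abs(Ku - u)):.2e}")
    u = Ku
\end{verbatim}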
We can now show the existence of global solutions of Eq.~\eqref{fungsiJ}. Let us consider the integral form \eqref{IntegralEquation} such that
\begin{equation}
| \bm{u}(s) | \le | \bm{u}(s_0) | + \int_{s_0}^s | \mathcal{J} \left( \bm{u}(\hat{s}) \right) | d\hat{s}. \label{IntegralEquation1}
\end{equation}
We first consider the self-accelerating branch where the parameter $u$ in Eq.~\eqref{eq:parameter} is constant. Using Eqs.~\eqref{eq:syaratvardin} and \eqref{estJY}, we get
\begin{eqnarray}
| \bm{u}(t) | &\le& | \bm{u}(t_0) | + \frac{1}{2} (D - 1) \Bigg[ 3 | 1 - w_m | + 3 | 1 + w_m | + | f_2(u) + w_m | \nonumber \\
%
&& + \, \sqrt{\frac{D - 2}{D - 1}} \left( 3 \lambda_V + 2 \lambda_W \right) + 2 \left| \frac{f_2(u)}{f_1(u)} + w_m \right| + 2 \frac{| 1 + w_m |}{| f_1(u) |^{1/2}} + \frac{| 1 - w_m |}{| f_1(u) |^{1/2}} \nonumber \\
%
&& + \, \frac{| f_2(u) + w_m f_1(u) |}{| f_1(u) |^{3/2}} + \sqrt{\frac{D - 2}{D - 1}} \frac{\lambda_W}{| f_1(u) |^{1/2}} \Bigg] \ln{\left(\frac{a(t)}{a(t_0)}\right)}. \label{eq:solIntegralEquation}
\end{eqnarray}
The second part is to consider the $f_1(u) < 0$ case. From the constraint \eqref{eq:friedmanconstr} we could have
\begin{equation}
x_\psi = \cos{\alpha}, \qquad x_V = \sin{\alpha} \cosh{\beta}, \qquad | f_1(u) |^{1/2} x_W = \sin{\alpha} \sinh{\beta}, \label{eq:syaratvardin1}
\end{equation}
where $\alpha \equiv \alpha(s)$ and $\beta \equiv \beta(s)$. In this case, Lemma \ref{opJY} and Lemma \ref{uniqueness} still hold, but we need to modify the estimate to show the global existence. In the case at hand, using Eq.~\eqref{eq:syaratvardin1}, the estimate \eqref{IntegralEquation1} becomes
\begin{eqnarray}
| \bm{u}(t) | &\le& | \bm{u}(t_0) | + \frac{1}{2} (D - 1) \Bigg[ 2 | 1 - w_m | \ln{\left( \frac{a(t)}{a (t_0)} \right)} \nonumber \\
%
&& + \, \left( | 1 + w_m | + | 1 - w_m | + \frac{| 1 + w_m |}{| f_1(u) |^{1/2}} + \frac{| 1 - w_m |}{| f_1(u) |^{1/2}} \right. \nonumber \\
%
&& + \, \left. \sqrt{\frac{D - 2}{D - 1}} \left(\lambda_V + \frac{\lambda_W}{| f_1(u) |^{1/2}} \right) \right) \int_{s_0}^s \cosh{\beta} d\hat{s} \nonumber \\
%
&& + \, \left( | 1 + w_m | + \left| \frac{f_2(u)}{f_1(u)} + w_m \right| + 2 \sqrt{\frac{D - 2}{D - 1}} \left( \lambda_V + \lambda_W \right) \right) \int_{s_0}^s \cosh^2{\beta} \, d\hat{s} \nonumber \\
%
&& + \, \left( 2 | 1 + w_m | + \left| \frac{f_2(u)}{f_1(u)} + w_m \right| + \frac{| f_2(u) + w_m f_1(u) |}{| f_1(u) |^{3/2}} \right) \int_{s_0}^s \cosh^3{\beta} \, d\hat{s} \Bigg]. \nonumber \\ \label{eq:solIntegralEquation1}
\end{eqnarray}
For the normal branch with $u = u(t)$, we employ a similar procedure as above and use the assumption that $f_1(u)$ and $f_2(u)$ are bounded. We then obtain slightly modified forms of Eqs.~\eqref{eq:solIntegralEquation} and \eqref{eq:solIntegralEquation1}.
Thus, we have proven
\begin{theorem} \label{thmlocglob}
There exists a global solution of the evolution equations \eqref{eq:xpsiprime}, \eqref{eq:xvprime}, and \eqref{eq:xwprime} with constraint \eqref{eq:friedmanconstr}.
\end{theorem}
\section{Cosmological Models}
\label{sec:cosmologicalcons}
In this Section we discuss some possible cosmological models of the theory. To simplify the computation, we choose the self-accelerating branch, where the parameter $u$ in Eq.~\eqref{eq:parameter} is constant. For the exponential form of the potentials \eqref{eq:VWpsi}, we may have an inflationary era described by the well-known power-law inflation \cite{lucchin1985power}, where $a(t) \varpropto t^{1/\epsilon}$ with the slow-roll parameter $\epsilon = {| \dot{H} |}/{H^2} < 1$.
If the scalar $\psi$ plays the role of the inflaton field in the early epoch, then the critical points CP$_5$ and CP$_8$ are good candidates to describe that era, with the slow-roll parameter given by
\begin{eqnarray}
\epsilon &=& \frac{(D - 2) \lambda_V^2}{4} < 1, \label{eq:epsiloncp5} \\
%
\epsilon &=& \frac{(D - 2) \lambda_W^2}{4} \left( 1 + \frac{\sum_{n = 0}^{D - 1} c_n A_n u^{D - n - 1}}{\sum_{n = 0}^{D - 1} c_{n + 1} A_n u^{D - n - 1}} \right) < 1, \label{eq:epsiloncp8}
\end{eqnarray}
for CP$_5$ and CP$_8$, respectively. In the CP$_5$ case, which is in the massless sector, we can use Eqs.~\eqref{eq:mgsector} and \eqref{eq:epsiloncp5} to get
\begin{equation}
w_\text{MG} = \frac{D - 2}{D - 1} \frac{\lambda_V^2}{2} - 1,
\end{equation}
which is negative for $D > 3$, since $\epsilon < 1$ implies $\lambda_V^2 < 4/(D - 2)$. Hence, from Table \ref{tab:massless}, we have
\begin{equation}
\mathcal{A}_\pm = \frac{2 (D - 2) \lambda_V^2}{8 (D - 1) - (D - 2) \lambda_V^2} < \frac{1}{D - 1}.
\end{equation}
Similarly, in the CP$_8$ case, which is in the massive sector, assuming that $u$ is constant, we can use Eqs.~\eqref{eq:mgsector} and \eqref{eq:epsiloncp8} to get
\begin{equation}
w_\text{MG} = \frac{D - 2}{D - 1} \frac{\lambda_W^2}{2} \left( 1 + \frac{\sum_{n = 0}^{D - 1} c_n A_n u^{D - n - 1}}{\sum_{n = 0}^{D - 1} c_{n + 1} A_n u^{D - n - 1}} \right) - 1,
\end{equation}
which is negative for $D > 3$. Hence, from Table \ref{tab:massive}, we obtain
\begin{eqnarray}
\mathcal{B}_\pm &=& \frac{2 (D - 2) \lambda_W^2 \sum_{n = 0}^{D - 1} (c_n + c_{n + 1}) A_n u^{D - n - 1}}{8 (D - 1) \sum_{n = 0}^{D - 1} c_{n + 1} A_n u^{D - n - 1} - (D - 2) \lambda_W^2 \sum_{n = 0}^{D - 1} (c_n + c_{n + 1}) A_n u^{D - n - 1}} \\
%
&<& \frac{1}{D - 1}.
\end{eqnarray}
The scalar $\psi$ in the inflation era has the form
\begin{equation}
\psi(t) \propto \frac{\sqrt{M_{Pl}^{D - 2}}}{\lambda_\alpha} \ln{\left[\frac{\lambda_\alpha^2 V_0 \epsilon t^2}{2 M_{Pl}^{D - 2} (D - 1 - \epsilon)} \right]}, \label{eq:psiinfl}
\end{equation}
where $\lambda_\alpha = \lambda_V$ ($\lambda_W$) for the case of CP$_5$ (CP$_8$), respectively.
In order to have a nucleosynthesis era, we have to assume $\epsilon \ll 1$, with the reheating phase beginning at $t_i$ and ending at $t_f$. The scale factor then satisfies
\begin{equation}
\frac{a(t_f)}{a(t_i)} = \left( \frac{t_f}{t_i} \right)^{1/\epsilon} = \exp{\left[ \frac{\lambda_\alpha (\psi_f - \psi_i)}{2 \epsilon \sqrt{M_{Pl}^{D - 2}}} \right]}. \label{eq:afaiinfl}
\end{equation}
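As a quick numerical illustration of Eq.~\eqref{eq:afaiinfl} (the values of $\epsilon$, $t_f/t_i$, and $\lambda_\alpha$ below are hypothetical, not fits to data):
\begin{verbatim}
# Number of e-folds implied by Eq. (afaiinfl), and the corresponding field
# excursion; all input values are assumptions chosen for illustration.
import numpy as np

eps = 0.01          # slow-roll parameter, assumed << 1
t_ratio = 2.0       # t_f / t_i
N = np.log(t_ratio) / eps
print(f"N = ln(a_f/a_i) = {N:.1f} e-folds")

lam_alpha = 0.1     # lambda_V (CP5) or lambda_W (CP8)
dpsi = 2 * eps * N / lam_alpha   # (psi_f - psi_i) / sqrt(M_Pl^{D-2})
print(f"field excursion: {dpsi:.2f} in units of sqrt(M_Pl^(D-2))")
\end{verbatim}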
In addition, the scalar potential at $\psi_i = \psi(t_i)$ drops to its value at $\psi_f = \psi(t_f)$, implying that the argument of the exponential is positive and that the scale factor increases during this period. At the end of inflation (or at the beginning of the reheating at $t = t_i$), the scalar field $\psi$ still dominates the universe, with initial density parameter $\Omega_\text{MG, i} \simeq 1$. Then, a phase transition of $\psi$ occurs such that $\psi$ decays into matter fields in the time interval $t_i < t < t_f$. The decay products are relativistic, with $P_{m^\ast} = {\rho_{m^\ast}}/{(D - 1)}$. From Eq.~\eqref{eq:afaiinfl} and the conservation of energy of relativistic matter (radiation), we obtain the relations $\rho_{\text{MG}, f}/{\rho_{\text{MG}, i}} = ({a_i}/{a_f})^{2 \epsilon}$ and $\rho_{M, f}/\rho_{M, i} = ({a_i}/{a_f})^D$, respectively. Comparing the inflaton density parameter $\Omega_\text{MG} = 2 \rho_\text{MG}/{(M_{Pl}^{D - 2} \lambda_D^2 H^2)}$ with the matter field density, we find that $\Omega_\text{MG}$ decreases according to
\begin{equation}
\Omega_\text{MG} \simeq \left(\frac{1 \ \text{MeV}^4}{\rho_M} \right)^{2 \epsilon/D}, \qquad (t_i \le t \le t_f),
\end{equation}
as $\rho_M$ increases. Note that here we have assumed $\epsilon = {|\dot{H}|}/{H^2} \ll 1$ in this interval. At the end of this period, $t = t_f$, we have $\rho_M \sim 10^4\text{--}10^8 \text{ MeV}^4$.
On the other hand, we may also apply the theory to the late-time era. Here, we have at least two interesting possible scenarios. The first is that the scalar field $\psi$ becomes frozen, after the reheating era, at a fixed value $\psi_\infty$ with $w_\text{MG} = -1$ at CP$_6$. The value of $W(\psi_\infty)$ then becomes the graviton mass at the present time, which can be determined using observational constraints. Hence, the accelerating expansion of the universe is due to a constant mass term, as in the dRGT theory. The second possible scenario is that the scalar field $\psi$ plays the role of dynamical quintessential dark energy, either from CP$_5$ in the massless sector or from CP$_8$ in the massive sector. In the massless sector, the accelerating expansion of the universe is due solely to the standard quintessence paradigm; in the massive sector, it is due to the nontrivial interplay between quintessence and massive gravity.
\section{Conclusions}
\label{sec:conclusions}
We have constructed a higher-dimensional MTMVMG with nonzero scalar potential, in which the graviton mass varies with the real scalar field $\psi$. Our construction can be summarized as follows. First, we write the MVMG action for higher dimensions using the vielbein formulation in the ADM formalism. We also adopt the vielbein potential of Ref.~\cite{hinterbichler2012interacting} and couple it to the mass-like scalar potential $W(\psi)$. Inserting the ansatz metric \eqref{eq:basis} and \eqref{eq:dualbasis} into the MVMG action, we then derive the precursor action \eqref{eq:Spre}. By imposing the set of $D$-constraints \eqref{Dconstraints} on the precursor action \eqref{eq:Spre}, we obtain a theory in which the graviton has $D (D - 3)/2$ degrees of freedom for general $D$ spacetime dimensions, without scalar and vector modes of the St\"uckelberg field. The resulting theory admits Lorentz violation since we have used the ADM vielbein \eqref{eq:basis} and \eqref{eq:dualbasis}. Still, the $O(D - 1)$ symmetry and spatial diffeomorphisms are preserved in the MTMVMG action \eqref{eq:aksimtmg1}. This theory generalizes the four-dimensional MTMG of Refs.~\cite{defelice2016minimal,defelice2016phenomenology} and the four-dimensional MVMG of Ref.~\cite{huang2012mass}.
Then, we derive the Friedmann-Lema\^itre equations \eqref{eq:Friedman1} and \eqref{eq:Friedman2} for spatially flat spacetimes. To proceed, inspired by Refs.~\cite{copeland1998exponential,leon2013cosmological,wu2013dynamical}, we take both the scalar potential and the graviton mass coupling to have the exponential forms \eqref{eq:VWpsi}, such that Eqs.~\eqref{eq:Friedman1} and \eqref{eq:Friedman2} can be written as the set of autonomous equations \eqref{eq:xpsiprime}--\eqref{eq:xwprime} with the constraint \eqref{eq:friedmanconstr}. Performing a dynamical system analysis, we find that there exist five critical points in the massless sector, namely CP$_{1 - 5}$, whereas in the massive sector we have three critical points, namely CP$_{6 - 8}$. We also discuss their stability, their existence, and their cosmological aspects related to the equation-of-state parameter $w_\text{MG}$, the density parameter $\Omega_\text{MG}$, and the deceleration parameter $q$. Among them, there are critical points suitable for explaining either the inflationary epoch or the accelerated universe in the late-time era.
We have also established the local-global existence and uniqueness of solutions of the evolution equations \eqref{eq:xpsiprime}, \eqref{eq:xvprime}, and \eqref{eq:xwprime} with the constraint \eqref{eq:friedmanconstr}, using Picard's iteration and contraction mapping properties, assuming that the functions $f_1(u)$ and $f_2(u)$ are bounded. The discussion is divided into two parts: the $f_1(u) > 0$ case and the $f_1(u) < 0$ case. Note that our results apply to both branches, namely the self-accelerating branch and the normal branch.
Finally, we have discussed some possible cosmological models of the MTMVMG in the self-accelerating branch. Since both the scalar potential and the graviton mass coupling have the exponential forms \eqref{eq:VWpsi}, the theory gives a good description of the inflationary era in the early universe via power-law inflation \cite{lucchin1985power}, in which the scale factor $a(t) \varpropto t^{1/\epsilon}$ with the slow-roll parameter $\epsilon = {| \dot{H} |}/{H^2} < 1$. This era can be described by either the critical point CP$_5$ or CP$_8$; in other words, our theory can describe the inflationary era in both the massless and the massive sectors. We have also shown that the MTMVMG can accommodate the reheating mechanism in this framework, again for both sectors. A perturbative approach still needs to be applied, for example, to study the behavior of primordial gravitational waves in MTMVMG, as in the case of four-dimensional MTMG \cite{fujita2019blue, fujita2020primordial}; the detailed construction and the phenomenological predictions are left for subsequent works. On the other hand, we have at least two interesting possible scenarios for late times. The first is that the dark energy at the present time is due to the graviton mass, which depends on the value $\psi_\infty$ at which the scalar field freezes after the reheating era. The second is that the scalar field $\psi$ plays the role of dynamical quintessential dark energy. Therefore, contrary to the massless sector, where the accelerating expansion is due to the standard quintessence paradigm, in the massive sector it is due to the nontrivial interplay between quintessence and massive gravity.
\section*{Acknowledgments}
The work in this paper is supported by P2MI FMIPA ITB 2021 and Riset ITB 2021. The work of HA is partially funded by the WCR grant from Kemendikbudristek-IPB 2021.
\section{Is math useful?}
\begin{quotation} The mass of mathematical
truth is obvious and imposing; its practical applications, the
bridges and steam-engines and dynamos, obtrude themselves on
the dullest imagination. The public does not need to be convinced
that there is something in mathematics.
All this is in its way very comforting to mathematicians, but it
is hardly possible for a genuine mathematician to be content with
it.
\begin{flushright}A mathematician's Apology \cite{H} \S 2 --- G. H. Hardy
\end{flushright}
\end{quotation}
\begin{quotation} [...] the most ‘useful’ subjects are quite
commonly just those which it is most useless for most of us to
learn. It is useful to have an adequate supply of physiologists and
engineers; but physiology and engineering are not useful studies
for ordinary men[...]
\begin{flushright}A mathematician's Apology \cite{H} \S 20 --- G. H. Hardy
\end{flushright}
\end{quotation}
The English mathematician Hardy dealt very well with the subject of this paper, and I will often cite his famous \textit{Apology}, written over 70 years ago. Since we are dwarves sitting on the shoulders of giants, I hope I'll be able to see a little further and give some new ideas on the subject.
More precisely, I would like to deal with the problem posed by the above two quotes: no one is usually foolish enough to deny the usefulness of mathematics to our society, but usefulness to a society is not at all the same as usefulness to an individual. \textit{What is math useful to me?} will be our next-to-final section.
Before getting there, though, we have a long way to go. We first have to understand what the usefulness of math is and how math is (and can be) used.
\section{How is math used in war time?}
\label{sec:WWII}
\begin{quotation}Ten, twenty, thirty, forty, fifty or more
The bloody Red Baron was running up the score
Eighty men died trying to end that spree
Of the bloody Red Baron of Germany
\begin{flushright}Snoopy vs the Red Baron (1966) --- The Royal Guardsmen\footnote{Usually the books of the series \textit{Imagine Math} are the proceedings of the meeting on mathematics and culture held in Venice. This year, due to the SARS-CoV-2 pandemic, the meeting did not take place. Anyhow, you may think of me beginning my lecture playing with planes while the song \textit{Snoopy vs the Red Baron} is being played. I suggest you listen to this song while beginning to read this chapter, to put yourself in the right mood.}
\end{flushright}
\end{quotation}
\begin{figure}[h]
\includegraphics[width=0.40\textwidth]{Royal_guardsmen_snoopy.jpg}
\caption{The cover of the album Snoopy vs the Red Baron (image from Wikipedia)}
\label{figRedBaron}
\end{figure}
Math has always been considered a strong ally in war time. Archimedes used math (and physics) to design parabolic mirrors in order to set Roman ships on fire. Math has been used to compute the firing lines of cannons: modern ballistics was born thanks to an English mathematician, Robins, who in 1742 wrote \textit{New Principles of Gunnery}, a treatise which was used until World War II. Mathematicians have always been considered precious in war and enrolled for their logical and computing abilities. The English mathematician Littlewood, close friend and fellow mathematician of the already cited Hardy, served in the Royal Garrison Artillery during World War I.
Up to World War I, though, the math used in war time was quite elementary: basic geometry and physics.
During World War II math played a fundamental role and many different areas of mathematics turned out to be useful for winning the war. Everyone knows the story of how Alan Turing cracked Enigma, the Nazi cryptography tool, and heavily contributed to lead the Allies to a victory. Both cryptography and decryptography are based on deep math.
Moreover, there is some statistics in figuring out the number of tanks produced by the Nazis: the problem of estimating the maximum of a discrete uniform distribution from sampling without replacement. In simple terms, suppose there exists an unknown number of items which are sequentially numbered from 1 to $N$. A random sample of these items is taken and their sequence numbers observed; the problem is to estimate $N$ from these observed numbers. This is called the German tank problem, since it was of utmost importance to the Allies: they wanted to estimate the number of German tanks just by knowing the serial numbers of the few tanks captured.
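A minimal sketch of the classical frequentist answer: if $k$ serial numbers are observed and the largest of them is $m$, the minimum-variance unbiased estimate of the total production is $\hat{N} = m + m/k - 1$.
\begin{verbatim}
# German tank problem: estimate N from a sample of serial numbers.
import random

N_true = 276                     # hypothetical true number of tanks
sample = random.sample(range(1, N_true + 1), 5)   # captured serials
m, k = max(sample), len(sample)
N_hat = m + m / k - 1            # minimum-variance unbiased estimator
print(f"observed serials {sorted(sample)} -> estimated N ~ {N_hat:.0f}")
\end{verbatim}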
\begin{figure}[h]
\includegraphics[width=0.40\textwidth]{Survivorship-bias.png}
\caption{Areas of damage on airplanes that returned to base during WWII (image from Wikipedia).\\
Where do the airplanes need to be reinforced?}
\label{figBias}
\end{figure}
But probably the nicest use of math in WWII is the one Wald made to evaluate where planes needed additional armor against enemy fire. Data on damaged airplanes was collected by the US military, leading to the picture in figure \ref{figBias}. The US military concluded that the areas in need of thicker armor were the ones with the most hits. Wald concluded the opposite: the areas in need of additional armor were precisely the ones with the fewest red dots, since the sample was made up only of planes that survived the enemy's fire. The planes which were shot elsewhere did not make their way back home: they simply crashed, as if they had been shot down by the bloody Red Baron. So the parts to reinforce were exactly those which, when hit, did not allow the plane to come back and its damage to enter the stats.
Math is no doubt useful in war time, both for computational purposes (and in this I put also physics and computer science) and for the mathematical-logical thinking.
\begin{question}{What is math useful for?} I have no doubt that the reader, when reading the title of this chapter, immediately thought that math \textit{is useful} and did not think about math in war time, which ---depending on how you put it--- may be described as useful or bloody dangerous. So, why did I choose such a subject to begin my chapter? I'll let Hardy answer:
\begin{quotation} I once said that ‘a science is said to be
useful if its development tends to accentuate the existing inequalities in the distribution of
wealth, or more directly promotes the destruction of human life’
\begin{flushright}A mathematician's Apology \cite{H} \S 21 --- G. H. Hardy
\end{flushright}
\end{quotation}
This extremely pessimistic phrase was spoken by Hardy in 1915, when times were dark and there was little room for hope for the future of humankind. Nevertheless, way too often the \textit{usefulness} of something has indeed had the effect of accentuating inequalities or favouring wars, as Hardy stated. Hence, I feel that we should start discussing the usefulness of math from its darkest sides, not hiding them under the carpet, but being well conscious of their existence.
\end{question}
It is also worth noticing that, usually, when a war-time example of the usefulness of math is given, it is a situation in which the good Allies used math to win against the Nazis.
It is as if, when talking about the applications of math to the real world, we try to hide the darker sides of math's applications and ---if ever we talk about math and war--- cite only occasions in which math was used to help the Allies win WWII against the Nazis, i.e.\ show war-time math as the hero of a classic war movie.
Of course, reality is much more varied than a movie, and in war math helped kill people as much as save them.
We must deal with this whenever we try to answer the question whether math is useful or not: math is a tool, and like most tools it can be used in a wise or a wicked way.
\section{How can pure math be useful?}
\label{sec:2}
There are uncountable\footnote{Obviously this is a hyperbole, since everything in the real world is not only countable, but finite!} applications of math in everyday life. While most of the applications known to the wide public rely on basic math, or on math born explicitly for applications, I would like to give some examples of pure math which later on turned into applied math. As the online comic Abstruse Goose puts it, \textit{all math is eventually applied math} (see figure \ref{figAG}).
\begin{figure}[h]
\center
\includegraphics[width=0.75\textwidth]{AG504.PNG}
\caption{All math is applied math... eventually \cite{AG}}
\label{figAG}
\end{figure}
If you feel a geeky online comic is not a good enough reference\footnote{And you'd be wrong. Comics are totally part of culture and they have also had a good place in this series of books, see e.g.\ \cite{A,S}}, I'll go with Galileo:
\begin{quotation}
[Nature's book] is written in mathematical language, and its characters are triangles, circles and other geometrical figures.\footnote{\textit{[Il libro della natura] è scritto in lingua matematica, e i caratteri son triangoli, cerchi, ed altre figure geometriche}, in the original}
\begin{flushright}\textit{Il Saggiatore} --- Galileo Galilei\end{flushright}
\end{quotation}
Our limits in applying mathematics to describe the world are just those of our knowledge (deep and true knowledge) of it\footnote{Is it "the world" or "math"? I'll leave the answer to the reader}. We may think of our knowledge as a box of tools. As soon as we have a tool in it, we may find some uses for the tool. When we do not have the tool (or we even ignore its existence), we cannot find uses for it. And mathematical tools, being completely general and abstract, have a wide range of possible applications. What is really needed is to create the mathematical tools (doing mathematical research, both pure and applied) and hand them to the people who may need them.
Thus, in this section I go with some example of pure math applications to the real world.
\subsection{Number Theory and Cryptography}
\label{subsec:2}
The Abstruse Goose comic (figure \ref{figAG}) gets it right: no matter how pure and far from applications a part of mathematics may be, once it is in our tool box it is only a matter of time before it finds some applications\footnote{Again a hyperbole: some theorems are just too weird to find an application... or are they just too weird for now? Maybe just because we do not understand that result well enough?}.
\subsubsection{Number Theory} So it is just appropriate to begin with an example about number theory, the field of research of Hardy, who was absolutely proud of doing research in a pure area of math, with no applications whatsoever.
Hardy in his \textit{A mathematician's apology} \cite{H} makes quite a point of personal pride of number theory being a completely pure (and useless) branch of mathematics. And he was quite right! Number theory deals with the distribution and the properties of prime numbers, and so it is a subject of great charm, ancient (the proof that there are infinitely many prime numbers and Eratosthenes' sieve date back to Ancient Greece) and full of elementary problems (e.g.\ Goldbach's conjecture), which are easy to state and extremely difficult to solve. This characteristic has led to a heap of amateurs trying to solve very difficult conjectures in this field across the centuries\footnote{And this often frustrates professional number theorists, who continue to receive "proofs" from amateurs, see e.g.\ the nice \textit{Dialogue on Prime Numbers} written by Zaccagnini \cite{Z}}. Few succeeded, most didn't, and a lot of apparently simple conjectures are still unsolved.
So Number Theory always had a great appeal, both to professional mathematicians, amateurs and the wide audience. But no one ever questioned its being a totally pure and abstract area of mathematics, whose interest belonged all to the world of pure ideas and not to our material world.
At least, that was so until the computer age began and some old number theory theorems by Fermat became useful for cryptography.
\subsubsection{Cryptography} Cryptography is an ancient subject as well. Sending secret messages has always been of crucial importance in war time (as we already stated in section \ref{sec:WWII}), and one of the first recorded uses of cryptography dates back to Julius Caesar, who sent messages replacing each letter with the one 3 places after it in alphabetical order (see figure \ref{figCC}).
\begin{figure}[h]
\center
\includegraphics[width=0.50\textwidth]{Caesar.png}
\caption{Caesar's cryptographic method: just replace each letter with the one three places after it in alphabetical order (image from Wikipedia)}
\label{figCC}
\end{figure}
To read the original message, one just reverses the arrows and replaces each letter with the one 3 places before it in alphabetical order.
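For the computer-minded reader, Caesar's method fits in a few lines of Python (a toy version for the 26-letter uppercase alphabet):
\begin{verbatim}
# Caesar's cipher: shift each letter three places forward to encrypt,
# three places back to decrypt.
def caesar(text, shift=3):
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha()
        else c
        for c in text.upper())

msg = "ATTACK AT DAWN"
enc = caesar(msg, 3)
print(enc)               # DWWDFN DW GDZQ
print(caesar(enc, -3))   # ATTACK AT DAWN
\end{verbatim}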
This method of cryptography has several problems, of course.
First of all, if one knows how to encrypt a message, one also knows how to decrypt it. Secondly, the possible shifts are just one less than the number of characters in your alphabet (not counting the 0-shift, which does not encrypt): not many to check quickly by hand. Moreover, even if not a simple shift but an arbitrary permutation is used to encrypt, if the message is long enough, a simple statistical analysis of the most frequently occurring characters may yield an easy decryption (as is done in E. A. Poe's \textit{The Gold-Bug}, see figure \ref{figGB}). Finally, both the receiver and the sender must know the encryption and decryption keys in order to have an encrypted communication between them. And how do they exchange the keys?
\begin{figure}[h]
\center
\includegraphics[width=0.60\textwidth]{GoldBug.jpg}
\caption{The cryptogram in Poe's novel \textit{The Gold-Bug} \cite{GB}.}
\label{figGB}
\end{figure}
With the computer era, decrypting messages has become easier and easier: the faster the computers, the better the encryption methods had to be.
The breaking of the Nazi encryption machine \textit{Enigma} by a huge group led by Alan Turing was a key turning point of WWII.
\subsubsection{Number theory and cryptography}
Number theory was used to solve the last and biggest problem in cryptography. Namely, number theory allows for a method in which the encryption key is public but the decryption key is private, thus allowing anyone to send messages to the receiver (e.g.\ your password to a website or the PIN of your credit card to your bank) without having to worry about a third party decrypting the message.
This has been a major breakthrough in cryptography, and its applications are astonishing.
From the theoretical point of view, the RSA method is really simple. One finds two distinct prime numbers $p$ and $q$ and computes $n=pq$ and $m=(p-1)(q-1)$. Then a number $a$ such that $(a,m)=1$ is chosen, and the number $b$ such that $ab\equiv 1 \pmod{m}$ is computed.
The couple $(n,a)$ is the public key and is known to everyone. The number $b$ is the private key and is kept secret. The message is translated into a number $x<n$, and the sender sends the encrypted message $y<n$, where $y\equiv x^a \pmod{n}$. The receiver computes
$$y^b \equiv (x^a)^b = x^{ab} \equiv x \pmod{n}$$
thanks to Euler's theorem, thus recovering the original message.
The operation of raising a number to a power modulo $n$ is not very time-consuming and can easily be done by a computer. Recovering from $n$ its prime factors $p, q$ is a completely different task in terms of computation time. Of course, the greater the computational power of computers, the bigger the two primes $p$ and $q$ need to be. Nowadays RSA moduli are at least 2048 bits long (the oft-quoted 128- and 256-bit key lengths refer to symmetric ciphers such as AES).
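Here is a toy transcription of the scheme described above in Python, with absurdly small primes chosen only for illustration:
\begin{verbatim}
# Toy RSA (educational only: real moduli are thousands of bits long).
p, q = 61, 53
n, m = p * q, (p - 1) * (q - 1)   # n = 3233, m = 3120
a = 17                            # public exponent, gcd(a, m) = 1
b = pow(a, -1, m)                 # private exponent: a*b = 1 (mod m),
                                  # needs Python >= 3.8

x = 1234                          # the message, encoded as a number x < n
y = pow(x, a, n)                  # encrypt with the public key (n, a)
assert pow(y, b, n) == x          # decrypt with the private key b
print(f"public key (n, a) = ({n}, {a}), private key b = {b}")
\end{verbatim}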
\subsection{The inverse Radon transform and computed axial tomography}
Looking inside a body may be a difficult task. Our body is not transparent, and cutting a person open in order to see what's inside may not always be a good idea. Radiography, using X-rays, helped in seeing bones, since the rest of the body is transparent to X-rays, but it is not a good means to inspect the soft parts of our bodies.
Medicine was in search of a tool we were apparently lacking: a way to see inside our bodies without tearing them apart. The tool was only apparently lacking. Indeed, math has invented ways to turn local information into global information and vice versa: transforms and inverse transforms. There are several of them, and they answer different needs, but essentially what a transform does is take as input a function or a series of numbers and give back another function or series of numbers, the inverse transform going in the opposite direction and undoing the transform. Usually these tools work by computing integrals.
Sending rays through a body and seeing how much they are absorbed was not a new idea (indeed, it was used with X-rays in radiography), but it was only in the early Seventies that a physicist (Allan Cormack) and an engineer (Godfrey Hounsfield) had the idea of using the Radon transform and its inverse to compute, from the information on the rays absorbed in the various directions, a 3D model of the inside of a body. This application of math eventually led to the Nobel Prize in Medicine (in 1979) for the two and gave a huge diagnostic tool to hospitals all around the world.
When the Radon transform was developed in 1917, it was a completely pure and "useless" piece of high mathematics. Of course, computers were far from being invented at that time, and practical applications of the inverse Radon transform were unforeseeable. But, as we said, all mathematics is eventually applied mathematics. And you never know when something you know may actually turn out useful. In any case, it is better to know more than less.
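To give a flavour of the forward step (a schematic sketch of the idea, not Cormack and Hounsfield's actual procedure): the scanner records, for each angle, the total absorption along parallel rays, producing a so-called sinogram, which tomography then inverts, e.g.\ by filtered back-projection.
\begin{verbatim}
# Forward Radon transform of a toy "body": for each angle, rotate the image
# and sum along one axis (the ray sums a scanner would measure).
import numpy as np
from scipy.ndimage import rotate

body = np.zeros((64, 64))         # a square phantom ...
body[16:48, 16:48] = 1.0
body[28:36, 28:36] = 3.0          # ... with a dense inclusion

angles = np.arange(0, 180, 5)
sinogram = np.stack([
    rotate(body, ang, reshape=False, order=1).sum(axis=0)
    for ang in angles])
print("sinogram shape (angles x detector bins):", sinogram.shape)
\end{verbatim}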
\section{Why should politicians know math?}
All of the above is a bunch of examples showing what math is useful for to us as a species, as a community. But of course, we may ask ourselves whether math knowledge should not simply be limited to mathematicians, engineers and other people who may use it in their work for the benefit of the community at large. After all, we do not need people to know exactly how a bridge is constructed, how a TV works, or how to repair a broken engine. For that, we use people who know how to do it. Why should math be different?
I will address this question in the following sections. First, let us consider why politicians should know math.
Our modern world is a world filled with data and numbers. Decisions must be made based on those numbers and those data. But interpreting data is far from obvious, as the survivorship bias example should have already clarified. An inability to correctly interpret data may turn into a disaster. Indeed, statistics is quite difficult.
\subsection{The education system in the US, the Covid-19 death toll and Simpson's paradox}
For many years, Wisconsin's students performed consistently better than Texas' students in standardized tests (see figure \ref{overall}).
\begin{figure}[h]
\center
\includegraphics[width=0.60\textwidth]{WT-overall.PNG}
\caption{Data of Texas and Wisconsin overall results in standardized tests (source: minutephysics \cite{MP}).}
\label{overall}
\end{figure}
One could conclude that Wisconsin's education system is way better than Texas', and a politician willing to improve Texas' education system might be tempted to copy Wisconsin's. But is that a good idea?
Knowing the mean performance of a huge number of students over a long time may sound like pretty solid evidence for this claim. But statistics is full of surprises.
Namely, if we divide the data of the students of the two States among different ethnic groups (and we know that ethnic group correlates with wealth, which correlates with education level), a surprise pops out: white Texas students outperform white Wisconsin students, black Texas students outperform black Wisconsin students and hispanic Texas students outperform hispanic Wisconsin students (see figure \ref{race}).
\begin{figure}[h]
\center
\includegraphics[width=0.60\textwidth]{WT-race.PNG}
\caption{Data of Texas and Wisconsin results in standardized tests, divided by race (source: minutephysics \cite{MP}).}
\label{race}
\end{figure}
So it actually looks like, when broken down by race, the data suggest that Texas' education system is better than Wisconsin's. How can data tell two different things? First of all, one of the problems is that the mean of some data is not the same as the mean of the means: it depends on how many data points there are in each subgroup into which the data have been divided. Wisconsin's population is much whiter than Texas': thus Wisconsin's overall mean is tilted much more towards the white mean (the ethnic group performing better in the test) than Texas' mean is.
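The mechanism fits in a few lines of Python; the numbers below are made up for illustration and are not the real Texas/Wisconsin data:
\begin{verbatim}
# Simpson's paradox: state A beats state B in every subgroup, yet B has the
# higher overall mean, because B's population is weighted towards the
# higher-scoring subgroup.  All numbers are hypothetical.
scores_A  = {"group1": 82, "group2": 72}
scores_B  = {"group1": 80, "group2": 70}
weights_A = {"group1": 0.3, "group2": 0.7}   # population fractions
weights_B = {"group1": 0.8, "group2": 0.2}

mean_A = sum(scores_A[g] * weights_A[g] for g in scores_A)   # 75.0
mean_B = sum(scores_B[g] * weights_B[g] for g in scores_B)   # 78.0
print(f"overall means: A = {mean_A}, B = {mean_B}")
\end{verbatim}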
A similar situation occurred when comparing the death tolls in Italy and China at the beginning of the Covid-19 pandemic. Indeed, the overall fatality rate of the disease in Italy was bigger than the overall fatality rate in China, but ---when people infected with SARS-CoV-2 were split into age groups--- in every single age group the fatality rate was greater in China than in Italy (see figure \ref{CovidFatality}).
\begin{figure}[h]
\center
\includegraphics[width=0.60\textwidth]{CovidCases.PNG}
\caption{Data of Italy and China fatality rates of Covid-19 cases (image from \cite{B}).}
\label{CovidFatality}
\end{figure}
In this case the problem is that in Italy there were many more old people sick with Covid-19 than there were in China. And Covid-19 has a higher fatality rate in older patients. This explains the apparent discrepancy in the data.
This phenomenon, where there is a positive correlation overall while ---when the data are divided into groups--- there is a negative correlation, is called Simpson's paradox. Simpson's paradox is one of the things a politician should be aware of before taking action based on data.
But actually the problem is deeper than that. Indeed, is Wisconsin's school system better or worse than Texas'? Is this second way of seeing the data the correct one? If we say that ethnicity correlates with wealth (or with parents' education level) and this last thing correlates with test results, why are we using ethnic group and not wealth (or parents' education level) directly in order to interpret our data? A third look at the data may be needed.
The point is: if you believe data are objective and need not be interpreted and analyzed with a close look, you are likely to be fooled by data. If you know how statistics (and math) works, you are more likely not to get fooled and to take a second (or even a third) look at the data before taking action (and possibly going in the wrong direction).
This is why politicians should have a good base in math and know how to analyze data. Before making decisions, the least you must have is correct data and infos, and possibly understand correctly what they mean.
\section{How do politicians use math?}
You might say that politicians do not really have to know and understand math in order not to fall into such errors, but just to have good advisors who do know math. And indeed they have. Plenty of them. And here we get to the problem.
First, as we have seen, data and numbers are far from being \textit{objective}: data must be interpreted and investigated in order to understand what they say, but they are also easily bent to furnish support to almost any political view. So, sadly, way too often the scientific advisors of politicians cherry-pick data or present them in such a way as to give a scientific-looking aspect to the political ideas they want to communicate. This when data are not outright invented. But too much cheating is not even needed: the same data, presented with different words, from a different point of view, can lead to very different conclusions. And we must bear in mind that politicians often have a very skilled advising group whose only purpose is to find the best way to present data.
Another interesting new tool of math (or computer science) often used by politicians is given by big data: \textit{sentiment analysis} and \textit{trending topics} are fundamental in political communication. In our modern world we have an incredible amount of data about almost everyone: use of credit cards, posts or comments on social networks, our GPS position in real time, shopping habits (both online and in physical stores), internet usage... The math of big data can extract patterns out of this huge amount of data. And this is how your phone can suggest the fastest path back home, or where to buy the book you really want to read or the item you really needed. This can be useful, but of course all this information can be (and is!) used to make enormous profits.
Politicians are informed in real time about the hottest topics of discussion (\textit{trending topics}) and about what most people think of them (\textit{sentiment analysis}), and so they are ready to jump on the bandwagon of the hottest topic with the coolest opinion. It almost does not matter whether the opinions expressed are coherent with one another: what really matters is to say something on the trending topic of the day, with one's opinion being shared and viewed by the highest possible number of people. Come election time, people will recognise your name, and you'll have a bigger chance of being voted for, hence more votes. This is the core of marketing, applied to politics. Not so great if you think politics should be about solving the community's problems, but that's how it is. And there is a lot of math in that.
\subsection{Paradoxes of elections}
So, we have established that a politician's biggest task is getting elected (even politicians who are really interested in working for the benefit of the community need to be elected in order to do so). Alas, the outcome of an election is far from being determined only by what voters think, and the electoral system is crucial for the result. This is exactly why politicians spend so much time discussing the electoral system. This subsection is mainly based on my paper \cite{SS} on mathematical paradoxes of elections. I refer the reader to that paper for greater detail.
Unluckily, no electoral system is perfect. In 1951, the economist Kenneth Arrow \cite{KA} considered a very general definition of an electoral system as a function (which he calls a \textit{social choice function}) from the individual preferences of the electors among the alternatives to a single preference of the social group, where a \textit{preference} is a total ordering of the alternatives. Arrow introduced three desirable properties of the social choice function:
\begin{itemize}
\item[\textbf{A1}] \ (sovereignty of the electors) The function is surjective, i.e.\ if the electors agree on the desired outcome, they can vote (choose their individual preferences) in order to obtain that outcome;
\item[\textbf{A2}] \ (positive correlation) If in a certain situation the social choice function says $x$ is better than $y$, then in any other situation in which the only change in any elector's preference is that their ranking of $x$ gets higher, $x$ is still better than $y$;
\item[\textbf{A3}] \ (invariance under irrelevant alternatives) The relative position of $x$ and $y$ according to the social choice function depends only on the relative positions of $x$ and $y$ for each elector, and not on the opinions on a third alternative $z$.
\end{itemize}
Arrow then proved that if there are at least three alternatives, the only social choice functions satisfying the above three axioms are dictatorships: the social choice function is simply a projection onto one of the factors, or, put differently, the ``will'' of the people is the ``will'' of a single individual, the dictator (see figure \ref{Paperone}).
\begin{figure}[h]
\center
\includegraphics[width=0.6\textwidth]{Paperone.PNG}
\caption{The only election satisfying Arrow's axioms is a dictatorship:\\
\textit{US: "If you vote YES, my proposals will be accepted. If you vote NO..."\\
GG: "they will be rejected?"
US: "No, they will be accepted, and your vote will be kept in an appropriate dossier"} \cite{NG}.}
\label{Paperone}
\end{figure}
This theorem actually means that, if at least three alternatives are allowed, our electoral system must fail at least one of the above properties if we do not want to have a dictator. Electoral systems usually do not satisfy axiom \textbf{A3}, so the outcome of an election can be modified by the presence or absence of otherwise totally irrelevant political forces.
Politicians (or their advisors) know this fact well, and this explains all the fighting over whether some small, seemingly meaningless party should or should not be allowed to participate in an election.
Arrow's theorem is based on the Condorcet paradox, i.e.\ situations where you have three alternatives A--B--C and A wins vs B, B wins vs C and C wins vs A, in a rock--paper--scissors fashion.
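For instance, if three electors rank the alternatives as $A>B>C$, $B>C>A$ and $C>A>B$ respectively, then two electors out of three prefer $A$ to $B$, two prefer $B$ to $C$, and yet two prefer $C$ to $A$.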
Arrow's theorem may suggest that a system with only two alternatives to choose from is the best one. The most effective electoral system to force politicians to gather into only two major parties, thus having a system with only two alternatives and a way out of Arrow's paradox, is a one-district one-seat system, where the party that gets the most votes in the district takes the seat, and all votes for the other parties are wasted.
Unluckily the one-district one-seat system has one big weakness: the outcome of the election strongly depends on the shape of the districts, and a party which has the power to decide the shape of the districts may win the election in a 1 vs 1 race with as little as slightly more than $25\%$ of the votes. This is due to the fact that the party can lose $0$ to $100\%$ in slightly less than half the districts and win by barely one vote over $50\%$ in the remaining (slightly more than half) districts, thus winning the election. Of course it is impossible to have complete information about votes, but big data analysis gives parties quite a good level of knowledge about voting intentions, thus allowing an easy win even if the electorate strongly favours the other party.
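To make the arithmetic explicit: with $2n+1$ equal districts of $V$ voters each, a party that wins $n+1$ districts with $\frac{V}{2}+1$ votes apiece and loses the remaining $n$ districts with no votes at all gets the majority of the seats with only
$$
\frac{(n+1)\left(\frac{V}{2}+1\right)}{(2n+1)\,V} \ \longrightarrow\ \frac14 \qquad (n,V\to\infty)
$$
of the total votes.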
This art of carefully shaping the districts in order to win is called \textit{gerrymandering}, in honour of the salamander-shaped district approved by Governor Elbridge Gerry to win an election (see figure \ref{Gerry}). It is still widely used nowadays in the US, by both parties, and districts of really weird shapes are not at all uncommon. Mathematical research on the subject of gerrymandering is very active, aiming to limit the practice both by finding objective subdivisions into districts and by defining measures that detect whether some gerrymandering has been going on or whether the subdivision is fair. The two main approaches to the problem are an analytical approach using isoperimetric-like techniques and a discrete geometry approach using weighted graphs. I proposed an approach of this second kind \cite{SS19}.
\begin{figure}[h]
\center
\includegraphics[width=0.60\textwidth]{Gerry.PNG}
\caption{The satirical cartoon, with the salamander-shaped district, published in the Boston Centinel to mock Governor Gerry in 1812.\\
Public domain image, from Wikipedia.}
\label{Gerry}
\end{figure}
All this is why it is usually forbidden to change the rules of an election (or the shape of the districts) too close to the upcoming election. But of course, regulations do not completely stop politicians from using math for their own benefit.
\section{Is math useful to me?}
Let us now address the main point of this paper: how is math useful to \textit{me}? Why should \textit{I} learn math? Couldn't just a few people know math, for the benefit of the whole society?
I once heard that when kids learn to read by themselves, they no longer have to depend on others to read and can find out what is written around them without having to trust others to read it correctly to them. Math is a powerful tool to read the world, and knowing math gives you the power to independently analyze the complexity of the world, without blindly trusting others to do that for you.
Be aware! I am not saying you should not trust others or the scientific community, not at all! What I am saying is that being able to do the math by yourself or ---even better--- to mathematize a problem and look at it through the magnifying glass of math is always a good idea, not only to find the correct answer yourself, but mainly to find out who is trying to trick you and whom you should trust.
So a first answer is that the more you know, the less likely you are to be fooled by people who want to gain something by fooling you, by politicians or lobbies who want to push their own ideas, or simply by arguments which may look plausible until inspected closely. Knowing math, or even better being able to reason with a math-oriented mind, is fundamental for every single individual.
Of course the problem is that it is not enough to be able to reason correctly and foresee what is coming if the majority of the population does not share this ability and is easily fooled into irrational behaviours. This is always true, but even more so in a period where acting fast and correctly is the key to avoiding a disaster.
In October 2020, at an EU meeting, the German Chancellor Angela Merkel said ``\textit{Once we got to this point, closures are the only possible choice: we should have acted earlier, but people would hardly have understood. They need to see hospitals' beds full...}''\footnote{My translation.}. Angela Merkel has a Ph.D. in physical chemistry and knew pretty well what was going on. Some months before, she had even given a very nice explanation of the meaning of the index $R_t$ in an epidemic. But that was not enough. She was knowledgeable, she was powerful, but still she could not act without her citizens being fully aware of what was going on. She was in the sad situation of being able to foresee hospitals' beds filling up and death tolls rising to very high levels, without being able to act in time.
The consequences of low mathematical literacy among the vast majority of the population are terrible: more deaths, more pressure on the health system, more economic damage... In order to avoid this in the future (the damage of the present situation cannot be undone, alas), widespread math literacy is needed. Math allows you to see that the exponential growth of an epidemic means the hospitals' beds will be full, and to act in time so as not to have them full.
Without people clearly understanding that, the action needed to solve the problem is also the action that will make people shout ``\textit{Nothing happened! There was nothing to be worried about! We should not have done this}''.
So, not only does math give you the instruments to understand and analyze the world and not get fooled, but widespread knowledge of mathematics turns into a huge benefit for the whole society.
\section{Is math usefulness relevant to learners?}
I hope we have made clear that math is useful to everyone and to the population at large. That said, the fact that something is useful to someone does not straightforwardly imply that they will be interested in learning it, all the more so when you are dealing with children or young adults. Showing the utility of something may make some students want to learn it, but most of them will simply be bored as hell.
I will just quote Paul Lockhart, who makes his Salviati go right to the point\footnote{Bolding mine.}:
\begin{quotation} It may be true that you have to be able to read in order to fill out
forms at the DMV, but that’s not why we teach children to read. We
teach them to read for the higher purpose of allowing them access to
beautiful and meaningful ideas. Not only would it be cruel to teach
reading in such a way— to force third graders to fill out purchase
orders and tax forms— it wouldn’t work! We \textbf{learn} things \textbf{because
they interest us now, not because they might be useful later}.
\begin{flushright}A Mathematician's Lament \cite{L} --- P. Lockhart
\end{flushright}
\end{quotation}
We learn because we are interested, because we are amused by something, not because we have to, or because it will do us some good, or because it will make us better people!
Luckily math is filled with interesting ideas and theories! We must never forget this, and when trying to appeal to a young learner, \textit{usefulness} should not be our guide through mathematics. Math was not born \textit{because} it was useful, but because it is amusing. Math is filled with interesting problems, which can be given to kids and adults of different ages and backgrounds, in order to hook them into mathematics.
For the sake of completeness, I will present just one example, but many more can be found in the highly recommended reading of Lockhart's pamphlet \cite{L}.\vspace{0.2cm}
\textbf{Second degree polynomials.} Usually, when studying second degree equations, students are presented with a huge nomenclature (pure equation, spurious equation...) and a vast catalogue of cases for solving particular polynomial equations; then they are given a lot of exercises to practise the method they were shown. After that, they are given the general formula
$$ax^2+bx+c=0 \ \ \Leftrightarrow\ \ x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
(possibly also with a second variant in case $b$ happens to be even) and then a new round of dumb exercises, each one identical to the previous. Totally boring.
I mean, I know that over the history of math all these different kinds of equations were solved (and given funny names), but math is not about zoology or funny names: math is about the struggle to find a path that leads to the solution of problems. Not necessarily the smartest and shortest path. Not immediately, anyhow.
A possible different teaching sequence would be to start from problems: give the students some problems which, once mathematized, turn into solving a second degree equation. Some of the problems should be easily solvable (i.e.\ lead to an equation of the form $x^2=d$ or $x^2+bx=0$), some should lead to a complete equation with no vanishing terms. The students will find by themselves how to solve the simpler ones, and maybe will even give the more difficult ones a try. Guided by the teacher, working in groups, they may rediscover the formula by themselves (maybe by completing the square) and teach it to the other groups. A discovery they made themselves, while trying to solve a problem and genuinely experiencing the difficulty of the problem and the sense of joy that comes with the solution, will leave the students with something more than just a formula to apply blindly. Most of all, it will leave them with the sense of doing math.
And after the group work, a good recap by the teacher would be advisable, so as to put all the ideas (which came from the students) in order. Done this way, probably a lot more of them will remember the formula, but most importantly they will know how to find it again if needed.
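For reference, the completing-the-square computation that the students may rediscover fits in one line: for $a\neq 0$,
$$
ax^2+bx+c=0 \ \ \Leftrightarrow\ \ \left(x+\frac{b}{2a}\right)^2=\frac{b^2-4ac}{4a^2} \ \ \Leftrightarrow\ \ x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}.
$$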
\begin{question}{So... is math usefulness relevant to learners?} In my opinion it is of very little relevance to their decision to willingly learn math, and both teachers and popularizers of mathematics should not focus too much on the applications and usefulness of mathematics, since applications and usefulness are often not immediate, but rather on the joy and the challenges of doing mathematics.
\end{question}
\section{Is math popularization useful? --- A math popularizer's apology}
To end this chapter, I would like to offer an apology for the activity of math popularization. Hardy had quite harsh words for this activity, and I feel most of my colleagues agree with him: any amount of time spent popularizing math is time stolen from doing actual math, and probably, if you do that, it is just because you are not good enough to do actual math.
\begin{quotation} If then I find myself writing, not mathematics, but ‘about’
mathematics, it is a confession of weakness, for which I may
rightly be scorned or pitied by younger and more vigorous
mathematicians. I write about mathematics because, like any
other mathematician who has passed sixty, I have no longer the
freshness of mind, the energy, or the patience to carry on
effectively with my proper job.
\begin{flushright}A Mathematician's Apology \cite{H} \S 1 --- G. H. Hardy
\end{flushright}
\end{quotation}
I am sorry for my colleagues, but if this vision of the popularization or communication of mathematics was acceptable in the forties of the last century, it is no longer so nowadays. For a better and longer essay on this subject, I refer the reader to the article by Silvia Benvenuti and Roberto Natalini \cite{BN}.
There are several top mathematicians deeply involved in the communication of math (just think of the Fields medallists Cedric Villani and Alessio Figalli, to name two). Moreover, in the 2011 \textit{European Charter for Researchers} \cite{ECR} it is clearly written that scientists should be directly involved in communicating their own research to the wide public, in order to favour the creation of a scientific mindset.
\begin{quotation}Researchers should ensure that their research activities are made known to society at large in such a way that they can be understood by non-specialists, thereby improving the public’s understanding of science. Direct engagement with the public will help researchers to better understand public interest in priorities for science and technology and also the public’s concerns.
\begin{flushright}European Charter for Researchers \cite{ECR}
\end{flushright}
\end{quotation}
The reason for that is precisely what we have tried to outline and suggest in this chapter: a scientifically inclined mind is needed for the well-being of society at large, and ---given how modern democracies work--- it is a need of the whole society, not just of the few enlightened who are part of the governing class.
The idea many researchers have of themselves and of research is that they are needed by society (which is true) and that they have no need to explain to society why (which is false). The idea that what matters in research is getting it done, not presenting it to society at large, is deeply fixed in researchers' minds, but it is wrong, in the sense that society must be made aware of the fact that investing (money, time and effort) in research, both applied and pure, is what we need to do. And this is even more true when we talk of an inherently abstract subject such as mathematics, whose practical implications are neither immediate nor evident.
I know perfectly well the feeling of frustration when you tell someone you are a mathematician, or that you teach math, and the response you get is the one shown in the comic by SMBC (see figure \ref{SMBC}).
\begin{figure}[h]
\center
\includegraphics[width=0.55\textwidth]{WhatItIsLike.PNG}
\caption{A comic by Saturday Morning Breakfast Cereal \cite{SMBC} getting right to the point of how math is perceived.}
\label{SMBC}
\end{figure}
You usually get on the defensive and have trouble communicating the beauty of mathematics, or even ---if you are tired--- do not get into the subject at all. Or sometimes, people just say that they understand math is useful, but \textit{not for them} / \textit{they do not get it} / \textit{they are not a math person} (choose one or more).
Communicating with society is tough. And society at large is not (only) the people who willingly go to a science (or math) popularization event: society at large, like it or not, is mainly composed of people who have a problematic relationship with mathematics, and they will not come to an event where you talk math to them.
\begin{quotation} Part of the problem is that nobody has the faintest idea what it is that mathematicians do.
\begin{flushright}A Mathematician's Lament \cite{L} --- P. Lockhart
\end{flushright}
\end{quotation}
Doing mathematics is an activity quite similar to that of an artist or a writer: there is a lot of technique involved, but also a lot of artistic out-of-the-box thinking and imagination. People are scared by technique (the only thing about math they know) and are not willing to know more about math.
It is up to you as a mathematician to let people know what mathematicians do. They will not come to you. You have to go to them, using their passions to talk about math. It has been a while since I started doing that with comics, using Disney comics to talk about math (see \cite{S}, but also my YouTube playlists on the subject \cite{OMAM,UMPAD}, and figure \ref{OMAM1}).
\begin{figure}[h]
\center
\includegraphics[width=0.70\textwidth]{OMAM.PNG}
\caption{The first video of the YouTube project \textit{Of math and mice - the mathematics of Disney comics} \cite{OMAM}, devoted to popularizing math using Disney comics, translation in English of the corresponding Italian project \textit{Un matematico prestato alla Disney} \cite{UMPAD}, both available on my YouTube channel \textit{Alberto Saracco}.}
\label{OMAM1}
\end{figure}
I noticed that when I put Disney comics or characters in the title of one of my talks, the audience is three times as big as usual. And often many of the people in the audience were not at all interested in math at the beginning of the talk, but left it with better feelings about the subject.
We need this kind of thing in order to bring math to people who would not otherwise approach it. And, as I said, it is of fundamental importance. If we want society to grow and fully use math, it is up to us. As Francesca Arici, member of the European \textit{Raising Public Awareness} Committee, said
\begin{quotation} Don't be afraid to use metaphors and don't be afraid to lie a little bit: the objective is to communicate math and not to make people see we can do math and prove theorems in a rigorous way.
\begin{flushright} F. Arici \cite{Ar}
\end{flushright}
\end{quotation}
\section{Introduction}
As a model problem we consider
\begin{gather}
\label{BBM}
\partial_t u(t,x) +\partial_x m_L(\sqrt{\varepsilon}\partial_x) u(t,x)+ \varepsilon\partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2(t,x) =0 \quad (t,x) \in \mathbb{R}\times \mathbb{T}
\end{gather}
with smooth symbols $m_L, m_Q$ satisfying for $\xi \in \mathbb{R}$
\begin{equation}\label{ass1}
\begin{aligned}
& m_L(i\xi) \in \mathbb{R}, \quad m_L(i\xi) = m_L(-i\xi),\quad m_Q(i\xi) \in \mathbb{R}, \quad m_Q(i\xi) = m_Q(-i\xi),\\
&\left \vert m_L^{(4)}(i\xi)\right\vert \leq \frac{c_L}{1+\vert \xi\vert^{\beta_L}}
, \quad \left \vert m_Q(i\xi)\right\vert \leq \frac{1}{1+\vert \xi\vert^{\beta_Q}}, \quad
\left \vert m_Q'(i\xi)\right\vert \leq \frac{1}{1+\vert \xi\vert}
\end{aligned}
\end{equation}
for some $\beta_L, \beta_Q \geq 0$. The class of equations \eqref{BBM} includes a large variety of models such as the Benjamin--Bona--Mahony (BBM) equation
\begin{equation}\label{bbm}
m_L(i\xi)= m_Q(i\xi) = \frac1{1+\xi^2},
\end{equation}
the Korteweg--de Vries (KdV) equation
\begin{equation}\label{kdv}
m_L(i \xi)= 1 - \xi^2, \quad m_Q(i\xi) = 1
\end{equation} and the Whitham equation
\begin{equation*}
m_L(i\xi)= \sqrt{\frac{\tanh(\xi)}{\xi}}, \quad m_Q(i\xi) = 1.
\end{equation*}
The model \eqref{BBM} can be rigorously derived in the long wave regime from many physical models including water waves, plasma, etc., see, e.g., \cite{ASL,ChR,Craig,GErR,Guo}. In particular, rigorous error estimates between the solution of \eqref{BBM} and the solution of the original model are established on the natural time scale $t= \mathcal{O}(\frac{1}{\varepsilon})$.
In this paper we introduce a novel class of numerical integrators for \eqref{BBM} based on the long wave behaviour of the dispersion relation
\begin{equation}\label{wlim}
\begin{aligned}
& i \xi \left(1 - \frac{ m_L^{(2)}(0)}{2} \xi^2 \right)\quad+\quad\text{higher order terms} \quad \text{with}\quad \xi =\sqrt{\varepsilon} k, \, k \in \mathbb{Z}.
\end{aligned}
\end{equation}
At first order the {long wave limit preserving} (LWP) scheme takes the form
\begin{equation}\label{scheme}
\begin{aligned}
u^{n+1} & = \mathrm{e}^{-\tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left[ u^n
- \frac{1}{3\alpha} m_Q(\sqrt{\varepsilon}\partial_x) \Big(e^{\tau\alpha \varepsilon\partial_x^3} \Big( e^{-\tau\alpha \varepsilon\partial_x^3} \partial_x^{-1} u^n \Big)^2 - \Big( \partial_x^{-1} u^n\Big)^2+ 2 \varepsilon \tau \widehat{u^n}_0 \partial_x u^n\Big)
\right]
\end{aligned}
\end{equation}
where we have set $\alpha = \frac{ m_L^{(2)}(0)}{2}$.
Details on its construction will be given in Section~\ref{sec:dev1}. The scheme~\eqref{scheme} (and its second order counterpart, see \eqref{schema2}) will allow us to reproduce the dynamics of the solution $u(t,x)$ of \eqref{BBM} in long wave regimes $\varepsilon \ll 1$ on the natural long time scale $t= \mathcal{O}(\frac{1}{\varepsilon})$. More precisely, at first ($\sigma =1$) and second order ($\sigma =2$) we will establish the global error estimates
$$
\Vert u(t_n)- u^n \Vert_{L^2} \leq \tau^\sigma t_n \varepsilon^{2}c_0 e^{c_1 t_n {\varepsilon}} \quad \text{on long time scales}\quad t_n \leq \frac{1}{\varepsilon},\quad \sigma =1,2
$$
where $c_0, c_1$ depend on certain Sobolev norms of $u$ (depending on $\beta_L$ and $\beta_Q$). We refer to Theorem~\ref{thm:glob1} and Theorem \ref{thm:glob2} for the precise error estimates. Note that the time scale $t= \mathcal{O}(\frac{1}{\varepsilon})$ is also the natural time scale on the continuous level, i.e., for the PDE \eqref{BBM} itself.
Compared to classical schemes, e.g., splitting or exponential integrator methods, our long wave limit preserving integrators in particular
\begin{itemize}
\item allow for approximations on large natural time scales $t= \mathcal{O}(\frac{1}{\varepsilon})$
\item converge with rates at order $\tau^\sigma \varepsilon^2 t$.
\end{itemize}
Surprisingly, we can even achieve convergence of order $\tau \varepsilon$, i.e., a gain in $\varepsilon$, over long times $t= \frac{1}{\varepsilon}$.
For the analysis of long-time energy conservation for Hamiltonian partial differential equations with the aid of modulated Fourier expansions and Birkhoff normal forms we refer to \cite{HLW,HLW1,HLW2,FGP,FGP1,FGP2} and the references therein. Here, in contrast, we prove long time error estimates on the solution itself. In case of the nonlinear {Klein}--{Gordon} equation with weak nonlinearity $\varepsilon^2 u^3$, long time error estimates for splitting methods were recently established in \cite{Bao}.
The main challenge in the theoretical and numerical analysis of \eqref{BBM} on long time scales $t= \mathcal{O}(\frac{1}{\varepsilon})$ lies in the loss of derivative in the nonlinearity. This loss of derivative is clearly seen in case of the KdV equation \eqref{kdv}, for which we face a Burgers-type nonlinearity $ \varepsilon\partial_x u^2 $. However, even in case of the BBM equation \eqref{bbm}, where we expect some regularisation through the structure of the leading operators (note that $\beta_L = \beta_Q =2$), the smoothing only holds with a loss in~$\varepsilon$
\begin{equation}\label{regi2}
\left\Vert \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2 \right\Vert_r \leq \text{min}\left( {\sqrt{\varepsilon}} \Vert u^2 \Vert_{r},2 \Vert u \partial_x u \Vert_{r}\right).
\end{equation}
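Indeed, for the BBM symbols the Fourier multiplier of $\varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x)$ is $\frac{\varepsilon i k}{1+\varepsilon k^2}$: the first bound in \eqref{regi2} follows from $\sup_{k} \frac{\varepsilon \vert k \vert}{1+\varepsilon k^2} = \frac{\sqrt{\varepsilon}}{2}$, while the second follows from $\vert m_Q(i\xi)\vert \leq 1$ together with $\varepsilon \leq 1$.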
For BBM this may allow first order error estimates at order $\tau \sqrt{\varepsilon} $ for classical splitting or exponential integrator methods up to times $t= \mathcal{O}(\frac{1}{\sqrt{\varepsilon}})$, but not on the natural time scale of the PDE, that is $t= \mathcal{O}(\frac{1}{\varepsilon})$. Our new long wave limit adapted discretisation \eqref{scheme}, in contrast, allows for long time error estimates at order $\tau \varepsilon $ on the natural time scale $t= \mathcal{O}(\frac{1}{{\varepsilon}})$.
In case of the BBM equation with a regularising nonlinearity ($\beta_L = \beta_Q =2$) we can, thanks to the estimate \eqref{regi2}, trade the gain in $\varepsilon$ against the loss of derivatives. This also allows error bounds for non-smooth solutions, however, only on time scales $t=1$. More precisely, one could prove first-order convergence in $H^r$ for solutions in $H^r$ ($r>1/2$), i.e., without any loss of derivative, for short times $t=1$ at the cost of no longer gaining in~$\varepsilon$. Such low regularity estimates on short time scales without gain in $\varepsilon$ also hold true for classical schemes, see for instance \cite{CS21} for the analysis in case of splitting discretisations.
Our idea for LWP schemes can be extended to higher order. We will give details on the second order integrator on long time scales in Section \ref{sec:scheme2}. Note that for the classical KdV equation (that is, $\varepsilon = 1$ and without the transport term $\partial_x$) and for nonlinear Schr\"odinger equations, resonance based schemes were recently introduced in \cite{HoS16,OS18} and (short time) error estimates for times $t=1$ were proven. We also refer to \cite{H1,H2,Clem,OSu} for splitting, finite difference and Lawson-type methods for the classical KdV equation on time scales $t=1$.
\\
\noindent{\bf Outline of the paper.} In Section \ref{sec:scheme1} and Section \ref{sec:scheme2} we introduce the first- and second-order LWP scheme and carry out their convergence analysis over long times $t=\mathcal{O}\left( \frac{1}{\varepsilon}\right)$. Numerical experiments in Section \ref{sec:num} underline our theoretical findings.\\
\noindent{\bf Notation and assumptions.} In the following we will assume that $m_L^{(2)}(0)=2$, which implies (as $ \alpha = m_L^{(2)}(0)/2 $) that $ \alpha =1$ in \eqref{wlim}. Our analysis also holds true for any $ m_L^{(2)}(0) \in \mathbb{R}$. For practical implementation purposes we will impose periodic boundary conditions, that is $x \in \mathbb{T} =[-\pi ,\pi]$. Our results can be extended to the full space $x\in \mathbb{R}$. We denote by $(\cdot,\cdot)$ the standard scalar product $(f,g) = \int_{\mathbb{T}} f g\, dx$ and by $\|\cdot\|_r$ the standard $H^r(\mathbb{T})$ norm. In particular, for $r>1/2$, we will exploit the standard bilinear estimate
\begin{gather}
\label{bil_est}
\|fg\|_r \leq C_r \|f\|_r \|g\|_r.
\end{gather}
For $v(x) = \sum_{k \in \mathbb{Z}} \hat v_k e^{i k x}$ we set $\partial_x^{-1} v(x) := \sum_{k\neq 0} \frac{\hat v_k}{ik}\, e^{i k x}$.
Let us also define the operator $\Lambda = (1-\partial_x^2)^{\frac12}$, i.e., the Fourier multiplier with symbol $(1+\vert \xi \vert^2)^{\frac12}$.
\section{A first-order long wave limit preserving scheme}\label{sec:scheme1}
In a first section we will formally derive the LWP scheme \eqref{scheme} (see Section \ref{sec:dev1}). Then we will carry out its convergence analysis and establish a long time error estimate (see Section \ref{sec:err1}).
\subsection{Derivation of the scheme}\label{sec:dev1}
Recall Duhamel's formula of \eqref{BBM}
\begin{equation*}
\begin{aligned}
u(t) & = e^{- t \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(0)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{-t \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^t e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(s)ds.
\end{aligned}
\end{equation*}
Iterating the above formula, i.e., using that
\begin{equation*}
\begin{aligned}
u(s) & = e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
+\mathcal{O}\left( s \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2\right)
\end{aligned}
\end{equation*}
we see that formally
\begin{equation*}\begin{aligned}
u(t) &
\approx e^{- t\partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left[u(0)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \int_0^t e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(0)\right)^2 ds\right].
\end{aligned}
\end{equation*}
The key point lies in embedding the long wave limit behaviour (cf. \eqref{wlim})
$$
D_L = \partial_x m_L(\sqrt{\varepsilon}\partial_x) - (\partial_x + \varepsilon \partial_x^3)
= \mathcal{O}\left( {\varepsilon^{2 } \partial_x^{5 }} m_L^{(4)}(\sqrt{\varepsilon}\partial_x) \right)
$$
into our numerical discretisation. This motivates (for sufficiently smooth solutions) the following approximation
\begin{equation*}\begin{aligned}
u(t) &
\approx e^{-t \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left[u(0)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \int_0^t e^{s(\partial_x + \varepsilon \partial_x^3) } \left( e^{- s (\partial_x + \varepsilon \partial_x^3) } u(0)\right)^2 ds\right].
\end{aligned}
\end{equation*}
We may solve the oscillatory integral by the observation that
$$
\varepsilon\partial_x \int_0^t e^{s(\partial_x + \varepsilon \partial_x^3) } \left( e^{- s (\partial_x + \varepsilon \partial_x^3) } v \right)^2 ds = \frac{1}{3} \mathrm{e}^{t \varepsilon \partial_x^3} \left[ \left(\mathrm{e}^{ -t \varepsilon \partial_x^3 } \partial_x^{-1} v\right)^2\right] -\frac13 (\partial_x^{-1} v)^2+ 2 \varepsilon t \hat{v}_0 \partial_x v,
$$
see \eqref{resi}. Based on the long wave limit behaviour we thus find the following approximation
\begin{equation*}\begin{aligned}
u(t) &
\approx e^{-t \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left[u(0)
- m_Q(\sqrt{\varepsilon}\partial_x) \left(
\frac{1}{3} \mathrm{e}^{t \varepsilon \partial_x^3} \left( \mathrm{e}^{ -t \varepsilon \partial_x^3 } \partial_x^{-1} u(0)\right)^2 -\frac13 (\partial_x^{-1} u(0))^2+ 2 \varepsilon t \widehat{u(0)}_0 \partial_x u(0)\right)
\right]
\end{aligned}
\end{equation*}
which builds the basis of our LWP scheme \eqref{scheme}.
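Every operator appearing in \eqref{scheme} is a Fourier multiplier, so the scheme is straightforward to realise pseudospectrally with the FFT. The following short Python sketch is given only as an illustration; it assumes the BBM symbols \eqref{bbm} (so that $\alpha=1$), and the grid size, step size and initial datum are our own illustrative choices, not prescribed by the analysis.
\begin{verbatim}
# A minimal pseudospectral sketch of the first order LWP scheme, here for
# the BBM case m_L(i xi) = m_Q(i xi) = 1/(1+xi^2), so that alpha = 1.
# Grid size, step size and initial datum are illustrative choices only.
import numpy as np

N, eps, tau = 256, 0.01, 0.01
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)              # integer wave numbers
xi = np.sqrt(eps) * k

mL = 1.0 / (1.0 + xi**2)                      # symbol m_L(i xi)
mQ = 1.0 / (1.0 + xi**2)                      # symbol m_Q(i xi)
lin = np.exp(-tau * 1j * k * mL)              # e^{-tau dx m_L(sqrt(eps) dx)}
osc = np.exp(tau * eps * (1j * k) ** 3)       # e^{tau eps dx^3}
inv_dx = np.where(k == 0, 0j, 1.0 / (1j * np.where(k == 0, 1, k)))

def lwp_step(u):
    """One step of the scheme: all operators act as Fourier multipliers."""
    uh = np.fft.fft(u)
    vh = inv_dx * uh                          # dx^{-1} u^n (zero mode dropped)
    v = np.fft.ifft(vh).real
    w = np.fft.ifft(vh / osc).real            # e^{-tau eps dx^3} dx^{-1} u^n
    nl = osc * np.fft.fft(w**2) - np.fft.fft(v**2)
    u0 = uh[0].real / N                       # zeroth Fourier coefficient of u^n
    nl = mQ * (nl + 2 * eps * tau * u0 * 1j * k * uh)
    return np.fft.ifft(lin * (uh - nl / 3.0)).real

u = np.cos(x)                                 # illustrative initial datum
for _ in range(100):                          # 100 steps, i.e. up to t = 1
    u = lwp_step(u)
\end{verbatim}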
\subsection{Error analysis}\label{sec:err1}
In this section we carry out the error analysis of the filtered scheme \eqref{scheme} over long times $t=\mathcal{O}\left( \frac{1}{\varepsilon}\right)$. We start with the local error analysis. For this purpose we will denote by $\varphi^t$ the exact flow of \eqref{BBM} and by $\Phi^\tau$ the numerical flow defined by the scheme \eqref{scheme}, such that
$$
u(t_n+\tau) = \varphi^\tau(u(t_n) ) \quad \text{and}\quad u^{n+1} = \Phi^\tau(u^n).
$$
\subsubsection{Local error analysis}
We will exploit the following estimate which regularises for $\beta_Q > 1/2$.
\begin{lemma}
\label{lemma_reg}
Let $f\in H^{r+1-\beta_Q}(\mathbb{T})$. It holds that
\begin{equation*}
\|\varepsilon\partial_x m_Q(\sqrt{\varepsilon}\partial_x) f\|_r \leq \varepsilon^{1-\beta_Q} \|f\|_{r+1-\beta_Q}.
\end{equation*}
\end{lemma}
\begin{proof}
The assertion follows thanks to the estimate
\begin{align*}
\|\varepsilon\partial_x m_Q(\sqrt{\varepsilon}\partial_x) f\|_r^2 & = \sum_{k\in\mathbb{Z}} (1 + |k|)^{2r} \bigg| \frac{\varepsilon ik}{1+(\sqrt{\varepsilon} k)^{\beta_Q}} \bigg|^2 |\hat{f}_k|^2
\leq \varepsilon^{2(1-\beta_Q)} \|f\|_{r+1-\beta_Q }^2.
\end{align*}
\end{proof}
\begin{lemma} \label{thm:loc} Fix $r>1/2$. Then, the local error $ \varphi^\tau(u(t_n))- \Phi^\tau(u(t_n))$ satisfies for $$ \beta :=\text{min}(2, \beta_L + \beta_Q)$$
the estimate
\begin{align*}
\Vert \varphi^\tau(u(t_n))- \Phi^\tau(u(t_n)) \Vert_r & \leq \tau^2 \varepsilon^{2} c\left(\sup_{t_n\leq t\leq t_{n+1}} \Vert u(t) \Vert_{r+2}\right) + c_L \tau^{2} \varepsilon^{3-\frac{\beta}2} c\left(\sup_{t_n\leq t\leq t_{n+1}} \Vert u(t) \Vert_{r+6- \beta }\right).\end{align*}
\end{lemma}
\begin{proof}
Iterating Duhamel's formula of \eqref{BBM} yields that
\begin{equation}\label{expu1}
\begin{aligned}
u(t_n+\tau) & = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(t_n+s)ds\\
& = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
\\&- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)\right)^2 ds\\& + \mathcal{R}_1(\varepsilon, \tau, u )
\end{aligned}
\end{equation}
with the remainder
\begin{equation}
\mathcal{R}_1(\varepsilon, \tau, u )
= \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \Big[ \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)\right)^2- u^2(t_n+s)\Big] ds.
\end{equation}
Thanks to the observation that
$$
u(t_n+s) = e^{- s\partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^s e^{s_1 \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(t_n+s_1)ds_1
$$
the remainder $\mathcal{R}_1(\varepsilon, \tau, u )$ is of the following form
$$
\mathcal{R}_1(\varepsilon, \tau, u ) = \mathcal{O}\left(\tau^2 \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \left(u(t) \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \left( u^2(t)\right) \right) \right).
$$
Thanks to assumption \eqref{ass1} (which guarantees the boundedness of the symbol $m_Q$) we can thus conclude that
\begin{equation}\label{R1}
\Vert \mathcal{R}_1(\varepsilon, \tau, u )
\Vert_r \leq \tau^2 \varepsilon^{2 }c\left(\sup_{t_n\leq t\leq t_{n+1}} \Vert u(t) \Vert_{r+2}\right).
\end{equation}
Taylor series expansion of the symbol $m_L(\delta)$ around $\delta = 0$ yields that
\begin{align*}
m_L(\delta) & =m_L(0) + \delta m_L'(0) + \frac{\delta^2}{2} m_L^{(2)}(0) + \frac{\delta^3}{3!} m_L^{(3)}(0) + \int_0^\delta \frac{(\delta-\tilde \delta)^3}{3!} m_L^{(4)}(\tilde \delta) d \tilde \delta \\
& = 1 + {\delta^2} + \int_0^\delta\frac{(\delta-\tilde \delta)^3}{3!} m_L^{(4)}(\tilde \delta) d \tilde \delta
\end{align*}
where in the last step we have used the assumptions \eqref{ass1} (which imply that $m_L^{(2\ell +1)}(0)=0$) and the normalisation (without loss of generality) $m_L(0)=1$ and $m_L^{(2)}(0) =2$. Together with the assumption that
$ \left \vert m_L^{(4)}(i\xi)\right\vert \leq \frac{c_L}{1+\vert \xi\vert^{\beta_L}}$ (see again \eqref{ass1})
we thus find that
\begin{align}\label{opexp}
D_L = \partial_x m_L(\sqrt{\varepsilon}\partial_x) - (\partial_x + \varepsilon \partial_x^3)
= \mathcal{O}\left( c_L\frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \right).
\end{align}
This allows the following expansion of the oscillations
\begin{equation}\label{osc}
e^{ \pm s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } =
e^{ \pm s (\partial_x + \varepsilon \partial_x^3)} +\mathcal{O}\left( c_L\frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \right).
\end{equation}
Employing this expansion of the oscillations in the remaining integral term in \eqref{expu1}
yields together with Lemma \ref{lemma_reg} that
\begin{equation}\label{expu2}
\begin{aligned}
u(t_n+\tau)
& = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{ s (\partial_x + \varepsilon \partial_x^3)} \left( e^{- s (\partial_x + \varepsilon \partial_x^3)} u(t_n)\right)^2 ds \\&+\mathcal{R}_2(\varepsilon, \tau, u )
\end{aligned}
\end{equation}
where the remainder $\mathcal{R}_2 (\varepsilon, \tau, u ) $ is thanks to \eqref{ass1} of type
$$ \mathcal{O}\left(c_L \tau^{2} \frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2 \right) = \mathcal{O}\left(c_L \tau^{2} \frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \frac{\varepsilon \partial_x }{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_Q}} u^2 \right)
$$
such that
\begin{equation}\label{R2}
\Vert \mathcal{R}_2(\varepsilon, \tau, u )
\Vert_r \leq c_L \tau^{2} \varepsilon^{3-\frac{\beta}2} c\left(\sup_{t_n\leq t\leq t_{n+1}} \Vert u(t) \Vert_{r+6- \beta }\right), \quad \beta :=\text{min}(2, \beta_L + \beta_Q).
\end{equation}
Next we calculate with the aid of the Fourier expansion $v(x) = \sum_{k \in \mathbb{Z}} \hat v_k e^{i k x}$ and the definition $$\partial_x^{-1} v(x) = \sum_{k\neq 0} \frac{\hat v_k}{ik}\, e^{i k x}$$ that
\begin{equation}
\begin{aligned}\label{resi}
\mathcal{I}(\tau,\varepsilon, v) &= \varepsilon \partial_x \int_0^\tau \mathrm{e}^{ s \varepsilon \partial_x^3 } \left[ \left(\mathrm{e}^{ -s \varepsilon \partial_x^3 } v\right)^2\right] ds\\
& = \varepsilon \sum_{\substack{\ell+m = k\\ \ell, m \neq 0}}e^{ik x} \hat v_\ell \hat v_m (ik) \int_0^\tau e^{- 3i s \varepsilon k \ell m } ds + 2 \varepsilon \tau \hat{v}_0 \partial_x v\\
& = \sum_{\substack{\ell+m = k\\ \ell, m \neq 0}}e^{ik x} \hat v_\ell \hat v_m (ik) \left( e^{- 3i \tau \varepsilon k \ell m } -1 \right)
\frac{1}{3(i\ell)(im)} + 2 \varepsilon \tau \hat{v}_0 \partial_x v\\
& = \frac{1}{3} \mathrm{e}^{ \tau \varepsilon \partial_x^3} \left[ \left(\mathrm{e}^{ -\tau \varepsilon \partial_x^3 } \partial_x^{-1} v\right)^2\right] -\frac13 (\partial_x^{-1} v)^2+ 2 \varepsilon \tau \hat{v}_0 \partial_x v,
\end{aligned}
\end{equation}
see also \cite{HoS16} in case of no advection term $\partial_x$. Plugging the above relation into \eqref{expu2} we obtain
\begin{align*}
\varphi^\tau(u(t_n))= \Phi^\tau(u(t_n)) + \sum_{i=1,2}\mathcal{R}_i(\varepsilon, \tau, u ),
\end{align*}
where $\mathcal{R}_1$ satisfies \eqref{R1} and $\mathcal{R}_2$ satisfies \eqref{R2}. This concludes the proof.
\end{proof}
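The identity \eqref{resi} is also easy to verify numerically, which provides a useful sanity check for any implementation. A minimal Python sketch follows; the band-limited random datum and all parameters are our own illustrative choices, not taken from the analysis.
\begin{verbatim}
# Numerical sanity check of the identity (resi): compare a fine trapezoidal
# quadrature of the oscillatory integral with its closed form. The grid,
# parameters and (band-limited) random datum are illustrative choices only.
import numpy as np

N, eps, tau = 64, 0.05, 0.3
k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wave numbers
rng = np.random.default_rng(0)
vh = np.fft.fft(rng.standard_normal(N))
vh[np.abs(k) > 10] = 0.0                          # band-limit: no aliasing in v^2
inv_dx = np.where(k == 0, 0j, 1.0 / (1j * np.where(k == 0, 1, k)))

def e_osc(h, s):                                  # e^{s eps dx^3} in Fourier space
    return np.exp(s * eps * (1j * k) ** 3) * h

def integrand(s):                                 # e^{s eps dx^3}(e^{-s eps dx^3} v)^2
    w = np.fft.ifft(e_osc(vh, -s)).real
    return e_osc(np.fft.fft(w**2), s)

s_grid = np.linspace(0.0, tau, 4001)
vals = np.array([integrand(s) for s in s_grid])
ds = s_grid[1] - s_grid[0]                        # trapezoidal rule in s
quad = ds * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
lhs = np.fft.ifft(eps * 1j * k * quad).real       # eps dx (quadrature)

p = np.fft.ifft(inv_dx * vh).real                 # dx^{-1} v
q = np.fft.ifft(e_osc(inv_dx * vh, -tau)).real    # e^{-tau eps dx^3} dx^{-1} v
rhs = (np.fft.ifft(e_osc(np.fft.fft(q**2), tau)).real - p**2) / 3.0
rhs += 2 * eps * tau * (vh[0].real / N) * np.fft.ifft(1j * k * vh).real

print(np.max(np.abs(lhs - rhs)))                  # small, up to quadrature error
\end{verbatim}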
\subsubsection{Stability analysis}
In order to carry out the stability analysis we need the following Lemma.
\begin{lemma} \label{lem:lemi}
For $\vert m_Q(i\xi) \vert \leq 1$ and $\vert m_Q'(i\xi) \vert \leq \frac{1}{1+\vert \xi \vert}$ it holds that
$$
\Vert [m_Q(\sqrt{\varepsilon}\partial_x),w] \partial_x \Lambda^r v \Vert_{L^2} \leq \Vert w \Vert_{r+1} \Vert v \Vert_{r}.
$$
\end{lemma}
\begin{proof}
Recall that $\Lambda = (1+\vert \xi \vert^2)^{\frac12}$. The $k$-th Fourier coefficient of $ [m_Q,w] \partial_x \Lambda^r v $ is given by
\begin{align}\label{KF}
\reallywidehat{[m_Q(\sqrt{\varepsilon}\partial_x),w] \partial_x \Lambda^r v}(k) &= \sum_{l} \hat{w}(k-l) \Big[m_Q(\sqrt{\varepsilon}i l) - m_Q(\sqrt{\varepsilon} ik) \Big] i l (1+l^2)^\frac{r}{2} \hat{v}(l).
\end{align}
Next we note that, thanks to the assumptions \eqref{ass1} on $m_Q$, we have\\
(i) if $\vert l \vert \leq 2 \vert k - l\vert$ that $\vert m_Q(\sqrt{\varepsilon}i l) - m_Q(\sqrt{\varepsilon} ik) \vert \leq 2$\\
(ii) if $\vert l \vert > 2 \vert k - l\vert$ that
$$
\vert m_Q(\sqrt{\varepsilon} i l) - m_Q(\sqrt{\varepsilon}i k) \vert \leq \sqrt{\varepsilon} \vert k - l \vert \int_0^1 \frac{1}{ 1 + \sqrt{\varepsilon} \vert l + s (k-l)\vert} ds \leq \frac{\sqrt{\varepsilon} \vert k - l\vert}{ 1+ \sqrt{\varepsilon}\vert l\vert} \leq \frac{\vert k - l\vert}{\vert l \vert}.
$$
Hence we can conclude that
$$
\vert m_Q(\sqrt{\varepsilon} i l) - m_Q(\sqrt{\varepsilon}i k)\vert \vert l \vert \leq \vert k - l \vert.
$$
Plugging the above estimate into \eqref{KF} we obtain that
\begin{align*}
\left \vert \reallywidehat{[m_Q(\sqrt{\varepsilon}\partial_x),w] \partial_x \Lambda^r v}(k)\right \vert \leq \sum_{l} \vert k - l \vert \vert \hat{w}(k-l) \vert (1+l^2)^\frac{r}{2} \vert \hat{v}(l)\vert
\end{align*}
which implies the assertion.
\end{proof}
\begin{lemma} \label{thm:stab} Fix $r\geq 1$. The numerical flow defined by the scheme \eqref{scheme} is ${\varepsilon}$-stable in $H^r$ in the sense that for two functions $w\in H^{r+1}$ and $v\in H^r$ we have that
$$
\Vert \Phi^\tau(w)- \Phi^\tau(v) \Vert_r \leq e^{\tau {\varepsilon} B} \Vert w-v\Vert_r, \qquad B = B(\Vert w\Vert_{r+1}, \Vert v \Vert_r),
$$
where the constant $B$ depends on the $H^{r+1}$ norm of $w$ and $H^r$ norm of $v$.
\end{lemma}
\begin{proof} Fix $r\geq 1$. For the stability analysis we will rewrite the numerical flow \eqref{scheme} back in its integral form. Thanks to \eqref{resi} we observe that
$$
\Phi^\tau(v) =\mathrm{e}^{-\tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v - \varepsilon\partial_x m_Q(\sqrt{\varepsilon}\partial_x) \mathrm{e}^{-\tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau \mathrm{e}^{ s \varepsilon \partial_x^3 } \left[ \left(\mathrm{e}^{ -s \varepsilon \partial_x^3} v\right)^2\right] ds.
$$
We need to show that
\begin{align}\label{todo}
\left \vert \left( \Lambda^r \partial_x m_Q (w v), \Lambda^r v \right)\right\vert \leq \Vert w \Vert_{r+1} \Vert v \Vert^2_r
\end{align}
where for shortness we write $m_Q =m_Q(\sqrt{\varepsilon}\partial_x)$.
Let us note that
$$
\Lambda^r \partial_x m_Q (w v) = \Lambda^r m_Q (w \partial_x v) + \Lambda^r m_Q (v \partial_x w),
$$
where thanks to the boundedness of $m_Q$ (see \eqref{ass1}) we have that
$$
\left \vert \left( \Lambda^r m_Q (v \partial_x w), \Lambda^r v \right)\right\vert \leq
\Vert v \Vert_r \Vert v \partial_x w \Vert_{r}
\leq C_r \Vert v \Vert_r^2 \Vert w \Vert_{r+1}.
$$
Thus we obtain that
\begin{align}\label{todo2}
\left \vert \left( \Lambda^r \partial_x m_Q (w v), \Lambda^r v \right)\right\vert \leq \left \vert \left( \Lambda^r m_Q (w \partial_x v), \Lambda^r v \right)\right\vert + \Vert v \Vert_r^2 \Vert w \Vert_{r+1}
\end{align}
and it remains to derive a suitable bound on $\left \vert \left( \Lambda^r m_Q (w \partial_x v), \Lambda^r v \right)\right\vert $. For this purpose let us note that
\begin{equation}\label{K1}
\Lambda^r m_Q (w \partial_x v) = m_Q( w \Lambda^r \partial_x v) + m_Q ( [\Lambda^r, w] \partial_x v).
\end{equation}
For the second term in \eqref{K1} we see that
\begin{equation}\label{K12}
\begin{aligned}
\left \vert \left( m_Q ( [\Lambda^r, w] \partial_x v), \Lambda^r v \right)\right\vert\leq \Vert [\Lambda^r, w] \partial_x v\Vert_{L^2} \Vert v \Vert_r \leq \left( \Vert \partial_x v\Vert_{L^2} \Vert w \Vert_{r+1} + \Vert w \Vert_{L^\infty} \Vert v \Vert_r\right) \Vert v\Vert_r.
\end{aligned}
\end{equation}
For the first term in \eqref{K1} we see, as $m_Q = m_Q^\ast$, that
\begin{equation*}
\begin{aligned}
\left( m_Q( w \Lambda^r \partial_x v), \Lambda^r v \right) & = \left( w \Lambda^r \partial_x v, m_Q \Lambda^r v \right) = - \left( (\partial_x w) \Lambda^r v, m_Q \Lambda^r v \right) - \left( w \Lambda^r v, m_Q \partial_x \Lambda^r v \right)\\
& = - \left( (\partial_x w) \Lambda^r v, m_Q \Lambda^r v \right) - \left( \Lambda^r v, m_Q (w \partial_x \Lambda^r v) \right)
- \left( \Lambda^r v, [m_Q,w] \partial_x \Lambda^r v \right).
\end{aligned}
\end{equation*}
Hence,
$$
\left( m_Q( w \Lambda^r \partial_x v), \Lambda^r v \right) = - \frac12 \left( (\partial_x w) \Lambda^r v, m_Q \Lambda^r v \right)
- \frac12 \left( \Lambda^r v, [m_Q,w] \partial_x \Lambda^r v \right)
$$
which implies thanks to Lemma \ref{lem:lemi} and the assumptions \eqref{ass1} on $m_Q$ that
\begin{equation}\label{K11}
\begin{aligned}
\left \vert \left( m_Q( w \Lambda^r \partial_x v), \Lambda^r v \right) \right\vert \leq
\Vert \partial_x w\Vert_{L^\infty} \Vert v \Vert_r^2 + \Vert v \Vert_r \Vert [m_Q,w] \partial_x \Lambda^r v \Vert_{L^2} \leq c \Vert v \Vert_r^2 \Vert w \Vert_{r+1}.
\end{aligned}
\end{equation}
Plugging \eqref{K12} and \eqref{K11} into \eqref{K1} yields that
$$
\left \vert \left( \Lambda^r m_Q (w \partial_x v), \Lambda^r v \right)\right\vert \leq c \Vert v \Vert_r^2 \Vert w \Vert_{r+1}
$$
which by \eqref{todo2} implies the desired estimate \eqref{todo}.
\end{proof}
\subsubsection{Global error estimate}
\begin{theorem} \label{thm:glob1} Fix $ \beta :=\text{min}(2, \beta_L + \beta_Q)$ and assume that the solution $u$ of \eqref{BBM} satisfies $u\in \mathcal{C}([0,T];H^{6- \beta})$. Then there exists a $\tau_0>0$ such that for all $0<\tau \leq \tau_0$ the following global error estimate holds for $u^n$ defined in \eqref{scheme}:
\begin{align*}
\Vert u(t_n)- u^n \Vert_{L^2} & \leq t_n \tau \varepsilon^{2} c\left(\sup_{0\leq t\leq t_{n}} \Vert u(t) \Vert_{2}\right) e^{c t_n {\varepsilon}}+ c_L t_n \tau \varepsilon^2 \varepsilon^{1-\frac{\beta}2} c\left(\sup_{0\leq t\leq t_{n}} \Vert u(t) \Vert_{6- \beta }\right) e^{c t_n {\varepsilon}}
\end{align*}
where $c$ depends on the $H^{2}$ norm of the solution $u$.
\end{theorem}
\begin{proof}
The assertion in $H^r$, $r\geq 1$, follows by the local error bound given in Lemma \ref{thm:loc} together with the stability estimate in Lemma \ref{thm:stab} (with the stronger norm placed on the exact solution $u(t_n)$) via a Lady Windermere's fan argument (see, e.g., \cite{HLW}). Then under the given regularity assumptions on the exact solution (which imply that $u$ is at least in $H^4$) we can prove the corresponding $L^2$ error bound by first proving convergence (with reduced order in $\tau$ but full gain of at least one factor $\varepsilon$) in $H^{1}$, i.e.,
\begin{align*}
\Vert u(t_n)- u^n \Vert_{H^1} & \leq t_n \tau^\delta \varepsilon c\left(\sup_{0\leq t\leq t_{n}} \Vert u(t) \Vert_{4}\right) e^{c t_n {\varepsilon}}
\end{align*}
for some $\delta = \delta(\beta_L,\beta_Q) >0$. Thanks to the estimate
$$
\Vert u^n \Vert_{H^1} \leq \Vert u(t_n)- u^n \Vert_{H^1} + \Vert u(t_n) \Vert_{H^1}
$$ this will give us a priori the boundedness of the numerical solution in $H^{1}$ over long times $t_n=\frac{1}{\varepsilon}$. For details on the latter approach in case of short time ($t=1$) estimates for splitting methods for cubic Schr\"odinger we refer to \cite{Lubich08}.
\end{proof}
\section{A second-order long wave limit preserving scheme}\label{sec:scheme2}
Our second order LWP scheme for \eqref{BBM} takes the form
\begin{equation}\label{schema2}
\begin{aligned}
u^{n+1} &= \mathrm{e}^{-\tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left[ u^n
- \frac{1}{3\alpha} m_Q(\sqrt{\varepsilon}\partial_x) \Big(e^{\tau\alpha\varepsilon\partial_x^3} \Big( e^{-\tau\alpha\varepsilon\partial_x^3} \partial_x^{-1} u^n \Big)^2 - \Big( \partial_x^{-1} u^n\Big)^2+ 2 \varepsilon \tau \widehat{u^n}_0 \partial_x u^n\Big)
\right] \\& + {\tau^2} \varepsilon^2\partial_x m_Q(\sqrt{\varepsilon}\partial_x) \Psi_{m_Q} \Big(u^n \Psi_{m_Q}\partial_x m_Q(\sqrt{\varepsilon}\partial_x) (u^n u^n)\Big) \\&
- \frac{\tau^2}{2} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \Psi_{D_L,m_Q}D_L (u^n u^n) + {\tau^2} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \Psi_{D_L,m_Q}\left( u^n D_L u^n \right)
\end{aligned}
\end{equation}
where we recall that $ \alpha = m_L^{(2)}(0)/2 $ and
$$
D_L(\sqrt{\varepsilon} \partial_x) = \partial_x m_L(\sqrt{\varepsilon}\partial_x) - (\partial_x + \alpha \varepsilon\partial_x^3).
$$
For stability issues we have introduced the filter functions $$\Psi_{m_Q}(\sqrt{\varepsilon}\partial_x) \quad \text{and}\quad \Psi_{D_L,m_Q} = \Psi_{D_L,m_Q}(\sqrt{\varepsilon}\partial_x)$$ satisfying
\begin{equation}\label{filter}
\begin{aligned}
&\left \Vert \tau \Psi_{m_Q}(\sqrt{\varepsilon}\partial_x)\partial_x m_Q(\sqrt{\varepsilon}\partial_x) v\right \Vert_r \leq \Vert v \Vert_r , \quad
\left \Vert\Psi_{m_Q }(\sqrt{\varepsilon}\partial_x) v - v\right \Vert_r \leq \tau \Vert \partial_x m_Q(\sqrt{\varepsilon}\partial_x) v\Vert_{r}
\\
& \left \Vert \tau \partial_x m_Q(\sqrt{\varepsilon}\partial_x) D_L \Psi_{D_L,m_Q}(\sqrt{\varepsilon}\partial_x) v\right \Vert_r \leq \Vert v \Vert_r , \,
\left \Vert \Psi_{D_L,m_Q}(\sqrt{\varepsilon}\partial_x) v - v\right \Vert_r \leq \tau \Vert \partial_x m_Q(\sqrt{\varepsilon}\partial_x) D_L v\Vert_{r}.
\end{aligned}
\end{equation}
For an introduction to filter functions we refer to \cite{HLW}.\\
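The convergence analysis below only uses the abstract properties \eqref{filter}. As a concrete illustration (this particular choice is ours and not prescribed by the scheme), one may take the Fourier multipliers acting on the $k$-th mode as
$$
\Psi_{m_Q} = \Big(1+\tau \big\vert k\, m_Q(i \sqrt{\varepsilon} k)\big\vert\Big)^{-1}, \qquad
\Psi_{D_L,m_Q} = \Big(1+\tau \big\vert q_L(k)\big\vert\Big)^{-1},
$$
where $q_L(k)$ denotes the Fourier symbol of $\partial_x m_Q(\sqrt{\varepsilon}\partial_x) D_L$; both requirements in \eqref{filter} then follow from the elementary bound $\frac{x}{1+x} \leq \text{min}(1,x)$ for $x \geq 0$.\\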
In a first section we will derive the LWP scheme \eqref{schema2} (see Section \ref{sec:dev2}). Then we will carry out its long time error estimate (see Section \ref{sec:err2}). We will again assume without loss of generality that $ \alpha = 1$.
\subsection{Derivation of the scheme}\label{sec:dev2}
Iterating Duhamel's formula \eqref{BBM} yields that
\begin{equation*}
\begin{aligned}
u(t_n+\tau) & = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(t_n+s)ds\\
& = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
\\& - \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \Big( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)\\&\qquad - \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^s e^{s_1 \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(t_n+s_1)ds_1\Big)^2 ds.
\end{aligned}
\end{equation*}
Employing the approximation
\begin{align*}
\varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) & e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^s e^{s_1 \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u^2(t_n+s_1)ds_1 \\ &=
{s} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2(t_n) + \mathcal{O}(s^2 \varepsilon \partial_x m_L(\sqrt{\varepsilon}\partial_x) \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2)
\end{align*}
we obtain that
\begin{equation}\label{uexp2}
\begin{aligned}
u(t_n+\tau)
& = e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)
\\&- \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } u(t_n)\right)^2 ds\\
& + \tau^2 \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( u(t_n) \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2(t_n)\right) + \mathcal{R}_1(\tau,\varepsilon,u). \end{aligned}
\end{equation}
The remainder $\mathcal{R}_1(\tau,\varepsilon,u)$ is thereby of order $$\mathcal{O}(s^2 \varepsilon^2 \partial_x m_L(\sqrt{\varepsilon}\partial_x) \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \partial_x m_Q(\sqrt{\varepsilon}\partial_x) u^2)$$ which implies by assumption \eqref{ass1} together with the observation (see \eqref{opexp})
$$
D_L = \partial_x m_L(\sqrt{\varepsilon}\partial_x) - (\partial_x + \varepsilon \partial_x^3) =\mathcal{O}\left( c_L\frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \right)
$$
the following bound
\begin{align}\label{2R1}
\Vert \mathcal{R}_1(\tau,\varepsilon,u)\Vert_r \leq \tau^3 \varepsilon^2 c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+5}\right)+ \tau^3 \varepsilon^2 \varepsilon^{1-\beta_0} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+7-2\beta_0}\right)
\end{align}
with $\beta_0 = \text{min}(1, \beta_Q)$.
Next we employ the following lemma.
\begin{lemma}\label{lem:osc2}
It holds that
\begin{align*}
\varepsilon &\partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v\right)^2 ds\\& =
\varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) e^{- \tau \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \int_0^\tau e^{s (\partial_x + \varepsilon \partial_x^3) } \left(e^{-s (\partial_x + \varepsilon \partial_x^3) } v\right)^2 ds\\
& + \frac{\tau^2}{2} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \Psi_{D_L,m_Q}D_L v^2 - {\tau^2} \varepsilon \partial_x m_Q(\sqrt{\varepsilon}\partial_x) \Psi_{D_L,m_Q} \left( v D_L v \right) + \mathcal{R}_2(\tau,\varepsilon,u)
\end{align*}
with the remainder
\begin{align}\label{2R2}
\Vert \mathcal{R}_2(\tau,\varepsilon,u)\Vert_r \leq c_L\tau^3 \varepsilon^2 \varepsilon^{1-\beta_1/2} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+7-\beta_1}\right) + c_L\tau^3 \varepsilon^2 \varepsilon^{3-\beta_2/2} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+11-\beta_2}\right)
\end{align}
where $\beta_1 = \text{min}(2,\beta_Q+\beta_L)$ and $\beta_2 = \text{min}(6,\beta_Q+2 \beta_L)$.
\end{lemma}
\begin{proof}
Note that
\begin{align*}
\int_0^\tau & e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v\right)^2 ds
\\& =
\int_0^\tau \left( e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) }- e^{s (\partial_x + \varepsilon \partial_x^3) } \right) \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v\right)^2 ds\\
&+ \int_0^\tau e^{s (\partial_x + \varepsilon \partial_x^3) } \Big[ \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v\right) \left( e^{-s \partial_x m_L(\sqrt{\varepsilon}\partial_x) }- e^{-s (\partial_x + \varepsilon \partial_x^3) } \right) v \Big]ds\\
&+ \int_0^\tau e^{s (\partial_x + \varepsilon \partial_x^3) } \Big[ \left( e^{-s (\partial_x + \varepsilon \partial_x^3) } v \right) \left( e^{-s \partial_x m_L(\sqrt{\varepsilon}\partial_x) }- e^{-s (\partial_x + \varepsilon \partial_x^3) } \right) v \Big]ds\\
& + \int_0^\tau e^{s (\partial_x + \varepsilon \partial_x^3) } \left(e^{-s (\partial_x + \varepsilon \partial_x^3) } v\right)^2 ds.
\end{align*}
Hence, using that (see \eqref{opexp})
$$
D_L = \partial_x m_L(\sqrt{\varepsilon}\partial_x) - (\partial_x + \varepsilon \partial_x^3) =\mathcal{O}\left( c_L\frac{\varepsilon^{2 } \partial_x^{5 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} \right)
$$
together with the expansions
$$
e^{-s \partial_x m_L(\sqrt{\varepsilon}\partial_x) }= 1+\mathcal{O}(s\partial_x m_L(\sqrt{\varepsilon}\partial_x) ), \qquad e^{\pm s (\partial_x + \varepsilon \partial_x^3) } = 1 +\mathcal{O} \left( s (\partial_x + \varepsilon \partial_x^3) \right)
$$
we obtain that
\begin{align*}
\int_0^\tau & e^{s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } \left( e^{- s \partial_x m_L(\sqrt{\varepsilon}\partial_x) } v\right)^2 ds\\& =
\int_0^\tau e^{s (\partial_x + \varepsilon \partial_x^3) } \left(e^{-s (\partial_x + \varepsilon \partial_x^3) } v\right)^2 ds + \frac{\tau^2}{2} D_L v^2 - \tau^2 v D_L v \\&+\mathcal{O}\left(c_L \tau^3\frac{\varepsilon^{4 } \partial_x^{10 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{2\beta_L}} p(v)\right)+ \mathcal{O}\left(\tau^3 c_L\frac{\varepsilon^{2 } \partial_x^{6 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} p(v) \right)
\end{align*}
for some polynomials $p$ of degree 2 in $v$.
Together with the properties of the filter functions (cf. \eqref{filter})
$$
\Psi_{ m_Q }(\sqrt{\varepsilon} \partial_x) = 1 +\mathcal{O}(\tau \partial_x m_Q(\sqrt{\varepsilon} \partial_x)), \quad \Psi_{ D_L,m_Q } (\sqrt{\varepsilon} \partial_x)= 1 +\mathcal{O}\big(\tau \partial_x m_Q(\sqrt{\varepsilon} \partial_x) D_L\big)
$$
this yields that the remainder $\mathcal{R}_2(\tau,\varepsilon,u)$ is of the form
\begin{align*}
\mathcal{R}_2(\tau,\varepsilon,u) & =
\mathcal{O}\left(c_L \tau^3 \frac{\varepsilon \partial_x}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_Q}} \frac{\varepsilon^{4 } \partial_x^{10 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{2\beta_L}} p(v)\right)\\& + \mathcal{O}\left(\tau^3 c_L \frac{\varepsilon \partial_x}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_Q}} \frac{\varepsilon^{2 } \partial_x^{6 }}{1+ (\sqrt{\varepsilon}\vert \partial_x\vert)^{\beta_L}} p(v) \right).
\end{align*}
This concludes the proof.
\end{proof}
Using Lemma \ref{lem:osc2} in the expansion of the exact solution \eqref{uexp2} yields together with
\eqref{resi} and the definition of the numerical flow $\Phi^\tau$ in \eqref{schema2} that
\begin{equation}\label{expu2fin}
u(t_n+\tau) = \Phi^\tau(u(t_n)) + \mathcal{R}_1(\tau,\varepsilon,u)+ \mathcal{R}_2(\tau,\varepsilon,u)
\end{equation}
where the remainders $ \mathcal{R}_1$ and $ \mathcal{R}_2$ satisfy the bounds \eqref{2R1} and \eqref{2R2}, respectively.
\subsection{Error analysis}\label{sec:err2}
Let us again denote by $\varphi^t$ the exact flow of \eqref{BBM} and by $\Phi^\tau$ the numerical flow defined by the scheme \eqref{schema2}, such that
$$
u(t_n+\tau) = \varphi^\tau(u(t_n) ) \quad \text{and}\quad u^{n+1} = \Phi^\tau(u^n).
$$
\subsubsection{Local error analysis}
\begin{lemma} \label{thm:loc2} Fix $r\geq 0$ and let $\beta_0 = \text{min}(1,\beta_Q)$,
$\beta_1 = \text{min}(2,\beta_Q+\beta_L)$ and $\beta_2 = \text{min}(6,\beta_Q+2 \beta_L)$.
Then, the local error $ \varphi^\tau(u(t_n))- \Phi^\tau(u(t_n))$ satisfies
\begin{align*}
\Vert & \varphi^\tau(u(t_n))- \Phi^\tau(u(t_n)) \Vert_r \leq \tau^3 \varepsilon^2 c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+5}\right)+ \tau^3 \varepsilon^2 \varepsilon^{1-\beta_0} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+7-2\beta_0}\right)\\
& + c_L\tau^3 \varepsilon^2 \varepsilon^{1-\beta_1/2} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+7-\beta_1}\right) + c_L\tau^3 \varepsilon^2 \varepsilon^{3-\beta_2/2} c \left(\sup_{t_n \leq t \leq t_{n+1}} \Vert u\Vert_{r+11-\beta_2}\right) .
\end{align*}
\end{lemma}
\begin{proof}
The assertion follows from the expansion of the exact solution given in \eqref{expu2fin}
together with the error bounds on $ \mathcal{R}_1$ and $ \mathcal{R}_2$ in \eqref{2R1} and \eqref{2R2}.
\end{proof}
\subsubsection{Stability analysis}
\begin{lemma} \label{thm:stab2} Fix $r \geq 1$. The numerical flow $ \Phi^\tau$ defined by the scheme \eqref{schema2} is ${\varepsilon}$-stable in $H^r$ in the sense that for two functions $w \in H^{r+1}$ and $v \in H^r$ we have that
$$
\Vert \Phi^\tau(w)- \Phi^\tau(v) \Vert_r \leq e^{\tau {\varepsilon} B} \Vert w-v\Vert_r, \qquad B = B(\Vert w\Vert_{r+1}, \Vert v \Vert_r),
$$
where the constant $B$ depends on the $H^{r+1}$ norm of $w$ and $H^r$ norm of $v$.
\end{lemma}
\begin{proof} Arguing as in the stability proof for the first-order scheme, it remains to prove the stability estimate only for the last three terms in \eqref{schema2}. The latter holds true thanks to the properties \eqref{filter} of the filter functions $\Psi_{m_Q}$ and $\Psi_{D_L}$.
\end{proof}
\subsubsection{Global error estimate}
\begin{theorem} \label{thm:glob2} Let $\beta_0 = \text{min}(1,\beta_Q)$,
$\beta_1 = \text{min}(2,\beta_Q+\beta_L)$ and $\beta_2 = \text{min}(6,\beta_Q+2 \beta_L)$. Then there exists a $\tau_0>0$ such that for all $0<\tau \leq \tau_0$ the following global error estimate holds for $u^n$ defined in \eqref{schema2}:
\begin{align*}
\Vert u(t_n)- u^n \Vert_{L^2} & \leq
\tau^2 t_n \varepsilon^2 \left[ c \left(\sup_{0 \leq t \leq t_{n}} \Vert u\Vert_{r+5}\right)+ \varepsilon^{1-\beta_0} c \left(\sup_{0 \leq t \leq t_{n}} \Vert u\Vert_{r+7-2\beta_0}\right)\right.\\
& + c_L \left. \varepsilon^{1-\beta_1/2} c \left(\sup_{0 \leq t \leq t_{n}} \Vert u\Vert_{r+7-\beta_1}\right) + c_L \varepsilon^{3-\beta_2/2} c \left(\sup_{0 \leq t \leq t_{n}} \Vert u\Vert_{r+11-\beta_2}\right)\right]
e^{c_1 t_n {\varepsilon}} ,
\end{align*}
where $c_1$ depends on the $H^2$ norm of $u$.
\end{theorem}
\begin{proof}
The assertion follows by the local error bound given in Lemma \ref{thm:loc2} together with the stability estimate in Lemma \ref{thm:stab2} via a Lady Windermere's fan argument, see, e.g., \cite{HLW}.\end{proof}
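For the reader's convenience, the fan argument can be summarised schematically as follows (the constant $\delta$, collecting the local error constants of Lemma \ref{thm:loc2}, is our shorthand and not the paper's notation): combining the local error bound with the stability estimate gives
\begin{align*}
\Vert u(t_{n+1}) - u^{n+1} \Vert_r
 &\leq \Vert \varphi^\tau(u(t_n)) - \Phi^\tau(u(t_n)) \Vert_r
      + \Vert \Phi^\tau(u(t_n)) - \Phi^\tau(u^n) \Vert_r \\
 &\leq \delta \tau^3 + e^{\tau \varepsilon B} \Vert u(t_n) - u^n \Vert_r ,
\end{align*}
which, iterated over the $n$ steps, yields
$
\Vert u(t_n) - u^n \Vert_r \leq \delta \tau^3 \sum_{k=0}^{n-1} e^{k \tau \varepsilon B}
\leq \delta \tau^2 t_n\, e^{t_n \varepsilon B}.
$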
\section{Numerical experiments}\label{sec:num}
We underline our theoretical results with numerical experiments. As a model problem we take the BBM equation \eqref{bbm} and solve it with our first- and second-order long-wave limit preserving schemes \eqref{scheme} and \eqref{schema2}, respectively, for various values of $\varepsilon$ on long time scales, i.e., up to $T= \frac{1}{\varepsilon}$. For the spatial discretisation we employ a standard pseudospectral method. The numerical findings confirm the convergence orders stated in Theorem \ref{thm:glob1} and Theorem \ref{thm:glob2}, respectively.
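For readers wishing to reproduce such a study, the following Python sketch illustrates the generic structure of the convergence experiment (measuring the discrete $L^2$ error at $T = 1/\varepsilon$ against a reference solution computed with a much finer step); the step function \texttt{step(u, tau)}, standing in for one step of \eqref{scheme} or \eqref{schema2}, and all parameter choices are our assumptions rather than the actual implementation behind Figure \ref{fig}.
\begin{verbatim}
import numpy as np

def convergence_study(step, u0, eps, taus):
    # Hypothetical harness: discrete L^2 error at T = 1/eps per step size.
    # `step(u, tau)` is a placeholder for one step of the scheme under test.
    T = 1.0 / eps
    # Reference solution, assumed resolved, computed with a much finer step.
    tau_ref = min(taus) / 32
    u_ref = u0.copy()
    for _ in range(int(round(T / tau_ref))):
        u_ref = step(u_ref, tau_ref)
    errors = []
    for tau in taus:
        u = u0.copy()
        for _ in range(int(round(T / tau))):
            u = step(u, tau)
        # Root-mean-square error: the discrete L^2 norm up to grid scaling.
        errors.append(np.sqrt(np.mean((u - u_ref) ** 2)))
    return np.array(errors)
\end{verbatim}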
\begin{figure}[h!]
\centering
\includegraphics[width=0.471\linewidth]{first.eps}
\hfill
\includegraphics[width=0.471\linewidth]{second.eps}
\caption{Convergence plot ($L^2$ error versus step size) of the first-order LWP scheme \eqref{scheme} (left) and the second-order LWP scheme \eqref{schema2} (right) on long time scales $t= \frac{1}{\varepsilon}$ for various values of $\varepsilon$. The black solid line corresponds to order one (left) and two (right), respectively. }\label{fig}
\end{figure}
\subsection*{Acknowledgements}
{\small
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 850941).
}
\section{Introduction} \label{sec:introduction}
High-throughput DNA sequencing has enabled culture-independent profiling of
complex microbial communities. A microbiota is defined as the assemblage of
micro-organisms present in a defined environment \cite{marchesi2015vocabulary}.
Over the past decade, interest in the role microbiota play in both health and
disease across the human body has grown rapidly. Microbes directly and
indirectly interact with human physiology through a variety of mechanisms,
including protecting against pathogenic infections, contributing to normal
metabolic functions, and by training the immune system \cite{shreiner2015gut}.
Alterations to the gut microbiota have been implicated in the pathogenesis of a
number of diseases, including Inflammatory Bowel Disease, diabetes, obesity, and
depression \cite{gevers2014treatment, hartstra2015insights, zheng2016gut}.
Typically a microbiota profile is created by collecting an environmental sample
(e.g.\ stool to study the human gut microbiota), extracting the DNA present, and
sequencing the 16S ribosomal RNA (16S rRNA) marker gene. Put simply, the 16S
rRNA marker gene is similar to a barcode: different bacterial species have
different marker gene sequences. Marker gene sequences can be matched to
reference databases to discover the taxonomy of a given sequence (e.g. \texttt{ACTG...} $\mapsto$ \textit{E. coli}). A
high-throughput DNA sequencer will create millions of discrete DNA sequence
reads, often 200--400 nucleotides long. Microbiota profiles (also
known as microbiome census data) are extremely challenging to analyse: they are
highly dimensional, sparse, noisy, and compositional. These properties violate
many assumptions of standard models that have been widely applied to analyse
microbiota profiles \cite{mcmurdie2014waste} (described further in
Section~\ref{sec:background}). A broad range of complex normalisation algorithms
(data transformations) has been developed and applied to resolve the
problems inherent to such data \cite{weiss2017normalization}. However,
identifying which normalisation algorithm is optimal remains an open question in
the microbiome research community. In contrast, data-driven computational
intelligence paradigms have minimal or weak prior assumptions and offer powerful
tools for extracting knowledge from problematic data such as microbiota profiles
\cite{lahat2015multimodal}. As such, their application to microbiome census data may offer the potential for more robust analysis.
Among computational intelligence/AI approaches, Rough Set Theory (RST) is a topic of great interest amongst the research
community and has been applied to a variety of domains for the purpose of data
analysis \cite{pawlak1998rough}, including many areas of computational biology
\cite{petit2014rough}. Many microbiome
experiments aim to identify correlations between the characterised microbial
community and disease. Only a small part of this process is concerned with
evaluating predictive power: the process of determining elements of a microbial
community that have predictive power is described as biological marker
(biomarker) analysis by molecular biologists. RST can elegantly address both
characterisation and prediction. Firstly, by identifying a minimal knowledge
representation (a reduct), redundant or irrelevant bacterial species can be
discarded, simplifying analysis. Additionally, transparent rules can be induced
to describe the minimal knowledge representation, enabling knowledge discovery.
If additional data are available the rules can be used to evaluate the
predictive power of putative biomarkers. A combination of both approaches allows
domain experts to interpret a model and gain an understanding of the underlying
biological processes involved. Additionally, RST does not require parameters to
be set, which eliminates a source of potential human bias.
This paper presents an approach to characterise microbial communities using RST
to simultaneously transform data into knowledge and to resolve an open research
question regarding normalising microbiome census data (described further in
Section~\ref{sec:background}). This approach is first demonstrated on a small
benchmark dataset to characterise the microbial communities present across
different human body sites. This serves as a simple demonstration of RST, because
it is well known that microbial communities significantly differ across the
human body. The application of RST is then expanded to cover a microbiome
experiment that investigates the link between microbiomes and depression (see
Section~\ref{sec:background}) to enable knowledge discovery. It is important to
note that this paper focuses on characterisation and does not address prediction
from the induced rules (i.e.\ the rules are descriptive and reveal underlying
patterns in the data). The rationale for this is that characterisation enables
the transformation of data into knowledge. From this knowledge insights about
microbial communities can be gained and future experiments planned. The quality
of characterisation can be measured by a variety of measures, discussed in
Section~\ref{sec:background}. The application of RST enables microbial
ecologists to understand better \textit{what} is happening in a microbial community ---
by removing the consideration of superfluous bacterial species and the requirement for destructive
data transformations --- and \textit{why} bacterial species are associated with
phenotypes, via the analysis of transparent induced rules.
The remainder of this paper is structured as follows:
Section~\ref{sec:background} describes some problematic properties of microbiome
census data that make analysis challenging, the links between the microbiome and depression, and core RST
concepts applied throughout this work. Section~\ref{sec:methods} introduces the
rough set microbiota model and describes the datasets used to benchmark the
model. An evaluation of the rough microbiota model that demonstrates the potential of applying RST to
microbiota profiles is provided in Section~\ref{sec:results}. Finally, conclusions and an outline for future work are presented in Section~\ref{sec:conclusion}.
\section{Background} \label{sec:background}
Machine learning and computational intelligence approaches are often applied to
microbiome census data for the purpose of predicting a categorical or numeric
variable from a set of input data (e.g.\ predicting disease). However,
classification and regression are only a subset of data mining and knowledge
discovery tasks. Popular tasks for data mining and knowledge discovery include
\cite{larose2014discovering}:
\begin{itemize}
\item Describing patterns and trends in data;
\item Approximating a categorical target variable from a larger data set (classification);
\item Approximating a numeric target variable from a larger data set (regression);
\item Predicting future events (e.g.\ the share price of a company in 3
months);
\item Clustering observations into similar groups;
\item Identifying association rules (finding features that co-occur).
\end{itemize}
\noindent Describing patterns and trends in data is the most common aim of
microbiome experiments. Many microbiome experiments aim to identify correlations
between the characterised microbial community and disease. The process of
determining elements of a microbial community that have predictive power is
described as biological marker (biomarker) analysis by molecular biologists. When attempting to describe patterns and trends in data, models that offer transparency provide significant benefits \cite{larose2014discovering}; RST provides a suite of tools that allows the transparent description of data, which could help to fulfil an important objective for molecular ecologists.
\subsection{Why RST?: Problematic properties of microbiota profiles}
Microbiome census data produced by high-throughput sequencing are extremely
challenging to analyse: they are highly dimensional
\cite{statnikov2013comprehensive}, noisy \cite{callahan2016dada2}, variably
sparse across different environments \cite{paulson2013differential},
compositional \cite{gloor2016compositional}, and have an uneven mean-to-variance
relationship \cite{mcmurdie2014waste}. Most of these properties will violate the
assumptions of standard analysis models, such as normality or homoscedasticity. For example, investigating if certain bacterial species are more or less abundant in certain environments is difficult because varying sparsity across
different environments can violate probability distribution assumptions. As microbiome census data are not normally distributed and are heteroscedastic it is not appropriate to model differential abundance with popular approaches such as $t$-tests. Microbial ecologists interested in examining bacterial co-occurrence relationships will have problems computing correlation coefficients \cite{friedman2012inferring}. A thorough review discussing these problems is available
\cite{weiss2017normalization}. After initial quality control and clustering
preprocessing steps \cite{kozich2013development} (or alternatively denoising
\cite{callahan2016bioconductor}) microbial community sequencing data are
typically organised into large matrices where rows represent samples and columns
represent counts of clustered sequence reads that constitute different types of
bacteria. The number of discrete sequence reads per sample (the sum of each row)
can differ by orders of magnitude. This uneven sampling effort does not reflect
true biological variation and is an artefact of the sequencing process. The
uneven sampling effort will bias the estimates of bacterial abundance and should
be normalised to allow fair comparison between samples. Normalisation procedures
can also mitigate the other types of bias present in microbial community
sequencing data described above. However, recommended normalisation procedures
that aim to mitigate such complex problems are often difficult for
microbiologists to incorporate (e.g.\ applying a variance stabilising
transformation based on Gamma-Poisson mixture models \cite{love2014moderated})
and can destroy the semantics of the original data. A widely used normalisation
strategy is to convert counts into relative abundances per sample (simple
proportions). However, as relative abundances are constrained by an artificial
limit (1) they represent compositional data. Compositional data have an
arbitrary or non-informative sum \cite{aitchison2005compositional}.
Additionally, relative abundance data can be skewed by the presence of highly
abundant species, and the transformation does not resolve important problems
with the data such as heteroscedasticity \cite{mcmurdie2014waste}. However, it
is rare for microbial communities in the human body to be dominated by a few
species, and relative abundance data are easy for microbial ecologists to
incorporate and analyse. Crucially, the problematic properties of relative
abundance data (heteroscedasticity and compositionality) do not violate the
assumptions of RST, which is our motivation for using this type of
normalisation.
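As a concrete illustration of this normalisation, converting a raw count matrix into relative abundances amounts to dividing each row (sample) by its library size; the following short Python sketch, with a hypothetical count matrix, shows the transformation assumed throughout this paper.
\begin{verbatim}
import numpy as np

# Hypothetical count matrix: rows are samples, columns are bacterial taxa.
counts = np.array([[120,   0,  30],
                   [  5, 800,  15],
                   [ 60,  40, 900]], dtype=float)

# Divide each row by its library size (the total reads in that sample).
library_sizes = counts.sum(axis=1, keepdims=True)
relative_abundance = counts / library_sizes

# Each row now sums to 1, i.e. the data are compositional.
assert np.allclose(relative_abundance.sum(axis=1), 1.0)
\end{verbatim}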
The only assumptions required in RST are that each object has an associated set of
attributes used to describe it, and that the data are a true and
accurate reflection of reality \cite{jensen2008computational}. Thus, the
application of RST resolves the problematic aspects of microbiome census data
described above. Extensive steps were taken to denoise the microbiome census
data, described further in Section~\ref{sec:methods}, to ensure that the accuracy assumption
is not violated. Additionally, RST makes redundant the requirement for more complex
normalisation algorithms; the semantics of easily intuited relative abundance
microbiome census data are maintained --- aiding interpretation by domain
experts and providing a possible solution to an open question in the microbiome
research community regarding choice of an optimal normalisation algorithm (which
can differ depending on data and analysis task \cite{weiss2017normalization}).
As far as can be ascertained, no previous attempts to analyse
microbiome census data using RST have been described in the literature.
However, aspects of RST have been implemented for bioinformatics applications in
the wider field of metagenomics. The metagenome is defined as the collection of
genomes and genes from the members of a microbiota
\cite{marchesi2015vocabulary}. RST has been applied to remove superfluous
$K$-mers and to improve DNA fragment classification compared with standard
bioinformatics tools \cite{jian2015reduction}. A rough reduction method based on
Particle Swarm Optimisation has also been applied to the same problem
\cite{jian2016rough}. RST has also been used to predict the presence of operons
in metagenomic data. A decision tree classifier based on the Variable Precision
Rough Set Model (VPRSM) was applied to genomic data from \textit{Escherichia
coli} to identify if a gene belonged to an operon
\cite{zaidi2016computational}. The VPRSM had an accuracy of 89.4\% using five
features: maximum distance, minimum distance, direction, cluster of orthologous
groups, and gene order conservation. The use of a decision tree meant that the
decisions of the classifier were easy to interpret and could be validated by
domain experts.
\subsection{The microbiome and depression}
Depression is a mental disorder that causes a persistent low mood, low self
esteem, and chronic anhedonia. Depression is currently the leading cause of
global disease and affects over 300 million people worldwide
\cite{world2017depression}. A growing body of evidence supports the hypothesis that the gut microbiota, the complex community of microorganisms that inhabit the human
gastrointestinal tract, play a key role in the aetiology of
depression via regulation of the central nervous system in a complex network
known as the microbiome-gut-brain axis; an in-depth explanation of this
phenomenon is outside the scope of this paper, but comprehensive reviews are
available on the subject \cite{cryan2012mind, foster2013gut}. Despite work in
animal models that links the gut microbiota and depression
\cite{park2013altered}, limited work has been done using a human cohort: faecal
samples isolated from Norwegian \cite{naseribafrouei2014correlation} and Chinese
\cite{jiang2015altered, zheng2016gut} cohorts have identified some alterations
in a depressed cohort. Our rationale for applying RST to a publicly available
depression microbiome dataset, described further in Section~\ref{sec:methods},
is to enable the extraction of new knowledge from the data as previous work has
relied on standard analysis techniques.
\subsection{Core RST concepts}
A microbiota profile can be represented by an $N \times M$ decision table. The
rows of a decision table correspond to the universe of discourse, $X$
\cite{jensen2008computational}:
\begin{equation}
X = \{x_1, x_2, \ldots, x_N\}
\end{equation}
\noindent The columns of a decision table correspond to the set of features $A$
(the set of microbes) \cite{jensen2008computational}:
\begin{equation}
A = \{a_1, a_2, \ldots, a_M\}
\end{equation}
\noindent Decision table $DT$ consists of condition attributes
(input features, the different microbial species) and decision attributes (class
labels, e.g.\ diseased or healthy; $DT = C \cup D$). Each attribute has an
associated value set, which represents the abundance of the microbial species
\cite{petit2014rough}:
\begin{equation}
V_a = \{v^a_1, v^a_2, \ldots, v^a_p\}
\end{equation}
\noindent where $a \in A$. The value set must be discrete (continuous variables
must be discretised). Although microbiome census data are
discrete counts of sequences, they are typically converted into continuous
variables by a normalisation process to mitigate uneven library size bias. A thorough discussion of bias in microbiome census data and normalisation approaches to counteract this is available \cite{weiss2017normalization}.
Therefore relative abundance microbiome census data, which are used as input data
throughout this paper, must first be discretised. The maximal discernibility
heuristic was used to discretise the microbiome census data throughout this
paper \cite{bazan2000rough}. Any condition or decision attribute subset $P \subseteq C\
\text{or}\ D$ can induce a partition in $X$ \cite{petit2014rough}:
\begin{equation}
X \xrightarrow{P} X(P) = \{ X_1^P, \ldots, X_q^P\}
\end{equation}
\noindent where $X(P)$ is the partition of $X$ induced by $P$ and the $X_l^P$ are its blocks. The subsets
\cite{petit2014rough}:
\begin{equation}
X = X_1^P \cup \ldots \cup X_q^P
\end{equation}
\noindent correspond to the set of equivalence classes, called indiscernibility
classes in RST. \noindent Discernibility is the core concept of RST: if $(x, y)
\in \text{IND}(P)$ (where $\text{IND}(P)$ is the indiscernibility relation
induced by attribute subset $P$) then $x$ and $y$ are indiscernible by
attributes from $P$. For example, if two bacterial species have the same
abundance in both healthy and sick subjects, then using only the abundance of
the bacterial species it is impossible to discern between the two subjects. In
RST, a set is approximated by two sets known as the lower and upper
approximations \cite{jensen2008computational}:
\begin{align}
\underline{P}S = \{x:[x]_P \subseteq S \} \\
\bar{P}S = \{x: [x]_P \cap S \neq \emptyset \}
\end{align}
\noindent where $S \subseteq X$ and $[x]_P$ are the equivalence classes of the
$P$-indiscernibility relation. The tuple $\langle\underline{P}S,
\bar{P}S\rangle$ is known as a rough set. $P$ and $Q$ are sets of attributes
inducing equivalence relations over $X$. The region between the upper and lower
approximation sets is called the boundary region. The boundary region represents
the set of objects that can possibly be predicted to be from a specific decision
class (non-deterministic; see Figure~\ref{fig:rs})
\cite{jensen2008computational}:
\begin{align}
\text{BND}_P(Q) = \bigcup_{S \in X/Q} \bar{P}S - \bigcup_{S \in X/Q} \underline{P}S
\end{align}
\noindent The positive region, in which objects can be predicted to belong to a
decision class with certainty, is given by:
\begin{align}
\text{POS}_P(Q) = \bigcup_{S \in X/Q} \underline{P}S
\end{align}
\noindent The negative region represents the set of objects that cannot be
predicted to a decision class:
\begin{align}
\text{NEG}_P(Q) = X - \bigcup_{S \in X / Q} \bar{P}S
\end{align}
\noindent Attributes that cannot be removed without changing the partitioning of
objects amongst the indiscernibility relations are indispensable. A minimal set
of indispensable condition attributes is known as a reduct.
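To make these definitions concrete, the Python sketch below computes the indiscernibility classes induced by a set of attributes and the lower and upper approximations of a decision class on a small hypothetical discretised decision table; it is a didactic illustration, not the \texttt{RoughSets} or \texttt{rseslib} implementation used later in this paper.
\begin{verbatim}
from collections import defaultdict

# Hypothetical discretised decision table: one row per sample.
table = [
    {"taxon_a": "low",  "taxon_b": "high", "label": "sick"},
    {"taxon_a": "low",  "taxon_b": "high", "label": "sick"},
    {"taxon_a": "high", "taxon_b": "low",  "label": "healthy"},
    {"taxon_a": "low",  "taxon_b": "low",  "label": "sick"},
    {"taxon_a": "high", "taxon_b": "low",  "label": "sick"},
]

def indiscernibility_classes(table, attributes):
    # Group object indices by their attribute-value tuples.
    classes = defaultdict(set)
    for i, row in enumerate(table):
        classes[tuple(row[a] for a in attributes)].add(i)
    return list(classes.values())

def approximations(table, attributes, label):
    # Lower/upper approximation of the set of objects with `label`.
    target = {i for i, row in enumerate(table) if row["label"] == label}
    lower, upper = set(), set()
    for eq in indiscernibility_classes(table, attributes):
        if eq <= target:   # equivalence class entirely inside the target
            lower |= eq
        if eq & target:    # equivalence class intersects the target
            upper |= eq
    return lower, upper

lower, upper = approximations(table, ["taxon_a", "taxon_b"], "sick")
print(lower, upper)  # the boundary region is upper - lower
\end{verbatim}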
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{fig/roughset}
\caption{Rough set example. The universe of discourse is partitioned into 9
indiscernibility classes by a set of attributes. The blue line represents
the set being approximated (e.g.\ sick subjects). The green section is the
lower approximation, and the red sections are the upper approximations of
the rough set. In the complement of the upper approximation (grey) it is
certain that no objects in the rough set will be present (e.g.\ a healthy
subject could be in the grey section)}
\label{fig:rs}
\end{figure}
\section{Rough microbiome analysis} \label{sec:methods}
\begin{figure*}[!t]
\centering
\includegraphics[width=1\linewidth]{fig/flowchart}
\caption{Overview of rough microbiota profile analysis}
\label{fig:overview}
\end{figure*}
This work uses two datasets:
\begin{enumerate}
\item a publicly available human body site dataset \cite{caporaso2011global},
which contains environmental samples gathered from across the human body
(tongue, skin, or faeces);
\item a publicly available gut microbiome depression dataset
\cite{jiang2015altered}, which contains faecal samples gathered from depressed and control subjects.
\end{enumerate}
The human body site dataset serves to demonstrate the RST approach before it is
applied to more complex data for knowledge discovery. The human body site
dataset contains 3 samples for each body site: human skin, human tongue, and
human faeces (9 samples total). This dataset forms part of the larger ``Global
Patterns'' dataset --- in microbiome research the Global Patterns dataset is
widely used to benchmark new algorithms or tools \cite{mcmurdie2014waste,
weiss2017normalization}, which is the rationale for applying RST to these
data. The human body site dataset was analysed using the \texttt{RoughSets}
package \cite{riza2014implementing} implemented in \texttt{R}.
The public gut dataset consists of a cohort containing 59 faecal samples (30 control, 29 depressed). Briefly, the bacterial DNA present in the samples was extracted and sequenced with a 16S marker gene survey. This process generated millions of DNA sequences, 200--400 nucleotides in length. Before this raw sequence data can be input to the rough set microbiome characterisation process, it must first be processed with bioinformatics algorithms to generate an accurate survey of bacterial species. The gut data were denoised according to standard operating protocols using an \texttt{R} bioinformatics pipeline \cite{callahan2016bioconductor} to ensure that the truth assumption of RST was met. The human body site dataset was input to the rough set microbiome characterisation process in its preprocessed form, which is often used to simplify analysis. The data were discretised with
the maximal discernibility discretisation algorithm implemented in the
\texttt{Java} \texttt{rseslib} library \cite{bazan2000rses}. Due to the scale of the depression datasets (both contained 3000 -- 4000 features) the rough set microbiome characterisation was implemented with \texttt{rseslib}, as \texttt{R} (the language in which the \texttt{RoughSets} package was implemented)
suffers from poor computational performance compared with other languages such as
\texttt{Java} \cite{wickham2014advanced}. To aid analysis, discrete data in the
depression datasets were labelled low and high if the bacterial species
abundance had two cuts, and low, medium, and high if the bacterial species
abundance had three cuts. We evaluated the ability of RST to model microbiota
profiles by testing classification performance on the following tasks:
\begin{enumerate}
\item Classify microbial communities from different areas of the human body
(tongue, skin, or faeces);
\item Classify depression status from the gut microbiome.
\end{enumerate}
We began by generating a single reduct for the first decision table (the
demonstrative dataset; see Figure~\ref{fig:overview}). The rationale for
generating a single reduct is that the first decision table serves as a
demonstration of RST applied to a simple problem, and a single reduct can be
used to provide a simple description of microbial communities across the human
body. For the depression datasets, all local reducts were computed for each
decision table using the \texttt{rseslib} \texttt{Java} library. To determine
the classification performance of the partition in $X$ induced by the set of
reduct attributes $A_k$ two measures were used \cite{petit2014rough}:
\begin{align}
\label{eq:acc}
\text{Accuracy} = \frac{\sum_{L=1}^Q \text{Card}(\underline{A}_k X^{A_k}_L)}{\sum_{L=1}^Q \text{Card}(\bar{A}_k X^{A_k}_L)} \\
\text{Quality} = \frac{\sum_{L=1}^Q \text{Card}(\underline{A}_k X^{A_k}_L)}{\text{Card}(X)}
\label{eq:qual}
\end{align}
\noindent where $\text{Card}$ denotes cardinality, i.e.\ the number of
elements in a set, and the sums run over the $Q$ pairs of upper- ($\bar{A}_kX_L^{A_k}$)
and lower-approximation ($\underline{A}_kX_L^{A_k}$) sets. Accuracy
represents the ratio of the size of all lower-approximation sets to the size of
all upper-approximation sets ($0 \leq \text{Accuracy}[X(A_k)]\leq 1$). If the
family of lower approximation sets is an empty set (i.e. no objects can be said
to be certainly predicted) then accuracy is zero. Quality represents the ratio
of all objects in the family of lower approximation sets to the total number of
objects in the universe of discourse ($0 \leq \text{Quality}[X(A_k)]\leq 1$). It
is important to note that classification accuracy and quality are not tested on
independent validation data due to insufficient data: these metrics are only
capable of explaining how well a rough set is describing a microbiota profile.
\texttt{IF-THEN} decision rules were generated from the indiscernibility classes
defined by the reduct attributes using the \texttt{rseslib} library. The
descriptive strength of the rules was evaluated by measuring the support each
rule has. Once species of interest were identified by this procedure, a literature review was conducted to identify associations between the generated rules and biological phenomena.
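As a hedged illustration, the accuracy and quality measures of Equations \eqref{eq:acc} and \eqref{eq:qual} reduce to a few lines of Python once the approximation sets are available; the example input below is hypothetical.
\begin{verbatim}
def rough_metrics(approximation_pairs, n_objects):
    # Accuracy and quality of classification from (lower, upper) set
    # pairs, one pair per decision class (cf. the Accuracy and Quality
    # equations above).
    lower_total = sum(len(lo) for lo, up in approximation_pairs)
    upper_total = sum(len(up) for lo, up in approximation_pairs)
    accuracy = lower_total / upper_total if upper_total else 0.0
    quality = lower_total / n_objects
    return accuracy, quality

# Hypothetical example: two decision classes over 9 objects in which
# every object lies in some lower approximation (a crisp partition).
pairs = [({0, 1, 2}, {0, 1, 2}), ({3, 4, 5, 6, 7, 8}, {3, 4, 5, 6, 7, 8})]
print(rough_metrics(pairs, 9))  # -> (1.0, 1.0)
\end{verbatim}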
\section{Results} \label{sec:results}
\subsection{Human body site data}
\begin{table}[t]
\centering
\caption{Classification metrics}
\label{tab:metrics}
\begin{tabular}{@{}lll@{}}
\toprule
Classification task & Accuracy & Quality \\ \midrule
Human body site & 1 & 1 \\
Gut microbiome & 1 & 1 \\ \bottomrule
\end{tabular}
\end{table}
A decision table was created from the human body site data which contained 3878 conditional attributes and 9 samples (3 skin samples, 3 faecal samples, and 3 tongue samples). Each conditional attribute defines the abundance of a bacterial Operational Taxonomic Unit, which approximates a bacterial taxon (a group, e.g.\ a species). A single reduct was generated for this first characterisation task as it serves as a demonstration before the RST approach is expanded to the depression dataset for knowledge discovery. A single feature was present in the reduct: the bacterial species
\textit{Propionibacterium acnes}. The characterisation ability of the reduct
rough set was tested using the accuracy and quality measures described in
Equations \ref{eq:acc} and \ref{eq:qual} (see Table~\ref{tab:metrics}). The
lower approximation set contained all of the samples for each sample type so the
accuracy and quality of classification was 1. This demonstrates that RST is capable of excellently discerning between samples collected from different sites across the human body. The next step of characterisation is to describe the alterations identified by RST using \texttt{IF-THEN} rules and linguistic variables.
Rules were generated from the reduct. It is important
to note that the generated rules are descriptive and their predictive power
is not assessed: although prediction can be valuable, it forms only one
aspect of a microbiome experiment. Patterns and trends in data can be described
by generating and analysing a set of \texttt{IF-THEN} rules. However, the
quality of characterisation can be measured by the strength of the generated
rules, which is defined as the number of instances in the dataset that are
concordant with each rule. The human body site characterisation task
generated three rules (with 100\% strength) for three classes regarding the
bacterial species \textit{P. acnes}:
\begin{align}
\label{eq:third-start} \tag{Rule 1}
\text{\texttt{IF} \textit{P. acnes }} = 0 \text{\texttt{ THEN} Faeces} \\
\tag{Rule 2}
\text{\texttt{IF} \textit{P. acnes }} \in [0,\num{4.91e-05}] \text{\texttt{ THEN} Tongue} \\
\label{eq:third-end}
\tag{Rule 3}
\text{\texttt{IF} \textit{P. acnes }} \in [\num{4.91e-05},1] \text{\texttt{ THEN} Skin}
\end{align}
\noindent The generated rules are supported by compelling biological evidence.
Relating the output of the RST characterisation process to biological phenomena
is simple because the semantics of the original data were not destroyed by
complex normalisation approaches. Typically \textit{P. acnes} is a commensal
member of the skin microbiome, but it can act as a pro-inflammatory
opportunistic pathogen, causing acne \cite{perry2011propionibacterium}. Its
pattern of abundance matches descriptions in the literature: most prevalent on
skin, but capable of colonising other areas of the body including the tongue and
large intestine \cite{perry2011propionibacterium}. The absence of \textit{P.
acnes} in stool samples could be related to the sensitivity of the sequencing
process or the low sample size of the cohort (\textit{P. acnes} is not a major
member of the gut microbiome, the most complex of all human microbiomes).
Alternatively, as faeces are not a perfect proxy for the large intestine,
\textit{P. acnes} may be present in the large intestine but be undetectable in
stool samples. The RST characterisation of human body sites has therefore identified a
biologically plausible process that represents a key change across habitats. We
will now apply this approach to the more complex depression dataset to enable
knowledge discovery.
\subsection{Gut data}
A decision table was created from the gut dataset using the approach described in Section~\ref{sec:methods}. The gut decision table had 2900 conditional attributes and 59 samples
(30 control, 29 depressed). Each conditional attribute defines the abundance of a denoised amplicon sequence variant, which approximates the true DNA sequence present in the samples. The denoising paradigm offers a range of benefits --- including sampling accuracy --- compared with the operational taxonomic unit approach, which is our motivation for using it. A thorough explanation of these benefits is outside the scope of this paper; reviews are available \cite{callahan2016dada2}. Note that the denoising approach did not exist when the human body site data were first created. All local reducts were computed for the gut decision table. The reducts contained 12 features that covered the bacterial genera \textit{Bacteroides}, \textit{Prevotella}, \textit{Anaerostipes},
\textit{Phascolarctobacterium} and \textit{Odoribacter}. One of the features
could not be mapped to a specific genus, and represented the bacterial family
\textit{Ruminococcaceae}. The lower approximation set contained all of the samples for both classes
(depression and control), so the accuracy and quality of characterisation were both 1
(creating a crisp set), which demonstrates that the rough set microbiome characterisation can discern perfectly between samples collected from depressed and control subjects.
The next step of characterisation is to describe
the alterations identified by RST using \texttt{IF-THEN} rules and linguistic
variables. More complex rules were generated for the gut microbiome characterisation task (see
Table~\ref{tab:gut-diagram}). The abundance of bacterial taxa was defined as being low or high to aid
comprehension. In the gut microbiome three rules were induced to characterise
control samples, and four rules to characterise depressed samples. Control
samples are characterised by low abundance of the bacterial genera, whilst
depressed samples are characterised by a mixture of high- and low-abundance
bacterial genera. There are significant underpinning biological justifications for the four
rules that characterise the depressed gut microbiome.
\textit{Phascolarctobacterium} is a bacterial genus that is abundant in the
human gut and produces short chain fatty acids, which are associated with
modifying host metabolism and mood \cite{cryan2012mind}. Additionally,
\textit{Phascolarctobacterium} has been previously positively correlated with
positive mood in healthy adults \cite{li2016gut}. The second and third rules for
depression contain multiple bacteria from the \textit{Bacteroides} genus;
\textit{Bacteroides} are a major mutualistic member of the normal human
intestinal microbiome; the described abundance patterns indicate that a type of gut
dysbiosis has occurred, which has been frequently associated with various diseases \cite{rogers2016gut}.
It is useful to compare our results for the gut microbiome with the original
analysis that used a traditional (i.e. non-RST) methodology
\cite{jiang2015altered}. The low levels of \textit{Ruminococcaceae} in rules 2
and 3 are concordant with the traditional analysis. The low abundance of
\textit{Alistipes} in rule 4 is not consistent with the traditional analysis.
However, the low abundance is combined with a high abundance of
\textit{Odoribacter}, which was also not mentioned in the original analysis.
\textit{Odoribacter} are typically opportunistic pathogens, which can activate
inflammatory pathways associated with the microbiome-gut-brain axis
\cite{hardham2008transfer}.
\input{fig/gut/gut-table}
\begin{table}[t]
\centering
\caption{Support of the rules induced for the gut microbiome characterisation.}
\label{tab:support}
\begin{tabular}{@{}lll@{}}
\toprule
\multicolumn{3}{c}{Gut microbiome} \\ \cmidrule{1-3}
Decision & Rule \# & Support \\ \midrule
Control & 1 & 86.7\% \\
& 2 & 83.3\% \\
& 3 & 80.0\% \\ \cmidrule{2-3}
Depressed & 1 & 31.0\% \\
& 2 & 27.5\% \\
& 3 & 27.5\% \\
& 4 & 27.5\% \\ \bottomrule
\end{tabular}
\end{table}
\section{Discussion}
The accuracy and quality of characterisation were 1 for all decision tables
(i.e.\ all objects were in the lower approximation for all decision tables).
Therefore the rough sets constructed from each reduct were able to perfectly
discern samples by class (e.g.\ body site location or depression status) from
the microbiota profiles. The ability to describe clear differences between samples in a transparent and simple way is invaluable for biologists, and applying RST to larger microbiome datasets could help to confirm associations between specific microbial species and disease states.
The use of RST removes the need for complex
normalisation algorithms. Typically some kind of normalisation is required while
generating microbiota profiles \cite{weiss2017normalization} to allow fair
comparison between samples and to mitigate problematic properties of
high-throughput sequencing data, but the choice of optimal normalisation
algorithm remains an open question. This suggests that RST could be a valuable
tool for modelling human microbiomes. Microbiomes across the human body have
been implicated in the pathophysiology and aetiology of a variety of diseases;
the microbiome plays such an important role in disease and health that it has
been dubbed ``the second genome'' \cite{grice2012human}. Removing the
requirement for more complex normalisation techniques improves the ability of
biological domain experts to comprehend the output of the characterisation, and
also reduces the computational burden of creating microbiota profiles.
In the gut microbiome the support for control rules (83.3\% on average) was higher
than the support for depression-related rules (28.4\% on average). The lower support for
depressed rules is in line with current theories regarding the
microbiome-gut-brain axis: it is thought the gut in depressed subjects is in a
state of dysbiosis. Dysbiosis describes microbial imbalance, which can vary
significantly across different subjects. Additionally, microbiome composition
can differ significantly across individuals with dysbiosis, while the overall
gene content may be the same (i.e.\ the functions of the bacteria)
\cite{dash2015gut}.
\section{Conclusion and limitations} \label{sec:conclusion}
In this work we present the first application of RST to characterise
microbiota profiles from a standard benchmark dataset. We then extend the
application to a gut microbiome dataset to enable
knowledge discovery concerning microbiomes in depressed subjects. We find that
RST characterises the gut microbiomes of
depressed subjects excellently and identifies previously undescribed alterations to
them. The minimal prior assumptions of
RST also offer a potential solution to an unresolved question in the microbiome
community regarding identifying which normalisation procedure is optimal for
microbiota profiles. In addition, the application of RST tools such as reducts
and rule induction allows domain experts to understand RST models without
requiring an understanding of the mathematics involved and thus helps to shed
light on the underlying biological processes present.
Rule-based systems suffer from the combinatorial rule explosion problem. As the
number of features being considered increases, the number of rules increases
exponentially \cite{combs1998combinatorial}. This drastically reduces the
performance and transparency of rule-based systems. The rough set theory
applications to the microbiome census data in this paper have generated small
reducts, with fewer than a dozen features, which avoids this problem. However, it
is important to note that some applications of the rough set characterisation approach may
result in a rule explosion if many bacterial species are relevant to the
characterisation. Rule optimisation would be an important method of tackling
this problem while maintaining the transparency of the system.
The biggest limitation to the proposed approach is the
small sample size of the analysed data. Microbial ecologists use a range of measures to ensure that
sampling has been sufficient to enable the accurate characterisation of an
environment (e.g.\ taxon resampling curves). If our characterisation approach is
expanded to include prediction of disease (biomarker analysis), hundreds of
samples would be required to validate the predictions.
Additionally, the application of fuzzy RST would be valuable for future work.
Disease is a continuum and can rarely be described using a simple two-class
paradigm. Fuzzy RST would also remove the need to discretise the data,
preventing some information loss and resolving one of the largest drawbacks
associated with rough set characterisation.
\section*{Acknowledgment}
\noindent This work was completed under a PhD studentship supported by the
Department for Employment and Learning (DEL) in N. Ireland.
\bibliographystyle{IEEEtran}
\section{Introduction}
\footnote{“© 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”}
Artificial Intelligence (AI) is becoming a vital part of many real-life applications such as healthcare, logistics, surveillance, and industry. Classification is a common concept in the AI field, and it can be considered one of the building blocks for higher-level reasoning and decision-making systems. With the increasing demand for robust and reliable algorithms, especially in safety-critical systems \cite{Aravantinos2020}, the research community has been trying to define the robustness \cite{Fawzi2017}, evaluation metrics \cite{Carlini2017}, and solutions to satisfy the requirements of a robust classifier \cite{Xu2018}.
State-of-the-art classifiers have achieved high accuracy when dealing with simple datasets such as MNIST \cite{LeCun1998} or challenging ones like ImageNet \cite{Deng2009}. However, several open questions remain on how a classifier should behave in circumstances not covered by the training set, for example, when unseen classes appear (out-of-distribution samples) or when inputs are distorted in a way not seen during training. In such cases, a classifier might generate faulty results. It therefore becomes clear that accuracy alone is not enough for measuring the performance of classifiers, and that generalization to new environments and robustness to environmental changes should also be considered.
In their review, Zhang \textit{et al.} argue that unexpected faulty results in a pattern recognition algorithm can occur due to the violation of any of the following assumptions \cite{Zhang2020}: (1) the Closed-World Assumption, where the data is assumed to have a fixed number of classes, all covered in the training set; (2) the Independent and Identically Distributed Assumption, where the classes in the data are assumed to be independent of each other and to have the same distribution; and (3) the Clean and Big Data Assumption, where the data is assumed to be well-labeled and large enough to train the network properly. While these assumptions are easier to satisfy in a controlled environment, real-world applications rarely fulfill them completely.
This paper deals with the violation of the Closed-World Assumption. While a straightforward way of dealing with this issue is introducing a \textit{trash} class in the training set to cover all out-of-distribution samples, their complex distribution makes it impossible to train an effective classifier in most cases. Moreover, different distortions might make a sample difficult to classify, even for a human. While there is ongoing research on adversarial attacks, the phenomenon is not that common in the everyday use of AI algorithms. In a typical case, distortions usually fall into these categories: blur, noise, occlusion, and digital alteration of the image.
Recent works try to solve this issue by formulating it as the reliable rejection of predictions when the network is uncertain. The rejection option, also known as selective classification, is a central concept in different classification applications when dealing with uncertainty (e.g., optical character recognition). Previous works rely either on using a specific type of activation function in the classifier, such as OpenMax \cite{Bendale2016}, temperature scaling for SoftMax \cite{Liang2017}, and Sigmoid \cite{Shu2017}, on modifying the loss function, such as the discrepancy loss \cite{Yu2019}, or on using more resources, such as an ensemble of multiple classifiers \cite{Lakshminarayanan2016} and Monte-Carlo dropout \cite{Gal2016}. Moreover, some also suggest a combination of different ideas \cite{Vyas2018}.
The proposed method is a rejection option based on hypothesis testing with probabilistic networks. By utilizing a Z-test over the distribution of outcomes from a probabilistic network, it is possible to estimate the statistical significance of a given output and reject insignificant results. The main difference between the proposed method and previous state-of-the-art methods such as ODIN \cite{Liang2017} is the non-restricted use of different architectures. The proposed method can be applied to any architecture and improves performance under violations of the Closed-World Assumption by not limiting the network to a specific loss function or activation function.
In their work, Geifman and El-Yaniv show that Softmax Response (SR) is a simple yet top-performing method for selective classification \cite{Geifman2017} that outperforms Monte Carlo (MC) dropout. However, this paper shows that, if utilized correctly, a probabilistic network can easily outperform the SR method, making it a viable choice.
The main contributions of this paper are as follows:
\begin{itemize}
\item Proposing a simple yet effective method (rejection based on the statistical significance of probabilistic network output) to deal with the violation of the Closed-World Assumption in classifiers. This method can be utilized in any modern network architecture by changing the structure into a probabilistic model, which is possible with the help of existing tools.
\item Testing the proposed method on a state-of-the-art architecture (ResNet) with a diverse set of distortions (blur, noise, gamma correction, and occlusion) to show the effectiveness of the proposed method over the baseline SR method.
\end{itemize}
The rest of this paper is structured as follows. The details of the proposed method are presented in Section~\ref{method}. Then Section~\ref{experiments} deals with the experiments and their results. Finally, Section~\ref{conclusion} concludes the work and suggests potential research directions for the future.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=1\linewidth]{Method.pdf}
\end{center}
\caption{The structure of the proposed method. (1) Pass the test image through the probabilistic classifier. (2) Repeat it $n$ times and store the class scores for each inference. (3) Calculate the mean and standard deviation for each class. (4) Find the maximum mean value and label it as potential output. (5) Run two-sample Z-tests between the potential output and all other classes, then store the Z-scores. (6) Compare Z-scores with the threshold value to decide the acceptance or rejection of the potential output.}
\label{figure_1}
\end{figure*}
\section{Methods}
\label{method}
\subsection{Proposed method}
The proposed method requires a fully trained probabilistic classifier to work. Due to the nature of the probabilistic classifier, each inference will result in slightly different class scores. To utilize this fact, the test image is first passed through the network $n$ times to obtain the mean and standard deviation values for each class. After that, the class with the maximum mean value is chosen as the potential output. Next, two-sample Z-tests \cite{TwoSampleZTest} are deployed between the potential output and all other classes to assess the statistical significance of their differences. Finally, if the Z-scores indicate a significant difference, the potential output is accepted. Algorithm \ref{algorithm_1} summarizes these steps and Figure \ref{figure_1} shows the structure of the proposed method.
\begin{algorithm}
\caption{Selective Probabilistic Classifier}
\begin{algorithmic}[1]
\REQUIRE A trained probabilistic classifier.
\STATE run the image through the classifier $n$ times
\STATE find mean ($\mu$) and std. dev. ($\sigma$) for all $N$ classes
\STATE find the class with the highest mean value ($\mathit{c}_M$)
\FOR {$i \in 1,2,\ldots,N;\ i\neq M$}
\STATE run the two-sample Z-test between $\mathit{c}_M$ and $\mathit{c}_i$
\STATE store the $\mathit{Z}_i$ score
\ENDFOR
\IF{$\mathit{Z}_i > z$ for all $i \in 1,2,\ldots,N;\ i\neq M$}
\STATE set output to be $\mathit{c}_M$
\ELSE
\STATE set output to be Reject
\ENDIF
\end{algorithmic}
\textbf{return} output value for the image
\label{algorithm_1}
\end{algorithm}
\subsubsection{Probabilistic Neural Network}
A probabilistic neural network (PNN) classifier \cite{Mohebali2020} uses a stochastic weighting system. The classifier can allocate a class to an input sample by utilizing the posterior probability, which means each run of the network will result in a slightly different output. The amount of variation between several runs is the key to network certainty: a low standard deviation across runs indicates a higher level of certainty, making the standard deviation a suitable metric for selective classification. The convolution layers for such a network are constructed based on Flipout \cite{Wen2018}. The code can be found in the TensorFlow Probability library \cite{TFP}.
\subsubsection{Two-Sample Z-test}
A Z-test \cite{Ztest} refers to any statistical test whose test statistic can be approximated by a normal distribution under the null hypothesis. The two-sample Z-test can be used to test whether the means of two samples differ significantly. The formula is as follows:
\[ Z = \frac{\mu_1-\mu_2-\Delta}{\sqrt{\frac{\sigma_1 ^ 2}{n_1} + \frac{\sigma_2 ^ 2}{n_2}}} \]
\noindent where $\mu_1$ and $\mu_2$ are the mean values of the two samples, $\Delta$ is the hypothesized difference between the means (0 if testing for equality), $\sigma_1$ and $\sigma_2$ are the standard deviations, and $n_1$ and $n_2$ are the sample sizes (which are equal in this paper).
By setting the null hypothesis as $H_0: \mu_1 = \mu_2$, the alternative hypothesis as $H_a: \mu_1 \neq \mu_2$, and $\Delta$ to zero, the two-sample Z-test will result in a score that indicates the likelihood of two samples being different from each other. A higher score means more likelihood for the samples to be different. This score can be compared to critical values to get the percentage for the likelihood of a significant difference between samples. These values can be found in any Z-Score table, such as \cite{ZScoreTable}.
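A minimal NumPy sketch of the decision rule in Algorithm \ref{algorithm_1} is given below; the callable \texttt{model(x)}, assumed to return one stochastic vector of class scores per call (as with Flipout layers), the sample count $n$, and the threshold are illustrative assumptions, not the exact experimental configuration.
\begin{verbatim}
import numpy as np

def selective_predict(model, x, n=50, z_threshold=1.96):
    # Hedged sketch of Algorithm 1. `model(x)` is assumed to return one
    # stochastic vector of class scores per call.
    scores = np.stack([model(x) for _ in range(n)])  # shape (n, classes)
    mu = scores.mean(axis=0)
    sigma = scores.std(axis=0, ddof=1)
    m = int(np.argmax(mu))                    # potential output
    # Two-sample Z-scores of class m against every other class (Delta=0).
    se = np.sqrt((sigma ** 2 + sigma[m] ** 2) / n) + 1e-12
    z = (mu[m] - mu) / se
    z[m] = np.inf                             # skip the self-comparison
    return m if np.all(z > z_threshold) else None   # None means "Reject"
\end{verbatim}
With the conventional critical value $z = 1.96$ the test corresponds to roughly a 95\% two-sided significance level; in the experiments below the threshold is swept to obtain the ROC curves.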
\subsection{Softmax Response}
The SR method applies a threshold directly to the output of the Softmax layer from a deep neural network (DNN) and rejects any output below the threshold. This method was chosen as the baseline for comparison. While the method is simple, it is a known top-performer \cite{Geifman2017}.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{Distortions.pdf}
\end{center}
\caption{Distortions on the image. (A) Original image. (B) Motion blur. (C) Frosted glass blur. (D) Gaussian blur. (E) Noise. (F) Gamma darkening. (G) Gamma lightening. (H) Occlusion.}
\label{figure_2}
\end{figure*}
\section{Experiments and Results}
\label{experiments}
The proposed method was evaluated with the well-known ResNet-18 network configuration \cite{He2016}. The goal is to show its performance in case of violation of the Closed-World Assumption. A comparison with SR was made to evaluate the performance. This comparison was based on the area under the Receiver Operating Characteristic (ROC) curve, which is threshold-independent. Both networks were trained from scratch with the same initial configuration to ensure a fair comparison. Other state-of-the-art methods were not included in the comparison as they either require a specific structure for the model, limiting the use case, or were only tested on simpler datasets such as MNIST.
Multiple experiments were conducted to represent various violations of the Closed-World Assumption in real-world applications. In these experiments, the classifiers are trained with a limited number of classes and presented with both in-distribution and out-of-distribution samples. Further experiments also distort the test samples to see the effect of each distortion on the performance. The chosen distortions were based on \cite{Kamann2020}. Before discussing the results, the dataset and distortions are explained in detail.
\subsection{Dataset and Distortions}
\textbf{\em COCO ---}
COCO \cite{Lin2014} was chosen as the first dataset. It is a complex dataset where the objects have various sizes, qualities, and overlaps. Since COCO is originally an object detection dataset, all instances were extracted from it manually based on the bounding boxes provided in the dataset. The data were separated into four classes: Human, Vehicle (containing 4-wheeled vehicles), Animal (containing 4-legged animals), and Background (patches of images with no overlapping objects). 260k images were used for training, excluding the animal class, and 40k images were used as test samples. The reason behind using a commonly known object detection dataset for classification is to have a more realistic dataset where an external source does not filter the samples.
\noindent\textbf{\em CIFAR ---}
CIFAR \cite{Krizhevsky2009} was chosen as the second dataset. It is a more straightforward dataset where objects are classified into ten categories. The dataset is small yet sufficiently complex, which makes it an ideal case for testing algorithms. 40k images were used for training, excluding the automobile and truck classes, and 10k images were used as test samples.
\noindent\textbf{\em Blur ---}
Three different blurring algorithms were used to observe their effect on the performance: Motion blur, Frosted glass blur, and Gaussian blur. The effect of each algorithm can be seen in Figure \ref{figure_2}(B-D). Each algorithm simulates a situation where the object is not sharp (e.g., the camera is out of focus, the object is moving, or a semi-transparent object lies between the camera and the object).
\noindent\textbf{\em Noise ---}
Two different types of noise were added to the test samples to observe their effect on the performance: Gaussian noise and Salt-and-pepper noise. The effect of a sample noise can be seen in Figure \ref{figure_2}(E). It simulates a situation where the input is noisy due to internal or external sources.
\noindent\textbf{\em Gamma Correction ---}
The gamma correction technique was applied to each test sample to see the illumination effect on the performance. The effect of darkening and lightening can be seen in Figure \ref{figure_2}(F-G). It will simulate a situation where the amount of light in the environment changes due to environmental factors.
\noindent\textbf{\em Occlusion ---}
A black patch was added to test samples to see the effect of occlusion on the performance. The effect of occlusion can be seen in Figure \ref{figure_2}(H). It will simulate a situation where the object is partially visible.
\begin{table*}[!ht]
\small
\begin{center}
\begin{tabular}{|c||c||c||c c c|| c c||c c||c|}
\hline
\multirow{3}{*}{Dataset} & \multirow{3}{*}{Method} & Out & \multicolumn{3}{c||}{Blur} & \multicolumn{2}{c||}{Noise} & \multicolumn{2}{c||}{Gamma correction} & \multirow{3}{*}{Occlusion}\\
\cline{4-10}
& & of & \multirow{2}{*}{Motion} & Frosted & \multirow{2}{*}{Gaussian} & \multirow{2}{*}{Gaussian} & \multirow{2}{*}{S\&P} & \multirow{2}{*}{Darkening} & \multirow{2}{*}{Lightening} & \\
& & Distribution & & glass & & & & & & \\
\hline\hline
\multirow{2}{*}{COCO} & Proposed & \textbf{0.65} & \textbf{0.34} & \textbf{0.25} & \textbf{0.38} & \textbf{0.22} & \textbf{0.21} & \textbf{0.16} & \textbf{0.17} & \textbf{0.23}\\
& SR & 0.29 & 0.20 & 0.18 & 0.22 & 0.14 & 0.09 & 0.04 & 0.05 & 0.06 \\
\hline\hline
\multirow{2}{*}{CIFAR} & Proposed & \textbf{0.89} & \textbf{0.50} & \textbf{0.50} & \textbf{0.59} & \textbf{0.38} & \textbf{0.39} & \textbf{0.37} & \textbf{0.42} & \textbf{0.48}\\
& SR & 0.52 & 0.44 & 0.34 & 0.47 & 0.35 & 0.25 & 0.22 & 0.26 & 0.31 \\
\hline
\end{tabular}
\end{center}
\caption{AUROC values of the tests. The values are calculated by taking the area under the ROC where the algorithm could produce a valid response.}
\label{table_1}
\end{table*}
\subsection{Results}
\label{results}
After conducting the tests, ROC curves were used to examine the effectiveness of each algorithm. These curves can be seen in Figures \ref{figure_3}-\ref{figure_4}. In general, each point on the ROC curve corresponds to a specific threshold value for the rejection option. If this threshold is set to 0, the algorithm will not reject any input, resulting in a 100\% False Positive Rate (FPR). More extreme threshold values result in lower FPR and True Positive Rate (TPR) until, at some point, the algorithm rejects all inputs (0\% FPR and TPR). The SR method hits this point when the threshold is set to 1: as the output of Softmax cannot be larger than 1, every output is rejected. However, since a DNN typically generates high scores for the output, this threshold prevents the SR algorithm from reaching lower FPR values. The proposed method, on the other hand, does not rely on the limit of the Softmax output, as it compares the significance of each class to the others. This limit causes the significant gap in AUROC scores seen in Table \ref{table_1}.
Judging by the ROC curves, both algorithms start at roughly the same point, meaning that both function similarly when it comes to plain classification. However, the SR method has the drawback mentioned above, which is visible in the curves.
The comparison must be threshold-independent for it to be fair. Thus, the area under the ROC curve (AUROC) was used as a comparison method. The area calculation must consider the limitations of both algorithms. While the SR algorithm can reach 0\% FPR, it only happens when the threshold is at one (1) or higher, which means the output is not valid. Thus, only the area under the valid parts of the ROC curve was used in calculating the AUROC values. These values can be found in Table \ref{table_1}.
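As an illustration, a threshold-restricted AUROC of this kind could be computed as follows (a sketch with synthetic confidence scores; this is not the paper's evaluation code):
\begin{verbatim}
import numpy as np

def auroc_valid_region(pos_scores, neg_scores, thresholds):
    """Area under the ROC curve, restricted to the given (valid)
    thresholds, integrated with the trapezoidal rule."""
    fpr, tpr = [], []
    for t in thresholds:
        tpr.append(np.mean(pos_scores >= t))   # accepted positives
        fpr.append(np.mean(neg_scores >= t))   # accepted negatives
    order = np.argsort(fpr)                    # sort by FPR first
    return np.trapz(np.array(tpr)[order], np.array(fpr)[order])

pos = np.random.beta(8, 2, 1000)   # in-distribution confidences
neg = np.random.beta(4, 4, 1000)   # out-of-distribution confidences
print(auroc_valid_region(pos, neg, np.linspace(0.0, 1.0, 101)))
\end{verbatim}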
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{PNNROC.pdf}
\end{center}
\caption{ROC curves for proposed method in COCO test. The worst performance of each category was chosen to present the tolerance of the algorithm to extreme distortions.}
\label{figure_3}
\end{figure}
While every distortion reduces the performance, gamma correction has the most significant effect, and blurring has an almost negligible effect on the proposed method. This can be explained by how the classifier operates: changing the intensity of the image makes it harder to separate the objects from the Background class. That being said, the proposed algorithm still outperforms the SR method by a notable margin.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{DNNROC.pdf}
\end{center}
\caption{ROC curves for SR method in COCO test. The worst performance of each category was chosen to present the tolerance of the algorithm to extreme distortions.}
\label{figure_4}
\end{figure}
\section{Conclusion}
\label{conclusion}
In this paper, we propose a rejection option for probabilistic classifiers based on Z-test analysis. This method addresses violations of the Closed-World Assumption. With a probabilistic classifier, each run results in a slightly different class score; a Z-test analyses the mean and standard deviation over multiple runs to estimate the network's certainty and filter out uncertain results.
We designed several experiments based on a well-known network configuration (ResNet-18) and datasets (COCO and CIFAR). A comparison with the SR method was made based on AUROC as a threshold-independent metric. The proposed method was shown to outperform the SR method by a notable margin while remaining robust in the presence of distortions. This makes the proposed method more suitable for safety-critical applications.
In the future, we will consider expanding the method by merging it with existing tools such as ODIN and covering more complex systems such as object detection.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{sec:introduction}
\input{sections/Introduction}
\vspace*{-0pt}
\vspace*{6pt}
\section{Background}
\label{sec:preliminaries}
\input{sections/Preliminaries.tex}
\vspace*{-0pt}
\vspace*{4pt}
\section{Proposed in-DRAM Computing Primitives}
\label{sec:tool}
\input{sections/tool}
\vspace*{2pt}
\section{Architecture and Data Flow}
\label{sec:exptsetup}
\input{sections/exptsetup}
\vspace*{10pt}
\section{Evaluation Methodology and Results}
\label{sec:results}
\input{sections/results.tex}
\vspace*{6pt}
\section{Conclusion}
\label{sec:conclusion}
\input{sections/conclusion}
\section{Acknowledgment}
This work was supported by C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program, sponsored by DARPA.
\vspace*{-0pt}
\scriptsize
\bibliographystyle{IEEEtran}
\subsection{Conventional DRAM Organization}
\vspace*{0pt}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.8\columnwidth]{img/dram_hierarchy.pdf}
\caption{DRAM Hierarchy}
\vspace*{-2pt}
\label{fig:dramheir}
\end{figure}
The DRAM hierarchy consists of channels, modules, and ranks; a rank in turn consists of multiple banks, and each bank consists of several subarrays. An example DRAM organization is shown in Fig. \ref{fig:dramheir}.
\vspace*{0pt}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.8\columnwidth]{img/subarray.pdf}
\caption{DRAM subarray view}
\vspace*{-2pt}
\label{fig:dramsub}
\end{figure}
Now, let us consider the internal organization of a DRAM bank as shown in Fig. \ref{fig:dramsub}. One DRAM bank comprises multiple subarrays, a subarray being a 2D array of DRAM cells with associated peripheral circuits. A DRAM cell consists of a storage capacitor with an access transistor connecting it to the bitline (BL) in a manner that is controlled by the wordline (WL) \cite{ambit}.
The DRAM cells in the same row share a wordline and cells in the same column share a bitline. A voltage of magnitude VDD across the storage capacitor signifies a value of 1. Similarly, a magnitude of 0 V across the capacitor signifies a value of 0. To perform a DRAM read operation, BLs are initially precharged to VDD/2 by the PRECHARGE command. The address of the DRAM row to be read is applied to the row decoder that activates the corresponding WL. Once the correct WL is activated, charge sharing occurs between the bitline capacitance and the DRAM cell capacitor. The BL voltage is amplified by turning on the sense amplifiers that regenerate the BL voltage to either 0V or VDD when the cell data is "0" or "1", respectively.
The ACTIVATE command from the memory controller leads to activating the appropriate WL followed by enabling the sense amplifiers to read the row data.
The column decoder selects the word bits to be read from the Sense Amplifiers. The write operation is similar to the read operation, where the column decoder now passes the word to be written to the corresponding BLs. Subsequently, the row decoder activates the corresponding WL for the row that is to be written.
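As a worked example of the charge-sharing step, the final bitline voltage follows from charge conservation; the capacitance and supply values below are illustrative assumptions, not taken from this paper:
\begin{verbatim}
# V_final = (C_bl*V_bl + C_cell*V_cell) / (C_bl + C_cell)
C_BL, C_CELL, VDD = 85e-15, 22e-15, 1.2  # assumed capacitances (F), supply (V)

def bitline_after_sharing(v_cell, v_bl=VDD / 2):
    """Bitline voltage after connecting a cell to a precharged bitline."""
    return (C_BL * v_bl + C_CELL * v_cell) / (C_BL + C_CELL)

for bit, v_cell in (("1", VDD), ("0", 0.0)):
    dv = bitline_after_sharing(v_cell) - VDD / 2
    print(f"stored {bit}: bitline deviation {dv*1e3:+.1f} mV")
\end{verbatim}
The sense amplifier then regenerates the bitline to VDD or 0 V according to the sign of this deviation.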
\subsection{DRAM In-Memory Primitives}
Previous works have proposed in-subarray computing primitives supporting row copy \cite{rowclone}, bit-wise logic operations \cite{ambit} and bit-serial arithmetic addition \cite{mustafa}.
RowClone \cite{rowclone} supports copying of data from a source row to a destination row intra-subarray, intra-bank, and inter-bank. Intra-subarray copies are achieved by enabling the source row WL, activating the sense amplifiers, and then enabling the destination row WL, so that the amplified data overwrites the destination row; alternatively, data can be moved by leveraging the existing interconnects across subarrays and across banks. We adopt the RowClone approach in this work to transfer data among banks.
Next, we describe the in-memory primitive for ADD in DRAM cells used for this work \cite{mustafa}. The ADD operation is based on the following 2 equations that leverage the multiple-row activation principle to compute the majority function.
\begin{equation}
Cout=Majority(A,B,Cin)
\end{equation}
\begin{equation}
Sum=Majority(A,B,Cin,\overline{Cout},\overline{Cout})
\end{equation}
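These two equations can be checked exhaustively against an ordinary full adder; a short Python sketch (ours, independent of any DRAM modelling):
\begin{verbatim}
from itertools import product

def maj(*bits):
    """Majority of an odd number of bits."""
    return int(sum(bits) > len(bits) // 2)

for a, b, cin in product((0, 1), repeat=3):
    cout = maj(a, b, cin)                       # Eq. (1)
    s = maj(a, b, cin, 1 - cout, 1 - cout)      # Eq. (2)
    assert 2 * cout + s == a + b + cin          # matches a full adder
print("majority-based full adder verified")
\end{verbatim}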
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{img/quintuple-activation.pdf}
\caption{Quintuple-row activation: activating five rows at the same time to calculate the five-input majority function, as proposed in \cite{mustafa}.}
\label{fig:quintuple-activation}
\end{figure}
The ADD operation requires 9 additional ``compute rows'' in each subarray for copying the original data and storing the carry-out and carry-in values. The 9 compute rows are also used in the proposed multiplication primitive, described in a later section. The operands A, B and carry-in (Cin) are copied to the compute rows. The addition operation comprises four main steps: (i) copy the first vector bit (A) to the compute rows; (ii) copy the second vector bit (B) to the compute rows; (iii) calculate Cout using the multiple-row activation scheme as shown in equation (1); (iv) calculate Sum using the multiple-row activation as shown in equation (2) and Fig. \ref{fig:quintuple-activation}. Note that addition of two n-bit operands requires 4n+1 ACTIVATE-ACTIVATE-PRECHARGE (AAP) operations.
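The $4n+1$ count follows directly from this schedule; a small sketch that tallies the AAPs (an abstraction of the command sequence, not a circuit model):
\begin{verbatim}
def bit_serial_add_aaps(n):
    """Count AAP operations for adding two n-bit vectors
    following the four-step schedule described above."""
    aaps = 1                 # initialise the carry-in rows with zeros
    for _ in range(n):       # one iteration per bit position
        aaps += 1            # (i)   copy A bit to the compute rows
        aaps += 1            # (ii)  copy B bit to the compute rows
        aaps += 1            # (iii) triple-row activation -> Cout
        aaps += 1            # (iv)  quintuple-row activation -> Sum
    return aaps

assert bit_serial_add_aaps(8) == 4 * 8 + 1
\end{verbatim}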
\subsection{DNN Workloads}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.9\columnwidth]{img/DNNworkload.pdf}
\caption{DNN Workload Example}
\vspace*{-2pt}
\label{fig:dnnworkload}
\end{figure}
Deep Neural Networks (DNNs) are adopted in a wide range of applications and comprise various primitives such as Fully Connected (FC) layers, Convolutional (CONV) layers, and/or recurrent connections as in Recurrent Neural Networks (RNNs). The dominant computation common across all these kinds of network layers is Matrix Vector Multiplication (MVM), which is in turn composed of Multiply and Accumulate (MAC) operations. An example FC layer is shown in Fig. \ref{fig:dnnworkload}.
The figure shows 3 input neurons connected to 4 output neurons. Each output neuron has a weight associated with each input. The input values are multiplied with their corresponding weight values and added together to calculate the final output value. The computation in this example can be realized as 4 dot-products computing the 4 output neuron values, each of which requires 3 Multiply-Accumulate operations.
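As an illustration, the example computation expressed in NumPy (the weight values are arbitrary):
\begin{verbatim}
import numpy as np

x = np.array([0.5, -1.0, 2.0])   # 3 input neurons
W = np.random.randn(4, 3)        # 4x3 weight matrix

# Each output neuron is one dot product = 3 multiply-accumulates
y = np.array([sum(W[o, i] * x[i] for i in range(3)) for o in range(4)])
assert np.allclose(y, W @ x)     # 12 MACs in total
\end{verbatim}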
\subsection{PIM-DRAM Architecture}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.6\columnwidth]{img/arch_dram.pdf}
\caption{PIM-DRAM bank architecture.}
\vspace*{-2pt}
\label{fig:arch}
\end{figure}
PIM-DRAM consists of multiple banks connected together through the DRAM internal bus, similar to a conventional DRAM organization. The bank architecture, shown in Fig. \ref{fig:arch}, consists of multiple subarrays followed by local sense amplifiers. The proposed multiply operations happen in the subarrays with the data in transposed format, where each column holds the output of an n-bit multiplication. Following multiplication, an accumulation operation is needed to complete the MAC operation. Since the multiplication outputs are in different columns and do not share bitlines, we adopt a reconfigurable adder tree in the peripheral area for intra-bank accumulation. In our proposed architecture, the sense amplifiers are connected to the reconfigurable adder tree, shown in Fig. \ref{fig:treeadder}, through the column decoder. The adder tree provides connections from all levels to accumulators for accumulating the sum results. The accumulators are further connected to Special Function Units (SFUs) used for performing non-MVM operations. SFUs include ReLU units, batch-normalization (BatchNorm) units, quantize units, and pooling units. Additionally, SFUs are connected to a transposing unit for converting the data layout from column-based to row-based and vice versa. These transposing units are connected to the DRAM bus through the global buffer of the bank. The key components of the PIM-DRAM architecture are described below.
\subsubsection{Reconfigurable Adder Tree}
The reconfigurable adder tree provides both addition and forwarding functionality at each node. Specifically, each node in the adder tree can either forward one operand to the next level node, or add the two operands and pass the result to the next level node.
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.7\columnwidth]{img/reconnadder.pdf}
\caption{Re-configurable Adder}
\vspace*{-2pt}
\label{fig:treeadder}
\end{figure}
The adder tree is connected to the row buffer, whose size equals that of the first level of the adder tree. Each level of the adder tree has a number of units that is a power of 2: the first level has $2^n$ addition units, the count decreases by a factor of 2 at every level, and the last level has a single unit. The example in Fig. \ref{fig:treeadder} shows 8 ($2^3$) adder units in the first level, followed by 4 ($2^2$), 2 ($2^1$) and 1 ($2^0$) in the subsequent levels. Each unit has two inputs and one output; the inputs of each addition unit are connected to two outputs of the previous level, and the bit width of the inputs increases going down the levels of the adder tree. The inputs to the first level of the adder tree are re-configurable. The adder tree adds the product bits, from the 0\textsuperscript{th} bit to the 2n\textsuperscript{th} bit, resulting from an n-bit multiplication.
\subsubsection{Accumulators}
Accumulators are units used for accumulating the results of the adder tree. Each accumulator shifts the input received from the adder tree and adds it to the existing stored value. For example, when the result of adding the first bits of the MAC arrives, the accumulator left-shifts it by 1 and adds it to the value stored from the 0\textsuperscript{th}-bit result. The shift amount increases by 1 for every higher-order bit and is controlled by a counter. The accumulator keeps accumulating until the 2n\textsuperscript{th} addition result arrives. The final result of the MAC is forwarded from the accumulators to the ReLU units.
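To make the interaction between the adder tree and the accumulators concrete, the following sketch reconstructs a sum of products from per-bit-position partial sums, mirroring the shift-and-add behaviour described above (plain Python, not a hardware model; the product values are arbitrary):
\begin{verbatim}
products = [13, 7, 250, 42]   # per-column multiplication results
NBITS = 8                     # 2n bits for n = 4

acc = 0
for b in range(NBITS):                        # bit positions, LSB first
    # adder tree: sum of bit b across all product columns
    bit_sum = sum((p >> b) & 1 for p in products)
    acc += bit_sum << b                       # accumulator: shift and add

assert acc == sum(products)
\end{verbatim}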
\subsubsection{ReLU Units}
ReLU activation zeros out any negative sum results and keeps positive sum values unchanged. The output from the accumulators serves as the input to the ReLU unit, and the output of the ReLU unit goes to the BatchNorm unit.
\subsubsection{Batchnorm Unit}
Batch normalization units take in the inputs from the ReLU Units and standardize the input activations to a layer. The output from this unit is fed to the quantization Unit. Since the batch normalization parameters are fixed for inference, it is a very simple function consisting of subtracting, dividing and scaling by constant factors.
\subsubsection{Pooling Units}
Pooling units are only applicable to convolutional layers; for layers without pooling after them, this unit serves as a pass-through. Pooling units have a counter that keeps track of how many elements need to be considered for the pooling window. Each unit stores a running maximum, compares it against the incoming data, and updates the stored maximum accordingly. The pooling unit sits between the Quantize unit and the Transpose unit for convolutional layers.
\subsubsection{Transpose Units}
The quantized data needs to be converted into the transposed format before being sent out to a different bank, to fit the proposed mapping described in the next subsection. The Transpose unit is a 2D array of SRAM cells with dual read and write ports. Data is written horizontally and read out vertically from this unit in order to transpose it.
\subsection{Data Orchestration}
This section discusses the mapping of a neural network to the PIM-DRAM architecture as well as the associated dataflow. The mapping algorithm is described in Algorithm \ref{alg:Map}.
In the proposed data mapping, every layer is allocated to a DRAM bank; the number of banks required equals the number of layers in the network, denoted Number\_of\_Layers. In the mapping algorithm, the outermost loop runs across all the layers in the neural network. Before mapping a layer to a bank, it initializes a MAC\_no (MAC number) and col\_no (column number) variable to 1. For convolution layers, it runs a loop across the number of output filters (no\_output\_filter). Each output filter is associated with a number of 3D convolutions; each convolution is a MAC operation, and each MAC operation involves a significant number of multiplications. The algorithm consists of a nested loop where the outer loop runs across the number of possible convolutions for every filter. That number is obtained from the input width and height, the kernel width and height, the padding, and the stride length: the number of iterations (No\_of\_MAC) of the outer loop is given by $((H-K+2p)/s+1)\cdot((W-L+2p)/s+1)$, where $H$ and $W$ refer to the input height and width, $K$ and $L$ to the kernel height and width, $p$ to the padding, and $s$ to the stride length. The inner loop runs across the multiply and accumulate operations in a single 3D convolution; here the number of iterations (MAC\_size) is given by $K \cdot L \cdot I$, where $I$ is the number of input channels. The mapping is done by assigning the same MAC number (MAC\_no) to the operands of the same multiply and accumulate.
Every multiplication of a MAC is mapped to a single column, and the column number (col\_no) is incremented for mapping the next multiplication. After mapping all the multiplications of the same multiply and accumulate, MAC\_no is increased by 1. The mapping starts with the first subarray; the subarray count (sub\_no) is increased by 1 and col\_no is reset to 1 when the column limit of the current subarray is reached.
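As an illustration, the loop bounds used by the mapper can be computed directly; a small Python sketch (the function name is ours; see Algorithm \ref{alg:Map} below for the full mapping):
\begin{verbatim}
def conv_mapping_counts(H, W, K, L, I, p, s):
    """Loop bounds for mapping one output filter of a conv layer."""
    no_of_mac = ((H - K + 2 * p) // s + 1) * ((W - L + 2 * p) // s + 1)
    mac_size = K * L * I   # multiplications per 3D convolution
    return no_of_mac, mac_size

# Example: 32x32 input, 3x3 kernel, 16 channels, padding 1, stride 1
print(conv_mapping_counts(32, 32, 3, 3, 16, 1, 1))   # (1024, 144)
\end{verbatim}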
\begin{algorithm}
\caption{Mapping algorithm for DNN's}
\begin{algorithmic}
\REQUIRE Network Description
\WHILE{$layer\_no \leq Number\_of\_Layers$}
\IF{$layer[layer\_no] == conv$}
\FOR{$i=1; i \leq no\_output\_filter; i=i+1$}
\IF{$i\%(no\_output\_filter \div k)==0$}
\STATE $sub\_no \leftarrow 1,col\_no \leftarrow 1$
\ENDIF
\FOR{$j=1;j \leq No\_of\_MAC;j=j+1$}
\IF{$col\_no + MAC\_size \leq column\_size$}
\FOR{$k=1;k \leq MAC\_size;k=k+1$}
\STATE$Bank[sub\_no][col\_no] \leftarrow MAC\_no$
\STATE $col\_no \leftarrow col\_no+1$
\ENDFOR
\ENDIF
\IF{$col\_no + MAC\_size > column\_size$}
\STATE $sub\_no \leftarrow sub\_no+1,col\_no \leftarrow 1$
\FOR{$k=1;k \leq MAC\_size;k=k+1$}
\STATE$Bank[sub\_no][col\_no] \leftarrow MAC\_no$
\STATE $col\_no \leftarrow col\_no+1$
\ENDFOR
\ENDIF
\STATE $MAC\_no \leftarrow MAC\_no+1$
\ENDFOR
\ENDFOR
\ENDIF
\IF{$layer[layer\_no] == linear$}
\FOR{$i=1;i< no\_output\_neuron;i=i+1$}
\IF{$i\%(no\_output\_neuron \div k)==0$}
\STATE $sub\_no \leftarrow 1,col\_no \leftarrow 1$
\ENDIF
\IF{$col\_no + MAC\_size \leq column\_size$}
\FOR{$j=1; j \leq MAC\_size; j=j+1$}
\STATE$Bank[sub\_no][col\_no] \leftarrow MAC\_no$
\STATE $col\_no \leftarrow col\_no+1$
\ENDFOR
\ENDIF
\IF{$col\_no + MAC\_size > column\_size$}
\STATE $sub\_no \leftarrow sub\_no+1,col\_no \leftarrow 1$
\FOR{$k=1;k \leq MAC\_size;k=k+1$}
\STATE$Bank[sub\_no][col\_no] \leftarrow MAC\_no$
\STATE $col\_no \leftarrow col\_no+1$
\ENDFOR
\ENDIF
\STATE $MAC\_no \leftarrow MAC\_no+1$
\ENDFOR
\ENDIF
\STATE $layer\_no \leftarrow layer\_no + 1$
\ENDWHILE
\end{algorithmic}
\label{alg:Map}
\end{algorithm}
One of the rules of the mapping algorithm is that all the operands of the same MAC must be in the same subarray; this ensures that all the operands of the MAC feed the same adder tree. While mapping the operands to the columns of the subarray, if the number of multiplications in the MAC is greater than the number of remaining columns in the subarray, the mapping starts from column 1 of the next subarray and nothing is mapped to the remaining columns of the previous subarray. This rule applies to mapping linear layers as well. Every pair of operands in a mapped column of the subarray consists of an n-bit activation and a corresponding n-bit weight value, occupying 2n rows altogether.
Having just one pair of operands in every column provides the maximum amount of parallelism. It is also possible that the mapping may run out of space due to the limited capacity of a bank. In that case, the mapper can divide the number of output filters into k groups, where the number of output filters is divisible by k; then, after every (no\_output\_filter)/k filters, it goes back to subarray 1 and column 1 and starts mapping from there again. In this way, it puts more pairs of operands in a single column, which then need to be processed sequentially, reducing the amount of parallelism. The higher the value of k, the lower the parallelism. An example mapping of a convolution layer is shown in Fig. \ref{fig:Mapeg}. The mapping of a fully connected layer is very similar, with some minor differences: the outer loop runs across all the output neurons (no\_output\_neuron).
The mapping algorithm divides the number of output neurons into k groups. After every (no\_output\_neuron/k) neurons, it goes back to subarray 1 and column 1 for mapping the next set of neurons.
\begin{figure*}[htb]
\vspace*{-0pt}
\centering
\includegraphics[width=0.9\textwidth]{img/map.pdf}
\vspace*{-2pt}
\caption{Mapping example of convolution layer}
\label{fig:Mapeg}
\vspace*{-8pt}
\end{figure*}
The inner loop runs across the connections to a single output neuron. As in the convolution layer case, the same MAC number is given to all the multiplications of a particular multiply and accumulate. One more difference with regard to convolution layer mapping is that the parallelism is controlled by the number of output neurons instead of the number of output kernels.
Next, we briefly discuss the dataflow for the above-mentioned mapping and the proposed architecture. The dataflow follows a fixed order for every bank, and different banks work in a pipelined manner. As every bank is associated with a different layer of the neural network, all banks can work on different images of the dataset at the same time. Taking the example of a 3-layer network, bank 3 will be working on the k\textsuperscript{th} image while bank 2 and bank 1 work on the (k-1)\textsuperscript{th} and (k-2)\textsuperscript{th} images, respectively. The initial operation in a bank begins with the multiply operation happening across all subarrays in every bank. The bits of the product in every column are read by the adder tree. The number of MACs of a subarray to be added depends on how many MACs can fit in the adder. The adder tree keeps adding results of the products from the 0\textsuperscript{th} till the 2n\textsuperscript{th} bit. The accumulators work in conjunction with the adder, accumulating all of its results. Once the accumulator has accumulated all the results, the final MAC values are sent to the special function units, where each logic block takes its required number of cycles. Following that, the data is fed into the transpose units. All the banks work in parallel until this point. Then the banks transfer data sequentially using RowClone \cite{rowclone} to the destination banks. Again, taking the example of a 3-layer network, bank 2 will send its data to bank 3, followed by bank 1 sending its data to bank 2, and so on. After the sequential data transfers, the adder works on the remaining MACs, and the same set of operations, except the multiplication operation, is performed again. In cases where there is more than one pair of operands in a single column, they get executed sequentially over time. Residual layers reserve some banks for adding the results of the skip connections to the output activations of a layer to calculate the final output activations.
\subsection{Circuit-Level Analysis}
We perform extensive circuit simulations using HSPICE to evaluate the proposed in-DRAM compute primitives. Our simulations are performed in CMOS 65 nm technology with the DRAM cells and sense amplifier parameters adapted from the Rambus power model \cite{rambus}.
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.9\columnwidth]{img/spice.pdf}
\caption{Spice Simulations}
\vspace*{-2pt}
\label{fig:spice}
\end{figure}
Fig. \ref{fig:spice} shows the transient analysis of the proposed AND operation in DRAM subarrays for all input combinations. S1 and S2 are the nodes of the top plates of the two cell capacitors. For the $\{1,1\}$ case, the BL, S1, and S2 nodes reach VDD, while in the other cases the corresponding nodes drop to GND, realizing the AND operation; the waveforms shown in Fig. \ref{fig:spice} confirm this behaviour for all four input combinations.
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=\columnwidth]{img/hist.pdf}
\caption{Monte Carlo simulation results for 100,000 samples}
\vspace*{-2pt}
\label{fig:monte}
\end{figure}
Furthermore, we perform 100,000 Monte Carlo simulations of the AND operation with all input cases to validate the robustness of our proposed PIM primitive. Fig. \ref{fig:monte} shows the histograms of all input cases of the BL node before enabling the sense amplifier. We observe a sufficiently large sense margin on the BL between the input cases (200 mV on average).
\subsection{System-level Analysis}
We developed an in-house simulator to evaluate the proposed PIM-DRAM architecture running commonly used machine learning workloads (AlexNet, VGG-16, and ResNet-18). We consider a DDR3-1600 DRAM structure in our system analysis, with a subarray size of 4096x4096. Our simulator maps the workload layers to the DRAM based on layer size to optimize performance, and it accounts for all operations performed in the DRAM, including computation and internal data movement. We compare the proposed PIM-DRAM with a GPU and an ideal non-PIM system. The ideal non-PIM system has a compute unit capable of instantaneous computation coupled with a conventional DRAM, so that the only latency cost comes from moving the data between memory and the host. Note that the ideal non-PIM baseline represents the upper bound on performance for any non-PIM system (e.g., GPU and TPU).
We model all the additional logic blocks using RTL and synthesize them with the Cadence RTL Compiler to the TSMC 65 nm library. To account for the effect of the DRAM process on logic block performance, we add 21.5\% delay to each block based on the study reported in \cite{logic}.
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=\columnwidth]{img/system.pdf}
\caption{System Level Evaluation}
\vspace*{-2pt}
\label{fig:sys}
\end{figure}
Fig. \ref{fig:sys} shows the performance benefits of our proposed PIM architecture over the ideal non-PIM baseline and the GPU. The GPU used in our evaluation is an NVIDIA TITAN Xp, with 3840 CUDA cores, a memory speed of 11.4 Gbps, and a memory bandwidth of 547.7 GB/s. Moreover, we vary the parallelism of our PIM architecture in our comparisons. P1, P2, P3 and P4 refer to the parallelism factors for each layer of the neural network, described in Section \ref{sec:exptsetup}. For AlexNet, P1 refers to (1, 1, 1, 1, 1, 1, 1, 1), P2 refers to (2, 2, 2, 2, 2, 2, 2, 2) and P3 refers to (4, 4, 4, 4, 4, 4, 2, 1); here, the 8 numbers are the parallelism factors for the 8 layers of AlexNet. For VGG-16, P1, P2, P3 and P4 refer to (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1), (2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2), (4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4) and (8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 1, 1, 1), respectively, for the 16 layers. Finally, for ResNet-18, P1 refers to (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1) for the 18 layers. We have considered 4-bit weights and activations in our evaluation.
The proposed PIM-DRAM shows performance benefits over the baseline and GPU on all networks. We achieve up to 6.5x and 23x peak speedup over ideal non-PIM and GPU, respectively.
\subsection{Bit-Wise AND Operation}
\begin{comment}
\begin{figure}
\includegraphics[width=.24\textwidth]{img/0,0.png}\hfill
\includegraphics[width=.24\textwidth]{img/0,1.2.png}\hfill
\\[\smallskipamount]
\includegraphics[width=.24\textwidth]{img/1.2,0.png}\hfill
\includegraphics[width=.24\textwidth]{img/1.2,0.png}
\caption{This figure shows the simulation results for AND operation for (0,0) (top row left), (0,VDD) (top row right), (VDD,0) (bottom row left), (VDD,VDD) (bottom row right)}
\label{fig:hspice}
\end{figure}
\end{comment}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=\columnwidth]{img/ANDeg.pdf}
\caption{Example of AND operation}
\vspace*{-2pt}
\label{fig:and1}
\end{figure}
We perform the AND operation in a DRAM subarray as shown in Fig \ref{fig:and1}. We dedicate two extra rows, called compute rows (denoted by A and A-1 in the figure), into which we first copy the operands so as to maintain the original data. Initially, the operands are copied to A and A-1 using Rowclone \cite{rowclone}. The bitline is then precharged to VDD/2 and AND-WL is activated. Based on the data stored in A, cell capacitor A or A-1 gets connected to the BL through the PMOS and NMOS, respectively.
Subsequently, the sense amplifiers are turned on and the BL voltage gets amplified to either 0 or VDD based on the AND result.
Fig \ref{fig:and1} shows the three stages of the AND operation for the copied operands in rows A and A-1. It can be seen that each of these three stages is realized using an AAP operation. The addition of the compute rows with three extra transistors leads to very small ($<$ 1\%) area overhead at the subarray level.
\subsection{In-DRAM Multiplication}
\begin{figure}[htb]
\centering
\vspace*{-2pt}
\includegraphics[width=0.65\columnwidth]{img/mul.PNG}
\caption{Multiplication operation}
\vspace*{-2pt}
\label{fig:mulex}
\end{figure}
The multiplication operation can be broken down into AND and add operations. An example 4-bit multiplication is shown in Fig. \ref{fig:mulex}. Here A\textsubscript{n} and B\textsubscript{n} refer to the n\textsuperscript{th} bit of the two operands and P\textsubscript{n} refers to the n\textsuperscript{th} bit of the product. It is evident from the figure that the AND results in every column need to be added, along with the carry-in, to get P\textsubscript{n} and the carry-out. As an example, A\textsubscript{1}B\textsubscript{0}, A\textsubscript{0}B\textsubscript{1} and the carry-out of P\textsubscript{0} are added to get the result for P\textsubscript{1} and the carry required for the P\textsubscript{2} computation.
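This column-wise AND-and-add procedure can be expressed directly in software; a short sketch (ours, for illustration):
\begin{verbatim}
def column_multiply(a, b, n):
    """Schoolbook n-bit multiply: AND partial products per column,
    then add each column together with the incoming carry."""
    p, carry = 0, 0
    for col in range(2 * n - 1):
        total = carry
        for i in range(n):
            j = col - i
            if 0 <= j < n:
                total += ((a >> i) & 1) & ((b >> j) & 1)   # AND
        p |= (total & 1) << col        # sum bit of this column
        carry = total >> 1             # carry into the next column
    p |= carry << (2 * n - 1)          # top bit of the 2n-bit product
    return p

assert column_multiply(13, 11, 4) == 143
\end{verbatim}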
Next, we discuss realizing the multiplication operation in a DRAM subarray. The multiplication primitive requires 9 compute rows: A, A-1, B, B-1, Cin, Cin-1, Cout, Cout-1 and row0, as shown in Fig. \ref{fig:MACeg}. These reserved compute rows account for $<$1\% of the total subarray size. A split-line decoder is used to activate the 9 compute rows. It is worth mentioning that the multiplication does not require any peripheral circuit modification; therefore, it can be easily adopted in commodity DRAMs.
\begin{figure*}[htb]
\vspace*{-0pt}
\centering
\includegraphics[width=\textwidth]{img/mult.pdf}
\vspace*{-2pt}
\caption{Example of 2 bit Multiplication}
\label{fig:MACeg}
\vspace*{-8pt}
\end{figure*}
We illustrate the proposed in-DRAM multiplication with the example of a 2-bit multiplication in Fig. \ref{fig:MACeg}. row0 is used for storing 0's. First, row0 is copied to Cin and Cin-1; this serves as the carry-in for computing the LSB of the product. Next, the operands A\textsubscript{0} and B\textsubscript{0} are copied to A and A-1. Once the operands are copied, the AND-WL is activated, followed by turning on the Sense Amplifiers. After sense amplification, P0 is activated to store the result of the AND operation. Next, A\textsubscript{1} and B\textsubscript{0} are copied to A and A-1 respectively. As before, the AND-WL is activated; after charge sharing and sense amplification, A and A-1 are activated and both store the result of A\textsubscript{1} AND B\textsubscript{0}. Next, A\textsubscript{0} and B\textsubscript{1} are copied to B and B-1, followed by activating the AND-WL. After charge sharing, the Sense Amplifiers are turned on and B and B-1 are activated; B and B-1 now store the result of A\textsubscript{0} AND B\textsubscript{1}. After performing the two AND operations, the results of A\textsubscript{1}B\textsubscript{0} and A\textsubscript{0}B\textsubscript{1} need to be added. This primitive uses the majority-based adder discussed in \cite{mustafa}. As noted above, A\textsubscript{1}B\textsubscript{0} is present in A, A-1 and A\textsubscript{0}B\textsubscript{1} is present in B, B-1 because of the AND operation; hence we do not need to copy the operands to the compute rows, which leads to fewer AAP operations than the add in \cite{mustafa}. For addition, triple-row activation of A, B and Cin is performed to obtain the carry: after charge sharing and sense amplification, Cout is activated and stores Majority(A, B, Cin). \textoverline{Cout} is obtained with the help of a Dual Contact Cell \cite{ambit}. Cin stores the carry result for the next addition computation. The next step is quintuple-row activation of A-1, B-1, Cin, \textoverline{Cout}, \textoverline{Cout}. After charge sharing and sense amplification, P1 is activated to store the result of Majority(A-1, B-1, Cin, \textoverline{Cout}, \textoverline{Cout}). Cin is copied to Cin-1 so that both rows store the same value. For computing the result of the final column, A\textsubscript{1} and B\textsubscript{1} are copied to A and A-1 respectively. As before, the AND-WL is activated, followed by turning on the Sense Amplifiers; A and A-1 are then activated to store the result of A\textsubscript{1} AND B\textsubscript{1}. row0 is then copied to B and B-1 by activating them; this is done for adding the AND result to the carry-in to compute the final sum and carry-out. Triple-row activation of A, B and Cin and sense amplification are followed by activating P3 and Cout to calculate the final carry-out and store it in P3. The final step is a quintuple-row activation of A-1, B-1, Cin, \textoverline{Cout}, \textoverline{Cout} and sense amplification, followed by activating P2 to store the calculated sum. This completes the multiplication of the 2-bit operands; the final 4-bit result is stored in P0-P3.\newline
\indent It is observed that $2(1+2+\cdots+(n-1))+n = n^2$ AND operations are required, where $n$ refers to the number of bits in each operand. Similarly, a total of $2(1+2+\cdots+(n-2))+(n-1)+1 = (n-1)^2+1$ add operations are required for an $n$-bit multiplication. In addition, we need an initial copy operation for writing 0's to row0; each copy operation requires 1 AAP, while each AND and each add operation requires 3 AAP operations. Taking all the operations into account, an $n$-bit multiplication requires $\mathbf{3n^2+3(n-1)^2+4}$ AAP operations.
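These counts can be sanity-checked symbolically; a small sketch (the closed forms are the simplifications noted above):
\begin{verbatim}
def aaps_for_multiply(n):
    ands = 2 * sum(range(1, n)) + n                 # = n**2
    adds = 2 * sum(range(1, n - 1)) + (n - 1) + 1   # = (n-1)**2 + 1
    return 3 * ands + 3 * adds + 1                  # + initial row0 copy

for n in (2, 4, 8):
    assert aaps_for_multiply(n) == 3 * n**2 + 3 * (n - 1)**2 + 4
\end{verbatim}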
\section{Introduction}
Graphs in this paper are finite, and without loops or parallel edges. Let $A,B\subseteq V(G)$ be disjoint.
We say that $A$ is {\em complete} to $B$, or $A,B$ are {\em complete}, if every vertex in $A$ is adjacent to every vertex in $B$,
and similarly $A,B$ are {\em anticomplete} if no vertex in $A$ has a neighbour in $B$. We say $A$ {\em covers} $B$ if every vertex
in $B$ has a neighbour in $A$. A {\em pure pair} in $G$ is a pair $A,B$ of disjoint subsets of $V(G)$ such that $A,B$
are complete or anticomplete.
Jacob Fox~\cite{fox} proved:
\begin{thm}\label{foxthm}
For every sufficiently large positive integer $n$:
\begin{itemize}
\item for every $n$-vertex comparability graph $G$,
there is a pure pair $A,B$ in $G$ with
$|A|,|B| > \frac{n}{4\log_2 n}$;
\item there is an $n$-vertex comparability graph $G$
such that
there is no pure pair $A,B$ in $G$ with $|A|,|B|\ge \frac{15n}{\log_2 n}$.
\end{itemize}
\end{thm}
There is also a slightly stronger asymmetric result, by Fox, Pach and Toth~\cite{toth}:
\begin{thm}\label{toth}
There exists $c>0$
such that for every $n$-vertex comparability graph $G$ with $n>1$, either there is a complete pair $A,B$ with $|A|,|B|\ge cn$, or
there is an anticomplete pair $A,B$ with $|A|,|B|\ge cn/\log n$.
\end{thm}
Comparability graphs are perfect, and Fox
conjectured that something like \ref{foxthm} holds for all perfect graphs; more exactly:
\begin{thm}\label{foxconj}
{\bf Conjecture:} For every sufficiently large positive integer $n$ and every $n$-vertex perfect graph $G$,
there is a pure pair $A,B$ in $G$ with
$|A|,|B|\ge n^{1-o(1)}$.
\end{thm}
We will prove this conjecture, and several strengthenings.
To prove \ref{foxconj} itself, we will show that
\begin{thm}\label{mainthm1}
For all $c>0$, and all sufficiently large $n$, if $G$ is an $n$-vertex perfect graph, then there is
a pure pair $A,B$ in $G$ with $|A|,|B|\ge n^{1-c}$.
\end{thm}
We denote the number of vertices of a graph $G$ by $|G|$.
We can replace the ``sufficiently large'' condition in \ref{mainthm1} with a multiplicative constant; \ref{mainthm1} is equivalent to:
\begin{thm}\label{mainthm2}
For all $c>0$ there exists $\varepsilon>0$ such that if $G$ is a perfect graph with $|G|>1$, then there is
a pure pair $A,B$ in $G$ with $|A|,|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
This can be strengthened; we will show:
\begin{thm}\label{mainthm3}
For all $c>0$ there exists $\varepsilon>0$ such that if $G$ is a perfect graph with $|G|>1$, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
The complement graph of $G$ is denoted by $\overline{G}$.
A {\em hole} in $G$ is an induced cycle of length at least four; and an {\em antihole} in $G$ is an induced subgraph
whose complement graph is a hole in $\overline{G}$. Perfect graphs have no holes or antiholes of odd length,
but we will show that it is not necessary to exclude all odd holes and odd antiholes to have the result \ref{mainthm3};
it is enough to exclude one of each, of sufficient length. The next result is a strengthening of \ref{mainthm3}:
\begin{thm}\label{mainthm4}
Let $c>0$ with $1/c$ an integer, and let $\ell_1,\ell_2\ge 4c^{-1}+5$ be integers. Then there exists $\varepsilon>0$ such
that
if $G$ is a graph with $|G|>1$, with no hole of length $\ell_1$ and no antihole of length $\ell_2$, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
This can be further strengthened, as follows. Let us say $G$ {\em contains} $H$ if some induced subgraph of $G$ is isomorphic to $H$,
and $G$ is {\em $H$-free} otherwise.
If $X\subseteq V(G)$, $G[X]$ denotes the subgraph induced on $X$.
We say that a graph $H$ has {\em branch-length} at least $\ell$ if every cycle of $H$
has length at least $\ell$, and every two vertices of $H$ with degree at least three have distance at least $\ell$ in $H$.
Since a cycle of length $\ell_1$ has branch-length $\ell_1$, the next result strengthens \ref{mainthm4} and is the
main result of the paper:
\begin{thm}\label{mainthm}
Let $c>0$ with $1/c$ an integer, and let $H_1,H_2$ be graphs with branch-length at least $4c^{-1}+5$. Then there exists $\varepsilon>0$ such that
if $G$ is a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
\section{Reduction to the sparse case}
Let us say a graph $G$ is {\em $\varepsilon$-sparse} if every vertex has degree less than $\varepsilon|G|$. We say
$G$ is {\em $(\alpha,\beta)$-coherent} if there do not exist disjoint subsets $A,B$ of $V(G)$, anticomplete to each other,
such that $|A|\ge \alpha$ and $|B|\ge \beta$.
A one-vertex graph is $\varepsilon$-sparse for all $\varepsilon>0$,
and $(\alpha,\beta)$-coherent for all $\alpha,\beta>0$, so our standard hypothesis that $G$ is suitably coherent and suitably
sparse does not exclude the case $|G|=1$, and we always need to assume separately that $|G|>1$. But we observe:
\begin{thm}\label{big}
If $c, \varepsilon>0$, and $\varepsilon\le 1/2$, and $G$ is $\varepsilon$-sparse and $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent, and $|G|>1$,
then $|G|> 1/\varepsilon$.
\end{thm}
\noindent{\bf Proof.}\ \
Suppose that $|G|\le 1/\varepsilon$. If some distinct $u,v\in V(G)$ are non-adjacent,
$\{u\},\{v\}$ form an anticomplete pair, both of cardinality at least $\varepsilon|G|$, a contradiction.
So $G$ is a complete graph; but its maximum degree is less than $\varepsilon|G|$ and $\varepsilon\le 1/2$, which is impossible since $|G|>1$.
This proves \ref{big}.~\bbox
\bigskip
If $G$ is a graph and $v\in V(G)$, a {\em $G$-neighbour} of $v$ means a vertex of $G$ adjacent to $v$ in $G$.
A theorem of R\"odl~\cite{rodl} implies the following:
\begin{thm}\label{rodlthm}
For every graph $H$ and all $\eta>0$ there exists $\delta>0$ with the following property.
Let $G$ be an $H$-free graph. Then there exists $X\subseteq V(G)$ with $|X|\ge \delta |G|$, such that
one of $G[X]$, $\overline{G}[X]$ is $\eta$-sparse.
\end{thm}
Consequently, in order to prove \ref{mainthm}, it suffices to prove the following:
\begin{thm}\label{sparsethm}
Let $c>0$ with $1/c$ an integer, and let $H$ be a graph with branch-length at least $4c^{-1}+5$. Then there
exists $\varepsilon>0$ such that
every $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph $G$ with $|G|>1$ contains $H$.
\end{thm}
\noindent{\bf Proof of \ref{mainthm}, assuming \ref{sparsethm}.\ \ }
Let $c>0$ with $1/c$ an integer, and let $H_1,H_2$ have branch-length
at least $4c^{-1}+5$. For $i = 1,2$, choose $\varepsilon_i$ such that \ref{sparsethm} holds with
$\varepsilon=\varepsilon_i$ and $H=H_i$. Let $\eta=\min(\varepsilon_1,\varepsilon_2,1/2)$.
Choose $\delta$ such that \ref{rodlthm} holds with this value of $\eta$, taking $H=H_1$. Let $\varepsilon=\eta\delta$. We claim that $\varepsilon$ satisfies \ref{mainthm}.
Let $G$ be a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free.
We must show that there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
From the choice of $\delta$, there exists $X\subseteq V(G)$ with $|X|\ge \delta |G|$, such that
one of $G[X]$, $\overline{G}[X]$ is $\eta$-sparse; and by \ref{big} we may assume that $|G|> 1/\varepsilon$, and so $|X|\ge \delta|G|>\delta/\varepsilon=1/\eta>1$.
In the first case, since $\eta\le \varepsilon_1$, \ref{sparsethm} applied to $G[X]$
implies that there is
an anticomplete pair $A,B$ in $G[X]$ with $|A|\ge \eta|X|$ and $|B|\ge \eta|X|^{1-c}$.
Thus
$$|A|\ge \eta|X|\ge \eta\delta|G|=\varepsilon|G|$$
and
$$|B|\ge \eta|X|^{1-c}\ge \eta\delta^{1-c}|G|^{1-c}\ge \eta\delta|G|^{1-c}=\varepsilon|G|^{1-c},$$
as required. In the second case we argue similarly, working in $\overline{G}[X]$. This proves \ref{mainthm}.~\bbox
\bigskip
The remainder of the paper is devoted to proving \ref{sparsethm}.
\section{Finding a path of specified length}
A {\em levelling} in $G$ is a sequence $(L_0,L_1,\ldots, L_k)$ of disjoint subsets of $V(G)$ with $k\ge 1$ such that
\begin{itemize}
\item $|L_0|=1$;
\item $L_{i-1}$ covers $L_i$ for $1\le i\le k$; and
\item $L_0\cup\cdots\cup L_{i-2}$ is anticomplete to $L_i$ for all $i\in \{2,\ldots, k\}$.
\end{itemize}
For a levelling $\mathcal{L}=(L_0,L_1,\ldots, L_k)$, we denote $L_0\cup L_1\cup\cdots\cup L_k$ by $V(\mathcal{L})$.
We call $L_k$ the {\em base} of $\mathcal{L}$, and $V(\mathcal{L})\setminus L_k$ is called the
{\em heart} of $\mathcal{L}$. We call $k$ the {\em height} of the levelling, and the unique vertex in $L_0$ is the {\em apex}.
We call $L_{k-1}$ the {\em penultimate level} of the levelling (for want of a better name).
A path $P$ is {\em $\mathcal{L}$-vertical} if $V(P)\subseteq V(\mathcal{L})$ and $|V(P)\cap L_i|\le 1$ for $0\le i\le k$.
We need the following lemma:
\begin{thm}\label{repeat}
Let $\rho\ge 1$ be some real number, let $K,k>0$ be integers with $K>k$, and let $n_1,\ldots, n_K$ be non-negative integers,
all less than $\rho^{K/k-2-1/k}$.
Then there exists $i\in \{1,\ldots, K-k\}$ such that $\rho n_i\ge n_j$ for $j=i+1,\ldots, i+k$.
\end{thm}
\noindent{\bf Proof.}\ \ Suppose not; then for each $i\in \{1,\ldots, K-k\}$ there exists $f(i)$
such that $i<f(i)\le i+k$ and $\rho n_i< n_{f(i)}$. Define $x_1=1$
and $x_{i+1}=f(x_i)$ provided $x_i\le K-k$. Let $x_1,\ldots, x_t$ be defined
by this process; thus $K-k<x_t\le K$. Since $x_{i+1}-x_i\le k$ for each $i$,
it follows that $tk\ge K-1$. Since $n_{x_2}>\rho n_{x_1}$ and $n_{x_2}$
is an integer, it follows
that $n_{x_2}\ge 1$. Thus for $2\le i\le t$, $n_{x_i}\ge \rho^{i-2}$, and so
$n_{x_t}\ge \rho^{t-2}\ge \rho^{K/k-2-1/k}$, contrary to the hypothesis.
This proves \ref{repeat}.~\bbox
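Although not needed for the proof, \ref{repeat} is easy to test computationally; the following brute-force sketch (with arbitrary parameter choices) searches for the index $i$ guaranteed by the lemma:
\begin{verbatim}
import random

def witness(ns, rho, k):
    """Return i (1-indexed) with rho*n_i >= n_j for j = i+1..i+k."""
    K = len(ns)
    for i in range(K - k):
        if all(rho * ns[i] >= ns[j] for j in range(i + 1, i + k + 1)):
            return i + 1
    return None

rho, K, k = 2.0, 20, 3
bound = rho ** (K / k - 2 - 1 / k)
for _ in range(1000):
    ns = [random.randrange(int(bound)) for _ in range(K)]
    assert witness(ns, rho, k) is not None
\end{verbatim}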
Next we need:
\begin{thm}\label{findpath}
Let $c>0$ such that $c^{-1}$ is an integer, and define $r=2+1/c$.
Let $\ell\ge 1$ be an integer, and define $K=r^{\ell}-1$, and $k=r^{\ell-1}-1$.
Let $\varepsilon>0$, and let $G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph.
Let $B_0,B_1,\ldots, B_{K}\subseteq V(G)$ be disjoint, where $B_0\ne \emptyset$ and $B_1,\ldots, B_{K}$
each have cardinality
at least $r^{2\ell}\varepsilon|G|$.
Then either:
\begin{itemize}
\item there is an induced path of length $\ell$,
with vertices $p_0, p_1,\ldots, p_{\ell}$ in order, and
$$1\le t_1<t_2<\cdots<t_{\ell}\le K,$$
such that $p_0\in B_0$, and $p_i\in B_{t_i}$ for $1\le i\le \ell$; or
\item $|B_0|\le K\varepsilon|G|^{1-c}$, and
there are sets
$C_1,\ldots, C_{K-k}$ with union $B_0$, such that for
each $i$ with $1\le i\le K-k$, and each $j$ with $i\le j\le i+k$, at least
$r^{2\ell-2}\varepsilon|G|$ vertices in $B_j$ have no neighbour in $C_i$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \
We proceed by induction on $\ell$. Suppose first that $\ell=1$. If there is an edge between $B_0$ and $B_1\cup\cdots\cup B_K$, then the first bullet holds;
and if $B_0$ is anticomplete to $B_1\cup\cdots\cup B_K$,
then since $G$ is $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent and $|B_1|\ge r^{2\ell}\varepsilon|G|\ge \varepsilon|G|$, it follows that $|B_0|<\varepsilon|G|^{1-c}$, and the second bullet
holds, taking $C_1,\ldots, C_{K-k}=B_0$. Thus we may assume that $\ell\ge 2$ and the result holds for $\ell-1$.
Define $\rho=|G|^c$.
Let $B_0=\{v_1,\ldots, v_n\}$.
For all $i\in \{1,\ldots, K-k\}$ and all $j\in \{i,\ldots, i+k\}$, define $A^0_{i,j}=\emptyset$, and $A^0=\emptyset$.
Inductively for $h = 1,\ldots, n$ we will define
\begin{itemize}
\item a set $X^h_i\subseteq B_i$ for each $i\in \{1,\ldots, K\}$
\item the {\em type} of $v_h$ (one of the numbers $1,\ldots, K-k$);
\item a set $A^h_{i,j}\subseteq B_{j}$
for each $i\in \{1,\ldots, K-k\}$ and each $j\in \{i,\ldots, i+k\}$; and
\item a set $A^h$, which is the union of $A^h_{i,j}$ over all $i\in \{1,\ldots, K-k\}$ and all $j\in \{i,\ldots, i+k\}$
\end{itemize}
as follows. Suppose that $1\le h\le n$, and $A^{h-1}$ and $A^{h-1}_{i,j}$ are defined for all $i,j$ with
$1\le i\le K-k$ and $i\le j\le i+k$.
For $1\le i\le K$ let $X^{h}_i$
be the set
of vertices in $B_i\setminus A^{h-1}$ adjacent to $v_{h}$.
Since $(K+1)/(k+1)=2+1/c$, it follows that $K>(2+1/c)k+1$, and so $1/c<K/k-2-1/k$.
Hence for $1\le i\le K$,
$|X^h_i|\le |G|< \rho^{K/k-2-1/k}$.
By \ref{repeat} applied to the numbers $|X^{h}_1|,\ldots, |X^{h}_K|$,
there exists $t$ with $1\le t\le K-k$ such that $\rho |X^{h}_t|\ge |X^{h}_{j}|$ for $j=t,\ldots, t+k$. Choose some such $t$, which we call
the type of $v_{h}$. For each $i\in \{1,\ldots, K-k\}$ and each $j\in \{i,\ldots, i+k\}$ define
$A^{h}_{i,j}=A^{h-1}_{i,j}\cup X^{h}_j$.
This completes the inductive definition.
\\
\\
(1) {\em $\rho |A^h_{i,i}|\ge |A^h_{i,j}|$ for all $h \in \{1,\ldots, n\}$ and all $i\in \{1,\ldots, K-k\}$ and all
$j\in \{i,\ldots, i+k\}$.}
\\
\\
For fixed $h$, $i$ and $j$, $A^h_{i,j}$ is the disjoint union of the sets $X^{h'}_{j}$ over all $h'\in \{1,\ldots, h\}$ such that $v_{h'}$ has type $i$;
and $A^h_{i,i}$ is the disjoint union of the sets $X^{h'}_{i}$ over the same values of $h'$. Since $\rho |X^{h'}_i|\ge |X^{h'}_{j}|$
for each such $h'$, this proves (1).
\\
\\
(2) {\em If $v_h$ has type $i$, then every vertex of $B_j$ adjacent to $v_h$
belongs to $A^h$, for all $h\in \{1,\ldots, n\}$, all $1\le i\le K-k$, and all $j\in \{i,\ldots, i+k\}$.}
\\
\\
Let $x\in B_j$ be adjacent to $v_h$.
If $x\in A^{h-1}$, then the claim holds since $A^{h-1}\subseteq A^h$.
If $x\notin A^{h-1}$ then $x\in X^h_j$ from the definition of $X^h_j$, and since $v_h$ has type $i$, it follows that
$$x\in X^h_j\subseteq A^{h}_{i,j}\subseteq A^h.$$
This proves (2).
\bigskip
For $1\le i\le K-k$, let $C_i$ be the set of vertices in $B_0$ that have type $i$. Thus $C_1,\ldots, C_{K-k}$
are pairwise disjoint and have union $B_0$. We note that
$$r^{2\ell}-r^{2\ell-2}=(3+4/c+1/c^2)(k+1)^2\ge 2(k+1)^2.$$
\\
\\
(3) {\em We may assume that $|A^n_{i,j}|> k\varepsilon|G|$ for some $i\in \{1,\ldots, K-k\}$ and some $j\in \{i,\ldots, i+k\}$.}
\\
\\
Suppose not.
Let $1\le j\le K$. Since $A^{n}\cap B_j$ is the union of the sets $A^n_{i,j}$ for all $i\in \{1,\ldots, K-k\}$
with $j-i\in \{0,\ldots, k\}$, it follows that $|A^n\cap B_j|\le k(k+1)\varepsilon|G|$.
Now let $1\le i\le K-k$; by (2),
$C_i$ is anticomplete to $B_j\setminus A^n$, for all $j\in \{i,\ldots, i+k\}$. Since
$$|B_j\setminus A^n|=|B_j|-|B_j\cap A^n|\ge r^{2\ell}\varepsilon|G|-k(k+1)\varepsilon|G|\ge r^{2\ell-2}\varepsilon|G|\ge \varepsilon|G|$$
and $G$ is $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent, it follows that $|C_i|<\varepsilon|G|^{1-c}$.
Hence $|B_0|\le K\varepsilon|G|^{1-c}$; and so the
second bullet of the theorem holds. This proves (3).
\bigskip
From (3), we may choose $h\in \{1,\ldots, n\}$ minimum such that $|A^h_{i,j}|> k\varepsilon|G|$ for some $i\in \{1,\ldots, K-k\}$ and some $j\in \{i,\ldots, i+k\}$.
From the minimality of $h$, and since $G$ is $\varepsilon$-sparse, it follows that
$|A^h_{i,j}|\le (k+1)\varepsilon|G|$ for all $i\in \{1,\ldots, K-k\}$ and all $j\in \{i,\ldots, i+k\}$.
Consequently $|A^h\cap B_i|\le (k+1)^2\varepsilon|G|$ for $1\le i\le K$.
Now choose $i\in \{1,\ldots, K-k\}$ such that $|A^h_{i,j}|> k\varepsilon|G|$ for some $j\in \{i,\ldots, i+k\}$, and define $D$ to be the set
of all $v_{h'}\in C_i$ with $1\le h'\le h$. By (1),
$|A^h_{i,i}|> k\varepsilon|G|/\rho= k\varepsilon|G|^{1-c}$.
For $j=i+1,\ldots, i+k$, let $D_j$ be the set of all
vertices in $B_j$ that have no neighbour in $D$. Thus $B_j\setminus A^h\subseteq D_j$, and so
$$|D_j|\ge r^{2\ell}\varepsilon|G|-(k+1)^2\varepsilon|G|\ge r^{2\ell-2}\varepsilon|G|.$$
Since $|A^h_{i,i}|> k\varepsilon|G|^{1-c}$, it follows from the inductive hypothesis (with $\ell$ replaced by $\ell-1$, and $B_0$ replaced by $A^h_{i,i}$, and $B_1,\ldots, B_K$
replaced by $D_{i+1},\ldots, D_{i+k}$)
that there is an induced path of length $\ell-1$,
with vertices $p_1,\ldots, p_{\ell}$ in order, and
$$i+1\le t_2<\cdots<t_{\ell}\le i+k,$$
such that $p_1\in A^h_{i,i}$, and $p_j\in D_{t_j}$ for $2\le j\le \ell$. Choose $p_0\in D$ adjacent to $p_1$,
and define $t_1=i$; then the path with vertices $p_0, p_1,\ldots, p_{\ell}$ is induced and satisfies the first bullet of
the theorem. This proves \ref{findpath}.~\bbox
\bigskip
The main result of this section is the following:
\begin{thm}\label{getpath}
Let $c>0$, such that $c^{-1}$ is an integer.
Let $\ell, s,t>0$ be integers, and let $d>0$.
Then there exists $\varepsilon>0$ with the following property.
Let $G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph, and for
$i = 1,2$, let $\mathcal{L}_i$ be a levelling in $G$, with heart $H_i$,
apex $a_i$, and base $B_i$, satisfying:
\begin{itemize}
\item $V(\mathcal{L}_1)\cap V(\mathcal{L}_2)=\{a_1\}\cap \{a_2\}$;
\item $V(\mathcal{L}_2)\setminus \{a_2\}$ is anticomplete to $H_1$,
and if $a_1\ne a_2$ then $a_2$ is anticomplete to $H_1$;
\item $\mathcal{L}_1, \mathcal{L}_2$ have heights $s,t$ respectively; and
\item $|B_i|\ge d|G|$ for $i = 1,2$.
\end{itemize}
Then there is an induced path (or cycle, if $a_1=a_2$) of length $\ell+s+t$ between $a_1, a_2$, with vertex set a subset of $H_1\cup B_1\cup H_2\cup B_2$.
\end{thm}
\noindent{\bf Proof.}\ \
For each integer $i\ge 0$, let $k_{i}=(2+1/c)^{i}-1$.
Choose $\varepsilon>0$ such that $k_{\ell}k_{\ell+1}\cdots k_{\ell+t-h}\varepsilon<d$ for all $h\in\{0,\ldots,t\}$, and such that
$k_{\ell+t} (k_{2\ell+2t}+2)\varepsilon\le d$.
Define
$d_{i}=(2+1/c)^{2i}\varepsilon$ for each integer $i\ge 0$.
Let $\mathcal{L}_1=(L_0,\ldots, L_{s})$ and $\mathcal{L}_2= (M_0,\ldots, M_t)$ say; thus $L_s=B_1$ and $M_t=B_2$.
Let $Z_0=\emptyset$. For $i = 1,\ldots, k_{\ell+t}$, we will inductively define $Z_i\subseteq L_{s-1}$ with $Z_{i-1}\subseteq Z_i$,
and $D_i\subseteq L_s$ with $D_1,\ldots, D_i$ pairwise disjoint, satisfying
\begin{itemize}
\item $d_{\ell+t}|G|\le |D_i|\le (d_{\ell+t}+\varepsilon)|G|$
\item $D_i$ is the set of all vertices in $L_s$ that have a neighbour in $Z_i$ and have no neighbour in $Z_{i-1}$
(and so $D_1\cup\cdots\cup D_i$ is the set of all vertices in $L_s$ that have a neighbour in $Z_i$).
\end{itemize}
Thus, suppose that $1\le i\le k_{\ell+t}$, and
$Z_0,\ldots, Z_{i-1}$ and $D_1,\ldots, D_{i-1}$ are defined satisfying the conditions above. It follows that
$$|D_1\cup\cdots\cup D_{i-1}|\le (i-1) (d_{\ell+t}+\varepsilon)|G|\le k_{\ell+t}(d_{\ell+t}+\varepsilon)|G|-d_{\ell+t}|G|.$$
But $d_{\ell+t}+\varepsilon=(k_{2(\ell+t)}+2)\varepsilon$, and $k_{\ell+t} (k_{2\ell+2t}+2)\varepsilon\le d$, so
$$|D_1\cup\cdots\cup D_{i-1}|\le k_{\ell+t}(k_{2(\ell+t)}+2)\varepsilon|G|-d_{\ell+t}|G|\le (d-d_{\ell+t})|G|.$$
Hence at least $d_{\ell+t}|G|$ vertices in $L_s$ do not belong to $D_1\cup\cdots\cup D_{i-1}$. All these vertices have a neighbour
in $L_{s-1}\setminus Z_{i-1}$ and have no neighbour in $Z_{i-1}$; and so there exists $Z_i$ with
$Z_{i-1}\subseteq Z_i\subseteq L_{s-1}$, minimal such that at least $d_{\ell+t}|G|$ vertices in $L_s$
have a neighbour in $Z_i$ and have none in $Z_{i-1}$. Let this set of vertices be $D_i$. Since $G$ is $\varepsilon$-sparse,
the minimality of $Z_i$ implies that $|D_i|\le (d_{\ell+t}+\varepsilon)|G|$. This completes the inductive definition.
We will try to construct a path (or cycle) satisfying the theorem that starts from $a_2$, runs down through layers of $\mathcal{L}_2$,
jumps to some $D_i$, runs through some of $D_{i+1}, D_{i+2},\ldots, $ to make it the right length, and then runs up to $a_1$ through
the layers of $\mathcal{L}_1$. The sets $Z_i$ are designed so that when the path has run through enough $D_i$'s
to make its length correct, we can exit into the heart of $\mathcal{L}_1$ without picking up unwanted chords. Note that the only edges
between $V(\mathcal{L}_2)$ and $V(\mathcal{L}_1)$ have an end in the base of $\mathcal{L}_1$ (or are incident with $a_1$, if $a_1=a_2$).
Let $\mathcal{Q}=(Q_0,\ldots, Q_t)$ be a levelling in $G$. We say it is a {\em sub-levelling} of $\mathcal{L}_2$
if $Q_i\subseteq M_i$ for $0\le i\le t$. For $0\le h\le t$, we say that such a sub-levelling $\mathcal{Q}=(Q_0,\ldots, Q_t)$
is {\em $h$-good} if
\begin{itemize}
\item there exists $g\in \{1,\ldots, k_{\ell+t}-k_{\ell+t-h}+1\}$, and for each $j\in \{g,\ldots, g+k_{\ell+t-h}-1\}$
there exists $F_j\subseteq D_j$, such that $F_j$ is anticomplete to $Q_0\cup Q_1\cup\cdots\cup Q_{h-1}$, and $|F_j|\ge d_{\ell+t-h}|G|$; and
\item $|Q_t|>k_{\ell}k_{\ell+1}\dots k_{\ell+t-h}\varepsilon|G|^{1-c}$.
\end{itemize}
Since $d|G|>k_{\ell}k_{\ell+1}\dots k_{\ell+t}\varepsilon|G|^{1-c}$
it follows that $\mathcal{L}_2$ is $0$-good.
Choose $h\le t$ maximum such that some sub-levelling $\mathcal{Q}=(Q_0,\ldots, Q_t)$ of $\mathcal{L}_2$ is $h$-good,
and let $g$ and the sets $F_j\;(j\in \{g,\ldots, g+k_{\ell+t-h}-1\})$ be as in the definition.
Let $K=k_{\ell+t-h}$. Since each
$|F_j|\ge d_{\ell+t-h}|G|$, we may apply \ref{findpath}, replacing $B_0$ by $Q_h$, and replacing $\ell$ by $\ell+t-h$,
and replacing the sequence $B_1,\ldots, B_{k_{\ell}}$ by $F_g,\ldots, F_{g+K-1}$. There are two possible outcomes of \ref{findpath}.
The first outcome is: there is an induced path $P$ of length $\ell+t-h$,
with vertices $p_0, p_1,\ldots, p_{\ell+t-h}$ in order, and
$$g\le t_1<t_2<\cdots<t_{\ell+t-h}\le g+K-1,$$
such that $p_0\in Q_h$, and $p_i\in F_{t_i}$ for $1\le i\le \ell+t-h$. In this case, choose a $\mathcal{Q}$-vertical path $Q$ between
$a_2$ and $p_0$ (therefore of length $h$); choose a neighbour
$v$ of $p_{\ell+t-h}$ in $Z_{t_{\ell+t-h}}$; and choose an $\mathcal{L}_1$-vertical path $R$ between $a_1, v$ (therefore
of length $s-1$). We claim that
$$a_2\hbox{-} Q\hbox{-} p_0\hbox{-} p_1\hbox{-}\cdots\hbox{-} p_{\ell+t-h}\hbox{-} v\hbox{-} R\hbox{-} a_1$$
is an induced path or cycle. To show this, we must check that
\begin{itemize}
\item $V(P)\cap V(Q)=\{p_0\}$, and $V(P)\setminus \{p_0\}$ is anticomplete to $V(Q)\setminus \{p_0\}$; this is true since
$Q_0,\ldots, Q_{h-1}$ are anticomplete to $F_g,\ldots, F_{g+K-1}$ from the definition of $h$-good.
\item $V(P)\cap V(R)=\emptyset$, and the edge with ends $p_{\ell+t-h}$ and $v$ is the only edge between $V(P)$ and $V(R)$;
this is true since $L_0,\ldots, L_{s-2}$ are anticomplete to $L_s$, and $v\in Z_{t_{\ell+t-h}}$ is anticomplete to
$D_{t_1},\ldots, D_{t_{\ell+t-h-1}}$.
\item $V(Q)\cap V(R)=\{a_1\}\cap \{a_2\}$, and every edge between $V(Q)$ and $V(R)$ has an end in $\{a_1\}\cap \{a_2\}$;
this is true from the hypothesis.
\end{itemize}
This proves the path or cycle is indeed induced, and since it has length $h+(\ell+t-h)+1+(s-1)=\ell+s+t$, the theorem holds.
The second outcome of \ref{findpath} is: $\ell+t-h>0$, and $|Q_h|\le K\varepsilon|G|^{1-c}$, and,
writing $k=k_{\ell+t-h-1}$,
there are sets
$C_g,\ldots, C_{g+K-k-1}$ with union $Q_h$, such that for
each $i$ with $g\le i\le g+K-k-1$, and each $j$ with $i\le j\le i+k$, at least
$d_{\ell+t-h-1}|G|$ vertices in $F_j$ have no neighbour in $C_i$.
Since
$$|Q_h|\le K\varepsilon|G|^{1-c}< k_{\ell}k_{\ell+1}\dots k_{\ell+t-h}\varepsilon|G|^{1-c}<|Q_t|$$
it follows that $h<t$.
For $g\le i\le g+K-k-1$, let $X_i$ be the set of vertices in $Q_t$ that are joined to a vertex in $C_i$
by a $\mathcal{Q}$-vertical path. Since $\mathcal{Q}$
is a levelling and $C_g,\ldots, C_{g+K-k-1}$ have union $Q_h$, it follows that $X_g,\ldots, X_{g+K-k-1}$ have union $Q_t$;
and since
$|Q_t|>k_{\ell}k_{\ell+1}\dots k_{\ell+t-h}\varepsilon|G|^{1-c}$,
there exists $i$ with $g\le i\le g+K-k-1$ such that
$$|X_i|\ge |Q_t|/K>k_{\ell}k_{\ell+1}\dots k_{\ell+t-h-1}\varepsilon|G|^{1-c}.$$
For $h\le h'\le t$ let $Q'_{h'}$ be the set of vertices in $Q_{h'}$ that are joined to a vertex in $C_i$
by a $\mathcal{Q}$-vertical path. Thus $Q'_h=C_i$, and
$$(Q_0,\ldots, Q_{h-1},Q'_h, Q'_{h+1},\ldots, Q'_t)$$
is an $(h+1)$-good sub-levelling of $\mathcal{L}_2$, a contradiction. This proves \ref{getpath}.~\bbox
\bigskip
The next result is a form of \ref{getpath} with the same hypotheses except that the bases of the two levellings need not be disjoint.
\begin{thm}\label{getpath2}
Let $c>0$, such that $c^{-1}$ is an integer.
Let $\ell, s,t>0$ be integers, and let $d>0$.
Then there exists $\varepsilon>0$ with the following property.
Let $G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph.
For $i = 1,2$, let $\mathcal{L}_i$ be a levelling in $G$, with heart $H_i$,
apex $a_i$, and base $B_i$. Suppose that:
\begin{itemize}
\item for $i = 1,2$, $|B_i|\ge d|G|$;
\item $V(\mathcal{L}_1)\cap V(\mathcal{L}_2)=(\{a_1\}\cap \{a_2\}) \cup (B_1\cap B_2)$; and
\item every edge between $H_1$ and $V(\mathcal{L}_2)$ has both ends in $V(\mathcal{L}_1)$.
\end{itemize}
Let $\mathcal{L}_1, \mathcal{L}_2$ have heights $s,t$ respectively.
Then there is an induced path (or cycle, if $a_1=a_2$) of length $\ell+s+t$ between $a_1, a_2$, with vertex set a subset of $H_1\cup B_1\cup H_2\cup B_2$.
\end{thm}
\noindent{\bf Proof.}\ \
Given $d>0$ let $d'=d/3$, and choose $\varepsilon$ to satisfy \ref{getpath} with $d$ replaced by $d'$. We may assume that
$\varepsilon\le d'$ by reducing $\varepsilon$.
We will show that $\varepsilon$ satisfies the theorem. Let $G$, $\mathcal{L}_1$, $\mathcal{L}_2$ satisfy the
hypotheses of the theorem, and let $H_i, a_i, B_i\;(i=1,2)$ and $s,t$ be as above.
Let $\mathcal{L}_1=(L_0,\ldots, L_s)$; thus $L_s=B_1$.
Choose $L_{s-1}'\subseteq L_{s-1}$ minimal such that at least $d'|G|$ vertices in $B_1$ have a neighbour in $L_{s-1}'$.
Let $L_s'$ be the set of vertices in $L_s$ that have a neighbour in $L_{s-1}'$. Thus $d'|G|\le |L_s'|\le (d'+\varepsilon)|G|\le 2d'|G|$.
Let $\mathcal{L}_1'$ be the levelling $(L_0,\ldots, L_{s-2}, L_{s-1}', L_s')$. Let $\mathcal{L}_2'$ be the levelling obtained from
$\mathcal{L}_2$ by replacing its base by $B_2\setminus L_s'$. Then $|L_s'|\ge d'|G|$, and $|B_2\setminus L_s'|\ge d|G|-2d'|G|\ge d'|G|$. Hence $\mathcal{L}_1'$, $\mathcal{L}_2'$ satisfy the hypotheses of \ref{getpath}, and the result follows. This proves
\ref{getpath2}.~\bbox
\bigskip
When we apply \ref{getpath2}, in the final section, it will be to levellings $\mathcal{L}_1, \mathcal{L}_2$
such that the only edges between $V(\mathcal{L}_1), V(\mathcal{L}_2)$ are either incident with the common apex (if there is one)
or between the base of one and one of the last two terms of the other; so \ref{getpath2} is stronger than we need.
\section{Expansion}
If $X\subseteq V(G)$, $N(X)$ denotes the set of vertices in $V(G)\setminus X$ with a neighbour in $X$, and $N[X]=N(X)\cup X$.
A graph $G$ is {\em $\tau$-expanding} if $|N[X]|\ge \min(\tau|X|,|G|/2)$ for every subset $X\subseteq V(G)$.
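For intuition, the definition can be tested directly on small examples. The following brute-force sketch (ours, not part of the proof; the loop over all subsets is exponential and so feasible only for very small graphs) takes a graph as a dictionary mapping each vertex to its set of neighbours:
\begin{verbatim}
from itertools import combinations

def closed_nbhd(G, X):
    # N[X]: X together with all neighbours of its members
    N = set(X)
    for v in X:
        N |= G[v]
    return N

def is_tau_expanding(G, tau):
    # check |N[X]| >= min(tau|X|, |G|/2) for every nonempty X
    n = len(G)
    for r in range(1, n + 1):
        for X in combinations(G, r):
            if len(closed_nbhd(G, X)) < min(tau * len(X), n / 2):
                return False
    return True
\end{verbatim}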
\begin{thm}\label{makeexpanding}
Let $c>0$, and let $G$ be a $(|G|^{1-c}/4, |G|/4)$-coherent graph.
Then
there exists $Y\subseteq V(G)$ with $|Y|\le |G|^{1-c}/4$ such that $G\setminus Y$ is $|G|^{c}$-expanding.
\end{thm}
\noindent{\bf Proof.}\ \ Let $\alpha=|G|^{1-c}/4$ and $\tau = |G|^{c}$.
Choose $Y\subseteq V(G)$ maximal such that $|Y|\le \alpha$ and $|N[Y]|\le \tau|Y|$ (possibly $Y=\emptyset$).
Let $W=V(G)\setminus Y$.
If $G[W]$ is $\tau$-expanding then the theorem holds, so we assume not.
Thus
there exists $X\subseteq W$ such that $|N[X]\cap W|< \min(\tau|X|, |W|/2)$. Consequently $X\ne \emptyset$.
But
$$|N[X\cup Y]|\le |N[Y]|+|N[X]\cap W|\le \tau|Y|+\tau|X|,$$
and so from the maximality of $Y$, it follows that $|X\cup Y|> \alpha$.
Now $|N[Y]|\le \tau|Y|\le \tau\alpha=|G|/4$, and $|N[X]\cap W|\le |W|/2\le |G|/2$; so
$$|N[X\cup Y]|\le |N[Y]| + |N[X]\cap W|\le 3|G|/4.$$
Let $U=V(G)\setminus N[X\cup Y]$; then $|U|\ge |G|/4$.
But $X\cup Y$ is anticomplete to $U$, contradicting that $G$ is $(|G|^{1-c}/4, |G|/4)$-coherent.
This proves \ref{makeexpanding}.~\bbox
\bigskip
If $u,v$ are vertices of a graph $G$, it is sometimes convenient to call the distance between $u,v$ in $G$ the {\em $G$-distance}
between $u,v$.
We deduce:
\begin{thm}\label{smallrad}
Let $c>0$, and let $G$ be a $(|G|^{1-c}/4, |G|/4)$-coherent graph.
Then there exists $u\in V(G)$ and an integer $k< 1+1/c$, such that:
\begin{itemize}
\item at most $|G|/2$ vertices have $G$-distance less than $k$ from $u$; and
\item at least $|G|/4$ vertices have $G$-distance exactly $k$ from $u$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \
By \ref{makeexpanding}, there exists $Y\subseteq V(G)$ with $|Y|\le |G|^{1-c}/4$
such that $G\setminus Y$ is $\tau$-expanding, where $\tau = |G|^{c}$.
Choose $u\in V(G)\setminus Y$, and for each integer $i\ge 0$ let $M_i$ be the set of vertices of $G$
that have $G$-distance at most $i$ from $u$. Since $G\setminus Y$ is $\tau$-expanding,
it follows that for all $i\ge 0$, $|M_{i+1}\setminus Y|\ge \min(\tau|M_i\setminus Y|,|V(G)\setminus Y|/2)$.
For each $i\ge 1$, let $L_i=M_i\setminus M_{i-1}$.
\\
\\
(1) {\em There exists $k< 1+1/c$ such that $|L_k|\ge |G|/4$.}
\\
\\
Since $G\setminus Y$ is $\tau$-expanding, it is connected, and so there exists $i$ such that $V(G)\setminus Y\subseteq M_i$.
Since $|V(G)\setminus Y|\ge 3|G|/4$, we may choose $j\ge 0$ minimum such that $|M_j|\ge |G|^{1-c}/4$.
Hence for each $i\in \{1,\ldots, j-1\}$, $|M_{i}|< |G|^{1-c}/4<|V(G)\setminus Y|/2$,
and so $|M_{i}\setminus Y| \ge \tau|M_{i-1}\setminus Y|$ since $G\setminus Y$
is $\tau$-expanding. Since $|M_0\setminus Y|=1$,
it follows that $|M_{j-1}\setminus Y|\ge \tau^{j-1}$. Hence
$$|G|^{(j-1)c}=\tau^{j-1}\le |M_{j-1}\setminus Y|\le |M_{j-1}|<|G|^{1-c}/4,$$
and so $(j-1)c < 1-c$, that is, $j< 1/c$.
Since $G$ is $(|G|^{1-c}/4, |G|/4)$-coherent,
and $M_j$ is anticomplete to $V(G)\setminus N[M_j]$, it follows that $|V(G)\setminus N[M_j]|<|G|/4$. But also
$|M_{j-1}|<|G|^{1-c}/4$ (or $j=0$), and so
$|L_j\cup L_{j+1}|\ge |G|-|G|/4 - |G|^{1-c}/4\ge |G|/2$. Thus some $k\in \{j, j+1\}$ satisfies the claim. This proves (1).
\bigskip
Choose $k$ as in (1), minimum. Thus $|L_{k-1}|<|G|/4$, and $|M_{k-2}|< |G|^{1-c}/4$ since
$G$ is $(|G|^{1-c}/4, |G|/4)$-coherent. Thus $|M_{k-1}|\le |G|/2$.
This proves \ref{smallrad}.~\bbox
\section{Covering sequences}
Let us say a {\em covering} $\mathcal{L}$ in $G$ is a triple $(a,H,B)$ where $H,B$ are disjoint subsets of $V(G)$, $a\in H$, $H$ covers $B$, and
$G[H]$ is connected. We call $a$ the {\em apex}, $H$ the {\em heart}, and $B$ the {\em base} of the covering.
If for every vertex $v\in B$ there is a path between $a,v$, with all internal vertices in $H$, of length
at most $r$, we say that $(a,H,B)$ has {\em height} at most $r$, and the least such $r$ is the height.
For instance, if $(L_0,\ldots, L_k)$ is a levelling with $k>0$, and $L_0=\{a\}$, then $(a,L_0\cup\cdots\cup L_{k-1}, L_k)$ is a covering
of height $k$.
A {\em covering sequence} in $G$ is a sequence $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ of coverings in $G$, with hearts
$H_1,\ldots, H_n$ say, such that $H_1,\ldots, H_n$ are pairwise disjoint and pairwise anticomplete. We call $n$ its {\em length}.
We say such a sequence has {\em height} at most $r$ if each term has height at most $r$.
If $\mathcal{M}=(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ is a covering sequence, we define $V(\mathcal{M})$ to be the union of the sets
$V(\mathcal{L}_i)$ for $1\le i\le n$.
A covering sequence $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ is a
{\em multicovering} if $\mathcal{L}_1,\ldots, \mathcal{L}_n$ all have the same base, and then this common base is called the {\em base}
of the multicovering.
The main result of this section says that a graph with the usual properties (suitably coherent, suitably sparse) contains a
multicovering of length any specified constant, with height at most about $1/c$ and with base of linear cardinality. We prove this
in several steps. We begin with:
\begin{thm}\label{getmulticover1}
Let $n\ge 0$ be an integer. Let $c>0$ such that $1/c$ is an integer; let $\varepsilon>0$ with $\varepsilon\le 2^{-n-2}$; and let
$G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph.
Then there is a covering sequence $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ in $G$, where $\mathcal{L}_i=(a_i, H_i, B_i)$
for $1\le i\le n$, such that:
\begin{itemize}
\item for $1\le i<j\le n$, $H_i$ is anticomplete to $B_j$;
\item for $1\le i\le n$, $\mathcal{L}_i$ has height at most $1/c$; and
\item for $1\le i\le n$, $|B_i|\ge 2^{-i-1}|G|$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \ We proceed by induction on $n$. If $n=0$ the result is trivial, so we assume that $n\ge 1$ and the result
holds for $n-1$.
By \ref{smallrad}, there exists $u\in V(G)$ and an integer $k< 1+1/c$ (and hence $k\le 1/c$,
since $1/c$ is an integer), such that:
\begin{itemize}
\item at most $|G|/2$ vertices of $G$ have distance less than $k$ from $u$; and
\item at least $|G|/4$ vertices of $G$ have distance exactly $k$ from $u$.
\end{itemize}
For $0\le i\le k$ let $L_i$ be the set of all vertices of $G$ with distance exactly $i$ from $u$. Then
$(L_0,\ldots, L_k)$ is a levelling, with height at most $1/c$; and $|L_k|\ge |G|/4$, so the theorem holds for $n=1$.
Choose $L_{k-1}'\subseteq L_{k-1}$ minimal such that at least $|G|/4$ vertices in $L_k$ have a neighbour in $L_{k-1}'$, and let
$L_k'$ be the set of vertices in $L_k$ that have a neighbour in $L_{k-1}'$. Thus $|L_k'|\le (1/4+\varepsilon)|G|$ since $G$ is $\varepsilon$-sparse.
Let $\mathcal{L}_1$ be the levelling $(L_0,\ldots, L_{k-2}, L_{k-1}', L_k')$, and let $H_1$ be its heart. Thus
$|V(\mathcal{L}_1)|\le (3/4+\varepsilon)|G|$.
Let $W$ be the set of vertices of $G$ not in $V(\mathcal{L}_1)$; so $|W|\ge (1/4-\varepsilon)|G|$.
Since $W$ is anticomplete to $H_1$, and $1/4-\varepsilon\ge \varepsilon$
and $G$ is $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent, it follows that $|H_1|\le \varepsilon|G|^{1-c}$, and so
$|W|\ge (3/4-\varepsilon)|G|-\varepsilon|G|^{1-c}\ge |G|/2$.
Hence $G[W]$ is $(2\varepsilon)$-sparse and $((2\varepsilon)|W|^{1-c}, (2\varepsilon)|W|)$-coherent. From the inductive hypothesis applied to $G[W]$,
there is a covering sequence $(\mathcal{L}_2,\ldots, \mathcal{L}_n)$ in $G[W]$, where $\mathcal{L}_i=(a_i, H_i, B_i)$ for $2\le i\le n$,
such that:
\begin{itemize}
\item for $2\le i<j\le n$, $H_i$ is anticomplete to $B_j$;
\item for $2\le i\le n$, $\mathcal{L}_i$ has height at most $1/c$; and
\item for $2\le i\le n$, $|B_i|\ge 2^{-i}|W|\ge 2^{-i-1}|G|$.
\end{itemize}
But then $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ satisfies the theorem. This proves \ref{getmulticover1}.~\bbox
\bigskip
\begin{thm}\label{getmulticover2}
Let $n\ge 0$ be an integer, and let $m=2^{2n}$. Let $c>0$ such that $1/c$ is an integer, let $\varepsilon>0$
with $\varepsilon\le 2^{-m-2}$, and let
$G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph.
Then there is a covering sequence $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ in $G$, where $\mathcal{L}_i=(a_i, H_i, B_i)$ for $1\le i\le n$,
such that:
\begin{itemize}
\item for $1\le i\le n$, $\mathcal{L}_i$ has height at most $1/c$;
\item for $1\le i\le n$, $|B_i|\ge 2^{-m-1}|G|$;
\item either $B_1,\ldots, B_n$ are pairwise disjoint and $H_i$ is anticomplete to $B_j$ for all distinct
$i,j\in \{1,\ldots, n\}$, or $B_1=B_2=\cdots=B_n$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \
Choose $\mathcal{L}_1,\ldots, \mathcal{L}_m$ as in \ref{getmulticover1}, where each $\mathcal{L}_i$ has base of cardinality
at least $2^{-i-1}|G|$, where $\mathcal{L}_i=(a_i, H_i, B_i)$ for $1\le i\le m$.
Each has height at most $1/c$.
For $1\le i\le m$, $H_1,\ldots, H_{i-1}$ are anticomplete to $B_i$, but $H_{i+1},\ldots, H_m$ might have neighbours in $B_i$.
Choose $B_i'\subseteq B_i$ of cardinality at least $|B_i|/2^{m-i}\ge |G|/2^{m+1}$, such that for each $j\in \{i+1,\ldots, m\}$,
either every vertex in $B_i'$
has a neighbour in $H_j$, or none do. Let $\mathcal{L}_i'$ be the covering obtained from $\mathcal{L}_i$
by replacing its base by $B_i'$. Then $(\mathcal{L}_1',\ldots, \mathcal{L}_m')$ is a covering sequence, and for $1\le i<j\le m$,
$H_i$ is anticomplete to $B_j$, and either $H_j$ is anticomplete to $B_i$ or $H_j$ covers $B_i$.
By Ramsey's theorem, since $m=2^{2n}$, there exists $I\subseteq \{1,\ldots, m\}$ with $|I|=n$ such that
either
\begin{itemize}
\item for all distinct $i,j\in I$, $H_i$ is anticomplete to $B_j$ (and hence $B_i\cap B_j=\emptyset$), or
\item for all $i,j\in I$ with $i<j$, $H_j$ covers $B_i$.
\end{itemize}
In both cases the theorem holds. This proves \ref{getmulticover2}.~\bbox
\bigskip
Now we prove the main result of this section. Its proof is closely related to the proof of the main theorem of~\cite{cats}.
\begin{thm}\label{getmulticover3}
Let $c>0$ such that $1/c$ is an integer, and let $n\ge 0$ be an integer. Then there exist $\varepsilon,d>0$ with
the following property. If $G$ is an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph then
$G$ contains a multicovering of length $n$ and height at most $1+c^{-1}$, and with base of cardinality at least
$d|G|$.
\end{thm}
\noindent{\bf Proof.}\ \ Define $m=3^{n}$, and let $N=2^{2m}$. Let $x=2^{-N-1}$. Choose $\varepsilon,d>0$ such that
$\varepsilon,d\le x3^{-n}$.
It follows that $\varepsilon\le 2^{-N-2}$.
From \ref{getmulticover2}, we may assume that
there is a covering sequence $(\mathcal{L}_1,\ldots, \mathcal{L}_m)$ in $G$,
such that:
\begin{itemize}
\item $V(\mathcal{L}_1),\ldots, V(\mathcal{L}_m)$ are pairwise disjoint;
\item for $1\le i\le m$, $\mathcal{L}_i$ has height at most $1/c$;
\item for $1\le i\le m$, the base of $\mathcal{L}_i$ has cardinality at least $x|G|$; and
\item for all distinct $i,j\in \{1,\ldots, m\}$, every edge between $V(\mathcal{L}_i)$ and $V(\mathcal{L}_j)$ is between the base of
$\mathcal{L}_i$ and the base of $\mathcal{L}_j$.
\end{itemize}
Let $t, d_1,\ldots, d_t> 0$ be integers, where $d_1,\ldots, d_t\le n$. Let us say a {\em battery} of {\em type} $(d_1,\ldots, d_t)$ is a
sequence of $t$ multicoverings
$(\mathcal{M}_1,\ldots, \mathcal{M}_t)$ in $G$, such that:
\begin{itemize}
\item $V(\mathcal{M}_1),\ldots, V(\mathcal{M}_t)$ are pairwise disjoint;
\item for $1\le i\le t$, $\mathcal{M}_i$ has length $d_i$, and height at most $1+1/c$, and the first term of $\mathcal{M}_i$
has height at most $1/c$;
\item for $1\le i\le t$, the base of $\mathcal{M}_i$ has cardinality at least $x3^{1-d_i}|G|$;
\item for all distinct $i,j\in \{1,\ldots, t\}$, every edge between $V(\mathcal{M}_i)$ and $V(\mathcal{M}_j)$ is between the base of
$\mathcal{M}_i$ and the base of $\mathcal{M}_j$.
\end{itemize}
Thus $G$ contains a battery of type $(1,\ldots, 1)$, of length $m$. Choose a battery $\mathcal{B}$ of type $(d_1,\ldots, d_t)$ with $t$ minimum such that
$2^{d_1}+\cdots +2^{d_t}\ge m$.
Let $\mathcal{B}=(\mathcal{M}_1,\ldots, \mathcal{M}_t)$. For $1\le i\le t$, let the base of
$\mathcal{M}_i$ be $B_i$.
If some $d_i=n$, then the $i$th term of $\mathcal{B}$ is a multicovering satisfying the theorem, so we assume not. If
$t=1$ then $2^{d_1}\ge m=3^{n}\ge 2^{n}$, and so $d_1\ge n$, a contradiction; so $t\ge 2$, and $d_1,\ldots, d_t<n$.
Consequently each
$|B_i|\ge x3^{1-(n-1)}|G|\ge \varepsilon|G|$. By reordering the
terms of the battery,
we may assume that $d_t\le d_1,\ldots, d_{t-1}$.
Since $G$ is $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent, and $|B_t|\ge \varepsilon|G|$, for $1\le i<t$
there are fewer than $\varepsilon|G|^{1-c}\le 2|B_i|/3$ vertices in $B_i$ that have no neighbour in $B_t$. Hence
we may choose $X\subseteq B_t$ minimal such that for some $i\in \{1,\ldots, t-1\}$, at least $|B_i|/3$ vertices in $B_i$
have a neighbour in $X$. For $1\le i<t$, let $Y_i$ be the set of vertices in $B_i$ that have a neighbour in $X$, and $Z_i=B_i\setminus Y_i$.
By reordering, we may assume that $|Y_1|\ge |B_1|/3$. From the minimality of $X$, $|Y_i|\le |B_i|/3+\varepsilon|G|$ for $2\le i\le t-1$,
and so $|Z_i|\ge 2|B_i|/3-\varepsilon|G|\ge |B_i|/3$.
Let $\mathcal{M}_1=(\mathcal{L}_1,\ldots, \mathcal{L}_{d_1})$, and let the first term of $\mathcal{M}_t$ be $\mathcal{L}=(a,H,B_t)$.
Let $\mathcal{L}'$ be the covering $(a, H\cup X, Y_1)$, which therefore has height at most $1+1/c$. Let
$\mathcal{M}_1'$ be obtained from $\mathcal{M}_1$ by replacing its base by $Y_1$ and adding a new final term $\mathcal{L}'$; so
$\mathcal{M}_1'$ has length $d_1+1$. For $2\le i\le t-1$, let $\mathcal{M}_i'$ be obtained from $\mathcal{M}_i$
by replacing its base by $Z_i$. Then $(\mathcal{M}_1',\ldots, \mathcal{M}_{t-1}')$ is a battery of type $(d_1+1, d_2,\ldots, d_{t-1})$.
Since $d_1\ge d_t$, it follows that
$$2^{d_1+1}+\cdots +2^{d_{t-1}}\ge 2^{d_1}+\cdots +2^{d_t}\ge m,$$
a contradiction to the choice of $\mathcal{B}$. This proves \ref{getmulticover3}.~\bbox
\section{Making spiders}
Let $\mathcal{L}_1,\ldots, \mathcal{L}_n$ be coverings in $G$, such that
\begin{itemize}
\item $\mathcal{L}_1,\ldots, \mathcal{L}_n$ all have the same apex $a$;
\item for $1\le i\le n$ let $\mathcal{L}_i$ have heart $H_i$; then for $1\le i<j\le n$, $H_i\setminus \{a\}$ is disjoint from and
anticomplete to $H_j\setminus \{a\}$.
\end{itemize}
We call $(a,\mathcal{L}_1,\ldots, \mathcal{L}_n)$ a {\em spider} in $G$, and $a$ is its {\em apex}. Its {\em height} is the maximum of the heights of
$\mathcal{L}_1,\ldots, \mathcal{L}_n$, and its {\em length} is $n$. It has {\em mass} $b$ where $b$ is the minimum cardinality
of the bases of $\mathcal{L}_1,\ldots, \mathcal{L}_n$. The union of the hearts of $\mathcal{L}_1,\ldots, \mathcal{L}_n$ is called
the {\em heart} of the spider. We call $\mathcal{L}_1,\ldots, \mathcal{L}_n$ the {\em members} of the spider.
\begin{thm}\label{getspider}
Let $c>0$ such that $1/c$ is an integer, and let $n\ge 1$ be an integer. Then there exist $\varepsilon,d>0$ with
the following property. If $G$ is an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph with $|G|\ge 2$ then
$G$ contains a spider of length $n$ and height at most $2+2c^{-1}$, and with mass at least
$d|G|$.
\end{thm}
\noindent{\bf Proof.}\ \
Choose $\varepsilon,d'$ as in \ref{getmulticover3} (with $d'$ replacing $d$). By reducing $\varepsilon$ we may assume that
$\varepsilon\le d'/2$, and $\varepsilon<1/3$. Let $d=d'/2$. We claim that $\varepsilon,d$
satisfy the theorem. Let $G$ be as in the theorem; then \ref{getmulticover3} implies that
$G$ contains a multicovering $(\mathcal{L}_1,\ldots, \mathcal{L}_n)$ of length $n$ and height at most $1+c^{-1}$,
and with base $B$ of cardinality at least $d'|G|$.
Choose $a\in B$.
Let $1\le i\le n$, and let $H_i$ be the heart of $\mathcal{L}_i$.
Then $G[H_i\cup \{a\}]$ is connected, and every vertex of $H_i\cup \{a\}$ can be joined to $a$
by a path of $G[H_i\cup \{a\}]$ with length at most $1+2/c$. Hence $(a,H_i\cup \{a\}, B\setminus \{a\})$ is a covering of height
at most $2+2/c$, say $\mathcal{L}_i'$.
Consequently $(a, \mathcal{L}_1',\ldots, \mathcal{L}_n')$ is a spider of length $n$ and height at most $2+2c^{-1}$, and mass
$|B|-1\ge d'|G|-1$. By \ref{big}, $|G|\ge 1/\varepsilon\ge 2/d'$, and so $d'|G|-1\ge d|G|$. This proves \ref{getspider}.~\bbox
\bigskip
A {\em troupe} of spiders is a set of spiders such that their hearts are pairwise anticomplete.
\begin{thm}\label{gettroupe}
Let $c>0$ such that $1/c$ is an integer, and let $m\ge 0$, $n\ge 1$ be integers. Then there exist $\varepsilon,d>0$ with
the following property. If $G$ is an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph, then
$G$ contains a troupe of $m$ spiders, each of length $n$ and height at most $2+2/c$, and with mass at least
$d|G|$.
\end{thm}
\noindent{\bf Proof.}\ \
Let $\varepsilon',d'$ satisfy \ref{getspider} (with $\varepsilon,d$ replaced by $\varepsilon',d'$ respectively).
Define $\varepsilon=\varepsilon'(d'/2)^{m}$ and $d=d'(d'/2)^{m}$. We will show that $\varepsilon,d$
satisfy the theorem.
We proceed by induction on $m$. For $m=0$ the result is trivial, so we assume that $m\ge 1$ and the result holds for
$m-1$. By \ref{getspider} $G$ contains a spider of length $n$ and height at most $2+2c^{-1}$, and with mass at least
$d'|G|$; say $(a_1,\mathcal{L}_1,\ldots, \mathcal{L}_n)$. For $1\le j\le n$ let $\mathcal{L}_j=(a_1, H_j, B_j)$.
We may assume that every vertex of $H_j$ has $G[H_j]$-distance from $a_1$ at most $1+2c^{-1}$ (because any other vertices can
be deleted). Since
$H_j$ is connected, we can find a sequence of induced subgraphs of $H_j$, starting with the subgraph with one vertex $a_1$,
and adding vertices one by one, in such a way that each of these graphs is connected and every vertex is joined to $a_1$
by a path of length at most $1+2c^{-1}$ within the subgraph. Choose one of these subgraphs, say $H_j'$, the first such that
at least $d|G|$
vertices in $B_j$ have a neighbour in $H_j'$. Let $B_j'$ be the set of vertices in $B_j$ with a neighbour in $H_j'$.
Thus $d|G|\le |B_j'|\le (d+\varepsilon)|G|$ since $G$ is $\varepsilon$-sparse.
Let $\mathcal{L}_j'$ be the covering $(a_1,H_j', B_j')$.
The union of $B_1',\ldots, B_n'$ has cardinality at most $n(d+\varepsilon)|G|\le d'|G|/2$
and so there is a subset $X\subseteq B_1$ of cardinality at least $d'|G|/2$, anticomplete to $H_1',\ldots, H_n'$.
Then $\mathcal{S}_1=(a_1,\mathcal{L}_1',\ldots, \mathcal{L}_n')$ is a spider of length $n$ and height at most $2+2c^{-1}$, and with mass at least
$d|G|$; and $X$ is anticomplete to the heart of this spider.
Let $\varepsilon''= 2\varepsilon/d'$, and $d''= 2d/d'$.
Since $|X|\ge d'|G|/2$, it follows that $G[X]$ is $\varepsilon''$-sparse and $(\varepsilon''|X|^{1-c}, \varepsilon''|X|)$-coherent.
Since $\varepsilon''=\varepsilon'(d'/2)^{m-1}$ and $d''=d'(d'/2)^{m-1}$, we can apply the inductive hypothesis to $G[X]$, and deduce
that there is a troupe of $m-1$ spiders in $G[X]$, each of length $n$ and height at most $2+2c^{-1}$, and with mass at least
$d''|X|=(2d/d')|X|\ge d|G|$. But then adding $\mathcal{S}_1$ to this troupe gives a troupe of $m$ spiders satisfying the theorem.
This proves \ref{gettroupe}.~\bbox
\bigskip
So, our graph contains a troupe of spiders, of arbitrarily large cardinality, and each with arbitrarily large length, all of height
at most $2+2/c$, and with bases of linear cardinality.
The next result converts the members of these spiders to levellings, but we need to be careful exactly what we mean. In a levelling,
all edges from heart to base start from the penultimate level of the levelling. We need more than this: we need that for every two
levellings involved as members of spiders in the troupe, every edge from the heart of one to the base of
the other leaves from the penultimate level of the first, and this is more tricky to arrange.
Let us first state the definition formally. Let $n\ge 1$ and let $\mathcal{L}_1,\ldots, \mathcal{L}_n$ be levellings in a graph
$G$, all with the same apex $a$,
such that for $1\le i<j\le n$, every edge of $G$ between $V(\mathcal{L}_i)\setminus \{a\}$ and $V(\mathcal{L}_j)\setminus \{a\}$
is between the base of $\mathcal{L}_i$ and the base of $\mathcal{L}_j$. We call $(a,\mathcal{L}_1,\ldots, \mathcal{L}_n)$ a
{\em lobster} in $G$, and $a$ is its {\em apex}. Its {\em height} is the maximum height of $\mathcal{L}_1,\ldots, \mathcal{L}_n$,
and its {\em length} is $n$. It has {\em mass} $b$ where $b$ is the minimum cardinality
of the bases of $\mathcal{L}_1,\ldots, \mathcal{L}_n$. Its {\em heart} is the union of the
hearts of $\mathcal{L}_1,\ldots, \mathcal{L}_n$. We call $\mathcal{L}_1,\ldots, \mathcal{L}_n$ the {\em members} of the lobster.
A {\em troupe} of lobsters is a set $\{\mathcal{T}_1,\ldots, \mathcal{T}_m\}$ of lobsters, such that:
\begin{itemize}
\item for $1\le i<j\le m$, the heart of $\mathcal{T}_i$ is disjoint from and anticomplete to the heart of $\mathcal{T}_j$;
\item let $\mathcal{L}, \mathcal{M}$ each be a member of one of $\mathcal{T}_1,\ldots, \mathcal{T}_m$, with $\mathcal{L}\ne \mathcal{M}$,
and let $\mathcal{L}=(L_0,\ldots, L_k)$; then there is no edge between $L_0\cup\cdots\cup L_{k-2}$ and the base of $\mathcal{M}$.
\end{itemize}
\begin{thm}\label{spiderlobster}
Let $c>0$ such that $1/c$ is an integer, and let $m\ge 0$, $n\ge 1$ be integers. Then there exist $\varepsilon,d>0$ with
the following property. If $G$ is an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph, then
$G$ contains a troupe of $m$ lobsters, each of length $n$ and height at most $2+2/c$, and with mass at least
$d|G|$.
\end{thm}
\noindent{\bf Proof.}\ \ Let $\varepsilon,d'$ satisfy \ref{gettroupe} with $d$ replaced by $d'$.
Define $w(h)=d'(1+2/c)^{-h}$ for $0\le h\le mn$, and define $d=w(mn)$. We claim that $\varepsilon, d$ satisfy the theorem.
Let $G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph. By \ref{gettroupe}
there is a troupe of spiders $\{\mathcal{S}_1,\ldots, \mathcal{S}_m\}$ in $G$, each of length $n$ and height at most $2+2/c$,
and with mass at least $d'|G|$. Let the members of these spiders (in some order) be $\mathcal{L}_1,\ldots, \mathcal{L}_{mn}$,
and for $1\le i\le mn$ let $\mathcal{L}_i=(a_i,H_i, B_i)$. We shall convert them one by one to levellings, at each step
shrinking all the bases.
Let $X^0=B_1\cup\cdots\cup B_{mn}$, and for $1\le i\le mn$ let $X^0_i$ be the set of all vertices in $X^0$ with a neighbour in $H_i$
(thus $B_i\subseteq X^0_i$). Inductively, let $1\le h\le mn$, and suppose that we have defined $X^{h-1}$ and
$\mathcal{L}'_1,\ldots, \mathcal{L}_{h-1}'$, and $X^{h-1}_i$ for $1\le i\le mn$, satisfying:
\begin{itemize}
\item for $1\le i\le h-1$, $\mathcal{L}'_i$ is a levelling; its heart is a subset of $H_i$, and $a_i$ is its apex; its height
is at most $2+2/c$;
\item for $1\le i\le h-1$, $X^{h-1}_i$ is the set of all vertices in $X^{h-1}$ with a neighbour in the heart of $\mathcal{L}'_i$,
and for $h\le i\le mn$, $X^{h-1}_i$ is the set of all vertices in $X^{h-1}$ with a neighbour in the heart of $\mathcal{L}_i$;
\item for $1\le i\le h-1$, every edge between the heart of $\mathcal{L}'_i$ and $X^{h-1}$ has an end in the penultimate level of
$\mathcal{L}'_i$; and
\item for $1\le i\le mn$, $|X^{h-1}_i|\ge w(h-1)|G|$.
\end{itemize}
For $0\le j\le 1+2/c$, let $L_j$ be the set of vertices
in $H_h$ with $G[H_h]$-distance to $a_h$ exactly $j$. Thus every vertex $v\in X^{h-1}_h$ has a neighbour
in $L_j$ for some $j\le 1+2/c$, and the smallest such $j$ is called the {\em type} of $v$. There are only
$1+2/c$ possible types, and so there exists $k\in \{1,\ldots, 1+2/c\}$ such that at least $|X^{h-1}_h|/(1+2/c)$
vertices in $X^{h-1}_h$ have type $k$. Consequently, since
$$|X^{h-1}_h|/(1+2/c)\ge w(h-1)|G|/(1+2/c)= w(h)|G|,$$
there exists $k\in \{1,\ldots, 1+2/c\}$ minimum such that at least $w(h)|G|$
vertices in $X^{h-1}_h$ have type $k$. Let $X^h_h$ be the set of all vertices in $X^{h-1}_h$ that have type $k$.
Let $\mathcal{L}'_h=(L_0,\ldots, L_k,X^h_h)$. Thus $\mathcal{L}'_h$ is a levelling with height $k+1\le 2+2/c$.
Let $Z^h$ be the set of vertices in $X^{h-1}_h$ with type less than $k$.
Thus $|Z^h|\le (2/c)w(h)|G|$.
For $1\le i\le mn$ with $i\ne h$, define $X^h_i=X^{h-1}_i\setminus Z^h$. Thus $|X^h_i|\ge |X^{h-1}_i|-|Z^h|$,
and so $|X^h_i|\ge w(h-1)|G|- (2/c)w(h)|G|\ge w(h)|G|$.
Let $X^h$ be the union of the sets $X^h_i\;(1\le i\le mn)$.
This completes the inductive definition.
For $1\le i\le m$, let $\mathcal{T}_i$ be the lobster obtained from $\mathcal{S}_i$ by replacing each member $\mathcal{L}_j$
by $\mathcal{L}_j'$. This makes a troupe of lobsters satisfying the theorem, and so proves \ref{spiderlobster}.~\bbox
\section{Part assembly}
Now we put these several pieces together to prove \ref{mainthm}, which we restate:
\begin{thm}\label{mainthmagain}
Let $c>0$ with $1/c$ an integer, and let $H_1,H_2$ be graphs with branch-length at least $4c^{-1}+5$. Then there exists $\varepsilon>0$ such that
if $G$ is a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
We saw in section 2 that to prove \ref{mainthmagain}, it suffices to show:
\begin{thm}\label{sparsethmagain}
Let $c>0$ with $1/c$ an integer, and let $H$ be a graph with branch-length at least $4c^{-1}+5$. Then there exists $\varepsilon>0$ such that
if $G$ is an $H$-free graph with $|G|>1$ and with maximum degree at most $\varepsilon|G|$, then there is
an anticomplete pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
\noindent{\bf Proof.}\ \
By adding more vertices to $H$, we may assume that if $X$ denotes the set of vertices of $H$ that have degree different from two,
then every cycle of $H$ contains at least one vertex in $X$, and every path in $H$ with both ends in $X$ has length at least
$4c^{-1}+5$, and every cycle of $H$ has length at least $4c^{-1}+5$. Let $X=\{x_1,\ldots, x_m\}$.
Consequently $H$ can be obtained from the set $X$
of $m$ isolated vertices by adding
\begin{itemize}
\item paths with ends in $X$ and each of length at least $4/c+5$, and
\item cycles with exactly one vertex in $X$, of length at least $4/c+5$,
\end{itemize}
where every vertex of $V(H)\setminus X$ belongs to exactly one of these paths and cycles, and has degree exactly two in $H$.
Let the paths be $R_i\; (i\in I_1)$, and let the cycles be $R_i\;(i\in I_2)$, where $I_1\cap I_2=\emptyset$. For $i\in I_1$,
let $R_i$ have ends $(u_i, v_i)$ (ordered arbitrarily) and have length $\ell_i$, and for $i\in I_2$,
let $u_i=v_i$ be the unique vertex of $R_i$ in $X$, and let $R_i$ have length $\ell_i$.
Thus $H$ is determined up to isomorphism
by a knowledge of $X$, the pairs $(u_i,v_i)\;(i\in I_1\cup I_2)$, and the
numbers $\ell_i\;(i\in I_1\cup I_2)$. Let $I=I_1\cup I_2$, and for each $i\in I$ let $\alpha_i\in \{1,\ldots, m\}$ such that $x_{\alpha_i}=u_i$
and let $\beta_i\in \{1,\ldots, m\}$ such that $x_{\beta_i}=v_i$.
Let $n$ be the maximum degree of $H$. Choose $\varepsilon,d$ as in \ref{spiderlobster}. By reducing $\varepsilon$, we may assume that
$\varepsilon\le d/|H|$, and that $\varepsilon$ satisfies
\ref{getpath2} for all choices of integers $s,t>0$ satisfying $s,t\le 2+2c^{-1}$. We claim that
$\varepsilon$ satisfies the theorem. Let $G$ be an $\varepsilon$-sparse $(\varepsilon|G|^{1-c}, \varepsilon|G|)$-coherent graph. We must
show that $G$ contains $H$. By \ref{spiderlobster},
$G$ contains a troupe $\{\mathcal{S}_1,\ldots, \mathcal{S}_m\}$ of $m$ lobsters, each of length $n$ and height at
most $2+2c^{-1}$, and with mass at least $d|G|$.
Let $I=\{1,\ldots, p\}$. For $1\le h\le p$, choose a member $\mathcal{L}_{2h-1}$ of $\mathcal{S}_{\alpha_h}$
and a member $\mathcal{L}_{2h}$ of $\mathcal{S}_{\beta_h}$, in such a way that the levellings $\mathcal{L}_1,\ldots, \mathcal{L}_{2p}$ are all different.
(This is possible from the definition of $n$).
We will prove that for all $h\in I$, there is a path $P_h$ (or cycle, if the two apexes are equal) between the
apex of $\mathcal{L}_{2h-1}$ and the apex of $\mathcal{L}_{2h}$ of length $\ell_h$, such that the union of $P_1,\ldots, P_p$
makes an induced subgraph of
$G$ isomorphic to $H$. We will choose these paths and cycles in order. Also for $1\le h\le p$ we need to choose
a subset of the base of each $\mathcal{L}_k$ for $2h<k\le 2p$, and a subset of the penultimate level of $\mathcal{L}_k$; these
are denoted by $X^h_{k}$ and $Y^h_k$.
We denote by $P_h^*$ the set of vertices of $P_h$ different from its ends (if it is a path)
or different from the common apex of $\mathcal{L}_{2h-1}$ and $\mathcal{L}_{2h}$ (if it is a cycle). In either case $|P_h^*|=\ell_h-1$.
For $0\le h\le p$ let $w_h=(4p)^{p-h}d$.
For $1\le i\le 2p$, let $X^0_i$ be the base of $\mathcal{L}_i$, and let
$Y^0_i$ be the penultimate level of $\mathcal{L}_i$. Let $a_i$ be its apex.
Let $B$ be the union of the sets $X^0_i$ for $1\le i\le 2p$.
Now inductively, suppose we have chosen
the first $h-1$ paths or cycles, say $P_1,\ldots, P_{h-1}$, where $1\le h\le p$, satisfying:
\begin{itemize}
\item for $1\le g\le h-1$, if $a_{2g-1}\ne a_{2g}$, then $P_g$ is an induced path joining these apexes,
of length $\ell_g$; and if the apexes are equal then $P_g$ is a cycle of length $\ell_g$ containing this apex;
\item for $1\le g\le h-1$, $P_g^*$ is anticomplete
to the hearts of $\mathcal{L}_{2g+1},\ldots, \mathcal{L}_{2p}$.
\end{itemize}
Suppose moreover that for $2h+1\le i\le 2p$ we have chosen $X^{h-1}_{i}\subseteq X^0_{i}$ and $Y^{h-1}_i\subseteq Y^0_{i}$, such that
for all $i\in \{2h+1,\ldots, 2p\}$:
\begin{itemize}
\item $X^{h-1}_i$ is the set of all vertices in $B$ with a neighbour in $Y^{h-1}_i$;
\item $X^{h-1}_i\cup Y^{h-1}_i$ is anticomplete to $P_1^*,\ldots, P^*_{h-1}$;
\item $|X^{h-1}_{i}|\ge w_{h-1}|G|$.
\end{itemize}
We choose $P_h$ as follows. For $2h+1\le i\le 2p$, choose $Y_i\subseteq Y^{h-1}_i$ minimal such that at least $(w_h+\varepsilon(|H|-1))|G|$ vertices
in $B$ (necessarily all in $X^{h-1}_i$) have a neighbour in $Y_i$, and let $X_i$ be the set of vertices in $B$ with a neighbour
in $Y_i$. From the minimality of $Y_i$,
$$(w_h+\varepsilon(|H|-1))|G|\le |X_i|\le (w_h+\varepsilon|H|)|G|.$$
Let $Z=X_{2h+1}\cup\cdots\cup X_{2p}$.
Thus $|Z|\le (2p-2)(w_h+\varepsilon|H|)|G|$. For $i = 2h-1, 2h$ let $X_i=X^{h-1}_i\setminus Z$. Thus
$$|X_i|\ge |X^{h-1}_i|-|Z|\ge (w_{h-1}- (2p-2)(w_h+\varepsilon|H|))|G|$$
for $i = 2h-1, 2h$. Now $\varepsilon|H| \le d\le w_h$, so $w_h+\varepsilon|H| \le 2w_h$; and hence
$$|X_i|\ge (w_{h-1}- 2(2p-2)w_h)|G|\ge w_h|G|\ge d|G|$$
since $w_{h-1}=4pw_h$. For $i = 2h-1,2h$ let $\mathcal{L}_i'$ be the levelling obtained from $\mathcal{L}_i$
by replacing its base by $X_i$.
Now $\mathcal{L}_{2h-1}'$, $\mathcal{L}_{2h}'$ both have height at most $2+2/c$, and $\ell_h\ge 5+4/c$.
By \ref{getpath2} applied to the levellings $\mathcal{L}_{2h-1}'$, $\mathcal{L}_{2h}'$, there is an induced path $P_h$
between $a_{2h-1}, a_{2h}$ (or a cycle, if $a_{2h-1}=a_{2h}$) of length $\ell_h$, with vertex set included in
$V(\mathcal{L}_{2h-1}')\cup V(\mathcal{L}_{2h}')$. Consequently $P_h^*$ is anticomplete to $Y_i$ for $2h+1\le i\le 2p$,
and to $P_1^*,\ldots, P_{h-1}^*$. It might have neighbours in $X_i$ for $2h+1\le i\le 2p$, but since $|P_h^*|\le |H|-1$,
there are at most $\varepsilon(|H|-1)|G|$ such vertices. For $2h+1\le i\le 2p$, let $X^h_i$ be the set of vertices in $X_i$
with no neighbour in $P_h^*$. Thus $|X^h_i|\ge |X_i|-\varepsilon(|H|-1)|G|\ge w_h|G|$. This completes the inductive definition.
But then the union of $P_1,\ldots, P_p$ forms an induced subgraph isomorphic to $H$. This proves \ref{sparsethmagain},
and hence completes the proof of \ref{mainthm}.~\bbox
\section{Further extension}
We have found a kind of strengthening of \ref{mainthm}, that we state without proof.
For $\ell\ge 2$, let us say a graph $H$ is {\em $\ell$-handled}
if there are induced subgraphs $P_0,\ldots, P_k$ of $H$, for some $k\ge 1$, such that:
\begin{itemize}
\item $P_0$ is a forest;
\item every path of $P_0$ has length at most $\ell$;
\item $P_1,\ldots, P_k$ are pairwise vertex-disjoint paths, each of length at least $\ell$;
\item for $1\le i\le k$, $V(P_i)\cap V(P_0)$ consists exactly of the two ends of $P_i$; and
\item $H=P_0\cup P_1\cup\cdots\cup P_k$.
\end{itemize}
Then:
\begin{thm}\label{treethm}
There exists $\gamma>0$ with the following property.
Let $c>0$ with $1/c$ an integer, and let $H_1,H_2$ be $\gamma c^{-1}$-handled graphs. Then there exists $\varepsilon>0$ such that
if $G$ is a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
Then the essentials of \ref{mainthm} follow from \ref{treethm} by taking $P_0$ to be the subgraph of $H$ induced on the set of
all vertices of degree
at least three and their neighbours. But we feel that \ref{treethm} is not very satisfactory, because if the forest $P_0$
has long paths, the hypothesis requires the paths $P_1,\ldots, P_k$ to be long too. We would prefer a version of \ref{treethm} where we omit the
second bullet from the definition of $\ell$-handled, but so far we cannot prove it.
A weaker form of \ref{mainthm} will be proved for a wider class of graphs in~\cite{pure8}. Let $H$ be a graph.
If $E(H)\ne \emptyset$, we define the {\em congestion}
of $H$ to be the maximum of $1-(|J|-1)/|E(J)|$, taken over all subgraphs
$J$ of $H$ with at least one edge; and if $E(H)=\emptyset$, we define
the congestion of $H$ to be zero. Thus the congestion of $H$ is always non-negative, and equals zero
if and only if $H$ is a forest; and, for instance, long cycles have smaller congestion than short cycles.
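For instance, if $H$ is a cycle of length $k$, then every subgraph $J\ne H$ of $H$ with at least one edge is a forest, so $|E(J)|\le |J|-1$ and $1-(|J|-1)/|E(J)|\le 0$; the maximum in the definition is attained by $J=H$ itself, and the congestion of $H$ equals $1-(k-1)/k=1/k$.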
In~\cite{pure8} we will prove:
\begin{thm}\label{congthm}
Let $c>0$, and let $H_1,H_2$ be graphs with congestion at most $c/(9+15c)$.
Then there exists $\varepsilon>0$ such that
if $G$ is a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, then there is
a pure pair $A,B$ in $G$ with $|A|,|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
This is pleasing because of the following weak converse (easily proved with a random graph argument that we omit):
\begin{thm}\label{conjconverse}
Let $c>0$, and let $H_1,H_2$ be graphs both with congestion more than $c$.
There is no $\varepsilon>0$ such that
for every graph $G$ with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, there is
a pure pair $A,B$ in $G$ with $|A|, |B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
The result \ref{congthm} does not contain \ref{mainthm}, because in \ref{congthm} neither of $A,B$ have to have linear
cardinality. What if we ask for a strengthened version of \ref{congthm} that would contain \ref{mainthm} (by requiring one of $|A|,|B|$ to be linear)?
We pose that as a conjecture:
\begin{thm}\label{conj}
{\bf Conjecture:} For all $c>0$, there exists $\sigma>0$ with the following property. Let $H_1,H_2$ be graphs with congestion at most $\sigma$.
There exists $\varepsilon>0$ such that
if $G$ is a graph with $|G|>1$ that is $H_1$-free and $\overline{H_2}$-free, then there is
a pure pair $A,B$ in $G$ with $|A|\ge \varepsilon|G|$ and $|B|\ge \varepsilon|G|^{1-c}$.
\end{thm}
\section{Introduction}
Clogging is a phenomenon that usually arises when particles pass through narrow bottleneck structures~\cite{zuriguel2014invited}.
It is often expressed as the jamming arch formed by several interactive particles in front of the bottleneck, which significantly decreases or even stops the flow through the bottleneck.
The phenomenon occurs in different systems of inert particles such as granular material in the silo~\cite{zuriguel2014invited,arevalo2016clogging,hidalgo2013force}, dense suspension of colloidal particles~\cite{vanhecke2009jamming, zuriguel2014clogging, guariguata2012jamming,genoves2011Crystal} or electrons on the surface of liquid helium~\cite{rees2011point,rees2019commen}.
This type of clogging is usually stable if there is no external disturbance to break the balance between the particles that form the clogging~\cite{zuriguel2014invited, lazano2012break}.
Clogging can also be observed in the movement of animals~\cite{garcimartin2015flow} and humans~\cite{garcimartin2016flow} when congestion and high motivation coincide, for instance, when a large number of passengers at a train station enter carriages through a narrow train door with high motivation, or when fans at entrances to a concert hall are all trying to get in and find places near the stage~\cite{adrian2020crowds}.
Unlike the clogging of inert particles, clogging in systems with humans is temporary.
The duration of clogs depends on the motivation level of the pedestrians involved in the clogging~\cite{zuriguel2014clogging, garcimartin2015flow, garcimartin2016flow, adrian2020crowds}.
Although the clogging of humans may last a relatively long time in some extreme cases and sometimes even leads to severe injuries~\cite{taylor1990the, krausz2012love}, in most normal cases, its duration is short even in competitive situations~\cite{adrian2020crowds}.
In the literature, the short-term nature of the clogs is often attributed to fluctuations in the load on the humans in the clog.
This fluctuation, in turn, may be the result of the flexibility and elasticity of the human body.
Moreover, some clogs are avoided before forming, through complex steering mechanisms that include cognitive processes and control of the body.
However, microscopic models based on physical principles merely focus on guaranteeing volume exclusion.
They do not take the above-mentioned factors sufficiently into account, which can lead to prolonged clogs (clogs interrupting the flow for a long time) or even stable clogs similar to those of inert particles.
One study, \cite{kirchner2003friction}, examined this phenomenon using a cellular automaton (CA) model.
A friction parameter was introduced for an improved description of the clogging of pedestrians.
In another study, \cite{yanagisawa2009intro}, the friction parameter was extended to a function of the number of agents involved in the clog, yielding a more realistic result for the pedestrian outflow through the exit.
The effect of queuing and pushing behavior in front of the bottleneck on the overall dynamics of the crowd is explored using another CA model in \cite{fischer2020micro}, where a local pushing mechanism is used \cite{yates2015incorporating}.
Another global pushing mechanism is proposed in \cite{almet2015push}.
Furthermore, game theory is combined with CA models in some studies to better reproduce the movement of pedestrians~\cite{von2015spatial, zheng2011conflict}.
Prolonged clogs and stable clogs can also be observed in the social force models for pedestrian dynamics by increasing the desired velocity of the agents~\cite{helbing2000simula}.
Introducing random behavioral variations is important to mitigate these clogs in simulations~\cite{helbing2005self, hidalgo2017simula}.
Further studies~\cite{parisi2005microscopic,parisi2007morphological} used the social force model to study the effect of desired velocity and the exit door on the duration of clogs.
Clogs caused by higher desired velocity in force-based models result in lower flow through bottlenecks, a phenomenon also known as ``faster-is-slower''~\cite{helbing2005self, parisi2005microscopic, parisi2007morphological, pastor2015experiment, patterson2017clogging}.
In this paper, we focus on prolonged clogs occurring in the generalized collision-free velocity model (GCVM)~\cite{xu2019generalized}, a first-order microscopic model for pedestrian dynamics.
It is based on the collision-free speed model~\cite{tordeux2016collision}, and strictly follows the principle of volume exclusion to guarantee that there is no overlap among agents.
Therefore, clogs that result in long-term interruption of flow occur frequently in simulations of bottlenecks, particularly in narrow exits.
We aim to quantify these prolonged clogs by exploring decisive factors behind their occurrence in the bottleneck scenario, to purposefully improve the GCVM for reproducing pedestrians' movement more realistically.
The effect of three types of factors is examined in this study.
The first category includes two parameters of the spatial boundaries, i.e., the width and the position of the bottleneck exit.
The second category consists of algorithmic factors related to the implementation of the GCVM, including the time step size and the update scheme (e.g., sequential or parallel update) for the agents in the simulations.
Third, several model parameters, such as the strength of impact among agents in the GCVM and the shapes of the agents, are analyzed.
The results are used to ascertain the relationship between these factors and the occurrence of prolonged clogs.
This paper is organized as follows.
Section~\ref{sec:setup} introduces the bottleneck scenario for the simulations.
In section~\ref{sec:definition}, we briefly define the GCVM and introduce the method used for identifying prolonged clogs in numerical simulations.
Section~\ref{sec:experiment} compares simulation results obtained with various factors and shows the corresponding analysis.
Finally, we conclude with a discussion in section~\ref{sec:conclusion}.
\section{The bottleneck scenario for simulations}
\label{sec:setup}
The bottleneck scenario for simulations in this study is shown in figure~\ref{fig:geo}.
It is composed of three parts separated by red dashed segments:
the source area, a $\SI{8}{\metre}\times\SI{8}{\metre}$ square in gray; the moving area, a rectangular room measuring $\SI{10}{\metre}\times\SI{8}{\metre}$; and the exit, a corridor measuring $\SI{2}{\metre}\times w$.
In section~\ref{sec:experiment} different values of $w$ and $d$ (the position of the exit with respect to the lower horizontal wall) are used to determine the effect that the structure of the bottleneck has on the occurrence of prolonged clogs.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\linewidth]{1.pdf}
\caption{The bottleneck scenario for simulations.}
\label{fig:geo}
\end{figure}
In order to determine the decisive factors behind the appearance of prolonged clogs, simulations are implemented in the bottleneck scenario with different initial and boundary conditions.
In each simulation, 400 agents are generated at a constant rate at random positions in the source area; they move through the moving area and leave the scenario through the exit.
During this process, clogs may appear, leading to an interruption of the bottleneck flow.
A clog interrupting the flow longer than the time threshold $T_w$ is identified as a prolonged clog.
Since prolonged clogs can last a long time, and to ensure that a blockage does not stop the dynamics of the system and result in an impractically long simulation time, we solve them manually by moving one of the agents involved in the clog to free space in the source area.
The details of this manual clog-solving procedure will be elaborated in the next section.
The number of prolonged clogs in each simulation is recorded.
Then the results of different simulations are compared to explore the relationship between these factors and the occurrence of prolonged clogs.
The model for pedestrian dynamics and the approach to identify clogs are presented in the following section.
\section{Introducing the model and identifying prolonged clogs}
\label{sec:definition}
We begin this section with a brief introduction to the GCVM, which is the model used in this study.
It is defined as
\begin{equation}
\dot{X}_i(X_i,X_j,\dots)=\vec{e}_i(X_i,X_j,\dots)\cdot V_i(X_i,X_j,\dots),
\end{equation}
where $X_i$ is the position of agent $i$, $V_i$ is a scalar denoting its speed, and $\vec{e}_i$ is a unit vector representing its direction of movement.
The direction of movement $\vec{e}_i$ is calculated first by using the equation
\begin{equation}
\label{equ:2}
\vec{e}_i=u \cdot\bigg(\vec{e}_i^{~0}+\sum_{j\in J_i} k\cdot \exp\Big(\frac{-s_{i,j}}{D}\Big)\cdot \vec{n}_{i,j}+\vec{w}_i\bigg).
\end{equation}
Here, $u$ is a normalization constant such that $\Vert\vec{e}_i\Vert=1$.
$\vec{e}_i^{~0}$ is a unit vector representing the desired direction of the agent.
This is calculated according to reference lines indicated by the red dashed segments in figure~\ref{fig:geo}.
The vector $\vec{e}_i^{~0}$ points to the middle of the reference line when agent $i$ is not in the range of the reference line; otherwise, it points to the nearest point on the reference line.
More details of the calculation method are given in \cite{chraibi2012validated}.
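As a concrete illustration, the rule for $\vec{e}_i^{~0}$ could be sketched as follows (our notation, not the implementation of \cite{chraibi2012validated}; $p_1$ and $p_2$ denote the endpoints of the reference line):
\begin{verbatim}
import numpy as np

def desired_direction(x, p1, p2):
    # aim at the nearest point of the reference segment p1-p2 if the
    # agent's projection falls inside it, otherwise at its midpoint
    seg = p2 - p1
    t = np.dot(x - p1, seg) / np.dot(seg, seg)
    target = p1 + t * seg if 0.0 <= t <= 1.0 else 0.5 * (p1 + p2)
    e0 = target - x
    return e0 / np.linalg.norm(e0)
\end{verbatim}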
$J_i$ is the set of agents that contains all neighbors affecting the moving direction of agent $i$.
The magnitude of the impact from these neighbors is a function of $s_{i,j}$, which is the distance between the edges of agent $i$ and $j$ along the line connecting their centers.
Parameters $k>0$ and $D>0$ are used to calibrate the strength and range of the impact, respectively.
The effect of $k$ and $D$ on the strength of impact is shown in figure~\ref{fig:2a}; a similar analysis of the effect of $k$ and $D$ can be found in \cite{hein2019agent}.
The direction of the impact from agent $j$ to $i$ is denoted by the unit vector $\vec{n}_{i,j}$, which depends on the relative positions of both agents.
$\vec{w}_i$ is the effect from walls and obstacles in the room, which is calculated analogously to the effect from neighbors.
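For concreteness, the direction update of equation~\eqref{equ:2} can be sketched as follows (an illustrative simplification under our own naming, not the reference implementation; the neighbor terms $(s_{i,j},\vec{n}_{i,j})$ and the wall term $\vec{w}_i$ are assumed to be precomputed):
\begin{verbatim}
import numpy as np

def new_direction(e0, neighbor_terms, k, D, w):
    # equation (2): e0 is the desired direction, neighbor_terms is
    # a list of pairs (s_ij, n_ij), w is the wall/obstacle effect
    e = e0 + w
    for s_ij, n_ij in neighbor_terms:
        e = e + k * np.exp(-s_ij / D) * n_ij
    return e / np.linalg.norm(e)  # u normalizes the sum to unit length
\end{verbatim}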
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.45\linewidth]{2a.pdf}\label{fig:2a}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{2b.pdf}\label{fig:2b}}
\caption{
(a) The effect of $k$ and $D$ on the strength of impact.
(b) The speed functions with different $V^0_i$ and $T$.
\label{fig:ParaEffect}
}
\end{figure}
Then the speed on the new moving direction is obtained using the equation
\begin{equation}
\label{equ:3}
V_i=\min\Big\{V^0_i,\max\big\{0,\frac{s_i}{T}\big\}\Big\}.
\end{equation}
The speed is a function of $s_i$, the maximum distance agent $i$ can move in the new direction of movement $\vec{e}_i$ without overlapping with other agents.
In equation~\eqref{equ:3}, $V^0_i$ is the free speed of agent $i$, the speed that is achieved by moving without interference from other agents.
The parameter $T>0$ is the slope of the speed-headway relationship.
The speed functions with different $V^0_i$ and $T$ are shown in figure~\ref{fig:2b}.
The value of $T$ could be used to model the level of motivation in simulations.
A decrease of $T$ at constant $s_i$ leads to a smaller distance between agent $i$ and the nearest agent in front, which corresponds to behavior with a higher level of motivation.
A more detailed introduction to the GCVM can be found in~\cite{xu2019generalized}.
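In code, the speed rule of equation~\eqref{equ:3} is a one-liner; for example, with $V^0_i=\SI{1.34}{\metre\per\second}$ and $T=\SI{1}{\second}$, a headway of $s_i=\SI{0.5}{\metre}$ yields a speed of \SI{0.5}{\metre\per\second}:
\begin{verbatim}
def speed(s_i, v0, T):
    # equation (3): move at the free speed v0 unless the available
    # headway s_i enforces the lower speed s_i / T
    return min(v0, max(0.0, s_i / T))
\end{verbatim}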
Since there is no overlapping among agents in the GCVM, the space occupied by one agent is not available to other agents.
Therefore, clogging occurs when the directions of movement $\vec{e}_i$ and $\vec{e}_j$ of two agents point toward each other and the distance $s_{i,j}$ between them is too small for either agent to move.
A representative case is shown in figure~\ref{fig:3a}.
It could be formalized by
\begin{equation}
\label{equ:4}
\begin{cases}
s_{i,j}&\le \epsilon,\\
V_i+V_j&\le \lambda, \\
\vec{e}_{i,j}\cdot\vec{e}_i&<0,\\
\vec{e}_{i,j}\cdot\vec{e}_j&>0,
\end{cases}
\end{equation}
where $\vec{e}_{i,j}$ is the unit vector pointing from the center of agent $j$ to $i$,
$\epsilon$ is a threshold used to determine whether the distance between these two agents is small enough to form a clog,
and $\lambda$ is the threshold of speed to ascertain whether these two agents are almost stationary.
The last two conditions in equation~\eqref{equ:4} denote that these two agents are moving toward each other.
In the present study, $\epsilon$ is equal to the radius of agents, and $\lambda$ is set as $(V_i^0+V_j^0)/100$.
A clog formed by more than two agents contains at least two agents satisfying equation~\eqref{equ:4}.
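A direct implementation of this test might look as follows (a sketch assuming circular agents of radius $r$; the GCVM itself also admits other shapes):
\begin{verbatim}
import numpy as np

def is_clog_pair(x_i, x_j, e_i, e_j, v_i, v_j, r, v0_i, v0_j):
    # equation (4) for agents with centers x_i, x_j, unit directions
    # of movement e_i, e_j and current speeds v_i, v_j
    d = x_i - x_j
    dist = np.linalg.norm(d)
    e_ij = d / dist              # unit vector from j to i
    s_ij = dist - 2.0 * r        # edge-to-edge distance for circles
    eps = r                      # distance threshold
    lam = (v0_i + v0_j) / 100.0  # speed threshold
    return (s_ij <= eps and v_i + v_j <= lam
            and np.dot(e_ij, e_i) < 0.0 and np.dot(e_ij, e_j) > 0.0)
\end{verbatim}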
There could be many pairs of agents that satisfy the definition of clogging in equation~\eqref{equ:4} at any time and any place in the simulation.
We treat clogs that interrupt the flow longer than time period $T_w$ as prolonged clogs.
These prolonged clogs occur almost exclusively around exits, as the degree of freedom in the direction of movement is limited by the walls.
An example of a prolonged clog is shown in figure~\ref{fig:3b}: a clog consisting of four red agents has formed in front of the bottleneck and interrupts the flow.
After $T_w=\SI{2}{\second}$, the clog is solved manually by moving one of the agents in the clog.
As for clogs that do not interrupt the flow or last less than $T_w$ seconds, we do not destroy them artificially since these can be solved automatically by agents adjusting their direction of movement.
A clog formed by two red agents, which can be automatically solved in $T_w=\SI{2}{\second}$, is shown in figure~\ref{fig:3c}.
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.4\linewidth]{3a.pdf}\label{fig:3a}}
\subfigure[]{\includegraphics[width=0.25\linewidth]{3b.pdf}\label{fig:3b}}
\subfigure[]{\includegraphics[width=0.25\linewidth]{3c.pdf}\label{fig:3c}}
\caption{
(a): When two agents are about to cause clogging,
$\vec{e}_i$ and $\vec{e}_j$ are directions of movement of agent $i$ and $j$,
$\vec{e}_{i,j}$ is the unit vector pointing from the center of agent $j$ to agent $i$,
$s_{i,j}$ is the distance between the edges of agent $i$ and $j$ along the line connecting their centers.
(b): A prolonged clog is manually solved after interrupting the flow for \SI{2}{\second}.
(c): A clog is solved automatically by agents adjusting their direction of movement.
}
\label{fig:clogging}
\end{figure}
The flowchart in figure~\ref{fig:removepro} illustrates how to count the prolonged clogs in simulations, where $t$ is the current time, $t_p$ is the time at which the last agent enters the exit, $t_m$ is the time of the last manual clog-solving process, $N_s$ is the number of prolonged clogs, $\Delta t$ is the time step size in the simulation, and $t_c$ is the smaller of $t-t_p$ and $t-t_m$.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\linewidth]{4.pdf}
\caption{The process of solving and counting prolonged clogs.
$t$ is the current time,
$t_p$ is the time of the last agent entering the exit,
$t_m$ is the time of the last manual clog-solving process,
$N_s$ is the number of prolonged clogs,
$\Delta t$ is the time step size in the simulation,
$t_c$ is the smaller of $t-t_p$ and $t-t_m$,
$T_w$ is the time threshold.}
\label{fig:removepro}
\end{figure}
At each time step of a simulation, a non-zero flow through the measurement line between the moving area and the exit indicates that no prolonged clog occurs.
Otherwise, we will check whether $t_c$ is greater than the threshold $T_w$, and whether there are agents satisfying the definition of clogging in equation~\eqref{equ:4}.
A prolonged clog is identified if these two conditions are met.
It is treated as a new clog if $t_p$, the time when the last agent crossed the bottleneck, is not less than $t_m$, the time of the last manual removal of an agent.
Regardless of whether the clog is new or already existing, one of the two agents forming the closest clog to the exit is moved manually to free space in the source area.
It should be noted that breaking up a prolonged clog may require more than one manual clog-solving process, which results in $t_p < t_m$.
The number of prolonged clogs is counted from the beginning of the simulation to the last agent leaving the simulation scenario.
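In code form, the procedure of figure~\ref{fig:removepro} reduces to the loop sketched below (Python; the \texttt{sim} object and its helper methods are placeholders of ours for the corresponding model routines, not an actual API).
\begin{verbatim}
def count_prolonged_clogs(sim, T_w=2.0):
    """Sketch of the solving-and-counting procedure of figure 4."""
    N_s, t_p, t_m = 0, 0.0, 0.0
    while sim.agents_left():
        sim.step()                        # advance one time step
        t = sim.time
        if sim.flow_through_exit():       # non-zero flow: no prolonged clog
            t_p = t
            continue
        t_c = min(t - t_p, t - t_m)
        pairs = sim.clogging_pairs()      # pairs satisfying equation (4)
        if t_c > T_w and pairs:
            if t_p >= t_m:                # a new clog, not a persisting one
                N_s += 1
            i, _ = min(pairs, key=sim.pair_distance_to_exit)
            sim.move_to_source(i)         # manual clog-solving process
            t_m = t
    return N_s
\end{verbatim}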
\section{Simulation results}
\label{sec:experiment}
In each simulation, one or two factors were selected for variation.
The other factors were set to default values as shown in table~\ref{tab:defaultPara}.
\begin{table}[H]
\centering
\caption{Default values of factors in simulations. $w$ is the width of the exit, $d$ is the distance between the center of the exit and the lower horizontal wall of the moving area, $k$ and $D$ are parameters used to calibrate the strength and range, respectively, of the impact from neighbors in the movement direction, $V^0_i$ is the free speed, and $T$ is the slope of the speed-headway relationship.}
\label{tab:defaultPara}
\setlength{\tabcolsep}{3pt}
\begin{tabular}{cc}
\hline
Factors & Default values\\
\hline
Agent generation rate & 8 Agents/$\SI{}{\second}$ \\
Agent shape & circle ($r=\SI{0.2}{\meter}$) \\
Update method & parallel update \\
Time step size $\Delta t$ & $\SI{0.05}{\second}$ \\
$w$ (figure~\ref{fig:geo}) & $\SI{0.8}{\meter}$ \\
$d$ (figure~\ref{fig:geo}) & $\SI{4}{\meter}$ \\
$k$ (equation~\eqref{equ:2}) & 3 \\
$D$ (equation~\eqref{equ:2}) & $\SI{0.1}{\meter}$ \\
$V_i^0$ (equation~\eqref{equ:3}) & $\SI{1.34}{\meter\per\second}$ \\
$T$ (equation~\eqref{equ:3}) & $\SI{0.3}{\second}$ \\
\hline
\end{tabular}
\end{table}
To improve the efficiency of the simulations, a preliminary series of simulations was implemented to select a suitable $T_w$, the time span between the formation and artificial termination of a prolonged clog, for the subsequent simulations.
We ran simulations in four bottleneck scenarios, where the value of $w$ was 0.8, 1.0, 1.2, and 1.6~$\SI{}{\meter}$, respectively.
For each scenario, simulations with $T_w$ from $\SI{0}{\second}$ to $\SI{4}{\second}$ were implemented.
We ran each simulation four times with different distributions of agents in the source area.
The relationship between $T_w$ and the mean value of $N_s$, the number of prolonged clogs, over the four runs is shown in figure~\ref{fig:findTw}, where the error bars indicate the standard deviations.
The results for scenarios with different values of $w$ are represented by different markers and colors.
In all four scenarios, $N_s$ did not change significantly when $T_w$ was longer than $\SI{2}{\second}$.
Therefore, this value for $T_w$ was selected for the following simulations.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{5.pdf}
\caption{The correlation between $N_s$ (the number of prolonged clogs) and $T_w$ (the time span between the formation and artificial termination of a prolonged clog) for different values of $w$ (the width of the exit).
The error bars show the standard deviations.}
\label{fig:findTw}
\end{figure}
In the following, we ran each simulation four times.
The mean value of $N_s$ from the four runs reflects the effect of the factors observed on the occurrence of prolonged clogs.
Moreover, the time lapse between two consecutive agents entering the exit, and the trajectories of agents were analyzed for the selected factors.
\subsection{Parameters of spatial boundaries}
\label{sec:spatial}
The effects of the width and the position of the exit are explored in this subsection.
Three exit positions ($d=$ 4.0, 2.0, or $w/2$ $\SI{}{\meter}$) and six widths ($w=$ 0.8, 1.0, 1.2, 1.6, 2.0, or 2.5 $\SI{}{\meter}$) were selected for the simulations.
The exit was located in the middle of two lateral walls of the moving area when $d=\SI{4}{\meter}$ and adjacent to the lower horizontal wall when $d=w/2$.
Figure~\ref{fig:6} shows the correlation between $N_s$ and $w$ for different values of $d$.
The position of the exit does not alter the fact that $N_s$ decreases to zero as $w$ increases.
Moreover, there is no prolonged clog when the exit is wider than $\SI{1.6}{\meter}$ for all three positions.
Besides the effect of $w$, when $d=w/2$ (the exit is adjacent to the lower horizontal wall of the moving area), $N_s$ was significantly smaller than for the other two positions.
We assume that this difference is caused by the reduced degree of freedom in the possible directions in which agents can move.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{6.pdf}
\caption{The correlation between $N_s$ (the number of prolonged clogs) and $w$ (the width of the exit) for different values of $d$ (the distance between the center of the exit and the lower horizontal wall of the moving area).
The error bars show the standard deviations.}
\label{fig:6}
\end{figure}
In order to quantitatively analyze the influence of the width of the exit ($w$) on the clogs, we examined the time lapses $\delta$ between two consecutive agents passing the exit, for different values of $w$.
The value of $\delta$ reflects the sustained time of clogs interrupting the flow.
The probability distribution function $P(t>\delta)$, also known as the survival function, is sensitive to changes in the spatial boundaries, e.g. the width of the bottleneck \cite{zuriguel2014clogging,garcimartin2015flow,garcimartin2016flow,garcimartin2014experiment}.
We analyzed the results of simulations when $d = \SI{4}{\meter}$.
The survival functions of different values of $w$ are compared in figure~\ref{fig:7a}.
It can be observed that the probability of a higher value of $\delta$ decreases as $w$ increases.
Besides, the occurrence of prolonged clogs leads to plateaus in the survival functions for $w = \SI{0.8}{\meter}$ and $w = \SI{1.0}{\meter}$.
In these two cases, the actual values of $\langle\delta\rangle$, the mean time lapse, are unknown because clogs lasting longer than $\SI{2}{\second}$ are manually solved.
In fact, the actual value of $\langle\delta\rangle$ without manual removal of clogs would probably tend to infinity.
Nevertheless, in order to obtain an estimate for the lower bound of $\langle\delta\rangle$ and study its dependence on $w$, we treated all $\delta>\SI{2}{\second}$ as $\delta=\SI{2}{\second}$ in the calculation of the mean value of $\delta$.
The correlation between the value of $\langle\delta\rangle$ and $w$ is shown in figure~\ref{fig:7b}.
The mean value and standard deviation of $\delta$ both decrease with increasing $w$.
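Both quantities are straightforward to extract from the recorded time lapses. The sketch below is ours, not the paper's analysis code, and assumes the lapses of one run are available as an array.
\begin{verbatim}
import numpy as np

def survival(deltas, ts):
    """Survival function P(t > delta) evaluated on a grid ts."""
    deltas = np.asarray(deltas)
    return np.array([(deltas > t).mean() for t in ts])

def censored_mean(deltas, cap=2.0):
    """Lower bound on <delta>: lapses above the cap count as the cap."""
    return np.minimum(np.asarray(deltas), cap).mean()
\end{verbatim}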
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.45\linewidth]{7a.pdf}\label{fig:7a}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{7b.pdf}\label{fig:7b}}
\caption{
(a): The survival functions of $\delta$ for simulations with different values of $w$ when $d = \SI{4}{\meter}$.
(b): The correlation between the $\langle\delta\rangle$ (the mean time lapse between two consecutive agents entering the exit) and $w$ when $d = \SI{4}{\meter}$.
The error bars show the standard deviations.
}
\end{figure}
\subsection{Algorithmic factors}
The effects of the update method and of the time step size $\Delta t$ used to solve the equation of motion are analyzed in this subsection.
Two update methods were adopted: the parallel update and the sequential update.
When we used the parallel update, the direction of movement, speed and location of all the agents were updated at the same time.
When the sequential update was used, the direction of movement, speed and location of agents were updated one by one according to the distance to the exit.
The agents near the exit had more effect on the dynamics of the system than the agents further away from the exit.
Therefore, the agent with a greater effect, i.e., the agent closer to the exit, was updated first in the sequential update.
For each update method, simulations were performed with different values of $\Delta t$ from $\SI{0.01}{\second}$ to $\SI{0.125}{\second}$.
The correlation between $N_s$ and $\Delta t$ for two update methods is shown in figure~\ref{fig:technical}.
The effect of $\Delta t$ on $N_s$ is marginal for both update methods.
To explain the reason behind the results, an extreme case is considered here where two agents $i$ and $j$ are moving directly toward each other, which means $\vec{e}_i=-\vec{e}_j$.
We assume that their direction of movement is fixed, and $s_{i,j}<T \cdot V^0_i$.
According to equation~\eqref{equ:3}, their speeds $V_i$ and $V_j$ are both equal to $s_{i,j}/T$.
They will not overlap and, consequently, form a clog if their speeds satisfy
\begin{equation}
(V_i+V_j)\cdot \Delta t\le s_{i,j},
\end{equation}
which can be transformed to $2\cdot \Delta t \le T$.
This example illustrates that adopting a lower value of $\Delta t$ or substituting the sequential update cannot hinder the occurrence of clogging, since the scarcity of available space is not changed.
Therefore, the occurrence of prolonged clogs in the simulations with the GCVM is not an algorithmic issue.
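The inequality can be illustrated numerically. In the snippet below (ours), the default parameter values of table~\ref{tab:defaultPara} and an arbitrary gap satisfying $s_{i,j}<T\cdot V^0_i$ are assumed; the two agents approach each other but do not overlap within one time step.
\begin{verbatim}
dt, T = 0.05, 0.3            # default values from table 1
s_ij = 0.1                   # gap between the two agents (m)
v = s_ij / T                 # speed of each agent, equation (3)
gap_after_step = s_ij - 2 * v * dt
assert gap_after_step >= 0   # holds whenever 2 * dt <= T
print(gap_after_step)        # ~0.067 m: the gap shrinks but stays positive
\end{verbatim}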
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{8.pdf}
\caption{ The correlation between $N_s$ (the number of prolonged clogs) and $\Delta t$ (time step size) for different update methods.
The error bars show the standard deviations.}
\label{fig:technical}
\end{figure}
\subsection{Parameters of the GCVM}
\label{subsec:Model_factor}
In this subsection, the effect of several parameters in the GCVM is examined, including the slope of the speed-headway relationship $T$ and the free speed $V^0$ in equation~\eqref{equ:3}.
The strength and range of the effect of neighbors in the direction of movement, $k$ and $D$ in equation~\eqref{equ:2}, and the shapes of agents are also studied.
First, we looked at the effect of $T$ and $V^0$.
We ran simulations with different $V^0$ (1.34, 3.34, or 5.34 $\SI{}{\meter\per\second}$) and different $T$ (0.1, 0.3, 0.5, 0.8, or 1.0 $\SI{}{\second}$).
The correlation between $N_s$ and $T$ for different values of $V^0$ is shown in figure~\ref{fig:9a}.
For all three values of $V^0$, as $T$ increased, $N_s$ decreased initially, then remained relatively stable.
A decrease in $T$ led to a smaller slope of the speed-headway function, see equation~\eqref{equ:3} and figure~\ref{fig:2b}.
With decreasing $T$ agents move closer, which reduces the space available to resolve clogs.
This is in accordance with the finding that clogging is more likely to occur in scenarios with higher level of motivation \cite{garcimartin2016flow, hidalgo2017simula, pastor2015experiment, garcimartin2014experiment}.
The level of motivation has been shown to have an effect on the time lapse $\delta$ \cite{garcimartin2016flow, hidalgo2017simula, pastor2015experiment}.
Figure~\ref{fig:9b} shows the survival functions of $\delta$ in the simulations with $V^0=\SI{3.34}{\meter\per\second}$, which is similar to the results of granular media experiments \cite{pastor2015experiment}.
These survival functions can be approximately separated into two successive regimes by $\delta=\SI{1.2}{\second}$.
For $\delta \leq \SI{1.2}{\second}$, increasing $T$ leads to a higher value of $P(t>\delta)$, while for $\delta>\SI{1.2}{\second}$, increasing $T$ reduces $P(t>\delta)$.
The mean time lapse $\langle\delta\rangle$ for each regime is shown in figure~\ref{fig:9c}.
As we mentioned above, the actual values of $\langle\delta\rangle$ for the region of $\delta>\SI{1.2}{\second}$ are unknown as clogs lasting longer than $\SI{2}{\second}$ are manually solved.
Therefore, we treated all $\delta>\SI{2}{\second}$ as $\delta=\SI{2}{\second}$ in the calculation of the mean value of $\delta$.
The obtained values are the lower bound of the real ones.
Decreasing $T$ can be interpreted as increasing the level of motivation, which results in an increase in the free flow rate ($\delta \leq \SI{1.2}{\second}$) as well as an increase in the probability of clogging ($\delta>\SI{1.2}{\second}$).
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.45\linewidth]{9a.pdf}\label{fig:9a}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{9b.pdf}\label{fig:9b}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{9c.pdf}\label{fig:9c}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{9d.pdf}\label{fig:9d}}
\caption{
(a): The correlation between $N_s$ (the number of prolonged clogs) and $T$ (the slope of the speed-headway relation) for different values of $V^0$ (the free speed).
The error bars show the standard deviations.
(b): The survival functions of $\delta$ (the time lapse between two consecutive agents entering the exit) in the simulations with different values of $T$ when $V^0=\SI{3.34}{\meter\per\second}$.
(c): The mean time lapse $\langle\delta\rangle$ versus $T$ when $V^0=\SI{3.34}{\meter\per\second}$, for $\delta \leq \SI{1.2}{\second}$ and $\delta>\SI{1.2}{\second}$, respectively.
(d): The survival functions of $\delta$ (the time lapse between two consecutive agents entering the exit) in the simulations with different values of $V^0$ when $T=\SI{0.8}{\second}$.}
\label{fig:TandV0}
\end{figure}
However, a higher value of $V^0$, which can be interpreted as the expression of a higher motivation level, leads to lower values of $N_s$.
We analyzed the results of simulations with $T=\SI{0.8}{\second}$.
The survival functions of different values of $V^0$ are compared in figure~\ref{fig:9d}.
The probability of a higher value of $\delta$ decreases as $V^0$ increases.
According to equation~\eqref{equ:3}, the speed of agents in the GCVM depends on the overlapping-free spaces in their directions of movement.
Although a higher $V^0$ increases the maximum possible speed of agents, it has little effect in congested areas due to limited space.
Therefore, the effect of $V^0$ on the motivation level in the present simulations is marginal, as most of the investigated situations represent congested conditions.
Moreover, a higher $V^0$ allows agents to move faster in low density situations, which results in the reduction of $N_s$.
Note, that in force-based models~\cite{hidalgo2017simula} the driving force increases with increasing $V^0$, hence $V^0$ can have an effect in congested situations as well.
Then we examined the effect of $k$ and $D$.
Higher values of $k$ and larger $D$ led to agents being more stimulated to deviate from their desired directions.
We ran simulations with different values of $k$ (0.2, 0.5, 1, 2, 3, 4, 5, or 6) and different values of $D$ (0.01, 0.02, 0.05, or 0.1~$\SI{}{\meter}$).
The correlation between $N_s$ and $k$ for different values of $D$ is shown in figure~\ref{fig:10}.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\linewidth]{10.pdf}
\caption{
The correlation between $N_s$ (the number of prolonged clogs) and $k$ for different values of $D$.
The error bars show the standard deviations.
}
\label{fig:10}
\end{figure}
It can be seen that $N_s$ increases with increasing $k$ and increasing $D$.
We assume the reason for this is that lower values of $k$ and $D$ decrease the neighbors' impact on agents, which leads to queuing behavior.
We show in figure~\ref{fig:11a} the trajectories of agents in the simulation when $k$ is 0.2 and $D$ is $\SI{0.01}{\meter}$, which shows a strong queuing behavior.
As the impact among agents was increased, agents began to deviate more from their desired directions until the queuing behavior broke down.
Figure~\ref{fig:11b} shows the trajectories of agents in the simulation when $k$ is 3 and $D$ is $\SI{0.01}{\meter}$.
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.45\linewidth]{11a.png}\label{fig:11a}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{11b.png}\label{fig:11b}}
\caption{
(a): Trajectories of agents when $k$ is 0.2 and $D$ is $\SI{0.01}{\meter}$.
(b): Trajectories of agents when $k$ is 3.0 and $D$ is $\SI{0.01}{\meter}$.
}
\label{fig:kandD}
\end{figure}
The final factor analyzed was the shapes of agents.
In the previous sections, a pedestrian's shape was modeled as a circle with a constant radius.
To study the influence of the shape, we also performed simulations where pedestrians were modeled as velocity-based ellipses~\cite{xu2019generalized}.
The length of the semi-axis along the walking direction is a constant value $a$.
The length of the other semi-axis along the shoulder equals $b$, which is defined as
\begin{equation}
b=b_{\min}+\frac{b_{\max}-b_{\min}}{1+e^{\beta \cdot(V-\gamma)}}~,
\end{equation}
where $b_{\max}$ is the maximum value, equal to half of a static pedestrian's width, $b_{\min}$ is equal to half of a moving pedestrian's minimum width, $V$ is the speed of the agent, and the parameters $\beta$ and $\gamma$ are used to adjust the shape of the function.
Simulations in this part are performed with three constant circles with different radius values $r$ (0.15, 0.20, or 0.25~$\SI{}{\meter}$) and a velocity-based ellipse ($a=\SI{0.20}{\meter}$, $b_{\min}=\SI{0.15}{\meter}$, $b_{\max}=\SI{0.25}{\meter}$, $\beta=50$, $\gamma=0.1$).
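The speed-dependent semi-axis is simple to evaluate; the sketch below (ours) uses the parameter values just listed.
\begin{verbatim}
import math

def semi_axis_b(v, b_min=0.15, b_max=0.25, beta=50.0, gamma=0.1):
    """Shoulder semi-axis b(V) of the velocity-based ellipse."""
    return b_min + (b_max - b_min) / (1.0 + math.exp(beta * (v - gamma)))

print(semi_axis_b(0.10))  # 0.20: at V = gamma the ellipse is a circle, r = 0.2 m
print(semi_axis_b(1.34))  # ~0.15: at free speed the shoulder axis approaches b_min
\end{verbatim}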
Figure~\ref{fig:12a} shows the correlation between $N_s$ and $w$ for different shapes.
The result of the ellipse is close to the result of the circle with $r=\SI{0.20}{\meter}$, which is between the result of the smallest circle ($r=\SI{0.15}{\meter}$) and of the biggest circle ($r=\SI{0.25}{\meter}$).
A possible explanation of this result is that the shape of the dynamic ellipse varies with the speed of the agent.
For example, if the speed of an agent is $\SI{0.1}{\meter\per\second}$, the dynamic ellipse becomes a circle with $r=\SI{0.2}{\meter}$.
This means that in high density situations the agents tend to have a circular shape.
\begin{figure}[H]
\centering
\subfigure[]{\includegraphics[width=0.45\linewidth]{12a.pdf}\label{fig:12a}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{12b.pdf}\label{fig:12b}}
\caption{
(a) The correlation between $N_s$ (the number of prolonged clogs) and $w$ (the width of the exit) for different shapes of agents.
The error bars show the standard deviations.
(b) The correlation between $N_s$ and $w/r$ for different $r$ (the radius of agents).
The error bars show the standard deviations.}
\label{fig:shapes}
\end{figure}
A finding in \cite{Zuriguel2005jamming} is that the probability of clogs stopping the flow decreases with an increasing ratio between the size of the orifice and the size of the beads.
Therefore, we plot figure~\ref{fig:12b} with $w/r$ (the ratio between the width of the exit and the radius of the agents) as the horizontal axis.
It seems that the number of prolonged clogs is not affected by the absolute values of $w$ and $r$, provided that $w/r$ remains the same.
\section{Conclusion}
\label{sec:conclusion}
In the present paper, we focus on prolonged clogs that occur in bottleneck scenarios with the GCVM.
A general definition of prolonged clogs has been given.
Then a series of simulations in a bottleneck scenario were implemented to analyze the effect of various factors on the occurrence of prolonged clogs.
From the simulation results, the following conclusions can be drawn.
First, the number of prolonged clogs decreases as the width of the exit increases.
Second, the occurrence of prolonged clogs cannot be eliminated by adopting a smaller time step size or updating the positions of agents sequentially.
Third, a decrease in $T$ in the GCVM leads to smaller distances between agents, which corresponds to behavior with a higher level of motivation.
Meanwhile, decreasing $T$ reduces the space available for agents to resolve clogs, which increases the number of prolonged clogs.
This is in accordance with the fact that clogging is more likely to occur in scenarios with a higher level of motivation.
Fourth, reducing the degree of freedom in the possible directions in which agents will move can reduce or even eliminate the occurrence of prolonged clogs.
For instance, this can be facilitated by the queuing behavior in figure~\ref{fig:11a} as well as by locating the exit adjacent to the lower horizontal wall of the moving area.
Finally, when the ratio between the width of the exit and the radius of agents increases, the number of prolonged clogs decreases.
\section*{Acknowledgments}
The authors are grateful to the HGF Knowledge-transfer project under Grant No. WT-0105.
Qiancheng Xu acknowledges funding support from the China Scholarship Council (Grant No. 201706060186).
\bibliographystyle{elsarticle-num}
\section{\textbf{Introduction}}
The Stern-Gerlach experiment has long been hailed as the first direct evidence of the quantum nature of the spin of particles. If one prepares the entrant particle beam in the $\vert+\rangle$ eigenstate of the spin operator $S_x$, one observes two distinct peaks, corresponding to the spin-up and spin-down states of the emergent spin-1/2 particle beams, on a detector placed at the far end of a Stern-Gerlach apparatus. A typical SG experiment involves the following: a particle beam in a pure spin state, say the $\vert+\rangle$ eigenstate of $S_x$, enters the Stern-Gerlach magnet. The results of the measurement of the $S_z$ spin component of the emergent particle beam are consistent when one regards this state as a coherent superposition over the spin eigenstates of $S_z$. To estimate spin coherence, one generally measures the \textit{x}-component of the spin to obtain $\langle S_x\rangle$. As the quantum states of the split wave-packets evolve in time, their spatial components eventually become orthogonal to each other and spin coherence is completely lost. It must be noted that our analysis, however, considers a micron-sized mesoscopic neutral test mass prepared initially in the ground state of a harmonic oscillator trap, which is thereafter released from the trap and made to propagate through the Stern-Gerlach apparatus. We make use of some important results that appeared in the very early works of Schwinger \textit{et al.} in our analysis.
We consider the typical setup of a Stern-Gerlach interferometer. An inherent assumption in our analysis is that the magnetic field gradient exists only along the \textit{z}-direction. We work under the assumption that the applied field gradient is time dependent. As a brief introduction to the Stern-Gerlach theory [see [2]], we associate the familiar Pauli matrices $\sigma_x,\sigma_y,\sigma_z$ with the magnetic moment of the incoming particle beam. The magnetic moment $\vec{\mu}$ can hence be expressed as $\vec{\mu}=\mu\vec{\sigma}$, where $\vec{\sigma}$ is the vector of the familiar Pauli matrices. The force acting on the entrant particle beam can be expressed as the gradient of the interaction energy between the magnetic moment of the particles and the applied field, as follows
\begin{equation}
\tag{1}
\label{dfg-5aad480a4241}
F(t) = \nabla(\vec{\mu}\cdot\vec{B}(z,t)).
\end{equation}
Maxwell's equation $\vec{\nabla}\cdot\vec{B}(z,t)=0$ dictates that if a finite force is to act on the incoming particle beam, then in addition to a field gradient along the \textit{z}-direction, a field gradient in the \textit{x}-\textit{y} plane must be present as well. In our analysis, however, we choose to ignore this effect by assuming that, by some means, we are able to suppress the effect of the field gradient in the \textit{x}-\textit{y} plane, so that the field gradient applied along the \textit{z}-direction is dominant. For reasons listed in [2], we consider a linear expansion of the field $\vec{B}(z,t)$, as
\begin{equation}
\tag{2}
\label{dfg-36fd2edcdd47}
\vec{B}(z,t) \approx \vec{B}(t) + \frac{\partial B}{\partial z}(t)\, z.
\end{equation}
The interaction energy in the SG setup is then $-\vec{\mu}\cdot\vec{B}(z,t)=-\mu\frac{\partial B_z}{\partial z}(t)\sigma_z z-\mu B(t)\sigma_z$, which serves as the potential energy term in the Stern-Gerlach Hamiltonian. The Hamiltonian for the Stern-Gerlach setup assumes the form
\begin{equation}
\tag{3}
\label{dfg-f35ef0bfc087}
H = \frac{p^{2}}{2m} - f(t)\sigma_z z - \mu B(t)\sigma_z,
\end{equation}
where $f(t)=\mu\frac{\partial B_z}{\partial z}(t)$. We solve for the temporal evolution of the phase-space variables $z(t)$ and $p(t)$ by using the Heisenberg equation of motion with the Hamiltonian given by Eq. (3). Note that $z$ and $p_z$ denote the position and momentum operators, respectively. A straightforward computation yields the following equations of motion for the split wave-packets, together with the temporal evolution of the Pauli spin operators $\sigma_z$ and $\sigma_+$ (note that $\sigma_+=\sigma_x+i\sigma_y$) [see [2]]
\begin{equation}
\tag{4.a}
\label{dfg-107654413b6c}
p(t) = p_0 + (\sigma_z)_0\,\Delta p(t),
\end{equation}
\begin{equation}
\tag{4.b}
\label{dfg-57e83c172c57}
z(t) = z_0 + p_0\frac{t}{m} + (\sigma_z)_0\biggl(\Delta z(t) + \frac{t}{m}\Delta p(t)\biggr),
\end{equation}
\begin{equation}
\tag{4.c}
\label{dfg-052285c48e1c}
\sigma_z(t) = (\sigma_z)_0,
\end{equation}
and
\begin{equation}
\tag{4.d}
\label{dfg-6b9c3eabb3d2}
\sigma_+(t) = \exp\biggl(-i\biggl(\frac{2}{\hbar}\int_{0}^{t}\mu B(t')\,dt' + \frac{2\Delta p(t)\,z_0}{\hbar} - \frac{2\Delta z(t)\,p_0}{\hbar}\biggr)\biggr)(\sigma_+)_0.
\end{equation}
Here, $z_0$ and $p_0$ are the initial conditions that we set in solving the equations of motion, namely that the particle beam enters the SG interferometer at the point $z_0$ with a non-zero momentum $p_0$. The time dependent parameters in Eq. (4.a) and Eq. (4.b) denote the macroscopic displacements of the split wave-packets in phase space. These are given as
\begin{equation}
\tag{5.a}
\label{dfg-a5231e6f0d88}
\Delta p(t) = \int_{0}^{t} f(t')\, dt',
\end{equation}
and
\begin{equation}
\tag{5.b}
\label{dfg-0767e4a8f8a2}
\Delta z(t) = \int_{0}^{t} \frac{f(t')}{m}(t - t')\, dt'.
\end{equation}
Note that for a constant force \textit{f}, Eq. (5.a) and Eq. (5.b) assume the form
\begin{equation}
\tag{5.c}
\label{dfg-9576d3b9b798}
\Delta p(t) = ft,
\end{equation}
and
\begin{equation}
\tag{5.d}
\label{dfg-ef80bb739453}
\Delta z(t) = \frac{f t^{2}}{2m}.
\end{equation}
Alternatively, from Eq. (5.a) and Eq. (5.b), we define the temporal evolution of the displacement of the split wave-packets in position space as
\begin{equation}
\tag{5.e}
\label{dfg-457a46b1f6cc}
\Delta\bar{z}(t) = -\int_{0}^{t}\frac{f(t')}{m}\,t'\, dt' = \Delta z(t) - \frac{t}{m}\Delta p(t).
\end{equation}
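These kinematic integrals are easy to check numerically. The sketch below is ours; it uses SciPy quadrature with an illustrative constant force profile and verifies Eqs. (5.c), (5.d) and the two forms of Eq. (5.e) against each other.
\begin{verbatim}
from scipy.integrate import quad

m, f0, t = 1e-14, 1e-20, 0.5           # illustrative mass, force, time (SI units)
f = lambda tp: f0                      # constant force profile f(t')

dp  = quad(lambda tp: f(tp), 0, t)[0]                  # Eq. (5.a)
dz  = quad(lambda tp: f(tp) * (t - tp) / m, 0, t)[0]   # Eq. (5.b)
dzb = -quad(lambda tp: f(tp) * tp / m, 0, t)[0]        # Eq. (5.e), first form

assert abs(dp - f0 * t) < 1e-27                        # Eq. (5.c)
assert abs(dz - f0 * t**2 / (2 * m)) < 1e-13           # Eq. (5.d)
assert abs(dzb - (dz - t * dp / m)) < 1e-13            # Eq. (5.e), second form
\end{verbatim}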
Schwinger and co-workers pioneered early work on the realization of a full-loop Stern-Gerlach interferometer. Through a detailed analysis of the spin dynamics of particle beams in a Stern-Gerlach interferometer, they arrived at a closed-form expression for the visibility in terms of the coordinate wave function of the \textit{initially prepared} spatial state (i.e., at time $t = 0$), given as [3]
\begin{equation}
\tag{6}
\label{dfg-b2cf583b0f75}
\phi_{coherence} = \int_{-\infty}^{\infty} \psi_i^*(z - \Delta\bar{z}(t))\, \psi_i(z + \Delta\bar{z}(t)) \times \exp\biggl(-\frac{2i\Delta p_z(t)\,z}{\hbar}\biggr)\, dz,
\end{equation}
where $z$ in Eq. (6) denotes the eigenvalue of the position operator. Note that $\psi_i(z')$ denotes the coordinate wave function of the \textit{initially prepared} spatial state. For a \textit{stationary} Gaussian wave-packet (one that undergoes no temporal evolution), they arrived at a closed-form expression for the visibility in the SG interferometer as follows [see [3]]
\begin{equation}
\tag{7}
\label{dfg-7bda17d8ce85}
\phi_{coherence} = \exp\biggl(-\frac{1}{2}\biggl(\biggl(\frac{\Delta z}{\delta z}\biggr)^{2} + \biggl(\frac{\Delta p_z}{\delta p_z}\biggr)^{2}\biggr)\biggr),
\end{equation}
where $\delta z$ and $\delta p_z$ denote the initial position and momentum uncertainties of the Gaussian wave-packet, respectively. It must be noted that the authors denote the visibility by $C$ in [3]. A key point to observe is that the visibility in the SG interferometer undergoes a Gaussian decay with increasing spatial and momentum splitting between the wave-packets [4]. This result, however, suffers from a major drawback: it does not account for the temporal evolution of the spatial and momentum splittings $\Delta z(t)$ and $\Delta p_z(t)$ between the split wave-packets, i.e., the consequences arising from Eq. (5.e) are ignored. To this end, we perform a complete analysis of the visibility for the cases of the non-squeezed and squeezed thermal coherent states of the Quantum harmonic oscillator by taking into account the effect of Eq. (5.e). We note that it is necessary to keep the spatial and momentum splitting between the wave-packets as low as possible to \textit{maximize} the visibility, at least to the extent that the following conditions are satisfied
\begin{equation}
\tag{8.a}
\label{dfg-ac71e88bb591}
\Delta z \ll \delta z,
\end{equation}
and
\begin{equation}
\tag{8.b}
\label{dfg-b5a487d29362}
\Delta p_z \ll \delta p_z.
\end{equation}
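Equation (7) can be checked against a direct numerical evaluation of the overlap integral of Eq. (6), with $\Delta\bar{z}$ replaced by the static splitting $\Delta z$ for the stationary packet. The sketch below is ours; the packet width is an arbitrary assumed value.
\begin{verbatim}
import numpy as np

hbar = 1.0546e-34
dz_unc = 1e-7                  # assumed initial position uncertainty (m)
dp_unc = hbar / (2 * dz_unc)   # minimum-uncertainty momentum spread

z = np.linspace(-12 * dz_unc, 12 * dz_unc, 40001)
norm = (2 * np.pi * dz_unc**2) ** -0.25

def visibility(Dz, Dp):
    """|overlap| of Eq. (6) for a stationary Gaussian wave-packet."""
    psi_m = norm * np.exp(-(z - Dz)**2 / (4 * dz_unc**2))
    psi_p = norm * np.exp(-(z + Dz)**2 / (4 * dz_unc**2))
    phase = np.exp(-2j * Dp * z / hbar)
    integrand = psi_m * psi_p * phase
    return abs(integrand.sum() * (z[1] - z[0]))   # Riemann-sum quadrature

Dz, Dp = 0.3 * dz_unc, 0.2 * dp_unc
closed = np.exp(-0.5 * ((Dz / dz_unc)**2 + (Dp / dp_unc)**2))   # Eq. (7)
print(visibility(Dz, Dp), closed)     # both ~0.937: the two agree
\end{verbatim}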
\section{\textbf{Analysis of the visibility for the case of a non-squeezed thermal coherent state of the Quantum harmonic oscillator}}
The explicit form of the coordinate wave function for a general non-squeezed coherent state $\vert\beta\rangle$ of the Quantum harmonic oscillator is given as follows; we consider a one-dimensional case [refer to appendix A for a detailed derivation of Eq. (9.a)]
\begin{equation}
\tag{9.a}
\label{dfg-1d3f145369d2}
\psi_\beta(z, t) = \biggl(\frac{m\omega}{\pi\hbar}\biggr)^{1/4} \exp\biggl(i\xi(t) + i\sqrt{\frac{2m\omega}{\hbar}}\,\Im[\beta(t)]\,z - \frac{m\omega}{2\hbar}\biggl(z - \sqrt{\frac{2\hbar}{m\omega}}\,\Re[\beta(t)]\biggr)^{2}\biggr),
\end{equation}
where the quantities $\sqrt{2m\omega\hbar}\,\Im[\beta(t)]$ and $\sqrt{\frac{2\hbar}{m\omega}}\,\Re[\beta(t)]$ are the expectation values of the momentum and position operators $p_z$ and $z$, respectively. Note that in Eq. (9.a), $\xi(t)$ is merely a time dependent phase factor. The time evolution operator for the coherent state of a Quantum harmonic oscillator assumes the form
\begin{equation}
\tag{9.b}
\label{dfg-4140ada69e9d}
\hat U(t) \equiv \exp(-i\omega t),
\end{equation}
for which the temporal evolution of the coherent state $\vert\beta\rangle$ is given as
\begin{equation}
\tag{10}
\label{dfg-d2844b153f88}
\beta(t) = \beta(0) \exp(-i\omega t).
\end{equation}
We now wish to compute analytically the visibility in a SG interferometer for the non-squeezed coherent state defined in Eq. (9.a). Note that for a total interferometric time $\tau$ (i.e., the time elapsed between the instant at which the wave-packets are initially split and the instant at which we begin to bring them together for recombination), from Eq. (5.e) we can express $\Delta\bar{z}(\tau)$ in terms of the final spatial and momentum splitting between the wave-packets as
\begin{equation}
\tag{11}
\label{dfg-33d06fc2eff3}
\Delta\bar{z}(\tau) = \Delta z(\tau) - \frac{\tau}{m}\,\Delta p_z(\tau).
\end{equation}
For the analysis of the visibility in the SG interferometer, we are required to consider the form of the \textit{initially prepared} spatial state (i.e., at time $t = 0$). To this end, we consider $\psi_\beta(z,t=0)$, which from Eq. (9.a) can be written as
\begin{equation}
\tag{12}
\label{dfg-27e92701885e}
\psi_\beta(z, t = 0) = \biggl(\frac{m\omega}{\pi\hbar}\biggr)^{1/4} \exp\biggl(i\xi(0) - \frac{m\omega}{2\hbar}\biggl(z - \sqrt{\frac{2\hbar}{m\omega}}\,\beta(0)\biggr)^{2}\biggr).
\end{equation}
Note that the expectation value of the momentum operator $p_z$ vanishes at time $t = 0$. We now use the form of the wave function obtained in Eq. (12) (for the non-squeezed coherent state) to solve for the visibility parameter, which we denote by $\phi_{non-squeezed}$. From Eq. (6), Eq. (11) and Eq. (12), we have
\begin{equation}
\tag{13.a}
\label{dfg-f5798fe1aa5c}
\begin{split}
\phi_{non-squeezed} ={}& \biggl(\frac{m\omega}{\pi\hbar}\biggr)^{1/2} \int_{-\infty}^{\infty} \exp\biggl(-\frac{m\omega}{2\hbar}\biggl(\biggl(z - \Delta\bar{z}(\tau) - \sqrt{\frac{2\hbar}{m\omega}}\,\beta(0)\biggr)^{2} + \biggl(z + \Delta\bar{z}(\tau) - \sqrt{\frac{2\hbar}{m\omega}}\,\beta(0)\biggr)^{2}\biggr)\biggr)\\
&\times\exp\biggl(-\frac{2i\Delta p_z(\tau)\,z}{\hbar}\biggr)\, dz,
\end{split}
\end{equation}
which upon simplification gives us for the visibility parameter (refer to appendix B for the calculations involved herein)
\begin{equation}
\tag{13.b}
\label{dfg-a2013cc9d835}
\begin{split}
\phi_{non-squeezed} ={}& \exp\biggl(-\frac{m\omega}{\hbar}(\Delta z(\tau))^{2} - \frac{(1 + \omega^{2}\tau^{2})}{m\hbar\omega}(\Delta p_z(\tau))^{2} + \frac{2\Delta z(\tau)\Delta p_z(\tau)\omega\tau}{\hbar}\biggr)\\
&\times\exp\biggl(-2i\sqrt{\frac{2\hbar}{m\omega}}\,\beta(0)\frac{\Delta p_z(\tau)}{\hbar}\biggr).
\end{split}
\end{equation}
We note that the phase term in Eq. (13.b) is constant for a given set of experimental parameters and hence plays no role in the estimation of the visibility parameter. We are thus only concerned with the `amplitude' part of $\phi_{non-squeezed}$, which is given by the first exponential factor.
For the ground state of the Quantum harmonic oscillator, we define a characteristic length scale $\sigma_0$ and take the \textit{initial} uncertainty in the measure of the position to be roughly equal to this length scale, i.e.,
\begin{equation}
\tag{14.a}
\label{dfg-7ee04e9b931a}
\delta z \approx \sigma_0 = \sqrt{\frac{\hbar}{2m\omega}}.
\end{equation}
We know that the coherent state of the QHO is a minimum uncertainty state that saturates the uncertainty principle, for which we have $\delta z\,\delta p_z=\hbar/2$. Using Eq. (14.a) and the uncertainty principle, Eq. (13.b) can be recast into the following form, which accounts for the experimental errors in the measure of the phase-space variables (we consider only the `amplitude' part of $\phi_{non-squeezed}$)
\begin{equation}
\tag{14.b}
\label{dfg-3ef9127ab5d3}
|\phi_{non-squeezed}| = \exp\biggl(-\frac{1}{2}\biggl(\biggl(\frac{\Delta z(\tau)}{\delta z}\biggr)^{2} + (1 + \omega^{2}\tau^{2})\biggl(\frac{\Delta p_z(\tau)}{\delta p_z}\biggr)^{2}\biggr) + \frac{2\Delta z(\tau)\Delta p_z(\tau)\omega\tau}{\hbar}\biggr).
\end{equation}
As outlined in Eq. (8.a) and Eq. (8.b), we note that similar conditions must be satisfied for the case of the non-squeezed coherent state of the QHO to maximize the visibility parameter.
We now seek to obtain an upper bound on the temperature at which the wave-packet of the neutral test mass must be initially cooled in the ground state of the harmonic oscillator trap. We note that in the classical approximation and at a finite temperature $T$, the equipartition theorem states that the average thermal energy of the QHO must equal $k_BT$, with one-half of the contribution coming from the kinetic energy term and one-half from the potential energy term in the Hamiltonian of the Quantum harmonic oscillator (here $k_B$ is the Boltzmann constant). Cooling the neutral test mass to the ground state of the harmonic trap implies that the following relation must hold (note that the ground state energy of the QHO is given by $\frac{1}{2}\hbar\omega$)
\begin{equation}
\tag{15}
\label{dfg-62c94a5f08ab}
\hbar\omega = k_BT.
\end{equation}
Suppose that we impose certain error tolerances in the measures of the phase-space variables, namely the accuracy to which we can maintain the ratios $\Delta z(\tau)/\delta z$ and $\Delta p_z(\tau)/\delta p_z$ over the total interferometric time $\tau$. We thus impose the following constraints
\begin{equation}
\tag{16.a}
\label{dfg-f362f3acb7b4}
\bigg|\frac{\Delta z(\tau)}{\delta z}\bigg| \approx \eta_1,
\end{equation}
and
\begin{equation}
\tag{16.b}
\label{dfg-04fc4076e4af}
\bigg|\frac{\Delta p_z(\tau)}{\delta p_z}\bigg| \approx \eta_2.
\end{equation}
Note that $\eta_1$ and $\eta_2$ are such that $0<\eta_1,\eta_2<1$.
Suppose that we desire a certain accuracy in the measure of the visibility parameter. We require the magnitude of the exponent in Eq. (14.b) to be less than or equal to a desired value $\eta$, subject to which we estimate a bound on the required temperature $T$ (note that $0<\eta<1$). We thus have
\begin{equation}
\tag{17.a}
\label{dfg-f5aaf6bcc730}
\frac{1}{2}\biggl(\biggl(\frac{\Delta z(\tau)}{\delta z}\biggr)^{2} + (1 + \omega^{2}\tau^{2})\biggl(\frac{\Delta p_z(\tau)}{\delta p_z}\biggr)^{2}\biggr) - \frac{2\Delta z(\tau)\Delta p_z(\tau)\omega\tau}{\hbar} \leq \eta.
\end{equation}
This is a quadratic inequality in $\omega$, which we can solve using the quadratic formula. From Eq. (15), Eq. (16.a), Eq. (16.b) and Eq. (17.a), we have for the temperature $T$ (note that $T > 0$)
\begin{equation}
\tag{17.b}
\label{dfg-a744c823949a}
T \leq \frac{\hbar}{k_B\eta_2^{2}\tau^{2}}\Bigg[\frac{2\Delta z(\tau)\Delta p_z(\tau)\tau}{\hbar} + \sqrt{\biggl(\frac{2\Delta z(\tau)\Delta p_z(\tau)\tau}{\hbar}\biggr)^{2} - 2\eta_2^{2}\tau^{2}\biggl(\frac{\eta_1^{2} + \eta_2^{2}}{2} - \eta\biggr)}\Bigg].
\end{equation}
We see that the temperature required primarily depends on the admissible experimental errors, the total interferometric time and the final spatial and momentum splitting between the wave-packets before they are brought together for recombination.
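For convenience, the bound of Eq. (17.b) can be packaged as a short function. The sketch below (ours) simply transcribes the formula in SI units, with $\hbar$ and $k_B$ to four significant figures.
\begin{verbatim}
import numpy as np

hbar, k_B = 1.0546e-34, 1.381e-23

def T_bound(dz, dp, tau, eta1, eta2, eta):
    """Upper bound on the trap temperature, Eq. (17.b)."""
    a = 2 * dz * dp * tau / hbar
    disc = a**2 - 2 * eta2**2 * tau**2 * ((eta1**2 + eta2**2) / 2 - eta)
    return hbar / (k_B * eta2**2 * tau**2) * (a + np.sqrt(disc))
\end{verbatim}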
\section{\textbf{Analysis of the visibility for the case of a squeezed thermal coherent state of the Quantum harmonic oscillator}}
Squeezed coherent states are often encountered in the study of quantum optics. A generic version of the wave function of a squeezed coherent state of the Quantum harmonic oscillator assumes the form
\begin{equation}
\tag{18}
\label{dfg-807800ae953b}
\psi_\beta(z, t) = \frac{1}{\sqrt{s}}\biggl(\frac{m\omega}{\pi\hbar}\biggr)^{1/4} \exp\biggl(i\xi(t) + i\sqrt{\frac{2m\omega}{\hbar}}\,\Im[\beta(t)]\,z - \frac{m\omega}{2\hbar s^{2}}\biggl(z - \sqrt{\frac{2\hbar}{m\omega}}\,\Re[\beta(t)]\biggr)^{2}\biggr),
\end{equation}
where $s$ is a dimensionless free parameter, referred to as the squeezing parameter of the squeezed state. Effectively, one can squeeze the coherent state in either the position or the momentum quadrature, which leads to a modified scaling of the phase-space parameters of the harmonic oscillator state. The characteristic length scale that we define for the squeezed state of the QHO now reads
\begin{equation}
\tag{19}
\label{dfg-377d932289c7}
\delta z \approx \sigma_0(s) = \sqrt{\frac{\hbar}{2m\omega}}\,s.
\end{equation}
We note that this is now an explicit function of the squeezing parameter $s$. If $0<s<1$, it refers to squeezing in the position quadrature, and if $s>1$, to squeezing in the momentum quadrature. In general, the initial uncertainties in the measure of the position and momentum of the initially prepared wave-packet (one that corresponds to the ground state of a harmonic trap) are modified as follows
\begin{equation}
\tag{20.a}
\label{dfg-97c2d4b5768d}
\delta z_{squeezed} = \delta z \cdot s,
\end{equation}
and
\begin{equation}
\tag{20.b}
\label{dfg-e7bef20e820b}
\delta p_{z,squeezed} = \frac{\delta p_z}{s},
\end{equation}
where $\delta z$ and $\delta p_z$ denote the initial uncertainties in the measure of the position and momentum of the initially prepared wave-packet in the non-squeezed case. We follow an approach similar to that outlined in section II to compute the visibility parameter in a general Stern-Gerlach interferometer for the case of a squeezed coherent state of the QHO. We consider the form of the initially prepared spatial state (given by Eq. (18)) at time $t = 0$ to solve for the visibility parameter. The visibility parameter for the squeezed case is denoted by $\phi_{squeezed}$, and we denote the initial uncertainties in the measure of the position and momentum of the initially prepared wave-packet by $\delta z'$ and $\delta p_z'$, respectively. From Eq. (6), Eq. (11) and Eq. (18), we obtain for the visibility parameter $\phi_{squeezed}$ (refer to appendix C for the calculations involved herein)
\begin{equation}
\tag{21}
\label{dfg-cf9ce1dee5c9}
|\phi_{squeezed}| = \exp\biggl(-\frac{1}{2}\biggl(\biggl(\frac{\Delta z(\tau)}{\delta z'}\biggr)^{2} + \biggl(1 + \frac{\omega^{2}\tau^{2}}{s^{4}}\biggr)\biggl(\frac{\Delta p_z(\tau)}{\delta p_z'}\biggr)^{2}\biggr) + \frac{2\Delta z(\tau)\Delta p_z(\tau)\omega\tau}{\hbar s^{2}}\biggr).
\end{equation}
Note that we have ignored the phase term that arises in the computation of the visibility parameter since, being a fixed quantity for a given squeezing parameter $s$, it plays no role in the estimation of $\phi_{squeezed}$. We have also used Eq. (18) and the uncertainty principle to obtain $\phi_{squeezed}$ in terms of the initial uncertainties in the measure of the position and momentum of the initially prepared wave-packet. We note that the conditions outlined in Eq. (8.a) and Eq. (8.b) must be met in order to maximize the visibility parameter in the SG interferometer.
To estimate the temperature to which the neutral test mass must be cooled in the ground state of the (now squeezed) harmonic trap, we consider certain error tolerances in the measure of the phase-space observables as before, namely
\begin{equation}
\tag{22.a}
\label{dfg-860cee8ea96a}
\Bigg|\frac{\Delta z(\tau)}{\delta z'}\Bigg| \approx \eta_1,
\end{equation}
and
\begin{equation}
\tag{22.b}
\label{dfg-53490c62d680}
\Bigg|\frac{\Delta p_z(\tau)}{\delta p_z'}\Bigg| \approx \eta_2.
\end{equation}
For a given squeezing $s$ in either the position or the momentum uncertainties, we seek to obtain a constraint on the temperature required for the initially prepared wave-packet, subject to a certain desired accuracy $\eta$ in the measure of the visibility parameter $\phi_{squeezed}$. As in section II, we consider cooling the neutral test mass to the ground state of the harmonic trap (now scaled due to squeezing), for which Eq. (15) holds true. We now have
\begin{equation*}
\tag{23.a}
\label{dfg-d1fe36fe3b09}
\frac{1}{2}\biggl(\biggl(\frac{\Delta z(\tau)}{\delta z'}\biggr)^{2} + \biggl(1 + \frac{\omega^{2}\tau^{2}}{s^{4}}\biggr)\biggl(\frac{\Delta p_z(\tau)}{\delta p_z'}\biggr)^{2}\biggr) - \frac{2\,\Delta z(\tau)\,\Delta p_z(\tau)\,\omega\tau}{\hbar s^{2}} \leq \eta,
\end{equation*}
where we now consider error tolerances $\eta_1$ and $\eta_2$ in the measurement of $\Delta z(\tau)/\delta z'$ and $\Delta p_z(\tau)/\delta p_z'$ respectively. From Eq. (15) and Eq. (23.a), we obtain for the temperature \textit{T} (note that \textit{T} $>$ 0)
\begin{equation*}
\tag{23.b}
\label{dfg-10618b7383f5}
T(s) \leq \frac{\hbar s^{4}}{k_B\eta_2^{2}\tau^{2}}\Bigg[\frac{2\Delta z(\tau)\Delta p_z(\tau)\tau}{\hbar s^{2}} + \sqrt{\biggl(\frac{2\Delta z(\tau)\Delta p_z(\tau)\tau}{\hbar s^{2}}\biggr)^{2} - \frac{2\eta_2^{2}\tau^{2}}{s^{4}}\biggl(\frac{\eta_1^{2} + \eta_2^{2}}{2} - \eta\biggr)}\Bigg].
\end{equation*}
We see that besides other experimental parameters, the temperature \textit{T} depends strongly on the squeezing parameter \textit{s}. An intuitive observation one can make from Eq. (23.b) is that the allowed temperature scales up considerably if the squeezing parameter \textit{s} is relatively large. From Eq. (20.b), we note that this corresponds to squeezing the initially prepared wave-packet in momentum space.
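To make the dependence on \textit{s} concrete, the short script below (an illustration of our own; the numerical inputs are placeholders rather than the exact experimental configuration considered in the next section) evaluates the right-hand side of Eq. (23.b):
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
kB = 1.380649e-23        # J / K

def temperature_bound(s, dz, dpz, tau, eta1, eta2, eta):
    # right-hand side of Eq. (23.b); dz and dpz stand for the
    # residual mismatches Delta z(tau) and Delta p_z(tau)
    A = 2.0 * dz * dpz * tau / (hbar * s**2)
    disc = A**2 - 2.0 * (eta2**2 * tau**2 / s**4) * (
        (eta1**2 + eta2**2) / 2.0 - eta)
    return (hbar * s**4 / (kB * eta2**2 * tau**2)) * (A + np.sqrt(disc))

# placeholder inputs, not the exact configuration of the next section
for s in (1.0, 10.0, 50.0):
    T = temperature_bound(s, dz=1e-9, dpz=1e-34, tau=0.5,
                          eta1=1e-1, eta2=1e-3, eta=1e-1)
    print("s = %5.1f :  T <= %.3e K" % (s, T))
\end{verbatim}
Provided $\eta$ exceeds $(\eta_1^{2}+\eta_2^{2})/2$, the argument of the square root stays positive, and in this sketch the bound grows roughly as $s^{2}$ for large \textit{s}, consistent with the observation above.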
\section{\textbf{Discussion}}
We now consider a few sample cases for both the non-squeezed and the squeezed scenario to get a sense of the numbers involved. Suppose that we prepare the neutral test mass in a non-squeezed coherent state of the harmonic trap and cool it to its motional ground state at a finite temperature \textit{T}. We consider an error tolerance of $10^{-1}$ in the measurement of $\Delta z(\tau)/\delta z$ to be maintained over the course of the total interferometric time (i.e., with respect to a scale set by the initial uncertainty in the position of the test mass in the trap) and an error tolerance of, say, $10^{-3}$ in the measurement of $\Delta p_z(\tau)/\delta p_z$, also to be maintained over the total interferometric time $\tau$. As mentioned in Section II, we denote these error tolerances by $\eta_1$ and $\eta_2$ respectively. We consider a total interferometric time of $\tau = 0.5$ seconds and seek to estimate the temperature required for obtaining a visibility of a desired value $\eta$ in the SG interferometer. Here, we take $\eta$ to be $10^{-1}$. We consider a maximum spatial split size of 10 microns between the wave-packets in the SG interferometer, corresponding to which we require a maximum momentum splitting of approximately $5.275 \times 10^{-34}$ kg-m/sec between the wave-packets. We consider a mesoscopic test mass of the order of $10^{-14}$ kg. Using Eq. (17.b), we obtain an upper bound of 6.4835~nK on the temperature to which the test mass must be cooled in the ground state of the harmonic trap. Given these parameters, we obtain a visibility of about 94.89\% in the SG interferometer (using Eq. (14.b)). For the same set of experimental parameters, we set $\eta$ equal to $10^{-2}$. The upper bound on the temperature in this case is 3.4016~nK, with the visibility being about 99.25\%.
In this context, however, we note that squeezing in the momentum quadrature can help scale up the required temperatures considerably (i.e., when \textit{s} is relatively large, as can be seen from Eq. (20.b)). We now consider a maximum spatial split size between the wave-packets of about 20 microns. For $\eta_1 = 10^{-1}$, $\eta_2 = 10^{-3}$, $\eta = 10^{-1}$, $m \approx 10^{-14}$ kg and a maximum momentum split size of about $2.6375 \times 10^{-34}$ kg-m/sec, we observe that squeezing the initial momentum uncertainty to about a tenth of its initial value (i.e., \textit{s} = 10) yields an upper bound of 3.129~$\mu$K on the temperature of the initially prepared wave-packet and a visibility of roughly 94.87\% in the SG interferometer (using Eq. (21)). For a larger value of \textit{s}, say \textit{s} = 50, we observe that for the same set of parameters as considered previously (i.e., the case \textit{s} = 10), we obtain an upper bound of 78.22~$\mu$K on the temperature of the initially prepared wave-packet, with the visibility in the SG interferometer being about 94.88\%.
This clearly demonstrates that squeezing the initially prepared wave-packet in momentum space is both desirable and effective. It is worth commenting here that squeezing the wave-packet in position space (i.e., for $0<s<1$) yields temperatures in the nanokelvin range, far lower than what one would require when the initially prepared wave-packet is squeezed in momentum space, as demonstrated in the above cases. We also note that our results allow for considerable experimental flexibility, in the sense that the visibility in the SG interferometer and the temperatures required for the initially prepared harmonic trap depend primarily on the desired experimental errors in the measurement of the phase-space variables and on the total time-of-flight of the split wave-packets in the SG interferometer, both of which the experimenter has good control over. Hence, no parameters external to our Stern-Gerlach analysis need to be assumed.
\section{\textbf{Summary}}
In this work, we have derived closed-form expressions for the visibility in a general full-loop Stern-Gerlach interferometer for the cases of non-squeezed and squeezed thermal coherent states of the quantum harmonic oscillator. In an effort to maximize the visibility obtained in the SG interferometer, we have analytically obtained constraints on the required temperatures of the initially prepared harmonic traps for both the non-squeezed and the squeezed coherent state cases, in terms of the experimental errors that one must account for in the measurement of the phase-space variables, the total time-of-flight of the wave-packets inside the SG interferometer, and the desired accuracy in the measure of the visibility. We have shown that masses of the order of $10^{-14}$ kg and spatial split sizes of the order of microns (with the inclusion of suitable experimental errors) can be used to obtain relatively high visibilities in the SG interferometer over time scales as long as 0.5 seconds, thus confirming that a proposal of the kind put forward in [1] can in principle be realized. We have demonstrated that for the case of the squeezed coherent state, squeezing the initial momentum uncertainty of the prepared spatial state is far more effective, since, given the dependence of the temperature \textit{T} on the squeezing parameter \textit{s}, higher temperatures (such as those easily achieved in conventional magneto-optical traps) can be accommodated in a typical experimental setup. There remain, however, a few issues that need to be addressed. For instance, we have chosen to ignore fluctuations in the magnetic field gradient in our analysis; this, of course, requires a full and rigorous QFT treatment in the context of the problem at hand. We also assume that the Pauli spin operator $\sigma_z$ is a constant of the motion, which would not be the case for an arbitrary configuration of the field gradient in the SG interferometer setup. We work under the assumption that the \textit{z}-component of the field gradient greatly suppresses the effects that arise due to the field gradient in the \textit{x}-\textit{y} plane (the presence of which is required by Eq. (1)), in which case the assumption that $\sigma_z$ is a constant of the motion holds approximately. We wish to return to these issues in future work.
\section*{\textbf{Acknowledgements}}The author (Y. L.) wishes to express his gratitude to his mentors, S. Bose and A. Mazumdar for their continued and generous support. This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
\textit{Conflict of interest}: The author declares no conflict of interest with any third party.
\clearpage
\textbf{References}
[1] Bose S, Mazumdar A, Morley GW, Ulbricht H, Toro\v{s} M, Paternostro M, et al. Spin Entanglement Witness for Quantum Gravity. Physical Review Letters. 2017;119(24):1-7.
[2] Englert BG, Schwinger J, Scully MO. Is spin coherence like Humpty-Dumpty? I. Simplified treatment. Foundations of Physics. 1988;18(10):1045-1056.
[3] Schwinger J, Scully MO, Englert BG. Is spin coherence like Humpty-Dumpty? II. General theory. Zeitschrift f\"{u}r Physik D Atoms, Molecules and Clusters. 1988;10(2-3):135-144.
[4] Keil M, Machluf S, Margalit Y, Zhou Z, Amit O, Dobkowski O, et al. Stern-Gerlach interferometry with the atom chip. arXiv preprint. 2020.
\section{Introduction}
Representing a new chapter in condensed matter physics and materials science, topological quantum states~\cite{add1,add2} have attracted wide attention since their discovery. Topological concepts for electronic structures have been widely predicted and verified~\cite{add3,add4,add5,add6,add7,add8,add9,add10,add11,add12,add13,add14}. To date, thousands of topological electronic materials have been predicted and experimentally observed~\cite{add15,add16,add17,add18,add19,add20,add21,add22,add23,add24,add25}. Moreover, phonons, the most basic emergent bosons of crystalline lattices, are the quanta of lattice vibrations. Phonons make important contributions to the thermal conductivity and specific heat of non-metals, and the coupling between phonons and electrons governs many physical phenomena, including superconductivity, the thermoelectric effect, and carrier mobility~\cite{add26,add27}. By analogy with well-studied electronic systems, topological concepts have been introduced to the field of phonons and named topological phonons~\cite{add28,add29,add30,add31,add32,add33}. It is worth mentioning that, unlike electrons, phonons are bosons and are not restricted by the Pauli exclusion principle, so one can probe their topological signatures over a wide frequency range.
Very recently, topological phonons in solid-state materials have been studied both theoretically and experimentally. A series of three-dimensional (3D) materials with rich topological phononic states have been proposed: (i) different types of nodal point phonons~\cite{add34,add35,add36,add37,add38,add39,add40,add41,add42,add43,add44,add45}, including single and highly degenerate Weyl point phonons, ideal type-II Weyl point phonons, unconventional triangular Weyl point phonons, Dirac point phonons, triply degenerate nodal point phonons, and six-fold degenerate nodal point phonons; (ii) different types of nodal line phonons~\cite{add46,add47,add48,add49,add50,add51,add52,add53,add54}, including nodal-ring phonons, Weyl nodal straight line phonons, helical nodal line phonons, Weyl open nodal line phonons, and hourglass nodal-net phonons; and (iii) multiple nodal surface phonons~\cite{add55}. Among them, double Weyl point phonons in parity-breaking FeSi~\cite{add34} and phononic helical nodal lines in MoB$_2$~\cite{add48} have been verified via inelastic X-ray scattering.
\begin{figure}
\includegraphics[width=7.8cm]{Figure1}
\caption{(a) and (b) crystal structure of KCuS with the $Pnma$-type structure viewed from different directions; (c) 3D Brillouin zone (BZ) with some high-symmetry points; (d) schematic diagram of the two-fold degenerate hourglass Weyl nodal line (HWNL) and four-fold degenerate Dirac nodal line (DNL) phonons in the 3D BZ.
\label{fig1}}
\end{figure}
Although phononic Weyl nodal line (WNL) states~\cite{add53,add54} have previously been proposed in some solid-state materials, to the best of our knowledge phononic DNL states have rarely been predicted. More importantly, a natural question is whether two-fold degenerate WNL phonons and four-fold degenerate DNL phonons can coexist in a single solid-state material. In this paper, we answer this question in the affirmative. For the first time, via first-principles calculations and symmetry analysis, we propose that KCuS with the $Pnma$ space group is a topological phononic material exhibiting both Weyl and Dirac nodal line phonons. Remarkably, the predicted nodal line phonons are special in that (i) the two-fold degenerate WNL states are formed by the neck crossing points of hourglass-like dispersions~\cite{add56,add57,add58,add59,add60}; (ii) the four-fold degenerate DNLs belong to open nodal lines; (iii) the Dirac and Weyl phonon bands are nearly flat and are the only ``clean'' bands in the 5.0--5.2-THz range; (iv) phonon surface states occur only on the [100] and [001] surfaces, unlike those of typical phononic nodal-line materials; and (v) KCuS provides a good platform to study the entanglement between HWNL phonons and DNL phonons.
\begin{figure}
\includegraphics[width=7.8cm]{Figure2}
\caption{(a) phonon dispersion of $Pnma$ KCuS along the $\Gamma$--X--S--Y--$\Gamma$--Z--U--R--T--Z--X--U--Y--T--S--R paths and (b) enlarged view of the R1 region in (a); (c) selected symmetry points in the $k_x=\pi$ plane; (d) calculated phonon dispersions along the A--E--A$^{\prime}$, B--F--B$^{\prime}$, C--G--C$^{\prime}$, and D--H--D$^{\prime}$ paths, where the linear phonon band crossing points are marked with orange circles.
\label{fig2}}
\end{figure}
\section{Computational Methods}
We used density functional theory to calculate the ground state of KCuS with the $Pnma$ structure, employing the GGA-PBE~\cite{add61} formalism for the exchange-correlation functional. We used the projector augmented-wave method for the interactions between ions and valence electrons and set the plane-wave energy cutoff to 600 eV. We used a $\Gamma$-centered k-mesh of $5\times5\times5$ for BZ sampling. We performed lattice dynamics calculations to obtain the phonon dispersion of KCuS at its equilibrium lattice constants with the PHONOPY package~\cite{add62}, using density functional perturbation theory. We simulated the topological behaviors of the [100], [010], and [001] phonon surface states by constructing a Wannier tight-binding Hamiltonian of phonons~\cite{add63}.
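To make the workflow above concrete, the following sketch (our own illustration, assuming PHONOPY's Python API in its finite-displacement mode, whereas the calculations in this paper use density functional perturbation theory; the forces would come from the DFT code) sets up the $Pnma$ KCuS cell from the Wyckoff data given in the next section and generates the displaced supercells:
\begin{verbatim}
import numpy as np
from phonopy import Phonopy
from phonopy.structure.atoms import PhonopyAtoms

def pnma_orbit(x, y, z):
    # four symmetry partners generated by the Pnma (No. 62) coset
    # representatives (x,y,z), (1/2-x,-y,1/2+z), (-x,1/2+y,-z),
    # (1/2+x,1/2-y,1/2-z)
    raw = [(x, y, z), (0.5 - x, -y, 0.5 + z),
           (-x, 0.5 + y, -z), (0.5 + x, 0.5 - y, 0.5 - z)]
    return [tuple(c % 1.0 for c in p) for p in raw]

positions = (pnma_orbit(0.0, 0.5, 0.0)              # Cu, 4a
             + pnma_orbit(0.84639, 0.75, 0.52184)   # K,  4c
             + pnma_orbit(0.41155, 0.25, 0.72286))  # S,  4c
unitcell = PhonopyAtoms(symbols=["Cu"] * 4 + ["K"] * 4 + ["S"] * 4,
                        cell=np.diag([10.726, 5.3087, 6.348]),
                        scaled_positions=positions)

phonon = Phonopy(unitcell, supercell_matrix=np.diag([2, 2, 2]))
phonon.generate_displacements(distance=0.01)
# each displaced supercell is passed to a DFT code for forces; with
# the collected force sets, phonon.produce_force_constants() yields
# the force constants from which the dispersion follows
print(len(phonon.supercells_with_displacements), "displaced supercells")
\end{verbatim}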
\section{Results and Discussion}
Figs.~\ref{fig1}(a) and~\ref{fig1}(b) show the crystal structure of KCuS with the $Pnma$-type structure viewed from different directions. KCuS contains 12 atoms per unit cell (i.e., four Cu, four K, and four S atoms located at the 4a (0.0, 0.5, 0.0), 4c (0.84639, 0.75, 0.52184), and 4c (0.41155, 0.25, 0.72286) Wyckoff positions, respectively). The lattice constants of KCuS, obtained from first-principles calculations, are a = 10.726~\AA, b = 5.3087~\AA, and c = 6.348~\AA. Since KCuS hosts an orthorhombic crystal structure, there are nine independent elastic constants: $C_{11}$, $C_{12}$, $C_{13}$, $C_{22}$, $C_{23}$, $C_{33}$, $C_{44}$, $C_{55}$, $C_{66}$. According to our calculations, these independent elastic constants are equal to 19.929, 13.589, 15.712, 33.184, 14.278, 18.756, 7.002, 7.883, and 3.081 GPa, respectively. These elastic constants obey the following elastic stability criteria~\cite{add64}:
\begin{equation}\label{1}
\left\{
\begin{aligned}
&C_{11}>0;~C_{11} \times C_{22}>C_{12}^{2};\\
&C_{11} \times C_{22} \times C_{33}+2C_{12} \times C_{13} \times C_{23}\\
&~~-C_{11} \times C_{23}^{2}-C_{22} \times C_{13}^{2}-C_{33} \times C_{12}^{2}>0;\\
&C_{44}>0;~C_{55}>0;~C_{66}>0
\end{aligned}
\right.
\end{equation}
Hence, it can be concluded that $Pnma$ KCuS is mechanically stable.
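These criteria are easy to verify numerically; the short script below (our own check, with all values in GPa) confirms that each of them holds for the constants listed above:
\begin{verbatim}
# elastic constants of Pnma KCuS from our calculations (GPa)
C11, C12, C13 = 19.929, 13.589, 15.712
C22, C23, C33 = 33.184, 14.278, 18.756
C44, C55, C66 = 7.002, 7.883, 3.081

criteria = [
    C11 > 0,
    C11 * C22 > C12**2,
    (C11 * C22 * C33 + 2 * C12 * C13 * C23
     - C11 * C23**2 - C22 * C13**2 - C33 * C12**2) > 0,
    C44 > 0, C55 > 0, C66 > 0,
]
print(all(criteria))   # True: Pnma KCuS is mechanically stable
\end{verbatim}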
Based on the selected high-symmetry points in Fig.~\ref{fig1}(c), we examined the dynamical stability of $Pnma$ KCuS through phonon dispersion calculations. The phonon dispersion of KCuS along the $\Gamma$--X--S--Y--$\Gamma$--Z--U--R--T--Z--X--U--Y--T--S--R paths is shown in Fig.~\ref{fig2}(a). The absence of imaginary frequency modes in the phonon dispersion indicates that KCuS is dynamically stable. We focus on the four phonon bands in the 4.6--5.6 THz range (labeled region R1); region R1 is well separated from the other phonon bands. For clarity, the enlarged phonon spectrum of R1 is shown in Fig.~\ref{fig2}(b), and we divide R1 into two regions, R2 and R3. In region R2, the phonon bands along the S-R path are four-fold degenerate; in region R3, a two-fold degenerate phonon band crossing point along the Y--$\Gamma$ path can be found. For $Pnma$-type KCuS, the relevant symmetry operators are summarized as follows: two screw rotations $\widetilde{C_{2z}}=\{C_{2z}|\frac{1}{2}0\frac{1}{2}\}$ and $\widetilde{C_{2y}}=\{C_{2y}|0\frac{1}{2}0\},$ a spatial inversion \textit{P}, and time-reversal symmetry ${\cal{T}}$ with ${\cal{T}}^2=1$ (since the system is spinless).
We first study the four-fold degenerate nodal line phonons along the S-R path. KCuS hosts three orthogonal two-fold screw rotation axes. Considering the combined antiunitary operation $\mathcal{T}\widetilde{C_{2i}}$, $(i=x,y,z),$ one can easily derive that $(\mathcal{T}\widetilde{C_{2i}})^2=e^{ik_{i}}.$ Consequently, on the corresponding plane $k_i=\pi,$ one has $(\mathcal{T}\widetilde{C_{2i}})^2=-1.$ That is, the phonon dispersions on all boundary planes $(k_{x/y/z}=\pi)$ are at least two-fold degenerate, and the whole $k_{x/y/z}=\pi$ planes are covered by nodal surface phonons~\cite{add55}. As shown in Fig.~\ref{fig2}(b), there are two doubly degenerate nodal lines along Y-T-S, and these two nodal lines merge into one four-fold degenerate nodal line along the S-R path. Hence, the four-fold degenerate DNL (e.g., the S-R path) sits at the hinge between the $k_x=\pi$ and $k_y=\pi$ planes; that is, the phononic DNL is formed by the crossing of the phononic nodal surface states of the $k_x=\pi$ and $k_y=\pi$ planes. Note that the DNL phonons along S-R belong to open nodal line states~\cite{add54,add55,add56,add57,add58,add59,add60,add61,add62,add63,add64,add65}, as shown in Fig.~\ref{fig2}(c). More interestingly, as shown in Figs.~\ref{fig2}(c) and~\ref{fig2}(d), we selected a series of phonon band crossing points (the E, F, G, and H points) along the S-R path and plotted the phonon dispersions along the A--E--A$^{\prime}$, B--F--B$^{\prime}$, C--G--C$^{\prime}$, and D--H--D$^{\prime}$ paths. It can be concluded that the DNL along the S-R path is formed by a series of linear phonon band crossing points. These crossing points are nearly flat, with small frequency variation (see the orange circles in Fig.~\ref{fig2}(d)), and exhibit a large linear frequency range.
To further prove the occurrence of the phononic DNL along the S-R path, we present the following symmetry analysis. The DNL lies at the hinge between two planes (e.g., $k_x=\pi$ and $k_y=\pi$); it is an invariant subspace of $\widetilde{C_{2z}}$, $\widetilde{M_y}$, and the combined operation $\mathcal{T}\widetilde{C_{2y}}$. The commutation relations between them are given by\begin{equation}\label{2}
\widetilde{C_{2z}}\widetilde{M_y}=\mathcal{T} _{010}\widetilde{M_y}\widetilde{C_{2z}},
\end{equation}
\begin{equation}\label{3}
\widetilde{M_y}(\mathcal{T} \widetilde{C_{2y}})=(\mathcal{T} \widetilde{C_{2y}})\widetilde{M_{y}},
\end{equation}where $\mathcal{T} _{010}$ is the translation along the y-direction.
\\Along the S-R path, one has \begin{equation}\label{4}
\{\widetilde{C_{2z}},\widetilde{M_y}\}=0, [\widetilde{M_y},(\mathcal{T} \widetilde{C_{2y}})]=0.
\end{equation}The Bloch states along this path can be chosen as eigenstates of $\widetilde{M_y}$, characterized by its eigenvalues, $|g_y=\pm1\rangle$. Due to Eq.(\ref{3}), $|g_y=1\rangle$ and $\mathcal{T} \widetilde{C_{2y}}|g_y=1\rangle$ are degenerate through a Kramers-like degeneracy, with $(\mathcal{T} \widetilde{C_{2y}})^2=-1$ on this path. In addition, according to Eq.(\ref{2}), the anticommutation relation implies another degeneracy: $|g_y=1\rangle$ and $\widetilde{C_{2z}}|g_y=1\rangle$ are degenerate, the latter carrying the opposite $g_y$. Consequently, $\{|g_y=1\rangle,|g_y=-1\rangle, \mathcal{T} \widetilde{C_{2y}}|g_y=1\rangle,\mathcal{T}\widetilde{C_{2y}}|g_y=-1\rangle\}$ are degenerate along this path; there is indeed a four-fold degenerate line (i.e., a DNL) along the S-R path.
\begin{figure}
\includegraphics[width=7.8cm]{Figure3}
\caption{(a) enlarged phonon dispersion of $Pnma$ KCuS in the R3 region of Fig.~\ref{fig2}(b), where the hourglass-type phonon band crossing along the $\Gamma$-Y path is marked with a circle; (b) series of selected symmetry points in the $k_z=0$ plane; (c) detailed phonon dispersions along the a$_m$-b$_m$ (m = 1, 2, 3, 4) paths, where the hourglass-type phonon band crossings are marked with orange circles.
\label{fig3}}
\end{figure}
We now study the two-fold degenerate phonon band crossing point (see Fig.~\ref{fig3}(a)) along the Y-$\Gamma$ path in R3. The $k_z=0$ plane is an invariant subspace of $\widetilde{M_z}$, and the Bloch states on it can be characterized by its eigenvalues $g_z=\pm e^{{ik_x}/{2}}$. According to the Kramers-like degeneracy, the $k_i=\pi$ $(i=x,y,z)$ planes are doubly degenerate. At the Y point, which lies on the plane $k_y=\pi$, one has $[\widetilde{M_z},(\mathcal{T} \widetilde{C_{2y}})]=0$ and $g_z=\pm1$, so that the two degenerate partners at this point share the same eigenvalue $g_z$. In contrast, at the P point (see Fig. S1), a generic point along X-S lying on the $k_x=\pi$ plane, one has $[\widetilde{M_z},(\mathcal{T} \widetilde{C_{2x}})]=0$ but $g_z=\pm i$, such that $\{|g_z=+i\rangle,|g_z=-i\rangle\}$ are degenerate. Consequently, from P to Y there must be a doublet switching (see Fig. S1), leading to a neck crossing point of the hourglass-like dispersion. Due to the presence of $\widetilde{M_z}$, such a crossing point is not isolated but traces out a line within the $k_z=0$ plane, shown with black lines in Fig.~\ref{fig3}(b). Moreover, we selected a series of symmetry points, a$_1$-a$_4$ (b$_1$-b$_4$), between the $\rm{\Gamma(Y)}$ and X (S) points to show the detailed phonon dispersions along the a$_m$-b$_m$ (m = 1, 2, 3, 4) paths. The results are shown in Fig.~\ref{fig3}(c); along the a$_m$-b$_m$ paths, a series of phonon band crossing points appear, and they are the neck crossing points of the hourglass-like dispersion~\cite{add56,add57,add58,add59,add60}. Therefore, the two-fold degenerate nodal line phonons in the $k_z=0$ plane are HWNL phonons. Normally, an hourglass nodal line (HNL) can move within the $k_z=0$ plane; however, the HWNL exhibited in Fig.~\ref{fig3}(b) is unmovable, as its two endpoints are pinned at the S point. Moreover, as shown in Fig.~\ref{fig3}(c), the hourglass-type phonon band crossings (orange circles) are almost flat in frequency.
Finally, we study the phonon surface spectra of $Pnma$-type KCuS. As shown in Fig.~\ref{fig4}, phonon surface states appear only on the [100] and [001] surfaces; for the [010] surface, no phonon surface state is observed (see Fig.~\ref{fig4}(c)). To understand the physics, we employed Zak phase calculations. The Zak phase is the Berry phase along a straight line normal to the surface and crossing the bulk BZ; a $\pi$ Zak phase generally indicates the existence of nontrivial topological surface states. For lines normal to the [100] and [001] surfaces, the Zak phase equals $\pi$ (see the insets of Figs.~\ref{fig4}(b) and~\ref{fig4}(d)), corresponding to the appearance of surface states. For a line normal to the [010] surface, the Zak phase equals 0 (see the inset of Fig.~\ref{fig4}(c)), corresponding to the absence of surface states.
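For readers who wish to reproduce the idea behind these inset values, the self-contained sketch below (our own illustration on the two-band SSH chain, not the Wannier-based calculation performed for KCuS) evaluates a Zak phase as a discretized Wilson loop; up to the gauge/unit-cell convention, it returns $\pi$ in the nontrivial regime and $0$ otherwise:
\begin{verbatim}
import numpy as np

def zak_phase(t1, t2, nk=512):
    # discretized Wilson loop for the lower band of the SSH chain
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    states = []
    for k in ks:
        h = np.array([[0.0, t1 + t2 * np.exp(-1j * k)],
                      [t1 + t2 * np.exp(1j * k), 0.0]])
        _, v = np.linalg.eigh(h)
        states.append(v[:, 0])       # lower-band eigenvector
    states.append(states[0])         # close the loop (H is 2*pi periodic)
    loop = 1.0 + 0.0j
    for ua, ub in zip(states[:-1], states[1:]):
        loop *= np.vdot(ua, ub)      # gauge-invariant link variables
    return -np.angle(loop)

print(abs(zak_phase(1.0, 2.0)))      # ~pi: nontrivial, surface states
print(abs(zak_phase(2.0, 1.0)))      # ~0 : trivial
\end{verbatim}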
\begin{figure}
\includegraphics[width=7.8cm]{Figure4}
\caption{(a) 3D bulk BZ and [001], [010], [100] surface BZs; (b)-(d) phonon surface spectra on [100], [010], [001] surfaces, respectively. The inset figures in (b)-(d) indicate the values of Zak phases (0 or $\pi$) for lines normal to the [100], [010], [001] surfaces. The nontrivial phonon surface states in the [100] and [001] surfaces are indicated by black arrows. For the [010] surface, no phonon surface states appear.
\label{fig4}}
\end{figure}
\section{Conclusions}
In summary, based on first-principles calculations and symmetry analysis, nearly flat phononic DNL and phononic HWNL states were predicted in a single solid-state material, KCuS with the $Pnma$ structure. $Pnma$-type KCuS was predicted to be mechanically and dynamically stable. Moreover, the phonon surface states on the [100], [010], and [001] surfaces of this material were investigated; the topological phonon surface states of KCuS appear only on the [100] and [001] surfaces, in accordance with the $\pi$ Zak phases. The current work predicts the coexistence of two-fold and four-fold degenerate nodal line phonons in a single material for the first time. The KCuS reported here can therefore be viewed as a good platform to study the entanglement between DNL phonons and HWNL phonons in the future.
\emph{\textcolor{blue}{Acknowledgments}} X.T. Wang is grateful for the support from the National Natural Science Foundation of China (No. 51801163) and the Natural Science Foundation of Chongqing (No. cstc2018jcyjA0765).
\section{Introduction}
\label{sec:intro}
Image steganography and image steganalysis
are a pair of antagonistic techniques, wherein the former conceals secret messages within cover images, and the latter looks for embedding artifacts to reveal the presence of secret messages within stego images.
Since JPEG is the most common image format and widely used in daily life, steganography and steganalysis for JPEG images are of both academic and practical value.
Modern JPEG steganographic methods are designed according to a distortion minimization framework \cite{distortion}. The distortion can be simply represented in an additive form, calculated as the sum of the embedding costs of modified DCT (Discrete Cosine Transform) coefficients. As a result, defining the embedding cost is of central importance for steganography. In the past decade, a number of additive cost functions \cite{UNIWARD, UED, UERD, GUED, MS, JMIPOD} have been proposed, either heuristically or based on a statistical model.
Most existing well-performed embedding costs exploit {JPEG DCT characteristics, including the texture level of DCT blocks, the correlation among DCT coefficients, and the different impacts of DCT frequency-modes.} These additive embedding costs can be adjusted into non-additive embedding costs \cite{BBC++} or side-informed embedding costs \cite{sideinfor}.
On the other hand, many efficient JPEG steganalytic methods have been designed in a supervised machine learning fashion, where high-dimensional hand-crafted image statistical features \cite{JRM, GFR,PHARM,diversity,selection} are constructed.
Their performance can be further improved by incorporating selection-channel information \cite{SCA,JPEGphaseaware}.
In recent years,
with the rapid development of deep learning techniques, steganalytic methods based on CNNs (Convolutional Neural Networks), including PNet \cite{PNet}, J-XuNet \cite{XuJPEG}, Zeng-CNN \cite{zeng}, and SRNet \cite{SRNet}, have achieved good performance.
To overcome the great challenges posed by deep learning aided steganalysis,
it is also tempting to take advantage of deep learning for steganography so as to obtain better steganographic security.
In fact, deep learning can be applied to both discriminative and generative tasks.
There are already some good deep learning based data hiding methods available \cite{baluja2017hiding};
however, they can neither resist advanced steganalytic or forensic detection nor achieve error-free message decoding.
A feasible compromise is to learn better embedding costs under the distortion minimization framework.
Compared to traditional cost functions predefined by heuristics or a fixed model,
learning costs from scratch with deep learning is appealing because it can automatically learn an intrinsic cover generative representation from big data and be adjusted dynamically on demand.
There are two types of state-of-the-art techniques for automatic embedding cost learning.
One is generative adversarial network (GAN) based and the other is
reinforcement learning (RL) based.
In the first type,
the earliest work is ASDL-GAN (Automatic Steganographic Distortion Learning framework with Generative Adversarial Network) \cite{ASDLGAN},
in which
a generator is designed to learn embedding change probabilities
so as to resist a discriminator which aims to distinguish between cover and simulated stego images.
A neural network based embedding simulator is used to simulate message embedding.
UT-GAN (U-Net and Double-Tanh Framework using GAN) \cite{UTGAN} has improved ASDL-GAN by utilizing more advanced network architectures and a double-Tanh activation function based simulator.
In the second type,
SPAR-RL (Steganographic Pixel-wise Actions and Rewards with Reinforcement Learning) \cite{SPARRL}
has been proposed
by using a sampling based simulator to sample embedding actions
so as to overcome the ``trade-off drawback'' (namely, the drawback of trading-off accurate simulated modifications for attenuated gradients) caused by neural network based (or activation function based) embedding simulators in ASDL-GAN/UT-GAN.
In SPAR-RL, a policy network aims to generate embedding policies, i.e., embedding change probabilities, by maximizing the rewards assigned from an environment (or called critic) for the sampled modification actions.
There is yet another related technique employing deep learning for steganography, called adversarial embedding (ADV-EMB) \cite{ADVEMB}.
It imitates the effect of adversarial examples (AE) by utilizing the gradients of a CNN steganalyzer to adjust costs to evade detection.
A min-max strategy \cite{minmax} can further be applied to construct a set of trained steganalyzers so that a min-max game equilibrium can be gradually reached to optimize the performance of the adjusted costs.
However, this technique can only adjust off-the-shelf embedding costs; it is incapable of generating costs from scratch.
In fact, it can be used as an adjoint technique, like non-additive cost functions \cite{BBC++}, to further boost steganographic performance, as demonstrated in \cite{SPARRL}, where it improves the costs generated by SPAR-RL.
The idea of learning embedding costs in single image steganography can be further extended to batch steganography \cite{batch}.
Although some effective automatic cost learning methods have been proposed so far, they are only applicable to spatial images because the costs are explicitly defined on pixels rather than DCT coefficients.
They cannot be directly applied to JPEG images because the embedding units reside in the DCT domain.
JS-GAN (JPEG Steganography using GAN) \cite{JSGAN} is the earliest method that attempts to automatically learn JPEG embedding costs.
However, its network architecture does not fully account for the peculiarities of JPEG images, and its activation function based embedding simulator suffers from the ``trade-off drawback'' mentioned above. Therefore, its performance is still inferior to conventional methods such as the baseline J-UNIWARD (JPEG-Universal Wavelet Relative Distortion) \cite{UNIWARD}.
Note that most existing JPEG steganographic methods \cite{UED, UERD, GUED, MS}, except those from the UNIWARD family \cite{UNIWARD}, are designed to tailor JPEG characteristics.
The special $8\times8$ DCT mode structure is the major source of difficulty hindering a successful migration from a spatial pixel-oriented method to a JPEG DCT-oriented one.
This fact may also hold true for steganography based on deep learning.
For instance, both inter-block and intra-block correlations exist among DCT coefficients, and they are not easily captured by the conventional convolution operations used in previous automatic cost learning methods \cite{ASDLGAN,UTGAN,SPARRL,JSGAN}.
To address the above problems, in this paper we extend the RL based automatic embedding cost learning scheme with a sampling based simulator to the DCT domain and propose a method called JEC-RL (JPEG Embedding Cost with Reinforcement Learning).
Our method explicitly exploits the JPEG domain knowledge utilized in conventional steganography and steganalysis.
Such domain knowledge includes the two aspects of embedding cost measurement in conventional steganography \cite{UNIWARD,UERD,GUED,MS}, i.e., the texture complexity of a DCT block and the position of its DCT frequency-mode, as well as an important aspect of modern JPEG steganalysis \cite{JRM, GFR,PHARM,SCA,PNet,XuJPEG,zeng,SRNet,selection,JPEGphaseaware,diversity}, i.e., the intra-block and inter-block correlations of DCT elements.
In this paper, we propose a policy network composed of three modules to account for these factors.
These modules operate in series, gradually extracting useful features from a decompressed JPEG image and converting them into embedding policies for DCT elements.
Moreover, since all DCT coefficients may be modified during data embedding, many deep CNN steganalyzers equipped with high-pass filters \cite{Xu, Yedroudj} may not be suitable as an environment network for assigning rewards.
We investigate two properties that an effective neural network based environment should satisfy.
First, $8\times8$ DCT basis filters are better suited than high-pass filters for the preprocessing layer, as they provide sufficient frequency resolution.
Second, a ``wide'' network is better than a ``deep'' network for efficient gradient propagation, leading to better reward assignment.
The contributions and technical highlights of this work are summarized as follows.
\begin{itemize}
\item A practical automatic cost learning method named JEC-RL has been proposed based on JPEG domain knowledge. Experiments show that JEC-RL can learn effective costs that outperform existing additive cost functions against both feature-based and CNN-based steganalyzers. This is the first work in which an automatic cost learning method starting from scratch achieves outstanding performance for JPEG images.
Furthermore, the data-driven learning architecture can make better use of the texture calculation process of conventional JPEG steganographic methods.
\item
A policy network for generating embedding policies has been constructed with three modules following a domain-transition design paradigm.
In this paradigm,
texture complexity is first evaluated in the spatial domain by a \textit{pixel-level texture complexity evaluation module}.
Then, by means of a \textit{DCT feature extraction module}, DCT features with inter-block and intra-block correlations are learned in an end-to-end fashion.
Finally, through a \textit{mode-wise rearrangement module}, the DCT features are rearranged into the $8\times8$ block structure as embedding policies.
\item A gradient-oriented environment network has been adopted to facilitate reward assignment.
Extensive ablation studies show that a preprocessing layer equipped with a bank of $8\times8$ DCT basis filters can sense modifications in all DCT modes and thus provides sufficient frequency resolution.
They further show that increasing the network capacity by widening the structure enables stable gradient propagation and thus provides discriminative rewards.
\end{itemize}
This paper is organized as follows. In Section \ref{sec:fundamentals}, we review the fundamentals related to the proposed steganographic method, including existing cost functions for JPEG images and cost learning methods based on deep learning techniques.
In Section \ref{sec:SPARRLv1v2}, we analyze and experimentally verify the ineffectiveness of existing automatic cost learning methods for JPEG images.
In Section \ref{sec:method}, we introduce the proposed JEC-RL by giving the details of its design paradigm and network architecture. In Section \ref{experiment}, we present extensive experimental results to demonstrate the performance.
In Section \ref{sec:conclusion}, we draw conclusions.
\section{Fundamentals And Background}
\label{sec:fundamentals}
\subsection{Notations}
\label{Sec: notation}
In the remainder of this paper, capital bold letters and the corresponding lowercase letters represent matrices and the elements within those matrices, respectively. Specifically, the spatial grayscale cover and stego images are denoted as $\mathbf X = (x_{i,j})^{H \times W }$ and $\mathbf Y = (y_{i,j})^{H \times W }$, respectively, where $H$ and $W$ are the height and width of the image.
Without loss of generality, assume $H$ and $W$ are multiples of 8.
The JPEG grayscale cover and stego images are respectively denoted as $\mathbf X^{J} = (x_{a,b}^{k,l})^{H \times W }$ and $\mathbf Y^{J} = (y_{a,b}^{k,l})^{H \times W }$, where $1 \leq a \leq H/8 $, $1 \leq b \leq W/8 $, $1 \leq k,l \leq 8$. Note that
$x_{a, b}^{k, l}$ (or $y_{a, b}^{k, l}$) is the
$\big(8\times(a-1)+k,8\times(b-1)+l\big)$-th element in $\mathbf X^{J}$ (or $\mathbf Y^{J}$), which corresponds to the DCT coefficient in the $(a,b)$-th DCT block and the $(k,l)$-th DCT frequency-mode.
Without loss of generality, we assume $H=W=256$.
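For concreteness, the short numpy sketch below (our own illustration; a random array merely stands in for a quantized DCT coefficient plane) shows how the $(a,b,k,l)$ indexing maps onto the $H\times W$ array:
\begin{verbatim}
import numpy as np

H = W = 256
XJ = np.random.randn(H, W)   # stand-in for a quantized DCT plane

# view as (a, b, k, l): block indices first, then intra-block modes
blocks = XJ.reshape(H // 8, 8, W // 8, 8).transpose(0, 2, 1, 3)

a, b, k, l = 3, 5, 2, 7      # 1-indexed, as in the text
assert blocks[a-1, b-1, k-1, l-1] == XJ[8*(a-1) + (k-1),
                                        8*(b-1) + (l-1)]
\end{verbatim}
The same $(A, B, 8, 8)$ layout is convenient for the cost computations sketched later in this section.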
\subsection{Basics of Embedding Costs}
\label{Sec: JPEGrho}
According to the distortion minimization framework \cite{distortion}, the message embedding process can be formulated as an optimization problem with a payload constraint:
\begin{equation}\label{equ:problem}
\min \limits_{\mathbf{Y}^{J} } D(\mathbf{X}^{J}, \mathbf{Y}^{J} ), \quad
\text{s.t. } \psi(\mathbf{Y}^{J} ) = C,
\end{equation}
where $D(\mathbf{X}^{J}, \mathbf{Y}^{J})$ is a function measuring the overall distortion
caused by modifying $\mathbf{X}^{J}$ to $\mathbf{Y}^{J}$, $\psi(\mathbf{Y}^{J})$ is the payload conveyed by $\mathbf{Y}^{J}$, and $C$ is the target payload.
A distortion function defined in an additive form can be expressed as
\begin{equation}\label{equ:add_distortion_all}
D({\mathbf{X}^{J}, \mathbf{Y}^{J}}) = \sum_{a=1}^{H/8} \sum_{b=1}^{W/8} \sum_{k=1}^{8} \sum_{l=1}^{8} \rho_{a,b}^{k,l}(m_{a,b}^{k,l})\mathcal{I}(m_{a,b}^{k,l}\neq0),
\end{equation}
where
$\rho_{a,b}^{k,l}(m_{a,b}^{k,l})$ is the additive embedding cost of modifying $x_{a,b}^{k,l}$ into $y_{a,b}^{k,l}=x_{a,b}^{k,l}+m_{a,b}^{k,l}$, and $\mathcal{I}(z)$ is the indicator function as
\begin{equation}\label{equ:delta}
\mathcal{I}(z)=
\begin{cases}
1, \quad \text{if } z \text{ is true},\\
0, \quad \text{if } z \text{ is false}.
\end{cases}
\end{equation}
In general, the more detection risk a modification may introduce, the larger its cost value.
Steganographic codes such as STCs (Syndrome-Trellis Codes) \cite{STCs} can be applied in practice.
For simulation purposes,
an optimal embedding simulator \cite{simulator} can be used to compute the embedding change probabilities under a given payload as
\begin{equation}\label{eq:p+}
\begin{aligned}
p_{a,b}^{k,l}(m) = \frac{e^{-\lambda\rho_{a,b}^{k,l}(m)}}{ \sum_{\tilde{m} \in \mathcal{M}} e^{-\lambda\rho_{a,b}^{k,l}(\tilde{m})}}, \quad m \in \mathcal{M},
\end{aligned}
\end{equation}
where $\lambda$ is a parameter determined by the payload constraint, which can be computed from
\begin{equation}
-\sum_{a=1}^{H/8}\sum_{b=1}^{W/8}\sum_{k=1}^{8}\sum_{l=1}^{8}
\sum_{m \in \mathcal{M}}
p_{a,b}^{k,l}(m) \text{log}_2 p_{a,b}^{k,l}(m) =C.
\label{equ:capacity}
\end{equation}
In this paper, we focus on the case of ternary embedding, wherein
$\mathcal{M}=\{+1,0,-1\}$, and we suppose $\rho_{a,b}^{k,l}(+1) = \rho_{a,b}^{k,l}(-1) = \rho_{a,b}^{k,l}$ and $\rho_{a,b}^{k,l}(0) = 0$.
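The following minimal sketch (an illustration of our own; the cost map is a random placeholder and the payload is measured per coefficient rather than per nonzero AC coefficient) implements this ternary simulator, solving for $\lambda$ by bisection, since the total entropy decreases monotonically as $\lambda$ grows:
\begin{verbatim}
import numpy as np

def change_probs(rho, lam):
    # ternary: p(+1) = p(-1) = exp(-lam*rho) / (1 + 2 exp(-lam*rho))
    e = np.exp(-lam * rho)
    return e / (1.0 + 2.0 * e)

def payload_bits(p):
    # total ternary entropy in bits; p is the per-coefficient p(+1)
    q = 1.0 - 2.0 * p
    return (-2.0 * p * np.log2(p) - q * np.log2(q)).sum()

def solve_lambda(rho, C, lo=1e-8, hi=1e4, iters=60):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if payload_bits(change_probs(rho, mid)) > C:
            lo = mid   # entropy still too high: increase lambda
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = 10.0 * np.random.rand(32, 32) + 0.1   # placeholder positive costs
C = 0.4 * rho.size                          # 0.4 bits per coefficient
lam = solve_lambda(rho, C)
p = change_probs(rho, lam)
print(lam, payload_bits(p) / rho.size)      # payload close to 0.4
\end{verbatim}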
\subsection{Embedding Costs in Conventional JPEG Steganography}
\label{Sec: JPEGrho2}
Generally speaking, in many existing JPEG steganographic methods the embedding cost is measured from two aspects \cite{UERD}:
the texture complexity of a DCT block and the position of its DCT frequency-mode,
\begin{equation}\label{eq:JPEGrho}
\begin{aligned}
\rho_{a,b}^{k,l} = \rho_{a,b}^{<\text{block}>}\cdot\rho_{k,l}^{<\text{mode}>},
\end{aligned}
\end{equation}
wherein $\rho_{a,b}^{<\text{block}>}$ is the block-level suitability of the $(a,b)$-th DCT block and $\rho_{k,l}^{<\text{mode}>}$ is the mode-level suitability of the $(k,l)$-th DCT frequency-mode.
Please note that $\rho_{a,b}^{<\text{block}>}$ can be set as the reciprocal of the block-level texture complexity and thus is closely related to the image content, while $\rho_{k,l}^{<\text{mode}>}$ relies on the position of the DCT frequency-mode $(k,l)$ and is independent of the image content.
In conventional methods \cite{UNIWARD,UERD,GUED,MS}, $\rho_{a,b}^{<\text{block}>}$ and $\rho_{k,l}^{<\text{mode}>}$ are computed independently.
Take UERD \cite{UERD} for example. The block-level suitability $\rho_{a,b}^{<\text{block}>}$ is defined as the reciprocal of the weighted sum of the block energy of the $(a,b)$-th DCT block and its neighboring blocks, given as
\begin{equation}\label{eq:UERDrho}
\begin{aligned}
\rho_{a,b}^{<\text{block}>} =
\frac{1}{E_{a,b}+0.25\cdot \sum_{\hat{E}\in \mathbb{\hat{E}}_{a,b}}\hat{E}},
\end{aligned}
\end{equation}
where
$ \mathbb{\hat{E}}_{a,b}$ is the set of the block energy for the blocks located in the eight-neighborhood of the $(a,b)$-th DCT block,
and
the block energy $E_{a,b}$ of the $(a,b)$-th DCT block is defined as
\begin{equation}
\begin{aligned}
\label{eq:E}
E_{a,b} = \sum_{k=1}^{8}\sum_{l=1}^{8}|x_{a,b}^{k,l}| \cdot s_{k,l},
\end{aligned}
\end{equation}
where $s_{k,l}$ is the quantization step of the $(k,l)$-th DCT frequency-mode.
The mode-level suitability
$\rho_{k,l}^{<\text{mode}>}$ is defined according to the quantization steps $s_{k,l}$ as
\begin{equation}\label{eq:moderho}
\begin{aligned}
\rho_{k,l}^{<\text{mode}>} = \begin{cases}
0.5\cdot(s_{k+1,l}+s_{k,l+1}), \quad \text{if} \quad (k,l) = (1,1),\\
s_{k,l}, \quad \text{otherwise}.
\end{cases}
\end{aligned}
\end{equation}
The overall embedding costs $\rho_{a,b}^{k,l}$ are then obtained according to \eqref{eq:JPEGrho}.
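As a concrete reading of Eqs. \eqref{eq:JPEGrho}--\eqref{eq:moderho}, the sketch below (our own illustration; the inputs are random placeholders, and the replicate-edge border handling as well as the small $\epsilon$ guarding all-zero neighborhoods are simplifying assumptions of ours rather than part of the original UERD definition) computes a full UERD cost map in the $(A, B, 8, 8)$ layout introduced in Section \ref{Sec: notation}:
\begin{verbatim}
import numpy as np

def uerd_costs(XJ, s, eps=1e-10):
    # XJ: (A, B, 8, 8) quantized DCT coefficients; s: (8, 8) quant. table
    E = np.abs(XJ * s).sum(axis=(2, 3))        # block energy E_{a,b}
    Ep = np.pad(E, 1, mode="edge")             # border handling (our choice)
    neigh = sum(np.roll(np.roll(Ep, dk, 0), dl, 1)[1:-1, 1:-1]
                for dk in (-1, 0, 1) for dl in (-1, 0, 1)
                if (dk, dl) != (0, 0))         # 8-neighborhood energies
    rho_block = 1.0 / (E + 0.25 * neigh + eps) # block-level suitability
    rho_mode = s.astype(float)                 # mode-level suitability
    rho_mode[0, 0] = 0.5 * (s[1, 0] + s[0, 1]) # DC mode, (k,l) = (1,1)
    return rho_block[:, :, None, None] * rho_mode[None, None, :, :]

XJ = np.random.randint(-5, 6, size=(32, 32, 8, 8))
s = np.random.randint(1, 30, size=(8, 8))
rho = uerd_costs(XJ, s)   # one cost per DCT coefficient, (32, 32, 8, 8)
\end{verbatim}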
\begin{figure}[tb!]
\centering
\includegraphics[width=0.48\textwidth]{Images//summary.pdf}
\caption{
Three deep learning techniques for embedding costs.
}\label{fig:learn costs}
\vspace{-0.8cm}
\end{figure}
\subsection{Using Deep Learning for Embedding Costs}
\label{Sec: DLrho}
There exist three kinds of deep learning techniques developed for embedding costs in steganography, including generative adversarial network (GAN) \cite{ASDLGAN, UTGAN, JSGAN}, reinforcement learning (RL) \cite{SPARRL}, and adversarial examples (AE) \cite{ADVEMB,zhang2018adversarial,minmax}.
Fig. \ref{fig:learn costs} shows diagrams of their respective workflows.
They may share some similarities in network structure,
and their convergence in each case reaches a kind of equilibrium\footnote{This statement holds when an environment network is used in \cite{SPARRL} and a min-max strategy is used in \cite{minmax}. However, when a fixed environment is used or the min-max strategy is disabled, the steganalysis side is considered static.},
but they differ in their working mechanisms.
To better understand these differences and why RL is adopted in our proposed method,
we compare the techniques side by side in detail from the following aspects.
\begin{itemize}
\item \textit{Scope of application:} Both GAN-based and RL-based methods can be applied to automatically learn embedding costs from scratch, while AE-based methods are used to improve off-the-shelf embedding costs.
In other words, GAN-based and RL-based methods can work independently, while
AE-based methods serve as an adjoint post-processing technique.
\item \textit{Construction:} In GAN-based methods, a generator and a discriminator work together in the training stage to compete with each other, and only the generator is needed in the deployment stage to generate embedding costs. Both the generator and the discriminator are implemented by learnable neural networks.
In AE-based methods, there should be a well-trained differentiable steganalyzer (or a set of steganalyzers in \cite{minmax}) for adjusting predefined embedding costs.
In the RL-based method, there is a policy network and an environment (also called a critic).
In principle, actions are sampled according to the embedding policies output by the policy network, and
the environment can be any flexible module that yields rewards for the sampled actions.
In the practical implementation of \cite{SPARRL}, both the policy network and the environment are
realized by learnable neural networks.
\item \textit{The usage of gradient:} In GAN-based methods,
the gradient with respect to the network parameters
is used to update the parameters of both the discriminator and the generator.
In AE-based methods, the gradient with respect to the image elements
is used to asymmetrically adjust the embedding costs.
In the RL-based method, the gradient with respect to the parameters of the environment network
is used to update the environment network,
while the gradient with respect to the image elements is incorporated into the reward function to guide the update of the parameters of the policy network.
Note that if a non-gradient based reward function is designed, gradients need not be involved in the update process of the policy network.
\item \textit{Generating simulated stegos during the training stage:}
In GAN-based methods,
the simulated stegos are generated by a neural network based or activation function based simulator, where the simulated modifications are inaccurate floating-point values.
In the RL-based method,
the simulated stegos are sampled according to the embedding policies (probabilities),
where the simulated modifications are exact integer values.
In the AE-based methods, genuine stegos are generated, according to the adjusted costs, with an optimal embedding simulator
or a practical steganographic code as mentioned in Section \ref{Sec: JPEGrho}.
In \cite{minmax}, several differentiable steganalyzers must be successively trained on correspondingly updated stego image sets. Note that in the AE-based methods, since the input costs are predefined by a cost function or obtained by GAN-based or RL-based methods,
the updating processes for the adjusted costs and for the differentiable steganalyzers
require far fewer training rounds (typically 8 or 9 in \cite{minmax}) than
the GAN-based or RL-based methods.
Both the discriminator in GAN and the environment network in RL are updated in a mini-batch fashion
using only a few pairs of cover and simulated stego images,
while the steganalyzers in AE must be updated with all training genuine stego images.
\end{itemize}
\section{Ineffectiveness of Applying SPAR-RL to Learn JPEG Embedding Costs}
\label{sec:SPARRLv1v2}
\subsection{Brief Review of SPAR-RL}
\label{sec:SPARRL}
In SPAR-RL \cite{SPARRL}, an {agent}, playing the role of the steganographer, aims to learn the optimal {embedding policies} associated with embedding costs, while an {environment} gives reward feedback on the feasible actions taken by the agent.
The actions are simulated embedding modifications stochastically sampled according to the probabilities defined by embedding policies.
Considering the tremendously high search space for image-level modification actions, the image-level action is decomposed into parallel pixel-wise actions in SPAR-RL.
A policy network is responsible for taking the cover image as input, and outputting a policy matrix defining the probability distribution of possible modification actions.
On the environment side, an environment network is used to obtain a reward matrix
to indicate whether the corresponding sampled modification actions should be
encouraged or discouraged.
SPAR-RL-v1 and SPAR-RL-v2 are two implementations proposed in \cite{SPARRL} with different network capacities in the policy network and the environment network.
To learn the optimal embedding policies, the policy network and the environment network are alternately updated in a manner similar to GAN training: the former aims to maximize the overall reward for the taken actions, while the latter returns rewards and simultaneously updates itself {towards better classification} between the cover and the simulated stego images.
When the learning process converges, the embedding policies can be inversely converted to embedding costs for practical message embedding.
\begin{table}[t!]
\renewcommand\arraystretch{1}
{\caption{
$P_{\text E}$ of steganographic methods against GFR steganalyzer under the setting of JPEG quality factor 75.}
\label{tab:SPARRLv2}}
\centering
\begin{tabular}{ccccc}
\toprule
\multirow{2}{*}{\textbf{Steganalyzer}}& \textbf{Steganographic}& \multicolumn{3}{c}{\textbf{Payload (bpnzAC)}} \\\cline{3-5}
& \textbf{method}& {\textbf{0.1}} & {\textbf{0.3}}& {\textbf{0.5}} \\\midrule
\multirow{2}{*}{GFR}&J-UNIWARD & 45.38\% & 29.56\% & 15.01\%\\
&A-SPAR-RL-v2 &42.95\% &29.20\% &13.97\% \\
\bottomrule
\end{tabular}
\vspace{0.2cm}
\renewcommand\arraystretch{1}
{\caption{
$P_{\text E}$ of A-SPAR-RL-v2 and its environment network in five independent experiments.}
\label{tab:5experiment}}
\centering
\begin{tabular}{p{2.5cm}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.6cm}<{\centering}}
\toprule
&$\#1$ &$\#2$ &$\#3$ &$\#4$ &$\#5$ \\\midrule
A-SPAR-RL-v2 & 11.62\% & 11.56\% &21.72\% &22.04\% &12.37\% \\
Environment network & 44.75\% & 42.80\% &28.04\% &37.45\% &40.04\% \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{Images//diagram.pdf}
\caption{The overview diagram of the proposed JEC-RL.}
\label{fig:diagram}
\vspace{-0.8cm}
\end{figure}
\subsection{Ineffectiveness of SPAR-RL for JPEG}
\label{sec:disSPARRL}
SPAR-RL cannot be applied well to JPEG images, for two reasons.
First, the environment network is designed to capture the embedding traces left in the spatial domain and may fail to evaluate discriminative features in the DCT domain. Therefore, the returned rewards may not provide an effective evaluation of the modification actions on DCT coefficients across the 64 DCT modes. Besides, such an environment network architecture may not converge steadily in the situation of JPEG cost learning.
Second, the policy network works on spatial pixels rather than DCT coefficients. In contrast to spatial pixels, DCT coefficients are arranged into an $8\times8$ frequency-mode structure and thus exhibit both inter-block and intra-block correlations.
Both correlations affect the security performance, but may not be fully captured by the common convolution operations used in SPAR-RL.
We have conducted an experiment based on an adapted version of SPAR-RL-v2, abbreviated as A-SPAR-RL-v2, wherein the JPEG images are decompressed to the spatial domain before being fed into the policy network and the environment network. The policy network directly outputs embedding policies for DCT coefficients.
Experiments were conducted on $\textit{BOSSBase}$ with JPEG quality factor 75, and GFR was used for evaluation. Detailed experimental settings were the same as those given in Section \ref{sec:setting}.
The results are shown in Table \ref{tab:SPARRLv2}. We can observe that although A-SPAR-RL-v2 still works to some extent, its performance is inferior to the baseline method J-UNIWARD, let alone more recently proposed methods.
Besides, A-SPAR-RL-v2 suffers from a severe convergence issue.
As shown in Table \ref{tab:5experiment}, among five independent experiments, the security performance of A-SPAR-RL-v2 ranges from $11.56\%$ to $22.04\%$ at 0.4 bpnzAC.
To further investigate this phenomenon, we used the environment network in A-SPAR-RL-v2 to classify the cover images and the stego images generated by J-UNIWARD at 1.0 bpnzAC.
We can observe that the model whose environment network has better detection performance is more likely to yield a more secure cost learning method.
However, the environment network in A-SPAR-RL-v2 does not take the JPEG domain knowledge into account, and thus its detection performance is quite unstable.
To implement an effective JPEG cost learning method, the mechanism of the environment network should be further investigated.
\section{JEC-RL by Exploiting JPEG Domain Knowledge}
\label{sec:method}
In order to address the issue of automatic cost learning for JPEG images, we propose JEC-RL based on JPEG domain knowledge.
We first give an overview of the proposed method, and then
show the details of two important components, i.e., the policy
network following the domain-transition design paradigm and the gradient-oriented environment network.
\subsection{Method Overview}
The design methodology of JEC-RL extends the SPAR-RL framework \cite{SPARRL}, where a policy network yields optimal embedding policies through iterative interactions with an environment network, as shown in Fig. \ref{fig:diagram}.
In this paper, the policy network follows a domain-transition paradigm consisting of three modules.
The first module takes JPEG image elements $\mathbf X^{J} = (x_{a,b}^{k,l})^{H \times W }$ as input and outputs a matrix $\mathbf T = (t_{i,j})^{H \times W }$, which contains the evaluation of texture complexity for each pixel in spatial domain.
The second module transforms spatial texture complexity features $\mathbf T = (t_{i,j})^{H \times W }$ to DCT frequency features $\mathbf F = (f_{i,j,d})^{H/8 \times W/8 \times 64}$ via spatial aggregation and frequency conversion.
The third module rearranges the DCT features into $8\times8$ DCT mode structure
{to obtain a policy matrix} $\bm{\Pi}^{J} = (\bm{\pi}_{a,b}^{k,l}(m))^{H \times W }$, where $\bm{\pi}_{a,b}^{k,l}(m)$ is the embedding policy, i.e., the probability distribution over possible modification actions, for image element $x_{a,b}^{k,l}$.
In ternary embedding, the possible modification actions are $+1, 0, -1$.
Then, the agent applies stochastic sampling following the embedding policies and takes the corresponding coefficient-wise modification actions.
A modification map $\mathbf M^{J} = (m_{a,b}^{k,l})^{H \times W }$ is formed and then added to the cover image $\mathbf X^{J} = (x_{a,b}^{k,l})^{H \times W }$ to obtain a simulated stego image $\mathbf Y^{J} = (y_{a,b}^{k,l})^{H \times W }$.
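To make the sampling step concrete, the following minimal NumPy sketch draws a ternary modification map from toy change probabilities and forms the simulated stego; the shapes, the random cover, and the uniform stand-in for the learned probabilities are illustrative assumptions, not the trained policy network's output.
\begin{verbatim}
# Minimal sketch of coefficient-wise ternary sampling (toy data): each
# element is changed by +1 or -1 with probability q/2 each, and kept
# unchanged with probability 1 - q.
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256
X = rng.integers(-64, 64, size=(H, W)).astype(np.int32)  # toy coefficients
q = rng.uniform(0.0, 0.5, size=(H, W))                   # toy change probs

u = rng.uniform(size=(H, W))
M = np.where(u < q / 2, 1, np.where(u < q, -1, 0))       # modification map
Y = X + M                                                # simulated stego
\end{verbatim}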
As for the environment network, it is responsible for reward assignment, where the rewards are defined based on the back-propagated gradients.
In our work, a wide architecture
equipped with a fixed preprocessing layer with $8\times8$ DCT bases is proposed to accomplish the task well.
The environment network takes ${\mathbf X}^{J}$ and ${\mathbf Y}^{J}$ as input, and then calculates a gradient matrix ${\mathbf G}^{J} = (g_{a,b}^{k,l})^{H \times W}$, wherein $g_{a,b}^{k,l}$ is the gradient of the environment network's loss function with respect to the sampled modification $m_{a,b}^{k,l}$.
A reward matrix ${\mathbf R}^{J} = (r_{a,b}^{k,l})^{H \times W}$ is obtained with ${\mathbf M}^{J}$ and ${\mathbf G}^{J}$ as shown in \eqref{eq:DG function}, wherein $r_{a,b}^{k,l}$ evaluates the contribution of the action $m_{a,b}^{k,l}$ on deceiving the steganalyzer.
In the learning phase, the policy network is trained with the environment network by iteratively generating simulated stego images and then receiving rewards for updating its learning parameters. %
In the deployment phase, the well-trained policy network is used to output a policy matrix which can be converted to embedding costs.
With the help of steganographic codes such as STC \cite{STCs}, genuine stego images can be generated.
{
The proposed JEC-RL takes advantage of JPEG domain knowledge in the network design.
On the policy network's side, the domain-transition paradigm can capture not only
the texture level of DCT blocks from the spatial domain but also the correlation
among DCT coefficients, while the different impacts of DCT
frequency-modes can be implicitly learned through the interactions
with the environment network.
On the environment network's side, the preprocessing layer with $8\times8$ DCT bases can provide sufficient frequency resolution and is capable of propagating useful gradients for modification actions on different DCT modes. }
\subsection{Domain-transition Paradigm Based Policy Network}
\label{sec:policy network}
\begin{figure*}[t!]
\centering
\includegraphics[width=1.0\textwidth]{Images//ModeWise.pdf}
\caption{
Illustration of the DCT feature extraction and mode-wise rearrangement module.
Each color in the feature map corresponds to a specific frequency.
}\label{fig:ModeWise}
\vspace{-0.5cm}
\end{figure*}
Although it is straightforward to let the policy network operate in the DCT domain to reduce computational complexity, it is rather difficult to directly apply convolution operations to DCT coefficients to extract effective features, due to the peculiarity of the $8\times8$ DCT mode structure in JPEG images. As a result, in our proposed domain-transition paradigm with three consecutive modules, we first evaluate texture complexity in the spatial domain, then extract DCT features via spatial aggregation and frequency conversion, and finally arrange them according to the DCT mode structure.
\subsubsection{Pixel-level texture complexity evaluation module}
This module evaluates image texture complexity for each pixel in spatial domain.
In general, such a task can be accomplished by a pixel-to-pixel CNN structure.
{Different network architectures can be applied, following the developments of network design in the deep learning area. In this paper, for a fair comparison with SPAR-RL-v2 and JS-GAN, a U-Net-like structure \cite{UNet} is adopted, and its architecture is given in Fig. \ref{fig:fp} in the Appendix.}
{Ahead of the CNN,
a set of inverse DCT (IDCT) basis filters with a stride of $8$
is used to decompress the JPEG
image $\mathbf X^{J}$ into the spatial representation $\mathbf X = (x_{i,j})^{H \times W }$ as the pixel input.}
In order to preserve more information, the elements in $\mathbf X$ are floating-point values without rounding.
This first module outputs a matrix $\mathbf T = (t_{i,j})^{H \times W }$, where $t_{i,j}$ denotes the texture complexity of pixel $x_{i,j}$.
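As a concrete illustration of this stride-8 IDCT decompression, the following NumPy sketch maps each $8\times8$ block of (dequantized) coefficients back to floating-point pixels; the omission of quantization-table handling and the toy input are simplifying assumptions.
\begin{verbatim}
# Minimal sketch of block-wise IDCT decompression without rounding.
import numpy as np

def dct_basis_1d(n=8):
    # Orthonormal 1-D DCT-II basis matrix; rows are indexed by frequency.
    B = np.zeros((n, n))
    for u in range(n):
        w = np.sqrt(1.0 / n) if u == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            B[u, i] = w * np.cos(np.pi * u * (2 * i + 1) / (2 * n))
    return B

def block_idct(XJ):
    H, W = XJ.shape
    B = dct_basis_1d()
    blocks = XJ.reshape(H // 8, 8, W // 8, 8).transpose(0, 2, 1, 3)
    pixels = np.einsum('ui,abuv,vj->abij', B, blocks, B)  # B^T C B per block
    return pixels.transpose(0, 2, 1, 3).reshape(H, W)

XJ = np.random.randn(64, 64)   # toy dequantized coefficients
X = block_idct(XJ)             # floating-point spatial pixels, no rounding
\end{verbatim}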
{In fact, it is possible to use the texture calculation processes of existing JPEG steganographic methods to obtain the texture complexity evaluation.
For example, the image residuals obtained with the three Daubechies wavelet filters in J-UNIWARD can be used as a pixel-level texture complexity matrix $\mathbf T = (t_{i,j,d})^{H \times W \times 3}$.
For another example, an 8-fold nearest-neighbor upsampling can be performed on the block energy used in UERD, given in \eqref{eq:E}, to yield a pixel-level texture complexity matrix $\mathbf T = (t_{i,j})^{H \times W }$.}
{
Applying such a texture calculation process in this module of our proposed cost learning method
can be considered a way of improving existing methods, because the two subsequent modules in the policy network can then be used to obtain better costs.
However, it may be inferior to the proposed cost learning method whose first module is implemented as a pixel-to-pixel CNN, because the texture calculation process itself is not learnable.
}
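For the UERD-style variant mentioned above, the 8-fold nearest-neighbor upsampling amounts to replicating each block energy over its $8\times8$ block, as in this toy sketch (random energies stand in for the quantities of \eqref{eq:E}):
\begin{verbatim}
# Toy sketch: lift block-level energies (H/8 x W/8) to pixel level (H x W).
import numpy as np

E_block = np.random.rand(32, 32)        # stand-in for UERD block energies
T = np.kron(E_block, np.ones((8, 8)))   # 8x nearest-neighbor upsampling
\end{verbatim}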
\subsubsection{DCT feature extraction module}
This module takes the pixel-level texture complexity matrix as input, aggregates the pixel-level texture features in the spatial domain, and converts them into frequency features $\mathbf{F}^{\prime\prime\prime} = ({f}^{\prime\prime\prime}_{i,j,d})^{H/8 \times W/8 \times 64}$, wherein ${f}^{\prime\prime\prime}_{i,j,d}$ is the DCT feature corresponding to the DCT coefficient in the $(i,j)$-th DCT block and the $(\lceil d/8\rceil,(d-1)\%8+1)$-th DCT mode.
To realize this function, the second module can be composed of a stack of three convolutional groups,
as shown in Fig. \ref{fig:sp} and illustrated in detail in Fig. \ref{fig:ModeWise}.
The number of convolutional kernels in each group is set to $64$, which equals the total number of JPEG DCT frequency-modes.
In each convolutional group, the row and column dimensions of the output feature maps are halved relative to the input feature maps by performing convolution with a stride of $2$.
In this way, a 2-D input $\mathbf T =(t_{i,j})^{H \times W}$ can be turned into a 3-D feature map $\mathbf{F^{\prime\prime\prime}} = ({f}^{\prime\prime\prime}_{i,j,d})^{H/8 \times W/8 \times 64}$.
Note that a receptive field of $15\times15$ spatial texture complexity features is aggregated and then converted into a $1\times1\times64$ frequency feature vector. As shown in Fig. \ref{fig:ModeWise},
in all three convolutional groups, the intra-block characteristics can be captured via convolution along the channel direction,
while the inter-block characteristics can be captured via convolution along the column and row directions. The channel dimension of the feature maps output by the last convolutional layer equals the number of DCT frequency-modes, i.e., $64$, and each feature map is associated with a specific DCT frequency-mode. The extracted DCT features are related to coefficient-wise embedding policies.
Hence, to constrain them to be probabilities, we apply a Sigmoid activation function at the end of the last convolutional group so that the values of ${f}^{\prime\prime\prime}_{i,j,d}$ lie in the range $[0,1]$.
It should be noticed that this module performs the transition of features from the spatial domain to the DCT domain, but it differs from a DCT transform.
Firstly, the transition is not designed to be reversible like the DCT, because the goal here is to obtain embedding policies rather than to perform frequency analysis.
Secondly, the receptive field need not be restricted to $8\times8$; it depends on the implemented network structure.
In our implementation, the receptive field is $15\times15$.
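A minimal PyTorch sketch of such a module is given below. Only the stride-2 structure, the 64-channel width, the resulting $15\times15$ receptive field, and the final Sigmoid follow the description above; the kernel sizes, normalization, and activations inside each group are assumptions.
\begin{verbatim}
# Sketch of the DCT feature extraction module: three stride-2 groups turn
# an H x W texture map into H/8 x W/8 x 64 features in [0, 1].
import torch
import torch.nn as nn

def conv_group(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

dct_feature_module = nn.Sequential(
    conv_group(1, 64),
    conv_group(64, 64),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.Sigmoid(),  # constrain the features to [0, 1]
)

T = torch.randn(1, 1, 256, 256)   # pixel-level texture map
F3 = dct_feature_module(T)        # shape (1, 64, 32, 32)
\end{verbatim}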
\subsubsection{Mode-wise rearrangement module}
A mode-wise rearrangement module is performed at the end of the policy network to rearrange the 3-D feature maps $\mathbf{F^{\prime\prime\prime}} = ({f}^{\prime\prime\prime}_{i,j,d})^{H/8 \times W/8 \times 64}$ into a 2-D temporary matrix $\mathbf Q^{J} =(q_{a,b}^{k,l})^{H \times W}$
as
\begin{equation}
{q}_{a,b}^{k,l} = {f}^{\prime\prime\prime}_{a,b,(k-1)\times 8+l}.
\end{equation}
In this way, the $((k-1)\times 8+l)$-th feature map in $\mathbf{F}^{\prime\prime\prime}$ is forced to be linked with the $(k,l)$-th DCT frequency-mode for learning the embedding policy, while the spatial positions of the DCT blocks are preserved.
The final output policy tensor $\bm{\Pi}^{J} = (\bm{\pi}_{a,b}^{k,l}(m))^{H \times W }$ can be obtained by:
\begin{equation}
\begin{aligned}
\bm{\pi}_{a,b}^{k,l}(m=1)=\bm{\pi}_{a,b}^{k,l}(m=-1)
=q_{a,b}^{k,l}/2,
\label{equ:q1}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\bm{\pi}_{a,b}^{k,l}(m=0)=1-q_{a,b}^{k,l},
\label{equ:q2}
\end{aligned}
\end{equation}
where $m \in \mathcal{M} = \{-1, 0, 1\}$ denotes a possible modification action in ternary embedding.
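In NumPy terms, the rearrangement and the policy computation of \eqref{equ:q1} and \eqref{equ:q2} reduce to a reshape/transpose followed by elementwise arithmetic; the toy features below stand in for $\mathbf{F}^{\prime\prime\prime}$.
\begin{verbatim}
# Sketch: place feature channel (k-1)*8+l at mode position (k, l) of each
# 8x8 block, then derive the ternary policy from Q.
import numpy as np

F3 = np.random.rand(32, 32, 64)   # toy (H/8, W/8, 64) features in [0, 1]
H8, W8, _ = F3.shape
Q = F3.reshape(H8, W8, 8, 8).transpose(0, 2, 1, 3).reshape(H8 * 8, W8 * 8)

pi_plus = Q / 2    # probability of a +1 modification
pi_minus = Q / 2   # probability of a -1 modification
pi_zero = 1 - Q    # probability of no modification
\end{verbatim}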
{
Our proposed mode-wise rearrangement is the inverse of the $8\times8$ phase split module
proposed in PNet \cite{PNet}, which splits a feature map into 64 sub-feature maps in order to form individual phase-aware branches for JPEG steganalysis.
The two modules are founded on the same JPEG domain knowledge, but their roles are different.
The phase split module is used for decomposing features, a procedure that can be found in conventional steganalysis or forensics.
By contrast, our proposed mode-wise rearrangement module is used for composing features and, to the best of our knowledge, is introduced here for the first time.}
\subsubsection{Learning procedure}
\label{Sec:Opt}
In the learning phase, the modification actions in ${\mathbf{M}}^{J} = ({m}_{a,b}^{k,l})^{H \times W }$ are sampled in a coefficient-wise manner according to
the policy matrix $\bm{\Pi}^{J} = (\bm{\pi}_{a,b}^{k,l}(m))^{H \times W }$.
Then, the environment network is used to evaluate the contribution of modification actions and returns a reward matrix ${\mathbf{R}}^{J} = ({r}_{a,b}^{k,l})^{H \times W }$, as described in detail in the next sub-section.
{In general, when a larger reward is assigned to ${m}_{a,b}^{k,l}$, maximizing the overall reward yields
a larger probability $\pi_{a,b}^{k,l}(m={m}_{a,b}^{k,l}|\mathbf{X}^{J})$ for the sampled modification
in the next training round, and vice versa.}
The parameters of the policy network can be updated with a loss function ${l_{ A}}$ defined by the weighted summation of a reward loss ${l_{ R}}$ and a capacity loss ${l_{ C}}$ as follows:
\begin{equation}
{l_{ A}} = \alpha \cdot {l_{ R}} + \beta \cdot {l_{ C}},
\label{eq:GenLoss}
\end{equation}
\begin{equation}
\small {l_{ R}} = -\frac{1}{H \times W}\sum_{a=1}^{H/8}\sum_{b=1}^{W/8}\sum_{k=1}^{8}\sum_{l=1}^{8}
r_{a,b}^{k,l} \cdot \log \pi_{a,b}^{k,l}(m={m}_{a,b}^{k,l}|\mathbf{X}^{J}),
\label{eq:reward loss}
\end{equation}
\begin{equation}
\begin{aligned}
\small {l_{ C}} = & \left(-\sum_{a=1}^{H/8}\sum_{b=1}^{W/8}\sum_{k=1}^{8}\sum_{l=1}^{8} \sum_{m\in\mathcal{M}} \right.\\
&\left.\pi_{a,b}^{k,l}(m|\mathbf{X}^{J})\log_{2}\pi_{a,b}^{k,l}(m|\mathbf{X}^{J})-C \right)^{2},
\label{eq:capacity loss}
\end{aligned}
\end{equation}
where $\alpha$ and $\beta$ are the weights, and $C$ is the target capacity.
The derivation of \eqref{eq:reward loss} follows the standard policy-gradient procedure \cite{PolicyGradient}, with details available in \cite{SPARRL};
the capacity loss \eqref{eq:capacity loss} originates from \eqref{equ:capacity}.
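The loss of \eqref{eq:GenLoss}--\eqref{eq:capacity loss} can be transcribed almost directly; the following PyTorch sketch assumes strictly positive probabilities (so the logarithms stay finite) and toy tensors in place of the network outputs, rewards, and sampled actions.
\begin{verbatim}
# Sketch of the policy loss: REINFORCE-style reward term plus a squared
# penalty tying the policy entropy to the target capacity C.
import torch

def policy_loss(pi, actions, rewards, C, alpha=1.0, beta=1e-7):
    # pi: (H, W, 3) probabilities for actions (-1, 0, +1), assumed > 0;
    # actions: (H, W) indices in {0, 1, 2}; rewards: (H, W).
    H, W, _ = pi.shape
    p_taken = pi.reshape(-1, 3)[torch.arange(H * W), actions.reshape(-1)]
    l_R = -(rewards.reshape(-1) * torch.log(p_taken)).sum() / (H * W)
    entropy = -(pi * torch.log2(pi)).sum()   # payload the policy can carry
    l_C = (entropy - C) ** 2
    return alpha * l_R + beta * l_C

pi = torch.softmax(torch.randn(16, 16, 3), dim=-1)
actions = torch.randint(0, 3, (16, 16))
rewards = torch.randn(16, 16)
loss = policy_loss(pi, actions, rewards, C=0.4 * 16 * 16)  # toy capacity
\end{verbatim}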
In the deployment phase, embedding costs can be converted from the policy tensor output by the well-trained policy network as
\begin{equation}
\label{eq:p_to_rho}
\rho_{a,b}^{k,l}=\ln \left(\frac{2}{q_{a,b}^{k,l}}-2 \right).
\end{equation}
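A direct transcription of this conversion is given below; the clipping constant is an implementation detail added to keep the logarithm finite at the boundaries.
\begin{verbatim}
# Deployment-phase conversion from change probabilities q to costs.
import numpy as np

def prob_to_cost(q, eps=1e-12):
    q = np.clip(q, eps, 1.0 - eps)   # guard the logarithm
    return np.log(2.0 / q - 2.0)
\end{verbatim}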
\begin{figure}[t!]
\centering
\subfigure[$8\times8$ DCT basis filters]{\includegraphics[height=2.4cm]{Images//64DCTKernel_s-eps-converted-to.pdf}
\label{fig:64DCT_s}}
\hspace{.05in}
\subfigure[$4\times4$ DCT basis filters ]{\includegraphics[height=2.4cm]{Images//16DCTKernel_s-eps-converted-to.pdf}
\label{fig:16DCT_s}
}
\hspace{.05in}
\subfigure[SRM high-pass filters ]{\includegraphics[height=2.4cm]{Images//30SRMKernel_s-eps-converted-to.pdf}
\label{fig:30SRM_s}
}
\hspace{.05in}
\subfigure[DCT of $8\times8$ DCT basis filters ]{\includegraphics[height=2.4cm]{Images//64DCTKernel_f-eps-converted-to.pdf}
\label{fig:64DCT_f}
}
\hspace{.05in}
\subfigure[DCT of $4\times4$ DCT basis filters ]{\includegraphics[height=2.4cm]{Images//16DCTKernel_f-eps-converted-to.pdf}
\label{fig:16DCT_f}
}
\hspace{.05in}
\subfigure[DCT of SRM high-pass filters]{\includegraphics[height=2.4cm]{Images//30SRMKernel_f-eps-converted-to.pdf}
\label{fig:30SRM_f}
}
{\caption{Illustration of the DCT transform of different filter sets.}
\label{fig:DCTtransform}}
\vspace{-0.6cm}
\end{figure}
\subsection{Gradient-oriented Environment Network }
\label{sec:environment network}
As shown in \eqref{eq:reward loss}, it is important for the environment to provide discriminative rewards in the training process of JEC-RL.
Following \cite{SPARRL}, the coefficient-wise reward value in the reward matrix $\mathbf R^{J} = (r_{a,b}^{k,l})^{H \times W}$ is computed by
\begin{equation}\label{eq:DG function}
\begin{aligned}
r_{a,b}^{k,l} =\xi \cdot \text{sign}({m}_{a,b}^{k,l}) \cdot g_{a,b}^{k,l},
\end{aligned}
\end{equation}
where $\xi$ is a constant,
${m}_{a,b}^{k,l}$ is the sampled modification,
and $g_{a,b}^{k,l}$ is the gradient of the environment network's loss function $l_{ E}$ with respect to modification ${m}_{a,b}^{k,l}$.
The cross-entropy loss function of the environment network is
defined as
\begin{equation}
\begin{aligned}
{l_{ E}} = -z'_{0}\log(z_{0})-z'_{1}\log(z_{1}),
\label{eq:DisLoss}
\end{aligned}
\end{equation}
where $z_{0}$ and $z_{1}$ are the environment network's Softmax outputs for the cover image $\mathbf X^{J}$ and the simulated stego images ${\mathbf Y}^{J}$, respectively, and $z'_{0}$ and $z'_{1}$ are their corresponding ground-truth labels.
The environment network is not used during the deployment phase.
{The reward $r_{a,b}^{k,l}$ is positive when the signs of $m_{a,b}^{k,l}$ and $g_{a,b}^{k,l}$ are the same, indicating that such modification action should be encouraged. Otherwise the modification action is discouraged.}
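The reward computation can be reproduced with automatic differentiation, as in the PyTorch sketch below; the tiny classifier is only a stand-in for the wide XuNet-style environment network, and the toy cover/modification tensors replace the decompressed image and the sampled actions.
\begin{verbatim}
# Sketch of reward assignment: r = xi * sign(m) * dl_E/dm via autograd.
import torch
import torch.nn as nn
import torch.nn.functional as F

env_net = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(4, 2))

X = torch.randn(1, 1, 64, 64)                      # toy decompressed cover
M = torch.randint(-1, 2, (1, 1, 64, 64)).float()   # toy sampled actions
M.requires_grad_(True)
Y = X + M                                          # simulated stego

loss = F.cross_entropy(env_net(Y), torch.tensor([1]))  # stego label
g, = torch.autograd.grad(loss, M)

xi = 1e7
R = xi * torch.sign(M) * g   # positive where sign(m) and gradient agree
\end{verbatim}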
{Please note that in a general RL setting, the agent aims at maximizing the overall reward, and it does not necessarily compete against a steganalyzer.
In JEC-RL, however, the reward is constructed from the environment network's gradients, so maximizing this reward is, to some extent, equivalent to competing against the environment network.}
Compared with a CNN steganalyzer which aims at achieving better detection accuracy, an environment network focuses more on providing informative rewards to improve the performance of the policy network.
For such a purpose, rather than using an off-the-shelf JPEG steganalyzer, we propose a gradient-oriented environment network modified from XuNet \cite{Xu} by utilizing an effective preprocessing layer and a wide network architecture. Its architecture is given in Fig. \ref{fig:e}.
\subsubsection{Preprocessing layer with $8\times8$ DCT basis filters}
To capture embedding traces in both spatial and DCT domain while providing coefficient-wise rewards,
we use a JPEG-tailored preprocessing layer to handle the image.
A bank of $8\times8$ DCT basis filters is used in the preprocessing layer to obtain image residuals,
where each DCT basis filter is denoted as $\mathbf Z^{u,v} = (z_{i,j}^{u,v})^{8 \times 8}$ ($0 \leq i,j,u,v \leq 7$), and its elements are computed as
\begin{equation}
\label{eq:z}
z_{i,j}^{u,v} = \frac{{w}_{u}{w}_{v}}{8}\text{cos}(\frac{\pi u(2i+1)}{16})\text{cos}(\frac{\pi v(2j+1)}{16}),
\end{equation}
where ${w}_{0}=1$, ${w}_{k}=\sqrt{2}$ for $k>0$,
$(u,v)$ denotes the frequency index of the filter,
and $(i,j)$ denotes the position of the element in the matrix.
To this end, in the preprocessing layer, we decompress the JPEG image $\mathbf X^{J}$ into its spatial representation $\mathbf X$ without rounding, and then obtain image residuals with the $8\times8$ DCT basis filters.
Finally, to facilitate effective learning, we limit the range of the image residuals to $[-8,8]$ via ReLU-based truncation.
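The filter bank itself is a direct transcription of the definition above; the following NumPy sketch enumerates all 64 filters with 0-based indices.
\begin{verbatim}
# Generate the 64 8x8 DCT basis filters of the preprocessing layer.
import numpy as np

def dct_filter_bank(n=8):
    w = np.array([1.0] + [np.sqrt(2.0)] * (n - 1))
    i = np.arange(n)
    bank = np.empty((n, n, n, n))   # indexed [u, v, i, j]
    for u in range(n):
        for v in range(n):
            cu = np.cos(np.pi * u * (2 * i + 1) / (2 * n))
            cv = np.cos(np.pi * v * (2 * i + 1) / (2 * n))
            bank[u, v] = (w[u] * w[v] / n) * np.outer(cu, cv)
    return bank

Z = dct_filter_bank()   # Z[u, v] is the (u, v)-th basis filter
\end{verbatim}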
We note that the JPEG CNN steganalyzers J-XuNet and Zeng-CNN obtain their best detection performance with non-learnable preprocessing layers implemented with $4\times4$ and $5\times5$ DCT basis filters, respectively.
SRNet abandons a fixed preprocessing layer and utilizes multiple stacked learnable convolutional layers to capture image residuals.
As for spatial-domain CNN steganalyzers, XuNet utilizes a preprocessing layer equipped with SRM high-pass filters.
Although these preprocessing layers have achieved satisfactory performance in detection tasks, the case of JPEG cost learning is quite different.
Since modification actions can take place in all 64 DCT modes,
the preprocessing layer in the environment network should provide sufficient frequency resolution to cover all 64 frequency bands and provide discriminative rewards for all DCT modes as well.
{We apply DCT transform to $8\times8$ DCT basis filters, $4\times4$ DCT basis filters, and 30 SRM high-pass filters, respectively, to investigate their frequency responses.
Filters are zero-padded to $8\times8$ if necessary.
The results are shown in Fig. \ref{fig:DCTtransform}.
There is no doubt that each filter in the $8\times8$ DCT basis filter set responds to a specific DCT frequency.
For the other two filter sets, the frequency responses mainly concentrate on particular frequency bands and fail to fully cover all 64 bands.
In fact, covering a wide frequency range in the environment network's preprocessing layer is beneficial for propagating effective gradients to modification actions on different DCT modes.
We will give experimental evidences in Section \ref{sec:investigation}.}
\subsubsection{Learnable layers with wide network structure}
As for the learnable layers of the gradient-oriented environment network, the 5-layer XuNet is used as the backbone.
To further improve the network capacity, we can increase the network width or the network depth.
Note that although a CNN steganalyzer with a deeper structure, such as the 22-layer J-XuNet or the 12-layer SRNet, can achieve better steganalytic performance, the case of the environment network is different.
The environment network aims at providing more informative gradients, and a deep architecture would be inefficient and unstable in propagating them to DCT coefficients.
Therefore, we use the 5-layer XuNet as the backbone and expand the width of each layer.
{Specifically, the width of the first to the fifth layer is expanded from 8, 16, 32, 64, 128 to 48, 48, 64, 128, 256, respectively.}
At the end of the network, there is a fully-connected layer followed by a Softmax function.
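A PyTorch sketch of such a width-expanded backbone follows; only the five layer widths (48, 48, 64, 128, 256) and the final fully-connected classifier come from the text, while the kernel sizes, pooling, and the 64 input channels (one per DCT basis filter of the preprocessing layer) are simplifying assumptions.
\begin{verbatim}
# Sketch of the wide 5-layer backbone of the environment network.
import torch.nn as nn

def layer(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(),
                         nn.AvgPool2d(2))

backbone = nn.Sequential(
    layer(64, 48), layer(48, 48), layer(48, 64),
    layer(64, 128), layer(128, 256),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(256, 2),   # followed by Softmax in the loss
)
\end{verbatim}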
\section{Experiments}
\label{experiment}
In order to evaluate the performance of the proposed JEC-RL,
extensive experiments were carried out, including evaluating the security performance against state-of-the-art steganalyzers (given in Section \ref{sec:performance}),
conducting ablation studies on network architecture design (given in Section \ref{sec:ablation}),
incorporating the texture calculation processes of traditional methods into the design paradigm (given in Section \ref{sec:refine}),
investigating the effectiveness of the $8\times8$ DCT basis filters in environment network (given in Section \ref{sec:investigation}), and
visualizing the embedding pattern (given in Section \ref{sec:visulazation}).
The experimental settings are described in Section \ref{sec:setting}.
\begin{table*}[t!]
\renewcommand\arraystretch{1}
\scriptsize
{
\caption{$P_{\text E}$ of steganographic methods against different steganalyzers under the setting of JPEG quality factor 75.}
\label{tab:basic500k}}
\centering
\begin{tabular}{cclllll}
\toprule
\textbf{Steganalyzer} &
\textbf{Steganographic method}&
\textbf{0.1 bpnzAC} & \textbf{0.2 bpnzAC} & \textbf{0.3 bpnzAC} & \textbf{0.4 bpnzAC} & \textbf{0.5 bpnzAC}
\\\midrule
\multirow{4}{*}{PHARM}
&J-UNIWARD & 46.44\% & 39.98\% & 32.42\% & 24.46\% & 17.54\%\\
\cdashline{2-7}
&J-MSUNIWARD & 46.90\%& 40.86\% & 33.44\%& 26.15\% & 19.19\%\\
&MSUERD\_SPA & 46.45\%& 40.69\% & 34.52\%& 27.85\% & 21.92\%\\\cdashline{2-7}
&\textbf{JEC-RL} & \textbf{47.36\%} & \textbf{43.19\%} & \textbf{38.13\%} & \textbf{32.24\%} & \textbf{26.12\%}\\\midrule
\multirow{4}{*}{GFR}
&J-UNIWARD & 45.38\% & 37.63\% & 29.56\% & 21.65\% & 15.01\%\\
\cdashline{2-7}
&J-MSUNIWARD & 45.52\%& 38.34\% & 30.65\%& 22.99\% & 16.20\%\\
&MSUERD\_SPA & 45.84\%& 39.64\% & 32.99\%& 26.26\% & 19.59\%\\\cdashline{2-7}
&\textbf{JEC-RL} & \textbf{46.75\%} & \textbf{41.20\%} & \textbf{35.85\%} & \textbf{29.10\%} & \textbf{22.77\%}\\
\midrule
\multirow{4}{*}{SCA-GFR}
&J-UNIWARD & 42.39\% & 33.61\% & 25.35\% & 18.29\% & 12.86\%\\
\cdashline{2-7}
&J-MSUNIWARD & 42.20\%& 33.69\% & 25.73\%& 18.70\% & 13.23\%\\
&MSUERD\_SPA & 39.54\%& 31.49\% & 23.92\%& 18.27\% & 13.71\%\\\cdashline{2-7}
&\textbf{JEC-RL} & \textbf{44.99\%} & \textbf{39.16\%} & \textbf{32.94\%} & \textbf{26.65\%} & \textbf{21.13\%}\\
\midrule
\multirow{4}{*}{J-XuNet}
&J-UNIWARD & 40.82\% & 28.03\% & 19.67\% & 12.37\% & 7.59\%\\
\cdashline{2-7}
&J-MSUNIWARD & 40.68\%& 28.64\% & 20.11\%& 13.01\% & 8.09\%\\
&MSUERD\_SPA & 29.80\%& 18.82\% & 11.93\%& 8.08\% & 4.46\%\\\cdashline{2-7}
&\textbf{JEC-RL} & \textbf{45.94\%} & \textbf{33.93\%} & \textbf{24.70\%} & \textbf{17.22\%} & \textbf{12.78\%}\\
\midrule
\multirow{4}{*}{SRNet}
&J-UNIWARD & 31.86\% & 18.68\% & 10.74\% & 6.90\% & 3.71\%\\
\cdashline{2-7}
&J-MSUNIWARD & 32.10\%& 18.40\% & 11.21\%& 6.98\% & 3.91\%\\
&MSUERD\_SPA & 22.53\%& 11.62\% & 6.54\%& 3.93\% & 2.24\%\\\cdashline{2-7}
&\textbf{JEC-RL} & \textbf{39.36\%} & \textbf{24.29\%} & \textbf{16.08\%} & \textbf{9.89\%} & \textbf{6.55\%}\\
\bottomrule
\end{tabular}
\vspace{-0.6cm}
\end{table*}
\subsection{Settings}
\label{sec:setting}
\subsubsection{Steganographic methods}
\label{sec:steganographic methods}
Four steganographic methods {that can generate embedding costs from scratch}
were tested in our experiments, including J-UNIWARD (JPEG-Universal Wavelet Relative Distortion)\cite{UNIWARD}, J-MSUNIWARD (Microscope JPEG-Universal Wavelet Relative Distortion)\cite{MS}, MSUERD\_SPA (Microscope Uniform Embedding Distortion Revisited Filtering in Spatial Domain)\cite{MS}, and our proposed JEC-RL.
Being an effective traditional method, J-UNIWARD can be regarded as a kind of baseline, and J-MSUNIWARD is an improved version of J-UNIWARD, while MSUERD\_SPA is an improved version of UERD.
{Those methods that depend on pre-defined embedding costs, such as non-additive distortion methods and AE-based methods, were not included for comparison.
Since JS-GAN \cite{JSGAN} does not outperform J-UNIWARD, we did not include it neither.}
The payload is measured by bits per non-zero AC coefficient (bpnzAC) as in \cite{UNIWARD,UERD,MS}.
The optimal embedding simulator \cite{simulator} was employed for generating stego images.
The settings of JEC-RL are as follows.
The batch size $N$ was set as 24. The weighted parameters in \eqref{eq:GenLoss} were set as $\alpha=1$ and $\beta=10^{-7}$, respectively. The reward magnitude was set as $\xi=10^{7}$.
For all batch normalization layers, the momentum for the moving average was set as 0.999.
The Adam optimizer was used for optimization, wherein the learning rate was initialized as 0.0001 and it was decayed to $10\%$ in every 30,000 iterations.
The models at the 90,000-th training iteration under different payload rates were respectively used to calculate the embedding costs.
\subsubsection{Steganalyzers}
\label{sec:steganalyzer}
Five different steganalyzers were used to evaluate the security performance of the steganographic methods, including two feature-based methods, i.e., GFR (Gabor Filters Residual)\cite{GFR} and PHARM (PHase-Aware pRojection Model) \cite{PHARM}, one feature-based method utilizing selection channel information, i.e., SCA-GFR (Selection Channel Aware-Gabor Filters Residual)\cite{SCA}, and two CNN-based methods, i.e., J-XuNet \cite{XuJPEG} and SRNet \cite{SRNet}.
Security performance is evaluated as the minimum average detection error rate $P_{\text E}$ on the testing set, computed from the false alarm rate $P_{\text{FA}}$ and the missed detection rate $P_{\text{MD}}$:
\begin{equation}\label{eq:PE}
P_{\text E} = \min \limits_{P_{\text{FA}}} \frac{1}{2}(P_{\text{FA}} + P_{\text{MD}}(P_{\text{FA}})).
\end{equation}
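Operationally, $P_{\text E}$ can be computed by sweeping a decision threshold over the detector's scores, as in the NumPy sketch below (scores are assumed to be oriented so that larger values indicate stego).
\begin{verbatim}
# Sketch: minimum average error over all thresholds on detector scores.
import numpy as np

def p_error(cover_scores, stego_scores):
    thresholds = np.sort(np.concatenate([cover_scores, stego_scores]))
    best = 1.0
    for t in thresholds:
        p_fa = np.mean(cover_scores >= t)   # covers flagged as stego
        p_md = np.mean(stego_scores < t)    # stego images missed
        best = min(best, 0.5 * (p_fa + p_md))
    return best
\end{verbatim}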
\subsubsection{Image Set}
Three image sets were used in experiments.
\begin{itemize}
\item $\textit{Basic500k}$ ($256\times256$): it consists of 500,000 images, which were obtained by randomly selecting images with size larger than 256$\times$256 from ImageNet \cite{ImageNet} and then cropping their top-left 256$\times$256 regions. The images were further converted to grayscale and compressed with JPEG quality factor 75. This image set has been used in \cite{zeng} and \cite{ADVEMB}.
\item $\textit{BOSSBase}$ ($256\times256$): it consists of 10,000 images, taken from $\textit{BOSSBase}$ v1.01 image set \cite{BOSS} and down-sampled from $512\times512$ to $256 \times 256$ using the ``imresize'' Matlab function with \textit{Bicubic} Kernel. Then, the images were compressed into JPEG format.
\item $\textit{BOWS2}$ ($256\times256$): it consists of 10,000 images. The original $512\times512$ images \cite{BOW} were resized to $256\times256$ via the ``imresize'' Matlab function with \textit{Bicubic} Kernel. Then, the images were compressed into JPEG format.
\end{itemize}
In the experiments, $\textit{Basic500k}$ was always used to train JEC-RL.
$\textit{BOSSBase}$ and $\textit{BOWS2}$ were used to train the steganalyzers and to evaluate the steganographic security performance.
Considering that different methods may require different amounts of data for training, we treated feature-based and CNN-based methods differently.
Specifically, for feature-based methods such as GFR \cite{GFR}, PHARM \cite{PHARM}, and SCA-GFR \cite{SCA}, the images from $\textit{BOSSBase}$ were randomly split into a training set and a testing set with a proportion of $1:1$.
For CNN-based methods such as J-XuNet \cite{XuJPEG} and SRNet \cite{SRNet}, the images from $\textit{BOSSBase}$ were randomly split into a training set, validation set and a testing set with a proportion of $4:1:5$, and all images from $\textit{BOWS2}$ were included into the training set.
Note that the testing set was the same for feature-based steganalyzers and CNN-based steganalyzers.
\subsection{Security Performance Against Steganalysis}
\label{sec:performance}
In this part, we compare the security performance of different steganographic methods.
The experimental results under state-of-the-art steganalyzers
are shown in Table \ref{tab:basic500k}.
The following observations can be made.
\begin{itemize}
\item Compared with the baseline J-UNIWARD, JEC-RL greatly improves the security performance. For example, at 0.5 bpnzAC, the improvement is $8.58\%$, $7.76\%$, $8.27\%$, $5.19\%$, and $2.84\%$ against PHARM, GFR, SCA-GFR, J-XuNet, and SRNet, respectively.
\item Against the feature-based steganalyzers without selection channel information, MSUERD\_SPA performs the best among the three conventional steganographic methods. In this case, the improvement of JEC-RL over MSUERD\_SPA against GFR is $0.91\%$, $1.56\%$, $2.86\%$, $2.84\%$, and $3.18\%$ at 0.1 to 0.5 bpnzAC, respectively.
\item When facing SCA-GFR and the CNN-based steganalyzers, J-MSUNIWARD is the best-performing conventional method. In such detection scenarios, JEC-RL also achieves better security performance. For example, at 0.5 bpnzAC, the improvement is $7.90\%$, $4.69\%$, and $2.64\%$ against SCA-GFR, J-XuNet, and SRNet, respectively.
\item Even at a low payload such as 0.1 bpnzAC, JEC-RL obtains a significant improvement. Compared with J-UNIWARD, the improvement against GFR and PHARM is around $1\%$, while the improvement against the more advanced CNN-based steganalyzers, i.e., J-XuNet and SRNet, is even more significant: more than $5\%$ and $7\%$, respectively.
\end{itemize}
It can be concluded that although the best conventional method varies across steganalyzers,
our proposed JEC-RL always outperforms the best one.
\begin{figure}[t!]
\center
\includegraphics[width=0.44\textwidth]{Images//ablation-eps-converted-to.pdf}
\label{fig:stable1}
\caption{Security performance of JEC-RL and its variants against GFR steganalyzer on 0.4 bpnzAC.}
\label{fig:ablation}
\vspace{-0.6cm}
\end{figure}
\subsection{Ablation Studies on Network Architecture}
\label{sec:ablation}
In this part, we conduct ablation studies on the proposed network architecture. Specifically, we take JEC-RL as the baseline and investigate the impact of the first module of the policy network (Variant Type I), different kinds of preprocessing layers in the environment network (Variant Types II, III, and IV), and the impact of the depth and width of the environment network (Variant Types V and VI).
\subsubsection{Variant Type I (learning block-level texture complexity
in the first module of policy network)}
The first module of the policy network applied in JEC-RL evaluates the pixel-level texture complexity.
In Variant Type I, the first module is
used to learn block-level texture complexity instead of pixel-level texture complexity.
To this end, for an input image with size $H \times W$,
the last layer of the first module outputs $H/8\times W/8 \times 64$ feature maps, rather than the $H\times W \times 1$ feature map in the original JEC-RL.
The second module takes such $H/8\times W/8 \times 64$ feature maps as input, and utilizes three convolutional groups to obtain $\mathbf{F^{\prime\prime\prime}} = ({f}^{\prime\prime\prime}_{i,j,d})^{H/8 \times W/8 \times 64}$, wherein all convolutional kernels are of size $3\times3\times64\times64$ and all convolutional strides are equal to $1$.
\subsubsection{Variant Type II (utilizing $4\times4$ DCT basis filters in the preprocessing layer of environment network)}
In Variant Type II, the $8\times8$ DCT basis filters were replaced with $4\times4$ DCT basis filters, which were applied in J-XuNet.
\subsubsection{Variant Type III (utilizing SRM filters in the preprocessing layer of environment network)}
In Variant Type III, the $8\times8$ DCT basis filters were replaced with 30 high-pass filters from SRM \cite{SRM}, which were adopted in SPAR-RL-v2.
\subsubsection{Variant Type IV (utilizing learnable filters in the preprocessing layer of environment network)}
In Variant Type IV, the $8\times8$ DCT basis filters were replaced with learnable convolutional filters, which were adopted in SRNet.
\subsubsection{Variant Type V (increasing the depth of environment network)}
Considering that a deeper network architecture is generally preferable for better steganalytic detection performance, in Variant Type V the learnable part of the environment network (6 layers) was replaced with the 22-layer structure of J-XuNet.
\subsubsection{Variant Type VI (decreasing the width of environment network)}
In JEC-RL, the learnable part of the environment network is a width-expanded version of that in XuNet. In Variant Type VI, the original setting of network width in XuNet was adopted.
The experimental results on 0.4 bpnzAC with a GFR
steganalyzer are shown in Fig. \ref{fig:ablation}.
It can be observed that JEC-RL outperforms the conventional method, the previous cost learning method, and all of its variants.
The following conclusions can be made.
\begin{itemize}
\item {It can be observed that the proposed JEC-RL and all of its variants outperform J-UNIWARD and A-SPAR-RL-v2, verifying the effectiveness of the domain-transition paradigm based policy network and the gradient-oriented environment network.}
\item To learn more effective embedding policies, learning fine-grained pixel-level texture complexity features is more effective than learning block-level features in the first part of the policy network.
\item The $8\times8$ DCT basis filters may be the most suitable choice for the environment network's preprocessing layer.
The reason is that DCT basis filters have a preference in propagating gradients to modifications on the DCT modes that are close to their own frequencies, so the full $8\times8$ set covers all 64 frequency-modes.
By contrast, the $4\times4$ DCT basis filters are coarse-grained and cannot cover the whole frequency band of the 64 DCT frequency-modes, and the spatial SRM filters or learnable filters cannot perfectly match the 64 JPEG DCT frequency-modes.
More detailed analysis of the $8\times8$ DCT basis filters is given in Section \ref{sec:investigation}.
\item To improve the environment network's capability, increasing the network width is more effective than increasing the network depth. Although a deeper CNN can usually obtain better detection performance, it may lead to inefficient gradient propagation.
Therefore, increasing the depth of the environment network immoderately may degrade its performance in reward assignment for obtaining optimal embedding policies.
\end{itemize}
\subsection{Incorporating the Texture Calculation Processes of Traditional Steganographic Methods into the Texture Evaluation Module}
\label{sec:refine}
{In this part, we try to incorporate the texture calculation processes of traditional methods into the first module of the policy network, and then continue to utilize the two remaining modules for cost learning.
We take J-UNIWARD and MSUERD\_SPA as examples, and abbreviate the resulting schemes as
JEC-RL(J-UNI) and JEC-RL(MSU).}
In JEC-RL(J-UNI), the first module directly outputs the pixel-level texture complexity matrix $\mathbf T = (t_{i,j,d})^{H \times W\times3 }$, wherein each channel is obtained by convolving the decompressed JPEG image with one of the three Daubechies wavelet filters, as done in J-UNIWARD.
In JEC-RL(MSU), the first module first obtains the
block-level texture $\mathbf T^{\prime} = (t_{a,b}^{\prime})^{H/8 \times W/8 }$ as the weighted sum of block energies, i.e., $t_{a,b}^{\prime} = E_{a,b}+0.25\cdot \sum_{\hat{E}\in \mathbb{\hat{E}}_{a,b}}\hat{E}$, as done in both MSUERD\_SPA and UERD, see \eqref{eq:E}. Then, the block-level texture matrix is nearest-neighbor upsampled to the pixel-level texture $\mathbf T = (t_{i,j})^{H \times W}$ by a three-layer learnable neural network.
{Since these texture calculation processes contain no or few learnable parameters compared with a pixel-to-pixel CNN structure, a deeper network with six convolutional groups is used in the second module to facilitate effective learning, where the odd groups have a convolution stride of $1\times1$ and the even groups have a convolution stride of $2\times2$.}
The experimental results are shown in Table \ref{tab:SPARRLJR}.
The following observations can be made.
\begin{enumerate}
\item JEC-RL(J-UNI) outperforms J-UNIWARD under all circumstances. For example, the improvement at 0.3 bpnzAC is $4.33\%$, $7.96\%$, and $4.29\%$ against GFR, SCA-GFR, and J-XuNet, respectively.
\item MSUERD\_SPA outperforms J-UNIWARD against GFR, but is comparable or inferior to J-UNIWARD against SCA-GFR and J-XuNet.
JEC-RL(MSU) makes up for this deficiency and outperforms MSUERD\_SPA against SCA-GFR and J-XuNet by $5.89\%$ and $3.69\%$ at 0.3 bpnzAC, respectively.
\item
JEC-RL(J-UNI) outperforms JEC-RL(MSU).
The reason is that the pixel-level texture can be directly obtained in J-UNIWARD, while in MSUERD\_SPA it is upsampled from the block-level texture, which may lose some important fine-grained information.
\end{enumerate}
It can be concluded that JEC-RL(J-UNI)/JEC-RL(MSU) make better use of the same texture information, implying that the DCT feature extraction module can effectively capture the inter-block and intra-block characteristics. The performance of the vanilla JEC-RL is superior to that of JEC-RL(J-UNI)/JEC-RL(MSU), indicating that the design paradigm maximizes its learning ability via the end-to-end optimization of its modules.
\begin{table}[t!]
\renewcommand\arraystretch{1.1}
{\caption{
$P_{\text E}$ of steganographic methods with JPEG quality factor 75. The numbers in parentheses show the performance difference between JEC-RL(J-UNI)/JEC-RL(MSU) and J-UNIWARD/MSUERD\_SPA.}
\label{tab:SPARRLJR}}
\scriptsize
\centering
\begin{tabular}{ccll}
\toprule
\textbf{Stegan-} &
\textbf{Steganographic}&
\multirow{2}{*}{\textbf{0.3 bpnzAC}} & \multirow{2}{*}{\textbf{0.5 bpnzAC}}\\\textbf{alyzer}& \textbf{method}& &
\\\midrule
\multirow{5}{*}{GFR}
&J-UNIWARD & 29.56\% & 15.01\% \\
&\textbf{JEC-RL(J-UNI)} & 33.89\%(\textbf{$\uparrow$4.33\%}) & 19.35\%(\textbf{$\uparrow$4.34\%})\\\cdashline{2-4}
\cdashline{2-4}
&MSUERD\_SPA & 32.99\%& 19.59\% \\
&\textbf{JEC-RL(MSU)} &31.49\%({$\downarrow$1.50\%}) & 17.09\%({$\downarrow$2.50\%})\\\cdashline{2-4}
&JEC-RL & 35.85\% & 22.77\% \\\cline{1-4}
&J-UNIWARD & 25.35\% & 12.86\% \\
SCA-&\textbf{JEC-RL(J-UNI)} & 33.31\%(\textbf{$\uparrow$7.96\%}) & 20.25 \%(\textbf{$\uparrow$7.39\%})\\\cdashline{2-4}
\cdashline{2-4}
GFR&MSUERD\_SPA & 23.92\%& 13.72\% \\
&\textbf{JEC-RL(MSU)} &29.81\%(\textbf{$\uparrow$5.89\%}) & 16.17 \%(\textbf{$\uparrow$2.45\%})\\\cdashline{2-4}
&JEC-RL & 32.94\% & 21.13\% \\\cline{1-4}
\multirow{4}{*}{J-XuNet}&J-UNIWARD & 19.67\% & 7.59\% \\
&\textbf{JEC-RL(J-UNI)} & 23.96\%(\textbf{$\uparrow$4.29\%}) & 10.49\%(\textbf{$\uparrow$2.90\%})\\
\cdashline{2-4}
&MSUERD\_SPA & 11.93\%& 4.46\% \\
&\textbf{JEC-RL(MSU)} & 15.62\%(\textbf{$\uparrow$3.69\%}) & 6.21\%(\textbf{$\uparrow$1.75\%})\\
\cdashline{2-4}
&JEC-RL & 24.70\% & 12.78\% \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[t!]
\centering
\subfigure[$8\times8$ DCT basis filters.]
{\includegraphics[height=3.2cm]{Images//64DCT_g_crop.pdf}
\label{fig:64DCT_g}}
\hspace{.1in}
\subfigure[$4\times4$ DCT basis filters.]
{\includegraphics[height=3.2cm]{Images//16DCT_g_crop.pdf}
\label{fig:16DCT_g}}
\hspace{.1in}
\subfigure[SRM high-pass filters.]
{\includegraphics[height=3.2cm]{Images//30SRM_g_crop.pdf}
\label{fig:30SRM_g}}
\hspace{.1in}
\subfigure[top-$n$-statistics.]{\includegraphics[height=3.2cm]{Images//fig111_crop.pdf}
\label{fig:select}}
\caption{The averaged accumulated gradient
component matrices ${\mathbf{E}}^{u,v} = ({e}^{u,v}_{k,l})^{8 \times 8}$ $(1 \le k,l \le 8)$ over 1,000 images from $\textit{BOSSBase}$ for three filter sets ($8\times8$ DCT basis filters (a), $4\times4$ DCT basis filters (b), and 30 SRM high-pass filters (c)) and the curves of the top-$n$-statistics $s_n$ (d). The elements within the same matrix are normalized to $[0,1]$,
and the red hot color denotes large value while the blue cold color denotes small value in (a), (b) and (c).
}\label{fig:8x8}
\end{figure*}
\subsection{Investigation on the Role of Filters for Gradient Propagation}
\label{sec:investigation}
In this part, we further investigate the role of the filters in the preprocessing layer of JEC-RL
for propagating gradients in reward assignment.
In the backpropagation process, the gradients of the environment network's loss function are propagated through the filters in the preprocessing layer to the modifications.
We intend to investigate
how the gradients are propagated towards each DCT mode through each filter in a
filter set. We compare three filter sets, i.e., $8\times8$ DCT basis filters, $4\times4$ DCT basis filters, and 30 SRM high-pass filters.
Denote a filter by $\mathbf Z^{u,v}$, where $(u,v)$ indexes the filters within a set
($1 \le u,v\le 8$ for the $8\times8$ DCT basis filters,
$1 \le u,v\le 4$ for the $4\times4$ DCT basis filters, and
$1 \le u\le 5$, $1 \le v\le 6$ for the SRM filters).
For the DCT basis filters, $(u,v)$ also indicates the corresponding frequency.
The feature map obtained by processing an input image $\mathbf{X}^{J}$ with the filter $\mathbf Z^{u,v}$ is denoted as $\mathbf S^{u,v} = (s_{i,j}^{u,v})^{H \times W}$.
The modification map is denoted as $\mathbf{M}^{J} = (m_{a,b}^{k,l})^{H \times W }$, where $(k,l)$ is the position of the DCT frequency-mode and $(a,b)$ is the position of the DCT block.
The gradient we investigate is $\frac{\partial{s_{i,j}^{u,v}}}{\partial{m_{a,b}^{k,l}}}$, since it can be regarded as the feedback route from the image residuals to the modifications.
We form an \textit{accumulated gradient component matrix} ${\mathbf{E}}^{u,v} = ({e}^{u,v}_{k,l})^{8 \times 8}$ ($1 \le k,l \le 8$)
for the $(u,v)$-th filter.
The $(k,l)$-th element in the matrix is obtained by
\begin{equation}
\label{eq:gradient}
\begin{aligned}
e^{u,v}_{k,l} = \sum_{a=1}^{H/8}\sum_{b=1}^{W/8} \Big( \sum_{i=1}^{H}\sum_{j=1}^{W} \left| \frac{\partial{s_{i,j}^{u,v}}}{\partial{m_{a,b}^{k,l}}} \right | \Big).
\end{aligned}
\end{equation}
In \eqref{eq:gradient}, the magnitudes of the gradient components are first summed over all image residuals, and then over all DCT blocks for the $(k,l)$-th DCT mode.
It evaluates the overall effect of the gradients propagated
from all residuals towards the $(k,l)$-th DCT mode by the $(u,v)$-th filter.
The larger an element of the matrix, the larger the accumulated gradient components propagated back to the corresponding
DCT mode through that filter.
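Because the decompression and the preprocessing filtering are linear, the gradient components for a single filter can be sketched without a network: a unit modification at mode $(k,l)$ produces the corresponding IDCT basis pattern in the pixel domain, and its full cross-correlation with the filter collects all the nonzero $\partial s_{i,j}^{u,v}/\partial m_{a,b}^{k,l}$. The NumPy/SciPy sketch below ignores block borders and uses a toy filter; it is an illustrative approximation of \eqref{eq:gradient}, not the exact training-time computation.
\begin{verbatim}
# Sketch: accumulated gradient components of one filter over the 64 modes.
import numpy as np
from scipy.signal import correlate2d

n = 8
B = np.array([[(np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n))
               * np.cos(np.pi * u * (2 * i + 1) / (2 * n))
               for i in range(n)] for u in range(n)])   # 1-D DCT basis

def accumulated_gradients(filt):
    E = np.zeros((n, n))
    for k in range(n):
        for l in range(n):
            C = np.zeros((n, n)); C[k, l] = 1.0
            pattern = B.T @ C @ B                   # IDCT of a unit mode
            resp = correlate2d(pattern, filt, mode='full')
            E[k, l] = np.abs(resp).sum()            # sum over all residuals
    return E

E = accumulated_gradients(np.ones((3, 3)) / 9.0)     # toy averaging filter
\end{verbatim}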
{We calculated the averaged value of ${\mathbf{E}}^{u,v}$ over 1,000 images from $\textit{BOSSBase}$ and
normalized the elements in each ${\mathbf{E}}^{u,v}$ to [0,1].
The results for the three filter sets can be visualized in Fig. \ref{fig:8x8}(a) to \ref{fig:8x8}(c).
It can be observed that the filters in a given filter set
may have a preference in propagating gradients over different DCT modes.
Interestingly, the $8\times8$ and $4\times4$ DCT basis filters
prefer to serve the DCT modes that are close to their own frequencies,
while most of the SRM filters show a preference for the DCT modes in the horizontal or vertical direction.
To further analyze how well each DCT mode can receive large accumulated gradients, we performed the following statistical analysis.
}
\begin{enumerate}
\item For each ${\mathbf{E}}^{u,v} = ({e}^{u,v}_{k,l})^{8 \times 8}$, sort its 64 elements ${e}^{u,v}_{k,l}$ in descending order, and denote the resulting rank of ${e}^{u,v}_{k,l}$ as ${o}^{u,v}_{k,l} \in \{1, 2, \cdots, 64\}$.
The larger the element, the smaller its rank.
\item
In a given filter set with $U \times V$ filters\footnote{We have $U=V=8$ for the $8 \times 8$ DCT basis filters, $U=V=4$ for the $4 \times 4$ DCT basis filters, and $U=5$, $V=6$ for the SRM filters.}, count how many times a DCT mode $(k,l)$ $(1 \le k,l \le 8)$ ranks among the top-$n$, called
the top-$n$-rate, as
\begin{equation}
\begin{aligned}
r_{k,l,n} = \sum_{u=1}^{U}\sum_{v=1}^{V} \delta({o}^{u,v}_{k,l} \leq n) \quad (1 \le k,l \le 8).
\end{aligned}
\end{equation}
where $\delta(\cdot)$ denotes the indicator function.
\item Then, for a given $n$,
count the number of DCT modes that attain a non-zero top-$n$-rate, called the top-$n$-statistics (a sketch computing both statistics follows this list), as
\begin{equation}
\begin{aligned}
s_n = \sum_{k=1}^{8}\sum_{l=1}^{8}\delta(r_{k,l,n}>0).
\end{aligned}
\end{equation}
\end{enumerate}
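The two statistics can be computed in a few lines, as in the NumPy sketch referenced in the list above; the stacked matrices are toy random stand-ins for the measured $\mathbf{E}^{u,v}$.
\begin{verbatim}
# Sketch of the top-n-rate and top-n-statistics for one filter set.
import numpy as np

def top_n_statistics(E_all, n):
    # E_all: (num_filters, 8, 8) accumulated gradient component matrices.
    flat = E_all.reshape(len(E_all), -1)
    # rank 1 = largest element within each filter's matrix
    order = np.argsort(np.argsort(-flat, axis=1), axis=1) + 1
    r = (order <= n).sum(axis=0).reshape(8, 8)   # top-n-rate per DCT mode
    return int((r > 0).sum())                    # modes with non-zero rate

E_all = np.random.rand(64, 8, 8)                 # toy 8x8 DCT filter set
print(top_n_statistics(E_all, n=4))
\end{verbatim}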
Fig. \ref{fig:select} shows the curves of $s_n$ ($0 \le n \le 64$) for the three filter sets.
It can be observed that,
for a given $n$, the $8\times8$ DCT basis filter set always attains a top-$n$-statistics no less than those of the other two filter sets,
{indicating that the filters in this set are more likely to share the responsibility of propagating gradients to each of the 64 DCT modes.}
\begin{figure}[t!]
\centering
\subfigure[Cover image]{\includegraphics[width=2.4cm]{Images//cover13-eps-converted-to.pdf}
\label{fig:cover}}
\subfigure[MP of J-MSUNIWARD]
{\includegraphics[width=2.4cm]{Images//modification_jmsuniward_crop2-eps-converted-to.pdf}
\label{fig:m_JMSUNIWARD}
}
\subfigure[MP of MSUERD\_SPA]
{\includegraphics[width=2.4cm]{Images//modification_msuerdspa_crop2-eps-converted-to.pdf}
\label{fig:m_MSUERDSPA}
}
\hspace{.01in}
\subfigure[MP of A-SPAR-RL-v2]
{\includegraphics[width=2.4cm]{Images//modification_SPARRLv2_crop2-eps-converted-to.pdf}
\label{fig:m_ASPARRLv2}
}
\hspace{.01in}
\subfigure[MP of JEC-RL]{\includegraphics[width=2.4cm]{Images//modification13-eps-converted-to.pdf}
\label{fig:m_SPARRLJ}
}
\subfigure[ECP of JEC-RL]{\includegraphics[width=2.4cm]{Images//probability13-eps-converted-to.pdf}
\label{fig:prob}
}
{\caption{Illustration of the cover image ``01013.jpg" from $\textit{BOSSBase}$, the modification map (MP) of different steganographic methods, and the embedding change probabilities (ECP) of JEC-RL on 0.4 bpnzAC.}
\label{fig:modification}}
\end{figure}
\subsection{Analyzing the Embedding Patterns}
\label{sec:visulazation}
The modification map and the embedding change probabilities of a cover image are visualized in Fig. \ref{fig:modification}.
To better illustrate the modification map, we select some regions from a sample image.
Box 1 contains the regions with high texture complexity.
As for Box 2, those outside Box 3 are texture regions, while those inside Box 3 are smooth regions.
It can be observed that, compared with the state-of-the-art traditional methods J-MSUNIWARD and MSUERD\_SPA, and with A-SPAR-RL-v2 adapted from the spatial domain,
the embedding pattern of JEC-RL is more concentrated in texture regions such as those inside Box 1, and avoids spreading into smooth regions such as those inside Box 3.
It can also be observed that the embedding change probabilities obtained by JEC-RL show obvious
adaptivity to image content as well as to different DCT frequency-mode positions.
Specifically, the embedding change probabilities are mostly concentrated in the texture regions of an image. Besides,
the embedding change probabilities
in the upper-left corner of each DCT block are larger than those in the bottom-right corner.
It is widely acknowledged in JPEG steganography that the DCT coefficients in low- or medium-frequency modes are more suitable for modification than those in high-frequency modes.
As a result, most traditional methods are designed according to such a heuristic rule.
This rule can also be verified by JEC-RL,
where the embedding change probabilities are automatically learned via the interactions between the policy network and the environment network in a data-driven manner.
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have proposed an automatic cost learning method for JPEG images called JEC-RL, wherein the policy network and environment network are specifically designed according to JPEG domain knowledge.
Within the policy network, the first module outputs the pixel-level texture complexity, the second module generates DCT features, and the third module rearranges the DCT features into embedding policies represented in an $8\times8$ DCT frequency-mode structure.
As for the environment network, the preprocessing layer, the network depth, and the network width are carefully designed for the purpose of efficient gradient propagation for reward assignment.
{We stress that DCT basis filters are more suitable than high-pass filters in the preprocessing layer for providing sufficient frequency resolution,
due to the fact that embedding modifications can take place in all DCT modes.}
Extensive experiments have demonstrated that JEC-RL can automatically learn outstanding JPEG embedding costs, and the following conclusions can be made.
\begin{enumerate}
\item Compared to traditional steganographic methods, JEC-RL achieves state-of-the-art security performance against different advanced steganalyzers.
\item The network architecture applied in JEC-RL
{has been deliberately designed according to some JPEG DCT characteristics, including the texture level of DCT blocks, the correlation among DCT coefficients, and the different impacts of DCT frequency-modes. These considerations
have significant positive impacts on security performance.} Ablation studies have shown the effectiveness of the proposed three-module composed policy network and the gradient-oriented environment network equipped with the $8\times8$ DCT basis filters.
\item JEC-RL can work together with traditional methods such as J-UNIWARD and MSUERD\_SPA. Based on the same kind of texture complexity information, JEC-RL(J-UNI) and JEC-RL(MSU) can obtain more effective steganographic embedding costs than J-UNIWARD and MSUERD\_SPA.
\end{enumerate}
To the best of our knowledge, this paper is the first work that can learn JPEG embedding costs from scratch and outperform traditional cost functions.
{
To summarize, besides using more advanced network structures for automatic learning,
domain knowledge also plays an important role and should be exploited
in task-specific schemes.
}
To further improve the performance, the following aspects may be worth investigating. Firstly, it is interesting to learn embedding costs for non-additive distortion. For example, the policy network may generate joint embedding policies for highly correlated DCT coefficients, as done in \cite{decompose} for joint distortion.
Secondly, the side information from uncompressed images \cite{sideinfor} may be taken into account in the policy network to obtain more secure modification actions.
Thirdly, more attention should be paid to the reward function.
For example, multiple environment networks can be involved to yield rewards in a way similar to the min-max strategy \cite{minmax}. Besides, the rewards may also be designed in a way independent of the gradients.
Lastly, we hope to extend our cost learning method to other media carriers such as video or audio \cite{audio}.
\section{Appendix}
\label{sec:appendix}
In this appendix, we show the architecture of the learnable part in JEC-RL in Fig. \ref{fig:Architecture}, including the pixel-level texture complexity evaluation module and the DCT feature extraction module in the policy network, and the gradient-oriented environment network.
The kernel configurations of the convolutional layers are given in the format: kernel width $\times$ kernel height $\times$ number of input feature maps $\times$ number of output feature maps. The sizes of the output
feature maps are given in the format: height $\times$ width $\times$ number of feature maps. In the first part of the policy network, each convolutional group consists of a convolutional layer, a batch normalization layer, and a ReLU activation function, while each deconvolutional group consists of a deconvolutional layer, a batch normalization layer, and a leaky ReLU activation function.
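For concreteness, the following is a minimal PyTorch sketch of these two building blocks; the channel counts, kernel sizes, and strides are illustrative placeholders rather than the exact configurations used in JEC-RL.
\begin{verbatim}
import torch.nn as nn

# Minimal sketch of the two building blocks described above.
# Channel counts, kernel sizes, and strides are illustrative only.

def conv_group(c_in, c_out, kernel=3, stride=1):
    # convolution -> batch normalization -> ReLU
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel, stride, padding=kernel // 2),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

def deconv_group(c_in, c_out, kernel=3, stride=2):
    # deconvolution -> batch normalization -> leaky ReLU
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel, stride,
                           padding=kernel // 2,
                           output_padding=stride - 1),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.2, inplace=True),
    )
\end{verbatim}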
\begin{figure}[t]
\centering
\subfigure[Pixel-level texture complexity evaluation module.]{\includegraphics[height=6cm]{Images//Architecture1.pdf}
\label{fig:fp}}
\hspace{.1in}
\subfigure[DCT feature extraction module.]{\includegraphics[height=7cm]{Images//Architecture2.pdf}
\label{fig:sp}}
\hspace{.1in}
\subfigure[Gradient-oriented environment network.]{\includegraphics[height=6cm]{Images//Architecture3.pdf}
\label{fig:e}}
\caption{
Network structures used in JEC-RL.
}\label{fig:Architecture}
\end{figure}
\normalem
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:introduction}
Deep learning models have demonstrated impressive performance on various pattern recognition tasks \cite{Krizhevsky:2012a0,He:201604,Devlin:2019ad,Graves:201383}. However, these models are vulnerable to adversarial examples \cite{Szegedy:2014cd,Goodfellow:2015c5}, which are maliciously crafted by adding small adversarial perturbations to natural images but can fool the target model. A number of adversarial attack methods have been developed \cite{Goodfellow:2015c5,Papernot:2016c4,Ilyas:20185d,Brendel:201866} to generate adversarial perturbations under various threat models, which help to identify the vulnerabilities of deep learning models and serve as a surrogate to evaluate adversarial robustness \cite{Carlini:20174b,Dong:20200a}.
With the rapid development of adversarial attack and defense methods, it is of great importance to evaluate the existing methods correctly and reliably \cite{Carlini:2019ba,Dong:20200a,Croce:202099}. Carefully designed adaptive attacks are sometimes needed to evaluate the worst-case robustness of a particular defense in case of gradient obfuscation \cite{Athalye:201869,Tramer:20209b}. These attack methods were manually designed by experts on a case-by-case basis, which requires considerable trial-and-error effort. One may hope to discover attack methods automatically to reduce this burden; such automatically discovered methods can not only serve as a reasonable baseline for measuring the strength of human-designed attack methods but also examine the rationality of the assumptions made in them.
Such a desire to automate attacks becomes even more urgent when we consider the practical yet challenging setting of decision-based black-box attacks,
where the attacker can only query the target model for the final classification labels.
Although various decision-based attack methods have been proposed \cite{Brendel:201866,Cheng:20191b,Dong:2019ae,Brunner:2019b4,Cheng:20200c}, many of them are largely heuristic and exhibit unsatisfactory performance
compared to gradient-based white-box attack methods, which can be optimal in some sense \cite{Madry:2018e9}.
Such a gap makes automated attack methods more urgently needed under this threat model than under others, to understand the rationality of these heuristics, and even to discover new methods with better performance.
The problem of automatically discovering adversarial attack algorithms falls into the general direction of program synthesis,
which aims to automatically discover a program satisfying
a user intent specification \cite{gulwani2017program}. Many generic methodologies and techniques have been developed for program synthesis,
with the majority
focusing on software problems \cite{SolarLezama:200697,Gulwani:2011eb,Balog:2017ba}.
On the other side, the task of automating the process of solving machine learning problems is known as automated machine learning (AutoML) \cite{Feurer:2015ac}.
One of the most attractive directions of AutoML is neural architecture search (NAS), which aims to automatically discover good architectures of deep networks \cite{Zoph:201713}, though existing methods often start with expert-designed layers. AutoML-Zero \cite{Real:202082} moves one step further and shows the promise of searching for a complete classification algorithm (e.g., two-layer neural networks) from scratch, using basic mathematical operations as building blocks with minimal human participation.
However, the discovered classification algorithms are still far behind the current practice.
In this work, we propose to solve the practical yet challenging problem of decision-based adversarial attacks by automatically searching for attack algorithms with competitive performance.
We call our approach \textbf{Auto}mated \textbf{D}ecision-based \textbf{A}ttacks (\textbf{AutoDA{}}).
Technically, we design a search space constructed from basic mathematical operations, which provides sufficient expressiveness for the decision-based attack problem with affordable complexity. Similar search spaces are used by many program synthesis works aiming to automatically solve software problems \cite{Gulwani:2011eb}. Thus we adapt useful methodologies and techniques from program synthesis to AutoDA{}. For example, we use an algorithm template \cite{Srivastava:201354} to alleviate the difficulty of the search problem and use pruning techniques based on logical constraints \cite{gulwani2017program} imposed by the algorithm template and the adversarial attack problem. The design choice of constructing the search space from basic mathematical operations is also similar to that of the previously mentioned AutoML-Zero \cite{Real:202082}.
However, due to the theoretical and practical differences between our problem and AutoML-Zero's, AutoDA{} settles on quite different design choices and implementations, e.g., we use the static single assignment (SSA) form instead of the three-address code (TAC) form
used in AutoML-Zero to define the search space for better sample efficiency and computational performance in our use case of generating random programs, as detailed in Section~\ref{ssec:search_space}.
To explore this search space efficiently, we develop a random search algorithm combined with several pruning techniques and intuitive priors. To further reduce computational cost, we utilize a small and fast model for evaluating attack algorithms during the search.
Despite the simplicity of the discovered top-performing algorithms on this small model, they are also query-efficient when transferred to larger models and share common operation sequences with existing attack methods, which illustrates the rationality of some heuristics in existing works.
Our discovered algorithms consistently demonstrate comparable or better performance than the state-of-the-art decision-based methods when attacking normal and defensive models on the CIFAR-10 \cite{krizhevsky2009learning} and ImageNet \cite{Deng:2009ff} datasets.
\section{Related Work}
\label{sec:related_work}
\textbf{Adversarial attacks and defenses.} Since deep learning models were found to be vulnerable to adversarial examples \cite{Szegedy:2014cd,Goodfellow:2015c5}, many attack methods under different threat models have been developed.
In general, existing attacks can be categorized into the white-box and black-box attacks. Under the white-box setting, the attacker has full knowledge about the target model, and thus various gradient-based attack methods can be applied, such as the fast gradient sign method (FGSM) \cite{Goodfellow:2015c5}, the projected gradient descent (PGD) method \cite{Madry:2018e9}, and the C\&W method \cite{Carlini:20174b}. In contrast, under the black-box threat model, the attacker has limited access to the target model. For example, under the score-based black-box threat model, the attacker can only acquire the output probabilities of the black-box model with a limited number of queries \cite{Ilyas:20185d,Uesato:2018e9,cheng2019improving}. The decision-based black-box setting is more challenging because the attacker can only obtain the final classification labels by querying the target model \cite{Brendel:201866,Dong:2019ae,Brunner:2019b4,Cheng:20191b,Cheng:20200c}. This setting is yet more practical in real-world scenarios \cite{Brendel:201866}.
Due to the security threat, various defense methods have been proposed to defend against adversarial attacks \cite{Madry:2018e9}. However, many of them cause obfuscated gradients and can be broken by adaptive attacks \cite{Athalye:201869}. Currently, the most effective defense methods are based on adversarial training \cite{Dong:20200a,Madry:2018e9}.
\textbf{Program synthesis.} Our approach also relates to program synthesis, whose core problem is to generate a program that meets an intent specification \cite{gulwani2017program}. Many program synthesis works use traditional techniques, e.g., SKETCH \cite{SolarLezama:200697} solves the programming-by-sketching problem using a SAT solver, and Brahma \cite{Gulwani:2011eb} can efficiently discover highly nontrivial loop-free bitvector programs of up to 20 lines from basic bitvector operations using an SMT solver. Recent works may use machine learning techniques, e.g., DeepCoder \cite{Balog:2017ba} solves the programming-by-example problem using neural-guided search. These works mainly focus on software problems instead of machine learning problems.
Static analysis techniques are essential in these works and they are also useful for our AutoDA{}, as detailed below.
\section{Methods}
\label{sec:methods}
In this section, we present AutoDA{} in detail. For simplicity, we particularly focus on untargeted attacks in this work, where the attacker aims to cause misclassification on the victim classifier.
Nevertheless, our approach can be extended to targeted attacks straightforwardly.
\subsection{Overview}
\label{ssec:overview}
Discovering an algorithm that satisfies an intent specification is an undecidable problem in general \cite{gulwani2017program}, and thus is extremely hard. One approach to reducing the difficulty of this problem is to provide a template for the algorithm, which reduces the problem to searching for the missing components in this template \cite{Srivastava:201354}. Inspired by this approach, we choose the random walk framework for decision-based attacks under the \(\ell_2\) norm as our algorithm template. This framework was first proposed in the Boundary attack \cite{Brendel:201866} and adopted by many later decision-based attacks \cite{Dong:2019ae,Brunner:2019b4}.
As outlined in Alg.~\ref{alg:framework}, the random walk process starts at an adversarial starting point \(\bm{x}_1\), which can be obtained by repeatedly adding large random noise to the original example \(\bm{x}_0\) until the result is misclassified. In each iteration of the random walk, the attacker executes the \texttt{generate()} function to generate the next random point \(\bm{x}'\) to walk to, based on the original example \(\bm{x}_0\) and the best adversarial example \(\bm{x}\) found so far. \(\bm{x}'\) is usually generated by applying transformations to a Gaussian noise. If \(\bm{x}'\) is adversarial and is closer to \(\bm{x}_0\) than the old adversarial example \(\bm{x}\), we update the adversarial example \(\bm{x}\) to \(\bm{x}'\), since we have found a better adversarial example with a smaller perturbation.
There are also some hyperparameters inside the \texttt{generate()} function controlling the step size of the random walk process. After each iteration, the framework records whether \(\bm{x}'\) is adversarial and adjusts the hyperparameters according to the success rate over several past trials.
There are two missing components in this template --- the \texttt{generate()} function and the hyperparameter adjustment strategy.
The main difference among existing attack methods lies in their \texttt{generate()} functions, while their strategies for hyperparameter adjustment are similar, i.e., they all adjust their hyperparameters to make the step size smaller when the success rate is too low and vice versa. Without loss of generality, we only search for the \texttt{generate()} function to make our problem easier, and settle on a predefined negative-feedback strategy for adjusting hyperparameters similar to existing works, as detailed in the supplementary material.
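As a rough illustration, such a negative-feedback rule can be sketched in Python as follows; the target success rate and the adjustment factor are hypothetical placeholders, not the exact values from our implementation.
\begin{verbatim}
# Hypothetical sketch of a negative-feedback step-size rule:
# shrink the step size when too few trials succeed, grow it otherwise.
def adjust_step_size(step_size, success_rate,
                     target=0.25, factor=1.1):
    if success_rate < target:
        return step_size / factor
    return step_size * factor
\end{verbatim}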
To solve our problem, we follow the generic methodology from program synthesis: define a search space for the \texttt{generate()} function, design a search method, and search for programs with top performance under some designed evaluation metrics. Before diving into the details, we provide an overview of AutoDA{} first: (1) We choose a generic search space constructed from basic scalar and vector mathematical operations which provides sufficient expressiveness for our problem; (2) We use random search combined with several pruning techniques and intuitive priors to efficiently explore the search space; (3) We evaluate programs with a small and fast model on a small number of samples to reduce computational cost.
The whole system of AutoDA{} is complex.
We elaborate on the design choices and important implementation details in the rest of this section, and provide further implementation details in the supplementary material.
\begin{algorithm}[tb]
\caption{Random walk framework for decision-based attacks under the \(\ell_2\) norm.}
\label{alg:framework}
\begin{algorithmic}
\STATE {\bfseries Data:} original example \(\bm{x}_0\), adversarial starting point \(\bm{x}_1\);
\STATE {\bfseries Result:} adversarial example \(\bm{x}\) such that the distortion \(\| \bm{x} - \bm{x}_0 \|_2\) is minimized;
\STATE \(\bm{x} \gets \bm{x}_1\); \(d_{\min} \gets \| \bm{x} - \bm{x}_0 \|_2\);
\WHILE {query budget is not reached}
\STATE \(\bm{x}' \gets \texttt{generate(}\bm{x}, \bm{x}_0\texttt{)} \);
\IF{\(\bm{x}'\) is adversarial \AND \(\|\bm{x}'-\bm{x}_0\|_2 < d_{\min}\)}
\STATE \(\bm{x} \gets \bm{x}'\); \(d_{\min} \gets \| \bm{x} - \bm{x}_0 \|_2\);
\ENDIF
\STATE update the success rate of whether \(\bm{x}'\) is adversarial;
\STATE adjust hyperparameters according to the success rate;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsection{Search Space}
\label{ssec:search_space}
Designing a search space is the art of trading off between expressiveness and complexity \cite{gulwani2017program}. On one hand, the search space should be expressive enough to include useful programs for the target problem. On the other hand, great expressiveness does not come for free --- it usually leads to high complexity. Searching over a complex space is both time-consuming and hard to implement. Instead of using a full-featured programming language like Python, which provides more-than-needed expressiveness with high complexity, we choose to design a domain specific language (DSL) specialized for our problem that provides sufficient expressiveness with relatively low complexity.
We list all the available operations in our AutoDA{} DSL in Table~\ref{tab:ops_list}. These operations are basic mathematical operations for scalars and vectors, and all vector operations have geometric meaning in the Euclidean space. Then we construct our search space for the \texttt{generate()} function as all valid static single assignment (SSA) form programs in this DSL with a given length and a given number of hyperparameters.
In our use case of generating random programs, we choose the SSA form widely used in modern compilers \cite{Lattner:20045d} instead of the three-address code (TAC) form used in AutoML-Zero \cite{Real:202082} for better sample efficiency and computational performance. Although the SSA and TAC forms are equivalent in the sense that they can be converted to each other, when generating random programs in the SSA form, we can enforce many desired properties of these programs explicitly and straightforwardly, e.g., limiting the number of hyperparameters and avoiding unused inputs and operations. In contrast, for the TAC form, we need to generate programs first, then check their properties and reject the failed ones. Moreover, checking a TAC form program requires almost as much work as converting it into an equivalent SSA form program. Consequently, this generate-then-check process hurts sample efficiency and computational performance. It is worth noting that the AutoDA{} DSL only requires all vector variables to have the same dimension but does not restrict them to a specific dimension. This property of our DSL preserves the possibility of transferring the discovered programs to other datasets with different image shapes without modification, though the hyperparameters' initial values might need extra tuning after changing the input dimension.
We design the program to accept three parameters: \(\bm{x}\) and \(\bm{x}_0\) as in the \mbox{\texttt{generate(}\(\bm{x}, \bm{x}_0\)\texttt{)}} function from Alg.~\ref{alg:framework}, as well as a random noise \(\bm{n}\) sampled from the standard Gaussian distribution \(\mathcal{N}(\bm{0}, \mathrm{I})\). Instead of providing operations for generating random noise in the AutoDA{} DSL, we provide the random noise as a parameter \(\bm{n}\). Combined with the SSA form, the program itself is pure, which makes efficient property testing convenient.
The AutoDA{} DSL designed above has sufficient expressiveness for the problem of decision-based adversarial attacks under the \(\ell_2\) norm. For example, we can implement the Boundary attack's \texttt{generate()} function in our AutoDA{} DSL; we provide one possible implementation in the supplementary material. On the other hand, the AutoDA{} DSL does not have high complexity since it has no control flow and has only ten unary and binary operations. However, this search space is still huge, because its size grows at least exponentially with the length of the program. As a result, we need to design and implement an efficient search method.
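To make the representation concrete, the following Python sketch shows one hypothetical way to encode and interpret an SSA-form program over the operations in Table~\ref{tab:ops_list}, with variables named as in Figure~\ref{fig:programs}; it illustrates the representation only and is not our actual implementation.
\begin{verbatim}
import numpy as np

# A subset of the DSL operations from Table 1.
OPS = {
    "SUB.VV": lambda a, b: a - b,
    "ADD.VV": lambda a, b: a + b,
    "MUL.VS": lambda a, s: a * s,
    "DIV.VS": lambda a, s: a / s,
    "NORM.V": lambda a: np.linalg.norm(a),
}

def run_ssa(program, inputs):
    # inputs: initial variables, e.g. {"s0": h, "v1": x0,
    # "v2": x, "v3": n}; SSA means every statement assigns a
    # fresh variable that is never reassigned.
    env = dict(inputs)
    for target, op, args in program:
        env[target] = OPS[op](*(env[a] for a in args))
    return env

# Example: the common operations v = x0 - x, d = ||v||_2, u = v / d.
program = [
    ("v4", "SUB.VV", ("v1", "v2")),
    ("s5", "NORM.V", ("v4",)),
    ("v6", "DIV.VS", ("v4", "s5")),
]
\end{verbatim}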
\begin{table}[t]
\caption{List of available operations in the AutoDA{} DSL. The suffix of each operation's notation indicates the parameter types of the operation, where \texttt{S} denotes scalar type, and \texttt{V} denotes vector type. For example, the \texttt{VS} suffix means the operation's first parameter is a vector and its second parameter is a scalar. Detailed definitions are provided in the supplementary material.}
\label{tab:ops_list}
\vskip 0.1in
\begin{center}
\setlength{\tabcolsep}{12.0pt}
\begin{tabular}{lll}
\toprule
ID & Notation & Description \\
\midrule
1 & \texttt{ADD.SS} & scalar-scalar addition \\
2 & \texttt{SUB.SS} & scalar-scalar subtraction \\
3 & \texttt{MUL.SS} & scalar-scalar multiplication \\
4 & \texttt{DIV.SS} & scalar-scalar division \\
5 & \texttt{ADD.VV} & vector-vector element-wise addition \\
6 & \texttt{SUB.VV} & vector-vector element-wise subtraction \\
7 & \texttt{MUL.VS} & vector-scalar broadcast multiplication \\
8 & \texttt{DIV.VS} & vector-scalar broadcast division \\
9 & \texttt{DOT.VV} & vector-vector dot product \\
10 & \texttt{NORM.V} & vector \(\ell_2\) norm \\
\bottomrule
\end{tabular}
\end{center}
\vskip -0.1in
\end{table}
\subsection{Search Method}
\label{ssec:search_method}
Searching for programs is a combinatorial optimization problem, because the search space is finite when ignoring the initial values of hyperparameters. In this work, we develop a random search based method combined with several pruning techniques and intuitive priors. We choose random search due to several reasons. First, from a theoretical perspective, the no free lunch theorems for optimization \cite{Wolpert:1997e6} imply that random search is on average a good baseline method for combinatorial optimization. For example, random search based methods are shown to be competitive baselines in NAS \cite{Li:2019b2}. Second, from a practical perspective, random search is much simpler to implement efficiently and correctly than other methods, e.g., evolutionary search, and it can be surprisingly effective \cite{Balog:2017ba} when combined with other techniques, e.g., pruning techniques. Finally, random search can run in parallel by its nature, which helps us easily distribute tasks to multiple machines. For the hyperparameters, since the framework would adjust them automatically during the random walk process, we initialize them to a given fixed value to reduce implementation complexity and computational cost.
Unlike NAS works, in which the search spaces are usually constructed from expert-designed layers such that good architectures are dense in them, AutoDA{}'s search space is constructed from a generic DSL, in which good programs are quite sparse. Naive random search would waste most computation on meaningless programs. We mitigate this issue by introducing four techniques specialized for the decision-based attack problem from two aspects --- the random program generating process and the search process. We conduct an ablation study on these four techniques to show their effectiveness in Section~\ref{ssec:search_method_ablation_study}.
For the random program generating process, we apply two intuitive priors to improve the quality of the generated programs: (1) \emph{Compact program}: We use a program generating algorithm that prefers to generate programs with fewer unused operations. Note that our algorithm can still generate programs with many unused operations, but with a lower probability.
(2) \emph{Predefined operations}: We add three predefined operations \(\bm{v} = \bm{x}_0 - \bm{x}\), \(d = \| \bm{v} \|_2\), and \(\bm{u} = \bm{v} / d\) to the program before randomly generating the remaining operations. These predefined operations are common in decision-based attacks under the \(\ell_2\) norm, because the program needs to minimize the distance between \(\bm{x}_0\) and \(\bm{x}\). Thus the distance \(d\) between \(\bm{x}\) and \(\bm{x}_0\) and the direction \(\bm{u}\) from \(\bm{x}\) to \(\bm{x}_0\) should be useful. These operations all appear at the very beginning of many existing methods, including the Boundary attack \cite{Brendel:201866}, the Evolutionary attack~\cite{Dong:2019ae}, and the state-of-the-art Sign-OPT attack~\cite{Cheng:20200c}. Again, programs that leave these predefined operations unused can still be generated, but with a lower probability.
Without reducing the size of our search space, both techniques merely add priors to the generating process and increase the probability of generating better programs.
For the search process, we apply two pruning techniques to filter out trivially meaningless programs based on constraints imposed by the decision-based attack problem and the random walk algorithm template, including: (1) \emph{Inputs check}: We filter out programs that do not make use of all inputs, because they would be meaningless for the decision-based attack problem. This property is checked formally with some basic static analysis techniques \cite{Aho:19866d}. (2) \emph{Distance test}: We filter out programs that generate \(\bm{x}'\) violating the inequality \(\| \bm{x}' - \bm{x}_0 \|_2 < d_{\min}\) required by the framework in Alg.~\ref{alg:framework}. However, formally checking this property is extremely hard. Instead, we test this property on ten different inputs and filter out programs that fail any of these tests. This informal test does not guarantee the inequality to hold for all inputs, but it works well in practice. The \emph{inputs check} and the \emph{distance test} are both done on CPU cores. By filtering out bad programs before they reach the GPU, far fewer programs need to be evaluated on the GPU. We will show that they save a large amount of expensive GPU computation in Section~\ref{ssec:searching_for_programs}.
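A minimal sketch of the \emph{distance test} is given below, assuming a candidate \texttt{generate()} program has been wrapped as a Python callable; the input dimension and the way random inputs are drawn are illustrative.
\begin{verbatim}
import numpy as np

def passes_distance_test(generate, dim=64, trials=10, seed=0):
    # Reject the program if any output violates
    # ||x' - x0||_2 < d_min on randomly drawn inputs.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x0 = rng.random(dim)                 # original example
        x = x0 + rng.normal(size=dim)        # current adversarial point
        n = rng.normal(size=dim)             # Gaussian noise input
        d_min = np.linalg.norm(x - x0)
        x_new = generate(x, x0, n)
        if np.linalg.norm(x_new - x0) >= d_min:
            return False                     # filtered out
    return True
\end{verbatim}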
\subsection{Evaluation Method}
\label{ssec:evaluation_method}
The last step is to define evaluation metrics such that we can distinguish good programs from bad ones. When evaluating the performance of decision-based attacks, we usually run them against many large deep models on different datasets to generate an adversarial example for each sample in the test set. However, as running large models and attacking all samples in the test set are computationally expensive, this kind of evaluation is time-consuming and impractical for our problem with a huge search space. To address this issue, we leverage two strategies to make the evaluation fast and cheap.
First, we adopt a version of EfficientNet-B0 \cite{Tan:2019ad} shrunk by a factor of 0.5 for evaluation.
EfficientNets are small and fast deep models that achieve high accuracies on various benchmarks. We train different binary classifiers for each pair of labels on the CIFAR-10 dataset. These classifiers can process more than 60,000 images per second on a single GTX 1080 Ti GPU. Second, we evaluate programs on a handful of examples and take an average over the evaluation metrics to save GPU time. Instead of using an absolute \(\ell_2\) distance between the original example \(\bm{x}_0\) and the best adversarial example \(\bm{x}\) the program found, we use a relative distance \(\| \bm{x} - \bm{x}_0 \|_2 / \| \bm{x}_1 - \bm{x}_0 \|_2\) as the metric where \(\bm{x}_1\) is the adversarial starting point as in Alg.~\ref{alg:framework}. We call it \emph{\(\ell_2\) distortion ratio}. A lower \(\ell_2\) distortion ratio means a better program.
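In code, this metric amounts to a one-liner; the sketch below assumes the examples are stored as NumPy arrays.
\begin{verbatim}
import numpy as np

def l2_distortion_ratio(x, x0, x1):
    # x: best adversarial example found; x0: original example;
    # x1: adversarial starting point. Lower is better.
    return np.linalg.norm(x - x0) / np.linalg.norm(x1 - x0)
\end{verbatim}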
Even with the small and fast classifier, running programs for a tremendous number of random walk iterations is still computationally prohibitive. However, a large number of iterations is necessary for the hyperparameter adjustment strategies in existing methods to take effect \cite{Brendel:201866,Dong:2019ae}.
To mitigate this issue, we first evaluate programs for a small number of iterations and select several top performing programs according to the evaluation metric.
Then we perform a second round of evaluation of these programs for a larger number of iterations.
This evaluation strategy also prefers programs that achieve relatively high query-efficiency within a few iterations. At the initial stage of the random walk process, the success rate is usually high, and thus the hyperparameter adjustment strategy tends to overshoot and harm the performance. Therefore, we disable hyperparameter adjustment in the first, small-scale evaluation.
We generate random programs in the SSA form as described in Section~\ref{ssec:search_space} and Section~\ref{ssec:search_method}.
Though SSA form programs are easy to analyze, they are slow and memory-inefficient to run. To make our SSA form programs run faster and occupy less memory, we compile them into their equivalent TAC form programs. During the compilation, we discard unused operations and allocate memory slots efficiently, such that the output TAC form programs are usually shorter and thus run faster with less memory usage than the original SSA form programs.
\section{Experiments}
\label{experiments}
In this section, we first run AutoDA{} to search for top performing programs under the evaluation metric on the small classifiers as described in Section~\ref{ssec:evaluation_method}. We then compare the discovered algorithms with human designed attacks against different models on CIFAR-10 and ImageNet. Finally, we conduct an ablation study for the four techniques used in the search method of AutoDA{} proposed in Section~\ref{ssec:search_method} to show their effectiveness.
\subsection{Searching for Programs}
\label{ssec:searching_for_programs}
We first introduce the detailed settings.
For the search space, we limit the maximum length of the program to 20 (i.e., the length of the Boundary attack's \texttt{generate()} function in AutoDA{} DSL). We allow one scalar hyperparameter and set it to 0.01 initially.
We use the binary classifier for class 0 (airplane) and class 1 (automobile) of the CIFAR-10 dataset described in Section~\ref{ssec:evaluation_method}.
For the search method and the evaluation method,
we first generate programs randomly with all the four techniques introduced in Section~\ref{ssec:search_method}, then evaluate these programs in batches for 100 iterations to calculate their \(\ell_2\) distortion ratios. Each batch of programs is evaluated on one randomly selected example from the CIFAR-10 test set such that the \(\ell_2\) distortion ratios of these programs in the same batch can be compared with each other. The batch size here is 150, which achieves optimal performance on our hardware.
Second, we select the best program with the lowest \(\ell_2\) distortion ratio from each batch of programs and evaluate these selected programs for 10,000 iterations on ten fixed examples from the CIFAR-10 test set to obtain their final \(\ell_2\) distortion ratios. Since the ten examples are fixed, these final \(\ell_2\) distortion ratios can be compared with each other.
We run this experiment 50 times in parallel.
Each run allows a maximum number of 500 million queries to the classifier, which takes about two hours on one GTX 1080 Ti GPU.
In all 50 runs, we generate about 125 billion random programs. 45.475\% of these programs failed the \emph{inputs check}, 54.497\% of them failed the \emph{distance test}, and only 0.028\% of them survived both and continued to be evaluated against the classifier on GPU. These results show that the \emph{inputs check} and \emph{distance test} techniques save a large amount of expensive GPU computation. As a result, we achieve a throughput of 294k programs per second per GTX 1080 Ti GPU.
We plot the lowest \(\ell_2\) distortion ratio found on the ten fixed examples in each run in Figure~\ref{fig:lra}. They average at 0.01797 with a standard deviation of 0.00043.
The top one \(\ell_2\) distortion ratio is 0.01699, the second place is 0.01705, and the third place is 0.01723. The first-place and second-place programs perform quite similarly, and we choose both of them to compare with human-designed attacks. We call them \emph{AutoDA{} 1st} and \emph{AutoDA{} 2nd}, respectively. We show the SSA form programs of AutoDA{} 1st and 2nd in Figure~\ref{fig:programs}. Surprisingly, they are quite short after discarding unused operations --- AutoDA{} 1st uses only 10 operations and AutoDA{} 2nd uses 13 operations, while the Boundary attack's \texttt{generate()} function uses 20 operations when expressed in the AutoDA{} DSL. We also observe that AutoDA{} 1st includes an operation sequence of \texttt{v8 = MUL(v3,s0)}; \texttt{s18 = DOT(v17,v8)}, and AutoDA{} 2nd includes \texttt{v7 = MUL(v3,s0)}; \texttt{s11 = DOT(v10,v7)}, where \texttt{s0} is the scalar hyperparameter and \texttt{v3} is the Gaussian noise. They share a pattern of \mbox{\texttt{DOT(*,MUL(v3,s0))}}. A similar pattern is also observed in the Boundary attack \cite{Brendel:201866}. Moreover, both discovered attacks use the three predefined operations, which are also used in existing works. These similarities qualitatively suggest the rationality of some heuristics used in existing attack methods.
\begin{figure*}
\begin{minipage}[t]{0.53\textwidth}
\begin{figure}[H]
\vskip 0.1in
\begin{center}
\includegraphics[width=0.44\linewidth,align=t,cfbox=lightgray 1pt 0.3em 0em]{res/AutoDA_1st.pdf}
\hspace{0.2em}
\includegraphics[width=0.44\linewidth,align=t,cfbox=lightgray 1pt 0.3em 0em]{res/AutoDA_2nd.pdf}
\caption{The SSA form programs of AutoDA{} 1st and 2nd, where \texttt{s0} is the hyperparameter, \texttt{v1} is the original example \(\bm{x}_0\), \texttt{v2} is the adversarial example \(\bm{x}\) the random walk process already found, and \texttt{v3} is the standard Gaussian noise \(\bm{n}\). The return value of these programs is the next random point to walk to. The \texttt{s}-prefix in variable's name means the variable is a scalar, and \texttt{v}-prefix for vector. Unused operations are discarded for clarity. The original programs as well as the compiled TAC form programs are provided in the supplementary material.}
\label{fig:programs}
\end{center}
\vskip -0.2in
\end{figure}
\end{minipage} \hspace{0.5em}
\begin{minipage}[t]{0.45\textwidth}
\begin{figure}[H]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=\linewidth]{res/lra.pdf}}
\vskip -0.1in
\caption{Distribution of the lowest \(\ell_2\) distortion ratio found in each of the 50 runs of searching for programs in our experiment.}
\label{fig:lra}
\end{center}
\vskip -0.3in
\end{figure}
\begin{figure}[H]
\begin{center}
\centerline{\includegraphics[width=\linewidth]{res/ablation.pdf}}
\vskip -0.1in
\caption{Search method ablation study experiment results. Each column shows the top 200 lowest \(\ell_2\) distortion ratios found by each search method. From left to right, each column adds a new technique.}
\label{fig:ablation}
\end{center}
\vskip -0.2in
\end{figure}
\end{minipage}
\end{figure*}
\subsection{Results on CIFAR-10 and ImageNet}
\label{ssec:results_on_cifar_10_and_imagenet_models}
We benchmark the AutoDA{} 1st and 2nd programs we found for attacking various models under the \(\ell_2\) norm untargeted decision-based threat model on the CIFAR-10 \cite{krizhevsky2009learning} and ImageNet \cite{Deng:2009ff} datasets, and compare them with existing methods. We follow \citeauthor{Dong:20200a}'s benchmark methodology: We consider an attack to be successful once it finds an adversarial example whose \(\ell_2\) distance w.r.t. the original example is smaller than \(\epsilon = 1.0\) on CIFAR-10, and whose normalized \(\ell_2\) distance w.r.t. the original example is smaller than \(\epsilon = \sqrt{0.001}\) on ImageNet (normalized \(\ell_2\) distance is defined as \(\| \cdot \|_2 / \sqrt{d}\) where \(d\) is the dimension of the input to the classifier). Then we use the \emph{attack success rate vs. queries} curve to show the effectiveness and efficiency of these attack algorithms, as well as the \emph{\(\ell_2\) distortion vs. queries} curve widely used in previous decision-based attack works \cite{Brendel:201866,Cheng:20200c}.
\begin{figure}[t]
\vskip 0.1in
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{res/baseline.pdf}}
\caption{The \emph{\(\ell_2\) distortion vs. queries} and \emph{attack success rate vs. queries} curves on the three models on the CIFAR-10 dataset and the Inception-v3 model on the ImageNet dataset.}
\label{fig:baseline}
\end{center}
\vskip -0.2in
\end{figure}
We compare the following untargeted decision-based attack methods with our AutoDA{} 1st and 2nd: (1) The Boundary attack \cite{Brendel:201866}, (2) the Evolutionary attack \cite{Dong:2019ae}, and (3) the Sign-OPT attack \cite{Cheng:20200c}. The first two attacks are both based on the random walk framework. They are included in \citeauthor{Dong:20200a}'s benchmark, so we adapt their implementations. However, we disable the dimension reduction trick for these two attacks on ImageNet because we want to know the original attacks' strength. The third one is a recently proposed query-efficient attack based on zeroth-order optimization. We adapt the implementation from its official repository and leave all hyperparameters unmodified.
Together with our AutoDA{} 1st and 2nd attacks, we have five attacks to run. As for the initialization of hyperparameters in AutoDA{} 1st and 2nd, we adopt the original value 0.01 used in the search to attack the CIFAR-10 models. However, when transferred to ImageNet, they would fail to pass the distance test with their hyperparameters initialized to 0.01. To overcome this issue caused by the changed input dimension, we decrease their hyperparameters' initial value to 0.001 on ImageNet. The Sign-OPT attack might spend up to several hundred queries on finding the starting points, while the other attacks do not. Thus we include these queries for finding the starting points in the total queries to make the comparison fairer. More details on how we run the five attacks are provided in the supplementary material.
We select the first 1,000 images from the CIFAR-10 test set and the first 1,000 images from the ImageNet test set to run our benchmark. We choose the normally trained ResNet50 \cite{He:201604} model on the CIFAR-10 dataset and normally trained Inception-v3 \cite{Szegedy:2016bd} model on the ImageNet dataset both provided by torchvision \cite{Paszke:201942} for the five methods to attack. Besides, we also aim to understand the strength of our attacks on stronger models, and thus we include \(\ell_2\) adversarially trained (\(\epsilon = 1.0\)) ResNet50 model provided by \citet{robustness} and \(\ell_\infty\) adversarially trained (\(\epsilon = 8/255\)) WRN model \cite{Zagoruyko:20160e} provided by \citet{Madry:2018e9}. The clean accuracies for these models on our benchmark examples are: 94.4\% for the normally trained ResNet50 model, 78.1\% for the normally trained Inception-v3 model, 82.4\% for the \(\ell_2\) adversarially trained ResNet50 model, and 87.3\% for the \(\ell_\infty\) adversarially trained WRN model.
We plot the \emph{\(\ell_2\) distortion vs. queries} and \emph{attack success rate vs. queries} curves for the five attacks on the three models on the CIFAR-10 dataset and the Inception-v3 model on the ImageNet dataset in Figure~\ref{fig:baseline}. We also provide \emph{attack success rate} at different number of \emph{queries} in Table~\ref{tab:asr} for numerical comparisons.
\begin{table*}[t]
\caption{The \emph{attack success rate} given different number of \emph{queries} on the three models on the CIFAR-10 dataset and the Inception-v3 model on the ImageNet dataset.}
\label{tab:asr}
\vskip 0.1in
\begin{center}
\begin{footnotesize}
\setlength{\tabcolsep}{1.9pt}
\begin{tabular}{c|rrr|rrr|rrr|rrr}
\toprule
Model & \multicolumn{3}{c|}{\thead{ResNet50 \\ (normal training)}}
& \multicolumn{3}{c|}{\thead{ResNet50 \\ (\(\ell_2\) adv. training)}}
& \multicolumn{3}{c|}{\thead{WRN \\ (\(\ell_\infty\) adv. training)}}
& \multicolumn{3}{c}{\thead{Inception-v3 \\ (normal training)}} \\
\midrule
Queries & \multicolumn{1}{c}{2,000} & \multicolumn{1}{c}{4,000}
& \multicolumn{1}{c|}{20,000}
& \multicolumn{1}{c}{2,000} & \multicolumn{1}{c}{4,000}
& \multicolumn{1}{c|}{20,000}
& \multicolumn{1}{c}{2,000} & \multicolumn{1}{c}{4,000}
& \multicolumn{1}{c|}{20,000}
& \multicolumn{1}{c}{2,000} & \multicolumn{1}{c}{4,000}
& \multicolumn{1}{c}{20,000} \\
\midrule \midrule
Boundary & 10.7\% & 28.4\% & 100.0\% & 0.6\% & 1.6\% & 21.6\% & 1.1\% & 2.7\% & 43.1\% & 7.0\% & 15.0\% & 86.8\% \\
Evolutionary & 64.9\% & 96.3\% & 100.0\% & 4.4\% & 8.9\% & 25.4\% & 7.7\% & 18.2\% & 49.4\% & 33.4\% & 59.2\% & 98.1\% \\
Sign-OPT & 76.1\% & 98.8\% & 100.0\% & 6.6\% & 12.6\% & \textbf{29.5\%} & 11.3\% & 24.1\% & \textbf{60.3\%} & 41.4\% & 69.1\% & \textbf{99.2\%} \\
\midrule
AutoDA{} 1st & \textbf{95.9\%} & \textbf{99.7\%} & 100.0\% & 9.7\% & \textbf{14.9\%} & 27.8\% & \textbf{19.2\%} & \textbf{28.3\%} & 57.4\% & \textbf{57.1\%} & \textbf{74.6\%} & 98.7\% \\
AutoDA{} 2nd & 95.6\% & 99.5\% & 100.0\% & \textbf{10.0\%} & 14.8\% & 27.7\% & 18.9\% & 27.1\% & 57.0\% & 56.5\% & 73.4\% & 97.8\% \\
\bottomrule
\end{tabular}
\end{footnotesize}
\end{center}
\vskip -0.1in
\end{table*}
From these curves and the table, we can observe that AutoDA{} 1st performs slightly better than AutoDA{} 2nd, which is consistent with their quite-close \(\ell_2\) distortion ratios shown in Section~\ref{ssec:searching_for_programs}. This fact suggests the rationality of using the \(\ell_2\) distortion ratio as the evaluation metric. Moreover, both AutoDA{} 1st and 2nd outperform the other three human-designed baselines by a large margin when the number of queries is smaller than 5,000. When the number of queries grows beyond 5,000, our AutoDA{} 1st attack method still outperforms the Boundary attack and the Evolutionary attack, while falling slightly behind the Sign-OPT attack; this small gap is noticeable only on the defensive models and the ImageNet model. These behaviors are consistent across all models and datasets in our experiments, demonstrating the great query-efficiency of our discovered attack methods, especially under a low number of queries, which is important in real-world black-box attack scenarios \cite{Brendel:201866,Ilyas:20185d}.
\subsection{Ablation Study on Search Method}
\label{ssec:search_method_ablation_study}
As described in Section~\ref{ssec:search_method}, we apply four techniques to our search method, which are (1) \emph{Predefined operations}, (2) \emph{Inputs check}, (3) \emph{Distance test}, and (4) \emph{Compact program}. To illustrate their effectiveness, we conduct the following ablation study on search method. Starting from the base search method using only naive random search, we add the four techniques one by one, so we get five different random search methods including the base one. For each of these five random search methods, we run it to evaluate 100,000 programs against the classifier for 100 iterations on five fixed examples and calculate the \(\ell_2\) distortion ratio for each program. We plot the top 200 lowest \(\ell_2\) distortion ratios that each search method found in Figure~\ref{fig:ablation}.
From the figure, we can observe that as more techniques are added, the top 200 lowest \(\ell_2\) distortion ratios show an overall decreasing trend. For example, the lowest \(\ell_2\) distortion ratio in each column becomes lower and lower. These results demonstrate the effectiveness of the four techniques applied in our search method. The results are qualitative, because the techniques might interfere with each other, so that multiple techniques combined might bring an improvement larger than the sum of the improvements brought by applying each of them alone.
As a result, the absolute improvement shown in the figure does not directly quantify the effectiveness of each individual technique.
\section{Conclusion and Discussion}
\label{sec:conclusion_and_discussion}
In this work, we propose to automate the process of discovering decision-based attack algorithms. Starting from the random walk framework as the algorithm template, we construct our generic search space from the AutoDA{} DSL, explore this search space using random search integrated with several pruning techniques and intuitive priors, and evaluate programs in the search space using a small and fast model. The discovered attack algorithms are simple, yet consistently achieve high query-efficiency when transferred to both normal and defensive models on the CIFAR-10 and ImageNet datasets.
Many future extensions can be made to this work. First, we particularly focus on the untargeted decision-based threat model under the \(\ell_2\) norm in this work. Extending our approach to targeted attacks should be straightforward, while extending to the \(\ell_\infty\) norm might need more effort, because designing another search space specialized for the \(\ell_\infty\) norm is necessary. Second, we limit the search space to be relatively small and use a random search based method to explore it. More advanced search methods like evolutionary search and more computational resources could explore larger and more powerful search spaces, which should lead to better algorithms. Finally, advanced static analysis tools can help us simplify the discovered attack algorithms and identify important operations in these algorithms.
\section{Introduction}
\input{sections/introduction.tex}
\section{Related work}
\input{sections/relatedwork.tex}
\section{Proposed Method}
\input{sections/method.tex}
\section{Experiments}
\input{sections/experiment.tex}
\section{Conclusions}
\input{sections/conclusions.tex}
\bibliographystyle{IEEEtran}
\subsection{Topometric map}
\label{sec:map}
\begin{comment}
\begin{figure}
\centering
\begin{minipage}[!t]{0.3\textwidth}
\includegraphics[width=\textwidth]{images/kitti_00_zoom.png}
\caption{The topometric map (blue) and real GPS trajectory (red) on satellite image.}
\label{fig:topometric_map}
\end{minipage}
\end{figure}
\end{comment}
The topometric map used in our method is a simple graph-like data structure $M = \{V, E\}$, where each vertex $v_i \in \mathbb{R}^2$ represents a waypoint $(x_i, y_i)$ and each edge $e_i$ represents a road segment. Similar to \cite{maplite}, the waypoints are transformed from latitude and longitude to UTM (Universal Transverse Mercator) coordinates.
For urban areas, the topometric map can be downloaded from OpenStreetMap \cite{Openstreetmap}, which is publicly accessible. In this work, as a proof of concept, we propose a simple method to create topometric maps that works for both urban and rural areas. Specifically, we use publicly available satellite images from GoogleMap. By {\em clicking} and connecting some key points on the satellite map, {{\em e.g.}}, road intersections and sharp turns, we can easily build a sparse global graph. Then, we create a dense topometric map by applying linear interpolation on each edge. It is worth mentioning that such a topometric map can also be created in an automatic or semi-automatic way, {{\em e.g.}}, by extracting roads from satellite images \cite{VecRoad_20CVPR, 2018RoadTracer}. The created topometric map is quite noisy and imprecise, and cannot be directly used as a global path for navigation. Note that the proposed process is mainly for demonstrating the robustness of our model; for real-world applications, we suggest using OSM or other tools to create the topometric map. What we want to emphasize is that our method does not need HD-maps, which require a huge cost to build and maintain.
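As an illustration, the densification step can be sketched as follows, where the waypoint spacing is a hypothetical value chosen for illustration.
\begin{verbatim}
import numpy as np

def densify_edge(v_a, v_b, spacing=1.0):
    # Linearly interpolate waypoints along one edge of the sparse
    # graph at a fixed spacing (in meters, UTM coordinates).
    v_a = np.asarray(v_a, dtype=float)
    v_b = np.asarray(v_b, dtype=float)
    length = np.linalg.norm(v_b - v_a)
    n = max(int(np.ceil(length / spacing)), 1)
    ts = np.linspace(0.0, 1.0, n + 1)
    return [(1.0 - t) * v_a + t * v_b for t in ts]
\end{verbatim}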
\subsection{Problem setting} Let $t$ denote the discrete time step, and let $s_t = (x_t, y_t)$ denote the UTM coordinate of the autonomous vehicle at time step $t$. Given the observed raw sensor data at time step $t$ and a coarse route extracted from the aforementioned noisy topometric map, the goal is to predict a future guidance trajectory over the next $T$ time steps, ${\mathbf{s}} = \{ s_t, \dots, s_{t + T-1}\}$, and use this trajectory for local navigation.
\subsection{Architecture}
The overall architecture of the proposed model is a multi-task learning framework, as illustrated in Fig.~\ref{fig:architecture}. The input of the model is the raw LiDAR point cloud and a local route extracted from the noisy topometric map. The LiDAR point cloud is converted to a BEV (Bird's Eye View) map and the local route is converted to a binary image. The LiDAR BEV map and the binary image are then concatenated and fed into the convolutional neural network backbone to extract deep features. A waypoint position encoder takes the deep features as input and outputs waypoint heatmaps. The heatmaps are further sent to depth-wise convolution layers to obtain waypoint position embeddings. The waypoint feature encoder module takes as input the features from the backbone and outputs waypoint-wise features. The waypoint transformer follows the structure of \cite{Transformer}; it takes as input the waypoint positional embeddings and waypoint features, and predicts waypoint coordinates.
\subsubsection{Input representations}
\label{sec:input}
\begin{figure}
\centering
\begin{minipage}[!t]{0.4\textwidth}
\includegraphics[width=\textwidth]{images/input.png}
\caption{(a) The LiDAR BEV map. (b) The binary image of local route. (c) The binary image of recorded driving trajectory. The dash lines represent the axis of the ego-car coordinate.}
\label{fig:input}
\end{minipage}
\end{figure}
As our goal is to inference the local guidance trajectory in the ego-centric coordinate, it is straightforward to represent the scene from the BEV. Following \cite{2019End},
we convert the raw LiDAR point cloud to a BEV representation. Our BEV map consists of $3$ channels, including the height, intensity (reflectance) and density of the point cloud. This results in a 3D tensor of size $H_0 \times W_0 \times 3$, where $H_0, W_0$ represent the y-x spatial dimensions. In order to keep more context information, the BEV map also includes part of the scene behind the vehicle, as illustrated in Fig.~\ref{fig:input} (a).
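A simplified sketch of this conversion is given below; the grid extents, resolution, and density normalization are assumptions for illustration rather than the exact values used in our system.
\begin{verbatim}
import numpy as np

def lidar_to_bev(points, x_range=(-10.0, 40.0),
                 y_range=(-25.0, 25.0), res=0.1):
    # points: (N, 4) array of (x, y, z, intensity) in the ego frame.
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w, 3), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    keep = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    for r, c, (_, _, z, i) in zip(ix[keep], iy[keep], points[keep]):
        bev[r, c, 0] = max(bev[r, c, 0], z)   # height channel
        bev[r, c, 1] = max(bev[r, c, 1], i)   # intensity channel
        bev[r, c, 2] += 1.0                   # raw point count
    # normalize the count into a density channel (assumed scheme)
    bev[:, :, 2] = np.minimum(1.0, np.log1p(bev[:, :, 2]) / np.log(64.0))
    return bev
\end{verbatim}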
Given the location of the vehicle, we extract the local route from the topometric map. Because the topometric map is imprecise, we include both forward and backward waypoints in the local route. As shown in Fig.~\ref{fig:input} (b), the local route is further projected onto the LiDAR BEV map as a fixed-width {\em virtual road} and converted to a binary image. In this work, we set the default {\em virtual road} width to $2$ meters. The same process is applied to the recorded driving trajectories, yielding a binary image of the recorded driving trajectory, as shown in Fig.~\ref{fig:input} (c). The binary image of the local route and the LiDAR BEV map are used as inputs to the neural network for both training and testing. The binary image of the recorded driving trajectory is used as ground truth in the road attention header during training.
\subsubsection{Backbone}
Our backbone is adapted from the 3D object detection network of \cite{2019PIXOR}. It uses a feature pyramid network that combines high-resolution features with low-resolution ones. It consists of five blocks. The first block consists of two convolutional layers and the second to fifth blocks are composed of residual layers.
The final feature map is down-sampled by a factor of $4$ with respect to the input, {{\em i.e.}}, it has size $\frac{H_0}{4} \times \frac{W_0}{4} \times C$, where $C=96$ is the number of channels.
\subsubsection{Waypoint feature encoder}
The transformer module expects a sequence as input, hence we design a waypoint feature encoder to generate waypoint embeddings. The waypoint feature encoder takes as input the above deep feature map and outputs waypoint embeddings of size $N \times d$ for the transformer, where $N$ is the number of waypoints per trajectory. As illustrated in Fig.~\ref{fig:architecture}, the waypoint feature encoder has two branches. The first branch consists of one convolution layer, which outputs a feature map ${\mathbf{x}}_1$ of size $\frac{H_0}{8} \times \frac{W_0}{8} \times N$. The feature map is reshaped to size $N \times d$. The second branch has two parts. The frontal part is a $3 \times 3$ convolution layer which outputs a road segmentation mask of size $\frac{H_0}{4} \times \frac{W_0}{4} \times 1$. The second part contains another convolution layer that takes the segmentation mask as input and outputs a feature map ${\mathbf{x}}_2$ of size $\frac{H_0}{8} \times \frac{W_0}{8} \times N$. ${\mathbf{x}}_2$ is also reshaped to $N \times d$. ${\mathbf{x}}_1$ and ${\mathbf{x}}_2$ are combined by element-wise addition to obtain the final waypoint embeddings. The road segmentation mask is learned by minimizing the binary cross entropy loss between the predicted road mask and the ground truth virtual road mask, as shown in Fig.~\ref{fig:input} (c).
\subsubsection{Waypoint positional encoding} Since the transformer architecture is permutation-invariant, we design a new module to generate waypoint positional embeddings to supplement the above waypoint embeddings. The waypoint position encoder module first uses a $3 \times 3$ convolution layer followed by a spatial softmax layer to output waypoint heatmaps of size $\frac{H_0}{4} \times \frac{W_0}{4} \times N$. The waypoint heatmap model is learned by minimizing the MSE (mean squared error) loss between predicted heatmaps and ground truth heatmaps, where the ground truth heatmaps are created from Gaussian distributions. The predicted waypoint heatmaps are sent to a second-stage convolution layer to obtain positional features of size $\frac{H_0}{8} \times \frac{W_0}{8} \times N$. The positional features are reshaped to $N \times d$ to obtain the final waypoint positional embeddings, {{\em i.e.}}, ${\mathbf{x}}_3$ in Fig.~\ref{fig:architecture}.
\subsubsection{Waypoint transformer} The waypoint transformer follows the structure of the Transformer \cite{Transformer}. It consists of a transformer encoder, a transformer decoder and a feed forward network (FFN). The transformer encoder consists of multiple encoder layers. Each encoder layer has a standard architecture, consisting of a multi-head self-attention module and a FFN. The transformer decoder has a similar structure to the transformer encoder and transforms $N$ embeddings of size $d$ using multi-headed self-attention mechanisms. Similar to DETR \cite{DETR}, our model decodes $N$ waypoints in parallel. Since the decoder is also permutation-invariant, we add learned positional encodings, {{\em i.e.}}, the query embeddings, to make it output ordered waypoint embeddings. The FFN takes the embeddings and regresses the final waypoint coordinates. We use the MSE loss to train the waypoint regressor.
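A rough PyTorch sketch of this head, in the spirit of DETR, is shown below; the embedding dimension, number of heads, number of layers, and number of waypoints are illustrative placeholders, not our exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class WaypointTransformer(nn.Module):
    def __init__(self, d=256, n_waypoints=20, n_heads=8, n_layers=2):
        super().__init__()
        # learned query embeddings make the decoder output ordered
        self.queries = nn.Parameter(torch.randn(n_waypoints, d))
        self.transformer = nn.Transformer(
            d_model=d, nhead=n_heads,
            num_encoder_layers=n_layers,
            num_decoder_layers=n_layers,
            batch_first=True)
        self.ffn = nn.Linear(d, 2)  # regress (x, y) per waypoint

    def forward(self, wp_embed, wp_pos):
        # wp_embed, wp_pos: (B, N, d) waypoint features and
        # positional embeddings from the two encoder modules
        src = wp_embed + wp_pos
        tgt = self.queries.unsqueeze(0).expand(src.size(0), -1, -1)
        out = self.transformer(src, tgt)
        return self.ffn(out)        # (B, N, 2) waypoint coordinates
\end{verbatim}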
\begin{figure}
\centering
\begin{minipage}[!t]{0.4\textwidth}
\includegraphics[width=\textwidth]{images/traj_align.png}
\caption{(a) The original predicted and ground truth waypoints. (b) The predicted waypoints after alignment.}
\label{fig:align}
\end{minipage}
\end{figure}
\section{Learning and inference}
We adopt a multi-task loss to train the full network. Specifically, we use the binary cross-entropy loss for the auxiliary road segmentation task and the MSE loss for the heatmap prediction and waypoint regression tasks.
Given the predicted road mask ${\mathbf{x}}$ of size $W' \times H'$ and binary image of the recorded driving trajectory ${\mathbf{y}}$ of the same size, the road segmentation loss is written as:
\begin{equation}
\label{eq:loss_att}
\mathcal{L}_{road} = - \sum_{i=1}^{H'} \sum_{j=1}^{W'} [{\mathbf{y}}_{i,j} \log{({\mathbf{x}}_{i,j})} + (1 - {\mathbf{y}}_{i,j}) \log{(1 - {\mathbf{x}}_{i,j})}].
\end{equation}
Given the predicted heatmaps ${\mathbf{x}}$ of size $W' \times H' \times N$ and ground truth heatmaps ${\mathbf{y}}$ generated from driving trajectory of the same size, the heatmap prediction loss is written as:
\begin{equation}
\label{eq:loss_heat}
\mathcal{L}_{heatmap} = \sum_{i=1}^{H'} \sum_{j=1}^{W'} \| {\mathbf{y}}_{i,j} - {\mathbf{x}}_{i,j} \|_2^2.
\end{equation}
Let ${\mathbf{s}}$ and ${\mathbf{s}}^*$ denote the predicted and ground truth trajectories, respectively; the waypoint regression loss is then defined as follows,
\begin{equation}
\label{eq:loss_waypoint}
\mathcal{L}_{waypoint} = \sum_{t=0}^{T-1} \| s_t - s^*_t\|_2^2.
\end{equation}
The final loss is the sum of the road segmentation loss, heatmap prediction loss and waypoint regression loss, {{\em i.e.}} $\mathcal{L}_{all} = \mathcal{L}_{road} + \mathcal{L}_{heatmap} + \mathcal{L}_{waypoint}$.
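A compact PyTorch sketch of this multi-task loss is given below; for simplicity it uses mean-reduced losses and unit loss weights, which is an assumption for illustration.
\begin{verbatim}
import torch.nn.functional as F

def total_loss(road_logits, road_gt, heat_pred, heat_gt,
               wp_pred, wp_gt):
    # road_logits, road_gt: (B, 1, H', W') road mask (pre-sigmoid)
    # heat_pred, heat_gt:   (B, N, H', W') waypoint heatmaps
    # wp_pred, wp_gt:       (B, T, 2) waypoint coordinates
    l_road = F.binary_cross_entropy_with_logits(road_logits, road_gt)
    l_heat = F.mse_loss(heat_pred, heat_gt)
    l_wp = F.mse_loss(wp_pred, wp_gt)
    return l_road + l_heat + l_wp
\end{verbatim}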
The point-wise $L^2$ distance function in $\mathcal{L}_{waypoint}$ implicitly assumes that the driving speeds on the two trajectories are the same. Fig.~\ref{fig:align} (a) shows an example of two overlapping trajectories with different driving speeds. To overcome this drawback and better evaluate the prediction performance, we propose an aligned distance function that interpolates the source waypoints to the coordinates of the target waypoints, as illustrated in Fig.~\ref{fig:align} (b). In this way, we can better measure the similarity of two trajectories. The proposed aligned distance function is used in our evaluation metrics.
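One possible realization of such an aligned distance, via arc-length resampling of the predicted trajectory, is sketched below; the implementation details are assumptions for illustration.
\begin{verbatim}
import numpy as np

def aligned_distance(pred, gt):
    # pred, gt: (T, 2) waypoint arrays; assumes consecutive
    # waypoints are distinct so arc length is increasing.
    def arclen(traj):
        d = np.linalg.norm(np.diff(traj, axis=0), axis=1)
        return np.concatenate([[0.0], np.cumsum(d)])
    s_pred, s_gt = arclen(pred), arclen(gt)
    # resample predicted waypoints at the ground-truth arc lengths
    s_query = np.clip(s_gt, 0.0, s_pred[-1])
    aligned = np.stack(
        [np.interp(s_query, s_pred, pred[:, k]) for k in (0, 1)],
        axis=1)
    return np.linalg.norm(aligned - gt, axis=1).mean()
\end{verbatim}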
The model is implemented in PyTorch and trained on $4$ RTX 2080 Ti GPUs with $11$ GB memory. We use a per-GPU batch size of $64$ and train with the Adam optimizer. The initial learning rate is $0.003$. All models are trained end-to-end from scratch for $120$ epochs. The model is deployed on an IPC with a single GTX 1080 Ti GPU, and the inference time is around $20$ ms, {{\em i.e.}} $50$ Hz, which satisfies the real-time requirement.
\subsection{Experimental setup}
\subsubsection{Dataset}
\begin{figure}
\centering
\begin{minipage}{0.4\textwidth}
\includegraphics[width=\textwidth]{images/kitti.png}
\caption{Recorded driving trajectories. Sequences $00$, $02$, $05$ and $07$ are used for training, and the rest are used for testing.}
\label{fig:kitti}
\end{minipage}
\end{figure}
For the urban environment, we use the KITTI raw data \cite{Geiger:2012}.
Following the naming rules of the KITTI odometry benchmark, we use sequences $00$, $02$, $05$ and $07$ as training data and $08$, $09$ and $10$ as testing data. There are in total $4720$ training samples, and $1687$, $652$ and $349$ testing samples for the $08$, $09$ and $10$ testing sequences, respectively. The recorded driving trajectories are shown in Fig.~\ref{fig:kitti}.
As no rural autonomous driving dataset is publicly available, we collect data in an off-road environment by manual driving. The vehicle carries a HESAI Pandar64 LiDAR and a GPS/IMU system to capture point clouds and ground truth trajectories. We collected $656$ trajectories for training and another non-overlapping $701$ trajectories for testing. For both the urban and rural datasets, we create the topometric maps following the strategy proposed in \secref{sec:map}.
\subsubsection{Metrics}
In this work, we follow the common evaluation protocol in trajectory prediction literature and use \textbf{FDE}, \textbf{ADE} and \textbf{HitRate} as evaluation metrics.
The final displacement error (\textbf{FDE}) is calculated by $\| s_{T-1} - s^*_{T-1}\|_2$, where ${\mathbf{s}}$ and ${\mathbf{s}}^*$ are the predicted and ground truth trajectories, respectively. Because the final predicted waypoint can be used as the goal for local planning, {\em e.g.} for obstacle avoidance, the \textbf{FDE} metric is useful for evaluating the model's accuracy in predicting such goal waypoints.
The average displacement error (\textbf{ADE}) is defined by $\frac{1}{T} \sum_{t=0}^{T-1} \| s_t - s^*_t\|_2$. For evaluation against multimodal trajectory predictors, we use the minimum average displacement error (\textbf{minADE}), defined by $minADE_k = \min_{{\mathbf{s}}^k} \frac{1}{T} \sum_{t=0}^{T-1}\| s^k_t - s^*_t\|$. As our model can be regarded as a special case of multimodal trajectory prediction, {{\em i.e.}} with a modality number of $1$, we can use \textbf{minADE} to compare with other state-of-the-art multimodal models.
To better interpret the performance in the context of planning, we also use the \textbf{HitRate} metric proposed in \cite{2019CoverNet}. We define $Hit_{k,d}$ as $1$ for a single sample if $\min_{{\mathbf{s}}^k} \max_{t=0}^{T-1} \| s^k_t - s^*_t\| < d$ and $0$ otherwise. We then calculate the percentage of {\em hit} samples and refer to it as $HitRate_{k,d}$.
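The three metrics can be summarized by the short NumPy sketch below, written with plain $L^2$ distances; array shapes and names are illustrative, with \texttt{preds} holding the $k$ predicted modes of shape $(k, T, 2)$ and \texttt{gt} the ground truth of shape $(T, 2)$.
\begin{verbatim}
import numpy as np

def fde(pred, gt):        # final displacement error of one mode
    return np.linalg.norm(pred[-1] - gt[-1])

def min_ade(preds, gt):   # minADE_k over k predicted modes
    return min(np.linalg.norm(p - gt, axis=1).mean() for p in preds)

def hit(preds, gt, d):    # Hit_{k,d} for a single sample; HitRate_{k,d}
    # is the fraction of hit samples over the whole test set.
    return any(np.linalg.norm(p - gt, axis=1).max() < d for p in preds)
\end{verbatim}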
\subsection{Empirical results}
\begin{table}
\small
\centering
\begin{tabular}{cccc}
\toprule
Model & FDE$\downarrow$ & \ade{1} $\downarrow$ & \hitrate{1} $\uparrow$ \\
\midrule
Heatmap Only & 1.63 & 1.23 & 0.83 \\
\midrule
Transformer0 & 0.79 & 0.34 & 0.92 \\
Transformer1 & 0.72 & 0.35 & 0.95 \\
Transformer2 & 0.66 & 0.28 & 0.95 \\
Transformer3 & 0.61 & 0.26 & 0.96 \\
\bottomrule
\end{tabular}
\caption{Ablation studies on KITTI-10.}
\label{tab:ablation}
\end{table}
\begin{table}
\small
\centering
\begin{tabular}{cccc}
\toprule
Perturbation & FDE$\downarrow$ & \ade{1} $\downarrow$ & \hitrate{1} $\uparrow$\\
\midrule
0.0 m & 0.61 & 0.25 & 0.97 \\
\midrule
1.0 m & 0.62$\pm$0.01 & 0.27$\pm$0.01 & 0.95$\pm$0.01 \\
2.0 m & 0.77$\pm$0.03 & 0.34$\pm$0.02 & 0.92$\pm$0.01 \\
3.0 m & 1.05$\pm$0.01 & 0.45$\pm$0.03 & 0.85$\pm$0.02 \\
\bottomrule
\end{tabular}
\caption{Robustness of the model under different lateral perturbations to the topometric map.}
\label{tab:perturbation}
\end{table}
\begin{table*}
\small
\centering
\begin{tabular}{cllcccccc}
\toprule
Dataset &Method & FDE$\downarrow$ & \ade{1} $\downarrow$ & \ade{2} $\downarrow$ & \ade{3} $\downarrow$ & \hitrate{1} $\uparrow$ & \hitrate{2} $\uparrow$ & \hitrate{3} $\uparrow$ \\
\midrule
\multirow{4}{*}{KITTI-08} & Const. vel. \& yaw & 2.40 & 1.04 & 1.04 & 1.04 & 0.71 & 0.71 & 0.71 \\
& CoverNet \cite{2019CoverNet} & 1.28 & 0.62 & 0.47 & {0.41} & 0.80 & 0.84 & 0.87 \\
& MTP \cite{2018Multimodal} & 1.06 & 0.53 & 0.45 & 0.42 & {0.85} & \textbf{0.88} & \textbf{0.89} \\
& MultiPath \cite{2019MultiPath} & {1.00} & {0.51} & {0.44} & {0.41} & {0.85} & 0.86 & \textbf{0.89} \\
& Ours & \textbf{0.81} & \textbf{0.39} & \textbf{0.39} & \textbf{0.39} & \textbf{0.87} & 0.87 & 0.87 \\
\midrule
\multirow{4}{*}{KITTI-09} & Const. vel. \& yaw & 2.66 & 1.00 & 1.00 & 1.00 & 0.55 & 0.55 & 0.55 \\
& CoverNet \cite{2019CoverNet} & 1.42 & 0.56 & 0.38 & 0.33 & 0.71 & 0.86 & 0.91 \\
& MTP \cite{2018Multimodal} & 0.95 & 0.39 & 0.29 & 0.25 & 0.87 & \textbf{0.94} & \textbf{0.96} \\
& MultiPath \cite{2019MultiPath} & {0.85} & {0.35} & \textbf{0.27} & \textbf{0.23} & {0.90} & \textbf{0.94} & \textbf{0.96} \\
& Ours & \textbf{0.71} & \textbf{0.29} & 0.29 & 0.29 & \textbf{0.92} & 0.92 & 0.92 \\
\midrule
\multirow{4}{*}{KITTI-10} & Const. vel. \& yaw & 2.11 & 0.90 & 0.90 & 0.90 & 0.64 & 0.64 & 0.64 \\
& CoverNet \cite{2019CoverNet} & 1.36 & 0.53 & 0.37 & 0.30 & 0.78 & 0.89 & 0.94 \\
& MTP \cite{2018Multimodal} & 0.73 & 0.31 & 0.26 & \textbf{0.23} & 0.93 & \textbf{0.97} & \textbf{0.97} \\
& MultiPath \cite{2019MultiPath} & {0.64} & {0.30} & \textbf{0.24} & \textbf{0.23} & {0.95} & \textbf{0.97} & \textbf{0.97} \\
& Ours & \textbf{0.61} & \textbf{0.25} & 0.25 & 0.25 & \textbf{0.97} & \textbf{0.97} & \textbf{0.97} \\
\midrule
\multirow{4}{*}{Rural} & Const. vel. \& yaw & 2.90 & 1.37 & 1.37 & 1.37 & 0.60 & 0.60 & 0.60 \\
& CoverNet \cite{2019CoverNet} & 2.69 & 1.37 & 0.96 & 0.78 & 0.65 & 0.75 & 0.79 \\
& MTP \cite{2018Multimodal} & 1.61 & {0.84} & {0.71} & \textbf{0.66} & {0.76} & {0.82} & {0.82} \\
& MultiPath \cite{2019MultiPath} & {1.59} & 0.90 & 0.75 & 0.71 & 0.75 & 0.81 & {0.82} \\
& Ours & \textbf{1.19} & \textbf{0.69} & \textbf{0.69} & 0.69 & \textbf{0.85} & \textbf{0.85} & \textbf{0.85} \\
\bottomrule
\end{tabular}
\caption{Results on KITTI and our collected rural scene dataset. Smaller FDE and minADE are better; larger HitRate is better.}
\label{tab:results}
\end{table*}
We implemented several state-of-the-art multimodal baselines using the same input and backbone architectures, including CoverNet \cite{2019CoverNet}, MTP \cite{2018Multimodal} and MultiPath \cite{2019MultiPath}. As multimodal approaches predict multiple trajectories, we use a fixed number of $3$ modes in all our experiments. The main results are summarized in Table~\ref{tab:results}. The model denoted by {\em Const. vel. \& yaw} is a classic physics-based model: we use the vehicle's positions observed in the past trajectory as measurements to estimate the vehicle's velocity and yaw angle. During the inference stage, we assume the velocity and yaw remain unchanged.
From the overall results, we can see that physics-based single-modality models are clearly not suitable for long-term predictions. In general, the models with a regression-based header (MTP and MultiPath) significantly outperform the one with a classification-based header, {{\em i.e.}} the CoverNet baseline.
All these models obtain higher accuracies in the urban environment than in the rural environment. This is reasonable, because urban scenes are more structured, {{\em e.g.}} road boundaries are clear and roads are mostly flat. We can also see from the KITTI results that the simpler the scenario, the higher the accuracy: KITTI-08 has the most complex trajectories and the worst accuracy, while KITTI-10 is at the opposite end. Lastly, the model with anchors generally outperforms the one without anchors, {{\em e.g.}} MultiPath vs MTP, especially for urban scenes. Compared to the baseline models, our transformer-based model consistently obtains higher accuracies.
\begin{figure*}
\centering
\begin{minipage}{0.9\textwidth}
\includegraphics[width=\textwidth]{images/vis.png}
\caption{Visualization of the predicted trajectories. The top row shows the results of the model without the transformer. The bottom row shows the results of the model with the transformer. Blue: local route. Red: ground truth trajectory. Green: predicted trajectory.}
\label{fig:results}
\end{minipage}
\end{figure*}
\subsection{Ablation study}
In this section, we first conduct ablation studies to analyze the different components of our proposed model. The baseline model without the transformer is denoted by {\em Heatmap only} in Table~\ref{tab:ablation}. We designed several variants of our transformer-based model. {\em Transformer0} is the model without road segmentation and without the waypoint positional encoder. {\em Transformer1} is the model without road segmentation. {\em Transformer2} is the model without the waypoint positional encoder. {\em Transformer3} is the model without the transformer decoder. The {\em Heatmap only} model performs worst due to its simple architecture and its ignoring of the dependencies between waypoints. Removing the positional encoder (Transformer2) causes a $0.05$ FDE drop, removing the road segmentation (Transformer1) causes a $0.11$ FDE drop, and removing both (Transformer0) causes a $0.18$ FDE drop. Removing the transformer decoder causes the least performance drop.
We also show qualitative results in Fig.~\ref{fig:results}. The first row shows the prediction results from {\em Heatmap only} and the second row shows our transformer based model. The transformer based model predicts more accurate and smoother trajectories than the non-transformer one, showing the effectiveness of the proposed waypoint transformer architecture.
Next, we analyze the robustness of the model to noise in the topometric map. We add random lateral perturbations to the topometric map in the testing set and run testing with the perturbed local route. Table~\ref{tab:perturbation} shows the accuracies of the model under different magnitudes of lateral perturbation. Each random experiment is repeated $3$ times, and we report the mean and deviation. We have tried repeating $3$, $5$ and $10$ times, and the statistics do not show much difference. Note that the other experiments in this work use fixed topometric maps, so no standard deviations are reported for them. For a lateral perturbation of $1.0$ meter, the model keeps almost the same accuracies, with deviations around $0.01$. When increasing the lateral perturbation to $3.0$ meters, FDE degrades by around $0.4$ meters, which is still far less than the magnitude of the perturbation, {{\em e.g.}} $0.4$ vs $3.0$. It is worth noting that the magnitude of the random perturbation during training is $0.25$ meter. From this study, we can see that the proposed model is robust to noise in the topometric map.
\subsection{Autonomous driving without HD map}
In \cite{2017Global}, OSM is used for global outer-urban navigation. The performance of this method depends on the quality and correctness of the terrain classification. In \cite{mit_nav_rural:2018}, a map-less driving framework is proposed which combines a global sparse topological map with a sensor-based perception system for local navigation. A follow-up work \cite{maplite} utilizes a topological map and uses road segmentation to register the topometric map in the vehicle frame, thus enabling local navigation. Other map-less autonomous driving techniques take end-to-end learning approaches \cite{conditional_imitation:2019, mit_variational:2019, learn_drive:2019}. However, these neural network-based driving approaches suffer from compounding errors, lack explainability and require a large amount of training data to generalize \cite{2019End}.
\subsection{Deep learning-based trajectory prediction}
The success of deep learning in many real-life applications has prompted research on trajectory prediction. In \cite{lstm_traj}, the Long Short-Term Memory (LSTM) was successfully applied to predict vehicle locations using past trajectory data. Recently, multimodal approaches \cite{2019MultiPath, 2018Multimodal,2019CoverNet} have become popular. However, multimodal regression methods can easily suffer from {\em mode collapse} to a single mode. In \cite{2019CoverNet}, trajectories are predicted by classification over a pre-generated trajectory set. In \cite{2019MultiPath}, the issue is addressed by using a fixed set of anchor trajectories. In this work, we propose a novel transformer-based trajectory prediction model which obtains higher accuracies than state-of-the-art multimodal methods. Most existing trajectory prediction papers focus on urban driving and rely on rich context information from HD maps. In contrast, our method generalizes to both urban and rural environments and does not require HD maps.
\section{Introduction}
A Markov chain is a stochastic process that takes advantage of past and current information to predict future states \cite{CKL1979A, 2021Stationary, WJ2009A}.
The first-order Markov chain only uses current information, and one of its representative applications is Google's PageRank \cite{Gleich2105PBW,langville2011google,page1999pagerank,Wu2012A}.
However, the first-order Markov chain does not consider historical information, which is often significant in practice.
This limitation can seriously affect the accuracy of predictions and thereby reduce the credibility of the conclusions.
The higher-order Markov chain is a generalization of the first-order Markov chain \cite{Ching2004,Ching2006,Ching2008,Raftery1985A}. It exploits the states at several consecutive moments to predict the future state.
The advantage of the higher-order Markov chain lies in its long memory and its ability to incorporate rich historical information, allowing us to make reasonable judgments when analyzing a variety of stochastic processes over time in multi-dimensional spaces \cite{Wen2020Multilinear}.
Corresponding to the PageRank model, the higher-order Markov chain also has an important application named higher-order PageRank \cite{Gleich2014Multilinear}.
However, one has to store a {\it dense} higher-order PageRank tensor in the higher-order PageRank problem, which is prohibitive for large-scale data sets.
In order to partially overcome this difficulty, Gleich, Lim, and Yu considered multilinear PageRank instead \cite{Gleich2014Multilinear,Li2014A}, which is an alternative to higher-order PageRank.
The multilinear PageRank problem has received great attention in recent years; see \cite{Cipolla2019Extrapolation,Gleich2014Multilinear,Pei2020A,Guo2018A,HW,Li2017The, Wen2020Multilinear,Li2014A,Dongdong2019Relaxation,Meini2018Perron} and the references therein.
For instance, Gleich {\it et al.} proposed five algorithms for the multilinear PageRank problem \cite{Gleich2014Multilinear}.
Cipolla {\it et al.} took advantage of extrapolation skills to accelerate the simplified topological $\varepsilon$-algorithm in its restarted form for multilinear PageRank \cite{Cipolla2019Extrapolation}.
Guo {\it et al.} proposed a modified Newton method and analyzed a residual-based error bound for the multilinear PageRank vector \cite{Pei2020A, Guo2018A}.
Meini and Poloni investigated Perron-based algorithms for multilinear PageRank \cite{Meini2018Perron}.
Liu {\it et al.} considered several relaxation algorithms for solving the tensor equation arising from the multilinear PageRank problem \cite{Dongdong2019Relaxation}.
Unfortunately, for a given damping factor $0<\alpha<1$, the existence and uniqueness of the multilinear PageRank vector is not theoretically guaranteed \cite{Gleich2014Multilinear}. Recently, some theoretical results have been established on the existence and uniqueness of the multilinear PageRank vector; see \cite{Li2017The, Wen2020Multilinear,Li2014A,Dongdong2019Relaxation} and the references therein.
However, the conditions of these results are often complicated, and some of them are difficult to use in practice.
As a comparison, the higher-order PageRank tensor always exists and is unique \cite{Gleich2014Multilinear}.
On the other hand, the multilinear PageRank vector can be a poor approximation to the higher-order PageRank tensor.
For instance, for a multilinear PageRank problem of order 3, the approximation obtained from multilinear PageRank is a rank-$1$ {\it symmetric} matrix, which is a poor approximation to a second-order PageRank tensor that is in general a {\it nonsymmetric} matrix.
Thus, it is interesting to solve the higher-order PageRank problem directly, provided that both computational overhead and storage requirements could be reduced significantly.
In \cite{WuYubao}, Wu {\it et al.} proposed to formulate second-order Markov chains by using edge-to-edge transition probabilities. The main idea is to construct the corresponding edge-to-edge transition matrix according to the sparsity structure and the dangling-node structure. As the number of edges is about $n^2$, where $n$ is the number of nodes, the computational cost will be very high for large-scale problems.
Recently, Ding {\it et al.} considered the large and sparse higher-order PageRank problem \cite{Ding2018Fast}.
The main idea of their work is to approximate large-scale solutions of sparse Markov chains by two components, a sparse component and a rank-one component, and a truncated power method was proposed.
In this method, one determines the solutions by formulating sparse and rank-one optimization problems and solving them via closed-form solutions.
However, as the computational model is a modification of the original higher-order PageRank model, the error between the approximation and the exact solution can be large. Moreover, the truncated power method may converge slowly for large-scale problems.
Therefore, it is desirable to seek new techniques to accelerate the truncated power method.
Instead of the multilinear PageRank problem, we revisit the higher-order PageRank problem and consider how to solve it efficiently in this work. The contributions of this work are as follows.
First, we accelerate the truncated power method for higher-order PageRank. In the improved version, there is no need to form and store the vectors arising from the dangling states, nor to store an auxiliary matrix of size $n$, where $n$ is the number of nodes.
Second, motivated by the fact that we are often concerned with the several webpages with {\it top} PageRank values,
we propose a truncated power method with {\it partial updating} to further reduce the overhead. In this strategy,
one only needs to update some {\it important} columns of the approximation in each iteration.
Third, the truncated power method applies to a modified higher-order PageRank model that is not mathematically equivalent to the original one.
In this work, we therefore also propose a sparse power method with partial updating for the original higher-order PageRank problem.
The organization of this paper is as follows.
In Section \ref{back}, we briefly review the higher-order PageRank problem and the multilinear PageRank problem.
In Section \ref{TPM}, we introduce the truncated power method due to Ding {\it et al.} for the higher-order PageRank problem.
To improve the truncated power method, we propose a variation on the truncated power method and a truncated power method with partial updating in Section \ref{rt}. The convergence of the proposed methods is established.
In Section \ref{irt}, we put forward a sparse power method with higher accuracy for the original higher-order PageRank problem, and take advantage of the idea of partial updating to accelerate this method. The convergence of the two methods is analysed.
Numerical experiments are performed on large and sparse real-world and synthetic data sets in Section \ref{ne}, to show the efficiency of the proposed strategy and the superiority of the proposed algorithms. Some concluding remarks are given in Section \ref{sec7}.
\section{Higher-order PageRank and multilinear PageRank}\label{back}
An order-$m$, $n$-dimensional tensor has $m$ indices that range from $1$ to $n$.
Without loss of generality, we assume that $m=3$ throughout this paper.
A transition tensor is a stochastic tensor that is nonnegative and whose entries sum to $1$ over the first index.
In this section, we briefly review the higher-order PageRank and the multilinear PageRank problems \cite{Gleich2014Multilinear}.
In \cite{Gleich2014Multilinear}, Gleich {\it et al.} introduced the following {\it higher-order PageRank problem} that is a generalization to the classical PageRank problem \cite{Gleich2105PBW,langville2011google,page1999pagerank,Wu2012A}.
\begin{definition}\cite[Definition 3.2]{Gleich2014Multilinear}
Let $\mathcal{P}$ be an order-$m$, $n$-dimensional transition tensor representing an order-$(m-1)$ Markov chain, $\alpha \in (0,1)$ be a probability, and $\mathbf{v} \in \mathbb{R}^{n}$ be a stochastic vector.
Then the higher-order PageRank tensor $\mathcal{X}$ is the order-$(m-1)$, $n$-dimensional tensor that solves the linear system
\begin{equation}\label{2.2}
\mathcal{X}(i, j, \ldots, \ell)=\alpha \sum_{k} \mathcal{P}(i, j, \ldots, \ell, k) \mathcal{X}(j, \ldots, \ell, k)+(1-\alpha) v_{i} \sum_{k} \mathcal{X}(j, \ldots, \ell, k).
\end{equation}
\end{definition}
That is, the tensor $\mathcal{X}$ is the stationary distribution of a higher-order Markov chain, and we have the following result.
\begin{theorem}\cite[Corollary 3.5]{Gleich2014Multilinear}\label{lem-2}
The higher-order PageRank stationary distribution tensor $\mathcal{X}$ always exists and is unique. Also, the standard PageRank iteration will result in a $1$-norm error of $2(1 - (1 - \alpha)^{m-1})^{k}$ after $(m-1)k$ iterations.
\end{theorem}
Specifically, as $m=3$, we have $\mathcal{X}=\boldsymbol{X} \in \mathbb{R}^{n \times n}$, and \eqref{2.2} reduces to
\begin{equation}\label{3.1}
\boldsymbol{X_{ij}} = \alpha \sum_{k} \mathcal{P}_{ijk} \boldsymbol{X_{jk}} + (1-\alpha) v_{i} \sum_{k} \boldsymbol{X_{jk}}.
\end{equation}
The above equation can be reformulated as \cite{Gleich2014Multilinear}
\begin{equation}\label{3.4}
vec(\boldsymbol{X}) = [\alpha \boldsymbol{P} + (1-\alpha)\boldsymbol{V}] vec(\boldsymbol{X}),
\end{equation}
where $\boldsymbol{P} \in \mathbb{R}^{n^{2} \times n^{2}}$ is a column stochastic matrix determined by $\mathcal{P}$, $\boldsymbol{I} \in \mathbb{R}^{n \times n}$ is the identity matrix, and
\begin{equation}
\boldsymbol{V} = \mathbf{e}^{\boldsymbol{T}} \otimes \boldsymbol{I} \otimes \mathbf{v} \in \mathbb{R}^{n^{2} \times n^{2}}.
\end{equation}
Here $\mathbf{e}$ is the vector of all ones, $vec(\cdot)$ is the ``vec operator" that stacks the columns
of a matrix into a long vector, and $\otimes$ is the Kronecker product.
Therefore,
\begin{equation}\label{3.2}
\boldsymbol{X}(:,j)=\alpha \mathbf{P}_{j} \boldsymbol{X}(j,:)^{\boldsymbol{T}}+(1-\alpha) \widetilde{\mathbf{V}}\boldsymbol{X}(j,:)^{\boldsymbol{T}},\quad j=1,2,\ldots,n,
\end{equation}
where $\widetilde{\mathbf{V}} = \mathbf{v} \mathbf{e}^{\boldsymbol{T}} \in \mathbb{R}^{n \times n}$, and
\begin{equation}\label{eq25}
\mathbf{P}_{j}=\left[\begin{array}{cccc}
\mathcal{P}_{1 j 1} & \mathcal{P}_{1 j 2} & \cdots & \mathcal{P}_{1 j n} \\
\mathcal{P}_{2 j 1} & \mathcal{P}_{2 j 2} & \cdots & \mathcal{P}_{2 j n} \\
\vdots & \vdots & \ddots & \vdots \\
\mathcal{P}_{n j 1} & \mathcal{P}_{n j 2} & \cdots & \mathcal{P}_{n j n}
\end{array}\right] \in \mathbb{R}^{n \times n},\quad j=1,2,\ldots,n,
\end{equation}
are column stochastic matrices.
In practice, however, we can only get the {\it sparse} matrices $\mathbf{Q}_{j}$ rather than the dense matrices $\mathbf{P}_{j}$ \cite{Gleich2014Multilinear}
\begin{equation*}
\mathbf{Q}_{j}=\left[\begin{array}{cccc}
\mathcal{Q}_{1 j 1} & \mathcal{Q}_{1 j 2} & \cdots & \mathcal{Q}_{1 j n} \\
\mathcal{Q}_{2 j 1} & \mathcal{Q}_{2 j 2} & \cdots & \mathcal{Q}_{2 j n} \\
\vdots & \vdots & \ddots & \vdots \\
\mathcal{Q}_{n j 1} & \mathcal{Q}_{n j 2} & \cdots & \mathcal{Q}_{n j n}
\end{array}\right] \in \mathbb{R}^{n \times n},\quad j=1,2,\ldots,n.
\end{equation*}
The matrices $\{\mathbf{Q}_{j}\}_{j=1}^n$ model the original sparse data and are often not column stochastic. Let $\mathbf{d}_{j} \in \mathbb{R}^{n}$ be the vectors containing the entries arising from the dangling states of $\mathbf{Q}_{j}$\footnote{When the elements $\mathcal{Q}_{ijk}$ are equal to zero for all $i=1,2,\ldots,n$, the state $(j,k)$ is called a {\it dangling state}, which means that there is no connection from the state $(j,k)$ \cite{Gleich2014Multilinear}.}.
Similar to the classical PageRank model \cite{langville2011google,page1999pagerank}, we can make dangling corrections to $\mathbf{Q}_{j}$, i.e.,
$$
\mathbf{P}_{j} = \mathbf{Q}_{j} + \frac{1}{n} \mathbf{e} \mathbf{d}_{j}^{\boldsymbol{T}},\quad j=1,2,\ldots,n,
$$
which are stochastic matrices.
It is challenging to compute the stationary probability distribution of large and sparse higher-order Markov chains.
Mathematically, as the higher-order PageRank tensor is the stationary distribution of a higher-order Markov chain, the power method is a natural choice for its simplicity and efficiency \cite{WJ2009A}.
According to \eqref{3.2}, given a nonnegative matrix $\mathbf{A} \in \mathbb{R}_{+}^{n \times n}$, we define the operator $\mathscr{W}:~\mathbb{R}^{n \times n} \mapsto \mathbb{R}^{n \times n}$ as follows
\begin{eqnarray}\label{2.6}
\mathscr{W}(\mathbf{A})(:,j)&=&\alpha (\mathbf{P}_{j}\mathbf{A}(j,:)^{\boldsymbol{T}}) + (1-\alpha)\widetilde{\mathbf{V}}\mathbf{A}(j,:)^{\boldsymbol{T}}\nonumber\\
&=&{\small \alpha (\mathbf{Q}_{j} \mathbf{A}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\mathbf{d}_{j}^{\boldsymbol{T}} \mathbf{A}(j,:)^{\boldsymbol{T}}) \mathbf{e} + (1-\alpha)\|\mathbf{A}(j,:)\|_{1}\mathbf{v}}.
\end{eqnarray}
Given an initial guess $\boldsymbol{X}^{(0)} \in \mathbb{R}_{+}^{n \times n}$ with unit $l_{1}$-norm, i.e., $\|\boldsymbol{X}^{(0)}\|_{l_{1}} = 1$, the power method resorts to the following iterations
\begin{equation*}
\boldsymbol{X}^{(q+1)} = \mathscr{W}(\boldsymbol{X}^{(q)}),\quad q=0,1,\ldots,
\end{equation*}
and the main overhead is to update the columns of the approximation in the following way:
\begin{equation}\label{4.3}
\boldsymbol{X}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \boldsymbol{X}^{(q)}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\mathbf{d}_{j}^{\boldsymbol{T}} \boldsymbol{X}^{(q)}(j,:)^{\boldsymbol{T}}) \mathbf{e} + (1-\alpha)\|\boldsymbol{X}^{(q)}(j,:)\|_{1}\mathbf{v},
\end{equation}
for $j=1,2,\ldots,n$. By Theorem \ref{lem-2}, the solution of \eqref{3.4} exists and is unique, and the power method converges as $0<\alpha<1$ \cite{WJ2009A}.
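For illustration, one sweep of the column update \eqref{4.3} can be sketched as follows, with the sparse matrices $\mathbf{Q}_{j}$ stored as SciPy CSR matrices in a list {\tt Q} and the dangling vectors in a list {\tt d}; names and storage choices are illustrative, and the dense storage of the iterate is precisely the bottleneck discussed next.
\begin{verbatim}
import numpy as np

def power_step(Q, d, X, v, alpha):
    """One power-method sweep; Q[j], d[j] are Q_j and d_j, X is X^(q)."""
    n = X.shape[0]
    X_new = np.empty_like(X)
    for j in range(n):
        x = X[j, :]                               # X^(q)(j, :)^T
        X_new[:, j] = (alpha * (Q[j] @ x)
                       + alpha / n * (d[j] @ x)   # dangling term times e
                       + (1 - alpha) * x.sum() * v)   # x >= 0: ||x||_1
    return X_new
\end{verbatim}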
However, it is required to store the {\it dense} higher-order PageRank tensor $\mathcal{X}$ in the higher-order PageRank problem, which is prohibitive, especially for large-scale data sets. In order to partially overcome this difficulty, Gleich {\it et al.} \cite{Gleich2014Multilinear} considered the following multilinear PageRank vector due to Li and Ng \cite{Li2014A}, which is an alternative to the higher-order PageRank tensor.
\begin{definition}\cite[Definition 4.1]{Gleich2014Multilinear}
Let $\mathcal{P}$ be an order-$m$, $n$-dimensional transition tensor representing an order-$(m-1)$ Markov chain, $\alpha \in (0,1)$ be a probability, and $\mathbf{v} \in \mathbb{R}^{n}$ be a stochastic vector.
Then the multilinear PageRank vector is a stochastic solution of the following system of polynomial equations
\begin{equation}\label{2.3}
\mathbf{x}=\alpha \mathcal{P} \mathbf{x}^{(m-1)} + (1-\alpha)\mathbf{v},
\end{equation}
or equivalently
\begin{equation}\label{eq29}
\mathbf{x} = \alpha \boldsymbol{R} (\underbrace{\mathbf{x} \otimes \cdot \cdot \cdot \otimes \mathbf{x}}_{m-1\ terms}) + (1-\alpha)\mathbf{v},
\end{equation}
where
\begin{equation}\label{eqn210}
\boldsymbol{R}=\left[\begin{array}{ccc|ccc|c|ccc}
\mathcal{P}_{111} & \cdots & \mathcal{P}_{1n1} & \mathcal{P}_{112} & \cdots & \mathcal{P}_{1n2} & \cdots & \mathcal{P}_{11n} & \cdots & \mathcal{P}_{1nn} \\
\mathcal{P}_{211} & \cdots & \mathcal{P}_{2n1} & \mathcal{P}_{212} & \cdots & \mathcal{P}_{2n2} & \cdots & \mathcal{P}_{21n} & \cdots & \mathcal{P}_{2nn} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\mathcal{P}_{n11} & \cdots & \mathcal{P}_{nn1} & \mathcal{P}_{n12} & \cdots & \mathcal{P}_{nn2} & \cdots & \mathcal{P}_{n1n} & \cdots & \mathcal{P}_{nnn}
\end{array}\right]
\end{equation}
is a column stochastic matrix of the flattened tensor $\mathcal{P}$ along the first index.
\end{definition}
The following result shows the existence and uniqueness of the multilinear PageRank vector.
\begin{theorem}\cite[Theorem 4.3]{Gleich2014Multilinear}\label{lem-2.5}
Let $\mathcal{P}$ be an order-$m$ stochastic tensor and $\mathbf{v}$ be a nonnegative vector. Then the multilinear PageRank equation
\begin{equation*}
\mathbf{x}=\alpha \mathcal{P} \mathbf{x}^{(m-1)}+(1-\alpha) \mathbf{v}
\end{equation*}
has a unique solution when $\alpha \in (0,\frac{1}{m-1})$.
\end{theorem}
From Theorem \ref{lem-2} and Theorem \ref{lem-2.5}, we see that the higher-order PageRank tensor always exists and is unique, while the multilinear PageRank vector may not. Recently, some theoretical results have been established on the existence and uniqueness of the multilinear PageRank vector; see \cite{Li2017The, Wen2020Multilinear,Li2014A,Dongdong2019Relaxation} and the references therein.
However, the conditions of these results are often complicated, and some of them are difficult to use in practice.
On the other hand, the multilinear PageRank vector can be a poor approximation to the higher-order PageRank tensor.
For instance, if $m=3$, the approximation ${\bf x}{\bf x}^{\boldsymbol{T}}$ obtained from multilinear PageRank is a rank-$1$ {\it symmetric} matrix, which is a poor approximation to the second-order PageRank tensor $\mathcal{X}$ that is in general a {\it nonsymmetric} matrix.
Thus, it is interesting to solve the higher-order PageRank problem \eqref{2.2} directly, provided that both the computational overhead and storage requirements can be reduced significantly. In this work, we consider how to solve the higher-order PageRank problem efficiently.
\section{The truncated power method for higher-order PageRank}\label{TPM}
In order to reduce the heavy storage requirements of the higher-order PageRank tensor, Ding {\it et al.} proposed a truncated power method for the higher-order PageRank problem \cite{Ding2018Fast}.
In this method, the following systems were considered instead of \eqref{3.2}:
\begin{equation}\label{3.3}
\boldsymbol{X}(:,j)=\alpha \mathbf{P}_{j} \boldsymbol{X}(j,:)^{\boldsymbol{T}}+(1-\alpha) \mathbf{G}(:,j),\quad j=1,2,\ldots,n,
\end{equation}
where $\mathbf{G} \in \mathbb{R}^{n \times n}$ is a nonnegative matrix with $\|\mathbf{G}\|_{l_{1}}=1$. Notice that the model \eqref{3.3} is
a modification of the original higher-order PageRank model \eqref{3.2}, and the two are not mathematically equivalent.
The key idea of the truncated power method is to approximate the stationary distribution matrix $\boldsymbol{X}$ by $\alpha (\boldsymbol{S} + \mathbf{e}\boldsymbol{u}^{\boldsymbol{T}}) + (1-\alpha) \mathbf{G}$, where $\boldsymbol{S}$ is expected to be a {\it sparse} matrix containing the significantly large entries of $\boldsymbol X$, and $\boldsymbol{u}$ is a vector consisting of the background values for the columns of $\boldsymbol{X}$.
Thus, the approximation obtained from the truncated power method is composed of a sparse matrix $\boldsymbol{S}$ and a vector $\boldsymbol{u}$ of dimension $n$, rather than an $n$-by-$n$ {\it dense} matrix.
Consequently, the storage requirements can be reduced significantly.
More precisely, let $\mathbf{A} \in \mathbb{R}_{+}^{n \times n}$ and $\mathbf{b} \in \mathbb{R}_{+}^{n}$ be a nonnegative matrix and a nonnegative vector, respectively. The following operators were introduced in \cite{Ding2018Fast}:
\begin{itemize}[leftmargin=3em]
\item $\mathscr{P}: \mathbb{R}^{n \times n} \mapsto \mathbb{R}^{n \times n}$, $\mathscr{P}(\mathbf{A})(:,j):= \mathbf{P}_{j}\mathbf{A}(j,:)^{\boldsymbol{T}}$,
\item $\mathscr{T}: \mathbb{R}_{+}^{n} \mapsto \mathbb{R}_{+}^{n}$, $\mathscr{T}(\mathbf{b}):= \mathbf{s}^{*} + \mathbf{e} \mu^{*}$,
\item $\tilde{\mathscr{T}}: \mathbb{R}_{+}^{n \times n} \mapsto \mathbb{R}_{+}^{n \times n}$, $\tilde{\mathscr{T}}(\mathbf{A}):= [\mathscr{T}(\mathbf{A}(:,1)),\mathscr{T}(\mathbf{A}(:,2)),\ldots,\mathscr{T}(\mathbf{A}(:,n))]$,
\item $\mathscr{Q}: \mathbb{R}_{+}^{n \times n} \mapsto \mathbb{R}_{+}^{n \times n}$, $\mathscr{Q}(\mathbf{A}):= \alpha \tilde{\mathscr{T}}(\mathscr{P}(\mathbf{A})) + (1-\alpha)\mathbf{G}$.
\end{itemize}
The operator $\mathscr{T}$ is associated with the thresholding method for solving the following subproblem \cite[Algorithm 1]{Ding2018Fast}
\begin{equation}\label{27}
\begin{array}{l}
(\mathbf{s}^{*}, \mu^{*}) = \mathop{\arg\min} \{\frac{1}{2}\|\mathbf{s}+\mathbf{e} \mu-\mathbf{b}\|_{2}^{2}+\beta\|\mathbf{s}\|_{1}\} \\
\ \ \ \ \ \ \ \ \ \ \ \ \ \text { s.t. } \mathbf{s} \geq 0, \mu \geq 0,\\
\end{array}
\end{equation}
where $\beta$ is a positive number and $\beta\|\mathbf{s}\|_{1}$ serves as a penalty term promoting sparsity. It was shown that the above optimization problem has a unique solution in closed form \cite[Theorem 3.3]{Ding2018Fast}. An algorithm for solving this problem is given as follows.
\begin{algorithm}[H]\label{Alg10}
\caption{The thresholding method for solving the subproblem \eqref{27} \cite{Ding2018Fast}}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{The non-negative vector $\mathbf{b}$ and the parameter $\beta$;}
\Output{The sparse vector $\mathbf{s}$ (the non-zero components) and the background value $\mu$;}
\BlankLine
Sort the components of $\mathbf{b}$ into $b_{i_{1}} \geq \cdots \geq b_{i_{r}} > \beta \geq b_{i_{r+1}}\geq\cdots\geq b_{i_{n}}$\;
Find $d \in \{0,1, \ldots, r\}$ such that $(n-d) b_{i_{d}}>\sum\limits_{j=d+1}^{n} b_{i_{j}}+ n \beta \geq(n-d) b_{i_{d+1}}$ ($b_{i_{0}}:=+\infty$)\;
$\mu=\frac{1}{n-d} \sum\limits_{j=d+1}^{n} b_{i_{j}}+\frac{d}{n-d} \cdot \beta$\;
$s_{i_{j}}=\left\{\begin{array}{ll}b_{i_{j}}-\beta-\mu, & j=1,2, \ldots, d, \\ 0, & j=d+1, d+2, \ldots, n ;\end{array}\right.$\;
{\bf return} $\mathbf{s}$, $\mu$\;
\end{algorithm}
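For reference, the closed-form solution of \eqref{27} can be realized by the NumPy sketch below, written with the corrected formulas $\mu=\frac{1}{n-d}\sum_{j=d+1}^{n} b_{i_{j}}+\frac{d}{n-d}\beta$ and $s_{i_{j}}=b_{i_{j}}-\beta-\mu$; the implementation details are illustrative.
\begin{verbatim}
import numpy as np

def thresholding(b, beta):
    """Closed-form solution of the sparse subproblem (27)."""
    n = b.size
    order = np.argsort(-b)                        # b_{i_1} >= ... >= b_{i_n}
    bs = b[order]
    suffix = np.r_[bs[::-1].cumsum()[::-1], 0.0]  # suffix[k] = bs[k:].sum()
    d = 0
    # Largest d with (n - d) b_{i_d} > sum_{j > d} b_{i_j} + n beta.
    while d < n - 1 and (n - d - 1) * bs[d] > suffix[d + 1] + n * beta:
        d += 1
    mu = (suffix[d] + d * beta) / (n - d)         # background value
    s = np.zeros(n)
    s[order[:d]] = bs[:d] - beta - mu             # sparse nonzero entries
    return s, mu
\end{verbatim}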
Consider the following sequence
\begin{equation}\label{eq3.3}
\boldsymbol{X}^{(q)} = \alpha (\boldsymbol{S}^{(q)} + \mathbf{e}(\boldsymbol{u}^{(q)})^{\boldsymbol{T}}) + (1-\alpha)\mathbf{G}, \ \ q = 0,1,\ldots,
\end{equation}
then the iterative scheme for the truncated power method can be rewritten as
$$
\boldsymbol{X}^{(q+1)} = \mathscr{Q}(\boldsymbol{X}^{(q)}), \quad q = 0,1,\ldots
$$
Notice that for all $q \in \mathbb{N}$, we have $\boldsymbol{S}^{(q)} \in \mathbb{R}^{n \times n}_{+}$, $\boldsymbol{u}^{(q)} \in \mathbb{R}^{n}_{+}$ and $\|\boldsymbol{S}^{(q)} + \mathbf{e}(\boldsymbol{u}^{(q)})^{\boldsymbol{T}}\|_{l_{1}} = 1$.
If we let $\boldsymbol{Y}^{(q+1)} = \mathscr{P}(\boldsymbol{X}^{(q)})$, then
$$
\boldsymbol{X}^{(q+1)} = \alpha \tilde{\mathscr{T}}(\boldsymbol{Y}^{(q+1)}) + (1-\alpha)\mathbf{G}, \quad q = 0,1,\ldots
$$
The truncated power method is described as follows; for more details, refer to \cite{Ding2018Fast}.
\begin{algorithm}[H]\label{Alg2}
\caption{A truncated power method for higher-order PageRank \cite{Ding2018Fast}}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{Q}_{j}$, $\mathbf{d}_{j}$, $j=1,2,\ldots, n$, $\boldsymbol{S}^{(0)}$, $\boldsymbol{u}^{(0)}$, $\mathbf{G}$, $\alpha$, $\beta$, $tol$;}
\Output{$\boldsymbol{S}$, $\boldsymbol{u}$;}
\BlankLine
{\bf Let $q=0$ and $res=1$}\;
\While {$res > tol$}
{
$res = 0$\;
\For{$j=1,2,\ldots,n$}
{
$\boldsymbol{Y}^{(q+1)}(:,j) = \mathbf{P}_{j} (\alpha(\boldsymbol{S}^{(q)}(j,:)^{\boldsymbol{T}} + \boldsymbol{u}^{(q)}) +(1-\alpha)\mathbf{G}(j,:)^{\boldsymbol{T}})$\footnote{We mention that there is a typo in Algorithm 2 of \cite{Ding2018Fast}, where $\mathbf{G}(:,j)$ should be replaced with $\mathbf{G}(j,:)^{\boldsymbol{T}}$; see \eqref{eq3.3}.};
$[\boldsymbol{S}^{(q+1)}(:,j), \boldsymbol{u}^{(q+1)}(j)] = thresholding(\boldsymbol{Y}^{(q+1)}(:,j),\beta)$;\quad\%~{\tt sparsing}\
$res = res + \|(\boldsymbol{S}^{(q+1)}(:,j) - \boldsymbol{S}^{(q)}(:,j)) + (\boldsymbol{u}^{(q+1)}(j) - \boldsymbol{u}^{(q)}(j))\mathbf{e}\|_{1}$\;
}
$q = q + 1$\;
}
$\boldsymbol{S} = \boldsymbol{S}^{(q)}$, $\boldsymbol{u} = \boldsymbol{u}^{(q)}$\;
\end{algorithm}
\section{Improving the truncated power method for high-order PageRank problem}\label{rt}
Denote by $\mathbf{z} = \alpha(\boldsymbol{S}^{(q)}(j,:)^{\boldsymbol{T}} + \boldsymbol{u}^{(q)}) +(1-\alpha)\mathbf{G}(j,:)^{\boldsymbol{T}}$.
In Step 5 of Algorithm \ref{Alg2}, the matrix-vector multiplication $\mathbf{P}_{j} \mathbf{z}$ can be computed by
$\mathbf{Q}_{j}\mathbf{z}+ \mathbf{N}_{j} \mathbf{z}$ \cite{Ding2018Fast},
where $\mathbf{N}_{j}$ contains the entries arising from the set of dangling states. More precisely, letting $\mathbf{d}_{j}$, $j=1,2,\ldots,n$, be the vectors from the dangling states, we have that
\begin{equation}\label{eqn4.1}
\boldsymbol{Y}^{(q+1)}(:,j) = \mathbf{Q}_{j}\mathbf{z} + \frac{1}{n}(\mathbf{d}_{j}^{\boldsymbol{T}} \mathbf{z}) \mathbf{e}.
\end{equation}
Therefore, one has to form and store the vectors $\{\mathbf{d}_{j}\}_{j=1}^{n}$ explicitly,
which is prohibitive as $n$ is large. In this section, we propose two new strategies to improve the performance of the truncated power method.
\subsection{A variation on the truncated power method}
Recall that \eqref{3.3} and \eqref{3.2} are not mathematically equivalent, and the choice of $\mathbf{G}$ is very important.
Indeed, different choices of $\mathbf{G}$ give different approximate solutions.
In \cite{Ding2018Fast}, the matrix $\mathbf{G}$ is set to be a matrix that shares the same sparsity as $\mathcal{P}$.
Obviously, this option is non-optimal, and one has to store an auxiliary matrix in practice.
Thus, it is of interest to use a more appropriate matrix $\mathbf{G}$.
In this section, we propose to use
\begin{equation}\label{eqn42}
\mathbf{G} = \frac{1}{n}\mathbf{v}\mathbf{e}^{\boldsymbol{T}}.
\end{equation}
An advantage is that there is no longer any need to store an auxiliary matrix in the truncated power method.
Now we consider how to compute $\boldsymbol{Y}^{(q+1)}(:,j)$ without forming and storing $\mathbf{d}_{j}$ explicitly.
For any matrix $\mathbf{A} \in \mathbb{R}_{+}^{n \times n}$, we define the operators
\begin{itemize}[leftmargin=3em]
\item $\mathscr{G}: \mathbb{R}^{n \times n} \mapsto \mathbb{R}^{n \times n}$, $\mathscr{G}(\mathbf{A})(:,j):= \alpha \mathscr{P}(\mathbf{A})(:,j) + (1-\alpha)\mathbf{G}(:,j)$,
\item ${\mathscr{H}}: \mathbb{R}_{+}^{n \times n} \mapsto \mathbb{R}_{+}^{n \times n}$, ${\mathscr{H}}(\mathbf{A}):= \tilde{\mathscr{T}}(\mathscr{G}(\mathbf{A}))$,
\end{itemize}
and consider the sequence
\begin{equation*}
\tilde{\boldsymbol{X}}^{(q)} = \tilde{\boldsymbol{S}}^{(q)} + \mathbf{e} (\tilde{\boldsymbol{u}}^{(q)})^{\boldsymbol{T}}, \ \ q = 0,1,\ldots
\end{equation*}
Then the iterative scheme of the improved truncated power method can be written as
\begin{equation*}
\tilde{\boldsymbol{X}}^{(q+1)} = \mathscr{H}(\tilde{\boldsymbol{X}}^{(q)}), \ \ q = 0,1,\ldots
\end{equation*}
Let $\tilde{\boldsymbol{Y}}^{(q+1)} = \mathscr{G}(\tilde{\boldsymbol{X}}^{(q)})$; then from \eqref{3.3} and $\mathbf{P}_{j} = \mathbf{Q}_{j} + \frac{1}{n} \mathbf{e} \mathbf{d}_{j}^{\boldsymbol{T}}$, we obtain
\begin{equation}\label{5.3}
\tilde{\boldsymbol{Y}}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\mathbf{d}_{j}^{\boldsymbol{T}}\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}})\mathbf{e} + (1-\alpha)\mathbf{G}(:,j).
\end{equation}
Multiplying both sides of \eqref{5.3} by $\mathbf{e}^{\boldsymbol{T}}$ yields
\begin{equation*}
\|\tilde{\boldsymbol{Y}}^{(q+1)}(:,j)\|_{1} = \alpha\|\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} + \alpha (\mathbf{d}_{j}^{\boldsymbol{T}}\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) + (1-\alpha) \|\mathbf{G}(:,j)\|_{1},
\end{equation*}
and thus
\begin{equation}\label{5.4}
\mathbf{d}_{j}^{\boldsymbol{T}}\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} = \frac{\|\tilde{\boldsymbol{Y}}^{(q+1)}(:,j)\|_{1} - \alpha \|\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} - (1-\alpha)\|\mathbf{G}(:,j)\|_{1}}{\alpha}.
\end{equation}
Moreover, we have from $\tilde{\boldsymbol{Y}}^{(q+1)}(:,j)= \alpha \mathbf{P}_{j}\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j)$ that
\begin{equation}\label{5.5}
\|\tilde{\boldsymbol{Y}}^{(q+1)}(:,j)\|_{1} = \alpha \|\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} + (1-\alpha)\|\mathbf{G}(:,j)\|_{1}.
\end{equation}
Substituting \eqref{5.5} into \eqref{5.4} gives
\begin{equation}\label{5.6}
\mathbf{d}_{j}^{\boldsymbol{T}}\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} = \|\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} - \|\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1}.
\end{equation}
So we have from \eqref{5.3} that
\begin{align}\label{eqn4.7}
& \tilde{\boldsymbol{Y}}^{(q+1)}(:,j) \nonumber \\
& = \alpha (\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\|\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} - \|\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1})\mathbf{e} + (1-\alpha)\mathbf{G}(:,j) \nonumber \\
& = \alpha (\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\|\tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} - \|\mathbf{Q}_{j} \tilde{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1})\mathbf{e} + \frac{1-\alpha}{n}\mathbf{v}.
\end{align}
In summary, we obtain the following algorithm. The differences from Algorithm \ref{Alg2} are that we replace \eqref{eqn4.1} with \eqref{eqn4.7}, so there is no need to form and store the vectors $\mathbf{d}_{j}$, $j=1,2,\ldots,n$; moreover, it is unnecessary to store an additional matrix $\mathbf{G}$; see Step 6 in Algorithm \ref{Alg3}. A sketch of this column update is given after the algorithm.
\begin{algorithm}[H]
\caption{A variation on the truncated power method for higher-order PageRank}\label{Alg3}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{Q}_{j}$, $j=1,2,\ldots, n$, $\tilde{\boldsymbol{S}}^{(0)}$, $\tilde{\boldsymbol{u}}^{(0)}$, $\mathbf{v}$, $\alpha$, $\beta$, $tol$;}
\Output{$\tilde{\boldsymbol{S}}$, $\tilde{\boldsymbol{u}}$;}
\BlankLine
{\bf Let $q=0$ and $res=1$}\;
\While {$res > tol$}
{
$res = 0$\;
\For{$j=1,2,\ldots,n$}
{
$\mathbf{y} = \tilde{\boldsymbol{S}}^{(q)}(j,:)^{\boldsymbol{T}} + \tilde{\boldsymbol{u}}^{(q)}$\;
$\tilde{\boldsymbol{Y}}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \mathbf{y}) + \frac{\alpha}{n}(\|\mathbf{y}\|_{1} - \|\mathbf{Q}_{j} \mathbf{y}\|_{1})\mathbf{e} + \frac{1-\alpha}{n}\mathbf{v}$\;
$[\tilde{\boldsymbol{S}}^{(q+1)}(:,j), \tilde{\boldsymbol{u}}^{(q+1)}(j)] = thresholding(\tilde{\boldsymbol{Y}}^{(q+1)}(:,j),\beta)$\quad\%~{\tt sparsing}\
$res = res + \|(\tilde{\boldsymbol{S}}^{(q+1)}(:,j) - \tilde{\boldsymbol{S}}^{(q)}(:,j)) + (\tilde{\boldsymbol{u}}^{(q+1)}(j) - \tilde{\boldsymbol{u}}^{(q)}(j))\mathbf{e}\|_{1}$\;
}
$q = q + 1$\;
}
$\tilde{\boldsymbol{S}} = \tilde{\boldsymbol{S}}^{(q)}$, $\tilde{\boldsymbol{u}} = \tilde{\boldsymbol{u}}^{(q)}$\;
\end{algorithm}
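For clarity, the column update in Step 6 of Algorithm \ref{Alg3} can be sketched as follows; the dangling contribution $\mathbf{d}_{j}^{\boldsymbol{T}}\mathbf{y}$ is recovered via \eqref{5.6} as $\|\mathbf{y}\|_{1} - \|\mathbf{Q}_{j}\mathbf{y}\|_{1}$, so the vectors $\mathbf{d}_{j}$ are never formed. The names are illustrative, with {\tt Qj} a SciPy CSR matrix holding $\mathbf{Q}_{j}$.
\begin{verbatim}
import numpy as np

def column_update(Qj, y, v, alpha):
    """Step 6 of Algorithm 3 for column j; y = S^(q)(j,:)^T + u^(q)."""
    n = y.size
    Qy = Qj @ y
    dangling = y.sum() - Qy.sum()   # = d_j^T y by (5.6), since y >= 0
    # The scalar dangling term is broadcast, i.e., multiplied by e.
    return alpha * Qy + (alpha / n) * dangling + ((1 - alpha) / n) * v
\end{verbatim}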
Next, we consider the convergence of Algorithm \ref{Alg3}.
We first need the following lemma.
\begin{lemma}\cite[Theorem 3.4]{Ding2018Fast}\label{lem-5.1}
The operator $\mathscr{T} : \mathbb{R}_{+}^{n} \longmapsto \mathbb{R}_{+}^{n}$ is non-expansive, i.e., $\|\mathscr{T}(\mathbf{a}) - \mathscr{T}({\mathbf{b}})\|_{1} \leq \|\mathbf{a} - \mathbf{b}\|_{1}$.
\end{lemma}
The following result was mentioned on p.~77 of \cite{Ding2018Fast} without proof.
We give a proof here.
\begin{lemma}\label{lem-5.2}
The operator $\tilde{\mathscr{T}} : \mathbb{R}_{+}^{n \times n} \longmapsto \mathbb{R}_{+}^{n \times n}$ is non-expansive, i.e. $\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \|\mathbf{A} - \mathbf{B}\|_{l_{1}}$.
\end{lemma}
\begin{proof}
We have that
\begin{equation}\label{5.2.1}
\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} = \sum_{j=1}^{n}\|\mathscr{T}(\mathbf{A}(:,j)) - \mathscr{T}(\mathbf{B}(:,j))\|_{1}.
\end{equation}
According to Lemma \ref{lem-5.1},
\begin{equation}\label{5.2.2}
\|\mathscr{T}(\mathbf{A}(:,j)) - \mathscr{T}(\mathbf{B}(:,j))\|_{1} \leq \|\mathbf{A}(:,j) - \mathbf{B}(:,j)\|_{1}.
\end{equation}
Combining \eqref{5.2.1} and \eqref{5.2.2}, we get
\begin{equation*}
\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \sum_{j=1}^{n} \|\mathbf{A}(:,j) - \mathbf{B}(:,j)\|_{1}
= \|\mathbf{A} - \mathbf{B}\|_{l_{1}},
\end{equation*}
which completes the proof.
\end{proof}
The following theorem shows the convergence of Algorithm \ref{Alg3}.
\begin{theorem}\label{the-5.6}
When $\alpha \in (0,1)$, the iteration $\tilde{\boldsymbol{X}}^{(q+1)} = \mathscr{H}(\tilde{\boldsymbol{X}}^{(q)})$ converges to the unique fixed point in $\mathbb{R}_{+}^{n \times n}$ for an arbitrary initial point $\tilde{\boldsymbol{X}}^{(0)} \in \mathbb{R}_{+}^{n \times n}$.
\end{theorem}
\begin{proof}
Recall that
$\mathscr{H}(\mathbf{A})= \tilde{\mathscr{T}}(\mathscr{G}(\mathbf{A}))$, so we have from Lemma \ref{lem-5.2} that
\begin{equation*}
\|\mathscr{H}(\mathbf{A}) - \mathscr{H}(\mathbf{B})\|_{l_{1}} = \|\tilde{\mathscr{T}}(\mathscr{G}(\mathbf{A})) - \tilde{\mathscr{T}}(\mathscr{G}(\mathbf{B}))\|_{l_{1}} \leq \|\mathscr{G}(\mathbf{A}) - \mathscr{G}(\mathbf{B})\|_{l_{1}}.
\end{equation*}
Moreover,
\begin{equation*}
\|\mathscr{G}(\mathbf{A}) - \mathscr{G}(\mathbf{B})\|_{l_{1}} = \alpha \|\mathscr{P}(\mathbf{A}) - \mathscr{P}(\mathbf{B})\|_{l_{1}} \leq \alpha \|\mathbf{A} - \mathbf{B}\|_{l_{1}},
\end{equation*}
where we used the fact that the $l_{1}$-norm of the linear operator $\mathscr{P}$ is $1$ \cite{Ding2018Fast}.
Thus,
$$
\|\mathscr{H}(\mathbf{A}) - \mathscr{H}(\mathbf{B})\|_{l_{1}} \leq \alpha \|\mathbf{A} - \mathbf{B}\|_{l_{1}}<\|\mathbf{A} - \mathbf{B}\|_{l_{1}},
$$
and the conclusion follows from the Banach fixed point theorem \cite{CiarletLNFA}.
\end{proof}
\subsection{A truncated power method with partial updating}
In each iteration of Algorithm \ref{Alg3}, we have to update all the columns $\tilde{\boldsymbol{S}}^{(q+1)}(:,j)$, $j=1,2,\ldots,n$,
and the workload is prohibitive as $n$ is large.
In practice, however, we are often only concerned with the several webpages with {\it top} PageRank values.
To reduce the overhead, we propose to {\it partially update} the columns of $\tilde{\boldsymbol{S}}^{(q+1)}$.
In other words, we only update some ``important" columns of the approximation in each iteration.
Let us discuss this in more detail.
Denote by
\begin{equation}\label{eqn410}
PV^{(q)}(j) = \sum_{i=1}^{n}\boldsymbol{S}^{(q)}_{ij} + \boldsymbol{u}^{(q)}_{j},\quad q = 0,1,\ldots
\end{equation}
the {\it PageRank value} of the $j$-th webpage in the $q$-th iteration \cite{Ding2018Fast}. In the partial updating strategy, we first run $\tilde{\ell}$ (say, 1 or 2) iterations of Algorithm \ref{Alg3}.
Suppose that we are interested in the top $\varsigma$ PageRank values, and
let $\tau$ be the percentage of columns retained from the previous iteration.
Let $\Omega^{(q)}$ be the index set of the columns to be updated during the iterations of the new algorithm.
Next we consider how to update this set efficiently.
Next we consider how to update this set efficiently.
\begin{enumerate}[leftmargin=3em]
\item During the first $\tilde{\ell}$ iterations, we have to update all the columns, so $\Omega^{(q)} = \{1, 2, \ldots, n\}$ for $q = 0, 1, \ldots, \tilde{\ell}-1$, where $\Omega^{(q)}$ represents the set of columns that need to be updated in the $(q+1)$-th iteration.
\item From the $(\tilde{\ell} + 1)$-th iteration onwards, we only need to update the columns with the top $card(\Omega^{(q)})$ PageRank values from the $q$-th iteration, where
\begin{equation}\label{eqn411}
card(\Omega^{(q)}) = \max\{\lfloor \tau \cdot card(\Omega^{(q-1)}) \rfloor, \varsigma\}.
\end{equation}
\end{enumerate}
Based on the above discussion, we obtain the following algorithm, which partially updates the columns of the approximate solution during the iterations; a sketch of the index-set update is given after the algorithm. If $\tau = 100\%$, then Algorithm \ref{Alg5} reduces to Algorithm \ref{Alg3}.
\begin{algorithm}[H]
\caption{A truncated power method with partial updating for higher-order PageRank }\label{Alg5}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{Q}_{j}$, $j=1,2,\ldots, n$, $\hat{\boldsymbol{S}}^{(0)}$, $\hat{\boldsymbol{u}}^{(0)}$, $\alpha$, $\beta$, $\varsigma$, $\tau$, $\tilde{\ell}$ and $tol$;}
\Output{$\hat{\boldsymbol{S}}$, $\hat{\boldsymbol{u}}$;}
\BlankLine
{\bf Let $q=0$ and $res=1$}\;
\While {$res > tol$}
{
$res = 0$\;
\eIf{ $q \leq \tilde{\ell}-1$ }
{$\Omega^{(q)}$ = \{1, 2, \ldots, $n$\}\;}
{{\bf Determine} $card(\Omega^{(q)})$ via \eqref{eqn411} and {\bf select the column indexes corresponding to the top $card(\Omega^{(q)})$ elements in $\{{PV^{(q)}(j)}\}_{ j \in \Omega^{(q-1)}}$ to form $\Omega^{(q)}$ }\quad\%~{\tt determine the index set}\;
}
\For{$j \in \Omega^{(q)}$}
{
$\mathbf{y} = \hat{\boldsymbol{S}}^{(q)}(j,:)^{\boldsymbol{T}} + \hat{\boldsymbol{u}}^{(q)}$\;
$\hat{\boldsymbol{Y}}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \mathbf{y}) + \frac{\alpha}{n}(\|\mathbf{y}\|_{1} - \|\mathbf{Q}_{j} \mathbf{y}\|_{1})\mathbf{e}
+ \frac{1-\alpha}{n}\mathbf{v}$\;
$[\hat{\boldsymbol{S}}^{(q+1)}(:,j), \hat{\boldsymbol{u}}^{(q+1)}(j)] = thresholding(\hat{\boldsymbol{Y}}^{(q+1)}(:,j),\beta)$\quad\%~{\tt sparsing}\
{\bf Compute $PV^{(q+1)}(j)$ by using \eqref{eqn410}}\;
$res = res + \|(\hat{\boldsymbol{S}}^{(q+1)}(:,j) - \hat{\boldsymbol{S}}^{(q)}(:,j)) + (\hat{\boldsymbol{u}}^{(q+1)}(j) - \hat{\boldsymbol{u}}^{(q)}(j))\mathbf{e}\|_{1}$\;
}
$q = q + 1$\;
}
$\hat{\boldsymbol{S}} = \hat{\boldsymbol{S}}^{(q)}$, $\hat{\boldsymbol{u}} = \hat{\boldsymbol{u}}^{(q)}$\;
\end{algorithm}
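The index-set update in Steps 4--7 of Algorithm \ref{Alg5} can be sketched as follows: after the first $\tilde{\ell}$ full sweeps, the number of updated columns shrinks by the factor $\tau$ via \eqref{eqn411}, but never below the $\varsigma$ columns of interest. The names in the sketch are illustrative.
\begin{verbatim}
import numpy as np

def next_index_set(omega_prev, pv, tau, varsigma):
    """omega_prev: indices updated in the previous sweep (1-D array);
    pv: their current PageRank values PV^(q)(j)."""
    card = max(int(tau * len(omega_prev)), varsigma)  # cf. (eqn411)
    top = np.argsort(-pv)[:card]     # columns with top PageRank values
    return omega_prev[top]
\end{verbatim}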
The following theorem shows the convergence of Algorithm \ref{Alg5}.
\begin{theorem}\label{Thm4.4}
Consider the following iteration in Algorithm \ref{Alg5}:
\begin{equation}\label{Pu1}
\hat{\boldsymbol{X}}^{(q+1)}(:,j) =
\left\{
\begin{array}{lcl}
\hat{\boldsymbol{X}}^{(q)}(:,j) & & j \in \Omega^{(0)} \backslash \Omega^{(q)},\\
\mathscr{T}(\alpha \mathbf{P}_{j}\hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j)) & & j \in \Omega^{(q)},
\end{array}
\right.
\end{equation}
where $\Omega^{(0)} = \{1,2,\ldots,n\}$ and $q=0, 1, \ldots$
If $\alpha \in (0,1)$, then the iteration \eqref{Pu1} converges to the unique fixed point in $\mathbb{R}_{+}^{n \times n}$ for an arbitrary initial point $\hat{\boldsymbol{X}}^{(0)} \in \mathbb{R}_{+}^{n \times n}$.
\end{theorem}
\begin{proof}
From \eqref{Pu1}, we obtain
\begin{equation*}
\begin{aligned}
& \|\hat{\boldsymbol{X}}^{(q+1)} - \hat{\boldsymbol{X}}^{(q)}\|_{l_{1}}\\
= & \sum_{j=1}^{n}\|\hat{\boldsymbol{X}}^{(q+1)}(:,j) - \hat{\boldsymbol{X}}^{(q)}(:,j)\|_{1}\\
= & \sum_{j \in \Omega^{(q)}}\|\hat{\boldsymbol{X}}^{(q+1)}(:,j) - \hat{\boldsymbol{X}}^{(q)}(:,j)\|_{1} + \sum_{j \in \Omega^{(0)} \backslash \Omega^{(q)}}\|\hat{\boldsymbol{X}}^{(q+1)}(:,j) - \hat{\boldsymbol{X}}^{(q)}(:,j)\|_{1}\\
= & \sum_{j \in \Omega^{(q)}}\|\hat{\boldsymbol{X}}^{(q+1)}(:,j) - \hat{\boldsymbol{X}}^{(q)}(:,j)\|_{1}, \quad q = 0, 1, \ldots
\end{aligned}
\end{equation*}
From Step 7 of Algorithm \ref{Alg5}, we notice that $\Omega^{(q+1)} \subseteq \Omega^{(q)} \subseteq \cdots \subseteq \Omega^{(0)}$. Thus, if $j \in \Omega^{(q+1)}$, then $j \in \Omega^{(q)}$, and
\begin{equation}\label{eqn413}
{\small\begin{aligned}
& \|\hat{\boldsymbol{X}}^{(q+2)} - \hat{\boldsymbol{X}}^{(q+1)}\|_{l_{1}}\\
= & \sum_{j \in \Omega^{(q+1)}}\|\mathscr{T}(\alpha \mathbf{P}_{j}\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j)) - \mathscr{T}(\alpha \mathbf{P}_{j}\hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j))\|_{1}.
\end{aligned}}
\end{equation}
It is seen that $\hat{\boldsymbol{X}}^{(q+1)} \in \mathbb{R}^{n \times n}_{+}$ and $\hat{\boldsymbol{X}}^{(q)} \in \mathbb{R}^{n \times n}_{+}$ as $\hat{\boldsymbol{X}}^{(0)} \in \mathbb{R}^{n \times n}_{+}$. By Lemma \ref{lem-5.1} and \eqref{eqn413},
\begin{equation*}
\begin{aligned}
& \|\hat{\boldsymbol{X}}^{(q+2)} - \hat{\boldsymbol{X}}^{(q+1)}\|_{l_{1}}\\
\leq & \sum_{j \in \Omega^{(q+1)}}\|(\alpha \mathbf{P}_{j}\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j)) - (\alpha \mathbf{P}_{j}\hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\mathbf{G}(:,j))\|_{1}\\
= & \sum_{j \in \Omega^{(q+1)}}\|\alpha \mathbf{P}_{j}\big(\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} - \hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\big)\|_{1}
\leq \alpha \sum_{j \in \Omega^{(q+1)}}\|\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} - \hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1}\\
\leq & \alpha \sum_{j \in \Omega^{(q+1)}}\|\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} - \hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} + \alpha \sum_{j \in \Omega^{(0)} \backslash \Omega^{(q+1)}}\|\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} - \hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1}\\
= & \alpha \sum_{j=1}^{n}\|\hat{\boldsymbol{X}}^{(q+1)}(j,:)^{\boldsymbol{T}} - \hat{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1}
= \alpha \|\hat{\boldsymbol{X}}^{(q+1)} - \hat{\boldsymbol{X}}^{(q)}\|_{l_{1}}.
\end{aligned}
\end{equation*}
So we complete the proof by the Banach fixed point theorem \cite{CiarletLNFA} and the fact that $0<\alpha<1$.
\end{proof}
\section{A sparse power method with partial updating for higher-order PageRank problem}\label{irt}
In Section \ref{rt}, we take $\mathbf{G}$ to be a fixed matrix.
However, similar to the truncated power method proposed in \cite{Ding2018Fast}, the problem \eqref{3.3} is not mathematically equivalent to the original problem \eqref{3.2}, either. In this section, we pay special attention to the original higher-order PageRank problem \eqref{3.2} and consider how to solve it by using a sparse power method.
For any matrix $\mathbf{A} \in \mathbb{R}_{+}^{n \times n}$, we define the operator $\mathscr{Z}: \mathbb{R}_{+}^{n \times n} \mapsto \mathbb{R}_{+}^{n \times n}$ as
\begin{equation*}
{\mathscr{Z}}(\mathbf{A}):= \tilde{\mathscr{T}}(\mathscr{W}(\mathbf{A})),
\end{equation*}
where the operator $\mathscr{W}$ is defined in \eqref{2.6}.
We construct the sequence
\begin{equation*}
\check{\boldsymbol{X}}^{(q)} = \check{\boldsymbol{S}}^{(q)} + \mathbf{e} (\check{\boldsymbol{u}}^{(q)})^{\boldsymbol{T}}, \ \ q = 0,1,\ldots,
\end{equation*}
and then the sparse power method for \eqref{3.2} can be written as
\begin{equation*}
\check{\boldsymbol{X}}^{(q+1)} = \mathscr{Z}(\check{\boldsymbol{X}}^{(q)}), \ \ q = 0,1,\ldots
\end{equation*}
We present the algorithm below. Similar to \eqref{5.3}--\eqref{eqn4.7}, the matrix-vector products satisfy
\begin{align}\label{eqn51}
& \check{\boldsymbol{Y}}^{(q+1)}(:,j) \nonumber \\
& = \alpha (\mathbf{Q}_{j} \check{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) + \frac{\alpha}{n}(\|\check{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1} - \|\mathbf{Q}_{j} \check{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1})\mathbf{e} +(1-\alpha) \|\check{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}\|_{1}\mathbf{v}.
\end{align}
\begin{algorithm}[H]
\caption{A sparse power method for higher-order PageRank}\label{Alg4}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{Q}_{j}$, $j=1,2,\ldots, n$, $\check{\boldsymbol{S}}^{(0)}$, $\check{\boldsymbol{u}}^{(0)}$, $\mathbf{v}$, $\alpha$, $\beta$, $tol$;}
\Output{$\check{\boldsymbol{S}}$, $\check{\boldsymbol{u}}$;}
\BlankLine
{\bf Let $q=0$ and $res=1$}\;
\While {$res > tol$}
{
$res = 0$\;
\For{$j=1,2,\ldots,n$}
{
$\mathbf{y} = \check{\boldsymbol{S}}^{(q)}(j,:)^{\boldsymbol{T}} + \check{\boldsymbol{u}}^{(q)}$\;
$\check{\boldsymbol{Y}}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \mathbf{y}) + \frac{\alpha}{n}(\|\mathbf{y}\|_{1} - \|\mathbf{Q}_{j} \mathbf{y}\|_{1})\mathbf{e} +(1-\alpha) \|\mathbf{y}\|_{1}\mathbf{v}$\;
$[\check{\boldsymbol{S}}^{(q+1)}(:,j), \check{\boldsymbol{u}}^{(q+1)}(j)] = thresholding(\check{\boldsymbol{Y}}^{(q+1)}(:,j),\beta)$\quad\%~{\tt sparsing}\
$res = res + \|(\check{\boldsymbol{S}}^{(q+1)}(:,j) - \check{\boldsymbol{S}}^{(q)}(:,j)) + (\check{\boldsymbol{u}}^{(q+1)}(j) - \check{\boldsymbol{u}}^{(q)}(j))\mathbf{e}\|_{1}$\;
}
$q = q + 1$\;
}
$\check{\boldsymbol{S}} = \check{\boldsymbol{S}}^{(q)}$, $\check{\boldsymbol{u}} = \check{\boldsymbol{u}}^{(q)}$\;
\end{algorithm}
\begin{remark}
In Algorithm \ref{Alg4}, the original higher-order PageRank problem \eqref{3.2} is solved instead of \eqref{3.3}, so we expect the approximation obtained from Algorithm \ref{Alg4} to be more accurate than those from Algorithm \ref{Alg2} and Algorithm \ref{Alg3}. The key of Algorithm \ref{Alg4} is to apply the sparse operator $\tilde{\mathscr{T}}$ to the matrix obtained in Step 6, and we call it a sparse power method for higher-order PageRank. Analogous to Algorithm \ref{Alg3}, there is no need to form and store the vectors $\{{\bf d}_{j}\}_{j=1}^n$, and we only need to store a vector $\mathbf{v}$ instead of a matrix $\mathbf{G}$. The difference is that the coefficient in front of ${\bf v}$ is fixed in Algorithm \ref{Alg3}, while it is variable in Algorithm \ref{Alg4}; see Step 6 of the two algorithms.
\end{remark}
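To make one sweep of the inner for-loop of Algorithm \ref{Alg4} concrete, we include a short Python sketch below. It is only an illustration: the function {\tt thresholding} is a placeholder for the sparse operator $\tilde{\mathscr{T}}$ (here it keeps entries above $\beta$ and spreads the remaining mass uniformly into the rank-one term, which is one plausible choice and not necessarily the operator used in this paper), and all names are ours.
\begin{verbatim}
import numpy as np

def thresholding(y, beta):
    # Placeholder sparsification: keep entries above beta in the sparse
    # part; spread the remaining mass uniformly into the rank-one scalar.
    mask = y > beta
    s = np.where(mask, y, 0.0)
    u = (y - s).sum() / y.size          # discarded mass, averaged over n
    return s, u

def sparse_power_sweep(Q, S, u, v, alpha, beta):
    # One sweep of the sparse power method: Q is a list of n sparse
    # n-by-n matrices Q_j; the iterate is X = S + e u^T.
    n = u.size
    e = np.ones(n)
    S_new, u_new, res = np.zeros((n, n)), np.zeros(n), 0.0
    for j in range(n):
        y = S[j, :] + u                  # X(j,:)^T = S(j,:)^T + u
        Qy = Q[j] @ y
        # Entries are non-negative, so plain sums equal the 1-norms.
        y_new = alpha * Qy + (alpha / n) * (y.sum() - Qy.sum()) * e \
                + (1.0 - alpha) * y.sum() * v
        S_new[:, j], u_new[j] = thresholding(y_new, beta)
        res += np.abs(S_new[:, j] - S[:, j] + (u_new[j] - u[j]) * e).sum()
    return S_new, u_new, res
\end{verbatim}
The sweep is repeated until {\tt res} falls below the tolerance, exactly as in the while-loop of Algorithm \ref{Alg4}.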
We are ready to discuss the convergence of Algorithm \ref{Alg4}. If there is a number $\eta \in (0,1)$ such that
$$
\|\check{\boldsymbol{X}}^{(q+2)} - \check{\boldsymbol{X}}^{(q+1)}\|_{l_{1}} \leq \eta \|\check{\boldsymbol{X}}^{(q+1)} - \check{\boldsymbol{X}}^{(q)}\|_{l_{1}},\quad q=0,1,\ldots
$$
in each iteration, then according to the Banach fixed point theorem \cite{CiarletLNFA}, Algorithm \ref{Alg4} converges.
Unfortunately, it seems difficult to prove this rigorously. The reason is that the coefficient multiplying ${\bf v}$ varies during the iterations of Algorithm \ref{Alg4}; refer to \eqref{eqn51}.
This is unlike the proof of Theorem \ref{the-5.6}, where this coefficient cancels between two consecutive iterations.
To deal with this difficulty, the following theorem considers the convergence of Algorithm \ref{Alg4} from a probabilistic point of view.
\begin{theorem}\label{theorem5.4}
For any matrices $ \mathbf{A}, \mathbf{B} \in \mathbb{R}_{+}^{n \times n}$, suppose that the probability
\begin{equation}\label{eqn52}
{\rm Pr}(\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}) = \omega,
\end{equation}
where $0<\eta<1$ and $\omega$ is a probability related to $\eta$.
Then we have
\begin{equation}
{\rm Pr}(\|\mathscr{Z}(\mathbf{A}) - \mathscr{Z}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}) \geq \omega.
\end{equation}
\end{theorem}
\begin{proof}
From the definition of the operator $\mathscr{Z}$, we have
\begin{equation*}
\|\mathscr{Z}(\mathbf{A}) - \mathscr{Z}(\mathbf{B})\|_{l_{1}} = \|\tilde{\mathscr{T}}(\mathscr{W}(\mathbf{A})) - \tilde{\mathscr{T}}(\mathscr{W}(\mathbf{B}))\|_{l_{1}}.
\end{equation*}
If $\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}$, then
\begin{equation*}
\begin{aligned}
&\|\mathscr{Z}(\mathbf{A}) - \mathscr{Z}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathscr{W}(\mathbf{A}) - \mathscr{W}(\mathbf{B})\|_{l_{1}} \\
= & \eta \sum_{j=1}^{n}\|[\alpha\mathbf{P}_{j} + (1-\alpha)\tilde{\mathbf{V}}](\mathbf{A}(j,:)^{\boldsymbol{T}} - \mathbf{B}(j,:)^{\boldsymbol{T}}) \|_{1}\\
\leq & \eta \sum_{j=1}^{n}\|(\mathbf{A}(j,:)^{\boldsymbol{T}} - \mathbf{B}(j,:)^{\boldsymbol{T}}) \|_{1}= \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}.
\end{aligned}
\end{equation*}
Denote by $C$ the event $\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}$ and by $D$ the event $\|\mathscr{Z}(\mathbf{A}) - \mathscr{Z}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}$.
We see that the probability ${\rm Pr}(D|C) = 1$.
From \eqref{eqn52} and the law of total probability \cite{CKL1979A}, we obtain
\begin{equation*}
{\rm Pr}(D) \geq {\rm Pr}(C){\rm Pr}(D|C) = {\rm Pr}(C) = \omega,
\end{equation*}
which completes the proof.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[height=8cm,width=13cm]{Con2.jpg}\\
\caption{The values of $\rho=\frac{\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}}}{\|\mathbf{A} - \mathbf{B}\|_{l_{1}}}$ during iterations of Algorithm \ref{Alg4}, where two $1000 \times 1000$ non-negative matrices $\mathbf{A}$ and $\mathbf{B}$ are generated randomly in each of $10000$ trials.} \label{fig.100}
\end{figure}
In general, the larger $\eta$ is, the larger $\omega$ is, and the more likely the event $D$ is to happen.
Consequently, the more likely Algorithm \ref{Alg4} is to converge.
However, we find experimentally that even if $\eta$ is relatively small, the probability of the event $D$ may still be high.
To show this more precisely,
we run Algorithm \ref{Alg4} on $10000$ pairs of $1000 \times 1000$ non-negative matrices $\mathbf{A}$ and $\mathbf{B}$
generated randomly. The sparsity degree
is chosen as $\beta = \frac{1}{n^{2}}$, where $n$ is the size of the matrices.
In Figure \ref{fig.100}, we plot the 10000 values of the ratio
$$
\rho=\frac{\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}}}{\|\mathbf{A} - \mathbf{B}\|_{l_{1}}}.
$$
It is seen that in this experiment, if we take $\eta = 0.034$, then ${\rm Pr}(\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}) = 1$, which implies that Algorithm \ref{Alg4} converges with very high probability. Indeed, we find that Algorithm \ref{Alg4} works quite well in practice.
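The empirical probability above can be estimated with a simple Monte Carlo harness. The sketch below is again illustrative: {\tt t\_tilde} is a stand-in for the column-wise sparse operator $\tilde{\mathscr{T}}$ (with the same placeholder rule as before), so the paper's actual operator must be substituted to reproduce the values in Fig.~\ref{fig.100}.
\begin{verbatim}
import numpy as np

def t_tilde(A, beta):
    # Stand-in for the column-wise sparse operator: X = S + e u^T, where
    # each column keeps its entries above beta and the remaining mass is
    # averaged into u.  Replace by the operator defined in the paper.
    n = A.shape[0]
    S = np.where(A > beta, A, 0.0)
    u = (A - S).sum(axis=0) / n
    return S + np.outer(np.ones(n), u)

def estimate_omega(n=1000, trials=10000, eta=0.034, seed=0):
    # Monte Carlo estimate of
    #   omega = Pr(||T(A) - T(B)||_l1 <= eta * ||A - B||_l1)
    # over random non-negative pairs (reduce n or trials for a quick run).
    rng = np.random.default_rng(seed)
    beta = 1.0 / n ** 2
    hits = 0
    for _ in range(trials):
        A, B = rng.random((n, n)), rng.random((n, n))
        rho = np.abs(t_tilde(A, beta) - t_tilde(B, beta)).sum() \
              / np.abs(A - B).sum()
        hits += rho <= eta
    return hits / trials
\end{verbatim}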
Finally, we apply the same trick of {\it partial updating} to Algorithm \ref{Alg4}, and get the following algorithm.
\begin{algorithm}[H]
\caption{A sparse power method with partial updating for higher-order PageRank}\label{Alg6}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{$\mathbf{Q}_{j}$, $j=1,2,\ldots, n$, $\bar{\boldsymbol{S}}^{(0)}$, $\bar{\boldsymbol{u}}^{(0)}$, $\mathbf{v}$, $\alpha$, $\beta$, $\varsigma$, $\tau$, $\tilde{\ell}$, $tol$;}
\Output{$\bar{\boldsymbol{S}}$, $\bar{\boldsymbol{u}}$;}
\BlankLine
{\bf Let $q=0$ and $res=1$}\;
\While {$res > tol$}
{
$res = 0$\;
\eIf{ $q \leq \tilde{\ell}-1$ }
{$\Omega^{(q)}$ = \{1, 2, \ldots, $n$\}\;}
{{\bf Determine} $card(\Omega^{(q)})$ via \eqref{eqn411} and {\bf select the indexes corresponding to the top $card(\Omega^{(q)})$ elements in $\{{PV^{(q)}}(j)\}_{j \in \Omega^{(q-1)}}$ to form $\Omega^{(q)}$ }\quad\%~{\tt determine the index set}\;
}
\For{$j \in \Omega^{(q)}$}
{
$\mathbf{y} = \bar{\boldsymbol{S}}^{(q)}(j,:)^{\boldsymbol{T}} + \bar{\boldsymbol{u}}^{(q)}$\;
$\bar{\boldsymbol{Y}}^{(q+1)}(:,j) = \alpha (\mathbf{Q}_{j} \mathbf{y}) + \frac{\alpha}{n}(\|\mathbf{y}\|_{1} - \|\mathbf{Q}_{j} \mathbf{y}\|_{1})\mathbf{e} + (1-\alpha) \|\mathbf{y}\|_{1}\mathbf{v}$\;
$[\bar{\boldsymbol{S}}^{(q+1)}(:,j), \bar{\boldsymbol{u}}^{(q+1)}(j)] = thresholding(\bar{\boldsymbol{Y}}^{(q+1)}(:,j),\beta)$\quad\%~{\tt sparsification}\
{\bf Compute $PV^{(q+1)}(j)$ by using \eqref{eqn410}}\;
$res = res + \|\bar{\boldsymbol{S}}^{(q+1)}(:,j) - \bar{\boldsymbol{S}}^{(q)}(:,j) + (\bar{\boldsymbol{u}}^{(q+1)}(j) - \bar{\boldsymbol{u}}^{(q)}(j))\mathbf{e}\|_{1}$\;
}
$q = q + 1$\;
}
$\bar{\boldsymbol{S}} = \bar{\boldsymbol{S}}^{(q)}$, $\bar{\boldsymbol{u}} = \bar{\boldsymbol{u}}^{(q)}$\;
\end{algorithm}
The following theorem indicates that Algorithm \ref{Alg6} has a high probability of convergence. The proof is a combination of Theorem \ref{Thm4.4} and Theorem \ref{theorem5.4}, and thus is omitted.
\begin{theorem}
Consider the iterations of Algorithm \ref{Alg6}
\begin{equation}\label{Pu2}
\bar{\boldsymbol{X}}^{(q+1)}(:,j) =
\left\{
\begin{array}{lcl}
\tilde{\mathscr{T}}(\alpha \mathbf{P}_{j}\bar{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}} + (1-\alpha)\widetilde{\mathbf{V}}\bar{\boldsymbol{X}}^{(q)}(j,:)^{\boldsymbol{T}}) & & j \in \Omega^{(q)},\\
\bar{\boldsymbol{X}}^{(q)}(:,j) & & j \in \Omega^{(0)} \backslash \Omega^{(q)},
\end{array}
\right.
\end{equation}
where $\Omega^{(0)} = \{1,2,\ldots,n\}$.
Given an arbitrary initial point $\bar{\boldsymbol{X}}_{0} \in \mathbb{R}_{+}^{n \times n}$, and for any matrices $ \mathbf{A}, \mathbf{B} \in \mathbb{R}_{+}^{n \times n}$, if
\begin{equation*}
{\rm Pr}(\|\tilde{\mathscr{T}}(\mathbf{A}) - \tilde{\mathscr{T}}(\mathbf{B})\|_{l_{1}} \leq \eta \|\mathbf{A} - \mathbf{B}\|_{l_{1}}) = \omega,
\end{equation*}
where $0<\eta<1$ and $\omega$ is a probability related to $\eta$,
then we have
\begin{equation*}
{\rm Pr}(\|\bar{\boldsymbol{X}}^{(q+2)} - \bar{\boldsymbol{X}}^{(q+1)}\|_{l_{1}} \leq \eta \|\bar{\boldsymbol{X}}^{(q+1)} - \bar{\boldsymbol{X}}^{(q)}\|_{l_{1}}) \geq \omega, \ \ \ q = 0, 1, \ldots
\end{equation*}
\end{theorem}
\section{Numerical experiments}\label{ne}
In this section, we perform numerical experiments to show the numerical performance and the efficiency of the proposed algorithms for high-order PageRank.
More precisely, we compare the proposed algorithms with some state-of-the-art algorithms on real-world and synthetic data sets.
All the experiments are run on a 16-core machine with two Intel(R) Xeon(R) E5-2637 v4 CPUs at 3.50 GHz and 256 GB RAM. The operating system is 64-bit Windows 10, and the numerical results are obtained with MATLAB R2018b.
\subsection{On implementation of the matrix-vector products for large-scale multilinear PageRank}\label{Sec6.1}
We first discuss some details on implementation of the algorithms for multilinear PageRank.
Recall from \eqref{eq29} that the multilinear PageRank problem in a second-order Markov chain can be written as
\begin{equation}\label{6.1}
\mathbf{x} = \alpha \boldsymbol{R} (\mathbf{x} \otimes \mathbf{x}) + (1-\alpha)\mathbf{v},
\end{equation}
where the matrix $\boldsymbol{R}$ is defined in \eqref{eqn210}. In the experiments, we choose $\mathbf{v} = \frac{\mathbf{e}}{n}$, where $\mathbf{e}$ is the vector of all ones.
Let
\begin{equation}\label{eqn62}
\widetilde{\boldsymbol{Q}}=\left[\begin{array}{ccc|ccc|c|ccc}
\mathcal{Q}_{111} & \cdots & \mathcal{Q}_{1n1} & \mathcal{Q}_{112} & \cdots & \mathcal{Q}_{1n2} & \cdots & \mathcal{Q}_{11n} & \cdots & \mathcal{Q}_{1nn} \\
\mathcal{Q}_{211} & \cdots & \mathcal{Q}_{2n1} & \mathcal{Q}_{212} & \cdots & \mathcal{Q}_{2n2} & \cdots & \mathcal{Q}_{21n} & \cdots & \mathcal{Q}_{2nn} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
\mathcal{Q}_{n11} & \cdots & \mathcal{Q}_{nn1} & \mathcal{Q}_{n12} & \cdots & \mathcal{Q}_{nn2} & \cdots & \mathcal{Q}_{n1n} & \cdots & \mathcal{Q}_{nnn}
\end{array}\right],
\end{equation}
then we have from \eqref{eqn210} and \eqref{eqn62} that
\begin{equation}\label{eqn66}
\boldsymbol{R} = \widetilde{\boldsymbol{Q}} + \frac{1}{n} \mathbf{e} \tilde{\boldsymbol{d}}^{\boldsymbol{T}},
\end{equation}
where $\tilde{\boldsymbol{d}} \in \mathbb{R}^{n^{2}}$ is the vector composed of the entries arising from the dangling states of $\widetilde{\boldsymbol{Q}}$.
Given $\mathbf{x}^{(0)} \in \mathbb{R}_{+}^{n}$ with $\|\mathbf{x}^{(0)}\|_{1} = 1$, the following sequence
\begin{equation}\label{6.2}
\mathbf{x}^{(q+1)} = \alpha \boldsymbol{R} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) + (1-\alpha)\mathbf{v},\quad q=0,1,\ldots,
\end{equation}
is the key step of the fixed-point iteration method proposed in \cite{Gleich2014Multilinear} for multilinear PageRank. The fixed-point iteration method is popular for the computation of the multilinear PageRank problem. For instance, based on the shifted fixed-point iteration, Cipolla {\it et al.} \cite{Cipolla2019Extrapolation} investigate restarted extrapolation to speed up this method. Meini and Poloni \cite{Meini2018Perron} point out that interpreting the solution as a Perron eigenvector allows one to devise new fixed-point algorithms for its computation.
Recently, the convergence of the fixed-point iteration is revisited in \cite{HW}.
In \cite{Cipolla2019Extrapolation}, Cipolla {\it et al.} applied extrapolation techniques to the shifted fixed-point iteration (SFPM) and the inner-outer iteration (IOM) proposed in \cite{Gleich2014Multilinear}, and presented the ESFPM and EIOM methods, which rely on the simplified topological $\varepsilon$-algorithm in its restarted form.
As the framework of the two compared algorithms ESFPM and EIOM is based on the computation of \eqref{6.2} during iterations, we focus on
computing $\mathbf{x}^{(q+1)}$ for a given vector $\mathbf{x}^{(q)}$ in this subsection.
First, we consider how to compute $\mathbf{x}^{(q+1)}$ without storing the vector $\tilde{\boldsymbol{d}}$.
By \eqref{eqn66}, the relation \eqref{6.2} can be reformulated as
\begin{equation}\label{6.4}
\mathbf{x}^{(q+1)} = \alpha \widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) + \frac{\alpha}{n} \mathbf{e} \tilde{\boldsymbol{d}}^{\boldsymbol{T}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) + (1-\alpha)\mathbf{v},
\end{equation}
which is a stochastic vector.
Multiplying both sides of \eqref{6.4} from the left by $\mathbf{e}^{\boldsymbol{T}}$, we get
\begin{equation*}
1 = \alpha \|\widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)})\|_{1} + \alpha \tilde{\boldsymbol{d}}^{\boldsymbol{T}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) + (1-\alpha),
\end{equation*}
so that
\begin{equation}\label{6.5}
\tilde{\boldsymbol{d}}^{\boldsymbol{T}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) = 1 - \|\widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)})\|_{1}.
\end{equation}
A combination of \eqref{6.5} and \eqref{6.4} gives
\begin{equation}\label{6.6}
\mathbf{x}^{(q+1)} = \alpha \widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) + \frac{\alpha}{n} (1 - \|\widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)})\|_{1}) \mathbf{e} + (1-\alpha)\mathbf{v}.
\end{equation}
Second, we consider how to calculate the matrix-vector product $\widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)})$
without forming and storing the vector $\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}$ of size $n^2$.
Partition the matrix $\widetilde{\boldsymbol{Q}}$ into the following form
\begin{equation}\label{eqn68}
\widetilde{\boldsymbol{Q}} = [\widetilde{\boldsymbol{Q}}_{1}, \widetilde{\boldsymbol{Q}}_{2},\ldots,\widetilde{\boldsymbol{Q}}_{n}],
\end{equation}
where $\widetilde{\boldsymbol{Q}}_{j} \in \mathbb{R}^{n \times n}, j=1,2,\ldots,n$.
Let $\mathbf{x}^{(q)} = [\mathbf{x}^{(q)}_{1}, \mathbf{x}^{(q)}_{2}, \ldots, \mathbf{x}^{(q)}_{n}]^{\boldsymbol{T}}$, where $\mathbf{x}^{(q)}_{i}$ is the $i$-th element of $\mathbf{x}^{(q)}$.
Notice that
\begin{equation*}
\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)} =
\left[\begin{array}{c}
\mathbf{x}^{(q)}_{1}\mathbf{x}^{(q)}\\
\mathbf{x}^{(q)}_{2}\mathbf{x}^{(q)}\\
\vdots\\
\mathbf{x}^{(q)}_{n}\mathbf{x}^{(q)}
\end{array}\right],
\end{equation*}
where $\mathbf{x}^{(q)}_{j}\mathbf{x}^{(q)} \in \mathbb{R}^{n},~ j=1,2,\ldots,n$.
Thus,
\begin{equation}\label{6.7}
\widetilde{\boldsymbol{Q}} (\mathbf{x}^{(q)} \otimes \mathbf{x}^{(q)}) = [\widetilde{\boldsymbol{Q}}_{1}, \widetilde{\boldsymbol{Q}}_{2},\ldots,\widetilde{\boldsymbol{Q}}_{n}]
\left[\begin{array}{c}
\mathbf{x}^{(q)}_{1}\mathbf{x}^{(q)}\\
\mathbf{x}^{(q)}_{2}\mathbf{x}^{(q)}\\
\vdots\\
\mathbf{x}^{(q)}_{n}\mathbf{x}^{(q)}
\end{array}\right]
= \sum_{j = 1}^{n} \mathbf{x}^{(q)}_{j} \widetilde{\boldsymbol{Q}}_{j} \mathbf{x}^{(q)}.
\end{equation}
Substituting \eqref{6.7} into \eqref{6.6} gives
\begin{equation}
\mathbf{x}^{(q+1)} = \alpha \sum_{j = 1}^{n} \mathbf{x}^{(q)}_{j} \widetilde{\boldsymbol{Q}}_{j} \mathbf{x}^{(q)} + \frac{\alpha}{n} (1 - \|\sum_{j = 1}^{n} \mathbf{x}^{(q)}_{j} \widetilde{\boldsymbol{Q}}_{j} \mathbf{x}^{(q)}\|_{1}) \mathbf{e} + (1-\alpha)\mathbf{v}.
\end{equation}
In the numerical experiments, we perform \eqref{6.2} in the above way. One merit is that one only needs to form and store the sparse matrices $\widetilde{\boldsymbol{Q}}_{j},j=1,2,\ldots,n$, instead of the dense matrix $\boldsymbol{R}$.
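As an illustration of this storage-friendly evaluation, the following Python sketch performs one fixed-point step \eqref{6.2} via the identity \eqref{6.7} and the dangling correction \eqref{6.6}; only the sparse blocks $\widetilde{\boldsymbol{Q}}_{j}$ are touched, and the names are ours.
\begin{verbatim}
import numpy as np

def multilinear_step(Q_blocks, x, v, alpha):
    # One step x <- alpha * R (x kron x) + (1 - alpha) * v, computed
    # without forming x kron x or the dense matrix R.  Q_blocks is the
    # list of n sparse n-by-n blocks of Q~; x must be stochastic
    # (non-negative with unit 1-norm).
    n = x.size
    # Q~ (x kron x) = sum_j x_j * (Q~_j x), cf. eq. (6.7)
    Qxx = sum(x[j] * (Q_blocks[j] @ x) for j in range(n))
    # Dangling correction, cf. eq. (6.6); entries are non-negative,
    # so the 1-norm is a plain sum.
    return alpha * Qxx + (alpha / n) * (1.0 - Qxx.sum()) * np.ones(n) \
           + (1.0 - alpha) * v
\end{verbatim}
Iterating this step until $\|\mathbf{x}^{(q+1)} - \mathbf{x}^{(q)}\|_{1}$ is small reproduces the fixed-point method of \cite{Gleich2014Multilinear} at the cost of $n$ sparse matrix-vector products per step.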
\subsection{Experiments on real-world data set}\label{sec-real}
In this section, we run the algorithms on a real-world data set.
This data set is the {\tt .GOV Web collection} from the 2002 TREC \cite{voorhees2005trec}, with $n=100000$ web pages linked by 39255 anchor terms\footnote{We thank Dr. Weiyang Ding for providing us with the data set and the MATLAB codes of the truncated power method.}.
We can describe the web link information in this real-world data through a $100000 \times 100000 \times 39255$ tensor with elements
$$
\mathcal{A}(i, j, s)=\left\{\begin{array}{ll}1, & \text { if webpage } i \text { links to webpage } j \text { via anchor term } s, \\ 0, & \text { otherwise }, \end{array}\right.
$$
whose sparsity is about $1.22 \times 10^{-7}$ \%.
Recently, Wu et al. \cite{WuYubao} proposed to formulate second-order Markov chains by using edge-to-edge transition probabilities, and developed Monte Carlo methods to compute stationary probability distributions.
However, Ding {\it et al.} \cite{Ding2018Fast} point out that this model has some defects.
In contrast, the model constructed in \cite{Ding2018Fast} can potentially be applied directly to higher-order Markov chains, whereas extending the edge-to-edge formulation is not straightforward.
For the second-order Markov chain, we have \cite{Ding2018Fast}
\begin{equation*}
\begin{aligned}
& \operatorname{Pr}\left(Y_{t}=i \mid Y_{t-1}=j, Y_{t-2}=k\right)\\
& = \sum_{s=1}^{m} \operatorname{Pr}\left(Y_{t}=i \mid N=s, Y_{t-1}=j\right) \cdot \operatorname{Pr}\left(N=s \mid Y_{t-1}=j, Y_{t-2}=k\right),
\end{aligned}
\end{equation*}
and the elements of the tensor $\mathcal{Q}$ turn out to be
\begin{equation*}
\mathcal{Q}(i, j, k)=\sum_{s=1}^{m}\left(\frac{\mathcal{A}(j, i, s)}{\sum_{i'=1}^{n} \mathcal{A}(j, i', s)}\right) \cdot\left(\frac{\mathcal{A}(k, j, s)}{\sum_{s'=1}^{m} \mathcal{A}(k, j, s')}\right).
\end{equation*}
Further, consider the matrices $\boldsymbol{U}_{j} \in \mathbb{R}^{n \times m}$ and $\boldsymbol{W}_{j} \in \mathbb{R}^{m \times n},j=1,2,\ldots,n$, with elements
\begin{equation}\label{r-1}
\boldsymbol{U}_{j}(i, s)=\frac{\mathcal{A}(j, i, s)}{\sum_{i'=1}^{n} \mathcal{A}(j, i', s)} \quad \text { and } \quad \boldsymbol{W}_{j}(s, k)=\frac{\mathcal{A}(k, j, s)}{\sum_{s'=1}^{m} \mathcal{A}(k, j, s')},
\end{equation}
respectively, then $\mathcal{Q}(i,j,k)$ can be reformulated as
\begin{equation*}
\mathbf{Q}_{j}(i, k)=\sum_{s=1}^{m} \boldsymbol{U}_{j}(i, s) \boldsymbol{W}_{j}(s, k),
\end{equation*}
which is the $(i,k)$-th element of the matrix
\begin{equation}\label{eqn611}
\mathbf{Q}_{j} = \boldsymbol{U}_{j} \boldsymbol{W}_{j},\quad j=1,2,\ldots,n.
\end{equation}
Indeed, the real data we get is not the tensor $\mathcal{A}$ itself, but the positions of the non-zero elements of $\mathcal{A}$.
Therefore, in practical calculations, we only calculate $\{\boldsymbol{U}_{j}\}_{j=1}^{n}$ and $\{\boldsymbol{W}_{j}\}_{j=1}^{n}$ by using the information of the positions of non-zero elements of $\mathcal{A}$.
If $\mathcal{A}(j,i,s) = 0$, we just set $\boldsymbol{U}_{j}(i,s) = 0$, otherwise, we calculate $\boldsymbol{U}_{j}(i,s)$ by using \eqref{r-1}.
The computation of $\{\boldsymbol{W}_{j}\}_{j=1}^{n}$ is in a similar way. Thus, it is only necessary to form the matrix
\begin{equation}\label{eqn613}
\mathbf{Q} = [\mathbf{Q}_{1}, \mathbf{Q}_{2}, \ldots, \mathbf{Q}_{n}]
\end{equation}
for the high-order PageRank problem.
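For illustration, the factors in \eqref{r-1} can be assembled directly from the list of non-zero positions. The Python sketch below assumes the triples are given as three integer arrays {\tt A1, A2, A3} with $\mathcal{A}(\mathtt{A1}[t], \mathtt{A2}[t], \mathtt{A3}[t]) = 1$; the names and the per-$j$ loop (which a real implementation would replace by a single sort of the triples) are ours.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def colnormalise(M):
    # Divide each non-empty column of a sparse matrix by its column sum.
    s = np.asarray(M.sum(axis=0)).ravel()
    inv = np.divide(1.0, s, out=np.zeros_like(s), where=s > 0)
    return M @ sp.diags(inv)

def build_factors(A1, A2, A3, n, m):
    # U_j(i, s) = A(j, i, s) / sum_i A(j, i, s)  (columns indexed by s)
    # W_j(s, k) = A(k, j, s) / sum_s A(k, j, s)  (columns indexed by k)
    U, W = [], []
    for j in range(n):
        tu = A1 == j                     # triples with A(j, i, s) = 1
        Uj = sp.csr_matrix((np.ones(tu.sum()), (A2[tu], A3[tu])),
                           shape=(n, m))
        U.append(colnormalise(Uj))
        tw = A2 == j                     # triples with A(k, j, s) = 1
        Wj = sp.csr_matrix((np.ones(tw.sum()), (A3[tw], A1[tw])),
                           shape=(m, n))
        W.append(colnormalise(Wj))
    return U, W                          # Q_j = U[j] @ W[j], eq. (eqn611)
\end{verbatim}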
For the multilinear PageRank problem, we need to form the matrices $\{\widetilde{\boldsymbol{Q}}_{i}\}_{i=1}^n$; see \eqref{eqn68}.
However, it seems that there are no explicit formulae such as \eqref{eqn611} for $\widetilde{\boldsymbol{Q}}_i$. This is another advantage of
the higher-order PageRank model over the multilinear PageRank model.
There are two ways to settle this problem. The first one is to form the tensor $\mathcal{Q}$ directly, and then flatten it along the first index.
Indeed, it is seen that the two matrices $\widetilde{\boldsymbol{Q}}$ and $\mathbf{Q}$ are different only in the order of columns. Thus, the second way is to form $\mathbf{Q}$ firstly, and then get $\widetilde{\boldsymbol{Q}}$ by permutation.
In general, the second option will be much cheaper than the first one.
However, the second option is also prohibitive for large-scale problems.
For example, it took more than 24 hours to transform the matrix $\mathbf{Q}$ to $\widetilde{\boldsymbol{Q}}$ in this experiment.
Therefore, in this experiment, we do not consider the multilinear PageRank problem for this real-world data set further.
We run the power method, Algorithm \ref{Alg2} due to Ding {\it et al.} \cite{Ding2018Fast}, as well as the proposed Algorithm \ref{Alg3}--Algorithm \ref{Alg6} for this problem.
The damping factor is chosen as $\alpha=0.85, 0.90, 0.95, 0.99$, and the sparsification parameter $\beta$ is chosen as $\frac{1}{n^{2}}$, $\frac{1}{n^{3}}$ and $\frac{1}{n^{4}}$.
In Algorithm \ref{Alg2}, the matrix $\mathbf{G}$ is generated by using the MATLAB command {\tt sprand}.
As the sparsity of the tested matrix $\mathbf{Q}$ is about $1.84 \times 10^{-8} \%$, the sparsity of $\mathbf{G}$ is also chosen as $1.84 \times 10^{-8} \%$.
In Algorithm \ref{Alg5} and Algorithm \ref{Alg6}, we choose $\varsigma = 10$, $\tau = 0.1$ and $\tilde{\ell} = 1$.
The stopping criterion is
$$
\frac{\|\boldsymbol{X}^{(q+1)} - \boldsymbol{X}^{(q)}\|_{l_{1}}}{\| \boldsymbol{X}^{(q)}\|_{l_{1}}} \leq 1 \times 10^{-6}
$$
for all the algorithms.
We set the maximum number of iterations to be $20$.
To assess the quality of the approximations, we denote by ``Error" the relative error between the ``computed" solution $\boldsymbol{\widetilde{X}}$ obtained from the algorithms and the ``exact" solution $\boldsymbol{X}^{*}$:
$$
{\rm Error}=\frac{\|\boldsymbol{\widetilde{X}} - \boldsymbol{X}^{*} \|_{l_{1}}}{\|\boldsymbol{X}^{*}\|_{l_{1}}},
$$
where the ``exact" solution is obtained from running the power method on the original high-order PageRank problem \eqref{3.2} with stopping criterion $tol=10^{-6}$.
The numerical results of this experiment are shown in Appendix \ref{AppA}; see Table \ref{t-6}--Table \ref{t-7}, in which ``Iter" denotes the number of iterations and ``CPU" represents the CPU time in seconds. Here the computation time includes both the time for generating the matrices $\{\mathbf{Q}_{j}\}_{j=1}^{n}$ and that for solving the high-order PageRank problem.
Some comments are in order.
First, we observe from Table \ref{t-6}--Table \ref{t-90} that the four proposed algorithms converge much faster than Algorithm \ref{Alg2}.
For instance, when $\alpha = 0.85$, Algorithm \ref{Alg3} and Algorithm \ref{Alg4} are about 75\% faster than Algorithm \ref{Alg2}, while Algorithm \ref{Alg5} and Algorithm \ref{Alg6} are about 90\% faster than Algorithm \ref{Alg2}.
Moreover, the superiority becomes more obvious as the damping factor $\alpha$ increases. In particular, when $\alpha=0.99$, we see that the power method, Algorithm \ref{Alg2}, Algorithm \ref{Alg3} and Algorithm \ref{Alg4} fail to work, while Algorithm \ref{Alg5} and Algorithm \ref{Alg6} run quite well.
Second, the accuracy of the proposed algorithms is much higher than that of Algorithm \ref{Alg2}. More precisely, the approximations obtained from Algorithm \ref{Alg3} and Algorithm \ref{Alg4} are about five orders of magnitude more accurate, while those from Algorithm \ref{Alg5} and Algorithm \ref{Alg6} are about four orders of magnitude more accurate than those from Algorithm \ref{Alg2}. Specifically, the accuracy of the approximations obtained from Algorithm \ref{Alg4} is the highest. This is because it solves the original higher-order PageRank model directly.
Third, it seems that the convergence speed of Algorithm \ref{Alg5} and Algorithm \ref{Alg6} has little to do with the damping factor $\alpha$. Indeed, in the two algorithms, fewer and fewer columns are updated as the iteration proceeds, and we benefit from the partial updating strategy.
Fourth, in order to demonstrate the effectiveness of the sparse methods with partial updating in more detail, we present in
Table \ref{t-7} the top-ten ranking web pages obtained from the power method and Algorithm \ref{Alg2}--Algorithm \ref{Alg6}, respectively.
Although the order of the ranking changes a little, the top-ten web pages from the six algorithms are about the same, and the proposed partial updating strategy is favorable.
\subsection{Experiments on synthetic data set}\label{AD}
In this section, we show the performance of the proposed algorithms on a synthetic data set generated randomly \cite{Ding2018Fast}.
Before running the algorithms, we stress that even for medium-sized problems, the cost of transforming the matrix $\mathbf{Q}$ (for high-order PageRank) to the matrix $\widetilde{\boldsymbol{Q}}$ (for multilinear PageRank) can be very high; refer to \eqref{eqn613} and \eqref{eqn68}.
In this subsection, the test matrices $\mathbf{Q}$ are constructed by using the MATLAB built-in function
\begin{equation}\label{eqn614}
\mathbf{Q}=sprand(n, n^{2}, sparsity).
\end{equation}
In the first experiment, we choose the size $n = 2000, 4000, 6000, 8000, 10000$, and $sparsity = 0.0001\%$.
Table \ref{t-1} shows the CPU time in seconds for transforming $\mathbf{Q}$ to $\widetilde{\boldsymbol{Q}}$. It is seen that the CPU time becomes prohibitive when $n$ is large.
\begin{table}[h]\caption{The CPU time for transforming $\mathbf{Q}$ to $\widetilde{\boldsymbol{Q}}$}\label{t-1}
\centering
\begin{tabular}{ccc}
\toprule
Sparsity of $\mathbf{Q}$ & $n$ & CPU (s)\\
\midrule
\multirow{5}{*}{0.0001\%}
& 2000 & 11.57 \\
& 4000 & 485.75 \\
& 6000 & 3755.05 \\
& 8000 & 15022.28 \\
& 10000 & 48989.85 \\
\bottomrule
\end{tabular}
\end{table}
In the second experiment, we choose $n = 10000$ and $sparsity = 0.0001\%$ and $0.00001\%$ in \eqref{eqn614}, respectively.
We compare Algorithm \ref{Alg3}--Algorithm \ref{Alg6} with eight algorithms, i.e., the power method and Algorithm \ref{Alg2} for high-order PageRank, and the following six algorithms for multilinear PageRank:
\begin{itemize}[leftmargin=3em]
\item MNM \cite{Guo2018A}: The modified Newton method due to Guo {\it et al}.
\item Pb-Ns (Algorithm 2, Algorithm 3 and Algorithm 4 in \cite{Meini2018Perron}): The three Perron-based algorithms for multilinear PageRank due to Meini and Poloni.
\item ESFPM and EIOM (Algorithm 1 in \cite{Cipolla2019Extrapolation}): Two algorithms due to Cipolla {\it et al.}, which are based on the simplified topological $\varepsilon$-algorithm in their restarted form, to accelerate the SFPM and IOM algorithms.
\end{itemize}
In ESFPM and EIOM, we take the number of inner iterations as 2, and the parameter $\gamma$ is chosen as 1.
In Algorithm \ref{Alg5} and Algorithm \ref{Alg6}, we choose $\varsigma = 10$, $\tau = 0.1$ and $\tilde{\ell} = 1$.
Indeed, for a synthetic data set, one can also generate $\widetilde{\boldsymbol{Q}}$ first, and then get $\mathbf{Q}$ by permutation.
For the sake of fairness, we do not consider the time for the matrix transformation, and the CPU time (in seconds) in the following tables only includes that for solving the high-order PageRank problem or the multilinear PageRank problem.
For the multilinear PageRank and the higher-order PageRank problems, we use
$$
\frac{\|\mathbf{x}^{(q+1)} - \mathbf{x}^{(q)}\|_{1}}{\|\mathbf{x}^{(q)}\|_{1}} \leq 1 \times 10^{-8}\quad{\rm and}\quad\frac{\|\boldsymbol{X}^{(q+1)} - \boldsymbol{X}^{(q)}\|_{l_{1}}}{\|\boldsymbol{X}^{(q)}\|_{l_{1}}} \leq 1\times 10^{-8}
$$
as the stopping criteria, respectively, where $\mathbf{x}^{(q)}$ and $\boldsymbol{X}^{(q)}$ stand for the approximations obtained from the $q$-th iteration of the corresponding algorithms.
Recall that we only need to store a sparse matrix $\boldsymbol{S}$ and a vector $\boldsymbol{u}$ in Algorithm \ref{Alg2}--Algorithm \ref{Alg6}, rather than an $n$-by-$n$ dense matrix.
Similarly, we denote by ``Error" the relative error between the ``computed" solution $\widetilde{\boldsymbol{X}}$ (which is $\widetilde{\boldsymbol{x}}\widetilde{\boldsymbol{x}}^T$ for the multilinear PageRank problem) obtained from the algorithms and the ``exact" solution $\boldsymbol{X}^{*}$:
$$
{\rm Error}=\frac{\|\widetilde{\boldsymbol{X}} - \boldsymbol{X}^{*} \|_{l_{1}}}{\|\boldsymbol{X}^{*}\|_{l_{1}}},
$$
where the ``exact" solution is obtained from running the power method on the original high-order PageRank problem \eqref{3.2} with stopping criterion $tol=10^{-12}$. The numerical results of the experiment are shown in Appendix \ref{AppB}; see Table \ref{t-2}--Table \ref{t-5}. In this experiment, we choose the damping factors as $\alpha=0,85,0.90,0.95,0.99$, and the parameters for sparsing as $\beta=\frac{1}{n^{2}},\frac{1}{n^{3}},\frac{1}{n^{4}}$, respectively.
Again, it is seen from Table \ref{t-2}--Table \ref{t-5} that Algorithm \ref{Alg3}--Algorithm \ref{Alg6} outperform Algorithm \ref{Alg2} both in terms of CPU time and accuracy. In particular, Algorithm \ref{Alg4} is the best in terms of accuracy, while Algorithm \ref{Alg6} converges the fastest when $sparsity=0.00001\%$.
All these show the advantages of our proposed strategies.
It is observed that ESFPM and EIOM can be faster than Algorithm \ref{Alg3} and Algorithm \ref{Alg4}. However, the accuracy of these two algorithms is poorer, which demonstrates the superiority of the high-order PageRank model over the multilinear PageRank model.
Moreover, we see that the MNM and the Pb-Ns algorithms do not work at all for this problem. Indeed, these methods are based on the Newton method, and they run out of memory. Thus, they are
not suitable for large-scale problems.
\section{Conclusion}\label{sec7}
One of the important applications of high-order Markov
chains is high-order PageRank.
However, one has to store a {\it dense} higher-order PageRank tensor in the higher-order PageRank problem, which is prohibitive for large-scale problems.
Multilinear PageRank is an alternative to higher-order PageRank; however, unlike the higher-order PageRank problem, the existence and uniqueness of the multilinear PageRank vector are not guaranteed theoretically, and the approximation of the multilinear PageRank vector to the high-order PageRank tensor can be poor in practice.
Thus, it is recommended to solve the high-order PageRank problem instead of the multilinear PageRank problem, if both the computational overhead and storage requirements can be reduced significantly.
The truncated power method approximates large-scale solutions of sparse Markov chains by
a sparse component plus a rank-one component. However, the accuracy of the approximation and the efficiency of this method may not be satisfactory.
In this work, we aim to accelerate the truncated power method for high-order PageRank. In our proposed methods, there is no need to form and store the vectors arising from the dangling states, nor to store an auxiliary matrix. Moreover, we propose to speed up the computation by using a partial updating technique. Thus, the computational overhead can be reduced significantly. The convergence of the proposed methods is also considered.
There is still some work that deserves further investigation. For instance, although we proved that the sparse power method and the sparse power method with partial updating converge with high probability, the convergence of the two methods deserves a more rigorous analysis. On the other hand, a higher-order PageRank problem (say, $m\geq 4$) can theoretically be reduced to several third-order PageRank problems. For example, a fourth-order PageRank problem can be rewritten as $n$ third-order PageRank problems; however, the workload would be prohibitive if we solved it in such a way. Whether some
randomized algorithms can be applied is worth studying, and it is definitely a part of our future work.
\section{Introduction}
After the upgrade of the Large Hadron Collider (LHC) at CERN in 2026~\cite{apollinari2017high}, the accelerator aims to deliver \num{5}--\num{7} times the nominal luminosity of \SI{1e34}{\per\square\centi\meter\per\second}.
This results in new challenges in terms of data rate capabilities and radiation tolerance, especially for detector layers close to the interaction point of the colliding proton beams. Therefore, the ATLAS and CMS experiments will replace their currently installed tracking detectors with detectors featuring larger areas of silicon and decreased pixel pitch \cite{CERN-LHCC-2017-021, CERN-LHCC-2017-009}.
In particular, the ATLAS Inner Detector \cite{Aad_2008} is going to be replaced by an all-silicon tracking detector, the ATLAS Inner Tracker (ITk) \cite{CERN-LHCC-2017-021}. The expected \SI{1}{\mega\electronvolt} neutron equivalent fluence\footnote{after an integrated luminosity of \SI{2000}{\per\femto\barn}} for the innermost layer is approximately \SI{1e16}{\neq\per\square\centi\meter}, for the outer layers fluences\footnote{after an integrated luminosity of \SI{4000}{\per\femto\barn}} from \SI{2e15}{\neq\per\square\centi\meter} up to \SI{5e15}{\neq\per\square\centi\meter} are expected \cite{CERN-LHCC-2017-021}.
Since it was demonstrated in the past that hybrid pixel detectors \cite{rossi2006pixel} can be successfully operated in such harsh radiation environments at the LHC, it is foreseen to employ this technology also for the upgraded pixel detector.
However, the surface of the new ATLAS pixel detector is increased from approximately \SI{2}{\square\meter} to \SI{13}{\square\meter} \cite{CERN-LHCC-2017-021}, demanding large-area solutions with cost-effective designs.
An approach employing monolithic (active) CMOS pixel detectors \cite{PERIC2007876,WERMES2016483}, which combine the sensing and electronic processing functions and thus avoid the time-consuming and expensive hybridisation process, has been investigated \cite{Wang_2018, Caicedo_2019}.
With this CMOS pixel development, an interesting option has become attractive: utilising commercial CMOS process lines for the fabrication of planar sensors as the sensing part of hybrid pixel detectors. CMOS fabrication offers a high production throughput at comparatively low cost. Further benefits arise from the fact that several features are available to enhance the sensor design. For instance, polysilicon layers can be used to connect each pixel to a bias grid, making it possible to test the functionality of the sensor at wafer level. Also, MIM (metal-insulator-metal) capacitors can be used to AC-couple the sensor pixels to the readout chip pixels, preventing leakage current from flowing into the readout channels. Moreover, several metal layers are available that can be exploited as re-distribution layers such that enlarged inter-gap pixels (between two readout chips) can be avoided.
These sensors are called \textit{passive} CMOS sensors since they do not have any active components implemented.
Passive CMOS sensors in a large pixel pitch design ($\num{50}\times\SI{250}{\square\micro\meter}$ pixels) were already investigated \cite{pohl_cmos}. The characterisation of the small pixel pitch design ($\num{50}\times\SI{50}{\square\micro\meter}$ pixels) before irradiation can be found in \cite{dieter2020}.
In the following the performance of irradiated passive CMOS sensors is studied to demonstrate their radiation tolerance and suitability for the upgrades of the LHC experiments.
\section{LFoundry passive CMOS pixel sensor}
\subsection{Pixel sensor design}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\linewidth]{LF_passive_pixel.pdf}
\caption{Simplified schematic cross section of an n-in-p pixel of the LFoundry passive CMOS pixel sensor. The charge collection electrode is an n-well (optionally with an additional deep n-well) with varying implant size between \SI{15}{\micro\meter} and \SI{30}{\micro\meter}. The polysilicon layer is omitted. For details see Fig.~\ref{fig:lfcmos_layout}. Small p-implantations (p-stop) isolate the pixels from each other.}
\label{fig:lfcmos_pixel}
\end{center}
\end{figure}
Passive CMOS n-in-p pixel sensors in \SI{150}{\nano\meter} LFoundry \cite{lfoundry} technology were manufactured on high-resistivity Czochralski wafers. The resistivity of the substrate is at least \SI{2}{\kilo\ohm\centi\meter}, as specified by the foundry. Measurements suggest that the resistivity is between \num{5} and \SI{6}{\kilo\ohm\centi\meter} \cite{pohl_cmos}. The substrate was thinned to a thickness of \SI{100}{\micro\meter}. The backside was processed by Ion Beam Services (IBS) \cite{IBS}, including a backside implantation as well as a backside metallisation allowing for backside bias application.
The sensor consists of $\num{64} \times \num{64}$ pixels with a size of $\num{50} \times \SI{50}{\square\micro\meter}$, and has a total area of $\num{3.8} \times \SI{3.8}{\square\milli\meter}$.
Fig.~\ref{fig:lfcmos_pixel} shows a simplified schematic cross section of one pixel, Fig.~\ref{fig:lfcmos_layout} depicts the layout of the pixel matrix.
In order to investigate the charge collection properties and the pixel capacitance, various pixel designs were implemented. The left half of the pixel matrix consists of pixels with a regular n-implantation (n-well, denoted as NW), whereas the right half of the pixel matrix consists of pixels with an additional deep n-implantation (deep n-well, denoted as DNW). The size of the n-implantations varies in both dimensions between \SI{30}{\micro\meter} (top of the matrix) and \SI{15}{\micro\meter} (bottom of the matrix). To isolate the pixels from each other a small p-implantation (p-stop) is used. Moreover, a fine-pitch polysilicon layer encloses the n-implantations with the intention to increase the breakdown voltage, especially after irradiation.
The pixel matrix is surrounded by an n-implantation confining the active pixel region. In addition, six guard-rings isolate the pixels from the high voltage at the sensor edge.
The sensor is bump-bonded via solder bumps (by Fraunhofer IZM \cite{izm}) to the RD53A readout chip \cite{Garcia-Sciveres:2287593, Monteil:2019niy}, a prototype readout chip for the ATLAS ITk pixel detector.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.78\linewidth]{LFCMOS_layout_summary.pdf}
\caption{Left: Layout of the LFoundry passive CMOS pixel sensor. The left half of the pixel matrix consists of pixels with an n-implantation (NW), whereas the right half consists of pixels with an additional deep n-implantation (DNW). The size of the n-implantations varies in both dimensions between \SI{30}{\micro\meter} (top of the matrix) and \SI{15}{\micro\meter} (bottom of the matrix). Guard-rings isolate the pixels from the high voltage at the sensor edge. Right: Enlarged view of the different pixel designs.}
\label{fig:lfcmos_layout}
\end{center}
\end{figure}
\subsection{Pixel sensor irradiation}
The studied pixel detector has been step-wise irradiated to the target fluence to investigate the performance at different irradiation levels. In the first step, the detector was irradiated to a fluence of \SI{5e15}{\neq\per\square\centi\meter} at the MC40 cyclotron of the University of Birmingham \cite{Allport_2017} using \SI{27}{\mega\electronvolt} protons. In the second step, the detector was irradiated to a total fluence of \SI{1e16}{\neq\per\square\centi\meter} at the Proton Irradiation Site at the Bonn Isochronous Cyclotron \cite{wolf_ma} using \SI{14}{\mega\electronvolt} protons. The irradiations were performed uniformly in a cold environment and the device was left unpowered during irradiation. After each irradiation step the device was annealed for \SI{80}{\minute} at \SI{60}{\celsius}.
The corresponding total ionising doses\footnote{Important for surface damage affecting the readout chip.} created by protons were estimated to be approximately \SI{660}{\mega\radian} (Birmingham) and \SI{580}{\mega\radian} (Bonn).
\section{Leakage current measurements}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{IV_curves_official_v9.pdf}
\caption{Leakage current as a function of the (reverse) applied bias voltage before (solid line) and after irradiation (dashed lines) to a fluence of \SI{5e15}{\neq\per\square\centi\meter} and \SI{1e16}{\neq\per\square\centi\meter}. The grey dashed line indicates the maximum allowed leakage current of \SI{35}{\micro\ampere\per\square\centi\meter} after a fluence of \SI{5e15}{\neq\per\square\centi\meter} according to the ATLAS ITk specifications. The leakage current is normalised to the total area of the sensor.}
\label{fig:iv_curve}
\end{center}
\end{figure}
To test the functionality of the sensors and to determine their maximum operational voltage, the leakage current is measured as a function of the (reverse) bias voltage (IV-curve). Fig.~\ref{fig:iv_curve} shows the IV-curves of the tested sensor after bump-bonding, before and after irradiation. The IV-curve before irradiation was measured at room temperature, whereas the IV-curves after irradiation were measured at an environmental temperature of \SI{-25}{\celsius}. Before irradiation the maximum operational voltage is approximately \SI{220}{\volt}. After irradiation the sensor is still functional and no breakdown is visible up to the maximum tested operational voltage of \SI{350}{\volt}. Furthermore, the leakage current after a fluence of \SI{5e15}{\neq\per\square\centi\meter} is approximately \SI{23}{\micro\ampere\per\square\centi\meter} and meets the requirements for ATLAS ($<\SI{35}{\micro\ampere\per\square\centi\meter}$). As expected, the leakage current at a fluence of \SI{1e16}{\neq\per\square\centi\meter} is approximately twice as large as for a fluence of \SI{5e15}{\neq\per\square\centi\meter}.
The power dissipation of the sensor needed for a hit-detection efficiency larger than \SI{97}{\percent} (see Sec.~\ref{sec:eff}) is about \SI{7}{\milli\watt\per\square\centi\meter} (\SI{35}{\micro\ampere\per\square\centi\meter} at \SI{200}{\volt}) after a fluence of \SI{1e16}{\neq\per\square\centi\meter}. This is comparable to the power dissipation reported for 3D sensors \cite{Terzo:2744099}.
\section{Electronic noise}
An important parameter to quantify the performance of a sensor is the equivalent noise charge (ENC). The ENC distributions of the investigated pixel detector at different irradiation steps can be seen in Fig.~\ref{fig:noise_map}.
Before irradiation, an ENC of \SI{73}{\electrons} is measured. This is a value comparable to other planar sensor designs read out with the same amplifier chip. After irradiation, the ENC increases to about \SI{100}{\electrons}. The reason for this is most likely an increase in shot noise due to the higher leakage current after irradiation (approximately \SI{90}{\micro\ampere}, corresponding to \SI{22}{\nano\ampere} per pixel\footnote{Measured at a temperature of \SI{-17}{\celsius}.}). In addition, the performance of the analogue front-end is degraded by irradiation (i.e. the transconductance $g_m$ decreases) and is likely responsible for an additional noise contribution that is difficult to quantify. Further, it cannot be excluded that the detector capacitance increases after irradiation, which would also lead to an increase in noise.
Comparing different pixel designs, there is no significant difference in terms of noise observed before irradiation, although the measured pixel capacitances\footnote{Including contributions due to routing and bump-bonds.} are different for the various pixel geometries, \SI[separate-uncertainty=true]{33.5(2)}{\femto\farad} for NW30 pixels and \SI[separate-uncertainty=true]{22.4(2)}{\femto\farad} for NW15 pixels \cite{Kr_ger_2021}.
At a fluence of \SI{1e16}{\neq\per\square\centi\meter}, the noise of NW30 pixels is approximately \SI{8}{\percent} higher than for NW15 pixels.
Since this difference is small, the benefits in terms of noise reduction for pixels featuring small implants do not outweigh the disadvantages that arise in terms of hit-detection efficiency (see Sec.~\ref{sec:eff}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.7\linewidth]{Noise.pdf}
\caption{Equivalent noise charge distributions at different irradiation levels. The noise distributions for different pixel geometries are shown exemplarily for the NW30 and NW15 pixels. The distributions are fitted with a Gaussian function to extract the mean noise. The uncertainties of the estimated means are approximately \SI{2}{\electrons}.}
\label{fig:noise_map}
\end{center}
\end{figure}
\section{Hit-detection efficiency measurement}\label{sec:eff}
A crucial detector property is the hit-detection efficiency, i.e. the probability with which a hit (a particle traversing a pixel) is recognised by the detector. For the application as a tracking detector the hit-detection efficiency has to be high for efficient hit finding and track reconstruction. Especially after irradiation, the hit-detection efficiency is of interest, and for ATLAS ITk it is required to be above \SI{97}{\percent} \cite{CERN-LHCC-2017-021}.
To measure the hit-detection efficiency the device under test (DUT) is placed in a beam-telescope setup consisting of six high-resolution Mimosa26 planes (EUDET-type beam telescope) \cite{Jansen2016} and an ATLAS FE-I4 \cite{GARCIASCIVERES2011S155} re\-fe\-rence plane. The Mimosa26 planes provide a high spatial resolution of approximately \SI{3}{\micro\meter} \cite{Jansen2016} allowing a precise track reconstruction. However, the time resolution of the Mimosa26 sensors is limited due to their rolling shutter readout (duration of \SI{115.2}{\micro\second}). In contrast, the FE-I4 reference plane provides a very good time-stamping capability with a precision better than \SI{25}{\nano\second}.
The challenge during track reconstruction is to ensure a correct time assignment of the Mimosa26 tracks, which is needed for a proper hit-detection efficiency measurement. Therefore, the ATLAS FE-I4 plane is used as a time reference plane such that the tracking hits in the Mimosa26 planes spatially match the hit in the timing reference plane, which thus provides the time stamp for the track.
The hit-detection efficiency of the investigated sensor was measured using a minimum ionising electron beam provided by the ELSA test beam facility~\cite{Heurich:2016ilc} (\SI{2.5}{\giga\electronvolt}) and the DESY II test beam facility \cite{Diener_2019} (\SI{5}{\giga\electronvolt}).
A scintillator in front of the telescope setup generates a trigger signal when particles traverse the setup. An EUDET-type Trigger Logic Unit (TLU) \cite{tlu} is used to distribute and synchronise the trigger signals with the different readout systems. The Mimosa26 telescope is read out untriggered in continuous rolling-shutter mode using the \textit{pymosa} software~\cite{pymosa}. The DUT is read out triggered using the \textit{BDAQ53} software~\cite{Daas_2021}, and the ATLAS FE-I4 plane is read out triggered using the \textit{PyBAR} software~\cite{pybar}. The data is analysed using the \textit{beam telescope analysis} software~\cite{bta}, including clustering, detector alignment, and track reconstruction, as well as the final analysis of the hit-detection efficiency and the charge collection behaviour.
For all following measurements, the detector was tuned to a threshold of approximately \SI{1000}{\electrons} with a noise occupancy of less than \num{e-6} per pixel.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\linewidth]{Residuals.pdf}
\caption{(Unbiased) residual distribution in one dimension at the DUT. The data is shown on a logarithmic scale. The distribution is fitted with a Gaussian function. The grey dashed line illustrates the maximum distance between a hit and track intersection (with the DUT) at which a hit contributes to the efficiency.}
\label{fig:residuals}
\end{center}
\end{figure}
Fig.~\ref{fig:residuals} illustrates the (unbiased) residual distribution (distance between hit and track intersection) in the y-dimension at the DUT. The residuals are centred around zero which indicates a correct alignment of the detector planes. Due to multiple scattering the residual distribution is smeared out and can be approximated with a Gaussian function. The deviation from the Gaussian function towards the tails originates from the fact that the probability for large scattering angles is enhanced as described in Molière's theory~\cite{moliere}. From a fit a residual width (1-$\sigma$ width) of \SI{18.7}{\micro\meter} is extracted. This is in agreement with the expectation since the residual width for unbiased tracks is the quadratic sum of the intrinsic resolution of the DUT ($\frac{\mathrm{pixel\,\,pitch}}{\sqrt{12}}$) and the pointing resolution at the DUT (a few \si{\micro\meter}). The pointing resolution in this setup is slightly worsened by the additional material of the cooling infrastructure (cooling box for DUT and PCB cooling plate). However, the resolution is still sufficient for in-pixel studies.
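As a rough numerical cross-check using only the numbers quoted above, the intrinsic term amounts to $\frac{\SI{50}{\micro\meter}}{\sqrt{12}} \approx \SI{14.4}{\micro\meter}$, so the measured width of \SI{18.7}{\micro\meter} corresponds to a pointing resolution of $\sqrt{18.7^{2} - 14.4^{2}}\,\si{\micro\meter} \approx \SI{12}{\micro\meter}$ at the DUT, consistent with the multiple-scattering degradation described above.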
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{efficiency_irrad_vs_non_irrad_NW_30_25_larger_fontsize.pdf}
\caption{Hit-detection efficiency of NW-flavors}
\label{fig:eff_vs_bias_nw}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{efficiency_irrad_vs_non_irrad_DNW_30_25_larger_fontsize.pdf}
\caption{Hit-detection efficiency of DNW-flavors}
\label{fig:eff_vs_bias_dnw}
\end{subfigure}
\caption{Hit-detection efficiency as a function of bias voltage for different pixel flavors and different irradiation levels. The grey dashed line represents the requirement of an (in-time) efficiency larger than \SI{97}{\percent}. For the sake of clarity, not all pixel designs are shown. The quoted error bars are purely statistical. Left: NW-flavors. Right: DNW-flavors.}
\label{fig:eff_vs_bias}
\end{figure}
In order to reject noise hits (spatially uncorrelated) which artificially increase the hit-detection efficiency, hits are only considered as efficient if the residual of the track is smaller than a given distance (of \SI{120}{\micro\meter}). This efficiency search radius is illustrated in Fig.~\ref{fig:residuals}.
Fig.~\ref{fig:eff_vs_bias_nw} shows the hit-detection efficiency before and after irradiation as a function of the applied (reverse) bias voltage for the NW30 and NW25 pixel designs (regular n-implantation with a size of \num{30} and \SI{25}{\micro\meter}), Fig.~\ref{fig:eff_vs_bias_dnw} shows this for the DNW30 and DNW25 pixel designs (additional deep n-implantation with a size of \num{30} and \SI{25}{\micro\meter}). For the sake of clarity, other pixel designs are omitted here since they have a lower efficiency after irradiation.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{in_pixel_eff_1e16_summary.png}
\caption{In-pixel efficiency map after a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (left: NW30 and right: NW15) at a bias voltage of \SI{400}{\volt}.}
\label{fig:in_pixel_eff}
\end{center}
\end{figure}
Before irradiation, an efficiency larger than \SI{99.5}{\percent} is achieved already at a bias voltage of only \SI{5}{\volt}, and no significant difference across the various pixel designs is visible. After irradiation, the efficiency increases with increasing bias voltage. The measured efficiency after a fluence of \SI{5e15}{\neq\per\square\centi\meter} and a bias voltage of \SI{350}{\volt} is \SI[separate-uncertainty=true]{99.89(1)}{\percent} for the NW30 flavor and \SI[separate-uncertainty=true]{99.88(1)}{\percent} for the DNW30 flavor. This is well above the requirement of \SI{97}{\percent} after irradiation (grey dashed line in Fig.~\ref{fig:eff_vs_bias}). After a fluence of \SI{1e16}{\neq\per\square\centi\meter} the efficiency decreases further (especially for low bias voltages). However, at \SI{400}{\volt} an efficiency of \SI[separate-uncertainty=true]{99.79(1)}{\percent} for the NW30 and \SI[separate-uncertainty=true]{99.78(1)}{\percent} for the DNW30 flavor can still be achieved.
Especially at the highest measured fluence, flavors with a smaller implant size show a slightly lower efficiency\footnote{This is also true for the omitted (D)NW20 and (D)NW15 flavors.}. Furthermore, at low bias voltages, the hit-detection efficiency of designs with a deep n-well (same implant size) is higher compared to designs with only the standard n-well geometry, especially for smaller implant sizes.
Fig.~\ref{fig:in_pixel_eff} shows in-pixel efficiency maps (all data mapped onto a single pixel) at a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (NW30 and NW15) at a bias voltage of \SI{400}{\volt}. One can see that, for pixel designs with a smaller implant size, the efficiency is low at the pixel corners, which is due to the low electric field (and charge sharing) in this region.
\section{Charge measurements and charge-collection efficiency}
In addition to the hit-detection efficiency, the charge collection behaviour was studied during test beams.
The readout chip already provides internal charge information, called ToT (time over threshold). However, the precision of this measurement is not sufficient for a charge calibration (using radioactive sources) or for charge measurements during test beams. This problem is circumvented by using the so-called \textit{TDC method} \cite{pohl_phd}. This method makes use of the chip's HitOR signal (logical OR of all discriminator outputs), whose length is proportional to the collected charge. This signal is sampled with a \SI{640}{\mega\hertz} clock provided externally by the readout system (corresponding to a sampling granularity of about \SI{1.56}{\nano\second}), thus enabling a precise charge measurement.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\linewidth]{in_pixel_charge_1e16_summary.png}
\caption{In-pixel charge map in electrons after a fluence of \SI{1e16}{\neq\per\square\centi\meter} for two different pixel flavors (left: NW30 and right: NW15) at a bias voltage of \SI{400}{\volt}.}
\label{fig:in_pixel_charge}
\end{center}
\end{figure}
In Sec.~\ref{sec:eff} it was shown that, after irradiation, the efficiency for pixel designs with smaller implant sizes is low in the pixel corners where the electric field is low. The same behaviour (low charge in the pixel corners) is observed for the collected charge. The corresponding in-pixel charge maps (only events with a cluster size of 1), after a fluence of \SI{1e16}{\neq\per\square\centi\meter}, can be seen in Fig.~\ref{fig:in_pixel_charge}. This is in agreement with the expectations and explains the efficiency loss: a lower electric field after irradiation leads to more charge carrier trapping, and thus to a smaller collected charge, which in turn results in a lower efficiency for a given threshold.
In Fig.~\ref{fig:charge_spectra} the measured charge distributions for the different irradiation levels, measured using the NW30 pixel design, are depicted. The distributions follow a Landau function convoluted with a Gaussian function due to electronic noise. Furthermore, it is visible that the most probable value (MPV) extracted from a fit decreases with increasing fluence. The reasons for this are that a) after irradiation the sensor can no longer be fully depleted at reasonable bias voltages, and b) charge carriers are trapped during charge collection due to radiation damage of the bulk.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{Charge_irrad_vs_non_irrad_NW30.pdf}
\caption{Measured charge distributions for different irradiation levels for the NW30 flavor. The distributions are fitted with a Landau-Gauss convolution to extract the most probable value (MPV). Before irradiation a MPV of \SI[separate-uncertainty=true]{6590(170)}{\electrons} is measured at \SI{80}{\volt}. At a fluence of \SI{5e15}{\neq\per\square\centi\meter} a MPV of \SI[separate-uncertainty=true]{5030(50)}{\electrons} is measured at \SI{350}{\volt}, and at a fluence of \SI{1e16}{\neq\per\square\centi\meter} a MPV of \SI[separate-uncertainty=true]{3670(50)}{\electrons} is measured at \SI{400}{\volt}.}
\label{fig:charge_spectra}
\end{center}
\end{figure}
The collected charge was also studied as a function of the bias voltage, as illustrated in Fig.~\ref{fig:charge_vs_bias}. Each data point corresponds to the most probable value extracted from a fit to the measured charge distribution. The charge signal increases with bias voltage since the depleted volume extends with increasing bias voltage. Before irradiation, the amount of collected charge starts to saturate at approximately \SI{40}{\volt}, leading to a charge signal of about \SI{6600}{\electrons}. This indicates that the sensor is fully depleted at approximately \SI{40}{\volt}. Assuming that \num{73} electrons per \si{\micro\meter} (extracted from a GEANT4 simulation) are created within the depletion zone, this yields a silicon bulk thickness of approximately \SI{90}{\micro\meter}. This value is reasonable since the nominal thickness of \SI{100}{\micro\meter} also includes the metal layers with a thickness of a few \si{\micro\meter}. After irradiation, the amount of collected charge decreases due to the fact that the sensor can no longer be fully depleted and charge carrier trapping sets in. The measured charge signal after a fluence of \SI{1e16}{\neq\per\square\centi\meter} is approximately \SI{3700}{\electrons} at the highest measured bias voltage of \SI{400}{\volt}.
\begin{figure}
\begin{center}
\includegraphics[width=0.7\linewidth]{Charge_irrad_vs_non_irrad_bias_voltage_NW30_NW25.pdf}
\caption{Collected charge as a function of the bias voltage for different irradiation levels. Data points are the most probable values extracted from a fit to the measured charge distributions. The error bars originate from the fit. The y-axis on the right-hand side shows the charge collection efficiency (CCE).}
\label{fig:charge_vs_bias}
\end{center}
\end{figure}
The measured charge signal can be translated into a charge collection efficiency (CCE), as shown in Fig.~\ref{fig:charge_vs_bias} (axis on the right-hand side). The charge collection efficiency is obtained by dividing the measured charge by the maximum charge measured before irradiation (\SI{6600}{\electrons}). This yields a maximum charge collection efficiency of approximately \SI{80}{\percent} at a fluence of \SI{5e15}{\neq\per\square\centi\meter} and approximately \SI{55}{\percent} at a fluence of \SI{1e16}{\neq\per\square\centi\meter}.
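For example, at the highest fluence,
\[
\mathrm{CCE} = \frac{\SI{3670}{\electrons}}{\SI{6600}{\electrons}} \approx 0.56,
\]
in agreement with the quoted value of approximately \SI{55}{\percent}.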
\section{Conclusion and outlook}
The radiation tolerance of \SI{100}{\micro\meter} thin passive CMOS sensors fabricated in \SI{150}{\nano\meter} LFoundry technology has been investigated. The sensors are still functional even after a fluence of \SI{1e16}{\neq\per\square\centi\meter} and can be operated successfully. At this fluence a hit-detection efficiency of \SI[separate-uncertainty=true]{99.79(1)}{\percent} is measured (at \SI{400}{\volt}) for the NW30 design. The charge collection efficiency after the highest fluence is measured to be approximately \SI{55}{\percent} at the maximum tested bias voltage of \SI{400}{\volt}. In addition, the power dissipation the sensor needs to meet the ATLAS ITk efficiency requirements is comparable to that of 3D sensors.
This demonstrates that passive CMOS sensors are radiation tolerant and withstand a fluence of \SI{1e16}{\neq\per\square\centi\meter}, the expected fluence for the future innermost ATLAS pixel detector layer.
The performance of passive CMOS sensors in terms of noise and hit-detection efficiency equals that of conventional planar sensors.
Full-size (RD53-sized) passive CMOS sensors using the NW30 geometry for the ATLAS and CMS experiments at the HL-LHC have already been manufactured and are currently under investigation.
\section*{Acknowledgements}
We would like to thank LFoundry and Ion Beam Services (IBS) for the fabrication and the processing of the backside of the sensors.
We also thank Laura Gonella for making the irradiation at the Birmingham Irradiation Facility possible. Further, we would like to thank the HISKP group for making the irradiation at the Proton Irradiation Site in Bonn possible.
This project has received funding from the Deutsche Forschungsgemeinschaft DFG, under grant agreement no. WE 976/4-1, the German Ministerium f\"ur Bildung, Wissenschaft, Forschung und Technologie (BMBF) under contract no. 05H15PDCA9, the H2020 project AIDA-2020, under grant agreement no. 654168, and from a Marie Sklodowska-Curie ITN Fellowship of the European Union’s Horizon 2020 program under grant agreement no. 675587-STREAM.
The measurements leading to these results have been performed at the Test Beam Facility at DESY Hamburg (Germany), a member of the Helmholtz Association (HGF).
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}
\setcounter{tocdepth}{5}
\setcounter{secnumdepth}{5}
\setlistdepth{9}
\renewlist{enumerate}{enumerate}{9}
\setlist[enumerate,1]{label=\arabic*)}
\setlist[enumerate,2]{label=\alph*)}
\setlist[enumerate,3]{label=(\roman*)}
\setlist[enumerate,4]{label=(\arabic*)}
\setlist[enumerate,5]{label=(\Alph*)}
\setlist[enumerate,6]{label=(\Roman*)}
\setlist[enumerate,7]{label=\arabic*}
\setlist[enumerate,8]{label=\alph*}
\setlist[enumerate,9]{label=\roman*}
\renewlist{itemize}{itemize}{9}
\setlist[itemize]{label=$\cdot$}
\setlist[itemize,1]{label=\textbullet}
\setlist[itemize,2]{label=$\circ$}
\setlist[itemize,3]{label=$\ast$}
\setlist[itemize,4]{label=$\dagger$}
\setlist[itemize,5]{label=$\triangleright$}
\setlist[itemize,6]{label=$\bigstar$}
\setlist[itemize,7]{label=$\blacklozenge$}
\setlist[itemize,8]{label=$\prime$}
\pagestyle{fancy}
\fancyhf{}
\chead{ \begin{Center}
\textit{No Author 1, 2, or 3 Last Name for Blind Peer Review}
\end{Center}}
\renewcommand{\headrulewidth}{0pt}
\setlength{\topsep}{0pt}\setlength{\parindent}{0pt}
\renewcommand{\arraystretch}{1.3}
\usepackage{nomencl}
\usepackage{etoolbox}
\makenomenclature
\renewcommand\nomgroup[1]{%
\item[\bfseries
\ifstrequal{#1}{I}{Sets and Indices}{%
\ifstrequal{#1}{P}{Parameters}{%
\ifstrequal{#1}{V}{Variables}{}}}%
]}
\newcommand\Nomenclature[2]{\nomenclature[#2]{#1}{#2}}
\usepackage[parfill]{parskip}
\begin{document}
\chead{}
\lhead{
\textit{Proceedings of the 2021 IISE Annual Conference\\
A. Ghate, K. Krishnaiyer, K. Paynabar, eds.}
}
\begin{Center}
{\fontsize{16pt}{19.2pt}\selectfont \textbf{Impacts of Privately Owned Electric Vehicles on Distribution System Resilience: A Multi-agent Optimization Approach}\\}
\end{Center}\par
\begin{Center}
{Sina Baghali, Zhaomiao Guo}\\
{Department of Civil, Environmental, and Construction Engineering, University of Central Florida}
\end{Center}\par
\begin{Center}
{\fontsize{12pt}{19.2pt}\selectfont \textbf{Abstract}}
\end{Center}
We investigate the effects of private electric vehicles (EVs) on the resilience of distribution systems after disruptions. We propose a framework of network-based multi-agent optimization problems with equilibrium constraints (N-MOPEC) to consider the decentralized decision making of stakeholders in transportation and energy systems. To solve the high-dimensional non-convex problem, we develop an efficient computational algorithm based on exact convex reformulation. Numerical studies are conducted to illustrate the effectiveness of our modeling and computational approach and to draw policy insights. The proposed modeling and computational strategies could provide a solid foundation for the future study of power system resilience with private EVs in coupled transportation and power networks.
\textbf{Keywords}\\
Distribution system resilience, electric vehicles, incentive design, multi-agent optimization, convex reformulation.
\section{Introduction}
Power system resilience reflects the ability of a system to withstand and rapidly recover from unexpected disruptions \cite{khazeiynasab2020resilience,lei2018routing}. An increasing market penetration of private electric vehicles (EVs) provides new opportunities for enhancing power system resilience due to their mobility and fast regulating characteristics \cite{liu2016ev}. In this paper, we investigate the possible impacts of private EVs on assisting the restoration process of distribution systems (DSs).
Current research on DS restoration with EVs has focused on a centralized perspective without considering the non-cooperative travel and charging behavior of private EVs, or of other decentralized power system stakeholders \cite{haggi2019review}. For example, researchers in \cite{lei2018routing,xu2019resilience} consider EVs as a type of mobile energy source (MES), along with mobile energy storage systems and mobile generators, in DS restoration. The authors of \cite{jamborsalamati2019enhancing} investigate the centralized real-time management of DS restoration with EV aggregators and distributed generators (DGs). In all the mentioned studies, a centralized entity, i.e., the distribution system operator (DSO), is in full control of the DS and the restoration strategies, making decisions for DGs and EVs. However, centralized control techniques are known to be challenging in terms of costly communication infrastructure and single-point failures \cite{nejad2019distributed}. In addition, privately owned EVs, as well as other energy providers in DSs, may have their own objectives and cannot be treated as manageable elements.
Moreover, as large numbers of private EVs participate in Vehicle-to-Grid (V2G) services, power and transportation systems become more interdependent. However, most existing studies on power system restoration with EVs take a power-system-centric perspective and either completely neglect transportation systems \cite{jamborsalamati2019enhancing,momen2020using,sun2018optimal} or simplify their modeling by assuming constant predefined travel times on routes \cite{lei2018routing,yao2018transportable,yao2019resilient}. This stream of literature overlooks the inherent relationship between traffic distribution and travel time, which is critical especially when large numbers of EVs are available. As a consequence, the spatial and temporal interdependence between transportation and power systems cannot be properly investigated.
We address the aforementioned gaps by proposing a framework of network-based multi-agent optimization problems with equilibrium constraints (N-MOPEC) in a coupled distribution and transportation system. The main contribution is two-fold: (1) the modeling framework captures the decentralized behavior of stakeholders during the distribution system restoration process as well as the spatial and temporal interdependence between transportation and power systems, which allows for rigorous system analyses and optimal V2G incentive design; (2) to facilitate large-scale computation, we reformulate the multi-agent optimization problems as an exact convex optimization problem, which can be efficiently solved by commercial nonlinear solvers.
\vspace{-0.2cm}
\section{Mathematical Modeling}\label{sec:MAth_modeling}
We consider the following stakeholders for distribution system resilience: the DSO, EV drivers, a charging station aggregator (CSA), and DG owners. The decentralized decision making of each stakeholder is modeled in this section. Throughout this paper, set $\mathcal{I}$ represents the DS nodes, and $\mathcal{I}^{\mathrm{CS}}$, $\mathcal{I}^\mathrm{DG}$, and $\mathcal{I}^L$ are its subsets representing charging station nodes, DG nodes, and load nodes, respectively. The 24-hour time horizon is represented by set $\mathcal{T}$, and $\mathcal{L}$ denotes the set of distribution lines. The transportation network is represented as a directed graph $\mathcal{G}(\mathcal{N}, \mathcal{A})$, within which EV drivers starting from origin set $\mathcal{R}$ select from charging station set $\mathcal{S}$ to provide grid restoration services and/or to charge.
\underline{{\bf DG Owners Modeling: }}Each DG $i$ ($\in \mathcal{I}^{DG}$) determines its generation quantity $p_{i,t}^{DG}$ for each time step $t$ ($\in \mathcal{T}$) to optimize its profits. Because an individual DG does not have the market power to influence the locational electricity prices $\rho_{i,t}$, the decision making of all DG owners can be aggregated into a single optimization problem, as formulated in model (\ref{mod:sp}).
\begin{subequations}\label{mod:sp}
\small{
\begin{align}
\max_{\substack{\boldsymbol{p}^{DG}} \geq \boldsymbol{0}} & \sum_{i \in \mathcal{I}^{DG}}\sum_{t \in \mathcal{T}} \big( \rho_{i,t} p_{i,t}^{DG} - C_i(p_{i,t}^{DG}) \big)
\label{obj:DG} \\
\text{ \ s.t.} & \ \underbar{P}_{i,t}^{DG} \leq p_{i,t}^{DG} \leq \bar{P}_{i,t}^{DG}, \ \forall i \in \mathcal{I}^{DG}, t \in \mathcal{T} \label{cons:DG_max}
\end{align}}
\end{subequations}
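For concreteness, the following is a minimal JuMP sketch of model (\ref{mod:sp}) for a single DG node; the prices, bounds, quadratic cost, and solver choice are illustrative assumptions, not values from this paper.
\begin{verbatim}
using JuMP, Ipopt

T   = 1:24                     # time steps (set T)
rho = fill(30.0, 24)           # assumed locational prices rho_t ($/pu)
Pmin, Pmax = 0.0, 3.0          # assumed generation bounds
C(p) = 20.0 * p + 0.5 * p^2    # assumed convex production cost C(.)

m = Model(Ipopt.Optimizer)
@variable(m, Pmin <= p[t in T] <= Pmax)      # p_t^DG
@objective(m, Max, sum(rho[t] * p[t] - C(p[t]) for t in T))
optimize!(m)
value.(p)                      # optimal generation profile
\end{verbatim}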
\setlength{\headsep}{0in}
\fancyhf{}
\chead{ \begin{Center}
\textit{S. Baghali and Z. Guo}
\end{Center}}
Objective (\ref{obj:DG}) maximizes the profits of the DG owners, calculated as the total revenue $\sum_{i \in \mathcal{I}^{\mathrm{DG}}}\sum_{t \in \mathcal{T}} \rho_{i,t} p_{i,t}^{DG}$ minus the total production costs $\sum_{i \in \mathcal{I}^{\mathrm{DG}}}\sum_{t \in \mathcal{T}} C_i(p_{i,t}^{DG})$. We assume $C_i(\cdot)$ to be a convex function of $p_{i,t}^{DG}$ \cite{yan2018robust}. Constraint (\ref{cons:DG_max}) sets the lower and upper bounds ($\underbar{P}_{i,t}^{DG}$/$\bar{P}_{i,t}^{DG}$) for the power generation at each DG node $i$ at time $t$. When DGs can be disconnected from the system, $\underbar{P}_{i,t}^{DG} = 0$.
\underline{{\bf DSO Modeling: }} One of the key responsibilities of a DSO after disruptions is to restore services as soon as possible. Given different characteristics of the loads (e.g., hospital, emergency responses) and limited resources available, the DSO may need to prioritize the restoring procedures. We assume that the DSO intends to maximize the importance of loads being served within the system while minimizing the cost of energy purchased, which can be formulated in model $(\ref{mod:DSO})$.
\begin{subequations}
\small{
\label{mod:DSO}
\begin{align}
&\underset{\substack{\boldsymbol{p^d,v} \geq \boldsymbol{0},\\ \boldsymbol{p^s}, \boldsymbol{pf}, \boldsymbol{qf}}}{\max} && \sum_{i \in \mathcal{I}^{L}} \sum_{t \in \mathcal{T}} \omega_{i,t} p_{i,t}^d - \sum_{i \in \mathcal{I}^{\mathrm{DG}}\cup \mathcal{I}^{\mathrm{CS}}} \sum_{t \in \mathcal{T}} \rho_{i,t} p_{i,t}^s \label{obj:DSO} \\
&\text{\quad \ s.t.} && \sum_{l \in \mathcal{L}} pf_{l,t} \cdot \mathrm{LT}_{l,i} - \sum_{l \in \mathcal{L}} pf_{l,t} \cdot \mathrm{LF}_{l,i} = p_{i,t}^d - p_{i,t}^s, \ \forall i \in \mathcal{I}, t \in \mathcal{T} \label{cons:DSO_P_flow}\\
& &&\sum_{l \in \mathcal{L}} qf_{l,t} \cdot \mathrm{LT}_{l,i} - \sum_{l \in \mathcal{L}} qf_{l,t} \cdot \mathrm{LF}_{l,i} = q_{i,t}^d - q_{i,t}^s,\ \forall i \in \mathcal{I}, t \in \mathcal{T}\label{cons:DSO_Q_flow}\\
& && 0 \leq p_{i,t}^d \leq \bar{P}_{i,t}^d, \ \forall i \in \mathcal{I}, t \in \mathcal{T} \label{cons:DSO_d_bound}\\
& && q_{i,t}^d = (\bar{Q}_{i,t}^d/\bar{P}_{i,t}^d) \cdot p_{i,t}^d , \ \forall i \in \mathcal{I}, t \in \mathcal{T} \label{cons:DSO_Q_Pfactor} \\
& && pf_{l,t}^2 + qf_{l,t}^2 \leq \lambda_{l,t} \cdot (S_{l}^{\mathrm{max}})^2,\ \forall l \in \mathcal{L}, t \in \mathcal{T} \label{cons:DSO_line_stat}\\
& && v_{\mathrm{FN}_{l},t}-v_{\mathrm{TN}_l,t} \leq (1-\lambda_{l,t}) \cdot K + 2 \cdot(r_{l} \cdot pf_{l,t} + x_{l} \cdot qf_{l,t}), \ \forall l \in \mathcal{L}, t \in \mathcal{T} \label{cons:DSO_bigK1} \\
& && v_{\mathrm{FN}_{l},t}-v_{\mathrm{TN}_l,t} \geq (\lambda_{l,t}-1) \cdot K + 2 \cdot(r_{l} \cdot pf_{l,t} + x_{l} \cdot qf_{l,t}), \ \forall l \in \mathcal{L}, t \in \mathcal{T} \label{cons:DSO_bigK2} \\
& && (V_i^{\mathrm{min}})^2 \leq v_{i,t} \leq (V_i^\mathrm{max})^2, \ \forall i \in \mathcal{I}, t \in \mathcal{T} \label{cons:DSO_voltage_bounds}
\end{align}}
\end{subequations}
Objective (\ref{obj:DSO}) maximizes the weighted sum of served load demand $\sum_{i \in \mathcal{I}^{L}} \sum_{t \in \mathcal{T}} \omega_{i,t} p_{i,t}^d$, where the weights $\omega_{i,t}$ determine the priority of the loads $p_{i,t}^d$; $\sum_{i \in \mathcal{I}^{\mathrm{DG}}\cup \mathcal{I}^{\mathrm{CS}}} \sum_{t \in \mathcal{T}} \rho_{i,t} p_{i,t}^s$ is the cost of energy $p_{i,t}^s$ purchased from the DG/EV owners who want to participate in the restoration services. Operational constraints are formulated in (\ref{cons:DSO_P_flow})-(\ref{cons:DSO_voltage_bounds}), which are adapted from the Dist-Flow equations proposed in \cite{baran1989network, guo2019impacts}. Active and reactive power flow balances are modeled in (\ref{cons:DSO_P_flow}) and (\ref{cons:DSO_Q_flow}), respectively. $pf_{l,t}$/$qf_{l,t}$ are the active/reactive line flows of distribution line $l$ ($\in \mathcal{L}$) at time $t$ ($\in \mathcal{T}$), and $q_{i,t}^d$ is the reactive load demand at node $i$ at time step $t$. $\mathrm{LF}_{l,i}$/$\mathrm{LT}_{l,i}$ are mapping matrices whose entries equal 1 if line $l$ starts from/ends at node $i$. Constraint (\ref{cons:DSO_d_bound}) limits the served load demand $p_{i,t}^d$ to be no more than the expected load demand $\bar{P}_{i,t}^d$. Constraint (\ref{cons:DSO_Q_Pfactor}) maintains the same power factor for restored loads as the ratio between the expected active ($\bar{P}^d_{i,t}$) and reactive ($\bar{Q}^d_{i,t}$) load demands. Constraint (\ref{cons:DSO_line_stat}) models the line capacity $S_l^\mathrm{max}$ considering the binary line status $\lambda_{l,t}$, with 0 indicating a line outage. Constraints (\ref{cons:DSO_bigK1}) and (\ref{cons:DSO_bigK2}) relate the squared nodal voltages $v_{i,t}$ along each line, where $\mathrm{FN}_{l}$ and $\mathrm{TN}_l$ denote the start and end nodes of line $l$, respectively, and $r_l$/$x_l$ are the resistance/reactance of line $l$. $K$ is a large constant ensuring that these two constraints are always satisfied when line $l$ is disconnected at $t$ (i.e., $\lambda_{l,t} = 0$). Constraint (\ref{cons:DSO_voltage_bounds}) defines the acceptable voltage range $[V_i^{\mathrm{min}}, V_i^{\mathrm{max}}]$.
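As an illustration of how the switched constraints (\ref{cons:DSO_line_stat})-(\ref{cons:DSO_bigK2}) look in code, the following sketch models a single line at a single time step; the line status $\lambda$ enters as data, and the voltage bounds and the value of $K$ are illustrative assumptions.
\begin{verbatim}
using JuMP, Ipopt

m = Model(Ipopt.Optimizer)
r, x  = 5.75e-3, 2.93e-3        # line parameters as in the four-node system
Smax, K, lam = 2.0, 1e3, 1.0    # capacity, big number, line status (0 = outage)

@variable(m, pf); @variable(m, qf)       # active/reactive line flow
@variable(m, 0.81 <= v_from <= 1.21)     # squared voltages; assumed
@variable(m, 0.81 <= v_to   <= 1.21)     # (V_min, V_max) = (0.9, 1.1)

@constraint(m, pf^2 + qf^2 <= lam * Smax^2)
@constraint(m, v_from - v_to <= (1 - lam) * K + 2 * (r * pf + x * qf))
@constraint(m, v_from - v_to >= (lam - 1) * K + 2 * (r * pf + x * qf))
\end{verbatim}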
\underline{{\bf CSA Modeling: }} Communicating with and controlling EVs/charging stations individually is challenging for the DSO. Therefore, we assume a private CSA is responsible for coordinating the power transactions between all charging stations and the DS. The CSA aims to maximize its profits \cite{duan2020bidding} while meeting the required charging demand of each EV. The decision making of the CSA is formulated in model (\ref{model:agg}).
Objective (\ref{obj:Aggr}) maximizes the CSA's profits, calculated as the revenue of selling electricity to the DSO ($\sum_{i \in \mathcal{I}^{\mathrm{CS}}}\sum_{t \in \mathcal{T}} \rho_{i,t}p_{i,t}^{\mathrm{CS}}$) minus the incentives ($\sum_{r \in \mathcal{R}} \sum_{s \in \mathcal{S}} \sum_{e \in \mathcal{E}} \alpha_{rs}^e q_{ri(s)}^{\prime e}$) and the battery degradation compensation (${\sum_{i \in \mathcal{I}^{CS}} \sum_{e \in \mathcal{E}} \sum_{t \in \mathcal{T}} C_{i,r,t}^{\mathrm{deg},e}(p_{i,r,t}^e)}$) paid to EV drivers. The charging station energy provision $p_{i,t}^\mathrm{CS}$ can take negative values, indicating that EVs are charging instead of discharging; in this case, $\rho_{i,t}p_{i,t}^{\mathrm{CS}}$ is the cost for the CSA to charge EVs. The power prices $\rho_{i,t}$ and the incentives $\alpha_{rs}^e$ provided to EV drivers are endogenously determined by the market in the proposed modeling framework. Incentives $\alpha_{rs}^e$ depend on the EV type $e$, which is defined by the arriving state of charge (SOC) and the charging needs. For example, an EV with a higher arriving SOC ($\mathrm{soc}_{r,e,T^{\text{arr}}_{r,e}}$) and lower charging needs should receive higher incentives due to its higher value to the DS restoration. We adapt the Ah-throughput counting model \cite{peterson2010lithium} to model the degradation cost of EVs ($C_{i,r,t}^{\mathrm{deg},e}(p_{i,r,t}^e)$) as a set of linear constraints; an identical strategy is adopted in \cite{guo2019impacts}, to which one can refer for details. Constraints (\ref{con:Aggr_soc_calculation})-(\ref{con:Aggr_P_CS}) specify the SOC requirements that the CSA needs to fulfill. We discretize EVs into homogeneous groups $e$ based on their travel and charging characteristics, including arriving/departing times ($T_{r,e}^{\text{arr}}$/$T_{r,e}^{\text{dep}}$), arriving SOCs ($\mathrm{SOC}_{r,e}^{\text{arr}}$), and minimum departing SOCs ($\mathrm{SOC}_{e}^{\text{dep}}$). Constraint (\ref{con:Aggr_soc_calculation}) models the dynamics of the aggregated SOC ($\mathrm{soc}_{r,e,t}$) of the EVs from $r$; the battery capacity $\mathrm{Cap}^e$ normalizes the charged/discharged energy of the EVs $\sum_{i \in \mathcal{I}^{\mathrm{CS}}} p_{i,r,t}^e$. Constraint (\ref{con:Aggr_soc}) limits the maximum and minimum SOC ($\overline{\mathrm{SOC}}_{e}$/$\underline{\mathrm{SOC}}_{e}$) for each group of EVs based on battery specifications, where $q_{ri(s)}^{\prime e}$ is the flow of group-$e$ EVs departing from node $r$ that select charging station $i$ (connected to node $s$ in the transportation system). The arrival SOC and the minimum departure SOC of EVs are constrained in (\ref{con:Aggr_soc_arrival}) and (\ref{con:Aggr_soc_dep}), respectively. Constraint (\ref{con:Aggr_P_CS}) determines the total power supply/demand ($p_{i,t}^\mathrm{CS}$) of charging station $i \in \mathcal{I}^{\mathrm{CS}}$ at time $t$ by summing the normalized charging/discharging of the EVs at each station, where $S^{\mathrm{base}}$ is the nominal capacity of the DS.
\begin{subequations}\label{model:agg}
\small{
\begin{align}
&\underset{\substack{\boldsymbol{p^{\mathrm{CS}},q^\prime,soc}}}{\max} && \sum_{i \in \mathcal{I}^{\mathrm{CS}}} \sum_{t \in \mathcal{T}} \rho_{i,t} p_{i,t}^{\mathrm{CS}} - \sum_{r \in \mathcal{R}} \sum_{s \in \mathcal{S}} \sum_{e \in \mathcal{E}} \alpha_{rs}^e q_{ri(s)}^{\prime e} - {\sum_{i \in \mathcal{I}^{CS}} \sum_{r \in \mathcal{R}} \sum_{t \in \mathcal{T}}\sum_{e \in \mathcal{E}} C_{i,r,t}^{\mathrm{deg},e}(p_{i,r,t}^e) } \label{obj:Aggr}\\
&\text{ \ s.t.} && \mathrm{soc}_{r,e,t} = \mathrm{soc}_{r,e,t-1} - \sum_{i \in \mathcal{I}^{\mathrm{CS}}}p_{i,r,t}^e/\mathrm{Cap}^e, \ \forall r \in \mathcal{R}, e \in \mathcal{E}, t \in (T_{r,e}^{\text{arr}}, T_{r,e}^{\text{dep}}] \label{con:Aggr_soc_calculation}
\\
& && \sum_{i \in \mathcal{I}^{\mathrm{CS}}}q_{ri(s)}^{\prime e}\underbar{$\mathrm{SOC}$}_{e} \leq \mathrm{soc}_{r,e,t} \leq \sum_{i \in \mathcal{I}^{\mathrm{CS}}}q_{ri(s)}^{\prime e} \overline{\mathrm{SOC}}_{e}, \ \forall r \in \mathcal{R}, e \in \mathcal{E}, t \in [T_{r,e}^{\text{arr}}, T_{r,e}^{\text{dep}}] \label{con:Aggr_soc}
\\
& &&\mathrm{soc}_{r,e,T^{\text{arr}}_{r,e}} = \sum_{i \in \mathcal{I}^{\mathrm{CS}}}q_{ri(s)}^{\prime e} \mathrm{SOC}_{r,e}^{\text{arr}}, \ \forall r \in \mathcal{R}, e \in \mathcal{E} \label{con:Aggr_soc_arrival}\\
& &&\mathrm{soc}_{r,e,T^{\text{dep}}_{r,e}} \geq \sum_{i \in \mathcal{I}^{\mathrm{CS}}}q_{ri(s)}^{\prime e} \mathrm{SOC}_{e}^{\text{dep}}, \ \forall r \in \mathcal{R}, e \in \mathcal{E} \label{con:Aggr_soc_dep}
\\
& &&p_{i,t}^{\mathrm{CS}} = \sum_{r \in \mathcal{R}} \sum_{e \in \mathcal{E}} p_{i,r,t}^{e}/S^{\mathrm{base}}, \ \forall i \in \mathcal{I}^{\mathrm{CS}}, t \in [T_{r,e}^{\text{arr}}, T_{r,e}^{\text{dep}}] \label{con:Aggr_P_CS}
\end{align}}
\end{subequations}
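A minimal sketch of the SOC dynamics (\ref{con:Aggr_soc_calculation}) for one EV group at one station follows; the capacity and the power profile are illustrative assumptions, with negative power meaning the EVs charge.
\begin{verbatim}
Cap = 60.0                        # assumed battery capacity Cap^e (kWh)
p   = [-7.0, -7.0, 6.0, 6.0]      # assumed power drawn from the group (kW)
soc = accumulate((s, pw) -> s - pw / Cap, p; init = 0.6)
# soc is approx [0.717, 0.833, 0.733, 0.633]:
# charging raises the SOC, V2G discharge lowers it
\end{verbatim}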
\underline{{\bf EV Drivers Modeling: }} Both travel and charging behavior should be considered when modeling EV drivers. We have modeled the charging behavior of EVs, based on their SOC requirements, as part of the CSA's responsibilities in (\ref{con:Aggr_soc_calculation}-\ref{con:Aggr_soc_dep}). In this section, we model the routing and charging location choices of EVs in the transportation system.
The utility function $U_{rs}^e$ of a driver of type $e$ selecting station $s$ is formulated in (\ref{eq:util}) \cite{guo2019impacts, TTE2021}. EV drivers make charging location choices based on four components: locational attractiveness $\beta_{0,s}$, travel time disutility $-\beta_1 tt_{rs}$, the charging cost/revenue from charging/discharging their stored energy $\beta_2 \alpha_{rs}^e$, and a random term $\epsilon$. In this study, we adopt a multinomial logit model for charging location choices, in which $\epsilon$ follows an extreme value distribution.
\begin{equation}
\small{
U_{rs}^e = \beta_{0,s} -\beta_1 tt_{rs} + \beta_2 \alpha_{rs}^e + \epsilon \label{eq:util}}
\end{equation}
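Under the extreme value assumption on $\epsilon$, the station choice probabilities implied by (\ref{eq:util}) take the multinomial logit form $P_{rs}^e = e^{U_{rs}^e}/\sum_{s'} e^{U_{rs'}^e}$. A small numerical sketch with illustrative (assumed) coefficients:
\begin{verbatim}
beta0 = [1.0, 0.8]       # locational attractiveness of two stations
tt    = [20.0, 10.0]     # illustrative travel times
alpha = [5.0, 3.0]       # illustrative V2G incentives
b1, b2 = 0.10, 0.05      # assumed coefficients beta_1, beta_2

U = beta0 .- b1 .* tt .+ b2 .* alpha
P = exp.(U) ./ sum(exp.(U))     # choice probabilities; sum(P) == 1
\end{verbatim}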
The destination choices of EVs ($q_{rs}^e$) and the path travel times ($tt_{rs}$) are coupled; see utility function (\ref{eq:util}). To capture this coupling, we adapt the classic combined distribution and assignment (CDA) model \cite{sheffi1985urban} to model the destination and route choices, as shown in (\ref{mod:cda}). Notice that for each $\tau \in \mathcal{T}^\mathrm{arr}$, a CDA model is solved for the traffic pattern at time $\tau$.
The objective function (\ref{obj:cda_obj}) consists of two parts: the first is the summation of the areas under all the link travel cost functions $tt_a(\cdot)$ (e.g., the Bureau of Public Roads (BPR) function); the second contains the entropy of the traffic distribution $q_{rs}^e(\ln q_{rs}^e - 1)$ and the utility terms (excluding travel time) in (\ref{eq:util}). Objective (\ref{obj:cda_obj}) is constructed in this form to guarantee that the optimal solutions of (\ref{mod:cda}) are consistent with the first Wardrop principle \cite{wardrop1952some} and the multinomial logit destination choice assumption; for the technical details of this claim, one can refer to \cite{sheffi1985urban}. Constraint (\ref{cons:cda_v_x}) calculates the link flows $\boldsymbol{v}$ by summing the link flows of EVs $x_{rs}^\tau$ and of conventional vehicles $\bar{x}_{rs}^\tau$ traveling at time $\tau$. Constraints (\ref{cons:cda_x_q}-\ref{cons:cda_x_bar_q}) enforce vehicle flow conservation at each node for the travel demands of EVs (${q}_{rs}^e$) and conventional vehicles ($\bar{q}_{rs}^\tau$), respectively. $A$ is the node-link incidence matrix of the transportation network, and $E_{rs}$ is the origin-destination (OD) incidence vector. Constraint (\ref{cons:cda_q_d}) guarantees that the EV traffic flows distributed to all destinations $s$ sum to the total EV travel demand from $r$, $Q_r^e$. The equilibrium travel time for each OD pair $rs$ can be calculated as $tt_{rs} \doteq \eta^{\tau}_{rs,r} - \eta^{\tau}_{rs,s}$, where $\eta^{\tau}_{rs,n}$ is the dual variable of (\ref{cons:cda_x_q}).
\begin{subequations}\label{mod:cda}
\small{
\begin{align}
& \min_{\boldsymbol{x,\bar{x}},
\boldsymbol{q} \geq \boldsymbol{0} }
& & & &\sum_{a \in \mathcal{A}} \int_{0}^{v_a^{\tau}} tt_a(u) \mathrm{d}u + \frac{1}{\beta_1} \sum_{r \in \mathcal{R}, s \in \mathcal{S}}\sum_{e \in \mathcal{E}^{\tau}} q_{rs}^{e}\left(\ln q_{rs}^{e} - 1 - \beta_2 \alpha_{rs}^e - \beta_{0,s}\right) \label{obj:cda_obj}\\
& \text{\quad \ s.t.}
& & & & \boldsymbol{v}^\tau = \sum_{r \in \mathcal{R}, s \in \mathcal{S}} \boldsymbol{x}_{rs}^\tau + \sum_{r \in \bar{\mathcal{R}}, s \in \bar{\mathcal{S}}}\boldsymbol{\bar{x}}_{rs}^{\tau}, \ \forall \tau \in \mathcal{T}^\mathrm{arr} \label{cons:cda_v_x}\\
& & & \hspace{-1cm}(\boldsymbol{\eta}_{rs}^{\tau})& & A\boldsymbol{x}_{rs}^\tau = \sum_{e \in \mathcal{E}^{\tau}}q_{rs}^{e} E_{rs}, \ \forall r \in \mathcal{R}, s \in \mathcal{S}, \tau \in \mathcal{T}^\mathrm{arr} \label{cons:cda_x_q}\\
& & & & & A\boldsymbol{\bar{x}}_{rs}^{\tau} = \bar{q}_{rs}^{\tau}E_{rs}, \; \forall r \in \bar{\mathcal{R}}, s \in \bar{\mathcal{S}}, \tau \in \mathcal{T}^\mathrm{arr}\label{cons:cda_x_bar_q}\\
& & & & & \sum_{s \in \mathcal{S}} q_{rs}^{e} = Q_r^e, \forall r \in \mathcal{R}, e \in \mathcal{E}^{\tau}\label{cons:cda_q_d}
\end{align}}
\end{subequations}
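For concreteness, the link travel cost $tt_a(\cdot)$ in (\ref{obj:cda_obj}) is typically the BPR function; a one-line sketch with its standard (assumed) parameters:
\begin{verbatim}
bpr(t0, v, cap; a = 0.15, b = 4) = t0 * (1 + a * (v / cap)^b)
bpr(10.0, 15.0, 20.0)   # ~10.47: free-flow time 10, flow 15 on a 20 veh/h link
\end{verbatim}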
\underline{{\bf Market Clearing Conditions: }} The hourly market clearing conditions can be stated as (\ref{eq:equi}). In a stable market, the power purchased and supplied by the DSO needs to be balanced with the locational power generation and the power load, respectively. Condition (\ref{eq:equi_gene}) guarantees that the total energy purchased by the DSO equals the total energy generated at each node. Condition (\ref{eq:equi_char}) enforces the balance between the EV flows of type $e$ demanded and supplied at each location $s$ from $r$. The locational electricity prices $\rho_{i,t}$ and the EV incentives $\alpha_{rs}^e$ are obtained as the dual variables of the respective market clearing conditions.
\begin{subequations}
\small{
\begin{align}
& (\rho_{i,t}) && p_{i,t}^s = p_{i,t}^{DG} + p_{i,t}^{\mathrm{CS}}, \; \forall i \in \mathcal{I}^{\mathrm{DG}} \cup \mathcal{I}^{\mathrm{CS}}, \ \forall t \in \mathcal{T} \label{eq:equi_gene}\\
& (\alpha_{rs}^e) && q_{rs}^{\prime e}= q_{rs}^{e}, \ \forall r \in \mathcal{R}, s \in \mathcal{S},e \in \mathcal{E} \label{eq:equi_char}
\end{align}
\label{eq:equi}}
\end{subequations}
\vspace{-0.7cm}
\section{Convex Reformulation}
The decision making of each stakeholder and the market clearing conditions presented in Section \ref{sec:MAth_modeling} are interdependent and need to be solved simultaneously to achieve the equilibrium solutions. However, due to the non-convex nature of the N-MOPEC, directly solving it is challenging. In this section, we propose an exact convex reformulation that recovers the optimal primal and dual variables efficiently. We observe that models (\ref{mod:sp})$\sim$(\ref{model:agg}) and (\ref{mod:cda}) are convex optimization problems with completely separable constraints. The objective functions of these models are almost separable, except for the multiplication terms of primal and dual variables in the market clearing conditions (\ref{eq:equi}). This type of problem can be reformulated by linearly combining all the objective functions and constraints and applying the reverse procedure of Lagrangian relaxation to the market clearing conditions (\ref{eq:equi}) \cite{TTE2021}. Accordingly, the N-MOPEC can be reformulated as the single convex optimization problem (\ref{cons:combined}).
{\small
\begin{align}
\underset{\substack{(\boldsymbol{p^d,p^{DG},v,x,\bar{x},q,q^\prime})\geq \boldsymbol{0},\\
\boldsymbol{p^s,pf,qf,p,p^{\mathrm{CS}},\mathrm{soc}}}}{ \max} & \sum_{t \in \mathcal{T}}\big(\sum_{i \in \mathcal{I}} \omega_{i,t} p_{i,t}^d - \sum_{i \in \mathcal{I}^{DG}} C(p_{i,t}^{DG}) - \sum_{i \in \mathcal{I}^\mathrm{CS}, r \in \mathcal{R}, e \in \mathcal{E}} C_{i,r,t}^{deg,e}\big)
- { \frac{\beta_1}{\beta_2} \sum _{a \in \mathcal{A}, \tau \in \mathcal{T}^{\mathrm{arr}}} \int _{0}^{v_a^\tau} tt_a(u) \mathrm{d}u}
- \frac{1}{\beta _2}\sum_{\tau \in \mathcal{T}^{\mathrm{arr}}}\sum_{r \in \mathcal{R}, s \in \mathcal{S}} \sum_{e \in \mathcal{E}^\tau} q_{rs}^e\left(\ln q_{rs}^e - 1 - \beta_{0,s}\right)\nonumber
\\
\text{s.t} & \quad (\ref{cons:DG_max}), (\ref{cons:DSO_P_flow})\sim(\ref{cons:DSO_voltage_bounds}), (\ref{con:Aggr_soc_calculation})\sim(\ref{con:Aggr_P_CS}), (\ref{cons:cda_v_x})\sim(\ref{cons:cda_q_d}), (\ref{eq:equi}) \label{cons:combined}
\end{align}}%
\vspace{-0.5cm}
\section{Numerical Simulation}\label{sec:Numerical}
We test the proposed models and reformulation techniques on the four-node test system shown in \textbf{Figure \ref{fig:four_node_system}}. Nodes 2 and 3 are the connecting points of the distribution and transportation systems, representing the locations of charging stations. Nodes 2 $\sim$ 4 are load nodes, where the loads at node 3 have higher priority ($\omega_3 = 60$) than those at the other load nodes ($\omega_{2} = \omega_{4} = 50$). All distribution lines have the same capacity of $S_{l}^{\mathrm{max}} = 2$ (pu), and the transportation links have a capacity of 20 vehicles/hour. EVs fall into one of three groups based on their arrival SOCs: low ($\mathrm{SOC}^{\mathrm{arr}}_{r,1} = 0.3$), medium ($\mathrm{SOC}^{\mathrm{arr}}_{r,2} = 0.6$), and high ($\mathrm{SOC}^{\mathrm{arr}}_{r,3} = 0.8$). We assume that two traffic flows of 30 EVs/hour, with 10 EVs/hour from each group $e$, depart from transportation nodes 1 and 4 and arrive at the charging stations at random hours $\tau$ ($\in \mathcal{T}^\mathrm{arr}$). We consider an unexpected outage of line (1--2) at $t = 10$, disconnecting the high-capacity DG at node 1 from the system; the line comes back on at $t = 20$. During this period, the generation from the DG at node 4 is not enough to serve the load demand of all the nodes, and we consider two levels of $\mathrm{SOC}^{\mathrm{dep}}$ (0.7 and 0.5) to investigate the participation of EVs in DS restoration.
\begin{figure}
\centering
\begin{tikzpicture}[
font=\sf \scriptsize,
>=LaTeX,
operator/.style={circle,draw,inner sep=-0.5pt,minimum height =0.5cm, fill=red!100, font = \large},
ct/.style={circle,draw,line width = .75pt,minimum width=0.8cm,inner sep=1pt,fill=white},
dot/.style = {circle,fill, inner sep=0.01mm, fill=black!15, node contents={}},
arr/.style = {red, thick, dashed, ->},
arrow/.style = {-Stealth,blue, thick},
]
\node[ct, name = dg1] {DG};
\node[ct, right = 7cm of dg1](dg2){DG};
\node[operator]at(9.4,0.5)(ln){} -- node[pos = 0.98, right]{Transportation nodes}(9.8,0.5);
\draw [arrow] (9.2,0) -- node[pos = 1, right]{\textcolor{black}{ Traffic flow}}(9.8,0);
\draw [arr] (9.2,-0.5) -- node[pos = 1, right]{\textcolor{black}{ Links}}(9.8,-0.5);
\node[operator, right = 2.2cm of dg1](l2){\textcolor{white}{2}};
\node[operator, right = 1.5cm of l2](l3){\textcolor{white}{3}};
\node[operator]at(2,-1)(l1){\textcolor{white}{1}};
\node[operator, right= 3cm of l1](l4){\textcolor{white}{4}};
\draw [arrow] (1.2,-1) -- node[pos = 0.19, left, font=\large]{30}(l1);
\draw [arrow] (6.3,-1) -- node[pos = 0.19, right , font=\large]{30}(l4);
\draw [line width=0.30mm] (0.85,-0.6) -- node[pos = 0.98, above, font=\large]{1}(0.85,0.6);
\draw [line width=0.30mm] (2.86,-0.6) -- (2.86,-.25);
\draw [line width=0.30mm] (2.86,0.25) -- node[pos = 0.89, above, font=\large]{2}(2.86,0.6);
\draw [line width=0.30mm] (4.87,-0.6) -- (4.87,-0.25);
\draw [line width=0.30mm] (4.87,0.25) -- node[pos = 0.89, above, font=\large]{3}(4.87,0.6);
\draw [line width=0.30mm] (6.88,-0.6) -- node[pos = 0.98, above, font=\large]{4}(6.88,0.6);
\draw[line width=0.30mm] (dg1) -- node[pos = 0.405]{}(l2);
\draw[line width=0.30mm] (l2) -- node[pos = 0.405]{}(l3);
\draw[line width=0.30mm] (l3) -- node[pos = 0.405]{}(dg2);
\draw [out=60, in=120, red, thick, dashed,->] (l2) to (l3);
\draw [out=240, in=-60, red, thick, dashed,->] (l3) to (l2);
\draw [out=90, in=210, red, thick, dashed,->] (l1) to (l2);
\draw [out=-90, in=30, red,thick, dashed,->] (l2) to (l1);
\draw [out=-40, in=95,red, thick, dashed,->] (l3) to (l4);
\draw [out=145, in=-90, red, thick, dashed,->] (l4) to (l3);
\node[above=0.05 cm of dg1]{$\bar{P}^\mathrm{DG}$=3 (pu)};
\node[below=0.05 cm of dg1]{Cost: 20 (\$/pu)};
\node[above=0.05 cm of dg2]{$\bar{P}^\mathrm{DG}$=1 (pu)};
\node[below=0.05 cm of dg2]{Cost: 2 (\$/pu)};
\node at (1.6,0.2) (r1) {$r_1 = 5.75\times10^{-3}$};
\node at(1.6,-0.2)(x1){$x_1 = 2.93\times10^{-3}$};
\node[right=0.6cm of r1](r2){$r_2 = 3.076\times10^{-2}$};
\node[right=0.6cm of x1](x2){$x_2 = 1.567\times10^{-2}$};
\node[right=0.5cm of r2](r3){$r_3 = 2.284\times10^{-2}$};
\node[right=0.5cm of x2](x3){$x_3 = 1.163\times10^{-2}$};
\end{tikzpicture}
\captionsetup{labelfont=bf}
\captionsetup{justification=raggedright,singlelinecheck=false}
\caption{Four-node test system}
\label{fig:four_node_system}
\end{figure}
The load pickup results presented in \textbf{Figures \ref{fig:case2_load_soc70} and \ref{fig:case2_load_soc50}} show that the system is able to provide the energy for the higher-priority load at node 3. With lower $\mathrm{SOC}^{\mathrm{dep}}$, more of the lower-priority loads are picked up: the total load loss decreases from 2.316 pu to 1.056 pu when $\mathrm{SOC}^{\mathrm{dep}}$ decreases from 0.7 to 0.5. The power injections of the charging stations in \textbf{Figures \ref{fig:case2_node_power_injection_soc70} and \ref{fig:case2_node_power_injection_soc60}} show more participation of EVs during the disruption with $\mathrm{SOC}^{\mathrm{dep}} = 0.5$ than with $\mathrm{SOC}^{\mathrm{dep}} = 0.7$. Energy prices during the disruption increase drastically compared to normal operation for all the nodes (see \textbf{Figures \ref{fig:case2_energy_price_soc70} and \ref{fig:case2_energy_price_soc50}}) because the system has to leverage more expensive energy sources during this time. Comparing the disruption periods for both $\mathrm{SOC}^{\mathrm{dep}}$ levels, energy prices are slightly lower when the charging stations are able to participate in restoring the load. Comparing \textbf{Figure \ref{fig:case2_incentive_soc70}} and \textbf{Figure \ref{fig:case2_incentive_soc50}}, EVs with a lower required $\mathrm{SOC}^{\mathrm{dep}}$ receive higher incentives because they are more flexible in providing energy to the system. In addition, we see higher incentives for groups 1/2 departing from node 1 than from node 4, because they arrive at different times with different energy prices and/or DS restoration requirements. The charging station selection of EVs varies only slightly between the $\mathrm{SOC}^{\mathrm{dep}}$ levels (see \textbf{Figures \ref{fig:case2_EV_traffic_soc70} and \ref{fig:case2_EV_traffic_soc50}}), because the incentives to go to different locations for each group of EVs from the same origin are so close that travel time dominates.
\begin{figure}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[\label{fig:case2_load_soc70}]{\includegraphics[width=\linewidth ]{Figures/case2/case2_load_soc70.png}}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth, ]{Figures/case2/case2_node_power_injection_soc70.png}\label{fig:case2_node_power_injection_soc70}}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth, ]{Figures/case2/case2_energy_price_soc70.png}\label{fig:case2_energy_price_soc70}}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[\label{fig:case2_load_soc50}]{\includegraphics[width=\linewidth ]{Figures/case2/case2_load_soc50.png}}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth ]{Figures/case2/case2_node_power_injection_soc50.png}\label{fig:case2_node_power_injection_soc60}}
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth, ]{Figures/case2/case2_energy_price_soc50.png}\label{fig:case2_energy_price_soc50}}
\end{minipage}
\captionsetup{justification=raggedright,singlelinecheck=false}
\caption{Distribution system results: Expected and picked up load (a) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.7 (d) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.5. Nodal power injection (b) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.7 (e) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.5. Nodal energy price (c) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.7 (f) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.5}
\label{fig:case2_DS_results}
\end{figure}
\begin{figure}
\begin{minipage}[b]{0.24\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth ]{Figures/case2/case2_incentive_soc70.png}\label{fig:case2_incentive_soc70}}
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth ]{Figures/case2/case2_incentive_soc50.png}\label{fig:case2_incentive_soc50}}
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth ]{Figures/case2/case2_EV_traffic_soc70.png}\label{fig:case2_EV_traffic_soc70}}
\end{minipage}
\begin{minipage}[b]{0.24\linewidth}
\centering
\subfloat[]{\includegraphics[width=\linewidth ]{Figures/case2/case2_EV_traffic_soc50.png}\label{fig:case2_EV_traffic_soc50}}
\end{minipage}
\captionsetup{justification=raggedright,singlelinecheck=false}
\caption{Transportation system results: Charging station incentives for EVs (a) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.7 (b) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.5. EV traffic flows (c) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.7 (d) $\mathrm{SOC}^{\mathrm{dep}}$ = 0.5}
\label{fig:case2_transportation_results}
\end{figure}
\section{Discussion}\label{sec:conclusion}
We investigate the impact of private EVs on distribution system restoration in an N-MOPEC framework, where each stakeholder, including the DG owners, the DSO, the EV drivers, and the CSA, optimizes its own objective in a coupled transportation and distribution system. We reformulate the multi-agent problems as an equivalent convex optimization problem, which can be efficiently solved by commercial nonlinear solvers. Numerical results on a small-scale test system show that the participation of EVs helps to reduce the load loss during the restoration process, and that the CSA could provide different incentives to EV drivers based on their value for the distribution system restoration.
This work can be extended in multiple directions. First, the proposed modeling framework can be used to design optimal incentives for EVs to participate in distribution system restoration services and to guide the planning and expansion of transportation and power systems. Second, stochastic modeling of renewable energy sources, EV arrival and departure times, and arrival SOCs can be further examined. Third, decomposition-based algorithms can be developed based on the convex reformulation to facilitate computation for extremely large-scale systems.
\setstretch{0.9}
\bibliographystyle{IEEEtran}
\section{Introduction}
Writing fast code is not the domain of humans; rather, it is the domain of symbolic engines. Many core numerical routines, such as the FFTW suite for Fast Fourier Transforms, make use of symbolic calculations to vastly outperform purely handwritten methods \cite{frigo2005design}. Additionally, techniques like Herbie show how to improve numerical stability in a semi-automated fashion via symbolic rule applications \cite{panchekha2015automatically}. However, these techniques currently require sophisticated programmers to build domain-specific and scalable code optimizers. In this work we explore the question: could these mathematical code optimizations be applied in a general, automated fashion to a whole high-performance programming language?
To meet this goal, we developed Symbolics.jl, a high-performance symbolic-numeric computing library with a type-dispatch system to give the generality necessary to target the full Julia programming language. Section \ref{sec:generality} shows how Symbolics.jl's simple and non-intrusive term interface solves the Expression Problem in symbolic computing and allows for extending the core functionality without inhibiting performance. Section \ref{sec:rewriting} showcases how this generic term interface allows for multiple complementary term rewriting systems to coexist, which allows for mixing modern e-graph code optimizers with traditional simplifiers. Section \ref{sec:codegen} discusses the code generation capabilities of Symbolics.jl. Given that our language is the JIT-compiled Julia language, the emitted code is compiled and executed in a fast runtime. This allows for both fast debug cycles and for using Julia's well-optimized numerical ecosystem with built-in parallelism in the generated code. Section \ref{sec:results} details instances where scientific codes can be automatically optimized using this mechanism.
\section{Generality without sacrificing performance}
\label{sec:generality}
Many programming systems suffer from the Expression Problem \cite{FF}, which is the inability to specialize functions on types not defined when a module is compiled. Dynamic multiple dispatch solves this by allowing functions to be specialized for types that are loaded at runtime, effectively allowing developers to decouple data representations from actions on them \cite{bezanson2017julia}. Given that the Julia ecosystem significantly leverages multiple dispatch, we used it to develop a similarly generic yet high-performance system for symbolic computing. However, we note that the ideas in this paper can also be implemented in any language with sufficiently powerful generic programming capabilities, such as Haskell or Common Lisp.
The core of a symbolic library is its representation of mathematical terms. The choice of term representation can impose restrictions on generality. For example, the simplest term representation can be found in Lisp-like systems: terms are quoted expressions, which, in turn, are simply lists of lists or atoms. While this is an elegant and simple solution, it has a few shortcomings, most importantly:
\begin{enumerate}
\vspace{-8pt}
\item The user can, at best, define one overloading of $+$ and $*$ for all expressions, but expressions can be of different types. This is a symptom of the Expression Problem.
\vspace{-8pt}
\item The size of a term grows linearly with the number of operations unless terms are simplified during construction, which can be expensive using this data structure.
\end{enumerate}
\vspace{-10pt}
A solution to the first shortcoming is to use a parameterized \texttt{Term\{T\}(f, args)} struct to encode terms. Here, \texttt{T} is a type parameter, the symbolic type of the expression. We define operators, such as \texttt{+} and \texttt{*}, on the \texttt{Term\{<:Number\}} (numbers), and leave them open to be extended for non-number \texttt{Term}s. This allows the users to overload subsequent operations using Julia's multiple dispatch mechanism, and thus specialize the symbolic behavior in a manner that is dependent on the type of object being acted on. The solution is further refined later in this section, after a motivating example.
To solve the second shortcoming, Symbolics.jl employs a number of constructor-based simplification mechanisms. Multiplication and addition of numbers are the most common operations, yet simplifying commutativity and associativity in a rule-based system takes a long time. Instead, we use an idea from SymEngine\footnote{\url{https://symengine.org/design/design.html}} which is to formulate a canonical form in terms of \texttt{Add} and \texttt{Mul} types, which simplify expressions upon construction. \texttt{Add} represents a linear combination of terms: it stores the numeric coefficients and their corresponding terms in a dictionary as values and keys, respectively. \texttt{Mul} stores a product of factor terms: it stores bases and the corresponding exponents in a dictionary as keys and values, respectively. This allows us to use $O(1)$ dictionary lookups to simplify repeated addition and multiplication. In the best case, they take up $O(1)$ space, while \texttt{Term} would take $O(n)$ space for $n$ operations.
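To make the idea concrete, a stripped-down sketch of the dictionary-backed \texttt{Add} follows; this is our toy version, not the actual Symbolics.jl implementation.
\begin{verbatim}
struct MyAdd
    coeff::Number            # constant part of the linear combination
    dict::Dict{Any,Number}   # term => numeric coefficient
end

function addterms(a::MyAdd, b::MyAdd)
    d = copy(a.dict)
    for (t, c) in b.dict
        d[t] = get(d, t, 0) + c   # repeated terms merge via O(1) lookups
    end
    MyAdd(a.coeff + b.coeff, d)
end

a = MyAdd(1, Dict{Any,Number}(:x => 2))
addterms(a, a)   # (1 + 2x) + (1 + 2x) == 2 + 4x
\end{verbatim}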
Having introduced these types, one may think we have lost the generality provided by a common term type on which all symbolic manipulation code can rely. However, this generality is regained by defining the following set of generic functions, which form an interface that a term should satisfy:
\begin{enumerate}
\vspace{-8pt}
\item \texttt{istree(x)} -- returns \texttt{true} if \texttt{x} is a term object.
\vspace{-8pt}
\item \texttt{operation(x)} -- returns the head (the function) of the term.
\vspace{-8pt}
\item \texttt{arguments(x)} -- returns a vector of arguments of the term.
\vspace{-8pt}
\item \texttt{symtype(x)} -- returns the symbolic type of the term.
\vspace{-8pt}
\item \texttt{similarterm(x, f, args[, symtype])} -- constructs a term similar in type to \texttt{x} with \texttt{f} as head and \texttt{args} as arguments.
\end{enumerate}
For example, for a term \texttt{t} of type \texttt{Add}, \texttt{istree(t)} returns true, and \texttt{operation(t)} returns the \texttt{+} generic function. \texttt{arguments(t)} returns all the terms of the linear combination, each multiplied by its coefficient, sorted according to an arbitrarily chosen total ordering. Finally, \texttt{symtype(t)} returns the appropriate symbolic type, which is set when \texttt{t} is created.
Term manipulation code can use \texttt{operation} and \texttt{arguments} after checking to see if \texttt{istree} returns \texttt{true} on an object. Such code can also use \texttt{similarterm} function to create a term. In fact, \texttt{Add} and \texttt{Mul} were added to our system retroactively based on the above interface, and the term manipulation codes continued to work as they did. The next section shows an example of our rule-based rewriting system. The system uses the above interface to access and construct terms, and thus does not require knowledge about the internal storage of terms.
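For illustration, the following toy term type satisfies the interface above; the generic functions are declared locally here for self-containedness, whereas in practice they are extended from the library.
\begin{verbatim}
struct Toy
    f                      # head (a function)
    args::Vector{Any}      # argument terms
end

istree(x) = false
istree(::Toy) = true
operation(t::Toy) = t.f
arguments(t::Toy) = t.args
symtype(::Toy) = Number
similarterm(::Toy, f, args, T = Number) = Toy(f, args)

t = Toy(+, Any[1, 2])
istree(t), operation(t), arguments(t)   # (true, +, Any[1, 2])
\end{verbatim}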
\section{Term rewriting}
\label{sec:rewriting}
The abstract term interface allows multiple rewrite mechanisms to co-exist as part of the same symbolic platform. Here we describe two core rewriting systems that work on top of the term abstraction: a classical term rewriter, which composes deterministic functions that turn an expression into a different one, and a novel e-graph-based expression rewriting system, which can apply bi-directional rules and optimize the rewritten expressions to minimize a cost function in pursuit of specific goals.
\subsection{Classical rewriting}
Classical rewriting is useful for traversing the expression tree in a specific way. In this regime, a rewriter is simply a function which takes an expression and returns a modified expression. Symbolics.jl provides a macro-based syntax that creates pattern-matching rewriters which act similarly to those described by Sussman and Hanson \cite{sdf}. To illustrate how pattern matching is performed, we demonstrate the macro on the double-angle formula.
\vspace{-5pt}
\begin{verbatim}
r = @rule sin(2(~x)) => 2sin(~x)*cos(~x)
\end{verbatim}
\vspace{-5pt}
Here we use the \texttt{@rule} macro to create a rewriter to apply the canonical double-angle formula rule. \texttt{\textasciitilde x} on the left-hand side of the rule is called a ``slot'' and captures any object in the appropriate position in the expression tree. When \texttt{\textasciitilde x}\ appears on the right-hand side of the rule, the matched object is used in its place to perform the rewrite.
\vspace{-5pt}
\begin{verbatim}
@syms a::Real
r(sin(2(deg2rad(a)))) # => 2cos(deg2rad(a))*sin(deg2rad(a))
r(sin(3a)) # => nothing (the rule does not match)
\end{verbatim}
\vspace{-5pt}
Symbolics.jl contains a rewriter-combinator library for composing multiple rules. Briefly, \texttt{Chain([r1,r2])} combines two rewriters into one that applies \texttt{r1} and then \texttt{r2}. \texttt{IfElse(cond, r1, r2)} is a conditional rewriter. \texttt{Prewalk(r)} and \texttt{Postwalk(r)} take a rewriter \texttt{r} and produce a rewriter that traverses the tree and applies \texttt{r} to each tree node in pre-order or post-order, respectively. These walkers can also be run with multithreading to rewrite subtrees in parallel. Finally, \texttt{Fixpoint(r)} is a rewriter that applies \texttt{r} until there are no changes. The implementation of simplification on terms of \texttt{Number} subtypes uses 66 rules written in the rule-rewriting language, which are strung together by means of rewriter combinators. \label{sec:simplify}
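A short sketch of composing rules with these combinators follows; the module path \texttt{SymbolicUtils.Rewriters} reflects recent SymbolicUtils.jl versions and may differ across releases.
\begin{verbatim}
using SymbolicUtils
using SymbolicUtils.Rewriters: Chain, Postwalk, Fixpoint

@syms x::Real
r1 = @rule sin(2 * ~x) => 2 * sin(~x) * cos(~x)
r2 = @rule sin(~x)^2 + cos(~x)^2 => 1

# Apply both rules at every node, repeating until nothing changes.
rw = Fixpoint(Postwalk(Chain([r1, r2])))
\end{verbatim}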
\subsection{E-graph-based rewriting}
Classical rewriting falls short when the user's goal is to minimize a certain cost function and the system of rewrite rules comprises many rules interacting with one another. It is not straightforward to transform an axiomatic formal system of equational rules into a Noetherian (terminating) term-rewriting system; doing so is known to require a lot of user reasoning \cite{dershowitz1993taste}. To circumvent these issues, there has been newfound excitement in using equality-saturation-based rewriting engines for such requirements. This novel technique allows users to define efficient term-rewriting systems with equational rules without having to worry about termination or the ordering of rules, resulting in algebraically compositional rewrite systems and efficient algorithms for minimizing cost functions on chains of expression rewrites. The Metatheory.jl \cite{Cheli2021} Julia package provides a generic rewriting backend relying on the \textit{equality saturation} algorithm and data structures called \textit{e-graphs} (equality graphs) \cite{egg}. The intuitive mechanism behind \emph{e-graph rewriting} is explained briefly in Figure \ref{fig:egraph}. For more details, we refer the reader to \cite{Cheli2021} and \cite{egg}.
\begin{figure}[H]
\centering
\includegraphics[width=0.6\textwidth]{egraphs}
\caption{Equality saturation constructs the e-graph from a set of rules applied to an input expression. The four depicted e-graphs represent the process of equality saturation for the equivalent ways to write $a * (2 * 3) / 6$. The dashed boxes represent equivalence classes, and regular boxes represent e-nodes.}
\label{fig:egraph}
\end{figure}
\section{High-performance code generation}
\label{sec:codegen}
Using Symbolics.jl, one can generate Julia code from symbolic expressions at run time, just-in-time compile them, and then execute them in the same session. This type of metaprogramming is much more convenient for mathematical code than macro-based metaprogramming. We expose a generic \texttt{toexpr} that turns expressions into executable Julia code. For convenience in generating sophisticated code, we have a library of term types that represent assignments, let-blocks, functions, array construction, and parallel map-reduce.
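As a small example of this round trip (using \texttt{build\_function}, which builds on \texttt{toexpr}; the exact output shape may vary across versions):
\begin{verbatim}
using Symbolics

@variables x y
expr = x^2 + sin(y)

f_expr = build_function(expr, x, y)  # a Julia Expr for (x, y) -> x^2 + sin(y)
f = eval(f_expr)                     # JIT-compile it in the same session
f(2.0, 0.0)                          # 4.0
\end{verbatim}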
\section{Applications and results}
\label{sec:results}
\hspace{2em}\textbf{Overall speedup.} When tracing into the mass matrix computation of a rigid body dynamics system with 7 degrees of freedom, Symbolics.jl takes 0.0073 seconds, while SymPy takes 17.3 seconds, a speedup of roughly 2370$\times$ in a real-world application. Reproducible code can be found in the repository\footnote{\url{https://github.com/JuliaSymbolics/SymbolicUtils.jl/pull/254}}.
\textbf{Speedup from \texttt{Add} and \texttt{Mul}.} In a synthetic benchmark which generates a random expression of 1400 terms using $+$ and $*$, we obtain a speedup of 113$\times$ compared to rule-based simplification when using the \texttt{Add} and \texttt{Mul} types described in Section \ref{sec:generality}. Further details on this benchmark can be found in the repository\footnote{\url{https://github.com/JuliaSymbolics/SymbolicUtils.jl/pull/154#issuecomment-754302695}}.
\textbf{E-graph-based optimization for fewer CPU cycles.} We developed a set of equality rules and a cost function to generate mathematically equivalent expressions that can be computed in fewer CPU cycles. To test the effectiveness of these techniques in a real-world scenario, we used the BioNetGen format to read in a 1122-ODE model of B Cell Antigen Signaling \cite{barua2012computational} and simplified its 24388 terms using Symbolics.jl. The generated code (explained in Section \ref{sec:codegen}) for the right-hand side accelerated from 15.023 $\mu$s to 7.372 $\mu$s per execution after the optimization, effectively halving the time required to solve the highly stiff ODEs. We have published the code for reproducing this benchmark in a GitHub Gist\footnote{\url{https://gist.github.com/shashi/a696020c6e65e1a3abfdbd74a3e6909c}}.
\textbf{Tracing and optimizing a PDE discretization.}
When coupled with the ability to automatically generate symbolic expressions by tracing Julia code, this code generation process becomes an effective automated code optimization tool. We took a reaction-diffusion PDE discretization code written in native Julia, evaluated it with symbols, extracted the Jacobian in the form of a sparse matrix (rather than letting the solver use automatic differentiation), generated code containing multi-threaded parallelism, and achieved an overall 157$\times$ improvement in the solver time. This process required approximately 5 lines of Symbolics.jl code added to the PDE solver\footnote{ModelingToolkit.jl was created to automate the application of these optimizations to differential equation models~\cite{modelingtoolkit}}. The fully reproducible example can be found in the tutorial section of the documentation\footnote{Tutorial: ``Automated Sparse Parallelism of Julia Functions via Tracing'' (v0.1)}.
\section{Acknowledgements}
The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Numbers DE-AR0001222 and DE-AR0001211, and by NSF grant OAC-1835443. It was also funded in part by DARPA under agreement number HR0011-20-9-0016 (PaPPa).
\bibliographystyle{unsrt}
\section{Introduction}
The harmonic heat flow was introduced by Eells and Sampson in \cite{EellsSampson}. They used it to prove one of the first general existence results for harmonic maps between Riemannian manifolds. Since then the harmonic heat flow has been an important tool in many existence results for harmonic maps. It has also been the subject of much investigation in its own right.
Suppose $(M,g)$ and $(N,h)$ are Riemannian manifolds and $f \colon M \to N$ a smooth map. The harmonic heat flow is an evolution equation on one-parameter families of smooth maps $(f_t \colon M \to N)_{t\in [0,\infty)}$ that continuously deforms $f$ into a harmonic map. The parameter $t$ is often thought of as a time parameter. The harmonic heat flow equation is
\begin{align}\label{eq:heatflow}
\begin{split}
\der{f_t}{t} &= \tau(f_t)\\
f_0 &= f.
\end{split}
\end{align}
Here $\tau(f_t)$ is the tension field of $f_t$ (see \Cref{sec:heatflowpreliminaries}). Eells and Sampson prove in \cite{EellsSampson} (with contributions by Hartman in \cite{Hartman}) that if $M$ is compact and $N$ is complete and has non-positive curvature, then a solution of \Cref{eq:heatflow} exists for all $t\geq 0$. Moreover, if the images of the maps $f_t$ stay within a compact subset of $N$, then the harmonic heat flow converges, for $t\to\infty$, to a harmonic map $f_\infty \colon M\to N$ that is homotopic to $f$.
In this note we prove that when the limiting map satisfies a certain non-degeneracy condition (which will be elaborated on in \Cref{sec:heatflowpreliminaries}), the rate of convergence of the harmonic heat flow is exponential.
\begin{theorem}\label{thm:exponentialconvergence}
Let $(M,g)$ and $(N,h)$ be Riemannian manifolds with $M$ compact and with $N$ complete and of non-positive curvature. Let $(f_t)_{t\in [0,\infty)}$ be a solution to the harmonic heat flow equation. Assume that the maps $f_t$ converge to a limiting harmonic map $f_\infty \colon M \to N$, as $t\to\infty$, and assume that $f_\infty$ is a non-degenerate critical point of the Dirichlet energy functional. Then there exist constants $a,b>0$ such that
\[
\norm*{\der{f_t}{t}}_{L^2(f_t^*TN)} \leq a \cdot e^{-b \cdot t}
\]
for all $t\geq 0$. Moreover, the exponential decay rate (the constant $b$) depends only on $f_\infty$.
\end{theorem}
The exponential convergence rate of the harmonic heat flow has been observed before in several different settings. For example, in \cite{Topping} Topping proved that the harmonic heat flow for maps between 2-spheres converges exponentially fast in $L^2$ as $t\to\infty$. Similarly, in \cite{Wang} it is shown that the heat flow for mappings from the unit disk in $\mathbb{R}^2$ into closed Riemannian manifolds converges exponentially fast in $H^1$ when we assume that the Dirichlet energy along the heat flow is small.
Our result shows that this exponential convergence behaviour is actually present in a large class of examples. For instance, if $(N,h)$ has negative curvature, then any harmonic map into $N$ that does not map into the image of a geodesic is a non-degenerate critical point of the energy. Another example is provided by equivariant harmonic maps mapping into symmetric spaces of non-compact type. A result of Sunada (\cite{Sunada}) implies that such harmonic maps are non-degenerate critical points of the energy if and only if they are unique (see \cite[Lemma 2.1]{Slegers1}).
As a corollary to \Cref{thm:exponentialconvergence} we obtain that the Dirichlet energies along the harmonic heat flow also converge exponentially fast. For a smooth map $f\colon (M,g) \to (N,h)$ we denote by $E(f)$ its Dirichlet energy (see \Cref{sec:heatflowpreliminaries}).
\begin{corollary}\label{cor:convergenceenergy}
Let $(f_t)_{t\in[0,\infty)}$, $f_\infty$ and $b>0$ be as in \Cref{thm:exponentialconvergence}. Then there exists a constant $a'>0$ such that for all $t\geq 0$ we have
\[
\abs{E(f_t) - E(f_\infty)} \leq a'\cdot e^{-2b\cdot t}.
\]
\end{corollary}
\begin{acknowledgements}
The author wishes to thank Prof. Ursula Hamenst\"adt for many useful discussions. The author was supported by the IMPRS graduate program of the Max Planck Institute for Mathematics.
\end{acknowledgements}
\section{Preliminaries}\label{sec:heatflowpreliminaries}
We briefly introduce the concepts related to harmonic maps that we will need in our proof. We follow mostly the presentation given in \cite{EellsLemaireSelectedTopics} (see also \cite{EellsSampson}).
Let $(M,g)$ and $(N,h)$ be Riemannian manifolds and assume $M$ is compact. For any vector bundle $E\to M$ we denote by $\Gamma^k(E)$ the Banach space of $k$-times continuously differentiable sections of $E$. For any smooth map $f \colon M \to N$ let us denote by $\nabla$ the pullback connection on $f^*TN \to M$ induced by the Levi-Civita connection of $N$. By taking the tensor product with the Levi-Civita connection on $M$ we obtain an induced connection on the bundle $T^*M\otimes f^*TN$ which we will also denote by $\nabla$.
A smooth map $f \colon (M,g) \to (N,h)$ is a \textit{harmonic map} if it is a critical point of the Dirichlet energy
\[
E(f) = \frac{1}{2} \int_M \norm{df}^2 \vol_g.
\]
Here we consider $df$ as a section of the bundle $T^*M \otimes f^*TN$ that is equipped with the metric induced by the metrics $g$ and $h$. The \textit{tension field} of $f$ is the smooth section of $f^*TN$ that is defined as
\[
\tau(f) = \tr_g \nabla df = \sum_{i=1}^m (\nabla_{e_i} df)(e_i)
\]
where $(e_i)_{i=1}^m$ is any local orthonormal frame of $TM$ and $\nabla$ is the connection on $T^*M\otimes f^*TN$. A map $f \colon (M,g) \to (N,h)$ is harmonic if and only if its tension field vanishes identically.
The metric $g$ on $M$ and the metric on $f^*TN$ induced by the metric on $N$ give rise to the $L^2$ inner product
\[
\inner{s}{s'}_{L^2(f^*TN)} = \int_M \inner{s(m)}{s'(m)} \vol_g(m)
\]
for $s,s'\in \Gamma^0(f^*TN)$. The space $L^2(f^*TN)$ is defined to be the completion of $\Gamma^0(f^*TN)$ with respect to this inner product.
The Laplace operator induced by the pullback connection $\nabla$ on $f^*TN$ is the operator $\Delta \colon \Gamma^2(f^*TN) \to \Gamma^0(f^*TN)$ that is given by
\[
\Delta s = -\tr_g \nabla^2 s = - \sum_{i=1}^m (\nabla^2 s)(e_i,e_i)
\]
for $s\in \Gamma^2(f^*TN)$ and any (local) orthonormal frame $(e_i)_{i=1}^m$ of $TM$.
\begin{definition}
We define the Jacobi operator of a smooth map $f\colon M \to N$ to be the second order differential operator that acts on sections of $f^*TN$ as
\[
\mathcal{J}_f(s) = \Delta s - \tr_g R^N(s, df \cdot)df\cdot = - \sum_{i=1}^m \left[
(\nabla^2 s)(e_i,e_i) + R^N(s, df (e_i)) df(e_i)
\right]
\]
where $s\in \Gamma^2(f^*TN)$, $R^N$ is the curvature tensor\footnote{We define the curvature tensor as $R(X,Y)Z = \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]}Z$ which differs from the convention chosen in \cite{EellsLemaireSelectedTopics}.} of $(N,h)$ and $(e_i)_{i=1}^m$ is any (local) orthonormal frame of $TM$.
\end{definition}
We can interpret the Jacobi operator as a densely defined operator
\[
\mathcal{J}_f \colon L^2(f^*TN) \to L^2(f^*TN).
\]
It is a linear elliptic and self-adjoint operator. Standard spectral theory for such operators implies the following facts.
\begin{proposition}\label{eq:spectraltheoryjacobioperators}
The Hilbert space $L^2(f^*TN)$ splits orthogonally into eigenspaces of $\mathcal{J}_f$. These eigenspaces are finite dimensional and consist of smooth sections. The spectrum of $\mathcal{J}_f$ is discrete and consists of real numbers. If $(N,h)$ is non-positively curved, then the eigenvalues of $\mathcal{J}_f$ are non-negative.
\end{proposition}
\begin{proof}
See \cite[Chapter IV]{Wells} (cf. \cite[Section 4]{EellsLemaireSelectedTopics}). It is proved in \cite[Proposition 1.23]{EellsLemaireSelectedTopics} that $\Delta$ is a positive operator. If $(N,h)$ is non-positively curved, then
\[
-\tr_g \inner{R^N(s, df \cdot)df\cdot}{s} = -\sum_{i=1}^m \inner{R^N(s, df (e_i)) df(e_i)}{s} \geq 0
\]
for any $s\in \Gamma^0(f^*TN)$ and hence it follows that the eigenvalues of $\mathcal{J}_f$ are non-negative.
\end{proof}
When $(N,h)$ has non-positive curvature it follows that each $\mathcal{J}_f$ has a well-defined lowest eigenvalue which we will denote by $\lambda_1(\mathcal{J}_f) \geq 0$. This quantity is called \textit{the spectral gap} of the operator $\mathcal{J}_f$. Using the min-max theorem the value $\lambda_1(\mathcal{J}_f)$ can alternatively be characterised as
\begin{equation}\label{eq:minmax}
\lambda_1(\mathcal{J}_f) = \min_{\substack{s\in \Gamma^2(f^*TN) \\ s\neq 0}} \frac{\inner{\mathcal{J}_f s}{s}_{L^2(f^*TN)}}{\norm{s}^2_{L^2(f^*TN)}}.
\end{equation}
If $f$ is harmonic, then the second variation of the energy at $f$ is given by
\[
\nabla^2 E(f)(s,s') = \int_{M} \left[\inner{\nabla s}{\nabla s'} - \tr_g \inner{R^N(s, df \cdot) df \cdot}{s'}\right] \vol_g = \inner{\mathcal{J}_f s}{s'}_{L^2(f^*TN)}
\]
for any $s, s'\in \Gamma^2(f^*TN)$. We stress that this equation only holds when $f$ is harmonic. A harmonic map is a non-degenerate critical point of the energy if the bilinear form $\nabla^2 E(f)$ is non-degenerate. This happens if and only if $\ker \mathcal{J}_f = 0$. In the case that $(N,h)$ has non-positive curvature this is equivalent to $\lambda_1(\mathcal{J}_f) > 0$.
As mentioned in the introduction, the existence of a solution to the harmonic heat flow equation is due to Eells and Sampson. We record the facts relevant to our proof here in the following theorem. We denote by $C^k(M,N)$ the Banach manifold of $k$-times continuously differentiable maps from $M$ to $N$.
\begin{theorem}\label{thm:heatflowfacts}
Assume $(M,g)$ is compact and $(N,h)$ is complete and of non-positive curvature. Let $f\colon M \to N$ be a smooth map. A solution $(f_t)_{t\in [0,\infty)}$ to the harmonic heat flow equation (\Cref{eq:heatflow}) exists for all time $t\geq 0$ and the map
\[
M\times [0,\infty) \to N \colon (m,t) \mapsto f_t(m)
\]
is smooth. Moreover, if the image of this map is contained in a compact subset of $N$, then the maps $f_t$ converge, for $t\to\infty$, to a harmonic map $f_\infty$ in any space $C^k(M,N)$.
\end{theorem}
The existence and smoothness of the solution is proved in \cite{EellsSampson} (Theorem 10.C p.154 and Proposition 6.B p.135). Note that Eells and Sampson prove these theorems under an additional assumption involving restrictions on a choice of isometric embedding $N\to \mathbb{R}^n$. Hartman proved in \cite[Assertion (A)]{Hartman} that this assumption is redundant. Finally, the convergence statement for $t\to\infty$ is proved in \cite[Assertion (B)]{Hartman}.
\section{Continuity of the spectral gap}
Our proof of \Cref{thm:exponentialconvergence} will rely on the fact that if $(f_t)_{t\in [0,\infty)}$ is a solution to the harmonic heat flow equation, then the associated family of Jacobi operators $\mathcal{J}_{f_t}$ is (in a loose sense) a continuous family of differential operators. The primary difficulty here is that these operators act on sections of different vector bundles. We deal with this problem in \Cref{prop:spectralgapconvergence} which will be the main tool in our proof.
Let us first introduce some notation. We will consider a family of smooth maps $(f_t)_{t\in [0,1]}$ and define $F \colon M \times [0,1] \to N$ as $F(m,t) = f_t(m)$. For each $t\in [0,1]$ we denote $E_t = f_t^* TN$ and $\mathcal{J}_t = \mathcal{J}_{f_t}$.
\begin{proposition}\label{prop:spectralgapconvergence}
Assume $F \colon M \times [0,1] \to N$ (as above) is continuous, each $f_t \colon M \to N$ is smooth and $[0,1] \to C^3(M,N) \colon t\mapsto f_t$ is continuous. Then
\[
\liminf_{t\to 0} \lambda_1(\mathcal{J}_t) \geq \lambda_1(\mathcal{J}_0).
\]
\end{proposition}
\begin{remark}
As we will see in the proof of this proposition, the statement is easily generalised to $\liminf_{t\to t_0} \lambda_1(\mathcal{J}_t) \geq \lambda_1(\mathcal{J}_{t_0})$ for $t_0 \in [0,1]$ (the choice of $t_0 = 0$ is in no way special). This means that the function $t\mapsto \lambda_1(\mathcal{J}_t)$ is lower semicontinuous. Because we do not need this full statement in our proof, we restrict ourselves, for notational convenience, to the case $t_0 = 0$.
\end{remark}
As mentioned before, our main difficulty is that the differential operators $\mathcal{J}_t$ do not act on sections of the same vector bundle. To address this we first construct (local) homomorphisms between $E_t$ and $E_0$ which will allow us to locally identify these bundles.
Throughout this section we will consider the vector bundles $E_t = f_t^*TN$ as a subset of the larger vector bundle $F^*TN$ by identifying $M$ with $M\times \{t\} \subset M\times [0,1]$. Let us consider a chart $U$ of $M$ and a chart $V$ of $N$ such that for some $\epsilon>0$ the set $U\times [0,\epsilon)$ is mapped into $V$ by $F$. We will call such charts \textit{adapted charts}. To a pair of adapted charts we will associate, for $t\in [0,\epsilon)$, homomorphisms $\psi_t \colon E_t\vert_{U} \to E_0\vert_{U}$ as follows. Let us denote by $(y^\alpha)_{\alpha=1}^n$ the coordinates of the chart $V\subset N$. First, we note that $(E_\alpha)_{\alpha=1}^n$, with $E_\alpha = F^* \parder{}{y^\alpha}$, is a local frame of $F^*TN$ over $U\times [0,\epsilon)$. Furthermore, the sections $E_\alpha(\cdot, t)$ provide a frame of $E_t\vert_{U}$ for any fixed $t\in [0,\epsilon)$. If we write\footnote{Throughout this text we will use the Einstein summation convention.} an element $v\in E_t\vert_U$ as $v = v^\alpha E_\alpha(x,t)$ for some $x\in U$, then we define the map $\psi_t \colon E_t\vert_{U} \to E_0\vert_{U}$ as
\[
\psi_t(v^\alpha E_\alpha(x,t)) = v^\alpha E_\alpha(x,0).
\]
We note that for $t = 0$ we have $\psi_0 = \id$ hence, by continuity, $\psi_t$ is an isomorphism for any $t\in [0,\epsilon)$ if we take $\epsilon>0$ small enough (after possibly shrinking $U$).
Because $M$ is compact, it can be covered by a finite set of adapted charts. More precisely, there exists an $\epsilon>0$, a finite set of charts $\{\widetilde{U}_1, \hdots, \widetilde{U}_r\}$ of $M$ and charts $\{V_1, \hdots, V_r\}$ of $N$ such that $F$ maps each $\widetilde{U}_p\times[0,\epsilon)$ into $V_p$. Let us denote by $\psi_{t,p} \colon E_t\vert_{\widetilde{U}_p} \to E_0\vert_{\widetilde{U}_p}$ the homomorphisms associated to each pair $(\widetilde{U}_p, V_p)$ of adapted charts.
Before we proceed to the proof of \Cref{prop:spectralgapconvergence}, we will first use our choice of adapted charts to define $C^k$ norms on the spaces $\Gamma^k(E_t)$ which will be particularly well-adjusted to our arguments. Fix a $p\in \{1,\hdots, r\}$, let $(x_i)_{i=1}^m$ be the coordinates of the chart $\widetilde{U}_p \subset M$ and let $(y^\alpha)_{\alpha=1}^n$ be the coordinates of the chart $V_p \subset N$. We set, as before, $E_\alpha = F^*\parder{}{y^\alpha}$. By shrinking the open sets $\widetilde{U}_p$ slightly we can find precompact open subsets $U_p\subset \widetilde{U}_p$ such that the sets $\{U_p\}_{p=1}^r$ still cover $M$. A section $s\in \Gamma^k(E_t)$ can, locally on $\widetilde{U}_p$, be written as $s=s^\alpha E_\alpha(\cdot, t)$. Using this notation, we define, for $k\in \mathbb{N}$ and $t\in [0,\epsilon)$, the seminorms $\norm{\cdot}_{\Gamma^k(\overline{U}_p;E_t)}$ on $\Gamma^k(E_t)$ as
\[
\norm{s}_{\Gamma^k(\overline{U}_p;E_t)} = \sup \left\{
\abs*{\parder{^{\abs{\mu}}}{x^\mu} s^\alpha(x)} \colon x \in \overline{U}_p, 1\leq\alpha\leq n, \abs{\mu} \leq k
\right\}.
\]
Here $\mu=(\mu_1, \hdots, \mu_m)$ is a multi-index and $\parder{^{\abs{\mu}}}{x^\mu} = \parder{^{\mu_1}}{x_1^{\mu_1}}\cdots \parder{^{\mu_m}}{x_m^{\mu_m}}$. This expression is finite because $\overline{U}_p$ is compact in $\widetilde{U}_p$. We now define the norm $\norm{\cdot}_{\Gamma^k(E_t)}$ on $\Gamma^k(E_t)$ as
\[
\norm{s}_{\Gamma^k(E_t)} = \max_{p=1,\hdots, r} \norm{s}_{\Gamma^k(\overline{U}_p;E_t)}.
\]
These norms induce the usual Banach space structure on the spaces $\Gamma^k(E_t)$.
For any of the sets $U_p\subset M$, with $p=1,\hdots, r$, we will denote by $\Gamma^k(\overline{U}_p; E_t)$ the Banach space of sections of $E_t$ over $\overline{U}_p$ that extend to $k$-times differentiable sections over some open set containing $\overline{U}_p$. On this space $\norm{\cdot}_{\Gamma^k(\overline{U}_p;E_t)}$ defines a Banach norm.
By inspecting the definition of the (local) homomorphisms $\psi_{t,p} \colon E_t\vert_{\widetilde{U}_p} \to E_0\vert_{\widetilde{U}_p}$ and the seminorms $\norm{\cdot}_{\Gamma^k(\overline{U}_p;E_t)}$ we observe the following. For all $k\in \mathbb{N}$ and $t\in [0,\epsilon)$, if $s\in \Gamma^k(\overline{U}_p;E_t)$ is a section, then
\begin{equation}\label{eq:welladjustedness}
\norm{\psi_{t,p}(s)}_{\Gamma^k(\overline{U}_p;E_0)} = \norm{s}_{\Gamma^k(\overline{U}_p;E_t)}.
\end{equation}
We will use this compatibility between the homomorphisms and seminorms in our proof of \Cref{prop:spectralgapconvergence}.
\begin{proof}[Proof of \Cref{prop:spectralgapconvergence}]
Let the adapted charts $(\widetilde{U}_p, V_p)$, associated homomorphisms $\psi_{t,p} \colon E_t\vert_{\widetilde{U}_p} \to E_0\vert_{\widetilde{U}_p}$ and choice of precompact opens $U_p\subset \widetilde{U}_p$ be as above.
Let us denote $\lambda = \liminf_{t\to 0} \lambda_1(\mathcal{J}_{t})$. There exists a sequence $(t_u)_{u\in \mathbb{N}} \subset [0,\epsilon)$ such that $t_u \to 0$ as $u\to\infty$ and
\[
\lim_{u\to\infty} \lambda_1(\mathcal{J}_{t_u}) = \lambda = \liminf_{t\to 0} \lambda_1(\mathcal{J}_{t}).
\]
It follows from \Cref{eq:spectraltheoryjacobioperators} that for each $u\in \mathbb{N}$ there exists a smooth eigensection $s_u \in \Gamma^\infty(E_{t_u})$ such that $\mathcal{J}_{t_u} s_u = \lambda_1(\mathcal{J}_{t_u}) \cdot s_u$. We normalise so that $\norm{s_u}_{\Gamma^0(E_{t_u})} = 1$ for all $u\in \mathbb{N}$.
For $p = 1,\hdots, r$ we denote $\sigma_{u,p} = \psi_{t_u,p}(s_u\vert_{\widetilde{U}_p}) \in \Gamma^\infty(\widetilde{U}_p;E_0)$. Our proof will rely on the following two lemmas.
\begin{lemma}\label{lem:convergence}
There exists a subsequence $(u_k)_{k\in \mathbb{N}}\subset \mathbb{N}$ such that for each $p = 1,\hdots, r$ the sequence $(\sigma_{u_k,p})_{k\in\mathbb{N}}$ converges in $\Gamma^2(\overline{U}_p;E_0)$ to a limiting section $\sigma_p \in \Gamma^2(\overline{U}_p;E_0)$. At least one of these limiting sections is not the zero section. Moreover, for all $p,q = 1,\hdots, r$ the sections $\sigma_p$ and $\sigma_q$ coincide on $\overline{U}_p \cap \overline{U}_q$.
\end{lemma}
In the second lemma we consider the operator $\mathcal{J}_0$ restricted to the open sets $U_p$. Since the Jacobi operators $\mathcal{J}_t$ are local differential operators, the value of $\mathcal{J}_t s$ at a point in $M$ depends only on the germ of the section $s$ at that point. Hence, we can apply $\mathcal{J}_t$ also to sections that are not globally defined.
\begin{lemma}\label{lem:limitiseigenvector}
Consider the limiting sections $\sigma_p \in \Gamma^2(\overline{U}_p;E_0)$ as defined in \Cref{lem:convergence}. For all $p=1,\hdots, r$ we have on $U_p$ that
\[
\mathcal{J}_{0} \sigma_p = \lambda \cdot \sigma_p.
\]
\end{lemma}
We postpone the proof of these two lemmas and first finish the proof of \Cref{prop:spectralgapconvergence}.
It follows from the last statement of \Cref{lem:convergence} that we can patch the limiting sections $\sigma_p$ together to obtain a well-defined global limiting section $\sigma\in \Gamma^2(E_0)$. More precisely, we let $\sigma \in \Gamma^2(E_0)$ be the section that on each $\overline{U}_p\subset M$ is given by $\sigma\vert_{\overline{U}_p} = \sigma_p$. Note that the sets $\overline{U}_p$ cover $M$ and that by \Cref{lem:convergence} the section is well-defined on intersections $\overline{U}_p \cap \overline{U}_q$. Because at least one of the limiting sections $\sigma_p$ does not vanish, it follows that $\sigma$ is not the zero section.
Now \Cref{lem:limitiseigenvector} implies that $\sigma$ is an eigensection of $\mathcal{J}_0$. Namely, we have
\[
\mathcal{J}_0 \sigma = \lambda \cdot \sigma
\]
because this holds on each subset $U_p \subset M$. It follows that $\lambda$ is an eigenvalue of $\mathcal{J}_0$ and hence that
\[
\lambda_1(\mathcal{J}_0) \leq \lambda = \liminf_{t\to 0} \lambda_1(\mathcal{J}_{t}).
\]
\end{proof}
We now prove \Cref{lem:convergence} and \Cref{lem:limitiseigenvector}. The proofs of these lemmas will rely on the fact that, in suitably chosen local coordinates, the coefficients of the differential operators $\mathcal{J}_t$ depend continuously on $t$.
Let us first introduce the necessary notation. Let $(\widetilde{U}_p, V_p)$ be a pair of adapted charts as before, $(x^i)_{i=1}^m$ the coordinates on $\widetilde{U}_p$ and $(y^\alpha)_{\alpha=1}^n$ the coordinates on $V_p$. We put again $E_\alpha = F^*\parder{}{y^\alpha}$. The Jacobi operators $\mathcal{J}_t$ are second order differential operators. Hence, in local coordinates they can be written as
\begin{equation}\label{eq:generaldiffop}
\mathcal{J}_{t} s(x) = \left\lbrace A_{\alpha}^{ij,\gamma}(x,t) \parder{^2 s^\alpha}{x^i x^j}(x) + B_{\alpha}^{i, \gamma}(x,t) \parder{s^\alpha}{x^i} (x) + C^\gamma_\alpha(x,t) s^\alpha(x) \right\rbrace E_\gamma(x,t),
\end{equation}
where $A_{\alpha}^{ij,\gamma}, B_{\alpha}^{i, \gamma}, C^\gamma_\alpha \colon \widetilde{U}_p \times [0,\epsilon) \to \mathbb{R}$ are suitable coefficient functions. Here we write any section $s$ of $E_t$ over $\widetilde{U}_p$ as $s=s^\alpha E_\alpha(\cdot, t)$.
Our proofs of \Cref{lem:convergence} and \Cref{lem:limitiseigenvector} are based on the following observation.
\begin{lemma}\label{lem:continuousdependencecoeffs}
Let $U'\subset \widetilde{U}_p$ be a precompact open subset. For all $i,j=1,\hdots,m$ and $\alpha, \gamma = 1,\hdots, n$, we have that the maps $t\mapsto A_\alpha^{ij,\gamma}(\cdot, t), t\mapsto B_\alpha^{i,\gamma}(\cdot, t)$ and $t\mapsto C_\alpha^\gamma(\cdot, t)$ are continuous mappings from $[0,1]$ into $C^1(\overline{U'})$.
\end{lemma}
\begin{proof}
Denote by $g^{ij}$ the coefficients of the inverse of the metric tensor $g$ with respect to the coordinates $(x^i)_{i=1}^m$ and by $\presuperscript{M}{\Gamma}^{k}_{ij}$ the Christoffel symbols of the Levi-Civita connection of $(M,g)$. The Jacobi operators are expressed locally as
\begin{align*}
\mathcal{J}_t s &= \Delta s - \tr_g R^N(s, df\cdot) df\cdot\\
&= -g^{ij}\left\{ \nabla_{\parder{}{x^i}} \nabla_{\parder{}{x^j}} s - \presuperscript{M}{\Gamma}^{k}_{ij} \nabla_{\parder{}{x^k}} s + R^N\left(s, \parder{f}{x^i}\right)\parder{f}{x^j}\right\},
\end{align*}
with $s \in \Gamma^2(\overline{U}_p; E_t)$. Recall that $\nabla$ is the pullback connection on the bundle $E_t = f_t^*TN$. Let us denote by $\presuperscript{N}{\Gamma}^{\gamma}_{\alpha \beta}$ the Christoffel symbols of the Levi-Civita connection of $(N,h)$ on the chart $V_p$. Then, for any $s = s^\alpha E_\alpha(\cdot, t) \in \Gamma^1(\widetilde{U}_p; E_t)$, we can write the pullback connection as
\[
\nabla_{\parder{}{x^i}} s(x) = \parder{s^\alpha}{x^i}(x) E_\alpha(x, t) + s^\alpha(x) \parder{f_t^\beta}{x^i}(x)\cdot \presuperscript{N}{\Gamma}^{\gamma}_{\alpha \beta}(f_t(x)) \cdot E_\gamma(x,t).
\]
The coefficient functions $A_\alpha^{ij,\gamma}, B_\alpha^{i,\gamma}, C_\alpha^\gamma$ can be calculated by filling in this expression for the connection $\nabla$ into the local expression for the Jacobi operators. It follows that these functions can be expressed entirely in terms of the quantities
\[
g^{ij}, \parder{f_t^\beta}{x^i}, \presuperscript{M}{\Gamma}^{k}_{ij}, (R^N)_{\alpha\beta\gamma}^\delta \circ f_t \text{ and }\presuperscript{N}{\Gamma}^{\gamma}_{\alpha \beta}\circ f_t
\]
and their first derivatives. Here $(R^N)_{\alpha\beta\gamma}^\delta$ denote the coefficients of the Riemann curvature tensor $R^N$ in the coordinates on $V_p$. As a result, in the expression for the coefficient functions only spatial derivatives of the functions $f_t$ up to second order appear. The statement of the lemma now follows immediately from our assumption that $[0,1]\to C^3(M,N) \colon t\mapsto f_t$ is a continuous mapping.
\end{proof}
We can now prove \Cref{lem:convergence}.
\begin{proof}[Proof of \Cref{lem:convergence}]
Fix a $p\in \{1,\hdots, r\}$. Let us write $s_u = s_u^\alpha E_\alpha(\cdot, t_u)$ on $\widetilde{U}_p$. Because each $s_u$ is an eigensection of the Jacobi operator $\mathcal{J}_{t_u}$, it satisfies
\begin{equation}\label{eq:pdesystem}
\left[ \mathcal{J}_{t_u} - \lambda_1(\mathcal{J}_{t_u})\right] s_u = 0.
\end{equation}
Hence, on $\widetilde{U}_p$ the coefficients $(s_u^\alpha)_{\alpha=1}^n$ satisfy a second order linear elliptic system of differential equations. We will use Schauder estimates to obtain a uniform bound on the $C^{2,\mu}$-H\"older norm of these coefficients. To this end we will apply the results of \cite{Morrey}.
The system of differential equations in \Cref{eq:pdesystem} is elliptic because the Jacobi operators are elliptic differential operators. The bounds on the H\"older norms of solutions to this equation that are provided by Morrey's results depend on a uniform ellipticity constant which in Morrey's paper is denoted $M$ (defined in \cite[Equation 1.6]{Morrey}). This constant depends only on the coefficients of the second order part of the system in \Cref{eq:pdesystem}. That is, it depends only on the coefficients $A_{\alpha}^{ij,\gamma}$. Because, by \Cref{lem:continuousdependencecoeffs}, these coefficient functions depend continuously on $t$, it follows that the constant $M$ can be taken uniformly over $u\in \mathbb{N}$.
Take a precompact open $U'\subset \widetilde{U}_p$ such that $\overline{U}_p \subset U'\subset \overline{U'} \subset \widetilde{U}_p$. The coefficients of the system of differential equations in \Cref{eq:pdesystem} are a combination of the coefficients of $\mathcal{J}_{t_u}$ and the constant term $\lambda_1(\mathcal{J}_{t_u})$. It follows from \Cref{lem:continuousdependencecoeffs} that the $C^{0,\mu}$-H\"older norms (even $C^1$ norms) of the coefficients of $\mathcal{J}_{t_u}$ can be bounded uniformly in $u$. The constant term $\lambda_1(\mathcal{J}_{t_u})$ can also be bounded uniformly in $u$, since the sequence $(\lambda_1(\mathcal{J}_{t_u}))_{u\in \mathbb{N}}$ is convergent. So the coefficients of the system of differential equations in \Cref{eq:pdesystem} have uniformly (in $u$) bounded $C^{0,\mu}$-H\"older norms. Moreover, because we normalised the sections $s_u$ such that $\norm{s_u}_{\Gamma^0(E_{t_u})} = 1$, it follows that the $C^0$ norm (and hence also the $L^2$ norm) of the coefficients $s^\alpha_u$ is also bounded uniformly in $u$. We now apply \cite[Theorem 4.7]{Morrey} (with $G=U', G_1 = U_p$, in the notation of that paper) to conclude that on $\overline{U}_p$ the $C^{2,\mu}$-H\"older norms of the coefficients $s_u^\alpha$ are uniformly bounded in $u$.
We recall the notation $\sigma_{u,p} = \psi_{t_u,p}(s_u\vert_{\widetilde{U}_p})$. It follows from the definition of the homomorphisms $\psi_{t,p}$ that $s_u$ and $\sigma_{u,p}$ have the same coefficients on $\widetilde{U}_p$. Namely, if we write $\sigma_{u,p} = \sigma_{u,p}^\alpha E_\alpha(\cdot, 0)$, then $s_{u}^\alpha = \sigma_{u,p}^\alpha$ for $\alpha = 1,\hdots, n$. Hence, also the $C^{2,\mu}$-H\"older norms of the coefficients $\sigma_{u,p}^\alpha$ are uniformly bounded. It now follows from the Arzel\`a-Ascoli theorem that there exists a subsequence of $(\sigma_{u,p})_{u\in \mathbb{N}}$ that converges in $\Gamma^2(\overline{U}_p; E_0)$ to a limiting section. We denote this limiting section by $\sigma_p$. By choosing subsequent refinements of the subsequence we can arrange for this to hold for each $p=1,\hdots, r$. We denote the indices of this subsequence by $(u_k)_{k\in \mathbb{N}} \subset \mathbb{N}$.
We now prove that it is not possible that all limiting sections $\sigma_p$ vanish identically. If all sections $\sigma_p$ vanished, this would imply $\norm{\sigma_{u_k,p}}_{\Gamma^0(\overline{U}_p;E_0)} \to 0$ as $k\to\infty$ for all $p=1,\hdots, r$. However, this contradicts the fact that for all $u\in \mathbb{N}$ we have, by \Cref{eq:welladjustedness}, that
\[
\max_{p=1,\hdots, r} \norm{\sigma_{u,p}}_{\Gamma^0(\overline{U}_p;E_0)} = \max_{p=1,\hdots, r} \norm{s_u}_{\Gamma^0(\overline{U}_p;E_{t_u})} = \norm{s_u}_{\Gamma^0(E_{t_u})} = 1.
\]
Finally, we prove the last statement of the lemma. Let $(\widetilde{U}_p, V_p)$ and $(\widetilde{U}_q, V_q)$ be two pairs of adapted charts with corresponding local homomorphisms $\psi_{t,p}$ and $\psi_{t,q}$. Recall that the maps $\psi_{t,p} \colon E_t\vert_{\widetilde{U}_p} \to E_0\vert_{\widetilde{U}_p}$ are isomorphisms for $t$ small enough. It can be easily seen from the definition of these homomorphisms that, on the compact set $\overline{U}_p\cap \overline{U}_q$, the maps
\[
\psi_{t,q}\circ \psi_{t,p}^{-1} \colon E_0\vert_{\overline{U}_p\cap \overline{U}_q} \to E_0\vert_{\overline{U}_p\cap \overline{U}_q}
\]
converge uniformly to the identity map as $t\to 0$. It follows that
\begin{align*}
\sigma_p\vert_{\overline{U}_p \cap \overline{U}_q} &= \lim_{k\to\infty} \psi_{t_{u_k},p}(s_{u_k}\vert_{\overline{U}_p \cap \overline{U}_q})\\
&=\lim_{k\to\infty} \psi_{t_{u_k},q}\circ \psi_{t_{u_k},p}^{-1} \circ \psi_{t_{u_k},p}(s_{u_k}\vert_{\overline{U}_p \cap \overline{U}_q})\\ &= \lim_{k\to\infty} \psi_{t_{u_k},q}(s_{u_k}\vert_{\overline{U}_p \cap \overline{U}_q})\\
&= \sigma_q\vert_{\overline{U}_p \cap \overline{U}_q},
\end{align*}
where the limits are taken in $\Gamma^0(\overline{U}_p\cap \overline{U}_q;E_0)$.
\end{proof}
We finish this section with the proof of \Cref{lem:limitiseigenvector}.
\begin{proof}[Proof of \Cref{lem:limitiseigenvector}]
Fix a $p\in \{1,\hdots, r\}$. Let $(\widetilde{U}_p,V_p)$ be a pair of adapted charts and let the homomorphisms $\psi_{t,p}$ and the frame $(E_\alpha)_{\alpha=1}^n$ be as before.
We claim that
\begin{equation}\label{eq:operatornormconverges}
\norm{\psi_{t,p}\circ \mathcal{J}_{t} - \mathcal{J}_0 \circ \psi_{t,p}}_{\text{op}} \to 0 \text{ as } t\to 0.
\end{equation}
Here, $\norm{\cdot}_{\text{op}}$ is the operator norm on the space of bounded linear operators from $\Gamma^2(\overline{U}_p; E_t)$ to $\Gamma^0(\overline{U}_p; E_0)$ (equipped with the norms $\norm{\cdot}_{\Gamma^2(\overline{U}_p; E_t)}$ and $\norm{\cdot}_{\Gamma^0(\overline{U}_p; E_0)}$ respectively).
We denote
\begin{align*}
a_{\alpha}^{ij,\gamma}(x,t) &= A^{ij,\gamma}_{\alpha}(x,t) - A^{ij,\gamma}_{\alpha}(x,0)\\
b_{\alpha}^{i,\gamma}(x,t) &= B^{i,\gamma}_{\alpha}(x,t) - B^{i,\gamma}_{\alpha}(x,0)\\
c_{\alpha}^{\gamma}(x,t) &= C^{\gamma}_{\alpha}(x,t) - C^{\gamma}_{\alpha}(x,0).
\end{align*}
Then, for a section $s = s^\alpha E_\alpha(\cdot, t) \in \Gamma^2(\overline{U}_p; E_t)$, we have
\begin{align*}
[ \psi_{t,p}\circ \mathcal{J}_{t} &- \mathcal{J}_0 \circ \psi_{t,p}]s(x) \\&= \left\{a_{\alpha}^{ij,\gamma}(x,t) \parder{^2 s^\alpha}{x^i x^j}(x) + b_\alpha^{i,\gamma}(x,t) \parder{s^\alpha}{x^i}(x) + c_\alpha^\gamma(x,t) s^\alpha(x)\right\} E_\gamma(x,0).
\end{align*}
From this expression it follows that
\[
\norm{\psi_{t,p}\circ \mathcal{J}_{t} - \mathcal{J}_0 \circ \psi_{t,p}}_{\text{op}} \leq \sum_{i,j,\alpha,\gamma} \norm{a_{\alpha}^{ij,\gamma}}_{C^0(\overline{U}_p)} + \sum_{i,\alpha,\gamma} \norm{b_{\alpha}^{i,\gamma}}_{C^0(\overline{U}_p)} + \sum_{\alpha,\gamma} \norm{c_{\alpha}^{\gamma}}_{C^0(\overline{U}_p)}.
\]
Our claim now follows immediately from \Cref{lem:continuousdependencecoeffs}.
We use the notation $(u_k)_{k\in \mathbb{N}}$ and $\sigma_{u,p}$ as in \Cref{lem:convergence}. From that lemma it follows that $\sigma_{u_k, p} \to \sigma_p$ in $\Gamma^2(\overline{U}_p;E_0)$. We use this to find
\[
\mathcal{J}_0 \sigma_p = \lim_{k\to\infty} \mathcal{J}_0 \sigma_{u_k,p} = \lim_{k\to\infty} \mathcal{J}_0 \psi_{t_{u_k},p}(s_{u_k}\vert_{\overline{U}_p}).
\]
From \Cref{eq:operatornormconverges} it follows that
\[
\mathcal{J}_0 \sigma_p = \lim_{k\to\infty} \mathcal{J}_0 \psi_{t_{u_k},p}(s_{u_k}\vert_{\overline{U}_p}) = \lim_{k\to\infty} \psi_{t_{u_k},p} (\mathcal{J}_{t_{u_k}} s_{u_k}\vert_{\overline{U}_p}).
\]
Here we used that $\norm{s_{u_k}}_{\Gamma^2(\overline{U}_p; E_{t_{u_k}})} = \norm{\sigma_{u_k,p}}_{\Gamma^2(\overline{U}_p; E_0)}$ remains bounded uniformly in $k$. Finally, using the fact that the sections $s_u$ are eigensections of the operators $\mathcal{J}_{t_u}$ gives
\[
\mathcal{J}_0 \sigma_p = \lim_{k\to\infty} \psi_{t_{u_k},p} (\mathcal{J}_{t_{u_k}} s_{u_k}\vert_{\overline{U}_p}) = \lim_{k\to\infty} \lambda_1(\mathcal{J}_{t_{u_k}}) \cdot \psi_{t_{u_k},p}(s_{u_k}\vert_{\overline{U}_p}) = \lambda \cdot \sigma_p
\]
because, by definition, $\lambda = \lim_{u\to\infty} \lambda_1(\mathcal{J}_{t_u})$.
\end{proof}
\section{Proof of \Cref{thm:exponentialconvergence}}
Our proof of \Cref{thm:exponentialconvergence} will rely on the fact that the Jacobi operator of the maps $f_t$ appears in the evolution equation for the quantity $\tau(f_t)$. Recall the notation $E_t = f_t^*TN$.
\begin{lemma}\label{lem:evolutionequation}
Assume the family of maps $(f_t)_{t\in [0,\infty)}$ satisfies the harmonic heat flow equation. Then
\[
\frac{1}{2} \der{}{t} \norm{\tau(f_t)}^2_{L^2(E_t)} = -\inner{\mathcal{J}_{f_t} \tau(f_t)}{\tau(f_t)}_{L^2(E_t)}.
\]
\end{lemma}
\begin{proof}
Assume $(x^i)_{i=1}^m$ are Riemannian normal coordinates around a point $x \in M$. In the following calculation we will consider the expressions $\parder{f_t}{x^i}$ as local sections of $f_t^*TN$. Because we are working in normal coordinates around $x$, we have that
\[
\tau(f_t) \vert_x = \tr_{g} \nabla df\vert_x = \nabla_{\parder{}{x^i}}(\parder{f}{x^i})\Big\vert_x.
\]
We use this to find that at the point $x$ and for any $t\geq 0$ we have
\begin{align*}
\nabla_{\parder{}{t}}\tau(f_t) &= \nabla_{\parder{}{t}}\left(\nabla_{\parder{}{x^i}}\left(\parder{f}{x^i}\right)\right)\\&= R^N \left(\parder{f}{t}, \parder{f}{x^i} \right)\parder{f}{x^i} + \nabla_{\parder{}{x^i}}\nabla_{\parder{}{x^i}} \left(\parder{f}{t}\right)\\
&= -\Delta \tau(f_t) + \tr_{g} R^N(\tau(f_t) , df \cdot) df \cdot = - \mathcal{J}_{f_t} \tau(f_t).
\end{align*}
To get the second equality we used that $\nabla_{\parder{}{t}}\parder{f}{x^i}=\nabla_{\parder{}{x^i}}\parder{f}{t}$ (see \cite[p.5]{EellsLemaireSelectedTopics}). Because $x\in M$ was arbitrary, we conclude that this equality holds everywhere. We use this to find that
\[
\frac{1}{2} \der{}{t} \norm{\tau(f_t)}^2_{L^2(E_t)} = \inner{\nabla_{\parder{}{t}}\tau(f_t)}{\tau(f_t)}_{L^2(E_t)} = - \inner{\mathcal{J}_{f_t} \tau(f_t)}{\tau(f_t)}_{L^2(E_t)}.
\]
\end{proof}
We can now give a proof of \Cref{thm:exponentialconvergence}.
\begin{proof}[Proof of \Cref{thm:exponentialconvergence}]
We apply \Cref{prop:spectralgapconvergence} to the family of maps $(f_t)_{t\in [0,\infty]}$. For this we pick some homeomorphism between $[0,\infty]$ and $[0,1]$ (mapping $\infty$ to $0$) so we can view the heat flow as a family of maps $(f_t)_{t\in [0,1]}$ indexed by $t\in [0,1]$. It then follows from \Cref{thm:heatflowfacts} that this family of maps satisfies the assumptions of \Cref{prop:spectralgapconvergence}. From this proposition it follows that
\[
\liminf_{t\to\infty} \lambda_1(\mathcal{J}_{f_t}) \geq \lambda_1(\mathcal{J}_{f_\infty}).
\]
By assumption $f_\infty$ is a non-degenerate critical point of the energy, so $\lambda_1(\mathcal{J}_{f_\infty}) > 0$. Put $b = \lambda_1(\mathcal{J}_{f_\infty})/2 > 0$. Then there exists a $t_0 \geq 0$ such that $\lambda_1(\mathcal{J}_{f_t}) \geq b$ for all $t \geq t_0$. Using \Cref{lem:evolutionequation} and \Cref{eq:minmax} we see that for such $t\geq t_0$,
\[
\der{}{t} \norm{\tau(f_t)}_{L^2(E_t)}^2 = -2 \inner{\mathcal{J}_{f_t} \tau(f_t)}{\tau(f_t)}_{L^2(E_t)} \leq -2b \cdot \norm{\tau(f_t)}^2_{L^2(E_t)}.
\]
Gr\"onwalls's inequality (\cite{Gronwall}) yields that
\[
\norm{\tau(f_t)}_{L^2(E_t)}^2 \leq \norm{\tau(f_{t_0})}_{L^2(E_{t_0})}^2 \cdot e^{-2b\cdot (t - t_0)}
\]
for $t\geq t_0$. So if we pick $a>0$ large enough, then
\[
\norm*{\der{f_t}{t}}_{L^2(E_t)} = \norm{\tau(f_t)}_{L^2(E_t)} \leq a\cdot e^{-b\cdot t}
\]
for all $t\geq 0$.
\end{proof}
We end with the proof of \Cref{cor:convergenceenergy}.
\begin{proof}[Proof of \Cref{cor:convergenceenergy}]
The evolution of the energy $E(f_t)$ along the harmonic heat flow is governed by the equation
\[
\der{}{t} E(f_t) = - \int_{M} \norm*{\tau(f_t)}^2 \vol_g = - \norm*{\der{f_t}{t}}^2_{L^2(E_t)}
\]
(see \cite[\S 6.C]{EellsSampson}). Applying the estimate of \Cref{thm:exponentialconvergence} gives
\[
\abs{E(f_t) - E(f_\infty)} = \int_{t}^\infty \norm*{\der{f_s}{s}}^2_{L^2(E_s)} \, ds \leq a^2 \int_t^\infty e^{-2b\cdot s} \, ds = a'\cdot e^{-2b \cdot t}
\]
with $a'= a^2/(2b)$.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\label{intro}
The complexity of modelling time-series data increases when accounting for discontinuous changes in dynamical behavior. As a motivational example, consider the Lotka-Volterra equations, a simplified model of predator-prey interactions. The system is described by the pair of ordinary differential equations (ODEs):
\begin{equation}
\frac{dx}{dt} = \alpha x - \beta x y \hspace{2em} \frac{dy}{dt} = \delta x y - \gamma y
\end{equation}
where $x$ and $y$ are the population sizes of predators and prey, respectively. The coefficients $\alpha, \beta, \delta, \gamma$ describe interaction characteristics, such as the rate of encounter and the rate of successful predation per encounter. When these parameters are fixed, modelling this system from observed population trajectories is straightforward. However, external factors can perturb the system. Additional predators can suddenly be introduced via migration midway through an observed population trajectory, causing a jump discontinuity in the trajectory. The coefficients describing predator-prey interactions may also change abruptly, instantaneously switching the dynamical mode of the system. Systems featuring smooth dynamical flows (SDFs) interrupted by discontinuities are known as hybrid systems \cite{van2000introduction}. These discontinuities can arise as discrete jumps or as instantaneous switches in dynamical mode \cite{ackerson1970state}, shown in Figure \ref{fig:lv_example} at times (a) and (b) respectively. We propose a method to model the hybrid trajectories which arise from hybrid systems.
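As an illustration, a hybrid trajectory of this kind can be simulated by solving the ODEs piece-wise and modifying the state or the coefficients between segments; the parameter values and event times in the following sketch are purely illustrative:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def lv(alpha, beta, delta, gamma):
    # Lotka-Volterra right-hand side for fixed coefficients.
    def rhs(t, s):
        x, y = s
        return [alpha * x - beta * x * y, delta * x * y - gamma * y]
    return rhs

# Segment 1: baseline dynamical mode.
s1 = solve_ivp(lv(1.0, 0.2, 0.1, 1.0), (0.0, 5.0), [4.0, 2.0])
# (a) Jump discontinuity: predators migrate in, the state jumps.
s2 = solve_ivp(lv(1.0, 0.2, 0.1, 1.0), (5.0, 10.0),
               s1.y[:, -1] + np.array([2.0, 0.0]))
# (b) Switch in dynamical mode: interaction coefficients change.
s3 = solve_ivp(lv(0.8, 0.4, 0.05, 1.2), (10.0, 15.0), s2.y[:, -1])

t = np.concatenate([s1.t, s2.t, s3.t])
traj = np.concatenate([s1.y, s2.y, s3.y], axis=1)  # three SDFs
\end{verbatim}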
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{Figures/fig1.pdf}
\caption{A Lotka-Volterra hybrid trajectory composed of three smooth dynamical flows. The plot shows populations of predators and prey over time. At time (a), a jump discontinuity occurs. At time (b), a distributional shift in dynamical coefficients occurs.}
\label{fig:lv_example}
\end{figure}
Recently, the Latent ODE architecture \cite{rubanova2019latent} has been introduced to represent time series using latent dynamical trajectories. However, Latent ODEs are not designed to model discontinuous latent dynamics and, thus, represent hybrid trajectories poorly. Here, we propose the Latent Segmented ODE (LatSegODE), an extension of a Latent ODE explicitly designed for hybrid trajectories. Given a base model Latent ODE trained on the segments of SDFs between discontinuities, we apply the Pruned Exact Linear Time (PELT) search algorithm \cite{killick2012optimal} to model hybrid trajectories as a sequence of samples from the base model, each with a different initial state. The LatSegODE detects the positions where the latent ODE dynamics are restarted with a new initial state, thus modelling hybrid trajectories using a piece-wise continuous latent trajectory. We provide a novel way to use deep architectures in conjunction with offline changepoint detection (CPD) methods. Using the marginal likelihood under the Latent ODE as a score function, we find the Bayesian Occam's Razor \cite{mackay1992bayesian} effect automatically prevents over-segmentation in CPD methods.
We evaluate LatSegODE on data sets of 1D sine wave hybrid trajectories, Lotka-Volterra hybrid trajectories, and a synthetically composed UCI Character Trajectories data set. We demonstrate that the LatSegODE interpolates, extrapolates, and finds the changepoints in hybrid trajectories with high accuracy compared to current baseline methods.
\section{Background}
\subsection{Latent ODEs}
The Latent ODE architecture \cite{rubanova2019latent} is an extension of the Neural ODE method \cite{chen2018neural}, which provides memory-efficient gradient computation without back-propagation through ODE solve operations. Neural ODEs represent trajectories as the solution to the initial value problem:
\begin{align}
\frac{d h(t)}{dt} &= f_\theta(h(t), t) \\
h_{0:N} &= \text{ODESolve}(f_\theta, h0, t_{0:N})
\end{align}
where $f_\theta$ is parameterized by a neural network, and $h(t)$ represents hidden dynamics. The continuous dynamical representation allows Neural ODEs to natively incorporate irregularly sampled time series.
Latent ODEs arrange Neural ODEs in an encoder-decoder architecture. Observed trajectories are encoded using a GRU-ODE architecture \cite{de2019gru, rubanova2019latent}. The GRU-ODE combines a Neural ODE with a gated recurrent unit (GRU) \cite{cho2014learning}. Observed trajectories are encoded by the GRU into a hidden state, which is continuously evolved between observations by a Neural ODE parameterized by neural network $f_\theta$. The GRU-ODE encodes the observed data sequence into parameters for a variational posterior. Using the reparameterization trick \cite{kingma2013auto}, a differentiable sample of the latent initial state $z0$ is obtained. A Neural ODE parameterized by neural network $f_\Psi$ deterministically solves a latent trajectory from the latent initial state. Finally, a neural network $f_\Phi$ decodes the latent trajectory into data space. The Latent ODE architecture can thus be represented as:
\begin{align}
\mu_{z0}, \sigma_{z0}^2 &= \text{GRUODE}_{f_\theta}(x_{1:N}, t_{1:N})\\
z0 &\sim q(z0 | x_{1:N}) = \mathcal{N}(\mu_{z0}, \sigma_{z0}^2) \\
z_{1:N} &= \text{ODESolve}(f_\Psi, z0, t_{1:N}) \\
x_{i} &\sim \mathcal{N}(f_{\Phi}(z_{i}), \sigma^2) \hspace{1em} \text{for} \hspace{1em} i = 1, ..., N
\end{align}
where $\sigma^2$ is a fixed variance term. The Latent ODE is trained by maximizing the evidence lower-bound (ELBO). Letting $X = x_{1:N}$, the ELBO is:
\begin{equation}
\mathbb{E}_{z0 \sim q(z0|X)}[\log p(X | z0)] - \text{KL}\left[q(z0|X)\;||\;p(z0)\right]
\end{equation}
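The following condensed PyTorch-style sketch illustrates this architecture (it is not the implementation of \cite{rubanova2019latent}); for brevity, the GRU-ODE encoder is replaced by a plain GRU run over the time-reversed observations, \verb+torchdiffeq.odeint+ provides the ODESolve, and all layer sizes are illustrative:
\begin{verbatim}
import torch, torch.nn as nn
from torchdiffeq import odeint  # differentiable ODESolve

class LatentODE(nn.Module):
    def __init__(self, obs_dim=2, latent_dim=8, hidden_dim=32):
        super().__init__()
        self.enc = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.post = nn.Linear(hidden_dim, 2 * latent_dim)
        self.f_psi = nn.Sequential(  # latent dynamics dz/dt
            nn.Linear(latent_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, latent_dim))
        self.f_phi = nn.Linear(latent_dim, obs_dim)  # decoder

    def forward(self, x, t):
        # x: (batch, N, obs_dim); t: increasing 1-D tensor of times.
        # Encode the time-reversed observations into q(z0 | x_{1:N}).
        _, h = self.enc(torch.flip(x, dims=[1]))
        mu, logvar = self.post(h[-1]).chunk(2, dim=-1)
        z0 = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Deterministically solve the latent trajectory, then decode.
        z = odeint(lambda s, zs: self.f_psi(zs), z0, t)
        return self.f_phi(z).permute(1, 0, 2), mu, logvar
\end{verbatim}
The ELBO then combines the Gaussian log-likelihood of the observations under the decoded trajectory (with fixed variance $\sigma^2$) with the closed-form KL divergence between the diagonal Gaussian posterior and the standard normal prior.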
\subsection{Representational Limitations of the Neural ODE}
Latent ODEs use Neural ODEs to represent latent dynamics, and thus inherit their representational limitations. The accuracy of an ODE solver used by a Neural ODE depends on the smoothness of the solution; the local error of the solution can exceed ODE solver tolerances when a jump discontinuity occurs \cite{calvo2008}. At a jump, adaptive ODE solvers will continuously reduce the step size in response to the increased error, possibly until numerical underflow occurs. Even if integration across the jump is possible, it is slow, and the global error of the solution can be adversely affected \cite{calvo2003}. Typically, these issues can be avoided by restarting the ODE solution at the discontinuity, but this requires the discontinuity positions to be known. Classical methods use the increase in local error or the adaptive rejections associated with a jump discontinuity as criteria for restarting solutions \cite{calvo2008}. Recently, the Neural Event ODE \cite{chen2020learning} adopted a similar paradigm of discontinuity detection, using an event function parameterized by a neural network to detect locations at which to restart the ODE solution. With all event detection approaches, failure to accurately detect a jump discontinuity causes the local error bound to drop to a lower order \cite{stewart2011}. Hybrid trajectories with discontinuous changes in the dynamical coefficients present different, but equally difficult, modelling challenges due to the representational limitations of Neural ODEs.
Latent ODEs do not circumvent these limitations, and generalize poorly on hybrid trajectories. When a hybrid trajectory is encountered, the Latent ODE can only encode the exact sequence of SDFs into a single latent representation. Should a permutation of these SDFs arise at test time, the Latent ODE will not be able to reconstruct the test trajectory.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\textwidth]{Figures/Model.pdf}
\caption{Schematic of the LatSegODE reconstructing a hybrid trajectory. Arrows indicate computation flow. Data in each segment is encoded into parameters for the variational posterior, from which a latent initial state is sampled. Each latent segment is solved using the shared latent dynamics $f_\Psi$, which continue until the next point of change. The latent trajectory is decoded into data space. At evaluation time, an arbitrary number of changepoints can be detected by the PELT algorithm. Plot adapted from \cite{rubanova2019latent}.}
\label{fig:schematic}
\end{figure*}
\section{Method}
The LatSegODE detects positions of jump discontinuity or switching dynamical mode by representing a hybrid trajectory as a piece-wise combination of samples from a learned base model Latent ODE. At each changepoint, the latent dynamics of the base model are restarted from a new initial state. We apply the PELT algorithm to efficiently search through all possible positions to restart ODE dynamics, and return changepoints that correspond to the positions of restart which maximize the joint probability of a hybrid trajectory. This avoids the need to train an event detector, and guarantees optimal segmentation, but the LatSegODE requires the availability of a training data set of SDFs on which the base model can be trained.
\subsection{Extension to Hybrid Trajectories}
We first define the class of hybrid trajectories which can be represented by the LatSegODE. Consider a sequential series of data $X = x_1, x_2, ..., x_N$ and associated times of observation $T = t_1, t_2, ..., t_N$. We represent a hybrid trajectory as a piece-wise sequence of $C+1$ continuous dynamical segments, separated by $C$ changepoints. Each observed data point belongs to exactly one segment. Each segment is bounded by a starting index $s_i$ and an ending index $e_i$, where $0 \leq i \leq C$, $s_0 = 1$, and $e_C = N$. Segments are sequential and do not intersect, i.e., $s_{i+1} = e_{i} + 1$. The boundaries of segments represent locations of jump discontinuity or switch in dynamical mode. The trajectory within each segment is represented by a sample from the base model Latent ODE.
The LatSegODE can be applied to hybrid trajectories containing an unknown number and order of SDFs. The LatSegODE aims to approximate each SDF using a segment. Using offline CPD, the LatSegODE detects positions of jump discontinuity or switching dynamical mode, and introduces a latent discontinuity at the timepoints indexed by $s_i$. At these timepoints, the latent dynamics are restarted from a new latent initial condition $z0_i$, which is obtained from the Latent ODE encoder network acting on segment data points $x_{s_i:e_i}$. The latent dynamics are solved using the same latent Neural ODE parameterized by $f_\Psi$. We provide a schematic visualizing LatSegODE hybrid trajectory reconstruction in Figure \ref{fig:schematic}. The example hybrid trajectory is represented by a sequence of base model Latent ODE reconstructions, each starting from a new initial latent state which can discontinuously jump from the previous dynamic. An arbitrary number of restarts can be detected at test time.
To finish the problem formulation, we define $\mathcal{I}$ as the unknown ground truth set of segment boundaries and latent initial states, such that each hybrid trajectory is associated with the set:
\begin{equation}
\mathcal{I} = \left\{ (s_i, e_i, z0_i) \colon 0 \leq i \leq C \right\}
\end{equation}
Letting $Z = z0_{0:C}$, the joint log probability of an observed hybrid trajectory can be represented as:
\begin{equation}
\log p(X, Z | s_{1:C}, e_{1:C}) = \sum^C_{i=0} \log p(x_{s_i : e_i}, z0_{i})
\end{equation}
This formulation assumes independence between observations in separate segments, such that $x_{s_i:e_i} \perp (X \setminus x_{s_i:e_i})$. While this assumption can be limiting for trajectories with long-term dependencies, it also allows for increased reconstruction performance in the absence of inter-segment dependency. In such situations, given a trajectory with two dynamical modes, letting the latent dynamics restart completely at the time of modal change yields a better representation. In comparison, methods which cannot account for shifts in latent dynamics are forced to adopt a representation averaged between the two dynamical modes. This intuition is demonstrated later in the experimental section.
We note that the LatSegODE does not represent the location of changepoints using a random process. Since event detection is non-probabilistic, the method is not suitable for hybrid trajectories which self-excite or otherwise change dynamical mode past the observed trajectory.
\subsection{Optimal Segmentation}
Given this formulation of hybrid trajectories, the key challenge is finding the unknown set $\mathcal{I}$ which maximizes the joint probability of an observed hybrid trajectory. We propose application of optimized search algorithms from the field of offline changepoint detection (CPD) to recover locations of jump discontinuity and switches in dynamical mode, and consequently $\mathcal{I}$. Through complexity penalization, these search algorithms can automatically determine the optimal number and location of segments without prior specification.
Offline CPD methods attempt to discover changepoints which define segment boundaries. A combination of segments which reconstructs a trajectory is referred to as a segmentation. We allow each observed timepoint to be a potential changepoint. Thus, the space of all possible segmentations is formed by all combinations of an arbitrary number of changepoints. At either extreme, placing no changepoints or a changepoint at each time of observation are both valid segmentations. The space of all possible segmentations grows exponentially ($2^N$) with the number of observations ($N$).
The optimal partitioning method \cite{1381461} uses dynamic programming to search through this large space of solutions. Letting $\mathcal{C}$ be a cost function, $m$ the number of changepoints, and $\tau$ a set of changepoints such that $\tau_0 = 0$ and $\tau_{m+1} = N$, it minimizes
\begin{equation}
\sum_{i=1}^{m+1} \left[ \mathcal{C}(x_{\tau_{i-1} + 1: \tau_{i}}) + \beta \right]
\end{equation}
with respect to $\tau$ using dynamic programming. We let $F(t)$ denote the minimal cost over all possible segmentations of the data up to index $t$; this value is memoized. For a new data index $s > t$, we can extend the optimal solution via the recursion
\begin{equation}
F(s) = \min_t F(t) + \mathcal{C}(x_{(t+1):s}) + \beta
\end{equation}
Thus, we begin by solving for $F(1)$, and incrementally extend the solution until $F(N)$, at which point the optimal segmentation is returned. The memoization of previous optimal sub-solutions yields a quadratic runtime with respect to the number of observations. The full algorithm is provided in Appendix A. The $\beta$ term penalizes over-segmentation, and typically scales with the number of parameters introduced by each additional changepoint. When a maximum likelihood cost function is used without a $\beta$ penalty, optimal partitioning degenerates by placing a changepoint at each possible index. The presence of $\beta$ enforces a trade-off between accuracy and model complexity. With an appropriate $\beta$, this formulation also conveniently recovers the segmentation which minimizes the Bayesian Information Criterion (BIC) \cite{schwarz1978estimating} through minimization of equation (11).
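A minimal sketch of this dynamic program is given below; \verb+cost+ is a stand-in for an arbitrary segment cost, such as the negative log marginal likelihood introduced in the following paragraphs:
\begin{verbatim}
def optimal_partitioning(x, cost, beta):
    # F[t]: minimal penalized cost of segmenting x[:t]; cp: backpointers.
    n = len(x)
    F = [0.0] + [float("inf")] * n
    cp = [0] * (n + 1)
    for s in range(1, n + 1):
        for t in range(s):
            c = F[t] + cost(x[t:s]) + beta
            if c < F[s]:
                F[s], cp[s] = c, t
    # Backtrack to recover segment boundaries.
    bounds, s = [], n
    while s > 0:
        bounds.append(s)
        s = cp[s]
    return F[n], sorted(bounds)[:-1]  # interior changepoints
\end{verbatim}
For example, \verb+cost = lambda seg: len(seg) * np.var(seg)+ would recover a classical Gaussian mean-change segmentation.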
Choice of $\beta$ is a key challenge in using CPD methods with deep architectures. It is not always clear how many effective parameters are introduced by each additional segment, though this number is upper bounded by the dimensionality of the latent initial state. Additionally, the theoretical assumptions required by the BIC are violated by neural network architectures \cite{watanabe2013widely}. The LatSegODE circumvents these challenges by using the marginal likelihood under the Latent ODE as the score function for each segment.
We compute a Monte Carlo estimate of the marginal likelihood by importance sampling using a variational approximation to the posterior over the initial state:
\begin{align}
&\log p(x_{s:e}) = \log \int{p(x_{s:e}|z0) \, p(z0) \, \text{d}z0}\\
&= \log \mathbb{E}_{z0\sim q(z0|x_{s:e})} \left[ p(x_{s:e}|z0) \frac{p(z0)}{q(z0|x_{s:e})} \right]\\
&\approx \log \frac{1}{M}\sum_{j=1}^{M} \mathcal{N}(x_{s:e} \mid \overline{x_{s:e}}^{(j)}, \sigma^2)\, \frac{\mathcal{N}(z0_j \mid 0, 1)}{\mathcal{N}(z0_j \mid \mu_{z0}, \sigma^2_{z0})}
\end{align}
where $\overline{x_{s:e}}^{(j)}$ is the output of the Latent ODE base model decoded from sample $z0_j$, $\mu_{z0}, \sigma^2_{z0}$ are obtained from the GRU-ODE encoder, and $z0_j$ is sampled from $\mathcal{N}(\mu_{z0}, \sigma^2_{z0})$. The variance $\sigma^2$ is fixed, and set to the same value used to compute the ELBO during training. We take $M$ samples for the Monte Carlo estimate.
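A numerically stable sketch of this estimator, computed in log space via log-sum-exp, is shown below; \verb+encode+ and \verb+decode+ are hypothetical stand-ins for the trained GRU-ODE encoder and the latent-ODE-plus-decoder pipeline:
\begin{verbatim}
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def log_marginal(x_seg, t_seg, encode, decode, sigma2, M=25):
    # encode: (x, t) -> (mu_z0, sigma_z0); decode: (z0, t) -> x_hat.
    mu, sig = encode(x_seg, t_seg)
    log_w = np.empty(M)
    for j in range(M):
        z0 = mu + sig * np.random.randn(*np.shape(mu))
        x_hat = decode(z0, t_seg)
        log_px = norm.logpdf(x_seg, x_hat, np.sqrt(sigma2)).sum()
        log_prior = norm.logpdf(z0, 0.0, 1.0).sum()
        log_q = norm.logpdf(z0, mu, sig).sum()
        log_w[j] = log_px + log_prior - log_q  # importance weight
    return logsumexp(log_w) - np.log(M)  # log of the MC average
\end{verbatim}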
Because we use the marginal likelihood, the complexity of the recovered segmentation is implicitly regularized by the Bayesian Occam's Razor \cite{mackay1992bayesian}. Reflecting this, in our experiments, we show that the penalization term $\beta$ can be set to $0$ without over-segmentation. Thus, we can simply set $\mathcal{C}$ in equation (11) to be the marginal likelihood computed by equation (15), and solve for the set of changepoints $\tau$ which maximize the joint probability of the entire trajectory using optimal partitioning (the original objective is a minimization, but this can trivially be switched to maximization).
The quadratic runtime of optimal partitioning can be reduced to between $\mathcal{O}(N)$ and $\mathcal{O}(N^2)$ through the pruned exact linear time (PELT) \cite{killick2012optimal} algorithm. Using an identical search algorithm, PELT introduces a pruning condition which allows removal of sub-solutions from consideration. Suppose there exists a constant $K$ such that, for all changepoint indices $t < s < T$,
\begin{equation}
\mathcal{C}(x_{(t+1):s}) + \mathcal{C}(x_{(s+1):T}) + K \leq \mathcal{C}(x_{(t+1):T})
\end{equation}
Then if
\begin{equation}
F(t) + \mathcal{C}(x_{(t+1):s}) + K \geq F(s)
\end{equation}
we are able to discard the changepoint $t$ from future consideration, asymptotically reducing the number of operations required. Due to noise in the estimates of the score function, finding an analytic method to determine $K$ is an area for further research. If $K$ is set too low, sub-optimal solutions are recovered. In practice, this issue is not limiting, as setting $K$ to a sufficiently high value allows for near-optimal solutions at the cost of higher runtime. This trade-off is documented in Appendix B.
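The pruning step changes the dynamic program above only slightly; a sketch, with the same stand-in \verb+cost+:
\begin{verbatim}
def pelt(x, cost, beta, K=0.0):
    n = len(x)
    F = [0.0] + [float("inf")] * n
    cp = [0] * (n + 1)
    cand = [0]  # candidate previous changepoints
    for s in range(1, n + 1):
        costs = {t: F[t] + cost(x[t:s]) + beta for t in cand}
        t_star = min(costs, key=costs.get)
        F[s], cp[s] = costs[t_star], t_star
        # Prune every t with F[t] + cost(x[t:s]) + K >= F[s].
        cand = [t for t in cand if costs[t] - beta + K < F[s]]
        cand.append(s)
    bounds, s = [], n
    while s > 0:
        bounds.append(s)
        s = cp[s]
    return F[n], sorted(bounds)[:-1]
\end{verbatim}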
The computation of $F(t)$, the optimal segmentation up to length $t$, and Monte Carlo estimate of the marginal likelihood can all be batch parallelized using GPU computation. An implementation is available at: \url{https://github.com/IanShi1996/LatentSegmentedODE}.
\subsection{When can I use this method?}
The LatSegODE requires a Latent ODE base model trained on a family of SDFs. We propose two scenarios where SDFs may be available. First, the LatSegODE is applicable when a training set of hybrid trajectories with labelled changepoints exists. In this case, given a training set of $N$ hybrid trajectories $X = (x^{(i)}, t^{(i)})_{i=1}^N$, each with labelled SDF boundaries $(s_{k}, e_{k})_{k=0}^C$, we treat each $x^{(i)}_{s_k:e_k}$ as an independent training trajectory, and train on the union of all SDFs. The LatSegODE can also be applied when physical simulation is available. In these scenarios, the base model can be trained on trajectories which are simulated in the range of dynamical modes which we expect in hybrid trajectories at test time. These two use cases are illustrated in the first two experiments.
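In the first scenario, assembling the base model's training set amounts to slicing each labelled hybrid trajectory at its SDF boundaries, as in the following sketch (the array layout is illustrative):
\begin{verbatim}
def sdf_training_set(trajectories, boundaries):
    # trajectories: list of (x, t) arrays; boundaries: per-trajectory
    # list of labelled (s_k, e_k) index pairs. Each SDF becomes an
    # independent training example for the base model Latent ODE.
    segments = []
    for (x, t), bnds in zip(trajectories, boundaries):
        for s, e in bnds:
            segments.append((x[s:e + 1], t[s:e + 1]))
    return segments
\end{verbatim}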
\section{Related work}
\label{sec:related-works}
\textbf{Switching Dynamical Systems}:
Hybrid trajectories have previously been modelled as Switching Linear Dynamical Systems (SLDS). We provide a non-exhaustive summary of these methods. Typically, trajectories are represented by a Bayesian network containing a sequence of latent variables, from which observations are emitted. Latent variables are updated linearly, while a higher-level latent variable represents the current dynamical mode. Structured VAEs \cite{johnson2016composing} introduce a discrete latent variable to control the dynamical mode, and use a VAE observation model. The GPHSMM \cite{nakamura2017segmenting} uses a Gaussian Process observation model within a hidden semi-Markov model. Kalman VAEs integrate a Kalman Filter with a VAE observation model \cite{fraccaro2017disentangled}. Models in this class are generally trained via an inference procedure \cite{dong2020collapsed}, while several are fully differentiable \cite{kipf2019compile}. These methods are unsupervised, requiring no training data with labelled changepoint locations.
In contrast, the LatSegODE requires a base model to be trained on SDFs. Unlike methods such as rSLDS \cite{linderman2017bayesian}, it does not model dependencies between segments. At evaluation time, the LatSegODE operates without specification of the number of segments or dynamical modes. This is an advantage over the previously discussed works, whose performance is sensitive to these hyperparameters \cite{dong2020collapsed}.
The Neural Event ODE \cite{chen2020learning} is closely related to the LatSegODE. It represents observed dynamics using a Neural ODE and trains a neural network to detect the positions and update values of a switching dynamical system. The Neural Event ODE can be trained in an unsupervised fashion, without prior knowledge of changepoint locations in training data. When extrapolating past observed data, it is able to introduce additional changepoints, which the LatSegODE cannot model. However, the Neural Event ODE inherits the same limitations as the Neural ODE: it cannot model a data set which cannot be described by a single ODE function in data space. So, for example, two different dynamics cannot start from the same observed point. This issue is elaborated in Appendix C. The LatSegODE circumvents these limitations by modelling the data using an ODE in latent space.
\textbf{Offline Changepoint Detection}:
The LatSegODE closely relates to offline CPD, and we refer to \citet{truong2020selective} for an in-depth review. The LatSegODE leverages search algorithms from offline CPD, but represents the behavior within segments using a complex generative model, as opposed to a simple statistical cost function. The use of the Latent ODE allows for higher representational power and extrapolation/interpolation within segments. However, training data is required to fit the base model and, as such, its total runtime is significantly higher. Other methods have incorporated deep architectures with CPD search methods \cite{lee2018time}, but use a sliding window search with predefined window size, and use a feature distance metric to determine boundaries as opposed to the marginal likelihood used by LatSegODE.
\textbf{Miscellaneous}:
A distantly related class of methods classifies individual observations into class labels, which can be seen as segmentation \cite{supratak2017deepsleepnet}. These approaches are distinct in that they do not explicitly model dynamics, and they require a fixed segment size and trajectory length, a limitation the LatSegODE does not have. The LatSegODE does not treat the positions of jump discontinuities or dynamical mode switches as random variables, unlike methods that model these jumps as a random process \cite{mei2017neural, jia2019neural}.
\section{Experiments}
Here we investigate the LatSegODE's ability to simultaneously perform accurate reconstruction and segmentation on synthetic and semi-synthetic data sets.
When training the base model, we mask the observations from the last 20\% of the timepoints, together with 25\% of the internal timepoints; this internal mask is shared across all training and test examples. When evaluating the model on the test set, we use the 55\% of unmasked timepoints to infer the initial states and perform segmentation, and then attempt to reconstruct the observations at the masked timepoints. We report the mean squared error (MSE) between ground truth and predicted observations on test trajectories. We benchmark against auto-regressive and vanilla Latent ODE baselines for reconstructive tasks. We augmented the input data of the vanilla Latent ODE with a binary-valued time series denoting changepoint positions, ensuring it has access to the same information as the LatSegODE. We report performance on an extrapolation region which assumes the last observed dynamical mode continues. We attempted to benchmark against Neural ODEs and Neural Event ODEs, but found that their training did not converge on any of our benchmarks (see Appendix C).
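A sketch of one way to implement this masking scheme; the function and argument names are ours, and the internal mask would be drawn once up front and shared.
\begin{verbatim}
import numpy as np

def make_masks(T, rng, frac_extrap=0.20, frac_interp=0.25):
    """Return (observed, masked) boolean arrays over T timepoints.

    The last 20% of timepoints form the extrapolation region; a
    further 25% of all timepoints, drawn from the internal region,
    are masked for interpolation, leaving 55% observed.
    """
    masked = np.zeros(T, dtype=bool)
    n_extrap = int(frac_extrap * T)
    masked[T - n_extrap:] = True                   # extrapolation
    internal = rng.choice(T - n_extrap,
                          size=int(frac_interp * T),
                          replace=False)           # interpolation
    masked[internal] = True
    return ~masked, masked
\end{verbatim}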
We benchmark the segmentation performance of the LatSegODE against classic CPD algorithms using Gaussian kernelized mean change \cite{arlot2019kernel}, auto-regressive \cite{bai2000vector}, and Gaussian Process \cite{lavielle2006detection} cost functions, denoted RPT-RBF, RPT-AR, and RPT-NORM respectively.
Segmentation performance is measured using the Rand Index \cite{rand1971objective}, the Hausdorff metric \cite{rockafellar2009variational}, and the F1 score. The Rand Index measures the overlap between the predicted segmentation and the ground truth segmentation. Given data points $x_{1:N}$, a membership matrix $A$ is defined such that $A_{ij} = 1$ if $x_i$ and $x_j$ are in the same segment. Otherwise, $A_{ij} = 0$. Membership matrices are generated for the ground truth segmentation $(A)$ and the predicted segmentation $(\Tilde{A})$. Using these two matrices, the Rand Index is calculated as:
\begin{equation}
\frac{\sum_{i<j} \mathds{1}[A_{ij} = \Tilde{A}_{ij}]}{N(N-1) / 2}
\end{equation}
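A direct NumPy transcription of this formula (in practice we use the \verb+ruptures+ implementation noted below):
\begin{verbatim}
import numpy as np

def rand_index(labels_true, labels_pred):
    """Rand Index from per-timepoint segment labels (O(N^2))."""
    a = np.asarray(labels_true)
    b = np.asarray(labels_pred)
    # Membership matrices: entry (i, j) is True iff points i and j
    # fall in the same segment under the given labelling.
    A = a[:, None] == a[None, :]
    B = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), k=1)   # all pairs with i < j
    return np.mean(A[iu] == B[iu])
\end{verbatim}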
The Hausdorff metric is a measure of the maximal error between the predicted segmentation and the ground truth segmentation. Given a set of ground truth changepoints $\mathcal{T}$ and predicted changepoints $\mathcal{P}$, the Hausdorff metric is computed as:
\begin{equation}
\max\left( \max_{\tau \in \mathcal{T}} \min_{\rho \in \mathcal{P}} |\tau - \rho|, \, \max_{\rho \in \mathcal{P}} \min_{\tau \in \mathcal{T}} |\tau - \rho|\right)
\end{equation}
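An equivalent NumPy sketch of this metric:
\begin{verbatim}
import numpy as np

def hausdorff(true_cps, pred_cps):
    """Hausdorff metric between two non-empty changepoint sets."""
    tau = np.asarray(true_cps, dtype=float)
    rho = np.asarray(pred_cps, dtype=float)
    d = np.abs(tau[:, None] - rho[None, :])  # pairwise |tau - rho|
    return max(d.min(axis=1).max(), d.min(axis=0).max())
\end{verbatim}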
We use the \verb+ruptures+ library \cite{truong2020selective} implementation of these baseline methods and metrics.
We found that the segmentation baselines performed extremely poorly when using penalized detection of changepoints. In response, we simplified the problem for them by providing the correct number of changepoints, so that they only needed to choose the locations. In contrast, we did not provide the LatSegODE with the number of changepoints; the evaluation is thus biased in favor of the baselines. We also excluded trajectories with zero changepoints from this benchmark, because they are trivially correct. Irregularly located observations are handled by applying linear interpolation prior to segmentation. An extended description of baselines, metrics, and experimental set up is provided in Appendix D.
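The following usage sketch illustrates how such a baseline can be driven with the true number of changepoints supplied; the toy signal and parameter values are purely illustrative.
\begin{verbatim}
import numpy as np
import ruptures as rpt
from ruptures.metrics import hausdorff, randindex

# Toy piecewise signal with true changepoints at indices 150 and 310.
signal = np.concatenate([np.zeros(150), np.ones(160), 3 * np.ones(90)])
true_bkps = [150, 310, len(signal)]  # ruptures appends T as last break

# Baselines receive the true number of changepoints and only choose
# locations; model="rbf" / "ar" / "normal" give RPT-RBF / -AR / -NORM.
algo = rpt.Dynp(model="rbf", min_size=20).fit(signal)
pred_bkps = algo.predict(n_bkps=len(true_bkps) - 1)
print(randindex(true_bkps, pred_bkps), hausdorff(true_bkps, pred_bkps))
\end{verbatim}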
\subsection{Sine Wave Hybrid Trajectories}
We evaluate the LatSegODE on a benchmark data set of 1D sine wave hybrid trajectories. Here, we assume access to trajectories with labelled changepoint positions, one of the situations where the LatSegODE can be realistically applied. We generate 7500 hybrid trajectories each containing up to two changepoints. Between each changepoint, segment trajectories are sine waves generated under random parameters. We hold out $300$ validation trajectories, $150$ test trajectories, and train the LatSegODE base model on the SDFs contained in the remaining trajectories. Data parameters, model architecture and hyper-parameters are reported in Appendix E. In Figure \ref{fig:sine_reconstruct}, we provide a visual comparison of the LatSegODE against baselines on an example test set trajectory.
\begin{figure}[ht]
\centering
\includegraphics[width=0.47\textwidth]{Figures/fig3.pdf}
\caption{Comparison against baselines in a sample 1D Sine Wave hybrid trajectory. Top: Reconstructed trajectories are shown. Data in the extrapolation region is held out from all models during training. Bottom: Segmentation results are shown. Each distinctly colored region represents a segment.}
\label{fig:sine_reconstruct}
\end{figure}
The LatSegODE outperforms the baselines in both reconstruction and segmentation tasks. The presence of discontinuities prevents vanilla Latent ODEs from learning accurate representations. Although Latent ODEs can represent the initial SDFs, they cannot represent switches in dynamical mode. As time progresses, the Latent ODE reconstruction collapses near zero, a local minimum which minimizes error given its reconstructive limitations. In contrast, because the LatSegODE can restart latent dynamics, it can represent trajectories with jump discontinuities. The LatSegODE provides an accurate reconstruction, and the periodic solution is cleanly captured in the extrapolation region. The GRU-ODE method can fit observed data well, but yields poor interpolations and extrapolations. The LatSegODE recovers the segmentation closest to the ground truth segmentation. The trends observed in this example trajectory are reflected in the overall test results, where the LatSegODE outperforms all baselines; these results are reported in Appendix F. We found that inclusion of the binary-valued changepoint location time series did not result in significant improvement, and we omit this feature from further experiments. We report the effects of the training set size and the number of samples per training trajectory on LatSegODE performance in Appendix K.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.99\textwidth]{Figures/fig4.pdf}
\caption{Comparison of reconstructions of Lotka Volterra hybrid trajectories. Top row contains baseline reconstruction by Latent ODE. Bottom row shows reconstruction by LatSegODE. Sample hybrid trajectories contain the same number of ground truth changepoints in each column. Ground truth segments are shown as a contiguous background color block. Yellow background indicates extrapolation region which assumes that the last observed dynamical mode continues. Visualization inspired by ruptures package \cite{truong2020selective}.}
\label{fig:lv_results}
\end{figure*}
\subsection{Lotka-Volterra Hybrid Trajectories}
Next, we evaluate the LatSegODE on hybrid trajectories whose SDFs follow the Lotka-Volterra dynamics described in equation (1). We simulate 34000/600/150 hybrid trajectories for the training/validation/test sets. Lotka-Volterra dynamics are generated by randomly sampling the coefficients $(\alpha, \beta, \delta, \gamma)$ from the ranges $[(0.5, 1.5), (0.5, 1.5), (1.5, 2.5), (0.5, 1.5)]$ respectively. Each trajectory contains up to two changepoints, and at each changepoint we restart the dynamics from new initial values sampled from $[(0.5, 1.5), (1.5, 2.5)]$. We re-sample the coefficient vector at changepoints, so the trajectories feature both jump discontinuities and switches of dynamical mode. We train the LatSegODE base model on the SDFs in the generated training trajectories. The vanilla Latent ODE baseline is trained on full hybrid trajectories, while the other baselines were trained separately on both full trajectories and SDFs, with the best performing result reported. The data generation procedure and the model architectures/training are documented in Appendix G.
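A minimal sketch of this generation procedure, assuming the standard Lotka-Volterra form $\dot{x} = \alpha x - \beta xy$, $\dot{y} = \delta xy - \gamma y$; the helper names are our own.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def sample_lv_segment(t_grid):
    """Simulate one Lotka-Volterra SDF with random coefficients."""
    a = rng.uniform(0.5, 1.5)
    b = rng.uniform(0.5, 1.5)
    d = rng.uniform(1.5, 2.5)
    g = rng.uniform(0.5, 1.5)
    z0 = [rng.uniform(0.5, 1.5), rng.uniform(1.5, 2.5)]
    rhs = lambda t, z: [a * z[0] - b * z[0] * z[1],
                        d * z[0] * z[1] - g * z[1]]
    return solve_ivp(rhs, (t_grid[0], t_grid[-1]), z0,
                     t_eval=t_grid).y.T

# A hybrid trajectory concatenates segments; coefficients and state
# are re-sampled at each changepoint (jump + mode switch).
trajectory = np.concatenate(
    [sample_lv_segment(np.linspace(0, 15, 200)) for _ in range(3)])
\end{verbatim}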
Results are reported in Table \ref{tab:lv-metrics}, where metrics are averages over 150 test trajectories. The LatSegODE outperforms baselines in both segmentation and reconstruction. An expanded evaluation with additional metrics and experiments is provided in Appendix H.
\begin{table}[ht]
\caption{Results on Lotka Volterra hybrid trajectories. Metrics generated using 150 test trajectories. Best result is bolded.}
\label{tab:lv-metrics}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
Method & \multicolumn{1}{p{1cm}}{\centering Test\\MSE} & \multicolumn{1}{p{1cm}}{\centering Rand\\Index} & \multicolumn{1}{p{2cm}}{\centering Hausdorff\\Metric}\\
\midrule
LatSegODE & \textbf{0.068} & \textbf{0.9464} & \textbf{47.67} \\
\midrule
GRU$\Delta t$ & 0.1718 & - & -\\
GRU-ODE & 0.2747 & - & - \\
Latent ODE & 0.6155 & - & - \\
\midrule
RPT-RBF & - & 0.7956 & 84.7 \\
RPT-AR & - & 0.6994 & 164.65 \\
RPT-NORM & - & 0.7693 & 105.92 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.99\textwidth]{Figures/fig6.pdf}
\caption{Example reconstruction on a long hybrid trajectory synthetically generated from UCI Character Trajectory data set. While most characters are accurately detected, an erroneous change point is introduced at $t \approx 6$. As segments are independent under PELT, future segments are not affected by this error, and reconstruction quality recovers after the introduction of the change point at $t \approx 9.5$.}
\label{fig:ct}
\end{figure*}
In Figure \ref{fig:lv_results}, we show sample trajectory reconstructions from the LatSegODE versus the vanilla Latent ODE baseline. All vanilla Latent ODE reconstructions over-fit to the changepoint locations observed in training data. It is difficult for vanilla Latent ODEs to generalize to permutations of the piece-wise hybrid training trajectories, because they must encode all sequence information into a single latent initial state. When a permutation in the sequence of SDFs is encountered, the non-robust latent representation predicts arbitrary dynamical shifts. The vanilla Latent ODE performs badly even on the zero changepoint reconstruction in Figure \ref{fig:lv_results}, where one might expect it to do well, likely because it anticipates potential changepoints. In contrast, the structured nature of the LatSegODE bypasses the need to learn such a complex latent representation. Segmenting trajectories into SDFs allows a complex hybrid trajectory to be represented by a sequence of simpler dynamics, yielding the accurate reconstructions shown.
\begin{figure}[ht]
\centerline{\includegraphics[width=0.48\textwidth]{Figures/fig5.pdf}}
\caption{Example failure modes encountered in Lotka Volterra modelling. See Figure \ref{fig:lv_results} for legend.}
\label{fig:lv-fail}
\end{figure}
While the Latent ODE base model can powerfully represent SDFs, the LatSegODE also inherits the limitations of this architecture. We visualize two common failure modes in Figure \ref{fig:lv-fail}. We observed that learning limit cycles was challenging for the base model. In the top trajectory, imprecise base model representations cause deviation from the true periodic solution as time progresses. Eventually, enough error accumulates that the accuracy gained by introducing a new segment overcomes the complexity cost of this action, resulting in over-segmentation. The bottom trajectory shows a failure mode caused by the inability of the base model to generalize. Over-segmentation occurs when test trajectories contain SDFs which start outside of the initial values found in training data, such as at the second true changepoint. The base model cannot generalize well to unseen dynamical modes or initial values, so changepoints are erroneously introduced to improve fit. In Appendix I, we report data augmentation techniques which slightly improve generalization, partially remedying these issues.
This experiment also shows how the LatSegODE can be used in conjunction with physical simulators in a paradigm similar to simulator based inference \cite{cranmer2020frontier}. We train an MLP to map latent initial states from a trained base model to the labelled Lotka-Volterra coefficients of training SDFs. On test trajectories where the correct number of changepoints was predicted, we could recover the dynamical coefficients with an MSE of $\mathbf{0.08 \pm 0.01}$. In contexts such as Wright-Fisher population dynamics \cite{fisher1923xxi, wright1931evolution}, where forward simulation is available but cannot be expressed in closed form, the LatSegODE could be applied to solve inverse parameter estimation problems.
\subsection{UCI Character Trajectories}
Finally, we apply the LatSegODE to the UCI Character Trajectory data set \cite{Dua:2019}. This data set contains 2858 pen tip trajectories collected while writing letters of the alphabet. The trajectories are three dimensional, corresponding to x / y coordinates and pen pressure while writing one character. The data set is pre-processed by normalization and smoothing. Trajectories are regularly sampled with a maximum of 205 observations. We sanitized the data set by removing sections at the beginning and end of trajectories where no movement occurs. We use $5\%$ of the data for validation, and hold out $5\%$ for testing. The LatSegODE base model is trained on the remaining data, using each character trajectory as a SDF. Model architecture and hyper-parameters are reported in Appendix J.
We synthetically construct hybrid test trajectories by composing character trajectories. We randomly sample a base character trajectory from the test set, then append up to two further randomly sampled character trajectories. To increase task difficulty, we add independent Gaussian noise with standard deviation $0.2$. We also sub-sample the test trajectories to reduce the number of observations the Latent ODE base model can condition on. Using this method, we generate $75$ synthetic test hybrid trajectories, each containing zero to two changepoints. We report the LatSegODE's segmentation performance on this synthetic test set in Table \ref{tab:ct-results}.
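A sketch of this composition procedure; the helper name and the sub-sampling fraction are our own illustrative choices.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def compose_hybrid(characters, noise_sd=0.2, keep_frac=0.5):
    """Compose sampled character SDFs into one noisy hybrid
    trajectory; `characters` is a list of (T_i, 3) arrays
    (x, y, pen pressure)."""
    traj = np.concatenate(characters)     # changepoints at the joins
    traj = traj + rng.normal(0.0, noise_sd, size=traj.shape)
    keep = np.sort(rng.choice(len(traj),
                              int(keep_frac * len(traj)),
                              replace=False))
    return traj[keep], keep               # sub-sampled observations
\end{verbatim}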
\begin{table}[ht]
\caption{Segmentation results on UCI Character Trajectories.}
\label{tab:ct-results}
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{lcccr}
\toprule
Method & \multicolumn{1}{p{1cm}}{\centering Rand\\Index} & \multicolumn{1}{p{2cm}}{\centering Hausdorff\\Metric} & \multicolumn{1}{p{1cm}}{\centering F1\\Score} \\
\midrule
LatSegODE & \textbf{0.9732} & \textbf{4.493} & \textbf{0.977} \\
\midrule
RPT-RBF & 0.7956 & 84.7 & 0.656\\
RPT-AR & 0.6994 & 164.65 & 0.738\\
RPT-NORM & 0.7693 & 105.92 & 0.611\\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\end{table}
In Figure \ref{fig:ct}, we provide an example reconstruction of a hybrid trajectory constructed by composing six character trajectories sampled from the test set. In both this figure and Table \ref{tab:ct-results}, the LatSegODE performs well in reconstructing long sequences of realistic, noisy data, and accurately detects the positions of changes in dynamical mode.
\section{Scope and Limitations}
\textbf{Data Labelling}: The LatSegODE requires SDF training data, typically obtained by splitting hybrid trajectories at labelled changepoints. Such labels can be hard to obtain, so ideally the LatSegODE could be extended to train directly on hybrid trajectories. One approach would be to marginalize over changepoints during training using an inference procedure, or to use an iterated-conditional-modes-like procedure that alternates between estimating an optimal segmentation given the current base model and updating the base model given the segmentation.
\textbf{Dependency on Dynamical Models}: The LatSegODE relies on a Latent ODE base model to capture SDF behavior. Thus, it inherits many limitations of Latent ODEs, but any future advances in the architecture and training of Latent ODEs can be directly integrated. While we chose Latent ODEs for their representational power, the base model could be replaced with any model for which a marginal likelihood can be computed. Bayesian approaches to Neural ODEs such as ODE2VAE \cite{yildiz2019ode} and the Neural ODE Process \cite{norcliffe2021neural}, as well as the Latent SDE \cite{li2020scalable}, could replace the Latent ODE base model with modifications. Thus, our framework can be used as a paradigm for an expanded family of methods which combine PELT and dynamical models.
\textbf{Runtime}: The runtime of the LatSegODE can be improved. The current implementation naively computes the ODE solution for the union of batch timepoints. \citet{chen2020learning} provide a change of variables method to solve ODEs with irregular timepoints in parallel. This can reduce the memory bottleneck of the current approach, allowing additional parallelism to decrease evaluation runtime. The LatSegODE can also be integrated with recent methods that regularize ODE dynamics \cite{kelly2020learning, finlay2020train}, which decrease evaluation runtime.
\section{Conclusion}
Here, we present the LatSegODE, which leverages Latent ODEs to represent hybrid trajectories. Using a Latent ODE base model trained on SDFs and the PELT changepoint detection algorithm, we identify positions of jump discontinuity and switching dynamical mode, and restart latent dynamics from new initial states at these points. We provide a novel integration of Latent ODEs and CPD methods that uses the marginal likelihood of segments as a scoring function, and we find that the resulting Bayesian Occam's Razor effect prevents over-segmentation. We compared the LatSegODE to baselines on synthetic and semi-synthetic benchmarks. Through qualitative analysis of example reconstructions, we highlight the LatSegODE's ability to represent hybrid trajectories, and demonstrate common failure modes. The LatSegODE outperforms all baselines in both reconstruction and segmentation, supporting it as a novel approach to modelling trajectories governed by hybrid systems.
\section*{Acknowledgements}
We thank Tianxing Li and David Duvenaud for their helpful feedback and preliminary reviewing. We also thank Haoran Zhang, Yulia Rubanova, and other members of the Morris Lab for many helpful suggestions. Finally, we thank the ICML reviewers of this paper for their insightful feedback. Resources used in preparing this research were provided, in part, by the Memorial Sloan Kettering Cancer Center, Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute \url{www.vectorinstitute.ai/partners}.
\nocite{bengio2015scheduled}
\nocite{dupont2019augmented}
\nocite{fu2019cyclical}
\nocite{kidger2020hey}
\nocite{kingma2014adam}
\nocite{rackauckas2020universal}
\section{PELT Algorithm}
The pruned exact linear time (PELT) algorithm is shown below:
\begin{algorithm2e}
\DontPrintSemicolon
\KwIn{Data observations $x_{1:N}$. Cost Function $\mathcal{C}$. Penalization $\beta$. Pruning parameter $K$.}
\KwOut{$cp$, position of changepoints.}
Initialize $F(0) = -\beta$. $cp = \{\}$, $R_1 = \{0\}$\;
\For{$\tau^* = 1, ..., N$}{
Calculate $F(\tau^*) = \min_{\tau \in R_{\tau^*}} [F(\tau) + \mathcal{C}(x_{(\tau+1) : \tau^*}) + \beta]$ \;
Let $\tau' = \arg \min_{\tau \in R_{\tau^*}}[F(\tau) + \mathcal{C}(x_{(\tau + 1):\tau^*}) + \beta]$ \;
Append $\tau'$ to $cp$ \;
Set $R_{\tau^* + 1} = \{\tau \in R_{\tau^*}\cup \{\tau^*\} : F(\tau) + \mathcal{C}(x_{(\tau+1):\tau^*}) + K \leq F(\tau^*)\}$
}
\Return $cp$
\caption{PELT Algorithm}
\end{algorithm2e}
Array $R$ stores the changepoints under consideration for the optimal segmentation. Line 6 contains the pruning condition of PELT. After computing the optimal sub-solution, PELT removes changepoints which cannot be optimal, as determined by equations (16-17) of the main text. In the LatSegODE, we flip this algorithm to perform maximization instead of minimization by switching the relevant signs. We set $\beta$ to zero, and use the marginal likelihood from equation (15) of the main text as $\mathcal{C}$. The choice of $K$ is problem dependent: increasing $K$ raises the threshold required to prune a changepoint, which increases segmentation accuracy at the cost of runtime.
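For reference, a minimal Python transcription of the minimization form is given below; we add explicit backtracking to recover the changepoint list, which the pseudocode leaves implicit, and \verb+cost+ stands for any segment cost function.
\begin{verbatim}
import numpy as np

def pelt(x, cost, beta, K=0.0):
    """Minimization form of PELT; the LatSegODE flips the signs and
    maximizes the segment marginal likelihood with beta = 0."""
    N = len(x)
    F = np.full(N + 1, np.inf)
    F[0] = -beta
    last = np.zeros(N + 1, dtype=int)  # optimal previous changepoint
    R = [0]
    for s in range(1, N + 1):
        vals = [F[tau] + cost(x[tau:s]) + beta for tau in R]
        best = int(np.argmin(vals))
        F[s], last[s] = vals[best], R[best]
        # Pruning: keep tau only if it could still start a segment
        # in some optimal segmentation (line 6 of the pseudocode).
        R = [t for t, v in zip(R, vals) if v - beta + K <= F[s]] + [s]
    cps, s = [], N          # backtrack to recover the changepoints
    while last[s] > 0:
        s = last[s]
        cps.append(s)
    return sorted(cps)
\end{verbatim}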
\section{Runtime}
Here, we document the trade-off between $K$ and segmentation runtime. Setting a high $K$ allows more changepoints to be considered for the optimal segmentation, so an optimal changepoint is accidentally pruned less frequently. Naturally, considering additional changepoints increases runtime, which we visualize in Figure \ref{fig:rt}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figures/seg_train_a.png}
\caption{One ground truth changepoint.}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{Figures/seg_train_b.png}
\caption{Two ground truth changepoints.}
\end{subfigure}
\caption{Run time as a function of $K$. X axis is in log scale.}
\label{fig:rt}
\end{figure}
We vary the $K$ term and measure the wall-clock runtime of LatSegODE segmentation. In subplot (a), we segment trajectories from the Lotka Volterra data set with one changepoint; these trajectories contain $410$ observations on average. In subplot (b), we segment trajectories with two changepoints, containing $620$ observations on average. We observe that segmentation accuracy increases with $K$; this relationship is shown in Table \ref{apptab:Kseg}. Accuracy at low values of $K$ is very poor, rapidly increases as $K$ grows, and then plateaus. For the Lotka-Volterra benchmark, a $K$ value around 50-100 offers the best performance for its runtime.
\begin{table}[ht]
\centering
\resizebox{0.75\textwidth}{!}{%
\begin{tabular}{l|llll|llll|}
\cline{2-9}
& True CP = 1 & & & & True CP = 2 & & & \\ \hline
\multicolumn{1}{|l|}{K} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} & \begin{tabular}[c]{@{}l@{}}Hausdorff \\ Metric\end{tabular} & \begin{tabular}[c]{@{}l@{}}F1 \\ Score\end{tabular} & \begin{tabular}[c]{@{}l@{}}Annot.\\ Error\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} & \begin{tabular}[c]{@{}l@{}}Hausdorff \\ Metric\end{tabular} & \begin{tabular}[c]{@{}l@{}}F1 \\ Score\end{tabular} & \begin{tabular}[c]{@{}l@{}}Annot.\\ Error\end{tabular} \\ \hline
\multicolumn{1}{|l|}{10} & 0.7781 & 131.6 & 0.5578 & 1.4 & 0.8662 & 150.8 & 0.5714 & 0.133 \\
\multicolumn{1}{|l|}{25} & 0.8344 & 103.8 & 0.8133 & 0.4 & 0.9342 & 105.8 & 0.6929 & 1.0 \\
\multicolumn{1}{|l|}{50} & 0.9024 & 66.4 & 0.9333 & 0.2 & 0.9585 & 24.6 & 0.5524 & 0.6 \\
\multicolumn{1}{|l|}{100} & 0.9016 & 66.6 & 0.9333 & 0.2 & 0.9659 & 19.6 & 0.5619 & 0.4 \\
\multicolumn{1}{|l|}{200} & 0.9031 & 66.2 & 0.9333 & 0.2 & 0.9582 & 22.4 & 0.4952 & 0.4 \\ \hline
\end{tabular}%
}
\caption{Segmentation accuracy as a function of $K$. Metrics averaged over 5 runs.}
\label{apptab:Kseg}
\end{table}
\section{Neural Event ODEs}
Here, we explain our intuition for why the Neural ODE and Neural Event ODE methods are unable to converge within our problem domains. Neural ODEs represent a trajectory as a deterministic function of its initial state; consequently, two different trajectories cannot evolve from an identical initial state. In our experimental benchmarks, and in time series modelling in general, trajectories may start at identical initial states but later diverge. Neural ODEs and Neural Event ODEs, which can only encode one trajectory per initial state, are not designed to represent this class of time series. Consequently, we observe that Neural ODEs and Neural Event ODEs do not converge during training on our benchmarks. An example is shown in Figure \ref{fig:eventode}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{Figures/EventODE.png}
\caption{An example of trajectories which cannot be simultaneously represented by a single Neural Event ODE. Both trajectories begin with an initial value of zero, but evolve with opposite dynamics.}
\label{fig:eventode}
\end{figure}
The figure shows two sine waves with opposite amplitudes and a shared initial value of zero. As expected, Neural Event ODE reconstructions are poor, and training did not converge. The Neural Event ODE was trained using the sine wave benchmark data set. The ODE function is parameterized using a 3 layer neural network (NN) with Tanh activations and 256 units in hidden layers. We use a 2 layer NN with 256 units and ReLU activations to parameterize the update and event functions. Hyper-parameter search was performed by varying the number of layers and the hidden layer width over the ranges (2, 3) and (64, 128, 256).
Latent ODEs do not suffer from this representational limitation. Consider the example in the figure: a Latent ODE can represent it using two different latent trajectories which begin at non-identical latent initial states. The decoder neural network can then learn a non-injective function which maps the non-identical latent initial states to identical initial values in data space. Extending the Neural Event ODE to a latent architecture is a promising direction for circumventing these limitations.
\section{General Experimental Design}
All experiments are run on a single Titan Xp GPU. Due to computational constraints, each training run was only performed once per hyper-parameter choice. Each segmentation evaluation was also only run once due to computational constraints.
\subsection{Reconstructive Baselines}
Reconstructive performance was baselined using several methods. First, we selected the GRU model, a standard choice in representing time series. The GRU $\Delta t$ model uses the time delta between observations as an additional input feature. Inclusion of the time delta improved extrapolation performance. The GRU-ODE model uses a Neural ODE to evolve hidden state between GRU updates. All of these methods are trained in an auto-regressive manner. During training, scheduled sampling \cite{bengio2015scheduled} is used. At each time step, with $25\%$ probability, the input data is replaced with the prediction output of the previous time point. We also mask $25\%$ of the observations at the end of the trajectory (in addition to masking described in main text), and include the prediction loss on these points. These models are trained using the mean squared error loss between ground truth and predicted data points.
As the LatSegODE is provided changepoint positions (and thus isolated simple dynamical flows) during training, we also provide this information to baseline methods. We trained models on trajectories which have been split into simple dynamical flows (SDFs), and also separately trained on entire hybrid trajectories. The best performing model resulting from these two training configurations is reported.
We trained vanilla Latent ODEs using only whole hybrid trajectories. Obtaining convergence was difficult and required days of training. We use the training strategy from \cite{rackauckas2020universal}, in which we iteratively grow the learned reconstruction. We experimented with an augmented Neural ODE \cite{dupont2019augmented} to represent latent dynamics, but did not observe significant benefit. We applied an adjoint computation modification to speed up training \cite{kidger2020hey}, but did not perform rigorous empirical testing to evaluate its benefits.
We hacked data generation to enable faster experimentation: generated observation times were irregularly sampled, but aligned across all trajectories, which enables batch training. In practice, Latent ODEs can handle non-aligned irregularly sampled time series by solving for the union of all time points in a mini-batch, but this drastically increases memory usage and runtime. Thus, for convenience, we adopt the aforementioned hack.
\subsection{Segmentation}
We report the specific formulations of the CPD methods and metrics used in our benchmarks. These descriptions are referenced from the \verb+ruptures+ \cite{truong2020selective} package documentation.
\subsubsection*{Metrics}
The Rand Index \cite{rand1971objective} measures the overlap between the predicted segmentation and the ground truth segmentation for a segmentation $S$ on data points $x_{1:T}$. A membership matrix $A$ is defined such that $A_{ij} = 1$ if $x_i$ and $x_j$ are in the same segment. Otherwise, $A_{ij} = 0$. Membership matrices are generated for both the ground truth segmentation $(A)$ and the predicted segmentation $(\Tilde{A})$. The Rand index is defined as:
\begin{equation}
\text{RandIndex} = \frac{\sum_{i<j} \mathds{1}[A_{ij} = \Tilde{A}_{ij}]}{T(T-1) / 2}
\end{equation}
The Hausdorff metric \cite{rockafellar2009variational} is a measure of the maximal distance between the predicted segmentation and the ground truth segmentation. Given a set of ground truth changepoints $t_1, t_2, \ldots$ and predicted changepoints $\hat{t}_1, \hat{t}_2, \ldots$, it is computed as:
\begin{equation}
\text{Hausdorff}(\{t_k\}_k, \{\hat{t}_l\}_l) = \max\left\{ \max_k \min_l |t_k - \hat{t}_l|, \; \max_l \min_k |t_k - \hat{t}_l| \right\}
\end{equation}
Intuitively, it returns the largest distance from any changepoint in one set to the closest changepoint in the other set.
The F1 score is calculated as the standard F1 score, namely the harmonic mean between precision and recall:
\begin{equation}
F_1 = 2 \times \frac{\text{precision} \times \text{recall}}{\text{precision} + \text{recall}}
\end{equation}
A changepoint prediction is considered correct if it falls within 10 indices of a true changepoint. This definition of correctness is used to calculate the precision and recall in the F1 score.
The annotation error reports the difference between the number of predicted changepoints and the number of true changepoints.
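Reference sketches of these two quantities; our matching step is simplified relative to the one-to-one matching used by \verb+ruptures+.
\begin{verbatim}
import numpy as np

def f1_score(true_cps, pred_cps, margin=10):
    """F1 where a prediction within `margin` indices is correct."""
    t = np.asarray(true_cps)
    p = np.asarray(pred_cps)
    if len(t) == 0 or len(p) == 0:
        return 0.0
    close = np.abs(t[:, None] - p[None, :]) <= margin
    precision = close.any(axis=0).mean()  # matched predictions
    recall = close.any(axis=1).mean()     # matched true changepoints
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def annotation_error(true_cps, pred_cps):
    """Absolute difference in changepoint counts."""
    return abs(len(pred_cps) - len(true_cps))
\end{verbatim}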
\subsubsection*{Methods}
We outline the three cost functions used in the main text. Other cost functions (L1/L2 deviation, rank transformation, Mahalanobis distance) were evaluated but are not reported due to inferior performance. These cost functions are used with PELT. Alternative search methods such as binary segmentation and sliding windows were not considered; as greedy methods, they are prone to returning sub-optimal results.
\textbf{RPT-RBF}: This cost function detects changes in the distribution of a sequence of i.i.d. random variables. It introduces a kernel $k(., .): \mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}$ and a feature map $\Phi: \mathbb{R}^d \rightarrow \mathcal{H}$ where $\mathcal{H}$ is a Hilbert space. RPT-RBF embeds a signal as $\{\Phi(x_t)\}_t$, and detects changes in the mean so that the cost function on an interval $I$ is:
\begin{equation}
c(x_I) = \sum_{t\in I} ||\Phi(x_t) - \Bar{\mu}||^2_\mathcal{H}
\end{equation}
where $\Bar{\mu}$ is the empirical mean of the sub-trajectory $\{\Phi(x_t)\}_{t\in I}$. RPT-RBF uses the radial basis function, which is defined as:
\begin{equation}
k(x, y) = \exp(-\gamma ||x - y||^2)
\end{equation}
where $||.||$ is the Euclidean norm and $\gamma > 0$ is a smoothing parameter known as the bandwidth. Two other kernels are provided, based on cosine similarity and a linear model; we use the RBF kernel as it provided better performance.
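Since $\Bar{\mu} = \frac{1}{|I|}\sum_{s \in I} \Phi(x_s)$, expanding the squared norm and substituting $\langle \Phi(x_s), \Phi(x_t) \rangle_{\mathcal{H}} = k(x_s, x_t)$ shows that the cost never requires evaluating $\Phi$ explicitly:
\begin{equation}
c(x_I) = \sum_{t \in I} k(x_t, x_t) - \frac{1}{|I|} \sum_{s, t \in I} k(x_s, x_t).
\end{equation}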
\textbf{RPT-NORM}: This cost function scores a segment using a sequence of multivariate Gaussian random variables, i.e., a Gaussian Process. For a signal $\{x_t\}_t$ on interval $I$, this cost is defined as:
\begin{equation}
c(x_I) = |I| \log \det \hat{\Sigma}_I
\end{equation}
where $\hat{\Sigma}_I$ is the empirical covariance matrix of the data points $\{x_t\}_{t\in I}$.
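A direct transcription of this cost; the ridge term is our own addition to guard against singular covariance estimates.
\begin{verbatim}
import numpy as np

def norm_cost(x_I, ridge=1e-6):
    """|I| * log det of the empirical covariance of a (T, d)
    segment."""
    x_I = np.atleast_2d(np.asarray(x_I, dtype=float))
    T, d = x_I.shape
    cov = np.cov(x_I, rowvar=False, bias=True) + ridge * np.eye(d)
    cov = np.atleast_2d(cov)  # np.cov returns a scalar when d == 1
    _, logdet = np.linalg.slogdet(cov)
    return T * logdet
\end{verbatim}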
\textbf{RPT-AR}: This cost function introduces an auto-regressive model. It represents the unknown changepoint indices as $0 < t_1 < \ldots < t_N$. A piece-wise auto-regressive model is introduced:
\begin{equation}
x_t = z'_t \delta_j + \epsilon_t \hspace{2em} \forall t = t_j, ..., t_{j+1} - 1
\end{equation}
To clarify, $t_j$ denotes a segmentation boundary, while $t$ indexes actual time points; thus $j$ enumerates the segments. The variable $z_t = [x_{t-1}, \ldots, x_{t-p}]$ is the lag vector, with $p$ the order of the auto-regressive model.
On an interval $I$, the cost is defined as:
\begin{equation}
c(x_I) = \min_{\delta \in \mathbb{R}^p} \sum_{t\in I} || x_t - \delta' z_t ||^2_2
\end{equation}
For this cost function, the lag order is a hyper-parameter. We chose the default value (ten); a small grid search around this value confirmed that it performed best.
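A least-squares sketch of this cost for a 1-D signal; the helper name is ours.
\begin{verbatim}
import numpy as np

def ar_cost(x_I, p=10):
    """Least-squares cost of one AR(p) fit to a 1-D segment."""
    x = np.asarray(x_I, dtype=float)
    # Row t of Z is the lag vector [x_{t-1}, ..., x_{t-p}].
    Z = np.stack([x[p - k - 1: len(x) - k - 1] for k in range(p)],
                 axis=1)
    y = x[p:]
    delta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(np.sum((y - Z @ delta) ** 2))
\end{verbatim}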
\section{Sine Wave Experimental Setup}
We simulate 7500 total trajectories, of which 300 are used for validation and 150 for testing. Each trajectory contains up to two changepoints. The time length and number of observations in each SDF between changepoints are uniformly sampled from the ranges $(3,5)$ and $(50, 150)$ respectively. The amplitude and frequency of SDFs are uniformly sampled from the ranges $(-8, 8)$ and $(2,4)$, respectively. The phase of each SDF is also randomly drawn, which introduces jump discontinuities. The time points of observation were randomly sampled from a uniform distribution over the range of the segment, but were forcibly aligned for experimental convenience. The absolute change in amplitude between SDFs is at least $2.5$, and we added independent Gaussian noise with standard deviation $0.025$. At evaluation time, all segmentation methods use a minimum segment length of $20$, and we use a $K$ term of $200$.
Next, we report architecture hyperparameters and the training procedure. Unless otherwise stated, hyper-parameter search was performed mainly by adjusting the depth and width of the NNs used to parameterize the Neural ODEs in the various architectures. We start with a 3 layer NN with 50 units, and increase the width/depth until the model can fit the data.
\textbf{GRU / GRU$\Delta$t} The GRU model used 100 units in the GRU, and a 20 dimensional hidden state. A 2 layer NN with 100 units and ReLU activations is used for the output network. The GRU $\Delta$t model used an identical architecture. These hyperparameters were selected using grid search: we started with 50 units in the GRU, and increased the number of units until overfitting occurred. Both models were trained using Adamax \cite{kingma2014adam} with a learning rate of 0.01. The learning rate was manually decayed to 1e-3 and then 1e-4 when the loss plateaued. A batch size of $512$ was used, the largest permitted by memory constraints, as larger batch sizes allowed for faster convergence.
\textbf{GRU-ODE} The GRU-ODE model used 100 units in the GRU, and a 10 dimensional hidden state. The output network is a 2 layer 100 unit NN with ReLU activations. The Neural ODE component of the GRU-ODE is parameterized by a 3 layer 100 unit NN with Tanh activations. We found that using batch size 1 provided good results, at the cost of a very long training run. The model was optimized using Adamax, with manual learning rate reduction on plateaus from 0.01 to 1e-3 to 1e-4.
\textbf{Latent ODE} The vanilla Latent ODE model used an encoder with a 32 dimensional hidden state. The latent dynamics were 16 dimensional. The dimensionality of the vanilla Latent ODE is larger than that of the base model used in the LatSegODE, as the latent initial state of a vanilla Latent ODE must encode more information. The encoder GRU contains 100 units, and the encoder Neural ODE was parameterized by a 3 layer 100 unit NN with Tanh activations. The Neural ODE representing latent dynamics was parameterized identically. We used a 2 layer 100 unit NN with ReLU activations as the decoder. The Latent ODE was trained using Adamax with learning rate 0.01, manually decayed to 1e-3 when the loss plateaued. A batch size of 128 was used. We used KL annealing \cite{fu2019cyclical}, such that the KL weight was 0 at the start of training and reached 1 at epoch 5. The fixed variance used to compute the ELBO was set to 1. Latent dynamics were solved using the dopri5 solver with relative and absolute tolerances of 1e-8. The encoder Neural ODE was solved using Euler's method.
\textbf{LatSegODE} The LatSegODE used a 10 dimensional hidden state in the encoder, and a 5 dimensional latent dynamical state. The original Latent ODE paper observed that the dimension of the encoder hidden state must be larger than the dimension of the latent dynamical state. Our experimentation supported this claim, and training was not possible if this rule was violated. The GRU in the GRU-ODE encoder uses 100 units, and the encoder Neural ODE is a 2 layer 100 unit NN with Tanh activations. The Neural ODE used to represent latent dynamics used the same hyperparameters. We found that Tanh activations were required for stable training in Latent ODEs. We attempted other activations such as Swish, ReLU, and Softplus, but found this caused numerical under/overflow. We used a 2 layer 100 unit decoder network with ReLU activations.
It is important to use ReLU activations in the decoder network. Alternative activations such as Tanh or Softplus are bijective, so two distinct latent initial states can never be decoded to the same observed data point. Using the Tanh activation therefore greatly limits the representational power of the Latent ODE base model on data trajectories which start at the same value but later diverge.
The LatSegODE was trained using Adamax using learning rate 0.01, reduced to 1e-3 manually when validation loss plateaued. We used a batch size of 256. We used the dopri5 solver to solve latent dynamics with relative tolerance of 1e-5 and absolute tolerance of 1e-6. Increasing the ODE solve tolerance had no adverse effects on results and sped up training. The encoder Neural ODE used Euler's method to solve dynamics. We used KL annealing during training.
The fixed variance used to compute ELBO is set to 1. At segmentation time, it was critical to use the same fixed variance to evaluate marginal likelihood. We used $100$ Monte Carlo samples to estimate marginal likelihood. We used a $K$ term of $200$. To reduce the memory cost of segmentation, we rounded times of observation to the nearest 0.01, reducing the size of the union of time points which must be solved.
\pagebreak
\section{Sine Wave Results}
We report experimental results on the Sine Wave data set below. The Latent ODE trained on input augmented with a binary time series of changepoint locations is denoted Aug. Latent ODE.
\begin{table}[ht]
\centering
\resizebox{0.75\textwidth}{!}{%
\begin{tabular}{l|lll|llll|}
\cline{2-8}
& \multicolumn{3}{l|}{Reconstruction} & \multicolumn{4}{l|}{Segmentation} \\ \hline
\multicolumn{1}{|l|}{Method} & \begin{tabular}[c]{@{}l@{}}Total \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Interp. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Extrap. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}F1 \\ Score\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}Hausdorff \\ Metric\end{tabular} ($\downarrow$) & \begin{tabular}[c]{@{}l@{}}Annot. \\ Error\end{tabular} \\ \hline
\multicolumn{1}{|l|}{LatSegODE} & \textbf{1.933} & \textbf{0.639} & \textbf{4.266} & \textbf{0.993} & \textbf{0.989} & \textbf{2.107} & 0.033 \\ \hline
\multicolumn{1}{|l|}{GRU} & 8.065 & 0.862 & 20.634 & - & - & - & - \\
\multicolumn{1}{|l|}{GRU $\Delta$t} & 6.911 & 0.973 & 17.560 & - & - & - & - \\
\multicolumn{1}{|l|}{GRU-ODE} & 6.616 & 0.815 & 17.130 & - & - & - & - \\
\multicolumn{1}{|l|}{Latent ODE} & 12.877 & 9.111 & 17.721 & - & - & - & - \\
\multicolumn{1}{|l|}{Aug. Latent ODE} & 13.273 & 4.914 & 26.922 & - & - & - & - \\ \hline
\multicolumn{1}{|l|}{RPT-RBF} & - & - & - & 0.918 & 0.837 & 25.220 & - \\
\multicolumn{1}{|l|}{RPT-AR} & - & - & - & 0.938 & 0.887 & 26.013 & - \\
\multicolumn{1}{|l|}{RPT-NORM} & - & - & - & 0.855 & 0.761 & 47.213 & - \\ \hline
\end{tabular}%
}
\caption{Evaluation on Sine Wave data set. Values are averages over 150 test trajectories. Arrows beside metric denote whether higher values ($\uparrow$) or lower values ($\downarrow$) indicate better performance.}
\label{tab:sine}
\end{table}
\section{Lotka-Volterra Experimental Setup}
Here, we report the parameters used to generate the Lotka Volterra data set, and the model hyper-parameters used during benchmarking. As previously reported, we simulated $34000$ training hybrid trajectories, with $600$ trajectories used as a validation set and $150$ used for testing. Each trajectory contained zero to two changepoints. Changepoints in these trajectories are labelled, and the LatSegODE base model is trained on the SDFs between changepoints. The vanilla Latent ODE baseline is trained only on the hybrid trajectories, while the other baselines were trained both on full hybrid trajectories and SDFs separately, with the best result reported.
Each SDF between changepoints was generated with between $175$ and $225$ observations, and ends at a time randomly sampled from $(14, 16)$. The coefficients for the Lotka-Volterra SDFs were uniformly sampled from the ranges $(0.5, 1.5), (0.5, 1.5), (1.5, 2.5), (0.5, 1.5)$ for $\alpha, \beta, \delta, \gamma$ respectively. We enforced a minimum change in the norm of the coefficients of $0.6$ between SDFs, and added independent Gaussian noise of $0.01$ to all trajectories. We masked $20\%$ of data points for interpolation testing, and the last $100$ data points for extrapolation testing, as reported in the main text.
In the next appendix, we separately evaluate performance on data sets with jump discontinuities (JD) and without (SD). For the JD set, each SDF within a trajectory is restarted at a new initial population at changepoints, uniformly sampled from the ranges $(1.5, 2.5), (0.5, 1.5)$ for $x, y$ respectively. In the SD set, SDFs are initialized from the same distribution, but remain continuous at subsequent changepoints. To clarify, trajectories in the SD set do not contain jump discontinuities at changepoints; these locations only feature a switch in dynamical mode.
Next, we report the hyperparameters and training procedures in the Lotka Volterra experiments. We performed hyperparameter search over the width and depth of the NNs parameterizing Neural ODEs in our architectures. We started with a one layer 64 unit NN, and increased the number of layers and units until convergence was possible. Latent dimension selection was difficult. We started with a latent dimension of 64, and slowly decreased capacity until under-fitting occurred.
\textbf{GRU / GRU$\Delta$t} We found that the GRU always performed worse than the GRU$\Delta$t, so we did not report its results. The GRU $\Delta$t model used 16 dimensions in its hidden state, and 200 units in the GRU. A 2 layer 100 unit NN with ReLU activations was used as an output network. The model was trained using Adamax, where learning rate was decayed from 1e-2 to 1e-3 to 1e-4 each time validation loss plateaued. We used a batch size of 512.
\textbf{GRU-ODE} The GRU-ODE model used a hidden state with dimension 50, and GRU with 100 units. A 2 layer 100 unit output network with ReLU activations was used. The Neural ODE was parameterized with using a 3 layer 200 unit NN with Tanh activations. The GRU-ODE was trained by Adamax using a constant 1e-3 learning rate. The Neural ODE solutions were solved using dopri5 with relative and absolute tolerances of 1e-3 and 1e-4.
\textbf{Latent ODE} Latent ODE models were very hard to train, and many approaches were taken. First, we attempted to tune hyperparameters, increasing both the latent dimension size and the number of parameters in the NNs parameterizing the Neural ODEs. The rationale was that the added complexity of hybrid trajectories necessitated increased model complexity. We found that model reconstruction accuracy on both data sets was poor even after many epochs. A more successful strategy was to adopt an iterative growing scheme for training \cite{rackauckas2020universal}. We also adopted a forward prediction training strategy: we started training on a masked sub-trajectory with only the first 50 data points, and evaluated the loss on prediction of the next 50 data points. The training loss was computed using all observed points. After each epoch, we increased the number of observed data points by 50. We also found that taking multiple samples of the latent initial state during training could stabilize gradient estimates, and we used 3 samples per trajectory.
Latent ODEs were trained using Adamax with a constant learning rate of 5e-3, decayed to 1e-3 after 10 epochs. A batch size of 256 was used. Latent ODEs were trained using KL annealing such that a KL weight of 1 was reached after 10 epochs. A fixed variance of 0.01 was used in the ELBO.
The Latent ODE models had a hidden state of dimension 16, and a latent state of dimension 8. The encoder used 100 GRU units, with its Neural ODE parameterized by a 3 layer 100 unit NN with Tanh activations. Identical hyper-parameters were used for the NN parameterizing the latent Neural ODE. We used a linear decoder. Latent dynamics were solved using dopri5 with relative and absolute tolerances of 1e-4.
\textbf{LatSegODE} The LatSegODE used a base model with a hidden state of dimension 16, and a latent state of dimension 8. The encoder used GRUs with 100 units, and both the encoder Neural ODE and the latent Neural ODE were parameterized by 3 layer 100 unit NNs with Tanh activations. A 2 layer 100 unit decoder network with ReLU activations was used.
We trained our model using Adamax with an initial learning rate of 5e-3, decayed to 1e-3 after 10 epochs and reduced to 1e-4 after the validation loss plateaued. A batch size of 256 was used. Latent dynamics were solved using dopri5, with relative and absolute tolerances of 1e-4. Training used KL annealing such that the KL weight started at 0 and reached 1 after 10 epochs. A fixed variance of 0.01 was used to compute the ELBO.
At segmentation time, we used a fixed variance of 0.01 to compute the marginal likelihood. We took 100 MC samples to compute the marginal likelihood, and rounded times of observation to 2 decimal places.
\pagebreak
\section{Lotka-Volterra Expanded Experiments}
We report the full results of the Lotka-Volterra experiments on two additional data sets: jump discontinuous (JD) and switching dynamics (SD). The JD set is identical to the set reported in the main text. The SD set does not contain jump discontinuities at changepoints; each trajectory only switches to a new dynamical mode. The SD set is included to evaluate the performance of the LatSegODE without discontinuous jumps, which might be expected to be easier, since ODE solves do not need to bridge a jump discontinuity. We find the opposite: the less distinct changepoints in the SD set yield a harder data set, as shown by the decreased performance of the baselines. We report the metrics in Table \ref{tab:lv-metrics-full} below.
\begin{table}[ht]
\centering
\begin{subtable}{\textwidth}
\centering
\resizebox{0.75\textwidth}{!}{%
\begin{tabular}{l|lll|llll|}
\cline{2-8}
& \multicolumn{3}{l|}{Reconstruction} & \multicolumn{4}{l|}{Segmentation} \\ \hline
\multicolumn{1}{|l|}{Method} & \begin{tabular}[c]{@{}l@{}}Total \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Interp. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Extrap. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}F1 \\ Score\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}Hausdorff \\ Metric\end{tabular} ($\downarrow$) & \begin{tabular}[c]{@{}l@{}}Annot. \\ Error\end{tabular} \\ \hline
\multicolumn{1}{|l|}{LatSegODE} & \textbf{0.068} & 0.0312 & \textbf{0.2396} & \textbf{0.9464} & \textbf{0.8268} & \textbf{47.67} & 0.76 \\ \hline
\multicolumn{1}{|l|}{GRU $\Delta$t} & 0.1718 & \textbf{0.0193} & 0.8329 & - & - & - & - \\
\multicolumn{1}{|l|}{GRU-ODE} & 0.2747 & 0.1201 & 2.0358 & - & - & - & - \\
\multicolumn{1}{|l|}{Latent ODE} & 0.6155 & 0.5072 & 0.9505 & - & - & - & - \\ \hline
\multicolumn{1}{|l|}{RPT-RBF} & - & - & - & 0.7956 & 0.4167 & 84.7 & - \\
\multicolumn{1}{|l|}{RPT-AR} & - & - & - & 0.6994 & 0.4367 & 164.65 & - \\
\multicolumn{1}{|l|}{RPT-NORM} & - & - & - & 0.7693 & 0.42 & 105.92 & - \\ \hline
\end{tabular}%
}
\caption{Jump discontinuous (JD) test set.}
\end{subtable}
\vspace{1em}
\begin{subtable}{\textwidth}
\centering
\resizebox{0.75\textwidth}{!}{%
\begin{tabular}{l|lll|llll|}
\cline{2-8}
& \multicolumn{3}{l|}{Reconstruction} & \multicolumn{4}{l|}{Segmentation} \\ \hline
\multicolumn{1}{|l|}{Method} & \begin{tabular}[c]{@{}l@{}}Total \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Interp. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Extrap. \\ MSE\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}F1 \\ Score\end{tabular} ($\uparrow$) & \begin{tabular}[c]{@{}l@{}}Hausdorff \\ Metric\end{tabular} ($\downarrow$) & \begin{tabular}[c]{@{}l@{}}Annot. \\ Error\end{tabular} \\ \hline
\multicolumn{1}{|l|}{LatSegODE} & \textbf{0.2284} & 0.2176 & \textbf{1.010} & \textbf{0.8758} & \textbf{0.6857} & \textbf{75.66} & 2.42 \\ \hline
\multicolumn{1}{|l|}{GRU $\Delta$t} & 0.4519 & \textbf{0.0383} & 2.427 & - & - & - & - \\
\multicolumn{1}{|l|}{GRU-ODE} & 0.4386 & 0.2176 & 3.280 & - & - & - & - \\
\multicolumn{1}{|l|}{Latent ODE} & 1.4027 & 1.1924 & 2.1314 & - & - & - & - \\ \hline
\multicolumn{1}{|l|}{RPT-RBF} & - & - & - & 0.7833 & 0.4233 & 77.77 & - \\
\multicolumn{1}{|l|}{RPT-AR} & - & - & - & 0.7020 & 0.4433 & 138.24 & - \\
\multicolumn{1}{|l|}{RPT-NORM} & - & - & - & 0.7672 & 0.4233 & 94.44 & - \\ \hline
\end{tabular}%
}
\caption{Switching dynamical mode (SD) test set.}
\end{subtable}
\caption{Expanded results on Lotka Volterra hybrid trajectory benchmark. Values are averages over 150 test trajectories. Arrows beside metric denote whether higher values ($\uparrow$) or lower values ($\downarrow$) indicate better performance.}
\label{tab:lv-metrics-full}
\end{table}
\section{Data Augmentation Strategies}
We report two data augmentation techniques which can increase training speed and the ability of the Latent ODE to generalize. These techniques are applied to data batches prior to each training iteration. First, we propose sub-sampling trajectories by randomly removing a percentage of observed points. This method resembles techniques that iteratively grow trajectory length throughout training to avoid local minima \cite{rackauckas2020universal}, and we hypothesize that sub-sampling confers a similar benefit. We also introduce start-truncation, where the first $N$ data observations are cropped. We hypothesize this augmentation decreases generalization error by exposing the Latent ODE encoder to more initial states.
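A sketch of both augmentations applied to a single trajectory; the bounds mirror the experiment described below, and the helper name is ours.
\begin{verbatim}
import numpy as np

def augment(x, t, rng, min_keep=40, max_trunc=160):
    """Apply start-truncation, then random sub-sampling, to one
    trajectory (x, t), each an array over the same timepoints."""
    n = int(rng.integers(0, max_trunc + 1))      # start-truncation
    x, t = x[n:], t[n:]
    size = int(rng.integers(min_keep, len(x) + 1))
    keep = np.sort(rng.choice(len(x), size=size, replace=False))
    return x[keep], t[keep]                      # sub-sampled points
\end{verbatim}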
To demonstrate the efficacy of these augmentation methods, we generate $10000$ SDFs from each of the previous experimental domains, with $500$ trajectories held out for validation and $500$ for test evaluation. Each trajectory contains $200$ samples. For sub-sampling, we randomly select between $40$ and $200$ data points to train on per batch. For truncation, we randomly select the number of data points to remove from the range $(0, 160)$. We train Latent ODE models with and without the augmentation techniques, and plot the validation loss curve for each training run in Figure \ref{fig:loss_curves}.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{Figures/sine_wave_loss.png}
\caption{}
\end{subfigure}
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{Figures/lv_loss.png}
\caption{}
\end{subfigure}
\caption{Validation loss curves using data augmentations. Plot (a) shows runs on Sine Wave data, while (b) shows runs on Lotka Volterra data.}
\label{fig:loss_curves}
\end{figure}
\begin{table}[ht]
\centering
\resizebox{0.47\textwidth}{!}{%
\begin{tabular}{l|ll|ll|}
\cline{2-5}
& \multicolumn{2}{l|}{Sine Wave} & \multicolumn{2}{l|}{Lotka Volterra} \\ \hline
\multicolumn{1}{|l|}{Method} & Test MSE & \begin{tabular}[c]{@{}l@{}}Runtime \\ (hrs)\end{tabular} & Test MSE & \begin{tabular}[c]{@{}l@{}}Runtime \\ (hrs)\end{tabular} \\ \hline
\multicolumn{1}{|l|}{None} & 1.8032 & 13:10 & 3.9742 & 22:55 \\
\multicolumn{1}{|l|}{Subsampling} & 0.1750 & 9:06 & 0.1005 & 18:44 \\
\multicolumn{1}{|l|}{Truncation} & 0.14713 & 9:28 & 0.6525 & 18:21 \\
\multicolumn{1}{|l|}{Both} & 0.3405 & 6:18 & 0.7672 & 13:11 \\ \hline
\end{tabular}%
}
\caption{Comparison of augmentation methods.}
\label{tab:aug}
\end{table}
The augmentation methods accelerate training and achieve faster convergence. In Table \ref{tab:aug}, we report the generalization error calculated using test trajectories, and the wall time for training to complete 350 / 400 epochs; both decrease. Below, we report the parameters used to run the data augmentation experiments.
The Sine Wave data set was generated using amplitudes and frequencies sampled from $(-8, 8)$ and $(2, 4)$ respectively. Phase was randomly sampled. $20000$ training trajectories of length 5 and with $200$ samples were generated. $2000$ validation trajectories were generated, and $2000$ test trajectories were generated. The Lotka-Volterra data set was generated using coefficients sampled from range $(0.5, 1.5), (0.5, 1.5), (1.5, 2.5), (0.5, 1.5)$ for $\alpha, \beta, \delta, \gamma$. The initial populations were sampled from range $(1.5, 2.5), (0.5, 1.5)$. $10000$ training trajectories were generated, with $500$ validation trajectories and $500$ test trajectories. Training trajectories contained $200$ samples, while validation and test trajectories contained $100$ samples. Trajectories were all of length $15$.
The Latent ODE in both Lotka Volterra and Sine Wave training runs were trained using Adamax with constant 1e-3 learning rate. A batch size of 256 was used. A latent dimension of 8 was used, while the encoder hidden state was of dimension 200. GRUs contained 200 units. A decoder network with 2 layer and 50 units with ReLU activation was used. In the Lotka Volterra data set, a 3 layer 200 units NN with Tanh activations parameterized both the encoder and latent Neural ODEs. In the Sine wave data set, the neural network width was reduced to 100.
\section{Character Trajectory Experimental Hyperparameters}
The LatSegODE base model was trained using a latent dimension of 8, and an encoder hidden state dimension of 16. Encoder GRUs contained 200 units. We parameterized the Neural ODEs in the encoder and latent dynamics using a 5 layer 200 unit NN with Tanh activations. The decoder network was a 3 layer 200 unit NN with ReLU activations. Hyperparameter search was performed by modifying the number of units in hidden layers and number of layers in all Neural ODE NNs. We started with 128 units per layer and 2 layers, and gradually increased the number of parameters until a good reconstruction was found.
The base model was trained using Adamax at a learning rate of 1e-3, reduced to 1e-4 when validation loss plateaued. Data augmentation was used, randomly sampling truncation bounds from (30, trajectory length) and the number of points to sub-sample from (30, trajectory length). KL annealing was used, such that the training run started with KL weight of 0, and reached a weight of 1 at epoch 50. We clipped gradients to a norm of 2. A fixed variance of 0.01 is used to compute the ELBO. Latent dynamics were solved using the dopri5 solver, with relative and absolute tolerances of 1e-5 and 1e-5. The encoder Neural ODE dynamics were solved using Euler's method.
At segmentation time, the LatSegODE used a fixed variance of 0.01 to compute marginal likelihood using 200 MC samples. A $K$ term of 100 was used. Time of observation was rounded to 2 decimal places. We set the minimum possible segment length to 20, similar to baselines.
\section{Sine Wave Ablation Study}
We report the effects of the number of training trajectories and the number of samples per trajectory on LatSegODE performance. A unique LatSegODE is trained for each combination of the two parameters. We measure segmentation performance using the previously established metrics. The Sine Wave data set is used, with the same data generation and model hyper-parameters as in the main body Sine Wave experiments. The test set contains 75 trajectories, each with zero to two changepoints. Each test trajectory contains 100 observed samples. The results are shown in Table \ref{tab:sine_ablate}. See Table \ref{tab:sine} for baseline results.
\begin{table}[h]
\centering
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{l|l|l|l|l|l|l|l|l|l|}
\cline{2-10}
& \multicolumn{9}{l|}{\# Trajectory Samples} \\ \cline{2-10}
& \multicolumn{3}{l|}{50} & \multicolumn{3}{l|}{100} & \multicolumn{3}{l|}{200} \\ \hline
\multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}\# Train \\ Trajectory\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} & F1 Score & \begin{tabular}[c]{@{}l@{}}Hausdorff\\ Metric\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} & F1 Score & \begin{tabular}[c]{@{}l@{}}Hausdorff\\ Metric\end{tabular} & \begin{tabular}[c]{@{}l@{}}Rand \\ Index\end{tabular} & F1 Score & \begin{tabular}[c]{@{}l@{}}Hausdorff\\ Metric\end{tabular} \\ \hline
\multicolumn{1}{|l|}{3000} & 0.700 & 0.577 & 56.0 & 0.71 & 0.666 & 50.21 & 0.853 & 0.512 & 29.28 \\ \hline
\multicolumn{1}{|l|}{10000} & 0.663 & 0.601 & 49.6 & 0.842 & 0.813 & 28.2 & 0.968 & 0.961 & 6.29 \\ \hline
\multicolumn{1}{|l|}{30000} & 0.866 & 0.809 & 25.28 & 0.911 & 0.900 & 15.23 & 0.981 & 0.979 & 4.92 \\ \hline
\end{tabular}
}
\caption{LatSegODE performance measured by segmentation metrics, as a function of training set size and number of samples in training trajectories.}
\label{tab:sine_ablate}
\end{table}
\section{Introduction}
Classification of manifolds is a fundamental problem in geometry and topology.
There have been tremendous investigations of this problem in both the smooth and topological categories.
For instance, in the general case, Wall \cite{Wal1, Wal3} studied the $(n-1)$-connected $2n$-manifolds and the $(n-1)$-connected $(2n+1)$-manifolds. For the concrete case of specified dimension, Barden \cite{Bar} classified the simply connected $5$-manifolds, and Wall \cite{Wal2}, Jupp \cite{Jup} and Zhubr \cite{Zhu1, Zhu2} classified the simply connected $6$-manifolds. More recently, Kreck and Su \cite{KS} classified certain non-simply connected $5$-manifolds, while Crowley and Nordstr\"{o}m \cite{CN} and Kreck \cite{Kre} studied the classification problem for various kinds of $7$-manifolds.
In the literature mentioned above, the homotopy classification of a manifold $M$ was usually obtained as a byproduct, in terms of a system of invariants. However, it is almost impossible to extract nontrivial homotopy information about $M$ directly from such a classification. Conversely, unstable homotopy theory is a powerful tool for studying the homotopy properties of manifolds, from both the suspension aspect and the loop aspect. From the suspension aspect, So and Theriault \cite{ST} determined the homotopy type of the suspension of a connected $4$-manifold, while Huang \cite{H} studied the suspension of a simply connected $6$-manifold. From the loop aspect, Beben and Theriault \cite{BT1} studied the loop decompositions of $(n-1)$-connected $2n$-manifolds, while Beben and Wu \cite{BW} and Huang and Theriault \cite{HT} studied the loop decompositions of the $(n-1)$-connected $(2n+1)$-manifolds. The homotopy groups of these manifolds were also investigated by Sa. Basu and So. Basu \cite{BB, Bas} from a different point of view. Moreover, a theoretical method of loop decomposition was developed by Beben and Theriault \cite{BT2}, which is quite useful for studying the homotopy of manifolds.
In this paper, we study the loop homotopy of certain simply connected $6$-manifolds arising from $4$-manifolds.
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 1$. A rank $3$ vector bundle $\xi$ over $N$ is classified by a map $f: N\longrightarrow BSO(3)$, where $BSO(3)$ is the classifying space of the special orthogonal group $SO(3)$. The sphere bundle of $\xi$
\begin{equation}\label{Mdefeq}
S^2\stackrel{i}{\longrightarrow} M\stackrel{p}{\longrightarrow} N
\end{equation}
defines the closed $6$-manifold $M$. Since the integral cohomologies of $N$ and $S^2$ are free and concentrated in even degrees, the Serre spectral sequence of (\ref{Mdefeq}) collapses and $H^\ast(M;\mathbb{Z})\cong H^\ast(N;\mathbb{Z})\otimes H^\ast(S^2;\mathbb{Z})$.
We may study the loop homotopy of $M$ through $S^1$-bundles over it. Start with the circle bundle
\[
S^1\stackrel{j}{\longrightarrow} Y\stackrel{\pi}{\longrightarrow} N
\]
classified by a {\it primitive} element $\alpha\in H^2(N;\mathbb{Z})$, in the sense that $\alpha$ is not divisible by any integer $k$ with $k\neq \pm 1$.
There is the pullback of fibre bundles
\begin{equation}\label{keydiag}
\begin{gathered}
\xymatrix{
&
S^2 \ar@{=}[r] \ar[d]^{\iota} &
S^2 \ar[d]^{i} \\
S^1\ar@{=}[d] \ar[r]^{\jmath} &
X \ar[d]^{\mathfrak{p}} \ar[r]^{\psi} &
M \ar[d]^{p}\\
S^1 \ar[r]^>>>>{j} &
Y \ar[r]^<<<<{\pi} &
N,
}
\end{gathered}
\end{equation}
which defines the closed $7$-manifold $X$ with bundle projections $\psi$ and $\mathfrak{p}$ onto $M$ and $Y$ respectively.
The following theorem proved in Section \ref{sec: pf} is crucial for studying the loop homotopy of $M$.
\begin{theorem}\label{pdtbundlethm}
The bundle $\pi^\ast(\xi)$ defined in (\ref{keydiag}) is trivial if
\begin{itemize}
\item either $\xi$ is non-Spin, and $\omega_2(\xi)\equiv \alpha~{\rm mod}~2$,
\item or $\xi$ is Spin.
\end{itemize}
In particular, in either case,
\[
X\cong S^2\times Y.
\]
\end{theorem}
It should be pointed out that when $\xi$ is non-Spin, we can always choose a primitive $\alpha$ such that $\omega_2(\xi)\equiv \alpha~{\rm mod}~2$ as discussed in and after the proof of Lemma \ref{Nbundlelemma}. We now state our main theorem which will be proved in Section \ref{sec: pf}.
\begin{theorem}\label{decomthm}
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 1$. Let $M$ be the total manifold of the sphere bundle of any rank $3$ vector bundle over $N$. Then
\begin{itemize}
\item if $d=1$,
\[ \Omega M\simeq S^1\times \Omega S^2\times \Omega S^5,\]
\item if $d\geq 2$,
\[
\Omega M\simeq S^1\times \Omega S^2\times\Omega (S^2\times S^3)\times \Omega\big(J\vee(J\wedge\Omega (S^2\times S^3))\big),
\]
where $J=\mathop{\bigvee}\limits_{i=1}^{d-2}(S^2\vee S^3)$.
\end{itemize}
\end{theorem}
From Theorem \ref{decomthm} and its proof, it can be easily seen that the decomposition in Theorem \ref{decomthm} is compatible with the $S^2$-bundle (\ref{Mdefeq}) after looping. In particular, this means that though the fibre bundle (\ref{Mdefeq}) does not split in general, its loop does.
Moreover, as discussed in \cite[Page 217]{BT1}, the term $J\vee(J\wedge\Omega (S^2\times S^3))$ in the decomposition of Theorem \ref{decomthm} is a bouquet of spheres. Hence by the Hilton-Milnor theorem, we see that $\Omega M$ is homotopy equivalent to a product of $S^1$ with loops on spheres. Additionally, since the decompositions of Theorem \ref{decomthm} depend only on the value of $d$, which both determines and is determined by $H^\ast(M;\mathbb{Z})$, we have the following rigidity property of $M$ after looping.
\begin{corollary}
Let $M$ and $M^\prime$ be two $6$-manifolds in Theorem \ref{decomthm}. Then $\Omega M\simeq \Omega M^\prime$ if and only if $H^\ast(M;\mathbb{Z})\cong H^\ast(M^\prime;\mathbb{Z})$. ~$\hfill\Box$
\end{corollary}
Theorem \ref{decomthm} can be improved if we pass from integral homotopy to rational homotopy. Indeed, by Theorem \ref{decomthm} it is straightforward to compute the homotopy groups of $M$ in terms of those of spheres. However, there is an additional Lie algebra structure on the homotopy groups of any $CW$ complex $X$. In rational homotopy theory, the graded Lie algebra $\pi_\ast(\Omega X)\otimes \mathbb{Q}$ is called the {\it homotopy Lie algebra} of $X$, and $X$ is called {\it coformal} if the rational homotopy type of $X$ is completely determined by its homotopy Lie algebra. If $X$ is further {\it formal}, that is, if the rational homotopy type of $X$ is determined by the graded commutative algebra $H^\ast(X;\mathbb{Q})$, then $X$ is {\it Koszul} in the sense of Berglund \cite[Definition 1.1]{Ber}. In the latter case, $H^\ast(X;\mathbb{Q})$ is a {\it Koszul algebra} and $\pi_\ast(\Omega X)\otimes\mathbb{Q}$ is a {\it Koszul Lie algebra} \cite{Ber}. The following theorem concerns these additional structures on the manifolds $M$ of Theorem \ref{decomthm}.
\begin{theorem}\label{coformalthm}
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$. Let $M$ be the total manifold of the sphere bundle of any rank $3$ vector bundle over $N$. Then
\begin{itemize}
\item if $d=1$, $M$ is not coformal,
\item if $d\geq 2$, $M$ is Koszul, and there is an isomorphism of graded Lie algebras
\[
\pi_\ast(\Omega M)\otimes\mathbb{Q}\cong H^\ast(M;\mathbb{Q})^{!\mathscr{L}ie},
\]
where $(-)^{!\mathscr{L}ie}$ is the Koszul dual Lie functor defined in \cite[Section 2]{Ber}.
\end{itemize}
\end{theorem}
We turn to the remaining case $d=0$, that is, $N\cong S^4$. Note that we still have the $6$-manifold $M$ constructed in (\ref{Mdefeq}). Though the homotopy classification of such manifolds was almost completely determined by Yamaguchi \cite{Yam}, this case is surprisingly much harder than the general one. We will explain this point after the statement of our result for this case, which is proved in Proposition \ref{Md=0koddprop} and Proposition \ref{Md=0kevenprop}. Let $\eta_2: S^3\rightarrow S^2$ be the Hopf map.
For any integer $n$, let $S^{m}\{n\}$ be the homotopy fibre of the degree $n$ map on $S^{m}$.
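By definition, $S^m\{n\}$ fits in the homotopy fibration sequence
\[
S^{m}\{n\}\longrightarrow S^{m}\stackrel{n}{\longrightarrow} S^{m},
\]
where the right-hand map is the degree $n$ map.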
\begin{theorem}\label{d=0thm}
Let $M$ be the total space of the sphere bundle of any rank $3$ vector bundle over $S^4$. Then $M$ has a cell structure of the form
\[
M\simeq S^2\cup_{k\eta_2}e^4\cup e^6,
\]
where $k\in \mathbb{Z}$. Let $k=p_1^{r_1}\cdots p_\ell^{r_\ell}$ be the prime decomposition of $k$.
We further have
\begin{itemize}
\item if $k$ is odd,
\[
\Omega M\simeq S^1\times \prod_{j=1}^{\ell} S^3\{p_j^{r_j}\}\times \Omega S^7,
\]
\item if $k=2^r$ with $r\geq 3$,
\[
\Omega M \simeq S^1\times S^3\{2^r\}\times \Omega S^7.
\]
\end{itemize}
\end{theorem}
Note that we still have cohomology rigidity in this case, since the homotopy type of $\Omega M$ depends only on $k$, which is determined by the square of a generator of $H^2(M;\mathbb{Z})$. But this is less interesting, since the rigidity of $M$ without looping holds except in the case when $k$ is even and $M$ is Spin \cite{Yam}.
Also note that Theorem \ref{d=0thm} is only a partial result.
To explain the difficulty in this case, note that the proof of Theorem \ref{d=0thm} relies heavily on the result of \cite{HT} on the loop decomposition of $2$-connected $7$-manifolds.
As discussed in \cite[Section 6]{HT}, the case when $k=2^r m$ with $m$ odd and greater than $1$ is much more difficult. Also, since it is known that $S^3\{2\}$ is not an $H$-space \cite{C}, we cannot have a decomposition of the form $\Omega M \simeq S^1\times S^3\{2\}\times \Omega S^7$ in the case when $k=2$. In contrast, the rational homotopy of $M$ in this case is simple. As shown in Lemma \ref{d=0rationallemma}, $M$ is rationally homotopy equivalent to $\mathbb{C}P^3$ or $S^2\times S^4$. Moreover, it is well known that $\mathbb{C}P^3$ is not coformal \cite[Example 4.7]{NM}, while $S^2\times S^4$ is Koszul \cite[Examples 5.1 and 5.4]{Ber}.
Before we close the Introduction, let us make two remarks. Firstly, our results provide further evidence for the Moore conjecture. Recall that the Moore conjecture states that a simply connected finite $CW$ complex $Z$ is rationally elliptic if and only if it has a finite homotopy exponent at all primes, or equivalently, $Z$ is rationally hyperbolic if and only if it has an unbounded homotopy exponent at some prime. For $M$ in our context, it is elliptic if and only if $d\leq 2$, and in each of these cases $M$ has a finite homotopy exponent at all primes by \cite{J, CMN, CMN2}. When $d\geq 3$, $M$ is hyperbolic and $\Omega M$ has $\Omega(S^2\vee S^3)$ as a product summand, hence $M$ has an unbounded homotopy exponent at every prime $p$ (see \cite{G} for instance). Secondly, Amor\'{o}s and Biswas \cite{AB} characterized simply connected rationally elliptic compact K\"{a}hler threefolds in terms of their Hodge diamonds; in particular, their second Betti numbers satisfy $b_2\leq 3$. For $M$ in our context, this is equivalent to $d\leq 2$, and our decompositions provide further information on the homotopy of $M$. For instance, the homotopy groups of $M$ can be computed in terms of those of spheres.
The paper is organized as follows. In Section \ref{sec: prelim} we review a result of Duan and Liang \cite{DL} on free circle actions on the $4$-manifold $N$, supplemented by further homotopy-theoretic arguments, and classify rank $3$ bundles over $N$. In Section \ref{sec: null}, we prove Proposition \ref{nullprop}, which implies that under Lemma \ref{Nbundlelemma} one component of the classifying map $f$ of the bundle $\xi$ over $N$ is always trivial. This is crucial for proving Theorem \ref{pdtbundlethm}. In Section \ref{sec: pf}, we prove Theorem \ref{pdtbundlethm} and Theorem \ref{decomthm}. Section \ref{sec: d=0} is devoted to the remaining case $d=0$, and we prove Theorem \ref{d=0thm} there. We discuss the rational homotopy theory of $6$-manifolds and prove Theorem \ref{coformalthm} in Section \ref{sec: rational}.
\bigskip
\noindent{\bf Acknowledgements.}
Ruizhi Huang was supported by the National Natural Science Foundation of China (Grant nos. 11801544 and 11688101), and the ``Chen Jingrun'' Future Star Program of AMSS. He would like to thank Professor Stephen Theriault for the international online lecture series ``Loop Space Decomposition'', which stimulated his research interest in the homotopy of $6$-manifolds. He also wants to thank Professor Yang Su for helpful discussions on obstructions to trivializing vector bundles.
\section{Bundles over $4$-manifolds}
\label{sec: prelim}
In this section, we collect the necessary facts about rank $2$ and rank $3$ vector bundles over simply connected $4$-manifolds, which will be used in the subsequent sections.
\subsection{Rank $2$ bundles over $4$-manifolds}
\label{subsec: rank2}
It is well known that the isomorphism classes of orientable rank $2$ bundles are in one-to-one correspondence with those of circle bundles.
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 1$.
Let us recall a result of Duan and Liang \cite{DL} on circle bundles over simply connected $4$-manifolds.
Let $\alpha\in H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ be a {\it primitive} element, that is, $\alpha$ is not divisible by any integer $k$ with $k\neq \pm 1$. It determines a circle bundle
\[
S^1\stackrel{j}{\longrightarrow} Y\stackrel{\pi}{\longrightarrow} N
\]
which defines the $5$-manifold $Y$ projecting to $N$ through $\pi$. In \cite[Theorem 2]{DL} Duan and Liang showed that $Y$ is homeomorphic to
\begin{itemize}
\item[(1).] either $\mathop{\#}\limits_{d-1}S^2\times S^3$,
\item[(2).] or $B\#\mathop{\#}\limits_{d-2}S^2\times S^3$,
\end{itemize}
where $B$ is a non-Spin simply connected closed $5$-manifold such that $H_2(B;\mathbb{Z})\cong \mathbb{Z}$, and (2) occurs if and only if $N$ is non-Spin and $\omega_2(N)\not\equiv \alpha~{\rm mod}~2$.
Recall that for any $i\geq 2$, $\eta_i^2=\eta_{i+1}\circ \eta_i\in \pi_{i+2}(S^i)\cong \mathbb{Z}/2$ is the generator \cite{Tod}, where $\eta_i\in \pi_{i+1}(S^i)$ is the Hopf element.
\begin{lemma}\label{Btypelemma}
For the $5$-manifold $B$, there is a homotopy equivalence
\[
B\simeq (S^2\vee S^3)\cup_h e^5,
\]
such that $h=i_2\circ \eta_3 +b[i_1, i_2]$, where $b\in \mathbb{Z}$, $i_j$ ($1\leq j\leq 2$) is the inclusion of the corresponding sphere summand into $S^2\vee S^3$, and $[i_1, i_2]$ is their Whitehead product.
\end{lemma}
\begin{proof}
Since $B$ is simply connected and $H_2(B;\mathbb{Z})\cong \mathbb{Z}$, we have $B\simeq (S^2\vee S^3)\cup_{h^\prime} e^5$. Then since $B$ is non-Spin, as a homotopy class we can write the attaching map as $h^\prime=a i_1\circ \eta_2^2+ i_2\circ \eta_3+b^\prime [i_1,i_2]$, where $a\in \mathbb{Z}/2$ and $b^\prime \in\mathbb{Z}$. If $a=1$, then following \cite[Page 97]{Har} we can construct a diagram of homotopy cofibrations
\[
\xymatrix{
S^4 \ar[r]^<<<<{h^\prime} \ar@{=}[d] & S^2\vee S^3 \ar[r] \ar[d]^{A}& B \ar[d]_{\simeq}\\
S^4 \ar[r]^<<<<{h} & S^2\vee S^3 \ar[r] & B^\prime,
}
\]
where
$
A=
\begin{pmatrix}
1 & \eta_2\\
0 & 1
\end{pmatrix}
$, and $h=A\circ h^\prime$ with $B^\prime$ as its homotopy cofibre. Then by the Five Lemma and the Whitehead theorem, it is clear that $B\simeq B^\prime$.
Moreover, by computation
\[
\begin{split}
A\circ h^\prime&=A_\ast (i_1\circ \eta_2^2+ i_2\circ \eta_3+b^\prime [i_1,i_2])\\
&=
\begin{pmatrix}
1 & \eta_2\\
0 & 1
\end{pmatrix}
\begin{pmatrix}
\eta_2^2 \\
\eta_3
\end{pmatrix}
+A\circ b^\prime [i_1,i_2]\\
&=
\begin{pmatrix}
0 \\
\eta_3
\end{pmatrix}
+b^\prime A\circ [i_1,i_2].
\end{split}
\]
Since $\Sigma (A\circ [i_1,i_2])=0$, $A\circ [i_1,i_2]$ is still a Whitehead product.
Hence $h=i_2\circ \eta_3+ b [i_1,i_2]$ with $b [i_1,i_2]=b^\prime A\circ [i_1,i_2]$ for some $b\in \mathbb{Z}$ as required.
\end{proof}
\subsection{Rank $3$ bundles over $4$-manifolds}
\label{subsec: rank3}
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 0$. A rank $3$ vector bundle $\xi$ over $N$ is classified by a map $f: N\longrightarrow BSO(3)$. The sphere bundle of $\xi$
\[
S^2\stackrel{i}{\longrightarrow} M\stackrel{p}{\longrightarrow} N
\]
defines the closed $6$-manifold $M$.
For $N$ there is the homotopy cofiber sequence
\begin{equation}\label{Ncofibreeq}
S^3\stackrel{\phi}{\longrightarrow} \bigvee_{i=1}^{d} S^2 \stackrel{\rho}{\longrightarrow} N \stackrel{q}{\longrightarrow} S^4\stackrel{\Sigma\phi}{\longrightarrow} \bigvee_{i=1}^{d} S^3,
\end{equation}
where $\phi$ is the attaching map of the top cell of $N$, $\rho$ is the inclusion of the $2$-skeleton, and $q$ is the pinch map onto the top cell. Let $s: S^1\cong SO(2)\rightarrow SO(3)$ denote the canonical inclusion of Lie groups.
\begin{lemma}\label{Nbundlelemma}
There is a surjection
\[
\Phi: [S^4, BSO(3)]\times [N, BS^1]\longrightarrow [N, BSO(3)]
\]
of pointed sets such that it restricts to $q^\ast$ on $[S^4, BSO(3)]$, and to $(Bs)_\ast$ on $[N, BS^1]$.
\end{lemma}
\begin{proof}
By (\ref{Ncofibreeq}), there is the exact sequence of pointed sets
\[
0=[\mathop{\bigvee}\limits_{i=1}^{d} S^3, BSO(3)] \stackrel{}{\longrightarrow}[S^4, BSO(3)] \stackrel{q^\ast}{\longrightarrow} [N, BSO(3)] \stackrel{\rho^\ast}{\longrightarrow} [\bigvee_{i=1}^{d} S^2, BSO(3)] \stackrel{}{\longrightarrow} [S^3, BSO(3)]=0,
\]
in the strong sense that there is an action of $[S^4, BSO(3)]$ on $[N, BSO(3)]$ through $q^\ast$ such that the sets $\rho^{\ast-1} (x)$, for $x\in [\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)]$, are precisely the orbits.
It is known that $[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)]\cong \mathop{\oplus}\limits_{d}\mathbb{Z}/2\mathbb{Z}$ and $[S^4, BSO(3)]\cong \mathbb{Z}$.
Moreover, there is the commutative diagram
\[
\xymatrix{
[N, BS^1] \ar[r]^<<<<<{\rho^\ast}_<<<<<{\cong} \ar[d]^{(Bs)_\ast}&
[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BS^1] \ar[d]^{(Bs)_\ast} \ar[r]^<<<<<{\cong}
& \mathop{\oplus}\limits_{d}\mathbb{Z} \ar[d]^{\mathop{\oplus}\limits_{d}\rho_2} \\
[N, BSO(3)] \ar[r]^<<<<{\rho^\ast}
& [\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)]\ar[r]^<<<{\cong}
& \mathop{\oplus}\limits_{d}\mathbb{Z}/2\mathbb{Z},
}
\]
where $\rho^\ast$ is an isomorphism onto $[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BS^1]\cong \mathop{\oplus}\limits_{d}\mathbb{Z}$, $\rho_2$ is the mod-$2$ reduction, and hence $(Bs)_\ast$ is surjective onto $[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)]$. Now for any $f\in [N, BSO(3)]$, $\rho^\ast(f)=(Bs)_\ast(x)$ for some $x\in [\mathop{\bigvee}\limits_{i=1}^{d} S^2, BS^1]$. Denote $\alpha=(\rho^{\ast-1})(x)$; then $Bs_\ast(\alpha)$ and $f$ belong to the same orbit of the action, for they have the same image in $[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)]$ through $\rho^\ast$. Hence, there exists an $f^\prime\in [S^4, BSO(3)]$ such that $q^\ast(f^\prime)\cdot (Bs_\ast(\alpha))=f$.
This completes the proof of the lemma.
\end{proof}
From Lemma \ref{Nbundlelemma} and its proof, to the classifying map $f: N\rightarrow BSO(3)$ we have associated a pair of maps
\begin{equation}\label{Nbundleeq}
(f^\prime, \alpha)\in [S^4, BSO(3)]\times [N, BS^1] ~{\rm such}~{\rm that}~q^\ast(f^\prime)\cdot (Bs_\ast(\alpha))=f, ~\ \omega_2(\xi)\equiv \alpha~{\rm mod}~2.
\end{equation}
We also notice that if $\rho^\ast(f)\neq0$, or equivalently, if $\xi$ is non-Spin, the element $\alpha$ can always be chosen to be primitive. This is important for our later use.
Let $\pi: W\rightarrow N$ be a map from a closed manifold $W$. The pullback of the bundle $\xi$ along $\pi$ has an associated sphere bundle
\[
S^2\stackrel{\iota}{\longrightarrow} Z\stackrel{\mathfrak{p}}{\longrightarrow} W
\]
which defines the closed manifold $Z$. The following lemma is critical for proving Theorem \ref{pdtbundlethm}.
\begin{lemma}\label{pdtbundlelemma}
Suppose for $W$ there is a homotopy cofibration
\[
W_{m-1}\stackrel{\varrho}{\longrightarrow} W\stackrel{\mathfrak{q}}{\longrightarrow} S^{m},
\]
such that $\pi\circ \varrho$ factors as
\[
W_{m-1}\stackrel{\pi_\prime}{\longrightarrow}\bigvee_{i=1}^{d} S^2 \stackrel{\rho}{\longrightarrow} N
\]
for some $\pi_\prime$, where $W_{m-1}$ is the $(m-1)$-skeleton of $W$.
Then if $f^\prime \circ q \circ \pi$ and $\alpha \circ \pi$ are both null homotopic, the bundle $\pi^\ast(\xi)$ is trivial, and in particular
\[
Z\cong S^2\times W.
\]
\end{lemma}
\begin{proof}
By the assumption, there is the diagram of homotopy cofibration
\[
\xymatrix{
W_{m-1} \ar[r]^{\varrho} \ar[d]^{\pi_\prime} &
W \ar[r]^{\mathfrak{q}} \ar[d]^{\pi} &
S^m \ar[d]^{\pi^\prime} \\
\mathop{\bigvee}\limits_{d} S^2 \ar[r]^{\rho}&
N \ar[r]^{q} &
S^4,
}
\]
which defines the map $\pi^\prime$. It follows that there is a morphism of exact sequences of pointed sets
\[
\xymatrix{
[S^4, BSO(3)] \ar[r]^{q^\ast} \ar[d]^{\pi^{\prime\ast}}&
[N, BSO(3)] \ar[r]^<<<<{\rho^\ast} \ar[d]^{\pi^\ast}&
[\mathop{\bigvee}\limits_{i=1}^{d} S^2, BSO(3)] \ar[d]^{\pi_\prime^\ast} \\
[S^m, BSO(3)] \ar[r]^{\mathfrak{q}^\ast} &
[W, BSO(3)] \ar[r]^<<<<{\varrho^\ast} &
[W_{m-1}, BSO(3)],
}
\]
such that the action of $[S^4, BSO(3)]$ on $[N, BSO(3)]$ is compatible with that of $[S^m, BSO(3)]$ on $[W, BSO(3)]$ through $\pi^{\prime\ast}$. Hence by (\ref{Nbundleeq}) the classifying map of $\pi^\ast(\xi)$
\[
\begin{split}
f\circ\pi
&=\pi^\ast\big(q^\ast(f^\prime)\cdot (Bs_\ast(\alpha))\big)\\
&=\pi^\ast(q^\ast(f^\prime))\cdot\pi^\ast((Bs_\ast(\alpha)))\\
&= (f^\prime \circ q \circ \pi)\cdot Bs_\ast(\alpha \circ \pi),
\end{split}
\]
which is null homotopic by the assumption. The lemma then follows immediately.
\end{proof}
Lemma \ref{Nbundlelemma} also yields, as a byproduct, the classification of rank $3$ vector bundles over $N$ via characteristic classes.
\begin{proposition}\label{classbundleNprop}
A rank $3$ vector bundle $\xi$ over $N$ is completely determined by its second Stiefel-Whitney class $\omega_2(\xi)$ and its first Pontryagin class $p_1(\xi)$.
\end{proposition}
\begin{proof}
Let $\xi_1$ and $\xi_2$ be any two rank $3$ vector bundles over $N$, and suppose that $\omega_2(\xi_1)=\omega_2(\xi_2)$ and $p_1(\xi_1)=p_1(\xi_2)$. We want to show that $\xi_1\cong \xi_2$, or equivalently, $f_1\simeq f_2$, where $f_1$ and $f_2: N\rightarrow BSO(3)$ are the classifying maps of $\xi_1$ and $\xi_2$ respectively. By Lemma \ref{Nbundlelemma} and (\ref{Nbundleeq}), we have $f_1=q^\ast(f_1^\prime)\cdot (Bs_\ast(\alpha))$ for a pair of maps $(f_1^\prime, \alpha)\in [S^4, BSO(3)]\times [N, BS^1]$ such that $\omega_2(\xi_1)\equiv \alpha~{\rm mod}~2$. Since $\omega_2(\xi_1)=\omega_2(\xi_2)$, there exists some $f_2^\prime\in [S^4, BSO(3)]$ such that $f_2=q^\ast(f_2^\prime)\cdot (Bs_\ast(\alpha))$. Then to show $f_1\simeq f_2$, it suffices to show $f_1^\prime\simeq f_2^\prime$. Indeed, for either $\xi_i$ the map $f_i$ can be explicitly described as the composition
\[
f_i: N\stackrel{\mu^\prime}{\longrightarrow} N\vee S^4\stackrel{\alpha\vee f_i^\prime}{\longrightarrow} BS^1\vee BSO(3)\stackrel{Bs\vee {\rm id}}{\longrightarrow}BSO(3)\vee BSO(3) \stackrel{\nabla}{\longrightarrow}BSO(3),
\]
where $\mu^\prime$ is the co-action map and $\nabla$ is the folding map. In particular, it is easy to see that
\[
p_1(\xi_i)=q^\ast(p_1(f_i^\prime))+\alpha^2,
\]
where we denote by $p_1(f_i^\prime)$ the first Pontryagin class of the bundle over $S^4$ determined by $f_i^\prime$. Since $p_1(\xi_1)=p_1(\xi_2)$, it follows that $q^\ast(p_1(f_1^\prime))=q^\ast(p_1(f_2^\prime))$. Moreover, it is clear that $q^\ast: H^4(S^4;\mathbb{Z})\rightarrow H^4(N;\mathbb{Z})$ is an isomorphism, hence $p_1(f_1^\prime)=p_1(f_2^\prime)$.
Now since $[S^4, BSO(3)]\cong \mathbb{Z}$ and the morphism $\frac{p_1}{4}:[S^4, BSO(3)]\rightarrow H^4(S^4;\mathbb{Z})$, sending each map to one fourth of the first Pontryagin class of the associated bundle, is an isomorphism \cite{HBJ}, we see that $f_1^\prime\simeq f_2^\prime$. The proposition follows by the previous argument.
\end{proof}
\section{The induced map between top cells is null homotopic}
\label{sec: null}
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 1$. Consider the circle bundle
\[
S^1\stackrel{j}{\longrightarrow} Y\stackrel{\pi}{\longrightarrow} N
\]
classified by a primitive element $\alpha\in H^2(N;\mathbb{Z})$, which defines the $5$-manifold $Y$.
By \cite[Lemma 1]{DL}, $Y$ has a cell structure of the form
\[
Y\simeq (\mathop{\bigvee}\limits_{d-1} S^2\vee S^3)\cup e^5.
\]
Then by the cellular approximation theorem, there is the diagram of homotopy cofibration
\begin{equation}\label{piprimediag}
\begin{gathered}
\xymatrix{
\mathop{\bigvee}\limits_{d-1} S^2\vee S^3 \ar[d] \ar[r]^<<<{\varrho} &
Y \ar[d]^{\pi} \ar[r]^>>>>{\mathfrak{q}} &
S^5 \ar[d]^{\pi^\prime} \\
\mathop{\bigvee}\limits_{d} S^2 \ar[r]^{\rho} &
N\ar[r]^{q} &
S^4,
}
\end{gathered}
\end{equation}
where the bottom cofibration is part of (\ref{Ncofibreeq}), $\varrho$ is the inclusion of the $3$-skeleton of $Y$, $\mathfrak{q}$ is the quotient map onto the top cell, and $\pi^\prime$ is induced from $\pi$. In this section, we prove the following key proposition for understanding rank $3$ bundles over $Y$.
\begin{proposition}\label{nullprop}
The induced map $\pi^\prime$ in Diagram (\ref{piprimediag}) is null homotopic.
\end{proposition}
\begin{proof}
We apply the classification result for circle bundles over $N$ in \cite[Theorem 2]{DL}, and accordingly separate the proof of Proposition \ref{nullprop} into three cases, treated in Lemma \ref{Nnonspinlemma}, Lemma \ref{Nnonspin2lemma} and Lemma \ref{Nspinlemma} respectively, which together prove the proposition.
\end{proof}
\begin{lemma}\label{Nnonspinlemma}
The induced map $\pi^\prime$ in Diagram (\ref{piprimediag}) is null homotopic when $N$ is non-Spin and $\omega_2(N)\equiv \alpha~{\rm mod}~2$.
\end{lemma}
\begin{proof}
In this case $Y\cong \mathop{\#}\limits_{d-1}S^2\times S^3$ by \cite[Theorem 2]{DL}. Recall that $\pi_5(S^4)$ is already in the stable range, so it suffices to prove that $\Sigma \pi^\prime$ is null homotopic. Moreover, it is well known that
\begin{equation}\label{sigmasumeq}
\Sigma (\mathop{\#}\limits_{d-1}S^2\times S^3)\cong \mathop{\bigvee}\limits_{d-1} (S^3\vee S^4)\vee S^6,
\end{equation}
and
\begin{equation}\label{sigmaneq}
\Sigma N\simeq \mathop{\bigvee}\limits_{d} S^3 \vee \Sigma \mathbb{C}P^2,
\end{equation}
when $N$ is non-Spin.
Considering the suspension of the right square of Diagram (\ref{piprimediag}), we obtain the following homotopy commutative diagram
\begin{equation}\label{sigmapidiag}
\begin{gathered}
\xymatrix{
S^6 \ar@{^{(}->}[r]^<<<<{j} \ar@{=}[dr] &
\Sigma (\mathop{\#}\limits_{d-1}S^2\times S^3) \ar[r]^<<<<{\Sigma\pi} \ar[d]^{\Sigma \mathfrak{q}} &
\Sigma N \ar[d]^{\Sigma q} \ar[r]^{Q} &
\Sigma \mathbb{C}P^2 \ar[d]^{q^\prime} \\
&
S^6 \ar[r]^{\Sigma \pi^\prime} &
S^5 \ar@{=}[r] &
S^5,
}
\end{gathered}
\end{equation}
where $j$ is the obvious inclusion under the homotopy equivalence (\ref{sigmasumeq}), $Q$ is the obvious projection under (\ref{sigmaneq}), and $q^\prime$ is the pinch map onto the top cell. In particular, $\Sigma\pi^\prime=q^\prime\circ (Q\circ \Sigma \pi\circ j)$. However, by \cite[(12)]{Muk}, $i_\ast: \pi_6(S^3)\rightarrow \pi_6(\Sigma \mathbb{C}P^2)$ is surjective, where $i$ is the inclusion of the bottom cell of $\Sigma \mathbb{C}P^2$. Then by applying the classical Blakers-Massey theorem \cite{BM} on the homotopy cofibration
\[
S^3\stackrel{i}{\longrightarrow} \Sigma \mathbb{C}P^2 \stackrel{q^\prime}{\longrightarrow} S^5,
\]
we see that $q^\prime_\ast: \pi_6(\Sigma \mathbb{C}P^2)\rightarrow \pi_6(S^5)$ is trivial, and hence $\Sigma \pi^\prime$ is null homotopic. This implies that $\pi^\prime$ is null homotopic, and the lemma follows.
\end{proof}
\begin{lemma}\label{Nnonspin2lemma}
The induced map $\pi^\prime$ in Diagram (\ref{piprimediag}) is null homotopic when $N$ is non-Spin and $\omega_2(N)\not\equiv \alpha~{\rm mod}~2$.
\end{lemma}
\begin{proof}
The proof is the same as that of Lemma \ref{Nnonspinlemma} except for slight modifications explained below. Indeed, in this case $Y\cong B\#\mathop{\#}\limits_{d-2}S^2\times S^3$ by \cite[Theorem 2]{DL}. Then by Lemma \ref{Btypelemma} we see that $\Sigma B\simeq S^3 \vee \Sigma^2 \mathbb{C}P^2$. This implies that
\begin{equation}\label{sigmasumBeq}
\Sigma (B\#\mathop{\#}\limits_{d-2}S^2\times S^3)\cong \mathop{\bigvee}\limits_{d-2} (S^3\vee S^4)\vee S^3 \vee \Sigma^2 \mathbb{C}P^2.
\end{equation}
We then apply the same argument as in the proof of Lemma \ref{Nnonspinlemma} before Diagram (\ref{sigmapidiag}), with $\mathop{\#}\limits_{d-1}S^2\times S^3$ replaced by $B\#\mathop{\#}\limits_{d-2}S^2\times S^3$, and obtain the following homotopy commutative diagram, slightly different from Diagram (\ref{sigmapidiag})
\begin{equation}\label{sigmapidiag2}
\begin{gathered}
\xymatrix{
S^4\ar[r]^<<<<{\varrho^\prime} &
\Sigma^2 \mathbb{C}P^2 \ar@{^{(}->}[r]^<<<<{j} \ar[dr]^{\mathfrak{q}^\prime} &
\Sigma (B\#\mathop{\#}\limits_{d-2}S^2\times S^3) \ar[r]^<<<<{\Sigma\pi} \ar[d]^{\Sigma \mathfrak{q}} &
\Sigma N \ar[d]^{\Sigma q} \ar[r]^{Q} &
\Sigma \mathbb{C}P^2 \ar[d]^{q^\prime} \\
&&
S^6 \ar[r]^{\Sigma \pi^\prime} &
S^5 \ar@{=}[r] &
S^5,
}
\end{gathered}
\end{equation}
where $\varrho^\prime$ is the inclusion of the bottom cell of $\Sigma^2\mathbb{C}P^2$, and $\mathfrak{q}^\prime$ is the pinch map onto its top cell. Then, since $\pi_4(\Sigma \mathbb{C}P^2)=0$ by \cite[Proposition 8.1 (iii)]{Muk}, the composition $Q\circ \Sigma \pi\circ j$ factors as
\[
Q\circ \Sigma \pi\circ j: \Sigma^2\mathbb{C}P^2\stackrel{\mathfrak{q}^\prime}{\longrightarrow} S^6\stackrel{\gamma}{\longrightarrow} \Sigma \mathbb{C}P^2
\]
for some map $\gamma$ and moreover $q^\prime\circ \gamma\simeq \Sigma \pi^\prime$. Hence, by applying the same argument as in the proof of Lemma \ref{Nnonspinlemma} after Diagram (\ref{sigmapidiag}), we show that $\pi^\prime$ is null homotopic. This completes the proof of the lemma.
\end{proof}
\begin{lemma}\label{Nspinlemma}
The induced map $\pi^\prime$ in Diagram (\ref{piprimediag}) is null homotopic when $N$ is Spin.
\end{lemma}
\begin{proof}
First by \cite[Theorem 2]{DL}, $Y\cong \mathop{\#}\limits_{d-1}S^2\times S^3$. Consider the non-Spin manifold $N\#\mathbb{C}P^2$ and the canonical projection $\theta: N\#\mathbb{C}P^2\rightarrow N$.
Denote $\tilde{\alpha}=\alpha\circ \theta$.
The projection $\theta$ determines the pullback of bundles
\begin{equation}\label{thetadiag}
\begin{gathered}
\xymatrix{
S^1\ar@{=}[d] \ar[r]^<<<<{\tilde{j}} &
B\#\mathop{\#}\limits_{d-1}S^2\times S^3 \ar[d]^{\Theta} \ar[r]^<<<{\tilde{\pi}} &
N\#\mathbb{C}P^2 \ar[d]^{\theta}\\
S^1 \ar[r]^<<<<{j} &
\mathop{\#}\limits_{d-1}S^2\times S^3 \ar[r]^<<<<<<{\pi} &
N,
}
\end{gathered}
\end{equation}
where the total space of the bundle $\tilde{\pi}$ is determined by \cite[Theorem 2]{DL} and the fact that $\omega_2(N\#\mathbb{C}P^2)\not\equiv \tilde{\alpha}~{\rm mod}~2$, and the degree of the map $\Theta$ is $1$ by an easy argument using the Gysin sequence.
Hence we can apply Lemma \ref{Nnonspin2lemma} and its proof to $\theta^\ast(\xi)$ over $N\#\mathbb{C}P^2$, and obtain the homotopy commutative diagram from the middle square of Diagram (\ref{sigmapidiag2}) with $N$ replaced by $N\#\mathbb{C}P^2$
\begin{equation}\label{tildediag}
\begin{gathered}
\xymatrix{
\Sigma (B\#\mathop{\#}\limits_{d-1}S^2\times S^3) \ar[r]^<<<<{\Sigma\tilde{\pi}} \ar[d]^{\Sigma \tilde{\mathfrak{q}}} &
\Sigma (N\#\mathbb{C}P^2) \ar[d]^{\Sigma \tilde{q}} \\
S^6 \ar[r]^{\Sigma \tilde{\pi}^\prime} &
S^5,
}
\end{gathered}
\end{equation}
where $\tilde{\pi}^\prime$ is null homotopic as shown in the proof of Lemma \ref{Nnonspin2lemma}. Now applying the suspension functor to Diagram (\ref{thetadiag}) and combining it with Diagram (\ref{tildediag}) and the middle square of Diagram (\ref{sigmapidiag}), which holds here for $N$, we obtain the homotopy commutative diagram
\[
\begin{gathered}
\xymatrix{
\Sigma(B\#\mathop{\#}\limits_{d-1}S^2\times S^3) \ar[d]^{\Sigma\Theta} \ar[r]^<<<{\Sigma\tilde{\pi}} \ar@/_3.5pc/[dd]_{\Sigma \tilde{\mathfrak{q}}}&
\Sigma (N\#\mathbb{C}P^2) \ar[d]^{\Sigma\theta} \ar@/^3.5pc/[dd]^{\Sigma \tilde{q}} \\
\Sigma(\mathop{\#}\limits_{d-1}S^2\times S^3) \ar[r]^<<<<<<{\Sigma\pi} \ar[d]^{\Sigma \mathfrak{q}}&
\Sigma N \ar[d]^{\Sigma q}\\
S^6 \ar[r]^{\Sigma \pi^\prime}&
S^5,
}
\end{gathered}
\]
and in particular, $\Sigma \pi^\prime\simeq \Sigma \tilde{\pi}^\prime$ is null homotopic. Then $\pi^\prime$ is null homotopic, and the lemma is proved.
\end{proof}
\section{Proof of Theorem \ref{pdtbundlethm} and Theorem \ref{decomthm}}
\label{sec: pf}
Let $N$ be a simply connected closed $4$-manifold such that $H^2(N;\mathbb{Z})\cong \mathbb{Z}^{\oplus d}$ with $d\geq 1$. A rank $3$ vector bundle $\xi$ over $N$ is classified by a map $f: N\longrightarrow BSO(3)$ with the associated sphere bundle
\[
S^2\stackrel{i}{\longrightarrow} M\stackrel{p}{\longrightarrow} N,
\]
which defines the closed $6$-manifold $M$.
Recall by Lemma \ref{Nbundlelemma} and (\ref{Nbundleeq}), the classifying map $f: N\rightarrow BSO(3)$ for the bundle $\xi$ is determined by a pair of maps $(f^\prime, \alpha)\in [S^4, BSO(3)]\times [N, BS^1]$ such that $f=q^\ast(f^\prime)\cdot (Bs)_\ast(\alpha)$ and $\omega_2(\xi)\equiv \alpha~{\rm mod}~2$, where $q$ and $s$ are defined before Lemma \ref{Nbundlelemma}.
In this section, Theorem \ref{pdtbundlethm} is proved in Lemma \ref{xinonspinlemma} for non-Spin case, and in Lemma \ref{xispinlemma} for Spin case.
\subsection{When $\xi$ is non-Spin}
Note that $\xi$ being non-Spin is equivalent to $\rho^\ast(f)\neq 0$, with $\rho$ defined in (\ref{Ncofibreeq}). Then, as discussed after Lemma \ref{Nbundlelemma}, in this case we can choose $\alpha$ to be primitive such that $\omega_2(\xi)\equiv \alpha~{\rm mod}~2$.
Fix the circle bundle
\[
S^1\stackrel{j}{\longrightarrow} Y\stackrel{\pi}{\longrightarrow} N
\]
classified by such a primitive element $\alpha\in H^2(N;\mathbb{Z})$.
Diagram (\ref{keydiag}) then defines the $S^2$-bundle
\[
S^2\stackrel{\iota}{\longrightarrow} X\stackrel{\mathfrak{p}}{\longrightarrow} Y
\]
as the sphere bundle of the pullback $\pi^\ast(\xi)$.
\begin{lemma}\label{xinonspinlemma}
Suppose $\xi$ is non-Spin and $\alpha$ is primitive such that
$\omega_2(\xi)\equiv \alpha~{\rm mod}~2$.
Then the pullback bundle $\pi^\ast(\xi)$ is trivial, and in particular
\[
X\cong S^2\times Y.
\]
\end{lemma}
\begin{proof}
Recall we have Diagram (\ref{piprimediag}) which induces the map $\pi^\prime: S^5\rightarrow S^4$. It follows that there is the diagram
\begin{equation}\label{nonspin1diag}
\begin{gathered}
\xymatrix{
Y \ar[r]^{\pi} \ar[d]^{\mathfrak{q}} &
N \ar[d]^{q} \ar[r]^{\alpha} &
BS^1 \ar[d]^{Bs} \\
S^5 \ar[r]^{\pi^\prime} &
S^4 \ar[r]^<<<<{f^\prime} &
BSO(3),
}
\end{gathered}
\end{equation}
such that the left square commutes (it coincides with the right square in Diagram (\ref{piprimediag})), and the top row is a homotopy fibration. In particular, $\alpha\circ \pi$ is null homotopic, and so is $Bs_\ast(\alpha\circ \pi)$. Also, by Proposition \ref{nullprop}, $\pi^\prime$ is null homotopic, and so is $f^\prime \circ q \circ \pi\simeq f^\prime\circ \pi^\prime\circ \mathfrak{q}$. Then by Lemma \ref{pdtbundlelemma}, the classifying map $f\circ \pi$ of the bundle $\pi^\ast(\xi)$ is null homotopic, and the lemma follows.
\end{proof}
\subsection{When $\xi$ is Spin}
In this case by Lemma \ref{Nbundlelemma}, the classifying map $f: N\rightarrow BSO(3)$ of $\xi$ is in the image of $q^\ast$, that is, there exists a map $f^\prime: S^4\rightarrow BSO(3)$ such that $f^\prime\circ q\simeq f$, and the bundle $\xi$ is the pullback of the bundle $\xi^\prime$ over $S^4$ classified by $f^\prime$.
Pick any primitive element $\alpha\in H^2(N;\mathbb{Z})$. It classifies the circle bundle $
S^1\stackrel{j}{\longrightarrow} Y\stackrel{\pi}{\longrightarrow} N$ which defines the $5$-manifold $Y$. As in Diagram (\ref{keydiag}) we have the sphere bundle $S^2\stackrel{\iota}{\longrightarrow} X\stackrel{\mathfrak{p}}{\longrightarrow} Y$ of $\pi^\ast(\xi)$.
\begin{lemma}\label{xispinlemma}
Suppose $\xi$ is Spin and $\alpha$ is any primitive element. Then the pullback bundle $\pi^\ast(\xi)$ is trivial, and in particular
\[
X\cong S^2\times Y.
\]
\end{lemma}
\begin{proof}
By Proposition \ref{nullprop}, the map $\pi^\prime: S^5\rightarrow S^4$ induced from $\pi$ in Diagram (\ref{piprimediag}) is null homotopic. Since $f^\prime\circ q\simeq f$, from Diagram (\ref{piprimediag}) the map $f\circ \pi$ can be written as the composition
\[
f\circ \pi: Y\stackrel{\mathfrak{q}}{\longrightarrow} S^5\stackrel{\pi^\prime}{\longrightarrow} S^4\stackrel{f^\prime}{\longrightarrow}BSO(3).
\]
This already implies that the classifying map $f\circ \pi$ of $\pi^\ast(\xi)$ is null homotopic. The lemma is proved.
\end{proof}
\subsection{Proof of Theorem \ref{decomthm}}
Let us recall an important result of Theriault \cite{T} on the loop homotopy decomposition of connected sums of manifolds.
\begin{theorem}\cite[Theorem 1.4]{T}\label{Tthm}
Let $M$ and $N$ be simply connected closed manifolds of dimension $n$ with $n\geq 2$. Let $M_{n-1}$ and $N_{n-1}$ be the $(n-1)$-skeletons of $M$ and $N$ respectively. Suppose $N_{n-1}$ is a suspension. If the inclusion $i: M_{n-1}\rightarrow M$ has the property that $\Omega i$ has a right homotopy inverse, then there is a homotopy equivalence
\[
\hspace{4cm}
\Omega(M\# N)\simeq \Omega M\times \Omega (N_{n-1}\vee(N_{n-1}\wedge \Omega M)).
\hspace{4cm}\Box
\]
\end{theorem}
We also need the homotopy decomposition of $B$ in Lemma \ref{Btypelemma} after looping.
\begin{lemma}\label{Bdeclemma}
For the $5$-manifold $B$ in Lemma \ref{Btypelemma}, there is a homotopy equivalence
\[
\Omega B\simeq \Omega(S^2\times S^3)
\]
\end{lemma}
\begin{proof}
Since $B$ has a cell structure of the form $B\simeq (S^2\vee S^3)\cup e^5$, we can consider the circle bundle $S^1\stackrel{}{\rightarrow} Z\stackrel{}{\rightarrow} B$ classified by a generator of $H^2(B;\mathbb{Z})$, which defines the $6$-manifold $Z$. It is easy to see that $Z$ is $2$-connected, so by \cite{S}, $Z\cong S^3\times S^3$. It follows that $\Omega B\simeq S^1\times \Omega (S^3\times S^3)$. Since it is well known that $\Omega S^2\simeq S^1\times \Omega S^3$, the lemma follows.
\end{proof}
We can now prove Theorem \ref{decomthm}.
\begin{proof}[Proof of Theorem \ref{decomthm}]
As in the Introduction, consider the circle bundle $S^1\stackrel{j}{\rightarrow} Y\stackrel{\pi}{\rightarrow} N$ classified by an element $\alpha \in H^2(N;\mathbb{Z})$ such that $\alpha$ is primitive. Suppose further that when $\xi$ is non-Spin, $\alpha\equiv \omega_2(\xi)~{\rm mod}~2$,
and when $\xi$ is Spin but $N$ is non-Spin, $\alpha\equiv \omega_2(N)~{\rm mod}~2$.
These conditions can always be satisfied by a suitable choice of $\alpha$.
Then by Theorem \ref{pdtbundlethm}, the total space $X$ of the sphere bundle of $\pi^\ast(\xi)$ satisfies $X\cong S^2\times Y$. Hence by Diagram (\ref{keydiag}), we have
\begin{equation}\label{Mdecgeneq}
\Omega M\simeq S^1\times \Omega X\simeq S^1\times \Omega S^2 \times \Omega Y.
\end{equation}
Hence it remains to show the loop decomposition of the $5$-manifold $Y$.
We may divide the proof into three cases:
\begin{itemize}
\item [(1)] both $N$ and $\xi$ are non-Spin, and $\omega_{2}(N)\neq \omega_2(\xi)$,
\item [(2)] both $N$ and $\xi$ are non-Spin, and $\omega_{2}(N)=\omega_2(\xi)$,
\item [(3)] $N$ is Spin, or $\xi$ is Spin.
\end{itemize}
For the case (1), we have $d\geq2$, and $\alpha\equiv \omega_2(\xi)\neq \omega_2(N)~{\rm mod}~2$. Hence by \cite[Theorem 2]{DL}, $Y\cong B\#\mathop{\#}\limits_{d-2}S^2\times S^3$. If $d\geq 3$, by Theorem \ref{Tthm} there is a homotopy equivalence
\begin{equation}\label{Ydec1eq}
\Omega Y\simeq \Omega (S^2\times S^3)\times \Omega\big(J\vee(J\wedge\Omega (S^2\times S^3))\big)
\end{equation}
with $J=\mathop{\bigvee}\limits_{i=1}^{d-2}(S^2\vee S^3)$. Combining (\ref{Ydec1eq}) with (\ref{Mdecgeneq}), we obtain the desired homotopy decomposition of $\Omega M$ in the theorem. If $d=2$, then $Y=B$. By Lemma \ref{Bdeclemma} $\Omega Y\simeq \Omega (S^2\times S^3)$, and then the decomposition of $\Omega M$ in the theorem holds by (\ref{Mdecgeneq}).
For the cases (2) and (3), by \cite[Theorem 2]{DL} and our choice of $\alpha$ we have $Y\cong \mathop{\#}\limits_{d-1}S^2\times S^3$. If $d=1$, then $Y$ has to be $S^5$, and hence $\Omega M\simeq S^1\times \Omega S^2\times \Omega S^5$. If $d\geq 2$, either by Theorem \ref{Tthm} or \cite[Proposition 4.3]{BT1}, we obtain the decomposition (\ref{Ydec1eq}), and then the loop decomposition of $M$ in the theorem by (\ref{Mdecgeneq}). This completes the proof of the theorem.
\end{proof}
\section{The case when $d=0$}
\label{sec: d=0}
In this section, we study the case $d=0$ and prove Theorem \ref{d=0thm} as an immediate corollary of Proposition \ref{Md=0koddprop} and Proposition \ref{Md=0kevenprop}. Indeed, we work in a slightly more general context: we study the loop decomposition of a closed $6$-manifold $M$ with cell structure of the form
\begin{equation}\label{Md=0eq}
M\simeq S^2\cup e^4\cup e^6.
\end{equation}
Notice that $M$ in Theorem \ref{d=0thm}, as the total space of an $S^2$-bundle over $S^4$, is an example of (\ref{Md=0eq}).
Yamaguchi \cite{Yam} almost completely determined the homotopy classification of $M$ in (\ref{Md=0eq}), with corrections by \cite{MR} and \cite{Bau}, and summarized in \cite[Remark 4.8]{Yam}, based on \cite{Sa}, the criterion for whether $M$ has the same homotopy type as an $S^2$-bundle over $S^4$.
By (\ref{Md=0eq}) there are generators $x\in H^2(M;\mathbb{Z})$, $y\in H^4(M;\mathbb{Z})$ such that
\begin{equation}\label{xyreleq}
x^2=ky
\end{equation}
for some $k\in \mathbb{Z}$.
Consider the $S^1$-bundle
\begin{equation}\label{Xd=0def}
S^1\stackrel{j}{\longrightarrow} X\stackrel{}{\longrightarrow} M
\end{equation}
classified by $x\in H^2(M;\mathbb{Z})\cong [M, BS^1]$ which defines the closed $7$-manifold $X$.
Let $P^n(k)$ denote the Moore space whose reduced cohomology satisfies $\widetilde{H}^\ast(P^n(k);\mathbb{Z})\cong \mathbb{Z}/k\mathbb{Z}$ if $\ast=n$ and $0$ otherwise \cite{N}.
\begin{lemma}\label{cellXlemma}
There is a homotopy equivalence
\[
X\simeq P^4(k)\cup e^7.
\]
\end{lemma}
\begin{proof}
The lemma can be proved directly by analyzing the Serre spectral sequence of the fibration $X\rightarrow M\stackrel{x}{\rightarrow} BS^1$ induced from (\ref{Xd=0def}). Here we provide an alternative proof using results in geometric topology. By \cite[Theorem 1.3]{Jiang}, $X$ is homotopy equivalent to the total space of an $S^3$-bundle over $S^4$. Then by the homotopy classification of $S^3$-bundles over $S^4$ \cite{Sa, CE}, $X$ is homotopy equivalent to $P^4(k^\prime)\cup e^7$ for some $k^\prime\in \mathbb{Z}$. Notice that $\pi_3(X)\cong \pi_3(M)\cong \pi_3(S^2\cup_{k\eta_2}e^4)\cong \mathbb{Z}/k$, where $\eta_2\in\pi_3(S^2)$ is the Hopf element. Then $k=k^\prime$ since $\pi_3(P^4(k^\prime)\cup e^7)\cong \mathbb{Z}/k^\prime$, and the lemma follows.
\end{proof}
Lemma \ref{cellXlemma} has an immediate consequence on the rational homotopy of $M$.
\begin{lemma}\label{d=0rationallemma}
Let $M$ be a closed $6$-manifold with cell structure of the form (\ref{Md=0eq}).
Then if $k\neq 0$, there is a rational homotopy equivalence $M\simeq_{\mathbb{Q}}\mathbb{C}P^3$, and if $k=0$, $M\simeq_{\mathbb{Q}} S^2\times S^4$.
\end{lemma}
\begin{proof}
By Lemma \ref{cellXlemma}, $X\simeq P^4(k)\cup e^7$. Hence if $k\neq0$, then $X\simeq_{\mathbb{Q}} S^7$, and by (\ref{Xd=0def}) $M\simeq_{\mathbb{Q}}\mathbb{C}P^3$.
If $k=0$, then $M\simeq (S^2\vee S^4)\cup_f e^6$ with the attaching map $f: S^5\rightarrow S^2\vee S^4$. Since rationally $f$ is a Whitehead product, we have that $M\simeq_{\mathbb{Q}} S^2\times S^4$.
\end{proof}
\subsection{The subcase when $k$ is odd}
\label{subsec: kodd}
When $k$ is odd, the loop decomposition of the Poincar\'{e} complex $P^4(k)\cup e^7$ was determined by Huang and Theriault \cite{HT}. For any prime $p$, let $S^{m}\{p^r\}$ be the homotopy fibre of the degree $p^r$ map on $S^{m}$.
Let $k=p_1^{r_1}\cdots p_\ell^{r_\ell}$ be the prime decomposition of $k$. By \cite[Theorem 1.1]{HT}, when $k$ is odd there is a homotopy equivalence
\begin{equation}\label{HTthmeq}
\Omega(P^4(k)\cup e^7)\simeq \prod_{j=1}^{\ell} S^3\{p_j^{r_j}\}\times \Omega S^7.
\end{equation}
\begin{proposition}\label{Md=0koddprop}
Let $M$ be a closed $6$-manifold with cell structure of the form $S^2\cup_{k\eta_2} e^4\cup e^6$.
If $k$ is odd, then $M$ has the same homotopy type as an $S^2$-bundle over $S^4$, and there is a homotopy equivalence
\begin{equation}\label{Md=0decomeq}
\Omega M\simeq S^1\times \prod_{j=1}^{\ell} S^3\{p_j^{r_j}\}\times \Omega S^7.
\end{equation}
\end{proposition}
\begin{proof}
The homotopy equivalence (\ref{Md=0decomeq}) follows immediately from Lemma \ref{cellXlemma}, (\ref{Xd=0def}) and (\ref{HTthmeq}). For the first statement, recall that there is the fibre bundle \cite[Section 1.1]{HBJ}
\[
S^2\longrightarrow \mathbb{C}P^3\stackrel{}{\longrightarrow} S^4,
\]
classified by a generator of $\pi_4(BSO(3))\cong\mathbb{Z}$. Pullback this bundle along a self-map of $S^4$ of degree $k$, we obtain the $6$-manifold $M^\prime$ in the following diagram of $S^2$-bundles
\[
\xymatrix{
S^2\ar@{=}[d] \ar[r] &
M^\prime \ar[r]^{} \ar[d] &
S^4\ar[d]^{k} \\
S^2\ar[r] &
\mathbb{C}P^3\ar[r]^{} &
S^4.
}
\]
It is easy to see that $x^{\prime 2}=k y^\prime$, where $x^\prime \in H^2(M^\prime;\mathbb{Z})$ and $y^\prime\in H^4(M^\prime;\mathbb{Z})$ are generators.
By \cite[Corollary 4.6]{Yam}, when $k$ is odd the homotopy type of $M$ is uniquely determined by $k$, and hence $M\simeq M^\prime$. This completes the proof of the proposition.
\end{proof}
\subsection{The subcase when $k$ is even}
\label{subsec: keven}
In \cite[Section 6]{HT}, Huang and Theriault showed that for $P^4(2^r)\cup e^7$ with $r\geq 3$, there is a homotopy equivalence
\begin{equation}\label{HTpropeq}
\Omega(P^4(2^r)\cup e^7)\simeq S^3\{2^r\}\times \Omega S^7,
\end{equation}
if there is a map $P^4(2^r)\cup e^7\rightarrow S^4$ inducing a surjection in mod-$2$ homology.
\begin{proposition}\label{Md=0kevenprop}
Let $M$ be a closed $6$-manifold with cell structure of the form $S^2\cup_{2^r\eta_2} e^4\cup e^6$.
If $r\geq 3$, then there is a homotopy equivalence
\[
\Omega M \simeq S^1\times S^3\{2^r\}\times \Omega S^7.
\]
\end{proposition}
\begin{proof}
Recall that by Lemma \ref{cellXlemma} and its proof, $X\simeq P^4(2^r)\cup e^7$ and is homotopy equivalent to the total space of an $S^3$-bundle over $S^4$
\[
S^3\stackrel{}{\longrightarrow} X\stackrel{q}{\longrightarrow} S^4.
\]
It is clear that $q_\ast: H_4(X;\mathbb{Z}/2\mathbb{Z})\rightarrow H_4(S^4;\mathbb{Z}/2\mathbb{Z})$ is surjective. Hence by (\ref{HTpropeq}), $\Omega X\simeq S^3\{2^r\}\times \Omega S^7$. The proposition then follows immediately from (\ref{Xd=0def}).
\end{proof}
\section{Coformality of $6$-manifolds}
\label{sec: rational}
In this section, we study the rational homotopy theory of $6$-manifolds as an application of our decompositions in Theorem \ref{decomthm}. We briefly recall some necessary terminology used in this section; for detailed background on rational homotopy theory one can refer to the standard reference \cite{FHT}.
Recall that a $CW$ complex $X$ is rationally {\it formal} if its rational homotopy type is determined by the graded commutative algebra $H^\ast(X;\mathbb{Q})$, and is rationally {\it coformal} if its rational homotopy type is determined by the graded Lie algebra $\pi_\ast(\Omega X)\otimes \mathbb{Q}$, which is called the {\it homotopy Lie algebra} of $X$ and denoted by $L_X$. Suppose $(\Lambda V_X, d)$ is a {\it Sullivan model} of $X$. The differential $d=\sum\limits_{i\geq 0}d_i$ with $d_i: V_X\rightarrow \Lambda^{i+1} V_X$, and $(\Lambda V_X, d)$ is {\it minimal} if the linear part $d_0=0$. In the latter case, $V_X$ is dual to $\pi_\ast(\Omega X)\otimes \mathbb{Q}$. Moreover, $X$ is coformal if and only if it has a {\it purely quadratic} Sullivan model $C^\ast (L_X, 0)=(\Lambda (sL_X)^{\#}, d_1)$, where $C^\ast(-)$ is the {\it commutative cochain algebra functor}, $s$ is the suspension and $\#$ is the dual operation.
\begin{proposition}\label{Mcoformalprop}
Let $M$ be the $6$-manifold in Theorem \ref{decomthm} such that $d\geq 2$. Then $M$ is coformal.
\end{proposition}
\begin{proof}
Consider the $S^2$-bundle
\begin{equation}\label{s2fibqeq}
S^2\stackrel{i}{\rightarrow} M\stackrel{p}{\rightarrow} N
\end{equation}
in Diagram (\ref{keydiag}). By \cite[Proposition 4.4]{NM} $N$ is coformal since $d\geq 2$, and hence has a minimal Sullivan model of the form $C^\ast (L_N, 0)=(\Lambda (sL_N)^{\#}, d_1)$ as the associated commutative cochain algebra of $(L_N,0)$ \cite[Example 7 in Chapter 24 (f)]{FHT}.
By \cite[Proposition 4.6]{NM} both $M$ and $N$ are formal. Then by \cite[Theorem A]{AK} the map $p$ is formal \cite[Definition 2.4]{AK}, that is, there is a homotopy commutative diagram
\[
\begin{gathered}
\xymatrix{
C^\ast (L_N, 0) \ar[r]^{\varphi(p)} \ar[d]^{m_N} &
(\Lambda V_M, d) \ar[d]^{m_M} \\
(H^\ast(N;\mathbb{Q}),0) \ar[r]^{p^\ast} &
(H^\ast(M;\mathbb{Q}),0),
}
\end{gathered}
\]
where $\varphi(p)$ is a relative minimal Sullivan model of $p$, and $m_N$ and $m_M$ are quasi-isomorphisms inducing the identity on cohomology. In particular, the quotient of $\varphi(p)$ is a minimal Sullivan model of $S^2$, and hence is isomorphic to $(\Lambda(a,b), db=a^2)$ with ${\rm deg}(a)=2$. It follows that there is a short exact sequence of the linear parts of the models of (\ref{s2fibqeq})
\begin{equation}\label{lineareq}
0\rightarrow ((sL_N)^{\#}, 0)\rightarrow (V_M, d_0) \stackrel{}{\rightarrow} (\mathbb{Q}(a,b),0)\rightarrow 0,
\end{equation}
such that $H^\ast(V_M, d_0)$ is dual to $\pi_\ast(M)\otimes \mathbb{Q}$. However, since the homotopy groups of (\ref{s2fibqeq}) split, we see from (\ref{lineareq}) that the linear part $d_0=0$ on $V_M$, and hence $(\Lambda V_M, d)$ is a minimal model of $M$.
Now, considering the Milnor-Moore spectral sequences $(E_i(S^2), d_i)$, $(E_i(M), d_i)$ and $(E_i(N), d_i)$ \cite[Chapter 23 (b)]{FHT} of $S^2$, $M$ and $N$ in (\ref{s2fibqeq}) respectively, we have that $E_2(S^2)\cong H^\ast(S^2;\mathbb{Q})$ and $E_2(N)\cong H^\ast(N;\mathbb{Q})$ by the coformality of $S^2$ and $N$, while $E_2(M)=H^\ast(\Lambda V_M, d_1)$ converges to $H^\ast(M;\mathbb{Q})$.
By the naturality of the spectral sequences for (\ref{s2fibqeq}), there is the extension of Sullivan models in $E_1$-terms
\begin{equation*}\label{E1eq}
C^\ast (L_N, 0) \stackrel{}{\rightarrow} (\Lambda V_M, d_1)\stackrel{}{\rightarrow} (\Lambda(a,b), db=a^2),
\end{equation*}
which can be realized as a rational homotopy fibration
\[
S^2\stackrel{}{\rightarrow} M^\prime\stackrel{}{\rightarrow} N,
\]
where $M^\prime$ is a geometric realization of $(\Lambda V_M, d_1)$.
Hence $H^\ast(M^\prime;\mathbb{Q})\cong H^\ast(\Lambda V_M, d_1)$ as graded commutative algebras, and $M^\prime$ is formal by \cite[Theorem A]{AK}.
By an easy argument with the Serre spectral sequence, we see that as vector spaces $H^\ast(M^\prime;\mathbb{Q})\cong H^\ast(S^2;\mathbb{Q})\otimes H^\ast(N;\mathbb{Q})$, and hence it is isomorphic to $H^\ast(M;\mathbb{Q})$ as a vector space. In particular, the Milnor-Moore spectral sequence of $M$ collapses at the $E_2$-term, and $H^\ast(\Lambda V_M, d_1)\cong H^\ast(M;\mathbb{Q})$ as graded commutative algebras. This implies that $H^\ast(M^\prime;\mathbb{Q})\cong H^\ast(M;\mathbb{Q})$ as graded commutative algebras. Since both $M$ and $M^\prime$ are formal, $M\simeq_{\mathbb{Q}} M^\prime$, and $M$ has a purely quadratic minimal Sullivan model $(\Lambda V_M, d_1)$. Hence $M$ is coformal, and the proposition is proved.
\end{proof}
We can now prove Theorem \ref{coformalthm}.
\begin{proof}[Proof of Theorem \ref{coformalthm}]
First, it is well known by \cite[Example 4.7]{NM} that $\mathbb{C}P^i$ is not coformal for $i\geq 2$. If $d=1$, then $M$ is determined by a fibre bundle $S^2\stackrel{}{\rightarrow} M\stackrel{}{\rightarrow} \mathbb{C}P^2$. It has a model of the form
\[
(\Lambda(c, x), dx=c^3)\stackrel{}{\longrightarrow} (\Lambda(c, x, a, b), \tilde{d})\stackrel{}{\longrightarrow}(\Lambda(a, b), db=a^2),
\]
where ${\rm deg}(c)={\rm deg}(a)=2$. For degree reasons, $\tilde{d}(a)=0$ and $\tilde{d}(b)=a^2+kc^2$ for some $k\in \mathbb{Q}$. This implies that $(\Lambda(c, x, a, b), \tilde{d})$ is minimal but not purely quadratic. Hence $M$ is not coformal.
When $d\geq 2$, by Proposition \ref{Mcoformalprop} $M$ is coformal. Moreover, Neisendorfer and Miller \cite[Proposition 4.6]{NM} showed that every simply connected $6$-manifold is formal. Hence by \cite[Theorem 1.2]{Ber} $M$ is Koszul. By \cite[Theorem 1.3]{Ber} there is an isomorphism of graded Lie algebras
\[
\pi_\ast(\Omega M)\otimes\mathbb{Q}\cong H^\ast(M;\mathbb{Q})^{!\mathscr{L}ie},
\]
where $(-)^{!\mathscr{L}ie}$ is the Koszul dual Lie functor.
\end{proof}
\bibliographystyle{amsalpha}
\section{Characterization of UPMEM DPU}
This section presents a microbenchmark-based analysis of the UPMEM DPU.
The sustainable bandwidth of MRAM (i.e., MRAM to WRAM) is evaluated in Section~\ref{sec:mram-bandwidth}.
The sustainable bandwidth of WRAM (i.e., WRAM to core) is evaluated in Section~\ref{sec:wram-bandwidth}.
Finally, the throughput of arithmetic operations in the DPU core pipeline is evaluated in Section~\ref{sec:arith-throughput}.
\subsection{MRAM Bandwidth}\label{sec:mram-bandwidth}
A DPU accesses data from two memories (Figure~\ref{fig:scheme}): 1) a DRAM-based MRAM bank, via a DMA engine, and 2) an SRAM-based WRAM or scratchpad via load/store instructions.
In this section, we analyze the sustainable bandwidth that can be achieved when accessing both memories with three different memory access patterns: 1) unit-stride access, 2) strided access, and 3) random access.
For the unit-stride access pattern, we implement the four versions of the STREAM benchmark~\cite{mccalpin1995} (i.e., copy, scale, add, triad).
For the strided access pattern, we design a microbenchmark where a tasklet with an identifier $tasklet\_id$ performs multiple iterations to read elements from positions $tasklet\_id + s * i$ of an input array and write to the same positions of an output array.
The stride $s$ is the distance between consecutive accesses and $i$ is the iteration number.
For the random access pattern, we implement the GUPS benchmark~\cite{gaekegups}, which performs read-modify-write operations on random positions of an array.
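To make the strided microbenchmark concrete, the following is a minimal per-tasklet sketch in C. It assumes the fixed-size DMA transfer functions described below, that \texttt{me()} returns the tasklet identifier, and 8-byte array elements; the SDK headers, MRAM pointer qualifiers, and the array bounds are illustrative assumptions rather than the exact benchmark code.
\begin{verbatim}
#include <stdint.h>

#define STRIDE   16    /* s: distance between consecutive accesses */
#define NUM_ITER 1024  /* iterations per tasklet (illustrative)    */

/* Per-tasklet body of the strided microbenchmark (sketch). */
void strided_kernel(uint64_t *mram_in, uint64_t *mram_out) {
    uint64_t buf;                 /* 8-byte staging buffer in WRAM   */
    unsigned int id = me();       /* tasklet identifier (SDK call)   */
    for (unsigned int i = 0; i < NUM_ITER; i++) {
        unsigned int pos = id + STRIDE * i;    /* tasklet_id + s*i   */
        mram_read8(&mram_in[pos], &buf);   /* MRAM -> WRAM, 8 bytes  */
        mram_write8(&buf, &mram_out[pos]); /* WRAM -> MRAM, 8 bytes  */
    }
}
\end{verbatim}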
One strength of the UPMEM PIM architecture is the high memory bandwidth between MRAM and WRAM.
In this section, we first measure the bandwidth of DMA transfers of different sizes between MRAM and WRAM.
Second, we measure the sustainable bandwidth for three access patterns (i.e., unit-stride, strided, random) when scaling the number of tasklets and DPUs.
\subsubsection{\textbf{DMA Transfer Bandwidth}}
The UPMEM Software Development Kit (SDK) provides two types of functions for DMA transfers between MRAM and WRAM: 1) fixed-size transfers, with transfer size determined at compile time, and 2) variable-size transfers, with transfer size determined at runtime.
\sloppy
DPU functions for fixed-size transfers are \texttt{mram\_readN(mram\_source, wram\_destination)} (from MRAM to WRAM) and \texttt{mram\_writeN(wram\_source, mram\_destination)} (from WRAM to MRAM), where \texttt{N} is the transaction size in bytes (\texttt{N} is a multiple of 8 between 8 and 1,024 in SDK 2019.3.0~\cite{upmem-guide}).
DPU functions for variable-size transfers are \texttt{mram\_readX(mram\_source, wram\_destination, SIZE)} (from MRAM to WRAM) and \texttt{mram\_writeX(wram\_source, mram\_destination, SIZE)} (from WRAM to MRAM), where \texttt{SIZE} is the transaction size in bytes (\texttt{SIZE} is a multiple of 8 between 8 and 2,048 in SDK 2019.3.0~\cite{upmem-guide}).
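A minimal sketch of both transfer types is shown below; the function names follow the SDK 2019.3.0 interface described above (256 bytes and the buffer names are illustrative assumptions).
\begin{verbatim}
#include <stdint.h>
#include <mram.h>

uint8_t wram_buf[2048];  /* WRAM buffer (tasklet-local) */

void transfer_example(void *mram_src, void *mram_dst, unsigned size)
{
    mram_read256(mram_src, wram_buf);       /* fixed size: 256 bytes */
    mram_readX(mram_src, wram_buf, size);   /* variable size, set at runtime */
    mram_writeX(wram_buf, mram_dst, size);  /* WRAM -> MRAM */
}
\end{verbatim}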
We test all possible sizes for both fixed-size transfers and variable-size transfers.
We measure the number of cycles for each DMA transfer by using cycle counters provided by the UPMEM SDK~\cite{upmem-guide}.
Figure~\ref{fig:mram-bandwidth} shows the bandwidth of MRAM-WRAM (read) and WRAM-MRAM (write) DMA transactions.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/mram-bandwidth.pdf}
\vspace{-2mm}
\caption{MRAM-WRAM DMA transfer bandwidth: MRAM-WRAM read bandwidth (a) and WRAM-MRAM write bandwidth (b). \ie{How many tasklets?} }
\label{fig:mram-bandwidth}
\end{figure}
We make two observations.
First, the bandwidth of read and write transfers is very similar for the same transfer size.
Second, the bandwidth increases with the transfer size. It scales almost linearly between 8 and 256 bytes, and tends to saturate for larger sizes. The reason is that the latency increases slowly for DMA transfers between 8 and 256 bytes, and faster after 256 bytes.
For example, the latency of a 256-byte DMA transfer is only 30\% longer than the latency of an 8-byte DMA transfer. After 256 bytes, the DMA transfer latency rises sharply, with 2,048-byte transfers taking more than $3.5\times$ longer than 8-byte transfers.
From these observations, we extract a general recommendation for programmers: using the largest possible DMA transfer maximizes the bandwidth exploitation.
This recommendation is especially useful for workloads with streaming (unit-stride) access patterns.
In such cases, the size of the DMA transfer should only be limited by the available WRAM (64KB) and the number of running tasklets.
\subsubsection{\textbf{Streaming Access Bandwidth}}
We measure the sustainable MRAM bandwidth when scaling the number of DPUs and tasklets per DPU for three access patterns (i.e., unit-stride, strided, random).
We perform strong scaling experiments with arrays of size 16M elements.
We measure the execution cycles of our microbenchmarks, including transfers between MRAM and WRAM, accesses to WRAM, and possible computation (e.g., STREAM add, scale, triad).
Figure~\ref{fig:mram-patterns} shows the aggregated bandwidth for the three access patterns on 32 DPUs and 256 DPUs.
We test a number of tasklets between 1 and 16.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/mram-patterns-32.pdf}
\includegraphics[width=1.0\linewidth]{figures/mram-patterns-256.pdf}
\vspace{-2mm}
\caption{MRAM bandwidth for unit-stride (a), strided (b), and random (c) access patterns on 32 (top) and 256 (bottom) DPUs. \ie{There is no (a), (b), (c) in the figure.} \ie{Commas in the y-axis would help, or using GB/s.} \ie{Can we align GUPS to strided for 256 the way it is aligned for 32?} }
\label{fig:mram-patterns}
\end{figure}
We make eight key observations.
\ie{Eight is a lot. I think some can be merged together.}
\ie{Never mind, I see how they are each different. They are still a lot to keep track of. I recommend splitting the figures. Points 1-4,8 based on STREAM figure. Points 5-7 based on STRIDED/GUPS figure. We can even put GUPS on same plot as STRIDED.}
First, the bandwidth of STREAM copy is significantly higher than that of other microbenchmarks.
\ie{This is trivial.}
STREAM copy only uses consecutive \texttt{mram\_read} and \texttt{mram\_write} operations, with no actual WRAM accesses by the DPU threads.
It employs large DMA transfers (\texttt{mram\_readX}, \texttt{mram\_writeX} with 1,024-2,048 bytes).
Thus, it leverages the full bandwidth of the DMA transfers.
\ie{Comparing STREAM to STRIDED/GUPS is redundant with point 3. I think this point should just state the bandwidth from STREAM-copy and compare to nominal, and save the comparison to STREAM-compute to point 3.}
Second, the bandwidth of STREAM copy saturates with 2 tasklets.
This suggests that the DMA engine can sustain 2 simultaneous DMA transfers.
\ie{Do we know this for sure, or just concluding from the result?}
Third, STREAM add and, notably, scale and triad achieve much lower bandwidth than STREAM copy, which attains $1.64\times$, $12.43\times$, and $17.56\times$ higher bandwidth than STREAM add, triad, and scale, respectively.
STREAM add, triad, and scale perform WRAM read and write accesses and execute integer operations, which lowers the overall throughput.
\ie{What's the point here? It's obvious that more operations will decrease throughput. I think the interesting thing is that the memory latency does not hide arithmetic latency like it does on a CPU. But to make that point, it would be useful to show the difference between STREAM-copy and STREAM-add/scale/triad on CPU.}
Fourth, the throughput of STREAM add, scale, and triad saturates for more than 11 tasklets, which is \juan{the recommended number of tasklets to hide pipeline latencies}~\cite{devaux2019}.
STREAM add, scale, and triad execute, respectively, 1 arithmetic operation (addition), 1 arithmetic operation (multiplication), and 2 arithmetic operations (multiplication and addition) for every 2 32-bit input elements.
From this observation, a programming recommendation for DPU functions with more than 0.125 operations per byte is to use more than 11 tasklets per DPU.
Eighth, all microbenchmarks scale linearly with the number of DPUs.
\ie{This is not the conclusion of point eight, right? This is the assumption to arrive at the conclusion of the aggregate bandwidth. Correct?}
In all microbenchmarks the workload is evenly partitioned across all DPUs and there are no synchronization needs across DPUs, which would require host CPU intervention.
With such linear scaling, we can expect, e.g., for STREAM copy, an aggregated bandwidth of nearly 2TB/s for systems with 2,048 DPUs, which is well beyond the bandwidth of current CPUs and GPUs with 3D-stacked memories~\cite{HBM, HMC2}.
\ie{I'm not sure if this is a fair comparison. We should compare iso-area/cost/energy. We can probably also merge this with point 1.}
\subsubsection{\textbf{Strided and Random Access Bandwidth}}
Fifth, the strided and random access patterns achieve a bandwidth that is at least one order of magnitude lower than that of STREAM copy.
\ie{I think there are two things at play here: stride and DMA transfer size. I think STREAM beats strided because of DMA transfer size, not stride size, and the evidence is that strided with stride 1 is not the same as stream.}
The main reason is that these access patterns cannot fully exploit the bandwidth of the MRAM-WRAM DMA transfers.
Since they have limited (e.g., short strides) to no (e.g., random accesses) spatial locality, they employ small DMA transfers (\texttt{mram\_read8}, \texttt{mram\_write8}) with lower bandwidth (Figure~\ref{fig:mram-bandwidth}).
\ie{That was just our choice in the implementation. We could have fetched more data without using it and gotten better performance, at least for the small strides.}
Sixth, the bandwidth of the strided and random access patterns increases with the number of tasklets.
\ie{So does STREAM. So what?}
The random access pattern needs some arithmetic and logic operations to compute the random numbers.
This makes its bandwidth saturate after 11 tasklets.
\ie{Is it fair to call it memory bandwidth if it is compute bound? To fairly measure memory bandwidth, maybe we should unroll the loop and get rid of the arithmetic.}
Seventh, the bandwidth of the strided access pattern is relatively independent of the actual stride.
This represents a clear distinction with respect to conventional CPU and GPU systems, where accesses with short strides can benefit from some spatial locality in caches.
\jgl{We may need to mention CPU results too. Or cite a previous CPU characterization work.}
\ie{You can get the same benefit on DPUs if you wanted. It's a matter of how the microbenchmarks were implemented. DPU just gives you more control because programmers manage caching.}
\ie{
\\
Below is a summary of how I think we should reorganize this section based on my comments above.
\begin{itemize}
\item STREAM figure. Observations:
\begin{enumerate}
\item Max bandwidth, aggregate bandwidth, compare to nominal (observations 1,8)
\item Max simultaneous DMAs (observation 2)
\item Memory latency does not hide arithmetic latency (observation 3)
\item Tasklets needed to fully utilize pipeline (observation 4)
\end{enumerate}
\item STRIDED/GUPS figure (redo experiment to take max of coarse-coarse, coarse-fine, and fine-fine DMAs and unroll accesses to minimize arithmetic). Observations:
\begin{enumerate}
\item Revisit observations 5-7 based on new results
\end{enumerate}
\end{itemize}
}
\clearpage
\subsection{\textbf{WRAM Bandwidth}}\label{sec:wram-bandwidth}
To measure the bandwidth of WRAM for different access patterns, we follow a similar approach to MRAM experiments in Section~\ref{sec:mram-bandwidth}.
For the three access patterns (i.e., unit-stride, strided, random), we allocate arrays in WRAM and access them repeatedly.
We measure the execution cycles of our microbenchmarks, including accesses to WRAM, and possible computation, and not including any transfers between MRAM and WRAM.
The results presented in this section are for 64-bit elements, but they are very similar for 32-bit elements, since the WRAM latency is equal for both element sizes~\cite{upmem-guide}.
\ie{I'm not sure if this experiment is useful because WRAM is not designed for bandwidth but for latency. Just like in CPU, people do not usually care about L1 cache bandwidth but they care about latency.}
Figure~\ref{fig:wram-patterns} shows the aggregated bandwidth for the three access patterns on 32 DPUs and 256 DPUs, with a number of tasklets between 1 and 16.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/wram-patterns-32.pdf}
\includegraphics[width=1.0\linewidth]{figures/wram-patterns-256.pdf}
\vspace{-2mm}
\caption{WRAM bandwidth for unit-stride (a), strided (b), and random (c) access patterns on 32 (top) and 256 (bottom) DPUs. \ie{Same comments as MRAM figure.}}
\label{fig:wram-patterns}
\end{figure}
We make five important observations.
First, the bandwidth of STREAM copy in WRAM is around half the bandwidth of STREAM copy in MRAM.
This observation appears counterintuitive, since the latency of a single 8-byte access to/from WRAM ($\sim110$ clock cycles, according to our measurements) is much shorter than the latency of an 8-byte DMA transfer to/from MRAM ($\sim420$ clock cycles, according to our measurements).
However, STREAM copy in MRAM uses large DMA transfers (1,024-2,048 bytes), which provide up to $72\times$ more bandwidth than 8-byte transfers, and it performs no WRAM accesses at all, as it only uses DMA transfers (Section~\ref{sec:mram-bandwidth}).
STREAM copy in WRAM, in contrast, copies one array in WRAM to another array in WRAM.
This operation is entirely executed in the pipeline and WRAM, without involving the DMA engine.
Second, the WRAM bandwidth measured for STREAM add, scale, and triad is very similar to the MRAM bandwidth for these microbenchmarks (Figure~\ref{fig:mram-patterns}).
This reveals that the main bottlenecks for these microbenchmarks are the accesses to WRAM and the arithmetic computations, and not the large DMA transfers between MRAM and WRAM (1,024-2,048 bytes).
Third, the shape of the curves for the strided and random access patterns is very similar to that of the MRAM bandwidth curves (Figure~\ref{fig:mram-patterns}).
However, the WRAM bandwidth is more than twice as high as the MRAM bandwidth.
A main reason is that the MRAM versions of these microbenchmarks use small MRAM-WRAM DMA transfers (8 bytes) \ie{See my earlier criticism of this}, whose latency is about $4\times$ that of 8-byte WRAM accesses.
As a result, the DMA transfers become the bottleneck for the strided and random access patterns.
Fourth, same as we observe for the MRAM bandwidth (STREAM add, scale, triad, strided, and random in Figure~\ref{fig:mram-patterns}), the throughput saturates for more than 11 tasklets for all these experiments.
Fifth, all microbenchmarks scale linearly with the number of DPUs.
As in our MRAM bandwidth measurements, there is no synchronization across DPUs, which ensures good scaling.
\ie{I do not think this point is necessary. This is by design. It is not insightful.}
\clearpage
\subsection{Arithmetic Operations}\label{sec:arith-throughput}
In this section, we analyze the throughput of arithmetic operations in the DPU pipeline, and the effect on the achievable throughput of workloads with different operational intensities.
\subsubsection{\textbf{Throughput of Arithmetic Operations}}
We measure the throughput of arithmetic operations (addition, subtraction, multiplication, division) for different data types (32-bit integer, 64-bit integer, float, double) in the DPU pipeline.
In these measurements, we do not include MRAM-WRAM DMA transfers, since we want to quantify the specific throughput of the pipeline.
Thus, our experiments perform read-modify-write operations in WRAM, including WRAM reads, execution of arithmetic operations, and WRAM writes.
Figure~\ref{fig:throughput-dpu} shows the throughput (in millions of operations per second, MOPS/s) for addition, subtraction, multiplication, and division with four data types (32-bit integer, 64-bit integer, float, double) on one DPU.
We change the number of tasklets between 1 and 16.
The throughput for 32-bit and 64-bit unsigned integers is the same as that for signed integers.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/throughput-dpu.pdf}
\vspace{-2mm}
\caption{Throughput of arithmetic operations on one DPU for different data types (32-bit integer, 64-bit integer, float, double). \ie{Why do we use a different number of DPUs for memory and arithmetic? Using the same number would make it easier to compare the results.} }
\label{fig:throughput-dpu}
\end{figure}
Five key observations are as follows.
First, the throughput of all arithmetic operations and data types saturates after 11 tasklets.
Second, for all data types, the throughput of addition and subtraction is significantly higher than that of multiplication and division.
A major reason is that the DPU pipeline only includes an $8\times8$ single-cycle multiplier (because of the limited number of available metal layers~\cite{devaux2019}), which needs several cycles to perform 32-bit and 64-bit multiplications.
Third, the throughput for float and double values is more than three times lower than that for 32-bit and 64-bit integers.
The DPU pipeline does not feature native floating-point ALUs.
These operations are emulated by software.
Fourth, the throughput of addition/subtraction for 32-bit elements is 4-8\% higher than that for 64-bit elements.
This small difference is expected, since the code sequences for these operations (including accesses to WRAM) only differ in one instruction.
The 32-bit operation requires 6 instructions, while the 64-bit operation requires 7 instructions (one extra addition/subtraction).
We obtain these code sequences with UPMEM's Compiler Explorer~\cite{upmem-explorer}.
Fifth, the throughput of arithmetic operations in the DPU pipeline is relatively low compared to traditional CPU and GPU architectures.
With a maximum of 23.08 MOPS/s for 32-bit integer additions on one DPU, we can expect $\sim47$ GOPS/s on a full-blown configuration with 2,048 DPUs, i.e., more than one order of magnitude lower than current GPUs~\cite{volta}.
This suggests that DPUs are better suited to very memory-bound applications with limited computational needs.
\subsubsection{\textbf{Throughput versus Operational Intensity}}
We define the \emph{operational intensity} as the number of arithmetic operations per byte accessed from MRAM.
In this section, we present throughput results for microbenchmarks with different operational intensities.
Our microbenchmarks include MRAM-WRAM DMA transfers, WRAM read/write accesses, and variable arithmetic computation.
We change the operational intensity from very low values (0.0004883 operations/byte) to very high values (128 operations/byte), in order to measure the resulting throughput for different numbers of tasklets (from 1 to 16).
Figure~\ref{fig:ai-dpu} shows the throughput for different values of operational intensity for 32-bit integer additions on one DPU.
We observe similar trends for other data types and arithmetic operations.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/ai-dpu.pdf}
\vspace{-2mm}
\caption{Throughput versus operational intensity for 32-bit integer additions on one DPU. Each dot represents the throughput for a value of operational intensity and a number of tasklets (i.e., number inside the dot).}
\label{fig:ai-dpu}
\end{figure}
We make several observations.
First, the throughput tends to increase for higher operational intensity.
More operations per byte imply fewer accesses to MRAM and WRAM. Thus, the overall throughput (MOPS/s) increases.
Second, the throughput increases for higher number of tasklets, but saturates after a certain number of tasklets.
Third, when the operational intensity is moderate or high (0.015625 or higher), the throughput saturates with 11 tasklets (the dots in Figure~\ref{fig:ai-dpu} corresponding to 11 tasklets or more overlap). This observation is inline with our observations in previous sections.
Fourth, when the operational intensity is low, the throughput saturates with fewer than 11 tasklets.
For example, for 0.0004883 operations/byte, the throughput saturates with 3 tasklets.
The reason is that most of the cycles are spent in MRAM-WRAM DMA transfers, the bandwidth of which saturates with just 2 tasklets (Figure~\ref{fig:mram-patterns}).
However, an operational intensity as low as 0.0004883 operations/byte is extremely low, as it entails only one addition every 256 32-bit input elements.
We expect higher operational intensity in most real-world workloads and, thus, throughput values saturating with 11 tasklets.
\section{Performance Characterization of a UPMEM DPU}
This section presents a performance characterization of a UPMEM DPU using microbenchmarks to assess various architecture limits.
\juan{
Section~\ref{sec:arith-throughput} evaluates the throughput of arithmetic operations in the DPU core pipeline.
Sections~\ref{sec:mram-bandwidth} and~\ref{sec:wram-bandwidth} measure the sustainable bandwidth of MRAM
and WRAM,
respectively.
Section~\ref{sec:throughput-oi} studies the effect of the operational intensity of a workload on the achievable throughput of the DPU.
Finally, Section~\ref{sec:cpu-dpu} profiles the bandwidth between the main memory of the host and the MRAM banks.
}
\ie{The order of sections is a little odd. I suggest putting WRAM before MRAM.}
\subsection{\juan{Arithmetic Operations}}
\label{sec:arith-throughput}
In this section, we measure the throughput of arithmetic operations (addition, subtraction, multiplication, division) for different data types (32-bit integer, 64-bit integer, float, double) in the DPU pipeline.
Our experiment performs read-modify-write operations in WRAM, including WRAM reads, execution of arithmetic operations, and WRAM writes.
The measurements do not include MRAM-WRAM DMA transfers, since we only want to quantify the specific throughput of the pipeline.
\juan{The microbenchmark loops over a chunk of WRAM performing the read-modify-write operations.
The number of instructions inside the loop depends on the specific operation.
For example, for a 32-bit integer addition, the loop contains 6 instructions~\cite{upmem-guide}: WRAM address calculation (\texttt{lsl\_add}), WRAM read (\texttt{lw}), addition (\texttt{add}), WRAM write (\texttt{sw}), loop index update (\texttt{add}), and conditional branch (\texttt{jneq}), \izzat{as reported by} UPMEM's Compiler Explorer~\cite{upmem-explorer}.}
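As a concrete sketch, the body of this microbenchmark for 32-bit integer addition is essentially the following loop; \texttt{bufferA}, \texttt{scalar}, and \texttt{NB\_ELEM} are illustrative names, not the exact benchmark code.
\begin{verbatim}
#include <stdint.h>

#define NB_ELEM 1024  /* illustrative WRAM chunk size, in elements */

/* Hedged sketch of the read-modify-write loop described above; each
 * iteration corresponds to the 6-instruction sequence listed in the
 * text (lsl_add, lw, add, sw, add, jneq). */
void rmw_add(uint32_t *bufferA, uint32_t scalar)
{
    for (unsigned i = 0; i < NB_ELEM; i++)
        bufferA[i] += scalar;
}
\end{verbatim}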
\ie{I am a little uncomfortable with the methodology of this experiment. I do not feel it accurately represents peak arithmetic throughput because it does not execute an arithmetic operation every cycle. In fact, it seems identical to the stream-add experiment for WRAM except}
Figure~\ref{fig:throughput-dpu} shows the throughput (in millions of operations per second, MOPS/s) for addition, subtraction, multiplication, and division with four data types (32-bit integer, 64-bit integer, float, double) on one DPU.
We change the number of tasklets between 1 and
\juan{24, which is the \izzat{maximum} number of hardware threads (Section~\ref{sec:dpu-architecture}).}
The throughput for 32-bit and 64-bit unsigned integers is the same as that for \izzat{signed} integers.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/throughput-dpu-wide.pdf}
\vspace{-2mm}
\caption{Throughput of arithmetic operations on one DPU for different data types (32-bit integer, 64-bit integer, float, double).
}
\label{fig:throughput-dpu}
\end{figure}
Five key observations are as follows.
First, the throughput of all arithmetic operations and data types saturates after 11 tasklets.
\juan{This observation is consistent with the description of the pipeline in Section~\ref{sec:dpu-architecture}}.
\juan{Second, the maximum throughput of addition/subtraction is 44.04 MOPS/s for 32-bit integer values, and 37.67 MOPS/s for 64-bit integer values.
These results match the expected throughput. The DPU is an in-order core that can retire one instruction per cycle.
Since the loop for 32-bit integers contains 6 instructions, one addition operation is executed every 6 cycles when the pipeline is full (after 11 tasklets). This determines a theoretical maximum of 44.33 MOPS/s at 267 MHz.
For 64-bit integer addition/subtraction the loop contains 7 instructions: the same 6 instructions as the 32-bit operation plus an addition/subtraction with carry-in bit (\texttt{addc/subc}) for the 32 most significant bits of the operands.
The theoretical maximum is 37.99 MOPS/s in this case.
If we unroll the loop, \ie{What is the unrolling factor?} the number of instructions is 3 for 32-bit integers (i.e., \texttt{lw, add, sw}) and 4 for 64-bit integers (i.e., \texttt{lw, add, addc, sw}). In these cases, our measurements also match the expected 88.66 MOPS/s and 66.49 MOPS/s for 32-bit and 64-bit integers, respectively.}
\juan{Third, the throughput of integer addition and subtraction is significantly higher than that of multiplication and division.
}
\juang{A major reason is that the DPU pipeline does not include a complete $32\times32$-bit multiplier, because of the limited number of available metal layers~\cite{devaux2019}.
Multiplications and divisions are implemented with two operators (\texttt{mul\_step}, \texttt{div\_step})~\cite{upmem-guide}, which are based on bit shifting and addition. With these operators, multiplication and division can take up to 32 cycles to complete.}
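To illustrate why multiplication can take up to 32 cycles, a plain-C analogue of such a shift-and-add routine is sketched below. This only mirrors the \texttt{mul\_step} idea conceptually; it is not the SDK's actual code.
\begin{verbatim}
#include <stdint.h>

/* Conceptual shift-and-add multiplication: one conditional addition
 * and two shifts per bit of the multiplier, i.e., up to 32 steps. */
uint32_t mul32_shift_add(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    for (int i = 0; i < 32; i++) {
        if (b & 1)
            result += a;
        a <<= 1;
        b >>= 1;
    }
    return result;
}
\end{verbatim}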
\juan{Fourth, the throughput of addition/subtraction for float and double values is more than ten times lower than that for 32-bit and 64-bit integers.
The DPU pipeline does not feature native floating-point ALUs.
These operations are emulated by software~\cite{upmem2018}.
The use of the costly \texttt{mul\_step}-based multiplication routine for emulated floating-point multiplication/division makes the achievable throughput significantly lower than that for addition/subtraction.}
Fifth, the throughput of arithmetic operations in the DPU pipeline is relatively low compared to traditional CPU and GPU architectures.
\juan{With a maximum of 44.04 MOPS/s for 32-bit integer additions on one DPU, we can expect $\sim28$ GOPS/s on a configuration with 640 DPUs}, i.e., more than one order of magnitude lower than current GPUs~\cite{volta}.
This suggests that \textbf{DPUs are better suited to very memory-bound applications with limited computational needs}.
\ie{I don't think this conclusion can be made yet. The absolute value of arithmetic throughput cannot determine compute-/memory-boundedness. It must be compared with memory bandwidth. I suggest removing this paragraph.}
\subsection{MRAM Bandwidth}\label{sec:mram-bandwidth}
Recall that a DPU accesses data from two memories: 1) a DRAM-based MRAM bank, via a DMA engine, and 2) an SRAM-based WRAM or scratchpad via load/store instructions.
This section evaluates the sustainable bandwidth that can be achieved by accessing MRAM with various access patterns.
\subsubsection{\textbf{Read and Write Bandwidth}}
\sloppy
The UPMEM Software Development Kit (SDK) provides functions for reading from MRAM to WRAM and writing from WRAM to MRAM via DMA transfers.
\juan{These functions are \texttt{mram\_read(mram\_source, wram\_destination, SIZE)} and \texttt{mram\_write(wram\_source, mram\_destination, SIZE)}, where \texttt{SIZE} is the transaction size in bytes and must be a multiple of 8 between 8 and 2,048 in SDK 2020.3.0~\cite{upmem-guide}.}
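A minimal sketch of a tasklet using these functions is shown below; the \texttt{\_\_mram\_ptr} and \texttt{\_\_dma\_aligned} attributes follow UPMEM's C extensions, and the buffer names and the 2,048-byte size are illustrative assumptions.
\begin{verbatim}
#include <stdint.h>
#include <mram.h>

#define SIZE 2048  /* bytes; multiple of 8, between 8 and 2,048 */

__dma_aligned uint8_t wram_buf[SIZE];  /* WRAM buffer, 8-byte aligned */

void copy_block(__mram_ptr void *mram_src, __mram_ptr void *mram_dst)
{
    mram_read(mram_src, wram_buf, SIZE);   /* DMA transfer MRAM -> WRAM */
    mram_write(wram_buf, mram_dst, SIZE);  /* DMA transfer WRAM -> MRAM */
}
\end{verbatim}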
\juan{In this experiment, we measure the latency of individual DMA transfers of different sizes for a single tasklet, and compute the resulting bandwidth.
Figure~\ref{fig:mram-bandwidth} shows the read and write MRAM latency and bandwidth for read and write DMA transfers.}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/mram-bandwidth-tl1-wide.pdf}
\vspace{-2mm}
\caption{\juan{MRAM read and write bandwidth and latency (logarithmic scale) with MRAM-WRAM DMA transfers between 8 and 2,048 bytes.}}
\label{fig:mram-bandwidth}
\end{figure}
We make two observations.
\juan{The first one is that the bandwidth increases with the transfer size.
It scales almost linearly between 8 and 128 bytes, and tends to saturate for larger sizes.
The reason is that the latency increases slowly for DMA transfers between 8 and 128 bytes, and faster after 128 bytes.
For example, the latency of a 128-byte DMA transfer is only 32\% longer than the latency of an 8-byte DMA transfer. After 128 bytes, the DMA transfer latency rises sharply, with 2,048-byte transfers taking more than $6.8\times$ longer than 8-byte transfers.}
\juan{We can approximately model the latency as $\alpha + \beta \times size$, where $\alpha$ is the fixed cost in cycles of sending the first 8 bytes, \ie{not exactly} $\beta$ is the inverse of the MRAM bandwidth in cycles per byte, and $size$ is the transfer size in bytes.
$\alpha$ is}
\juang{$\sim50$} \juan{cycles in our measurements, and we obtain $\beta = 0.5$ cycles/byte, i.e., a bandwidth of 2 bytes/cycle. This results in a theoretical maximum bandwidth of 536 MB/s at 267 MHz. Our measurements are within 20\% of that theoretical maximum.}
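As a worked instance of this model (assuming the $\alpha \approx 50$ cycles and $\beta = 0.5$ cycles/byte values above), the predicted bandwidth of a 2,048-byte transfer is
\[
BW(size) = \frac{size}{\alpha + \beta \times size}, \qquad BW(2{,}048) = \frac{2{,}048}{50 + 1{,}024} \approx 1.91~\text{bytes/cycle} \approx 509~\text{MB/s at 267 MHz},
\]
which approaches the theoretical maximum as the transfer size grows.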
The second observation is that memory accesses are symmetric.
The bandwidth of read and write transfers is very similar for the same transfer size.
Based on these observations, \textbf{a general recommendation for programmers is to use large DMA transfer sizes when possible to maximize the bandwidth exploitation}.
This recommendation is particularly useful for workloads with streaming (unit-stride) access patterns.
In such cases, the size of the DMA transfer should only be limited by the available WRAM (64KB) and the number of running tasklets.
\juan{Given that the latency increases only slightly between 8-byte and 128-byte transfers, \textbf{another recommendation is to fetch more bytes than necessary when we use short transfers}. This can increase the probability of accessing WRAM in later accesses, instead of fetching from MRAM. The effect would be similar to accessing entire cache lines in a CPU memory hierarchy.}
\subsubsection{\textbf{Streaming Access Bandwidth}}\label{sec:mram-streaming}
For streaming (unit-stride, \juan{i.e., sequential}) access patterns, we implement the four versions of the STREAM benchmark~\cite{mccalpin1995} (i.e., copy, add, scale, triad),
with two variants of STREAM copy (namely, \emph{copy} and \emph{copy-w}).
Copy reads the input from MRAM to WRAM via a DMA transfer, then writes it from WRAM to MRAM via a DMA transfer, without performing any WRAM loads/stores in the core.
We use 1,024-byte DMA transfers.
Copy-w reads the input from MRAM to WRAM via a DMA transfer, copies the input in WRAM via loads/stores, then writes the copy from WRAM to MRAM via a DMA transfer.
We implement add, scale, and triad
by, in addition to copy-w, performing the corresponding computations \juan{(addition, multiplication, and addition + multiplication, respectively)}.
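A minimal sketch of the copy-w variant, under the same assumptions as the earlier transfer sketch (1,024-byte blocks; buffer names are illustrative):
\begin{verbatim}
#include <stdint.h>
#include <mram.h>

#define BLOCK 1024  /* bytes per DMA transfer */

__dma_aligned uint64_t in_w[BLOCK / 8], out_w[BLOCK / 8];

void copy_w_block(__mram_ptr void *in_m, __mram_ptr void *out_m)
{
    mram_read(in_m, in_w, BLOCK);           /* MRAM -> WRAM */
    for (unsigned i = 0; i < BLOCK / 8; i++)
        out_w[i] = in_w[i];                 /* WRAM loads/stores in core */
    mram_write(out_w, out_m, BLOCK);        /* WRAM -> MRAM */
}
\end{verbatim}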
\juan{We measure the sustainable MRAM bandwidth with 1 DPU and 640 DPUs for each STREAM version while scaling the number of tasklets per DPU from 1 to 16.
The tasklets collectively stream 2M 8-byte elements per DPU (weak scaling), which are divided evenly across the tasklets.
Figure~\ref{fig:mram-stream} shows the aggregated bandwidth for 1 DPU and 640 DPUs.}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/stream-mram.pdf}
\vspace{-2mm}
\caption{MRAM bandwidth for streaming access patterns.}
\label{fig:mram-stream}
\end{figure}
\juan{We make four key observations.}
\juan{The first observation is that the bandwidth of copy is about 300GB/s for 640 DPUs, which is very close to the nominal bandwidth (i.e., 333.75GB/s for 640 DPUs, as derived in Section~\ref{sec:dpu-architecture}).}
The second observation is that the bandwidth of copy saturates with two tasklets.
Based on this observation, we suspect that the DMA engine is equipped to sustain up to two simultaneous DMA transfers.
\juan{The third observation is that the bandwidth for copy-w and add saturates at 9 and 10 tasklets, respectively. The saturation point is achieved when the latency of accessing data from/to MRAM is larger than the latency of executing instructions executed in the pipeline.
For example, for copy-w the number of instructions to copy an 8-byte element from one WRAM address to another WRAM address is 6~\cite{upmem-guide}: WRAM source address calculation (\texttt{lsl\_add}), WRAM read (\texttt{ld}), WRAM destination address calculation (\texttt{lsl\_add}), WRAM write (\texttt{sd}), loop index update (\texttt{add}), and conditional branch (\texttt{jneq}).
With an effective pipeline depth of 11 stages, $T$ tasklets need $6 \times 11 + T - 1$ cycles to execute 6 instructions each.
At the same time, reading and writing 8 bytes per tasklet from/to MRAM takes $8 \times T$ cycles (according to the 2 bytes/cycle bandwidth we measure in Section~\ref{sec:mram-bandwidth}).
For copy-w, the latency of the MRAM accesses starts being larger than the latency of executing the instructions after 9 tasklets.
For add, the number of instructions in the pipeline per 8-byte element is 10. As a result, the MRAM accesses take longer than the instruction execution after 10 tasklets.}
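As a quick check under the model above, the copy-w saturation point follows directly from
\[
8T \geq 6 \times 11 + T - 1 \iff 7T \geq 65 \iff T \geq 9.3,
\]
i.e., the MRAM access latency overtakes the pipeline latency between 9 and 10 tasklets, consistent with the measured saturation point.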
\juan{The fourth observation is that the bandwidth of scale and triad is one order of magnitude smaller than that of copy, copy-w, and add. The bandwidth of these two cases saturates at 11 tasklets.
Scale and triad
\juang{use costly multiplications, which are based on the \texttt{mul\_step} operator, as we explain in Section~\ref{sec:arith-throughput}.}
As a result, the instruction execution takes longer than the MRAM accesses for any number of tasklets. Thus, the saturation point is determined by the maximum pipeline throughput.}
\subsubsection{\textbf{Strided and Random Access Bandwidth}}\label{sec:mram-strided-random}
For strided access patterns \juan{(i.e., access patterns with a certain distance between consecutive accesses)}, an array is accessed at a constant stride, copying it into another array at the same stride.
We implement two different approaches: 1) coarse-grain DMA, and 2) fine-grain DMA.
In the coarse-grain DMA approach, a large contiguous segment (1024 bytes) of the array in MRAM is accessed via DMA, and strided access happens in WRAM.
This approach mostly resembles what CPU hardware does (i.e., reads large cache lines and strides through the cache lines).
In the fine-grain DMA approach, only the data of interest in MRAM is transferred via DMA.
This approach results in more DMA requests, but less total data transferred.
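The two approaches can be sketched as follows. This is a hedged illustration, not the exact benchmark code; the names, sizes, and the reduction are illustrative assumptions.
\begin{verbatim}
#include <stdint.h>
#include <mram.h>

#define SEG 1024  /* coarse-grain segment size in bytes */

__dma_aligned uint64_t seg_w[SEG / 8];
__dma_aligned uint64_t elem_w;

/* Coarse-grain DMA: fetch a large contiguous segment, stride in WRAM.
 * Elements skipped by the stride are fetched but unused. */
uint64_t coarse_grain(__mram_ptr uint64_t *in_m, unsigned stride)
{
    uint64_t sum = 0;
    mram_read(in_m, seg_w, SEG);
    for (unsigned i = 0; i < SEG / 8; i += stride)
        sum += seg_w[i];
    return sum;
}

/* Fine-grain DMA: one 8-byte transfer per element of interest;
 * more DMA requests, but less total data transferred. */
uint64_t fine_grain(__mram_ptr uint64_t *in_m, unsigned stride, unsigned n)
{
    uint64_t sum = 0;
    for (unsigned i = 0; i < n; i++) {
        mram_read(&in_m[i * stride], &elem_w, 8);
        sum += elem_w;
    }
    return sum;
}
\end{verbatim}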
For the random access pattern, we implement the GUPS benchmark~\cite{gaekegups} which performs read-modify-write operations on random positions of an array.
Only fine-grain DMA is used for random access.
We measure the sustainable MRAM bandwidth for each microbenchmark while scaling the number of tasklets per DPU from 1 to 16.
\juan{The tasklets collectively stride through 2M 8-byte elements, divided evenly across the tasklets (strong scaling).
Figure~\ref{fig:mram-patterns} shows the aggregated bandwidth for 1 DPU.}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/strided-gups-mram-log.pdf}
\vspace{-5mm}
\caption{MRAM bandwidth for strided and random access patterns.}
\label{fig:mram-patterns}
\end{figure}
We observe that the bandwidth decreases as the stride size increases.
We also observe that the coarse-grain DMA approach has better bandwidth for small stride sizes while the fine-grain DMA approach has better bandwidth for larger strides and random access.
The larger the stride, the more fetched data goes unused in the coarse-grain DMA approach, which is what makes the fine-grain DMA approach more attractive for large strides.
\juan{In these experiments, the threshold stride to use the fine-grain DMA approach is 8. This relates to the amount of bandwidth that is effectively used in the coarse-grain DMA approach. For example, for 16 tasklets and stride equal to 1, we measure 313.38 MB/s for coarse-grain DMA and 56.28 MB/s for fine-grain DMA. With stride 8, only one eighth of the 313.38 MB/s (i.e., $\sim$39.2 MB/s) is effectively used in the coarse-grain DMA approach, which is lower than 56.28 MB/s.}
Based on these observations, \textbf{we recommend that programmers use the coarse-grain DMA approach for small stride workloads and the fine-grain DMA approach for large stride and random access workloads}.
\jgl{Do we want to remove "DMA"?}
\ie{I think it's fine to keep it if that's what UPMEM uses in the programming guide.}
\subsection{\textbf{WRAM Bandwidth}}
\label{sec:wram-bandwidth}
\ie{I think we should move WRAM before MRAM.}
This section evaluates the sustainable bandwidth that can be achieved by accessing WRAM with various access patterns.
The results presented in this section are for 64-bit elements, but they are very similar for 32-bit elements, since the WRAM latency is equal for both element sizes~\cite{upmem-guide}.
\subsubsection{\textbf{Streaming Access Bandwidth}}
We use the same streaming patterns to evaluate WRAM bandwidth that we use in Section~\ref{sec:mram-streaming} for MRAM, except that the arrays are allocated in WRAM and accessed repeatedly without being streamed from/to MRAM.
The results are shown in Figure~\ref{fig:wram-stream}.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/stream-wram.pdf}
\vspace{-2mm}
\caption{WRAM bandwidth for streaming access patterns.}
\label{fig:wram-stream}
\end{figure}
\juan{We observe that the achievable bandwidth for each version of STREAM is determined by the number of instructions that are executed in the pipeline.
For example, copy-w executes 6 instructions per 8-byte element (Section~\ref{sec:mram-streaming}). For 11 tasklets, the theoretical bandwidth is $\sim700$ MB/s in one DPU, which is very similar to our measurement.
The bandwidth of scale and triad is much smaller, because they use the costly multiplication based on the \texttt{mul\_step} operator (Section~\ref{sec:arith-throughput}).}
\subsubsection{\textbf{Strided and Random Access Bandwidth}}
\ie{I think we should remove this experiment}
We use the same strided and random access patterns to evaluate WRAM bandwidth that we use in Section~\ref{sec:mram-strided-random} for MRAM, except that the arrays are allocated in WRAM and without being streamed from/to MRAM.
Moreover, the distinction between coarse-grain and fine-grain DMA here is irrelevant since there is no MRAM access.
The largest stride used is 128 because that is the maximum size of a WRAM allocation in our experiments (1,024 bytes).
The results are shown in Figure~\ref{fig:wram-patterns}.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/strided-gups-wram.pdf}
\vspace{-2mm}
\caption{WRAM bandwidth for strided and random access patterns. \jgl{Improve this figure. Show max. bw equal to STREAM.}\ie{Why are stride 1 numbers different from STREAM?}}
\label{fig:wram-patterns}
\end{figure}
\juan{We observe that the achievable bandwidth is independent of the stride value.
We also observe that, for every number of tasklets, the bandwidth is very similar to the bandwidth obtained by copy-w with the same number of tasklets.
The reason for these observations is that the pipeline can access WRAM at 8 bytes/cycle, regardless of the addresses of the previous and next accesses. The same observations apply to GUPS.}
\subsection{\textbf{\juan{Throughput versus Operational Intensity}}}
\label{sec:throughput-oi}
We define the \emph{operational intensity} as the number of arithmetic operations per byte accessed from MRAM.
In this section, we present throughput results for microbenchmarks with different operational intensities.
Our microbenchmarks include MRAM-WRAM DMA transfers, WRAM read/write accesses, and variable arithmetic computation.
We change the operational intensity from very low values (0.0004883 operations/byte) to very high values (128 operations/byte), in order to measure the resulting throughput for different numbers of tasklets (from 1 to 16).
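A hedged sketch of one way to implement such an intensity knob is shown below; \texttt{REP}, the buffer names, and the sizes are illustrative assumptions, not the exact benchmark code.
\begin{verbatim}
#include <stdint.h>
#include <mram.h>

#define BLOCK 1024  /* bytes per DMA transfer */
#define REP 4       /* additions per element: scales the intensity */

__dma_aligned uint32_t buf[BLOCK / 4];

void oi_kernel(__mram_ptr void *in_m, __mram_ptr void *out_m,
               uint32_t scalar)
{
    mram_read(in_m, buf, BLOCK);       /* BLOCK bytes read from MRAM */
    for (unsigned i = 0; i < BLOCK / 4; i++)
        for (unsigned r = 0; r < REP; r++)
            buf[i] += scalar;          /* REP additions per element */
    mram_write(buf, out_m, BLOCK);     /* BLOCK bytes written to MRAM */
    /* Intensity = (REP * BLOCK/4) / (2 * BLOCK) = REP/8 OPS/B */
}
\end{verbatim}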
Figure~\ref{fig:ai-dpu} shows the throughput for different values of operational intensity for 32-bit integer additions on one DPU.
We observe similar trends for other data types and arithmetic operations.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/ai-dpu-wide.pdf}
\vspace{-2mm}
\caption{Throughput versus operational intensity for 32-bit integer additions on one DPU. Each dot represents the throughput for a value of operational intensity and a number of tasklets (i.e., number inside the dot).}
\label{fig:ai-dpu}
\end{figure}
We observe that throughput begins to saturate (i.e., the benchmark transitions from being memory-bound to compute-bound) at around
\juan{0.25 OPS/B} \juang{(i.e., 1 operation for every 4 bytes)}.
This operational intensity is low,
\juan{indicating that \textbf{DPUs are fundamentally compute-bound processors}.}
We also observe that when the operational intensity is very low, the throughput saturates with fewer than the usual 11 tasklets.
\juan{For example, between 0.0004883 and 0.015625 OPS/B, the throughput saturates for 2 tasklets.}
The reason is that most of the cycles are spent in MRAM-WRAM DMA transfers, the bandwidth of which saturates with just 2 tasklets \juan{(STREAM copy in Figure~\ref{fig:mram-stream})}.
\juan{However, an operational intensity as low as 0.015625 OPS/B is extremely low, as it entails only one addition every 64 32-bit input elements}.
We expect higher operational intensity in most real-world workloads and, thus, throughput values saturating with 11 tasklets.
\subsection{\textbf{\juan{CPU-DPU Communication}}}
\label{sec:cpu-dpu}
\juan{In this section, we measure the bandwidth between the main memory of the host and the MRAM of DPUs.
The UPMEM SDK provides two types of CPU-DPU memory transfers~\cite{upmem-guide}: 1) memory interface, and 2) rank transfer interface.}
\juan{The memory interface provides two functions to copy a buffer to a DPU memory (\texttt{dpu\_copy\_to}) and from a DPU memory (\texttt{dpu\_copy\_from}).
The rank transfer interface provides a function to attribute buffers to a set of DPUs in a rank (\texttt{dpu\_prepare\_xfer}), and a function to execute the actual transfer (\texttt{dpu\_push\_xfer}).
This function supports parallel transfers to the DPUs in a rank, but requires that the transfer sizes to all DPUs are the same.}
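A minimal host-side sketch of a parallel rank transfer using these two functions is shown below; \texttt{NR\_DPUS}, \texttt{SIZE}, and \texttt{host\_buf} are illustrative assumptions, and \texttt{DPU\_MRAM\_HEAP\_POINTER\_NAME} is the SDK's default MRAM heap symbol.
\begin{verbatim}
#include <dpu.h>
#include <stdint.h>

#define NR_DPUS 64
#define SIZE (32 << 20)  /* 32 MB per DPU */

void parallel_load(uint8_t *host_buf[NR_DPUS])
{
    struct dpu_set_t set, dpu;
    uint32_t idx;
    DPU_ASSERT(dpu_alloc(NR_DPUS, NULL, &set));
    DPU_FOREACH(set, dpu, idx) {
        /* Attribute one buffer to each DPU in the rank. */
        DPU_ASSERT(dpu_prepare_xfer(dpu, host_buf[idx]));
    }
    /* One parallel push; the size must be equal for all DPUs. */
    DPU_ASSERT(dpu_push_xfer(set, DPU_XFER_TO_DPU,
                             DPU_MRAM_HEAP_POINTER_NAME, 0, SIZE,
                             DPU_XFER_DEFAULT));
    DPU_ASSERT(dpu_free(set));
}
\end{verbatim}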
\ie{Explain this earlier in system architecture.}
\juan{We measure the CPU-DPU bandwidth with two different experiments.
The first one obtains the bandwidth of load (CPU memory to DPU MRAM) and retrieve (DPU MRAM to CPU memory) transfers for a single DPU. We change the transfer size from 8 bytes to 32 MB.
The second experiment transfers 32 MB per DPU to a set of 1 to 64 DPUs within the same rank.\footnote{\juang{Preliminary experiments with more than one rank show that the current SDK does not handle simultaneous transfers to multiple ranks. We suspect this may be improved in future releases.}}}
\juan{Figure~\ref{fig:cpudpu} presents the results of both experiments.
We make several observations.
First, load and retrieve bandwidths for a single DPU (Figure~\ref{fig:cpudpu}(a)) are similar for the same transfer sizes. For the largest transfer size (32 MB), load and retrieve bandwidths achieve 0.38 GB/s and 0.32 GB/s, respectively.
Second, the bandwidth for a single DPU increases linearly between 8 bytes and 8 KB, and tends to saturate for larger sizes.
Third, for a rank (Figure~\ref{fig:cpudpu}(b)) the bandwidths of serial load and retrieve transfers remain flat for different numbers of DPUs. The transfers are executed serially, thus their latencies increase with the number of DPUs (as the total amount of transferred data increases).
Fourth, the bandwidths of the parallel transfers increase with the number of DPUs. Load transfers are more optimized than retrieve transfers. The maximum load bandwidth is 13.52 GB/s, while the maximum retrieve bandwidth is 3.96 GB/s.\footnote{\juang{We suspect that this difference is because of different implementations of both types of transfer in the current SDK, which may be improved in future releases.}}}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/cpudpu.pdf}
\vspace{-2mm}
\caption{\juan{Bandwidth of load (CPU memory to DPU MRAM) and retrieve (DPU MRAM to CPU memory) transfers for one DPU (a) and for a set of 1-64 DPUs within a rank (b). Y-axes use logarithmic scale.}}
\label{fig:cpudpu}
\end{figure}
\section*{{\Large AUTHORS' RESPONSE and SUMMARY OF CHANGES}}
We thank the reviewers for their valuable feedback. We have addressed all {the} comments, {including the PC summary comments}. \juan{All new text in the paper is in blue font.}
We first list the major changes we made to the paper in this revision. Then, we respond to each of the reviewers' comments and indicate how we have addressed them in the paper.
\section*{{MAJOR CHANGES TO THE PAPER}}
\begin{enumerate}
\item For the new version of the paper, we \textbf{reran all our experiments using real UPMEM PIM hardware} and revised the paper accordingly. To do so, we (1) ported all our microbenchmarks and benchmarks to a new SDK (2020.3, as opposed to 2019.3, which we used in our previous submission), (2) reran all experiments on a real hardware setup (640 DPUs organized in 10 ranks), and (3) updated the results in Section~\ref{sec:microbench} (Performance characterization of a UPMEM DPU) and Section~\ref{sec:evaluation} (Evaluation). Note that the real hardware was not available before the Winter 2020 deadline. \textbf{This change addresses point (i) of the summary of PC discussion (comment @A1 by Reviewer A), comment {B1} (review B), and comment {C1} (review C)}.
\item We added more detailed analysis of our microbenchmarking results. We added explanations for the measured arithmetic throughput (Section~\ref{sec:arith-throughput}) and WRAM bandwidth (Section~\ref{sec:wram-bandwidth}) by analyzing the instructions that get executed. In Section~\ref{sec:mram-read-write}, we derived the actual MRAM-WRAM bandwidth (2 bytes/cycle) from our real system measurements and provided a linear model that accurately matches our measurements. In Section~\ref{sec:mram-streaming}, for the STREAM benchmark, we explained the achieved bandwidth and the saturation points. \textbf{This change addresses points (ii) and (iii) of the summary of PC discussion (comment @A1 by Reviewer A), and comments {A1} and {A2} (review A) and comment {C2} (review C)}.
\item We \textbf{expanded the experiments carried out in the evaluation} (Section~\ref{sec:evaluation}). In the original paper, we reported strong scaling results and energy results for all benchmarks and compared to CPU and GPU for some benchmarks. In the new version, we (1) improved the strong scaling results by scaling to a larger number of DPUs and reporting the breakdown between DPU time and host communication time (Section~\ref{sec:strong}), (2) added weak scaling results (Section~\ref{sec:weak}), and (3) completed the comparison with CPU and GPU for {\emph{all}} benchmarks (Section~\ref{sec:comparison}). For all these additions/improvements, timing and energy results were obtained from real hardware instead of simulation/emulation. \textbf{This change addresses points (ii) and (iii) of the summary of PC discussion (Comment @A1 by Reviewer A), and comments {A1} and {A2} (review A) and comment {C2} (review C)}.
\item We \textbf{added a new discussion section about the suitability of the selected benchmarks for the UPMEM PIM architecture} (Section~\ref{sec:discussion}). Although the introduction to Section~\ref{sec:benchmarks} along with Figure~\ref{fig:roofline} had shown the suitability of the benchmarks for PIM architectures in general by showing that they are memory-bound, the new Section~\ref{sec:discussion} reflects on the comparison with CPUs and GPUs to further discuss suitability for UPMEM specifically. The section also provides hints for improvement of different aspects of the hardware and architecture. \textbf{This change addresses point (iv) of the summary of PC discussion (comment @A1 by Reviewer A)}.
\item We added a \textbf{new microbenchmark that measures the bandwidth between the main memory of the host CPU and the MRAM of the DPUs} in a rank (Section~\ref{sec:cpu-dpu}). This experiment could not be performed with the emulation platform that we used in our previous submission, and can now be done on the real hardware.
\item We expanded our benchmark suite with \textbf{5 new benchmarks} (namely, Reduction, 2 versions of Histogram, and 2 versions of Scan), for a total of 14 benchmarks. We added a description of the new benchmarks in Sections~\ref{sec:histogram} to~\ref{sec:scan}. We added the performance and energy evaluation of these benchmarks and a comparison to CPU and GPU versions in Section~\ref{sec:evaluation}. We also added an Appendix that compares different implementations of these benchmarks that use different communication primitives.
\end{enumerate}
{In the following pages, we respond to all feedback from reviewers, and indicate the changes that we made to the paper to address their comments.}
\renewcommand\contentsname{}
\vspace{5pt}
\setcounter{tocdepth}{-1}
\tableofcontents
\addtocontents{toc}{~\hfill\textbf{Page}\par}
\newpage
\section*{{DETAILED RESPONSES}}
\subsection*{{{PC DISCUSSION} SUMMARY \\
Comment @A1 by Reviewer A.
Summary of PC Discussion and specific points to be addressed in 1-shot revision:
}}
\addcontentsline{toc}{part}{PC DISCUSSION SUMMARY}
\vspace{3mm} \boxbegin {\bf (i)} One of the main concerns of the reviewers pertains to validating the results using real hardware. If the real hardware is not available in the given timeframe or has limitations in conducting the desired experimental measurements, the authors should clearly and extensively discuss the limitations of their experimental methodology as well as try to provide (as much as possible) experimental evidence for the credibility/accuracy of their performance results on the emulated hardware. \boxend
We understand the reviewers' concern. When we submitted the previous version of the paper to SIGMETRICS 2020 Winter deadline, there were no real UPMEM PIM DIMMs available. As far as we knew from personal communication with UPMEM, their PIM DIMMs were in post-fabrication testing by that time. For this reason, we used their FPGA emulator. We gained access to the real UPMEM PIM hardware after the notification of SIGMETRICS 2020. Thus, we used the real {hardware} setup with 640 DPUs for all experiments in the new version of the paper. In the new version, we included the following paragraph in Section~\ref{sec:sys-org} to clarify the setup that we used:
\yboxbegin
\Paste{comment11a}
\yboxend
All results in the new version of the paper (architecture characterization in Section~\ref{sec:microbench} and evaluation in Section~\ref{sec:evaluation}) are measurements on the real hardware setup introduced in Section~\ref{sec:sys-org}. Most of the observations and insights we obtained from the FPGA emulator in the previous version of the paper are still valid on the real hardware setup. However, we did not include any results from the FPGA emulator in the new version of the paper, because (1) we can run all experiments on the real hardware, and (2) validating the FPGA emulator is not a purpose of our work. We believe that the FPGA emulator, like the software-based simulator that comes with the UPMEM SDK, can still be useful for programmers in early stages of PIM {software} development.
\vspace{3mm} \boxbegin {\bf (ii)} The paper lacks deep analysis of the reported performance numbers. Several examples of this are given in the reviews. \boxend
In the new version of the paper, we incorporated many new details into our performance analysis. We describe them next.
In Section~\ref{sec:arith-throughput}, we added an explanation for the measured throughput values for arithmetic operations by analyzing the instructions that get executed ($Throughput = 1 / (\#instructions \times t_{cycle})$). The explanations we added are reproduced here for the reviewers' convenience:
\yboxbegin
\Paste{comment1a}
\yboxend
In Section~\ref{sec:wram-bandwidth}, we added an explanation for the measured WRAM bandwidth by analyzing the number of executed instructions and the number of bytes moved. The explanations we added are reproduced here for the reviewers' convenience:
\yboxbegin
\Paste{comment1b}
\yboxend
In Section~\ref{sec:mram-read-write}, we derived a model for the MRAM-WRAM read/write latency and added an explanation of the model. We derive the actual MRAM-WRAM bandwidth (2 bytes/cycle) from our measurements. The derivation we added is reproduced here for the reviewers' convenience:
\yboxbegin
\Paste{comment1c}
\yboxend
We also improved Figure~\ref{fig:mram-bandwidth} (Figure 2 in the previous version of the paper), which shows MRAM read/write latency and bandwidth, by adding a line to the figure for the derived latency model. Our linear model accurately matches the measured latency of MRAM-WRAM transfers. We reproduce Figure~\ref{fig:mram-bandwidth} here for the reviewers' convenience:
\setcounter{figure}{3}
\yboxbegin
\Paste{comment1d}
\yboxend
In Section~\ref{sec:mram-streaming}, we added explanations of the achieved bandwidth of the STREAM benchmarks and their saturation points. We added the following paragraph about the bandwidth and saturation points of COPY and ADD:
\yboxbegin
\Paste{comment1e}
\yboxend
We added the following paragraph about the bandwidth and saturation points of SCALE and TRIAD:
\yboxbegin
\Paste{comment1f}
\yboxend
In Section~\ref{sec:mram-strided-random}, we added more details about the reason why the bandwidth of {fine-grained} DMA access is higher than the bandwidth of {coarse-grained} DMA access after a stride of 16. We reproduce the explanation here for the reviewers' convenience:
\yboxbegin
\Paste{comment1g}
\yboxend
In Section~\ref{sec:throughput-oi}, we added more evidence about the compute boundedness of the UPMEM DPUs by expanding Figure~\ref{fig:ai-dpu} (Figure 8 in the previous version of the paper). The original figure showed throughput vs. OP/B for 32-bit integer addition only. The expanded figure also shows results for 32-bit integer multiplication and 32-bit floating point addition and multiplication. The throughput for the latter 3 saturates with much fewer OP/B. We reproduce Figure~\ref{fig:ai-dpu} and the corresponding explanations from the paper here for the reviewers' convenience:
\setcounter{figure}{6}
\yboxbegin
\Paste{comment1h}
\yboxend
We completely rewrote Section~\ref{sec:evaluation} (Evaluation) and applied the following changes:
\begin{itemize}
\item While all results in the previous version of the paper were obtained with the emulator, all results in the new version of the paper were obtained on the real hardware.
\item We improved the strong scaling analysis (Section~\ref{sec:strong}) by (1) extending the evaluation to up to 640 DPUs (up to 256 DPUs in the previous version), (2) including communication cost between the host CPU and the UPMEM PIM system, and (3) inter-DPU synchronization cost via host CPU. Neither (2) nor (3) were possible in the emulator.
\item We added a new weak scaling analysis (Section~\ref{sec:weak}) within one rank of DPUs in order to analyze how the inter-DPU synchronization cost changes when we increase the number of DPUs.
\item We extended the performance and energy comparison to CPU and GPU (Section~\ref{sec:comparison}) to cover all of our benchmarks.
\item We added a new discussion section (Section~\ref{sec:discussion}) about (1) the suitability of the benchmarks to the UPMEM PIM Architecture, and (2) possible improvements to the hardware and architecture.
\end{itemize}
Since the changes involve the entire section, we do not replicate them here and refer the reviewers to the paper for details.
\vspace{3mm} \boxbegin {\bf (iii)} What non-trivial findings does the paper bring to the community? \boxend
We believe that our benchmarking {and analysis} results of \textbf{the first publicly available PIM system} are valuable for the research community. Since there is no prior study of a real PIM system, we think our results are useful, even if some of them may not be surprising.
Below is a summary of our major findings and how they are useful:
\begin{itemize}
\item {We provide the first deep understanding of and workloads for a real-world PIM system. Other works can build on this understanding and provide better PIM systems. Our workloads can be the \emph{de facto} standard workloads of evaluating PIM systems, both real and under design.}
\item The architecture limits reported in Section~\ref{sec:microbench}, such as bandwidth for different access patterns and throughput for different operations and data types, are useful for programmers who would like to reason about the potential performance of their workloads on the UPMEM system. {They are also useful for future hardware designers to build better PIM systems.}
\item The observation that the architecture is fundamentally compute-bound in Section~\ref{sec:microbench} is useful in demonstrating how the new architecture requires a paradigm shift in how we think about computation.
\item The benchmarks accompanying the paper are useful examples for programmers, and the paper also provides programming recommendations that verify and complement those in the programming guide. {We will open source these benchmarks, and we expect them to gain widespread use.}
\item The comparison to CPU and GPU in Section~\ref{sec:evaluation} is useful for programmers to anticipate how much performance improvement they can get from this hardware compared to traditional processors for different types of workloads.
\item The discussion of the limitations of the architecture that was newly added in Section~\ref{sec:discussion} is useful for guiding programmers as to what computation patterns, despite being memory-bound, may not be suitable for the architecture. These include workloads that require heavy use of mul/div, floating point, and global communication.
\end{itemize}
We replicate Section~\ref{sec:discussion} here for the reviewers' convenience.
\renewcommand{\thesection}{\arabic{section}}%
\setcounter{section}{5}
\setcounter{subsection}{2}
\yboxbegin
\Paste{sec:discussion}
\yboxend
Section~\ref{sec:wram-bandwidth}: We remarked the following important observation:
\yboxbegin
\Paste{comment2a}
\yboxend
Section~\ref{sec:mram-read-write}: Since we observed that the bandwidth increases almost linearly for small-sized transfers, we included the following recommendation:
\yboxbegin
\Paste{comment2b}
\yboxend
Moreover, since the latency increases minimally for 8-byte to 128-byte transfers, we included this additional recommendation:
\yboxbegin
\Paste{comment2c}
\yboxend
Section~\ref{sec:mram-strided-random}: We proposed two DMA approaches for strided memory accesses. From our observations, we derived the following recommendation:
\yboxbegin
\Paste{comment2d}
\yboxend
Finally, we would also like to mention that the recommendations accompanying the product are recommendations from the designers of the UPMEM PIM architecture based on their knowledge of the system. We believe there is value in verifying these recommendations via benchmarking, and we believe this is one of the contributions of our work.
\vspace{3mm} \boxbegin {\bf (iv)} One suggestion to address points (ii) and (iii) above: the proposed benchmarks do not necessarily bring out the advantages of UPMEM over other PIMs. Can the authors add a few benchmarks which can bring out the advantages of UPMEM over other PIMs, or vice-versa? Such benchmarks and performance analysis can bring out the characteristics of UPMEM specifically. \boxend
We appreciate the suggestion from the reviewers. As we explain above, we addressed points (ii) and (iii) by adding deeper analysis, more interesting insights, programming recommendations, {and hints for hardware and architecture improvements} into the new version of the paper. We also added a new discussion section about the suitability of the selected benchmarks for the UPMEM PIM architecture (Section~\ref{sec:discussion}). Section~\ref{sec:discussion} reflects on the comparison with CPUs and GPUs to further bring out the characteristics of UPMEM specifically. The section also provides hints for improvement of different aspects of the hardware and architecture. We replicate Section~\ref{sec:discussion} here for the reviewers’ convenience:
\renewcommand{\thesection}{\arabic{section}}%
\setcounter{section}{5}
\setcounter{subsection}{2}
\yboxbegin
\Paste{sec:discussion}
\yboxend
Unfortunately, we cannot compare the UPMEM PIM architecture to other PIM architectures because, to our knowledge, there is no other publicly-available PIM architecture (as of October 2020).
{Performing a simulation-based study of all types of PIM architectures would be a very large and completely different undertaking from what we aim to do with this paper.}
\pagebreak
\subsection*{{REVIEW A}}
\addcontentsline{toc}{part}{REVIEW A}
\vspace{3mm}
\boxbegin {\bf (Comment A1)} The paper lacks a detailed performance analysis of the reported performance numbers. \boxend
We addressed this concern under \textbf{Comment (ii) of the Summary of the PC discussion.}
\vspace{3mm}
\boxbegin {\bf (Comment A2)} The performance characterization is in expected line and does not throw any new insight/observation. \boxend
We addressed this concern under \textbf{Comment (iii) of the Summary of the PC discussion.}
\vspace{3mm} \boxbegin {\bf (Comment A3)} Why does the performance saturate at 11 tasklets while the no. of pipeline stages is 14? \boxend
We included the following explanation in Section~\ref{sec:sys-org} to address the reviewer’s comment:
\yboxbegin
\Paste{comment3a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A4)} In Figure 3, why is the throughput for scale and triad significantly low, even compared to add? If the operations are pipelined (well), why should the bandwidth fall significantly? \boxend
Figure 3 is Figure~\ref{fig:mram-stream} in the new version of the paper. Scale and triad have lower throughput than add because they use multiplication, which is performed by routines that can take many instructions and tens to hundreds of cycles. We added the following explanation to Section~\ref{sec:arith-throughput} to address the reviewer’s question:
\yboxbegin
\Paste{comment4a}
\yboxend
We also added the following paragraph to Section~\ref{sec:mram-streaming}:
\yboxbegin
\Paste{comment4b}
\yboxend
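To give additional intuition about why multiplication is expensive on this architecture, we show a minimal sketch of a generic shift-and-add multiplication routine of the kind used on cores without a full hardware multiplier (illustrative only; the actual UPMEM runtime routines may differ in detail):
\begin{verbatim}
#include <stdint.h>
/* Shift-and-add 32-bit multiply: up to 32 loop iterations,
   i.e., tens of instructions, versus one pipelined add. */
static uint32_t mul32(uint32_t x, uint32_t y) {
    uint32_t r = 0;
    while (y != 0) {
        if (y & 1) r += x;  /* add shifted multiplicand if bit is set */
        x <<= 1;
        y >>= 1;
    }
    return r;
}
\end{verbatim}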
\vspace{3mm} \boxbegin {\bf (Comment A5)} In Figure 8, why is there a dip in performance at an operation intensity of 0.125? \boxend
The reason for that dip was that we had two versions of the microbenchmark (with a slightly different instruction count): one for operational intensity lower than 0.125 and another for operational intensity equal to or greater than 0.125. In the new version of the paper, we improved the code to have a single version of the microbenchmark. The figure, which is now Figure~\ref{fig:ai-dpu}, does not {have} that dip because now we have a single code for the whole experiment. We have copied Figure~\ref{fig:ai-dpu} below, for the reviewer’s convenience. We also added three more plots for different operand types and operations (32-bit integer addition and multiplication, and 32-bit floating point addition and multiplication) for a more comprehensive evaluation:
\setcounter{figure}{6}
\yboxbegin
\Paste{comment5a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A6)} How are the benchmarks implemented? Are the tasklets and tasks explicitly specified? What is the software tool chain available for programming UPMEM? \boxend
In the previous version of the paper, we introduced DPU programming in Section~\ref{sec:dpu-architecture}. This is Section~\ref{sec:dpu-programming} in the new version of the paper. We reproduce relevant paragraphs of Section~\ref{sec:dpu-programming} here for the reviewer's convenience. The highlighted text is added to the new version of the paper for more clarification, addressing the reviewer's comment:
\yboxbegin
\Paste{comment6a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A7)} In the experimental methodology, report the working set size for each benchmark (in Table 2). \boxend
We added more details to Table~\ref{tab:datasets} about the working set sizes of benchmarks that were missing them. We show the revised Table~\ref{tab:datasets} below, for the reviewer’s convenience:
\setcounter{table}{1}
\yboxbegin
\Paste{comment7a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A8)} Is the working set size contained within WRAM? It would be interesting to look at benchmark input sizes where this is not the case. Also, in Table 2, are the reported MRAM-WRAM transfer sizes the best performing ones? \boxend
None of the working sets used in the previous version or the new version of the paper fit in WRAM. Yes, the reported MRAM-WRAM transfer sizes are the best performing ones. To make this more clear, we included the following text in the new version of the paper (in the first paragraph of Section~\ref{sec:evaluation}):
\yboxbegin
\Paste{comment8a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A9)} Wrt the discussion on Figure 11, it was mentioned that the best performing configuration (in terms of no. of tasklets) is the one corresponding to the no. of rows divided by the no. of DPUs. For an input size 2048 x 1024, this argument suggests that 8 tasklets should be the best performing one for 256 DPUs, but the figure shows the best performing one is 4. Also, explain why there is a performance increase beyond this no. of tasklets? \boxend
We agree that this explanation was not very clear. In the previous version of the paper, we did not try to give a fixed rule, but just highlighted some trends. In the new version of the paper, we did not include this experiment because it was not very insightful. Also, the execution on the real UPMEM PIM system produces different results, with 16 tasklets producing the best performance results for the matrices we used. We reproduce the plot for GEMV below, which is part of Figure~\ref{fig:1dpu_strong} in Section~\ref{sec:strong}:
\setcounter{figure}{19}
\yboxbegin
{\centering
\centering
\includegraphics[width=0.4\linewidth]{figures/gemv-1dpu.pdf}
\captionof{figure}{GEMV: Strong scaling results on 1 DPU. This plot is part of Figure~\ref{fig:1dpu_strong} in Section~\ref{sec:strong} of the paper.}
\label{fig:gemv-1dpu}
}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment A10)} For BIS, there is no performance increase when the no. of DPUs is increased. Why? \boxend
We renamed BIS to BS (Binary Search) in the new version of the paper. When running this benchmark on the real UPMEM setup, we identified opportunities for improving the benchmark, which improve its scaling behavior. For the reviewer's convenience, we reproduce below strong scaling results of BS for 1 rank and 10 ranks (these plots are, respectively, part of Figures~\ref{fig:64dpu_strong} and~\ref{fig:640dpu_strong}):
\yboxbegin
{\centering
\centering
\includegraphics[width=0.8\linewidth]{figures/bs-strong.pdf}
\captionof{figure}{BS: Strong scaling results on 1 rank (left) and 10 ranks (right). These plots are respectively part of Figures~\ref{fig:64dpu_strong} and~\ref{fig:640dpu_strong} in Section~\ref{sec:strong} of the paper.}
\label{fig:bs-strong}
}
\yboxend
\pagebreak
\subsection*{{REVIEW B}}
\addcontentsline{toc}{part}{REVIEW B}
\vspace{3mm} \boxbegin {\bf (Comment B1)} The fact that a FPGA emulation is used and not the real system raises concerns on the credibility of the study. [...] Unfortunately, though, the fact of using an FPGA-based emulation weakens the credibility of the study: will the performance and energy consumption of the real system match those of the FPGA-based emulation? In this reviewer's opinion, the lack of any discussion on this represents a significant limitation of this work, and its acceptance should be considered only provided that the paper is extended to clarify this aspect. \boxend
We addressed this concern under \textbf{Comment (i) of the Summary of the PC discussion.}
\vspace{3mm} \boxbegin {\bf (Comment B2)} The comparison with GPU and CPU is based only on a subset of benchmarks. [...] Another relevant weakness of the study is to consider only a subset of the benchmarks used elsewhere in the paper when the UPMEM architecture is compared with CPUs and GPUs. The benchmarks omitted in this study include irregular applications, which are challenging for both UPMEM and GPU and where I would expect to see less significant gains with respect to CPUs. As a result, the conclusions drawn by the authors may be overly optimistic, and thus misleading. \boxend
We agree with the reviewer that a comprehensive evaluation should include comparisons for all benchmarks. In the new version of the paper, we compared all our 14 benchmarks to CPU and GPU implementations of them. We {provide} the comparison in Section~\ref{sec:comparison}. We {copy} the relevant figures {and explanations} below for the reviewer's convenience:
\yboxbegin
\setcounter{figure}{13}
\setcounter{section}{5}
\setcounter{subsection}{1}
\Paste{comment12a}
\yboxend
\pagebreak
\subsection*{{REVIEW C}}
\addcontentsline{toc}{part}{REVIEW C}
\vspace{3mm} \boxbegin {\bf (Comment C1)} Given the use of an FPGA-based platform, it is unclear what aspects of the findings can be trusted as they are and which ones might need to be re-examined. Given that you say that the actual product will be out in Spring 2020, might it not be better to simply run the experiments on it and report those findings instead? \boxend
We addressed this concern under \textbf{Comment (i) of the Summary of the PC discussion.}
\vspace{3mm} \boxbegin {\bf (Comment C2)} This is not necessarily a shortcoming, but I failed to find anything surprising (except may be that bandwidth is the bottleneck). What does the paper offer a user that the recommendations accompanying the product don't already suggest (e.g., you relied on the prescription that the number of tasklets per DPU be 11, and your results indicate that this prescription was indeed spot on). \boxend
We addressed this concern under \textbf{Comment (iii) of the Summary of the PC discussion.}
\vspace{3mm} \boxbegin {\bf (Comment C3)} It would be nice to offer a reason for why the comparison against performance on a CPU in Sec. 5.3 is "fair". What was common across the CPU and your setup: cost? Some aspects of resource capacity? \boxend
The UPMEM setup we used was the largest one available to us. The memory capacity in the UPMEM setup (64MB/DPU*640DPUs = 40GB) is comparable to the memory capacity in the CPU setup (64 GB). The cost of the UPMEM setup ((\$450/128DPUs)*640DPUs = \$2,250) is comparable to the cost of the GPU setup (\$3,000 for Titan V). That said, it is worth noting that the UPMEM hardware is still maturing and is expected to run at a higher frequency in the near future (400 MHz instead of 267 MHz) and potentially be manufactured at a smaller technology node. Hence, the results we report in our comparison with CPUs and GPUs underestimate the full potential of the UPMEM PIM architecture. This limitation is expected when comparing a novel architecture like UPMEM with mature architectures that (1) have been thoroughly optimized over decades, {and (2) can run at high frequencies due to much more mature process technologies.}
{We added this discussion at the end of Section~\ref{sec:comparison}. We reproduce it here for the reviewers' convenience:}
\yboxbegin
\Paste{commentC3a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C4)} Whose product is UPMEM PIM? \boxend
The UPMEM PIM architecture is a product of UPMEM (https://www.upmem.com). We indicated this in the second paragraph of the abstract (in both the previous and the new version of the paper):
\yboxbegin
\Paste{comment16a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C5)} What do the following stand for: DPU, MRAM. \boxend
We now include the meaning of these acronyms in the introduction and in Section~\ref{sec:upmem_pim}:
DPU = DRAM Processing Unit.
MRAM = Main RAM.
\vspace{3mm} \boxbegin {\bf (Comment C6)} Is the goal of the benchmarking to allow for a fair/reasonable comparison across different PIM architectures? Or to compare against CPUs? Or both? Please state the goal clearly. \boxend
Our work has several goals:
\begin{itemize}
\item To characterize the UPMEM PIM architecture and understand its limits and bottlenecks. These include memory bandwidth at different points in the memory hierarchy for different memory access patterns, compute throughput of different arithmetic operations for different datatypes, and strong and weak scalability for different communication patterns.
\item To provide a set of memory-bound benchmarks with different characteristics (memory access patterns, operations and data types, communication patterns) that can be used by architecture and systems researchers, {as well as programmers and programming language researchers,} to (1) further study the UPMEM PIM architecture, and (2) advance PIM hardware and software.
\item {To understand the best way to program existing hardware and take advantage of it, in addition to providing hints for better future PIM programming and design practices.}
\item To compare the performance of the UPMEM PIM architecture to CPUs and GPUs for memory-bound workloads.
\end{itemize}
Our goal is \emph{not} to compare to other PIM architectures, since there is no other real PIM hardware available as of October 2020.
{Performing a simulation-based study of all types of PIM architectures would be a very large and completely different undertaking from what we aim to do with this paper.}
However, we believe that our benchmarks can be used to compare the UPMEM architecture to other real PIM hardware that may be developed in the future.
These goals were partially stated in the original paper in the list of contributions in the introduction. To address the reviewer’s concern, we improve the list of contributions in the new version of the paper as follows:
\yboxbegin
\Paste{commentC6a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C7)} I didn't see a study of coherence overheads for intra-DPU parallelism. Are all your workloads highly parallel? Isn't this important to study? \boxend
In the UPMEM PIM architecture, there is no hardware support for coherence. Managing coherence is the programmer's responsibility. Programmers can use synchronization primitives (e.g., mutexes, handshakes, barriers) to ensure that data accessed by different tasklets is coherent.
{We explained this in Section~\ref{sec:dpu-programming}. In the current version of the paper, we improved the explanations to make them more clear. We reproduce the text below for the reviewers' convenience:}
\yboxbegin
\Paste{commentC7b}
\yboxend
{In our benchmark suite,} we include several benchmarks (UNIQUE, SELECT, Reduction, SCAN-SSA, SCAN-RSS, HSTL, BFS) that use these synchronization primitives.
We updated Table 1 to
{reflect them:}
\setcounter{table}{0}
\yboxbegin
\Paste{commentC7a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C8)} Fig. 2: as I was reading, my thought was that the shape of the curves is explained by lower interrupt processing overheads as the payload size grows. Would this not be the case? I didn't quite follow the latency related explanation you offered. Could you please clarify? \boxend
Indeed, as the reviewer says, the shape of the curves can be explained by the fixed cost of transfer (the interrupt processing overhead) getting amortized over a larger payload. We address the reviewer’s comment by deriving a latency model in Section~\ref{sec:mram-read-write} and adding the measured and modelled latency to Fig. 4 (Fig. 2 in the original paper). The text and figure are replicated below for the reviewer’s convenience:
\setcounter{figure}{3}
\yboxbegin
\Paste{comment1d}
\yboxend
\yboxbegin
\Paste{comment1c}
\yboxend
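In essence, the model captures a fixed setup cost that gets amortized over the payload. Schematically (a generic sketch; the exact coefficients are derived in Section~\ref{sec:mram-read-write}):
\[
T(n) = \alpha + \beta\, n, \qquad BW(n) = \frac{n}{T(n)} = \frac{n}{\alpha + \beta\, n} \rightarrow \frac{1}{\beta} \quad \mbox{as } n \rightarrow \infty,
\]
where $n$ is the transfer size in bytes, $\alpha$ is the fixed setup cost of a DMA transfer (the interrupt processing overhead the reviewer refers to), and $\beta$ is the per-byte transfer cost. For small $n$, $\alpha$ dominates and bandwidth grows almost linearly with $n$; for large $n$, bandwidth saturates at $1/\beta$.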
\vspace{3mm} \boxbegin {\bf (Comment C9)} State clearly what you mean by "single-stride" vs. "multi-stride", "Read/write" vs. "Streaming". I had to make guesses (which I think were correct, but why not just be clear and avoid possible confusion). \boxend
We apologize for the possible confusion. In the new version of the paper, we explain these terms at the corresponding places.
In Section~\ref{sec:wram-benchmark-desc}, we explained the meaning of streaming or unit-strided access:
\yboxbegin
\Paste{comment21a}
\yboxend
In Section~\ref{sec:mram-read-write}, we explained how we test read/write DMA transfers:
\yboxbegin
\Paste{comment21b}
\yboxend
These transfers were introduced in Section~\ref{sec:dpu-programming} in the following sentence:
\yboxbegin
\Paste{comment21c}
\yboxend
In the beginning of Section~\ref{sec:mram-strided-random}, we clarified what we mean by strided access:
\yboxbegin
\Paste{comment21d}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C10)} Fig. 3: 256 DPUs times 11 tasklets per DPU is much smaller than the 16M tasklets you mention. What am I missing here? \boxend
We apologize for the possible confusion. Figure 3 is Figure~\ref{fig:mram-stream} in the new version of the paper. In the beginning of Section~\ref{sec:mram-streaming}, we added clarification that we change the number of tasklets per DPU from 1 to 16. These tasklets collectively stream (i.e., access in a streaming manner) arrays of 2M 8-byte elements:
\yboxbegin
\Paste{comment22a}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C11)} What computations are done in "Scale" and "Triad"? \boxend
Scale performs a multiplication per output array element, and triad performs a multiplication and an addition per output array element. In the new version of the paper, we explained this in the last sentence of Section~\ref{sec:wram-benchmark-desc}:
\yboxbegin
\Paste{comment23a}
\yboxend
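To give additional intuition, a minimal C sketch of the two kernels (the array names and the scalar \texttt{alpha} are illustrative):
\begin{verbatim}
/* scale: one multiplication per output element */
for (int i = 0; i < n; i++) b[i] = alpha * c[i];
/* triad: one multiplication and one addition per output element */
for (int i = 0; i < n; i++) a[i] = b[i] + alpha * c[i];
\end{verbatim}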
\vspace{3mm} \boxbegin {\bf (Comment C12)} Shouldn't "Add" do worse than "Copy-w"? I must be misunderstanding how the work done by the two compare with each other. \boxend
We agree that the results of Add and Copy-w obtained on the FPGA emulator for the previous version of the paper were confusing. We reran ADD and COPY (which is Copy-w in the new version of the paper) on the real UPMEM PIM setup.
Figure~\ref{fig:wram-stream} shows that the WRAM bandwidth of COPY is higher than the bandwidth of ADD. We reproduce the figure here for the reviewer's convenience:
\setcounter{figure}{2}
\yboxbegin
\Paste{comment24a}
\yboxend
The results are explained in the following text from Section~\ref{sec:wram-bandwidth}:
\yboxbegin
\Paste{comment24b}
\yboxend
Figure~\ref{fig:mram-stream} shows that the maximum MRAM bandwidths of COPY and ADD are the same, but COPY saturates {at} a lower number of tasklets:
\setcounter{figure}{4}
\yboxbegin
\Paste{comment24c}
\yboxend
The results are explained in the following text from Section~\ref{sec:mram-streaming}:
\yboxbegin
\Paste{comment1e}
\yboxend
\vspace{3mm} \boxbegin {\bf (Comment C13)} Fig. 9: this is nice but could you also offer a definition of the Roofline model to avoid any confusion. In particular, define OPS/byte and OPS/sec precisely. \boxend
We address the reviewer’s concern by adding the following definitions in the beginning of Section~\ref{sec:benchmarks}:
\yboxbegin
\Paste{comment25a}
\yboxend
\section{Introduction}
In \juancrr{modern} computing systems, a \juancrr{large} fraction of the \juancrr{execution time} and energy consumption of \juancrr{data-intensive} workloads is spent moving data between memory and processor cores.
This \emph{data movement bottleneck}~\cite{mutlu2019,mutlu2020modern,ghoseibm2019,ghose2019arxiv,ghose.bookchapter19.arxiv,mutlu.msttalk17,mutlu2019enabling} stems from the fact that, for decades, the performance of processor cores has been increasing at a faster rate than the memory \juancrr{performance}.
The gap between an arithmetic operation and a memory access in terms of latency and energy keeps widening \juancrr{and the memory access is becoming increasingly expensive}.
As a result, \juancrr{recent experimental studies report that} data movement accounts for 62\%~\cite{boroumand.asplos18} \juancr{(reported in 2018)}, 40\%~\cite{pandiyan.iiswc2014} \juancr{(reported in 2014)}, and 35\%~\cite{kestor.iiswc2013} \juancr{(reported in 2013)} of the total system energy in \juancrr{various} consumer, scientific, and mobile applications, respectively.
One promising way to alleviate the data movement bottleneck is \emph{processing-in-memory} (\emph{PIM}), which equips memory chips with processing capabilities\juancrr{~\cite{mutlu2020modern}}.
This paradigm has been explored for more than \juanc{50} years\juancrr{~\cite{Kautz1969,stone1970logic,shaw1981non, kogge1994, gokhale1995processing, patterson1997case, oskin1998active, kang1999flexram, Mai:2000:SMM:339647.339673, Draper:2002:ADP:514191.514197,aga.hpca17,eckert2018neural,fujiki2019duality,kang.icassp14,seshadri.micro17,seshadri.arxiv16,seshadri2013rowclone,seshadri2018rowclone,angizi2019graphide,kim.hpca18,kim.hpca19,gao2020computedram,chang.hpca16,xin2020elp2im,li.micro17,deng.dac2018,hajinazarsimdram,rezaei2020nom,wang2020figaro,ali2019memory,li.dac16,angizi2018pima,angizi2018cmp,angizi2019dna,levy.microelec14,kvatinsky.tcasii14,shafiee2016isaac,kvatinsky.iccd11,kvatinsky.tvlsi14,gaillardon2016plim,bhattacharjee2017revamp,hamdioui2015memristor,xie2015fast,hamdioui2017myth,yu2018memristive,syncron,fernandez2020natsa,alser2020accelerating,cali2020genasm,kim.arxiv17,kim.bmc18,ahn.pei.isca15,ahn.tesseract.isca15,boroumand.asplos18,boroumand2019conda,boroumand2016pim,boroumand.arxiv17,singh2019napel,asghari-moghaddam.micro16,DBLP:conf/sigmod/BabarinsaI15,chi2016prime,farmahini2015nda,gao.pact15,DBLP:conf/hpca/GaoK16,gu.isca16,guo2014wondp,hashemi.isca16,cont-runahead,hsieh.isca16,kim.isca16,kim.sc17,DBLP:conf/IEEEpact/LeeSK15,liu-spaa17,morad.taco15,nai2017graphpim,pattnaik.pact16,pugsley2014ndc,zhang.hpdc14,zhu2013accelerating,DBLP:conf/isca/AkinFH15,gao2017tetris,drumond2017mondrian,dai2018graphh,zhang2018graphp,huang2020heterogeneous,zhuo2019graphq,santos2017operand,mutlu2019,mutlu2020modern,ghoseibm2019,ghose2019arxiv,wen2017rebooting,besta2021sisa,ferreira2021pluto,olgun2021quactrng}}, but limitations in memory technology prevented \juancrr{commercial} hardware from \juancrr{successfully} materializing.
More recently, difficulties in DRAM scaling (i.e., \juancr{challenges in} increasing density \juancr{and performance} while \juancr{maintaining reliability,} latency and energy consumption)\juancrr{~\cite{kang.memoryforum14,liu.isca13,mutlu.imw13,kim-isca2014,mutlu2017rowhammer,ghose2018vampire,mutlu.superfri15,kim2020revisiting,mutlu2020retrospective,frigo2020trr,kim2018solar,raidr,mutlu2015main, mandelman.ibmjrd02, lee-isca2009,cojocar2020susceptible,yauglikcci2021blockhammer,patel2017reaper,khan.sigmetrics14,khan.dsn16,khan.micro17,lee.hpca15,lee.sigmetrics17,chang.sigmetrics17,chang.sigmetrics16,chang.hpca14,meza.dsn15,david2011memdvfs,deng2011memscale,hong2010memory,kanev.isca15,qureshi.dsn15}} have motivated innovations such as 3D-stacked memory\juancrr{~\cite{hmc.spec.2.0, jedec.hbm.spec,lee.taco16,ghose2019demystifying,ramulator,ahn.tesseract.isca15}} and nonvolatile memory\juancrr{~\cite{lee-isca2009, kultursay.ispass13, strukov.nature08, wong.procieee12,lee.cacm10,qureshi.isca09,zhou.isca09,lee.ieeemicro10,wong.procieee10,yoon-taco2014,yoon2012row}} which present \juancr{new} opportunities to redesign the memory subsystem while integrating processing capabilities.
3D-stacked memory integrates DRAM layers with a logic layer, which can embed processing elements.
Several works explore this approach, called \emph{processing-near-memory} \juancrrr{(\emph{PNM})}, to implement different types of processing components in the logic layer, such as general-purpose cores\juancrr{~\cite{deoliveira2021,boroumand.arxiv17,boroumand2016pim, boroumand.asplos18,boroumand2019conda,ahn.tesseract.isca15,syncron,singh2019napel}}, application-specific accelerators\juancrr{~\cite{zhu2013accelerating, DBLP:conf/isca/AkinFH15, DBLP:conf/sigmod/BabarinsaI15, kim.arxiv17, kim.bmc18, liu-spaa17, kim.isca16, DBLP:conf/IEEEpact/LeeSK15,cali2020genasm,alser2020accelerating,fernandez2020natsa,impica,singh2020nero}}, simple functional units\juancrr{~\cite{ahn.pei.isca15,nai2017graphpim,hadidi2017cairo}}, GPU cores\juancrr{~\cite{zhang.hpdc14, pattnaik.pact16, hsieh.isca16,kim.sc17}}, or reconfigurable logic~\cite{DBLP:conf/hpca/GaoK16, guo2014wondp, asghari-moghaddam.micro16}.
However, 3D-stacked memory suffers from high cost and limited capacity, and the logic layer has \juancrr{hardware} area and thermal dissipation constraints, which limit the capabilities of the embedded processing components.
On the other hand, \emph{processing-using-memory} \juancrrr{(\emph{PUM})} takes advantage of the analog \juancr{operational} properties of memory cells in SRAM\juancrr{~\cite{aga.hpca17,eckert2018neural,fujiki2019duality,kang.icassp14}}, DRAM\juancrr{~\cite{seshadri2020indram,seshadri.bookchapter17.arxiv,seshadri.bookchapter17,Seshadri:2015:ANDOR,seshadri.micro17,seshadri.arxiv16,seshadri2018rowclone,seshadri2013rowclone,angizi2019graphide,kim.hpca18,kim.hpca19,gao2020computedram,chang.hpca16,xin2020elp2im,li.micro17,deng.dac2018,hajinazarsimdram,rezaei2020nom,wang2020figaro,ali2019memory,ferreira2021pluto,olgun2021quactrng}}, or nonvolatile memory\juancrr{~\cite{li.dac16,angizi2018pima,angizi2018cmp,angizi2019dna,levy.microelec14,kvatinsky.tcasii14,shafiee2016isaac,kvatinsky.iccd11,kvatinsky.tvlsi14,gaillardon2016plim,bhattacharjee2017revamp,hamdioui2015memristor,xie2015fast,hamdioui2017myth,yu2018memristive,puma-asplos2019, ankit2020panther,chi2016prime,ambrosi2018hardware,bruel2017generalize,huang2021mixed}} to perform specific types of operations efficiently.
However, processing-using-memory is either limited to simple bitwise operations (e.g., majority, AND, OR)~\cite{aga.hpca17, seshadri.micro17,seshadri.arxiv16}, requires high area overheads to perform more complex operations~\cite{li.micro17, deng.dac2018,ferreira2021pluto}, \juancr{or requires significant changes to data organization, manipulation, and handling mechanisms to enable bit-serial computation, while still having limitations on certain operations~\cite{hajinazarsimdram,ali2019memory,angizi2019graphide}.}\footnote{\juancr{PUM approaches performing bit-serial computation~\cite{hajinazarsimdram,ali2019memory,angizi2019graphide} need to layout \juanc{data} elements vertically (i.e., all bits of an element in the same bitline), which (1) does \emph{not} allow certain data manipulation operations (e.g., shuffling \juanc{of data elements in an array}) and (2) requires \juanc{paying} the overhead of bit transposition, \juanc{when the format of data needs to change~\cite{hajinazarsimdram}, i.e., prior to performing bit-serial computation}}.}
Moreover, processing-using-memory \juancrr{approaches are usually} efficient \juancrr{mainly} for regular computations, since they naturally operate on a large number of memory cells (e.g., entire rows \juancrr{across many subarrays~\cite{salp,seshadri.micro17,seshadri.arxiv16,seshadri2013rowclone,seshadri2018rowclone,hajinazarsimdram,seshadri2020indram,seshadri.bookchapter17.arxiv,seshadri.bookchapter17,Seshadri:2015:ANDOR}}) simultaneously.
For these reasons, complete PIM \juancrr{systems} based on 3D-stacked memory or processing-using-memory have not \juancr{yet} been \juancrr{commercialized} in real hardware.
The UPMEM PIM architecture~\cite{upmem2018, devaux2019} is the first PIM \juancrr{system} to be \juancrr{commercialized} in real hardware.
To avoid the aforementioned limitations, it uses conventional 2D DRAM arrays and combines them with general-purpose processing cores, called \emph{DRAM Processing Units} (\emph{DPUs}), on the same chip.
Combining memory and processing components on the same chip imposes serious design challenges.
For example, DRAM designs use only \juancrr{three metal layers~\cite{weber2005current,peng2015design}}, while conventional processor designs typically use more than \juancrr{ten}~\cite{devaux2019,yuffe2011,christy2020,singh2017}.
While these challenges prevent the fabrication of fast \juancrr{logic} transistors, UPMEM overcomes these challenges via DPU cores that are \juancrr{relatively} deeply pipelined and fine-grained multithreaded\juancrr{~\cite{ddca.spring2020.fgmt,henessy.patterson.2012.fgmt,burtonsmith1978,smith1982architecture,thornton1970}} to run at several hundred megahertz.
The UPMEM PIM architecture provides several key advantages with respect to other PIM proposals.
First, it relies on mature 2D DRAM design and fabrication process, avoiding the drawbacks of \juancrr{emerging} 3D-stacked memory technology.
Second, the \juancrr{general-purpose} DPUs support a \juancr{wide} variety of computations and \juancrr{data} types, \juancrr{similar to simple modern general-purpose processors}.
Third, the architecture is suitable for irregular computations because the threads in a DPU can execute independently of each other (i.e., they are not bound by \juancr{lockstep execution as in} SIMD\footnote{\juancrr{\emph{Single Instruction Multiple Data} (\emph{SIMD})~\cite{ddca.spring2020.simd,henessy.patterson.2012.simd,flynn1966very} refers to an execution paradigm where multiple processing elements execute the \emph{same} operation on \emph{multiple} data elements simultaneously.}}).
\juancrr{Fourth, UPMEM provides} a complete software stack that enables DPU programs to be written in the \juancrr{commonly-used} C language~\cite{upmem-guide}.
\juancrr{Rigorously understanding the UPMEM PIM architecture, the first publicly-available PIM architecture, and its suitability to various workloads can provide valuable insights to programmers, users and architects of this architecture as well as of future PIM systems.
To this end, our work provides the first comprehensive \juancr{experimental characterization and} analysis of the first publicly-available real-world PIM architecture. \juancr{To enable our experimental studies and} analyses, we develop new microbenchmarks and a new benchmark suite, \juanc{which we openly and freely make available}~\cite{gomezluna2021repo}.}
We develop a set of microbenchmarks to \juancrr{evaluate, characterize, and understand} the limits of the UPMEM-based PIM system, \juancrr{yielding new insights}.
First, we obtain the compute throughput of a DPU for different arithmetic operations and data types.
Second, we measure the bandwidth of two \juancr{different} memory spaces that a DPU can \juancr{directly access using load/store instructions}: (1) a DRAM bank called \emph{Main RAM} (\emph{MRAM}), and (2) an SRAM-based scratchpad \juancr{memory} called \emph{Working RAM} (\emph{WRAM}).
We employ streaming (i.e., unit-stride), strided, and random memory access patterns to measure the \juancr{sustained} bandwidth of both \juancrr{types of} memories.
Third, we measure the \juancrrr{sustained} bandwidth between the standard main memory and the MRAM banks for different types and sizes of transfers, \juancr{which is important for the communication of the DPU with the host CPU and other DPUs}.
We present \emph{PrIM} (\emph{\underline{Pr}ocessing-\underline{I}n-\underline{M}emory benchmarks}), the first benchmark suite for a real PIM architecture.
PrIM includes 16 workloads from different application domains \juancrr{(e.g., dense/sparse linear algebra, databases, data analytics, graph processing, neural networks, bioinformatics, image processing)}, which we identify as memory-bound using the roofline model for a conventional CPU~\cite{roofline}.
We perform strong scaling\footnote{\juancrr{\emph{Strong scaling} refers to how the execution time of a program solving a particular problem varies with the number of processors for a fixed problem size~\cite{hager2010introduction.scaling,amdahl1976validity}.}} and weak scaling\footnote{\juancrr{\emph{Weak scaling} refers to how the execution time of a program solving a particular problem varies with the number of processors for a fixed problem size per processor~\cite{hager2010introduction.scaling,gustafson1988reevaluating}.}} experiments with the 16 benchmarks on a system with
2,556 DPUs, and compare \juancrr{their performance and energy consumption to their state-of-the-art CPU and GPU counterparts}.
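For reference, strong and weak scaling are commonly quantified, for $N$ processors and execution time $T(N)$, as~\cite{hager2010introduction.scaling}
\[
S_{strong}(N) = \frac{T(1)}{T(N)} \;\;\mbox{(fixed total problem size)}, \qquad
E_{weak}(N) = \frac{T(1)}{T(N)} \;\;\mbox{(fixed problem size per processor)}.
\]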
\juancrr{Our extensive evaluation
provides new insights about suitability of different workloads to the PIM system, programming recommendations for software designers, and suggestions and hints for hardware and architecture designers of future PIM systems.}
All our microbenchmarks and PrIM benchmarks
are publicly \juanc{and freely} available~\cite{gomezluna2021repo} to serve as programming samples for \juancr{real} PIM architectures, \juancrr{evaluate and compare \juancr{current and} future PIM systems,} and help further advance PIM architecture, \juancr{programming, and software} research.\footnote{\juanc{We refer the reader to a recent overview paper~\cite{mutlu2020modern} on the state-of-the-art challenges in PIM research.}}
The main contributions of this work are as follows:
\begin{itemize}[noitemsep,topsep=0pt,leftmargin=8pt]
\item
We perform the first comprehensive \juancr{characterization and} analysis of the first publicly-available real-world PIM architecture. \juancr{We} analyze the \juancr{new} architecture's \juancr{potential,} limitations and bottlenecks. We analyze (1) memory bandwidth at different levels of the \juancr{DPU} memory hierarchy for different memory access patterns, (2) \juancr{DPU} compute throughput of different arithmetic operations for different data types, and (3) strong and weak \juancr{scaling characteristics} for different \juancr{computation} patterns. We find that (1) the UPMEM PIM architecture is fundamentally compute bound, since workloads with more complex operations than integer addition fully utilize the instruction pipeline before they can potentially saturate the memory bandwidth, and (2) workloads that require inter-DPU communication do \emph{not} scale well, since there is no direct communication channel among DPUs, \juancr{and} therefore, all inter-DPU communication takes place via the host CPU, i.e., through the narrow memory bus.
\item
We present \juancrr{and open-source} PrIM, the first benchmark suite for a real PIM architecture, composed of 16 \juanc{real-world} workloads that are memory-bound on conventional processor-centric systems. The workloads have different characteristics, \juanc{exhibiting heterogeneity in their} memory access patterns, operations and data types, \juanc{and} communication patterns. The PrIM benchmark suite provides a common set of workloads to evaluate the UPMEM PIM architecture with and can be useful for programming, architecture and systems researchers all alike to improve multiple aspects of \juancrr{future} PIM hardware and software.\juanc{$^5$}
\item
We compare the performance \juancrr{and energy consumption} of \juancrr{PrIM} benchmarks on two UPMEM-based PIM systems \juancrr{with 2,556 DPUs and 640 DPUs} to \juancr{state-of-the-art} conventional processor-centric systems, \juancr{i.e.,} CPUs and GPUs. \juancrr{Our analysis \juanc{reveals} several \juancr{new and interesting} findings.
\juancr{We highlight three major findings.}
First, both UPMEM-based PIM systems outperform a state-of-the-art CPU \juanc{(by 105.4$\times$ and 30.4$\times$, on average, respectively)} for 13 of the PrIM benchmarks, which do \emph{not} require intensive inter-DPU \juanc{synchronization} \juancrrr{or floating point operations}.\footnote{\juanc{Two of the other three PrIM benchmarks, Breadth-first Search (BFS) and Needleman-Wunsch (NW), pay the huge overhead of inter-DPU synchronization via the host CPU. The third one, Sparse Matrix-Vector Multiply (SpMV), makes intensive use of floating point multiplication and addition.}}
\juanc{Section~\ref{sec:comparison} provides a detailed analysis of our comparison of PIM systems to state-of-the-art CPU and GPU.}
Second, the 2,556-DPU PIM system is faster than a state-of-the-art GPU \juanc{(by 2.68$\times$, on average)} for 10 PrIM benchmarks with (1) streaming memory accesses, (2) little \juancr{or no} inter-DPU synchronization,
\juancrrr{and (3)} little \juancr{or no} use of complex arithmetic operations (i.e., integer multiplication/division, floating point operations).\footnote{We also evaluate the 640-DPU PIM system and find that it is \emph{slower} than the GPU for most PrIM benchmarks, but the performance gap between \juanc{the two systems (640-DPU PIM and GPU)} is significantly smaller for the 10 PrIM benchmarks that do \emph{not} need (1) heavy inter-DPU communication or (2) intensive use of multiplication operations. The 640-DPU PIM system is faster than the GPU for two benchmarks, which are not well-suited for the GPU. \juanc{Section~\ref{sec:comparison} provides a detailed analysis of our comparison.}}
\juancr{Third}, energy consumption \juancr{comparison of the PIM, CPU, and GPU systems} follows the same trends as the performance comparison}: \juanc{the PIM system yields large energy savings over the CPU and the GPU, for workloads where it largely outperforms the CPU and the GPU.}
\end{itemize}
\section{UPMEM PIM Architecture}\label{sec:upmem_pim}
\juancrr{We describe} the organization of a UPMEM PIM-enabled system (Section~\ref{sec:sys-org}), the architecture of a DPU core (Section~\ref{sec:dpu-architecture}), and important aspects of programming DPUs (Section~\ref{sec:dpu-programming}).
\subsection{System Organization}
\label{sec:sys-org}
\sloppy
Figure~\ref{fig:scheme} (left) depicts a UPMEM-based PIM system with (1) a \emph{host} CPU \juancrr{(e.g., an x86~\cite{saini1993design}, ARM64~\cite{jaggar1997arm}, or 64-bit RISC-V~\cite{waterman2016design} \juancr{multi-core system})}, (2) standard main memory (DRAM memory modules~\cite{kim2014memory,ca.fall2020.refresh,ca.fall2020.challenges,ca.fall2020.solution}), and (3) PIM-enabled memory (UPMEM modules)~\cite{upmem2018, devaux2019}. \juancrr{PIM-enabled memory can reside on one or more memory channels.
A UPMEM module is a standard \juancr{DDR4-2400 DIMM (module)}~\cite{jedec2012ddr4} with several PIM chips.
All DPUs in the UPMEM \juancr{modules} operate together as a parallel coprocessor \juancr{to the host CPU}.}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/DPU-system-dots.pdf}
\vspace{-4mm}
\caption{UPMEM-based PIM system with a host CPU, standard main memory, and PIM-enabled memory (left), and internal components of a UPMEM PIM chip (right)~\cite{upmem2018, devaux2019}.}
\label{fig:scheme}
\end{figure}
Inside each UPMEM PIM chip (Figure~\ref{fig:scheme} (right)), there are 8 DPUs. Each DPU has exclusive access to (1) a 64-MB DRAM bank, called \emph{Main RAM} (\emph{MRAM}) \ding{202}, (2) a 24-KB instruction memory, called \emph{Instruction RAM} (\emph{IRAM}) \ding{203}, and (3) a 64-KB scratchpad memory, called \emph{Working RAM} (\emph{WRAM}) \ding{204}.
MRAM \juancr{is} accessible by the host CPU (Figure~\ref{fig:scheme} (left)) for \juancrrr{\emph{copying}} input data (from main memory to MRAM) \ding{205}~ and \emph{retrieving} results (from MRAM to main memory) \ding{206}.
These \juancrrr{CPU-DPU and DPU-CPU} data transfers can be performed in parallel (i.e., concurrently across multiple MRAM banks), if the buffers transferred from/to all MRAM banks \juanc{are of the same size}. Otherwise, the data transfers happen serially \juancr{(i.e., \juancrrr{a} transfer from/to another MRAM bank starts after the transfer from/to an MRAM bank completes)}.
There is no support for direct communication between DPUs. All inter-DPU communication takes place through the host CPU by \emph{retrieving} results \juancr{from the DPU to the CPU} and \juancrrr{\emph{copying}} data \juancr{from the CPU to the DPU}.
\juancrr{The programming interface for serial transfers~\cite{upmem-guide}} provides functions for copying a buffer to (\texttt{dpu\_copy\_to}) and from (\texttt{dpu\_copy\_from}) a specific MRAM bank.
\juancrr{The programming interface for parallel transfers~\cite{upmem-guide}} provides functions for assigning buffers to specific MRAM banks (\texttt{dpu\_prepare\_xfer}) and then initiating the actual \juancrrr{CPU-DPU or DPU-CPU} transfers to execute in parallel (\texttt{dpu\_push\_xfer}).\footnote{\juancrr{Our preliminary experiments show that the UPMEM SDK 2021.1.1~\cite{upmem-guide}, which we use in this work, parallelizes only transfers to MRAM banks within the same rank, \emph{not} across ranks, but we suspect this \juanc{design} may be improved in future releases.}}
\juancrr{Parallel transfers require} that the transfer sizes \juancr{to/from all MRAM banks be} the same.
If the buffer to copy to all MRAM banks is the same, we can execute a broadcast \juancrrr{CPU-DPU} memory transfer (\texttt{dpu\_broadcast\_to}).
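To illustrate these interfaces, the following is a minimal host-side sketch of parallel and broadcast transfers (assuming the \texttt{dpu.h} host interface of the UPMEM SDK~\cite{upmem-guide}; the DPU binary name, the \texttt{params} symbol, and the transfer sizes are hypothetical):
\begin{verbatim}
#include <dpu.h>
#include <stdint.h>
#define NR_DPUS 64
#define CHUNK   (1 << 20)  /* 1 MiB per MRAM bank (hypothetical) */

int main(void) {
    struct dpu_set_t set, dpu;
    uint32_t i;
    static uint8_t chunks[NR_DPUS][CHUNK]; /* one chunk per DPU */
    static uint8_t params[64];             /* same data for all DPUs */

    DPU_ASSERT(dpu_alloc(NR_DPUS, NULL, &set));
    DPU_ASSERT(dpu_load(set, "./dpu_kernel", NULL)); /* hypothetical binary */

    /* A serial transfer to one specific MRAM bank would use dpu_copy_to().
       Parallel CPU-DPU transfer: same size to every MRAM bank. */
    DPU_FOREACH(set, dpu, i) {
        DPU_ASSERT(dpu_prepare_xfer(dpu, chunks[i]));
    }
    DPU_ASSERT(dpu_push_xfer(set, DPU_XFER_TO_DPU,
               DPU_MRAM_HEAP_POINTER_NAME, 0, CHUNK, DPU_XFER_DEFAULT));

    /* Broadcast: identical buffer to all MRAM banks (assumes a __host
       variable named `params' in the DPU program). */
    DPU_ASSERT(dpu_broadcast_to(set, "params", 0, params,
               sizeof(params), DPU_XFER_DEFAULT));

    DPU_ASSERT(dpu_launch(set, DPU_SYNCHRONOUS));

    /* Parallel DPU-CPU transfer of the results. */
    DPU_FOREACH(set, dpu, i) {
        DPU_ASSERT(dpu_prepare_xfer(dpu, chunks[i]));
    }
    DPU_ASSERT(dpu_push_xfer(set, DPU_XFER_FROM_DPU,
               DPU_MRAM_HEAP_POINTER_NAME, 0, CHUNK, DPU_XFER_DEFAULT));

    DPU_ASSERT(dpu_free(set));
    return 0;
}
\end{verbatim}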
Main memory and PIM-enabled memory require different data layouts. While main memory uses the conventional horizontal DRAM mapping~\cite{GS-DRAM,devaux2019}, which maps consecutive 8-bit words onto consecutive DRAM chips, PIM-enabled memory needs entire 64-bit words mapped onto the same MRAM bank (in one PIM chip)~\cite{devaux2019}.
The reason for this special data layout in PIM-enabled memory is that \juancr{each DPU has access to only a single MRAM bank, but it} can operate on data types of up to 64 bits.
The UPMEM SDK includes a transposition library~\cite{devaux2019} to perform the necessary data shuffling when transferring data between main memory and MRAM banks.
These data layout transformations are transparent to programmers.
The \juancr{UPMEM SDK-provided} functions for serial/parallel/broadcast \juancrrr{CPU-DPU} and serial/parallel \juancrrr{DPU-CPU} transfers call the transposition library internally, and \juanc{the library} ultimately performs data layout conversion, as needed.
In current \juancrr{UPMEM-based PIM system} configurations~\cite{upmem}, the maximum number of UPMEM DIMMs is 20.
\juancrr{A UPMEM-based PIM system with 20 UPMEM modules} can contain up to 2,560 DPUs, which amounts to 160 GB of PIM-capable memory.
\juancrr{Table~\ref{tab:pim-setups} presents the two \juancr{real} UPMEM-based PIM systems that we use in this work.}
\begin{table}[h]
\scriptsize
\captionof{table}{UPMEM-based PIM Systems.}
\label{tab:pim-setups}
\begin{minipage}{\textwidth}
\begin{center}
\vspace{-4mm}
\subcaption{Memory Parameters.}
\vspace{-2mm}
\resizebox{1.0\linewidth}{!}{
\input{figures/tab-setups}
}
\end{center}
\vspace{1mm}
\end{minipage}
\begin{minipage}{\textwidth}
\begin{center}
\subcaption{CPU Parameters.}
\vspace{-2mm}
\resizebox{0.92\linewidth}{!}{
\input{figures/tab-setups2}
}
\end{center}
\end{minipage}
\end{table}
We use a real UPMEM-based PIM system that contains 2,556 DPUs, \juancr{and a total of \juancrrr{159.75 GB} MRAM}. The DPUs are organized into 20 double-rank DIMMs, with 128 DPUs \juancr{per DIMM}.\footnote{There are four faulty DPUs in the system where we run our experiments. They cannot be used and do not affect system functionality or the correctness of our results, but take away from the system's full computational power of 2,560 DPUs.}
\juancr{Each DPU runs} at 350 MHz.
The 20 UPMEM DIMMs are in a dual x86 socket with 2 memory controllers per socket. Each memory controller has 3 memory channels~\cite{xeon-4215}.
In each socket, two DIMMs of conventional DRAM (employed as main memory of the host CPU) are on \juanc{one} channel of one of the memory controllers.
We also use \juancrr{an older real} system with 640 DPUs. \juancrr{The DPUs are organized into 10 single-rank DIMMs, with 64 DPUs \juancr{per DIMM}}.
The total amount of MRAM is thus 40 GB.
\juancr{Each DPU in this system runs} at 267 MHz.
The 10 UPMEM DIMMs are in an x86 socket with 2 memory controllers. Each memory controller has 3 memory channels~\cite{xeon-4110}.
Two DIMMs of conventional DRAM are on one channel \juancrr{of one of the memory controllers}.
\subsection{DRAM Processing Unit (DPU) Architecture}\label{sec:dpu-architecture}
A DPU \juancrr{(Figure~\ref{fig:scheme} (right))} is a multithreaded in-order 32-bit RISC core with a \juancr{specific} Instruction Set Architecture (ISA)~\cite{upmem-guide}.
The DPU \juancrr{has} 24 hardware threads, each with 24 32-bit general-purpose registers \juancr{(\ding{207}~ in Figure~\ref{fig:scheme} (right))}.
These hardware threads share an instruction memory \juancrr{(IRAM)} \ding{203}~ and a scratchpad memory \juancrr{(WRAM)} \ding{204}~ to store operands.
The DPU has a pipeline depth of 14 stages \ding{208}; however, only the last three stages of the pipeline (i.e., ALU4, MERGE1, and MERGE2 in Figure~\ref{fig:scheme} (right)) can execute in parallel with the DISPATCH and FETCH stages of the next instruction in the same thread.
Therefore, instructions from the same thread must be dispatched $14-3=11$ cycles apart, requiring at least 11 threads to fully utilize the pipeline~\cite{comm-upmem}.
The 24 KB IRAM can hold up to 4,096 48-bit encoded instructions.
The WRAM has a capacity of 64 KB.
The DPU can access the WRAM through 8-, 16-, 32-, and 64-bit load/store instructions.
\juancrr{The ISA provides DMA instructions~\cite{upmem-guide} to move instructions from the MRAM bank to the IRAM, and data between the MRAM bank and the WRAM.}
The frequency of a DPU \juancrr{can potentially} reach 400 MHz~\cite{upmem}.
At 400 MHz, the \juancr{maximum possible} MRAM-WRAM bandwidth per DPU can reach around 800 MB/s.
Thus, the maximum aggregated \juancr{MRAM} bandwidth for a configuration with 2,560 DPUs can potentially be 2 TB/s.
\juancrr{However, the DPUs run at 350 MHz in our 2,556-DPU setup and at 267 MHz in the 640-DPU \juancr{system}.}
For this reason, the \juancr{maximum possible} MRAM-WRAM bandwidth per DPU in our setup is 700 MB/s (534 MB/s in the 640-DPU setup), and the maximum aggregated bandwidth for the 2,556 DPUs is 1.7 TB/s (333.75 GB/s \juancr{in the 640-DPU system}).
\subsection{DPU Programming}\label{sec:dpu-programming}
\juancrr{UPMEM-based PIM systems use \juancr{the} \emph{Single Program Multiple Data} (\emph{SPMD})~\cite{ddca.spring2020.gpu} programming model, where software threads, called \emph{tasklets}, (1) execute the same code but operate on different pieces of data, and (2) can execute different control-flow \juancr{paths} at runtime.}
\juancrr{Up to 24 tasklets can run on a DPU, since the number of hardware threads is 24.
Programmers determine the number of tasklets per DPU at compile time, \juancr{and tasklets are statically assigned to each DPU}.}
Tasklets inside the same DPU can share data \juancr{among each other} in MRAM and in WRAM, and can synchronize via \emph{mutexes}, \emph{barriers}, \emph{handshakes}, and \emph{semaphores}~\cite{rauber2013parallel}.
\juancrr{Tasklets in different DPUs do \emph{not} share memory or any direct communication channel. As a result, they cannot directly communicate or synchronize}.
\juancrr{As mentioned in Section~\ref{sec:sys-org}}, the host CPU handles communication of intermediate data between DPUs, and merges partial results into final ones.
\subsubsection{\juancrr{Programming Language and Runtime Library}}
\label{sec:language-library}
DPU programs are written in the C language with some library calls~\cite{upmem2018, upmem-guide}.\footnote{\juancrr{In this work, we use UPMEM SDK 2021.1.1~\cite{upmem-sdk}.}}
\juancr{The UPMEM SDK~\cite{upmem-sdk} supports common data types supported in the C language and the LLVM compilation framework~\cite{lattner2004llvm}.
For the complete list of supported instructions, we refer the reader to the UPMEM user manual~\cite{upmem-guide}.}
The UPMEM runtime library~\cite{upmem-guide} provides library calls to move (1) instructions from the MRAM bank to the IRAM, and (2) data between the MRAM bank and the WRAM (namely, \texttt{mram\_read()} for MRAM-WRAM transfers, and \texttt{mram\_write()} for WRAM-MRAM transfers).
\sloppy
The UPMEM runtime library also provides functions to (1) lock and unlock mutexes (\texttt{mutex\_lock()}, \texttt{mutex\_unlock()}), which create critical sections, (2) access barriers (\texttt{barrier\_wait()}), which suspend tasklet execution until all tasklets in the DPU reach the same point in the program, (3) wait for and notify a handshake (\texttt{handshake\_wait\_for()}, \texttt{handshake\_notify()}), which enables one-to-one tasklet synchronization, and (4) increment and decrement semaphore counters (\texttt{sem\_give()}, \texttt{sem\_take()}).
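To make these library calls concrete, the following is a minimal DPU-side sketch (a per-DPU sum reduction used purely for illustration; the variable names \texttt{nr\_bytes} and \texttt{total\_sum} are hypothetical, and details follow the UPMEM SDK documentation~\cite{upmem-guide}):
\begin{verbatim}
#include <stdint.h>
#include <defs.h>     /* me() */
#include <mram.h>     /* mram_read(), DPU_MRAM_HEAP_POINTER */
#include <alloc.h>    /* mem_alloc() */
#include <mutex.h>    /* MUTEX_INIT(), mutex_lock(), mutex_unlock() */
#include <barrier.h>  /* BARRIER_INIT(), barrier_wait() */

#define BLOCK 256  /* bytes per DMA transfer (8-byte aligned, <= 2,048 B) */

BARRIER_INIT(sync_barrier, NR_TASKLETS);
MUTEX_INIT(sum_mutex);

__host uint32_t nr_bytes;   /* input size, set by the host */
__host uint64_t total_sum;  /* per-DPU result, retrieved by the host */

int main(void) {
    uint64_t *buf = (uint64_t *) mem_alloc(BLOCK);
    uint64_t local = 0;
    uint32_t base = (uint32_t) DPU_MRAM_HEAP_POINTER;
    /* Cyclic assignment of blocks to tasklets. */
    for (uint32_t off = me() * BLOCK; off < nr_bytes;
         off += NR_TASKLETS * BLOCK) {
        mram_read((__mram_ptr void const *)(base + off), buf, BLOCK);
        for (uint32_t i = 0; i < BLOCK / sizeof(uint64_t); i++)
            local += buf[i];
    }
    mutex_lock(sum_mutex);  /* critical section around the shared sum */
    total_sum += local;
    mutex_unlock(sum_mutex);
    barrier_wait(&sync_barrier);  /* all tasklets finish before returning */
    return 0;
}
\end{verbatim}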
\juanc{Even though} using \juancr{the C language to program the DPUs} ensures a low learning curve, programmers need to deal with several challenges.
First, programming thousands of DPUs running up to 24 tasklets requires careful workload partitioning and orchestration.
Each tasklet has a tasklet ID that programmers can use for that purpose.
Second, programmers have to explicitly move data between the standard main memory and the MRAM banks, and \juancr{ensuring data coherence between the CPU and DPUs} (i.e., ensuring that CPU and DPUs use up-to-date and correct copies of data) is their responsibility.
Third, DPUs do \emph{not} employ cache memories.
The data movement between the MRAM banks and the WRAM is explicitly managed by the programmer.
\subsubsection{General Programming Recommendations}
\label{sec:general-recommendations}
General programming recommendations of the UPMEM-based PIM system that we find in the UPMEM programming guide~\cite{upmem-guide}, presentations~\cite{devaux2019}, and white papers~\cite{upmem2018} are as follows.
The first recommendation is to \textbf{execute on the DPUs portions of parallel code that are as long as possible}, avoiding frequent interactions with the host CPU.
This recommendation minimizes CPU-DPU and DPU-CPU transfers, which happen through the narrow memory bus (Section~\ref{sec:sys-org}), and \juanc{thus cause a} data movement bottleneck~\cite{mutlu2019enabling,mutlu2020modern,ghoseibm2019,ghose2019arxiv}, which the PIM paradigm promises to alleviate.
The second recommendation is to \textbf{split the workload into independent data blocks}, which the DPUs operate on independently \juanc{(and concurrently)}. This recommendation maximizes parallelism and minimizes the need for inter-DPU communication and synchronization, which incurs high overhead, as it happens via the host CPU using CPU-DPU and DPU-CPU transfers.
The third recommendation is to \textbf{use as many working DPUs in the system as possible}, as long as the workload is sufficiently large to keep \juanc{the DPUs} busy performing actual work. This recommendation maximizes parallelism and increases utilization of the compute resources.
The fourth recommendation is to \textbf{launch at least 11 tasklets in each DPU}, in order to fully utilize the fine-grained multithreaded pipeline, as mentioned in Section~\ref{sec:dpu-architecture}.
\gpboxbegin{\gptask{gpr}}
\begin{enumerate}[1., wide, labelsep=0.5em]
\item \textbf{Execute on the \juancrr{\emph{DRAM Processing Units} (\emph{DPUs})} portions of parallel code that are as long as possible.}
\item \textbf{Split the workload into independent data blocks, which the DPUs operate on independently.}
\item \textbf{Use as many working DPUs in the system as possible.}
\item \textbf{Launch at least 11 \emph{tasklets} \juancrr{(i.e., software threads)} per DPU.}
\end{enumerate}
\gpboxend
In this work, we perform the first comprehensive characterization and analysis of the UPMEM PIM architecture, which allows us to (1) validate these programming recommendations and identify for which workload characteristics they hold, as well as (2) propose additional programming recommendations and suggestions for future PIM software designs, \juanc{and (3) propose suggestions and hints for future PIM hardware designs, which can enable easier programming as well as broad applicability of the hardware to more workloads.}
\section{Summary \& Conclusion}
We present the first comprehensive \juancr{characterization and} analysis of a real commercial PIM architecture.
\juancr{Through this analysis, we develop a} rigorous, thorough understanding of the UPMEM PIM architecture, the first publicly-available PIM architecture, and its suitability to various \juancr{types of} workloads.
\juancr{First,} we conduct a characterization of the UPMEM-based PIM system using microbenchmarks to assess various architecture limits such as compute throughput and memory bandwidth, yielding new insights.
Second, we present PrIM, a benchmark suite of 16 memory-bound workloads from different application domains (e.g., \juancr{dense/sparse linear algebra, databases, data analytics, graph processing, neural networks, bioinformatics, image processing}).
Our extensive evaluation of PrIM benchmarks conducted on two real systems with UPMEM memory modules provides new insights about suitability of different workloads to the PIM system, programming recommendations for software designers, and suggestions and hints for hardware and architecture designers of future PIM systems.
We compare the performance and energy consumption of the UPMEM-based PIM systems for PrIM benchmarks to those of a state-of-the-art CPU and a state-of-the-art GPU, and identify key workload characteristics that can successfully leverage the key strengths of a real PIM system over conventional processor-centric architectures.
We believe and hope that our work will provide valuable insights to programmers, users and architects of this PIM architecture as well as of future PIM systems, and will represent \juancr{an enabling} \juanc{milestone} in the development of memory-centric \juancr{computing} systems.
\section{PrIM Benchmarks}
\label{sec:benchmarks}
We present the benchmarks included in
our open-source \emph{PrIM} (\emph{\underline{Pr}ocessing-\underline{I}n-\underline{M}emory}) \juanc{\emph{benchmark suite}}, the first benchmark suite for a real PIM architecture.
PrIM benchmarks are publicly \juanc{and freely} available~\cite{gomezluna2021repo}.
For each benchmark, we include \juancr{in this section} a description of its implementation on a UPMEM-based PIM system with multiple DPUs.
Table~\ref{tab:benchmarks} shows a summary of the benchmarks.
\juancr{We group benchmarks by the application domain they belong to.
Within each application domain, we sort benchmarks by (1) incremental complexity of the PIM implementation (e.g., we explain VA before GEMV) and (2) alphabetical order.
We use the order of the benchmarks in Table~\ref{tab:benchmarks} consistently throughout the rest of the paper.}
For each benchmark, the table includes (1) the benchmark's short name, which we use in the remainder of the paper, (2) memory access patterns of the benchmark (sequential, strided, random), (3) computation pattern (operations and data types), and (4) communication/synchronization type of the PIM implementation (intra-DPU, inter-DPU). For intra-DPU communication, the table specifies the synchronization primitives, such as barriers, handshakes, and mutexes, that the benchmark uses (Section~\ref{sec:language-library}).
\vspace{-2mm}
\begin{table}[h]
\begin{center}
\captionof{table}{\juancrr{PrIM} benchmarks.}
\vspace{-4mm}
\label{tab:benchmarks}
\resizebox{1.0\linewidth}{!}{
\input{figures/tab-benchmarks}
}
\end{center}
\end{table}
\vspace{-2mm}
All implementations of PrIM benchmarks follow the general programming recommendations presented in Section~\ref{sec:general-recommendations}.
Note that our goal is not \juancr{to provide} extremely optimized implementations, but implementations that follow the general programming recommendations and make good use of the resources in PIM-enabled memory with \juancr{reasonable programmer} effort.
For several benchmarks, where we can design more than one implementation that is suitable to the UPMEM-based PIM system, we develop all alternative implementations and compare them.
As a result, we provide two versions of two of the benchmarks, Image histogram (HST) and Prefix sum (SCAN). In the Appendix (Section~\ref{sec:appendix-results}), we compare these versions and find the cases (i.e., dataset characteristics) where each version of each of these benchmarks results in higher performance.
We also design and develop three versions of Reduction (RED). However, we do not provide them as separate benchmarks, since one of the three versions \juancr{always} provides higher performance than \juancr{(or at least equal to)} the other two (\juancr{see Appendix,} Section~\ref{sec:appendix-results}).\footnote{We provide the three versions of RED as part of the same benchmark. Users can select the version they want to test via compiler flags.}
Our benchmark selection is based on several criteria: (1) suitability for PIM, (2) domain diversity, and (3) diversity of memory access, computation, and communication/synchronization patterns, as shown in Table~\ref{tab:benchmarks}.
We identify the suitability of these workloads for PIM by studying their memory boundedness.
We employ the roofline model~\cite{roofline}, \juancrr{as described} in Section~\ref{sec:throughput-oi}, to quantify the memory boundedness of the \juancrr{CPU versions of the} workloads. Figure~\ref{fig:roofline} shows the roofline model, measured with Intel Advisor~\cite{advisor} on an Intel Xeon E3-1225 v6 CPU~\cite{xeon-e3-1225}.
\juanc{In} these experiments, we use the first dataset for each workload in \juancr{Table~\ref{tab:datasets} (see Section~\ref{sec:evaluation})}.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figures/roofline-model-350-1.pdf}
\vspace{-4mm}
\caption{Roofline model for the CPU versions of the \juancr{14} PrIM workloads on an Intel Xeon E3-1225 v6 CPU.}
\label{fig:roofline}
\vspace{-2mm}
\end{figure}
\juancrr{We observe} from Figure~\ref{fig:roofline} that all \juancrr{of the CPU versions of the PrIM} workloads are in the \juancr{memory-bounded} area of the roofline model (i.e., \juancrr{the shaded region on the left side of the intersection between the DRAM bandwidth line and the peak compute performance line}).
Hence, these workloads are all limited by memory.
We conclude that \juancrr{all 14 CPU versions of PrIM} workloads are potentially suitable for PIM~\cite{deoliveira2021}.
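For reference, in the roofline model~\cite{roofline} the attainable performance of a workload is bounded by
\[
P_{\mathrm{attainable}} = \min\left( P_{\mathrm{peak}},\; I \times BW_{\mathrm{DRAM}} \right),
\]
where $I$ is the workload's operational intensity (operations per byte accessed in memory), $P_{\mathrm{peak}}$ is the peak compute throughput, and $BW_{\mathrm{DRAM}}$ is the DRAM bandwidth. A workload whose operational intensity places it on the bandwidth-limited ($I \times BW_{\mathrm{DRAM}}$) side of this bound, as is the case for all 14 workloads in Figure~\ref{fig:roofline}, is memory-bound.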
\juancrr{We briefly describe each PrIM benchmark and its PIM implementation next.}
\vspace{-1mm}
\subsection{Vector Addition}
Vector Addition (VA)~\cite{blackford2002updated} takes two vectors $a$ and $b$ as inputs and performs their element-wise addition.
Our PIM implementation divides the input vectors \juancrr{$a$ and $b$} into as many equally-sized chunks as \juancrr{the number of} DPUs in the system, \juancrr{and makes a linear assignment (i.e., chunk $i$ assigned to DPU $i$).}
\juancrr{The host CPU loads one chunk of both vectors $a$ and $b$ to the MRAM bank of each DPU.}
Inside each DPU, \juancrr{we assign blocks of elements from $a$ and $b$ to tasklets in a cyclic manner (i.e., block $j$ assigned to tasklet $j \% T$ for a total number $T$ of tasklets per DPU).}
\juancrr{Each tasklet (1) moves the blocks of elements from $a$ and $b$ to the WRAM, (2) performs the element-wise addition, and (3) moves the results to the MRAM bank}.
Tasklets iterate as many times as needed until the whole chunk assigned to a DPU is processed.
\juanc{At the end of the execution on the DPUs, the CPU retrieves the output vector chunks from the MRAM banks to the host main memory and constructs the complete output vector.}
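To make the tasklet-level structure concrete, the following is a minimal sketch of a VA DPU program, assuming UPMEM SDK primitives (\texttt{mram\_read()}, \texttt{mram\_write()}, \texttt{me()}, \texttt{mem\_alloc()}); the chunk/block sizes, array names, and \texttt{int32\_t} element type are illustrative assumptions, not the exact PrIM code.
\begin{verbatim}
#include <stdint.h>
#include <defs.h>   // me(), NR_TASKLETS
#include <mram.h>   // __mram_noinit, mram_read(), mram_write()
#include <alloc.h>  // mem_alloc()

#define BLOCK_ELEMS 64     // elements per WRAM block (hypothetical)
#define CHUNK_ELEMS 65536  // elements in this DPU's chunk (hypothetical)

__mram_noinit int32_t a[CHUNK_ELEMS];  // chunk of input vector a (loaded by host)
__mram_noinit int32_t b[CHUNK_ELEMS];  // chunk of input vector b (loaded by host)
__mram_noinit int32_t c[CHUNK_ELEMS];  // chunk of the output vector

int main(void) {
    // Per-tasklet WRAM buffers (each tasklet executes main()).
    int32_t *wa = mem_alloc(BLOCK_ELEMS * sizeof(int32_t));
    int32_t *wb = mem_alloc(BLOCK_ELEMS * sizeof(int32_t));

    // Cyclic assignment: tasklet t processes blocks t, t + NR_TASKLETS, ...
    for (unsigned blk = me(); blk < CHUNK_ELEMS / BLOCK_ELEMS; blk += NR_TASKLETS) {
        unsigned off = blk * BLOCK_ELEMS;
        mram_read(&a[off], wa, BLOCK_ELEMS * sizeof(int32_t));   // (1) MRAM -> WRAM
        mram_read(&b[off], wb, BLOCK_ELEMS * sizeof(int32_t));
        for (unsigned i = 0; i < BLOCK_ELEMS; i++)               // (2) element-wise add
            wa[i] += wb[i];
        mram_write(wa, &c[off], BLOCK_ELEMS * sizeof(int32_t));  // (3) WRAM -> MRAM
    }
    return 0;
}
\end{verbatim}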
\vspace{-1mm}
\subsection{Matrix-Vector Multiply}
\label{sec:gemv}
Matrix-Vector multiply (GEMV)~\cite{blackford2002updated} \juancrr{is a dense linear algebra routine that} takes a matrix of size $m \times n$ and a vector of size $n \times 1$ as inputs and performs the multiplication between them, producing a new $m \times 1$ vector as a result.
Our PIM implementation of GEMV partitions the matrix across the DPUs available in the system, assigning a fixed \juancrr{number of consecutive} rows to each \juancrr{DPU}, while the input vector is replicated \juancrr{across} all DPUs.
\juancrr{The host CPU assigns each set of consecutive rows to a DPU using linear assignment (i.e., set of rows $i$ assigned to DPU $i$).}
Inside each DPU, tasklets are in charge of computing \juancrr{on the set} of rows assigned to that DPU.
\juancrr{We assign a subset of consecutive rows from the set \juancr{of rows assigned to a DPU} to each tasklet (i.e., subset of rows $j$ assigned to tasklet $j$).}
First, each tasklet reads a block of elements, both from \juancrr{one row of the input matrix and from the vector, and places these elements} in the WRAM.
Second, \juancr{each tasklet performs} multiply-and-accumulate operations on those elements, and \juanc{it returns} to the first step until \juanc{it reaches} the end of the row.
Third, \juancr{each tasklet stores} the resulting sum in MRAM.
\juancrr{Fourth, each tasklet repeats these three steps as many times as there are rows in its subset.}
\juancrr{Fifth, each DPU produces a contiguous chunk of elements of the output vector. The CPU retrieves the output vector chunks and builds the complete output vector.}
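As an illustration of the per-row computation, the sketch below stages blocks of one matrix row and of the input vector through WRAM and accumulates their products; it assumes the UPMEM SDK's \texttt{mram\_read()}, and the buffer names, sizes, and data types are hypothetical.
\begin{verbatim}
#include <stdint.h>
#include <mram.h>  // __mram_ptr, mram_read()

#define BLOCK_ELEMS 64  // elements staged per WRAM block (hypothetical)

// Multiply-accumulate one matrix row (in MRAM) with the input vector (in MRAM),
// staging blocks through the caller-provided WRAM buffers row_buf and vec_buf.
// Assumes n is a multiple of BLOCK_ELEMS for brevity.
int64_t row_times_vector(const __mram_ptr int32_t *row,
                         const __mram_ptr int32_t *vec,
                         int32_t *row_buf, int32_t *vec_buf, uint32_t n) {
    int64_t acc = 0;
    for (uint32_t i = 0; i < n; i += BLOCK_ELEMS) {
        mram_read(&row[i], row_buf, BLOCK_ELEMS * sizeof(int32_t)); // first step
        mram_read(&vec[i], vec_buf, BLOCK_ELEMS * sizeof(int32_t));
        for (uint32_t j = 0; j < BLOCK_ELEMS; j++)
            acc += (int64_t)row_buf[j] * vec_buf[j];                // second step
    }
    return acc;  // third step: caller writes the sum to MRAM with mram_write()
}
\end{verbatim}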
\vspace{-1mm}
\subsection{Sparse Matrix-Vector Multiply}
Sparse Matrix-Vector multiply (SpMV)~\cite{saad2003iterative.parallel} is a \juancrr{sparse} linear algebra routine where a sparse matrix is multiplied by a dense vector.
Our \juancrr{PIM} implementation of SpMV uses the Compressed Sparse Row (CSR) storage format~\cite{saad2003iterative.sparsematrices,liu2015csr5,kanellopoulos2019smash} to represent the matrix.
\juancrr{First, the host CPU distributes the rows of the matrix evenly across DPUs, using linear assignment (i.e., set of rows $i$ assigned to DPU $i$) as in GEMV (Section~\ref{sec:gemv})}.
\juancrr{Within} each DPU, \juancrr{the rows of the matrix are distributed evenly across tasklets (i.e., subset of rows $j$ assigned to tasklet $j$, same as in GEMV).}
The input vector is replicated across DPUs.
Each tasklet multiplies its \juancrr{subset of} rows with the input vector and produces a contiguous chunk of the output vector.
\juancrr{At the end of the execution on the DPUs,} the CPU copies back the output vector chunks \juanc{from the MRAM banks to the host main memory}, in order to construct the entire output vector.
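For reference, the per-tasklet computation is the standard CSR kernel sketched below (in plain C, with MRAM/WRAM staging omitted and hypothetical names); \texttt{row\_ptr} delimits each row's nonzeros, \texttt{col\_idx} holds their column indices, and \texttt{values} holds their values.
\begin{verbatim}
#include <stdint.h>

// Standard CSR kernel for the subset of rows [row_begin, row_end) assigned to
// one tasklet. row_ptr has one more entry than the number of matrix rows.
void spmv_rows(const uint32_t *row_ptr, const uint32_t *col_idx,
               const float *values, const float *x, float *y,
               uint32_t row_begin, uint32_t row_end) {
    for (uint32_t r = row_begin; r < row_end; r++) {
        float acc = 0.0f;
        // row_ptr[r] .. row_ptr[r+1] delimit the nonzeros of row r.
        for (uint32_t k = row_ptr[r]; k < row_ptr[r + 1]; k++)
            acc += values[k] * x[col_idx[k]];
        y[r] = acc;  // contiguous chunk of the output vector
    }
}
\end{verbatim}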
\vspace{-1mm}
\subsection{Select}
Select (SEL)~\cite{ceri1985translating} is a database operator that, given an input array, filters the array elements according to a \juancrr{given input} predicate.
Our version of SEL removes the elements that satisfy the predicate, and keeps the elements that do not.
Our PIM implementation of SEL partitions the array across the DPUs available in the system.
The tasklets inside a DPU coordinate using handshakes \juancrr{(Section~\ref{sec:language-library})}.
First, each tasklet moves a block of elements to WRAM.
Second, \juancr{each tasklet filters the elements and counts} the number of filtered elements.
Third, each tasklet passes its number of filtered elements to the next tasklet using handshake-based communication, which inherently performs a prefix-sum operation~\cite{blelloch1989, gomezluna2015ds, yan2013streamscan} to determine where in MRAM to store the filtered elements.
The tasklet \juancrr{then} moves its filtered elements to MRAM.
Fourth, the host CPU performs the final merge of the filtered arrays returned by each DPU \juancrr{via serial \juancrrr{DPU-CPU} transfers, since parallel \juancrrr{DPU-CPU} transfers are not feasible because each DPU may return a different number of filtered elements}.
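A minimal sketch of the handshake-based prefix sum in the third step is shown below, assuming the UPMEM SDK handshake primitives (\texttt{handshake\_wait\_for()}, \texttt{handshake\_notify()}); the shared \texttt{message} array and the function name are hypothetical.
\begin{verbatim}
#include <stdint.h>
#include <defs.h>       // me(), NR_TASKLETS
#include <handshake.h>  // handshake_wait_for(), handshake_notify()

// Shared WRAM array: message[t] is the running count published by tasklet t.
uint32_t message[NR_TASKLETS];

// Returns the total number of elements filtered by all lower-numbered tasklets,
// i.e., this tasklet's base offset for writing its filtered elements to MRAM.
uint32_t prefix_count(uint32_t my_count) {
    unsigned t = me();
    uint32_t base = 0;
    if (t != 0) {
        handshake_wait_for(t - 1);     // wait until tasklet t-1 publishes its sum
        base = message[t - 1];
    }
    if (t != NR_TASKLETS - 1) {
        message[t] = base + my_count;  // publish the running sum
        handshake_notify();            // wake up tasklet t+1
    }
    return base;
}
\end{verbatim}
Note that only the exchange of counts is serialized across tasklets; once a tasklet knows its base offset, it writes its filtered elements to MRAM independently of the other tasklets.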
\vspace{-1mm}
\subsection{Unique}
Unique (UNI)~\cite{ceri1985translating} is a database operator that, for each group of consecutive array elements with the same value, removes all but the first of these elements.
Our PIM implementation \juancrr{of} UNI follows a similar approach to SEL.
The main difference lies \juancrr{in} the more complex handshake-based communication that UNI needs.
Besides the number of unique elements, each tasklet has to pass the value of its last unique element to the next tasklet.
This way, the next tasklet can check whether its first element is unique or not in the context of the entire array.
\vspace{-1mm}
\subsection{Binary Search}
Binary Search (BS)~\cite{knuth1971optimum} takes \juancrr{a sorted array} as input and finds the position of some \juancrr{query} values within \juancr{the sorted array}.
Our PIM implementation of binary search distributes the sorted array across the DPUs. Inside each DPU, tasklets are in charge of a subpartition of the assigned \juancrr{query} values. First, each tasklet moves its assigned set of \juancrr{query} values from \juancrr{the MRAM bank} to WRAM and iterates over them using a for loop. Second, each tasklet \juancrr{performs} the binary search algorithm, moving from left to right or vice versa, depending on the comparison with the current query value.
\juancrr{Third, the tasklet stops the algorithm when it finds one query value.
Fourth, at the end of the execution on the DPUs, the host CPU retrieves the positions of the found query values.}
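The core per-query search is the classic iterative binary search, sketched below in plain C (in the actual implementation, reads of the sorted array are served from the MRAM bank):
\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

// Classic iterative binary search: returns the position of key in the sorted
// array of n elements, or (size_t)-1 if key is absent.
size_t binary_search(const int64_t *sorted, size_t n, int64_t key) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] == key) return mid;    // query value found: stop
        if (sorted[mid] < key)  lo = mid + 1;  // move right
        else                    hi = mid;      // move left
    }
    return (size_t)-1;
}
\end{verbatim}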
\vspace{-1mm}
\subsection{Time Series Analysis}
Time Series analysis (TS)~\cite{yeh2016matrix} \juancrr{aims} to find anomalies and similarities between subsequences of a given time series.
Our version of time series analysis is based on Matrix Profile~\cite{zhu2018matrix}, an algorithm that works in a streaming-like manner, where subsequences (or query sequences) coming from a source of data are compared to a well-known time series that \juancrr{has} the expected behavior.
Our PIM implementation of time series analysis divides the time series across the DPUs, adding the necessary overlap between the chunks, and replicating the query sequence \juancrr{across the tasklets to compare to the time series}. Different slices of the time series are assigned to different tasklets.
First, \juancr{each tasklet performs the dot product of its} slice of the time series and the query sequence.
Second, \juancr{each tasklet calculates} the similarity between the slice of the time series and the query sequence by computing the z-normalized Euclidean distance~\cite{zhu2018matrix}.
Third, \juancr{each tasklet compares} the calculated similarity to the minimum similarity (or maximum, depending on the application) found so far, and \juancr{updates} it if the calculated similarity is a new minimum (or maximum).
\juanc{Fourth, at the end of the execution on the DPUs, the host CPU retrieves the minimum (or maximum) similarity values and their positions from all DPUs, and finds the overall minimum (or maximum) and its position.}
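For completeness, the z-normalized Euclidean distance between the query sequence $Q = (q_1, \ldots, q_m)$ and a slice (subsequence) $T = (t_1, \ldots, t_m)$ of the time series, with means $\mu_Q, \mu_T$ and standard deviations $\sigma_Q, \sigma_T$, is
\[
d(Q,T) = \sqrt{\sum_{k=1}^{m} \left( \frac{q_k - \mu_Q}{\sigma_Q} - \frac{t_k - \mu_T}{\sigma_T} \right)^{2}},
\]
which Matrix Profile evaluates efficiently using the dot product computed in the first step~\cite{zhu2018matrix}.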
\vspace{-1mm}
\subsection{Breadth-First Search}
Breadth-First Search (BFS)~\cite{bundy1984breadth} is a graph algorithm that labels each node in the graph with its distance from a given source node.
In our version, all edges have the same weight; therefore, the distance represents the number of edges.
Our PIM implementation of BFS uses a \juanc{Compressed Sparse Row (CSR)}~\cite{saad2003iterative.sparsematrices,liu2015csr5,kanellopoulos2019smash} representation of the \emph{adjacency matrix}, \juancrr{which represents the graph. Each element $(i, j)$ of the adjacency matrix indicates whether vertices $i$ and $j$ are connected by an edge.}
Vertices are distributed evenly across DPUs, with each DPU receiving the \emph{neighbor lists} for the vertices that it owns. \juancrr{The neighbor list of vertex $i$ contains the vertex IDs of the vertices that are connected to vertex $i$ by an edge.}
Each DPU maintains its own local copy of the list of visited vertices in the graph, which is represented as a bit-vector.
\juanc{At the end of each iteration of the BFS algorithm, the host CPU merges all local per-DPU copies of the list of visited vertices.}
The whole list of visited vertices is called the \emph{frontier}.
\juancrr{At the beginning of each iteration, the host CPU broadcasts} the complete current frontier to all the DPUs.
Each DPU uses the current frontier to update its local copy of the visited list. The DPU keeps the vertices of the current frontier that correspond to the vertices that it owns and discards the rest.
The tasklets in the DPU (1) go through these vertices concurrently, (2) visit their neighbors, and (3) add the neighbors to the next frontier if they have not previously been visited. \juancrr{This approach to BFS is called \emph{top-down} approach~\cite{hwukirk2016.bfs,luo2010effective}.}
\juancrr{Tasklets use critical sections (implemented via mutexes)} to update the next frontier concurrently without \juancr{data conflicts}.
\juancrr{At the end of each iteration, the CPU retrieves} the next frontier produced by each DPU, and computes their union to \juancrr{construct} the complete next frontier.
The iterations continue until the next frontier is empty \juancr{at the end of an iteration}.
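The following sketch illustrates one top-down expansion step of a tasklet, assuming UPMEM SDK mutexes (\texttt{MUTEX\_INIT()}, \texttt{mutex\_lock()}, \texttt{mutex\_unlock()}); the bit-vector layout and all names are hypothetical simplifications of the scheme described above.
\begin{verbatim}
#include <stdint.h>
#include <defs.h>   // me(), NR_TASKLETS
#include <mutex.h>  // MUTEX_INIT(), mutex_lock(), mutex_unlock()

MUTEX_INIT(frontier_mutex);  // protects concurrent updates to next_frontier

// Bit-vector helpers over 32-bit words (hypothetical layout).
static inline int  test_bit(const uint32_t *bv, uint32_t v) { return (bv[v / 32] >> (v % 32)) & 1; }
static inline void set_bit (uint32_t *bv, uint32_t v)       { bv[v / 32] |= 1u << (v % 32); }

// One top-down BFS step over the vertices this DPU owns: off has num_owned+1
// entries, and neighbors[off[v] .. off[v+1]) is the neighbor list of vertex v.
void expand(const uint32_t *curr_frontier, uint32_t *next_frontier, uint32_t *visited,
            const uint32_t *off, const uint32_t *neighbors, uint32_t num_owned) {
    for (uint32_t v = me(); v < num_owned; v += NR_TASKLETS) {  // (1) owned vertices
        if (!test_bit(curr_frontier, v)) continue;
        for (uint32_t k = off[v]; k < off[v + 1]; k++) {        // (2) visit neighbors
            uint32_t n = neighbors[k];
            if (!test_bit(visited, n)) {                        // (3) not yet visited
                mutex_lock(frontier_mutex);                     // critical section
                set_bit(visited, n);
                set_bit(next_frontier, n);
                mutex_unlock(frontier_mutex);
            }
        }
    }
}
\end{verbatim}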
\vspace{-1mm}
\subsection{Multilayer Perceptron}
Multilayer perceptron (MLP)~\cite{hinton1987learning} is a class of feedforward artificial neural networks with at least three layers: input, hidden, and output.
Our PIM implementation of MLP \juancrr{performs MLP inference.
In each layer, the weights are arranged as a matrix and the input is a vector.
The computation in each layer is a matrix-vector multiplication. The implementation of each layer is based on our implementation of GEMV (Section~\ref{sec:gemv}). Thus, in each layer of MLP, the distribution of the workload among DPUs and tasklets is the same as in GEMV.
ReLU is the activation function at the end of each layer.
The output of a layer feeds the input of the next layer until the output layer is reached.}
\juanc{At the end of the output layer, the host CPU retrieves the output vector chunks from the MRAM banks, and constructs the complete output vector.}
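Putting these pieces together, layer $l$ computes $x_{l+1} = \mathrm{ReLU}(W_l\,x_l)$, where $W_l$ is the layer's weight matrix and $\mathrm{ReLU}(x) = \max(0, x)$ is applied element-wise (bias terms are omitted from this simplified expression).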
\vspace{-1mm}
\subsection{Needleman-Wunsch}
\label{sec:nw}
Needleman-Wunsch (NW)~\cite{Needleman1970Ageneral} is a bioinformatics algorithm that performs global sequence alignment, i.e., it compares two biological sequences over their \emph{entire length} to find the optimal alignment of these sequences. NW is a dynamic programming algorithm that consists of three steps: (i) initialize a 2D score matrix $m \times n$, where $m$, $n$ are the lengths of the sequences (i.e., the number of \emph{base pairs}, \emph{bps}, in each sequence); (ii) fill the score matrix by calculating the score for each cell \juancrr{in the matrix}, which is the maximum of the scores of the neighboring cells (left, top, or top-left cells) plus a penalty in case of a mismatch; and (iii) trace back the optimal alignment by marking a path from the cell on the bottom right back to the cell on the top left of the score matrix. Note that there may be \juancrr{more than one} possible optimal alignment between two sequences.
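In one standard formulation of step (ii), with a match/mismatch score $s(\cdot,\cdot)$ and a gap penalty $g$ (the description above folds both into a single penalty scheme), each cell of the score matrix $H$ is filled using the recurrence
\[
H(i,j) = \max\bigl\{\, H(i-1,j-1) + s(a_i, b_j),\;\; H(i-1,j) - g,\;\; H(i,j-1) - g \,\bigr\},
\]
where $a_i$ and $b_j$ are the $i$-th and $j$-th bps of the two input sequences.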
Our PIM implementation first fills the upper triangle (top-left part) of the 2D score matrix, and then the lower triangle (bottom-right part) of it. The matrix is partitioned into large 2D \juancrr{blocks}, and the algorithm iterates over the diagonals at a large \juancrr{block} granularity (from the top-left diagonal to the bottom-right diagonal). In each iteration, all large \juancrr{blocks} that belong to the same diagonal of the 2D score matrix are calculated in parallel by evenly distributing them across DPUs. Inside the DPU, each large 2D \juancrr{block} is further partitioned into small 2D sub-blocks. The tasklets of each DPU work on the diagonals at a small sub-block granularity, i.e., in each iteration \juanc{the tasklets of a DPU concurrently} calculate different small sub-blocks that belong to the same large block of one diagonal.
For each diagonal of the 2D score matrix, the host CPU retrieves the large \juancrr{blocks} produced by all DPUs. Then, it uses the filled cells of \juancrr{the} last row and the last column of each large \juancrr{block} as input to the next iteration (i.e., the next diagonal), since only these cells are neighboring cells with the next diagonal \juancrr{blocks}. The iterations continue until all diagonal large \juancrr{blocks} of the whole 2D score matrix are calculated. The host CPU finally uses the resulting 2D score matrix to trace back the optimal alignment.
In the Appendix (Section~\ref{app:nw}), we show additional experimental results for NW to \juancr{demonstrate that the computation of the complete problem and the computation of the longest diagonal scale differently across one rank of DPUs.}
\vspace{-1mm}
\subsection{Image Histogram}
\label{sec:histogram}
Image histogram (HST)~\cite{gomez2013optimized} calculates the histogram of an image, i.e., it counts the number of occurrences of each possible pixel value in an input image and stores the \juancr{aggregated counts of occurrences} into a set of bins.
We develop two PIM implementations of image histogram: \juancrr{short (HST-S) and long (HST-L).}
HST-S distributes chunks of the input image across tasklets running on a DPU. Each tasklet creates a local histogram in WRAM. When the local histograms are created, the tasklets synchronize with a barrier, and the local histograms are merged in a parallel manner.
Since each tasklet features a local histogram in WRAM, the maximum histogram size is relatively small (e.g., 256 32-bit bins, \juancr{when} running 16 tasklets).\footnote{\juanc{256 32-bit bins is the maximum histogram size for 16 tasklets (1) assuming power-of-two size of the histogram and (2) taking into account that each tasklet allocates a WRAM buffer for its chunk of the input image.}}
HST-L can generate larger histograms, the size of which is limited \juanc{only} by the total amount of WRAM, since only one local histogram per DPU is stored in WRAM.
\juancrr{Same as HST-S, HST-L distributes chunks of the input image across tasklets, which update the histogram in WRAM by using a mutex, in order to ensure that only a single tasklet updates the histogram at a time.}
\juancrr{Both HST-S and HST-L merge all per-DPU histograms into a single final histogram in the host CPU.}
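As an illustration of HST-L's critical section, the sketch below updates the single per-DPU histogram under a mutex, assuming UPMEM SDK mutex primitives; the bin count, names, and 8-bit pixel type are hypothetical.
\begin{verbatim}
#include <stdint.h>
#include <defs.h>   // me(), NR_TASKLETS
#include <mutex.h>  // MUTEX_INIT(), mutex_lock(), mutex_unlock()

#define NR_BINS 256  // histogram size (hypothetical)

uint32_t histogram[NR_BINS];  // single per-DPU histogram in WRAM (HST-L)
MUTEX_INIT(histo_mutex);      // serializes tasklet updates to the histogram

// Update the shared histogram with one WRAM block of pixels.
void update_histogram(const uint8_t *pixels, unsigned count) {
    for (unsigned i = 0; i < count; i++) {
        mutex_lock(histo_mutex);   // only one tasklet updates at a time
        histogram[pixels[i]]++;
        mutex_unlock(histo_mutex);
    }
}
\end{verbatim}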
We compare HST-S and HST-L for different histogram sizes in the Appendix (Section~\ref{app:histogram}), in order to find out which HST version is preferred \juancr{on the UPMEM-based PIM system} for each histogram size.
\vspace{-1mm}
\subsection{Reduction}
\label{sec:reduction}
Reduction (RED)~\cite{rabenseifner2004optimization} computes the sum of the elements in an input array.
Our PIM implementation of reduction has two steps. In the first step, each tasklet inside a DPU is assigned a chunk of the array. The tasklet accumulates all values of the chunk and produces a local reduction result.
In the second step,
after a barrier, a single tasklet reduces the partial results of all tasklets from the first step.
\juanc{At the end of the second step, the host CPU retrieves the reduction result.}
\juancrr{Alternatively, we can implement the second step as a parallel tree reduction~\cite{harris2007optimizing,degonzalo2019automatic}. We implement two versions of this parallel tree reduction, which use different intra-DPU synchronization primitives.
One of the versions uses handshakes for communication between tasklets from one level of the tree to the next one.
The other version uses barriers between levels of the tree.
In the Appendix (Section~\ref{app:reduction}), we compare the single-tasklet implementation to the two versions of parallel tree reduction.}
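A minimal sketch of the barrier-based two-step (single-tasklet) version is shown below, assuming UPMEM SDK barriers (\texttt{BARRIER\_INIT()}, \texttt{barrier\_wait()}); names, types, and the WRAM-resident input are illustrative simplifications (in the actual implementation, each tasklet stages its chunk from the MRAM bank).
\begin{verbatim}
#include <stdint.h>
#include <defs.h>     // me(), NR_TASKLETS
#include <barrier.h>  // BARRIER_INIT(), barrier_wait()

BARRIER_INIT(red_barrier, NR_TASKLETS);

int64_t partial[NR_TASKLETS];  // per-tasklet partial sums in WRAM
int64_t result;                // final per-DPU reduction result

// Step 1: each tasklet reduces its chunk of elements.
// Step 2: after a barrier, tasklet 0 reduces the partial sums.
void reduce(const int32_t *elems, unsigned nelems) {
    int64_t acc = 0;
    for (unsigned i = 0; i < nelems; i++)
        acc += elems[i];
    partial[me()] = acc;
    barrier_wait(&red_barrier);    // wait until all partial sums are ready
    if (me() == 0) {
        result = 0;
        for (unsigned t = 0; t < NR_TASKLETS; t++)
            result += partial[t];  // single-tasklet final reduction
    }
}
\end{verbatim}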
\vspace{-1mm}
\subsection{Prefix Sum (Scan)}
\label{sec:scan}
Prefix sum or scan (SCAN)~\cite{blelloch1989} is a parallel primitive that computes the prefix sum of the values in an array. We implement an exclusive scan: the $i$-th element of the output contains the sum of all elements of the input array from the first element to the ($i$-1)-th element.
We implement two PIM versions of scan: Scan-Scan-Add (SCAN-SSA)~\cite{yan2013streamscan, hwukirk2016.scan, sengupta2008efficient} and Reduce-Scan-Scan (SCAN-RSS)~\cite{yan2013streamscan, hwukirk2016.scan, dotsenko2008fast}.
Both versions assign a large chunk of the input array to each DPU.
SCAN-SSA has three steps. First, it computes the scan operation locally inside each DPU. Second, \juancr{it copies} the last element of the local scan to the host CPU, and \juancr{places it} in a vector in the position corresponding to the DPU order. The host \juanc{CPU} scans this vector and \juancr{moves} each result value to the corresponding DPU. Third, \juancr{it adds} the value computed in the host \juanc{CPU} to all elements of the local scan output in each DPU.
\juanc{Fourth, the host CPU retrieves the complete scanned array from the MRAM banks.}
SCAN-RSS also has three steps. First, it computes the reduction operation in each DPU. Second, \juancr{it copies} the reduction results to the host CPU, where \juancr{the host CPU scans them}. Third, \juancr{it moves} the result values of the scan operation in the host \juanc{CPU} to the corresponding DPUs, where \juancr{the tasklets perform} a local scan (including the value coming from the host \juanc{CPU}).
\juanc{Fourth, the host CPU retrieves the complete scanned array from the MRAM banks.}
The advantage of SCAN-RSS over SCAN-SSA is that \juancr{SCAN-RSS} requires fewer accesses to MRAM.
For an array of size $N$, SCAN-RSS needs $3 \times N$ + 1 accesses:
$N$ reads and 1 write for Reduce, and $N$ reads and $N$ writes for Scan.
SCAN-SSA needs $4 \times N$ accesses: $N$ reads and $N$ writes for Scan, and $N$ reads and $N$ writes for Add.
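Hence, for large $N$, SCAN-RSS performs $\frac{3N+1}{4N} \approx 75\%$ of the MRAM accesses of SCAN-SSA, i.e., about 25\% fewer accesses.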
The advantage of SCAN-SSA over SCAN-RSS is that it requires less synchronization.
The reduction operation in SCAN-RSS requires
a barrier, but the addition operation in SCAN-SSA does not require any synchronization.
We expect SCAN-RSS \juancr{to perform} better for large arrays where access to MRAM dominates \juancr{the execution time}, and SCAN-SSA to perform better for smaller arrays where the
reduction that requires synchronization constitutes a larger fraction of the \juancr{entire} computation.
We compare both implementations of SCAN for arrays of different sizes in the Appendix (Section~\ref{app:scan}).
\vspace{-1mm}
\subsection{Matrix Transposition}
\label{sec:trns}
\vspace{-1mm}
Matrix transposition (TRNS)~\cite{cayley1858ii} converts an $M \times N$ array into an $N \times M$ array.
We focus on in-place transposition, where the transposed array occupies the same physical storage locations as the original array.
In-place transposition is a permutation, which can be factored into disjoint cycles~\cite{hungerford1997abstract}.
A straightforward parallel implementation can assign \juancrr{entire} cycles to threads. However, \juancr{in such a straightforward implementation}, (1) the length of cycles is \emph{not} uniform in rectangular matrices, causing load imbalance, and (2) the memory accesses are random \juancr{as operations are done on} single matrix elements (\juancr{without} exploiting spatial locality). Thus, efficient parallelization is challenging.
Our PIM implementation follows an efficient 3-step tiled approach~\cite{sung2014matrix, gomez2016matrix} that (1) exploits spatial locality by \juancr{operating on} tiles of matrix elements, as opposed to single elements, and (2) balances the workload by partitioning the cycles across tasklets.
To perform the three steps, we first factorize the dimensions of the $M \times N$ array as an $M' \times m \times N' \times n$ array, where $M = M' \times m$ and $N = N'\times n$.
The first step \juancr{operates on} tiles of size $n$. This step performs the transposition of an $M \times N'$ array, where each element is a tile of size $n$. The resulting array has dimensions $N' \times M \times n = N' \times M' \times m \times n$.
In the UPMEM-based PIM system, we perform this step using $n$-sized \juancrrr{CPU-DPU} transfers that copy the input array from the main memory of the host CPU to the corresponding MRAM banks.
The second step performs $N' \times M'$ transpositions of $m \times n$ tiles.
In each DPU, one tasklet transposes an $m \times n$ tile in WRAM. The resulting array has dimensions $N' \times M' \times n \times m$.
The third step \juancr{operates on} tiles of size $m$. This step performs transpositions of $N'$ arrays of dimensions $M' \times n$, where each element is a tile of size $m$. The resulting array has dimensions $N' \times n \times M' \times m$.
In each DPU, the available tasklets collaborate on the transposition of an $M' \times n$ array (with $m$-sized elements) using the algorithm presented in~\cite{sung2012dl}. \juancr{Unlike the algorithm in~\cite{sung2012dl}, which uses atomic instructions for synchronization}~\cite{gomez2013atomics}, our PIM implementation uses mutexes for synchronization of tasklets via an array of flags that keeps track of the moved tiles (\juanc{we choose this implementation because} the UPMEM ISA~\cite{upmem-guide} does \emph{not} include atomic instructions).
\juanc{After the three steps, the host CPU retrieves the transposed matrix from the MRAM banks.}
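As an illustration of step 2, the tile transposition that one tasklet performs in WRAM reduces to the following plain-C sketch (names and the element type are hypothetical; staging between the MRAM bank and WRAM is omitted):
\begin{verbatim}
#include <stdint.h>

// Transpose one m x n tile held in WRAM: element (i,j) of in becomes
// element (j,i) of out. One tasklet runs this per tile in the step 2 kernel.
void transpose_tile(const int64_t *in, int64_t *out, unsigned m, unsigned n) {
    for (unsigned i = 0; i < m; i++)
        for (unsigned j = 0; j < n; j++)
            out[j * m + i] = in[i * n + j];
}
\end{verbatim}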
\section{Key Takeaways}\label{sec:discussion}
In this section, we reiterate several key empirical observations \juancr{in the form of} four key takeaways we \juancr{provide} throughout this paper. \juancr{We also provide} implications for workload suitability and good programming practices for the UPMEM PIM architecture, and suggestions for hardware and architecture designers of future PIM systems.
\noindent\paragraph{\textbf{Key Takeaway \#1}.}
\textbf{The UPMEM PIM architecture is fundamentally compute bound}.
Section~\ref{sec:mram-bandwidth} shows that workloads with more complex operations than integer addition fully utilize the instruction pipeline before they can potentially saturate the memory bandwidth.
Section~\ref{sec:throughput-oi} shows that even workloads with operations as simple as integer addition saturate the compute throughput with an operational intensity as low as 0.25 operations/byte (1 addition per integer accessed).
This \juancrr{key takeaway} shows that \textbf{the most suitable workloads for the UPMEM PIM architecture are memory-bound workloads}.
From a programmer's perspective, the architecture requires a shift in how we think about computation and data access, since the relative cost of computation vs. data access in the PIM system is very different from that in \juancrr{the dominant processor-centric architectures of today}.
\tboxbegin{\ttask{kt}}
\textbf{The UPMEM PIM architecture is fundamentally compute bound}.
As a result, \textbf{the most suitable workloads are memory-bound}.
\tboxend
\noindent\paragraph{\textbf{Key Takeaway \#2}.}
\textbf{The workloads most well-suited for the UPMEM PIM architecture are those with simple or no arithmetic operations}. \juancrr{This is because} DPUs include native support for \emph{only} integer addition/subtraction and bitwise operations. More complex integer (e.g., multiplication, division) and floating point operations are implemented using software library routines.
Section~\ref{sec:arith-throughput} shows that the arithmetic throughput of more complex integer operations and floating point operations is an order of magnitude lower than that of simple addition and subtraction.
Section~\ref{sec:comparison} shows that benchmarks with a small amount of computation and no use of multiplication, division, or floating point operations (10 out of 16 benchmarks)
run faster (2.68$\times$ on average) on a 2,556-DPU system than on a \juancrr{state-of-the-art NVIDIA} Titan V GPU.
These observations show that \textbf{the workloads most well-suited for the UPMEM PIM architecture are those with no arithmetic operations or simple operations (e.g., bitwise operations and integer addition/subtraction)}.
Based on this \juancrr{key takeaway}, we recommend devising \juancrr{much} more efficient software library routines or, \juancrr{more importantly, specialized and fast} in-memory hardware for complex operations in future PIM architecture generations \juancrr{to improve the general-purpose performance of PIM systems}.
\tboxbegin{\ttask{kt}}
\textbf{The most well-suited workloads for the UPMEM PIM architecture use no arithmetic operations or use only simple operations (e.g., bitwise operations and integer addition/subtraction).}
\tboxend
\noindent\paragraph{\textbf{Key Takeaway \#3}.}
\textbf{The workloads most well-suited for the UPMEM PIM architecture are those with little global communication},
because there is no direct communication channel among DPUs.
As a result, there is a huge disparity in performance scalability of benchmarks that do \emph{not} require inter-DPU communication and benchmarks that do (especially if parallel transfers across MRAM banks cannot be used), as Section~\ref{sec:performance} shows.
This \juancrr{key takeaway} shows that \textbf{the workloads most well-suited for the UPMEM PIM architecture are those with little or no inter-DPU communication}.
Based on this \juancrr{takeaway}, we recommend that the \juancrr{hardware} architecture and the software stack be enhanced with support for inter-DPU communication (e.g., by leveraging new in-DRAM data copy techniques~\cite{seshadri2018rowclone,seshadri2013rowclone, wang2020figaro, rezaei2020nom, chang.hpca16, seshadri2020indram, seshadri.bookchapter17} \juanc{and providing better connectivity inside DRAM~\cite{chang.hpca16, rezaei2020nom}}).
\tboxbegin{\ttask{kt}}
\textbf{The most well-suited workloads for the UPMEM PIM architecture require little or no communication \juancr{across DRAM Processing Units (inter-DPU communication)}.}
\tboxend
\noindent\paragraph{\juanc{\textbf{Summary}.}}
We find that the workloads most suitable for the UPMEM PIM architecture in its current form are (1) memory-bound workloads with (2) simple or no arithmetic operations and (3) little or no inter-DPU communication.
\noindent\paragraph{\textbf{Key Takeaway \#4}.}
We observe that the existing UPMEM-based PIM systems greatly improve energy efficiency and performance over state-of-the-art CPU and GPU systems across many workloads we examine. Section~\ref{sec:comparison} shows that the 2,556-DPU and the 640-DPU systems are 20.1$\times$ and 9.4$\times$ faster, respectively, than a state-of-the-art Intel Xeon CPU, \juancr{averaged across the entire set of 16 PrIM benchmarks}.
The 640-DPU system is 1.56$\times$ more energy efficient than the CPU, \juancr{averaged across the entire set of 16 PrIM benchmarks, and 4.94$\times$ more energy efficient for 13 of the PrIM benchmarks}.
The 2,556-DPU system is faster (on average \juanc{by} 2.68$\times$) than the state-of-the-art GPU in 10 out of 16 PrIM benchmarks, which have three key characteristics that define a workload's PIM suitability: (1) streaming memory accesses, (2) \juanc{little or no} inter-DPU communication, and (3) \juanc{little or no} use of multiplication, division, or floating point operations.\footnote{\juanc{Note that these three characteristics are not exactly the same three characteristics highlighted by key takeaways \#1 to \#3, but more specific. The difference is that the 2,556-DPU system outperforms the GPU for memory-bound workloads with streaming memory accesses. \juancc{These workloads do not need to have \emph{only} streaming memory accesses}, since BS, HST-S, HST-L, and TRNS, \juancc{for which the 2,556-DPU system outperforms the GPU,} have also random accesses. Since all PrIM workloads (see Table~\ref{tab:benchmarks}) contain some streaming memory accesses, we cannot say that the 2,556-DPU system outperforms the GPU for workloads with \emph{only} strided and/or random accesses.}}
We expect that the 2,556-DPU system will \juancr{provide even higher} performance and energy \juancr{benefits}, and that future PIM systems \juancr{will be even better (especially after implementing our recommendations for future PIM hardware).}
If the architecture is improved based on our recommendations under Key Takeaways 1-3, we believe the \juanc{future} PIM system will be even more attractive, leading to much higher performance and energy benefits \juanc{versus state-of-the-art CPUs and GPUs} over potentially all workloads.
\tboxbegin{\ttask{kt}}
\begin{itemize}
\item UPMEM-based PIM systems \textbf{outperform state-of-the-art CPUs in terms of performance and energy efficiency on most PrIM benchmarks}.
\item UPMEM-based PIM systems \textbf{outperform state-of-the-art GPUs on a majority of PrIM benchmarks}, and the outlook is even more positive for future PIM systems.
\item \juanc{UPMEM-based PIM systems are \textbf{more energy-efficient than state-of-the-art CPUs and GPUs on workloads where they provide performance improvements} over the CPUs and the GPUs.}
\end{itemize}
\tboxend
\section{Evaluation of PrIM Benchmarks}
\label{sec:evaluation}
In this section, we evaluate the 16 PrIM benchmarks on
the 2,556-DPU system (Section~\ref{sec:sys-org}), unless otherwise stated.
\juanc{Our evaluation uses} the datasets presented in Table~\ref{tab:datasets}, which are publicly \juanc{and freely} available~\cite{gomezluna2021repo}.
Since these datasets are large and do not fit in WRAM, we need to use MRAM-WRAM DMA transfers repeatedly.
The results we present are for the best performing transfer sizes, which we include in Table~\ref{tab:datasets} \juanc{to facilitate the} reproducibility of our results.
\juancr{We provide the command lines we use to execute each benchmark along with all parameters in~\cite{gomezluna2021repo}.}
\begin{table}[h]
\astretch{1.2}
\begin{center}
\captionof{table}{\juancrr{Evaluated} Datasets.}
\vspace{-4mm}
\label{tab:datasets}
\centering
\resizebox{1.0\linewidth}{!}{
\input{figures/tab-datasets}
}
\end{center}
\vspace{-5mm}
\end{table}
First, we present performance and scaling results.
We evaluate strong scaling$^3$ for the 16 \juanc{PrIM} benchmarks \juancr{(Section~\ref{sec:strong})} on the 2,556-DPU system by running the experiments on (1) 1 DPU, (2) 1 rank (from 1 to 64 DPUs), and
(3) 4 to 32 ranks (from 256 to 2,048 DPUs).
The \juancr{goal} of these experiments is to evaluate how the performance of the UPMEM-based PIM system scales with the number of DPUs for a fixed problem size.
The ideal strong scaling is linear scaling, i.e., the \juancr{ideal speedup for strong scaling with $N$ DPUs} over the execution on a single DPU \juancr{should be $N$}.
We also evaluate weak scaling$^4$ for the 16 \juanc{PrIM} benchmarks \juancr{(Section~\ref{sec:weak})} on 1 rank (from 1 to 64 DPUs).
In this experiment, we evaluate how the performance of the UPMEM-based PIM system scales with the number of DPUs for a fixed problem size per DPU.
In an ideal weak scaling scenario, the execution time remains constant for any number of DPUs.
Second, we compare the performance and energy consumption of two full-blown UPMEM-based PIM
systems (Table~\ref{tab:pim-setups}) with 2,556 DPUs (newer system) and with 640 DPUs (older system) to those of a \juancr{state-of-the-art} Intel Xeon E3-1240 CPU~\cite{xeon-e3-1225} and a \juancr{state-of-the-art} NVIDIA Titan V GPU~\cite{titanv} \juancr{(Section~\ref{sec:comparison})}.
\juancr{In Section~\ref{sec:discussion},} we provide new insights about the suitability of different workloads for the PIM system, programming recommendations for software designers, and suggestions and hints for hardware and architecture designers of future PIM systems.
\subsection{Performance and Scaling Results}
\label{sec:performance}
We evaluate the performance of all the benchmarks with strong and weak scaling experiments using the datasets in Table~\ref{tab:datasets}.
Section~\ref{sec:strong} presents strong scaling results for \juancrr{a single DPU, a single rank (from 1 to 64 DPUs), and for sets of 4 to 32 ranks (from 256 to 2,048 DPUs)}. We also evaluate the cost of inter-DPU synchronization.
In Section~\ref{sec:weak}, we analyze weak scaling on an entire rank for 1 to 64 DPUs. We include in the analysis the cost of inter-DPU synchronization \juancrr{via the host CPU, \juancr{as well as} \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} latencies.}
\subsubsection{\textbf{Strong Scaling Results}}
\label{sec:strong}
We evaluate strong scaling with three different configurations: (1) 1-16 tasklets inside one DPU, (2) 1-64 DPUs inside one rank, and (3) 4-32 ranks. For the experiments inside one rank and multiple ranks, we use the best-performing number of tasklets \juancrr{from the experiment on one DPU.}
\vspace{-1mm}
\noindent\paragraph{\textbf{One DPU}}
\label{sec:1dpu}
Figure~\ref{fig:1dpu_strong} presents execution time and speedup scaling (versus tasklet count) results for 16 benchmarks on a single DPU.
The speedup results (right y-axis of each plot) correspond to only the execution time \juanc{portion spent} on the DPU (\juanc{i.e., "DPU" portion of each bar} in Figure~\ref{fig:1dpu_strong}).
In these experiments, we set the number of tasklets to 1, 2, 4, 8, and 16.
The benchmarks distribute computation among the available tasklets in a data-parallel manner.
The datasets and their sizes are in Table~\ref{tab:datasets}.
The times shown in Figure~\ref{fig:1dpu_strong} are broken down into \juancrrr{the} execution time on the DPU ("DPU"), the time for inter-DPU communication via the host CPU (\juancrrr{"Inter-DPU"}), the time for CPU to DPU transfer of input data (\juancrrr{"CPU-DPU"}), and the time for DPU to CPU transfer of final results (\juancrrr{"DPU-CPU"}).
\begin{figure*}[h]
\includegraphics[width=1.0\linewidth]{figures/1dpu_strong-16-cr-350-2.pdf}
\vspace{-6mm}
\caption{Execution time (ms) of 16 benchmarks on 1, 2, 4, 8, and 16 tasklets in one DPU (left y-axis), and speedup (\juanc{considering only the portion of execution time spent on the DPU}) provided by more tasklets normalized to the performance of 1 tasklet (right y-axis).} \label{fig:1dpu_strong}
\vspace{-4mm}
\end{figure*}
We make the following \juancrr{five} observations from Figure~\ref{fig:1dpu_strong}.
\juancrr{First,} in
VA, GEMV, SpMV, SEL, UNI, BS, TS, MLP, NW, HST-S, RED, SCAN-SSA (Scan kernel), SCAN-RSS (both kernels), and TRNS (Step 2 kernel), the best performing number of tasklets is 16.
This is in line with our observations in Section~\ref{sec:arith-throughput}: a number of tasklets greater than 11 is usually a good choice to achieve the best performance from the pipeline.
These benchmarks show good scaling from 1 to 8 tasklets with speedups between
$1.5\times$ and $2.0\times$ \juancr{as we double} the number of tasklets until 8. From 8 to 16 tasklets, the speedups are between
$1.2\times$ and $1.8\times$ due to the pipeline throughput saturating at 11 tasklets.
\gboxbegin{\rtask{ko}}
\textbf{A number of tasklets greater than 11 is a good choice for most real-world workloads} we tested (16 kernels out of 19 kernels from 16 benchmarks), \juancr{as it fully utilizes the DRAM Processing Unit's pipeline}.
\gboxend
\juancrr{Second,} some of these benchmarks
(VA, GEMV, SpMV, BS, TS, MLP, HST-S) do not use synchronization primitives, while in others
(SEL, UNI, NW, RED, SCAN-SSA (Scan kernel), SCAN-RSS (both kernels)), synchronization across tasklets (via handshakes and/or barriers) is lightweight.
\juancrr{Third,} BFS, HST-L, and TRNS (Step 3) show limited scaling when increasing the number of tasklets because they use mutexes, which cause contention when accessing shared data structures (i.e., output frontier in BFS, local per-DPU histogram in HST-L, array of flags in TRNS (Step 3)). While in BFS using 16 tasklets \juancrr{provides the highest performance since it can compensate for the large synchronization cost, in HST-L and TRNS (Step 3) the best performing number of tasklets is 8 due to the high synchronization overheads beyond 8 tasklets.}
\gboxbegin{\rtask{ko}}
Intensive use of \textbf{intra-DPU synchronization across tasklets (e.g., mutexes, \juancr{barriers, handshakes}) may limit scalability}, \juancr{sometimes} causing the best performing number of tasklets \juancr{to} be lower than 11.
\gboxend
Fourth, SCAN-SSA (Add kernel) \juancr{experiences} speedups between $1.5\times$ and $2.0\times$ when \juancr{we double} the number of tasklets until 8. However, there is no speedup from 8 to 16 tasklets, even though this \juancrr{step of the SCAN-SSA benchmark} does \emph{not} use any synchronization primitives.
We observe the same behavior for the STREAM ADD \juancr{microbenchmark} in Figure~\ref{fig:mram-stream}, i.e., performance saturation happens before the 11 tasklets \juancr{required} to fully utilize the pipeline. As explained in Section~\ref{sec:mram-streaming}, the reason is that both STREAM \juancr{ADD} and SCAN-SSA (Add kernel) are \emph{not} compute-intensive kernels, since they perform only \juancr{one} integer addition per input element accessed from MRAM. As a result, the overall latency is dominated by the MRAM access latency, which hides the pipeline latency \juancr{(and thus performance saturates at fewer than 11 tasklets required to fully utilize the pipeline).}
\gboxbegin{\rtask{ko}}
\textbf{Most real-world workloads are in the compute-bound region} of the \juancr{DRAM Processing Unit} (all kernels \juancr{except} SCAN-SSA (Add kernel)), i.e., the pipeline latency dominates the MRAM access latency.
\gboxend
Fifth, while the amount of time spent on \juancrrr{CPU-DPU transfer}s and \juancrrr{DPU-CPU transfers} is relatively low compared to the time spent on DPU \juancr{execution} for most benchmarks, we observe that \juancrrr{CPU-DPU transfer} time is very high in TRNS.
The \juancrrr{CPU-DPU transfer} of TRNS performs step 1 of the matrix transposition algorithm~\cite{sung2014matrix,gomez2016matrix} by issuing $M' \times m$ data transfers of $n$ elements, \juancrr{as explained in Section~\ref{sec:trns}.}
Since we use a small $n$ value in the experiment (\juancr{$n = 8$}, as indicated in Table~\ref{tab:datasets}), the \juancrr{sustained \juancrrr{CPU-DPU} bandwidth is far from the maximum \juancrrr{CPU-DPU} bandwidth} (\juancr{see sustained CPU-DPU bandwidth for different transfer sizes in Figure~\ref{fig:cpudpu}a}).
\gboxbegin{\rtask{ko}}
\textbf{\juancr{\juancrrr{Transferring} large data chunks from/to the host CPU is preferred}} for input data and output results due to higher sustained \juancrrr{CPU-DPU}/\juancrrr{DPU-CPU} bandwidths.
\gboxend
\vspace{-1mm}
\noindent\paragraph{\textbf{One Rank (1-64 DPUs).}} We evaluate strong scaling with 1 to 64 DPUs. The size of the input is the dataset size we can fit in a single DPU (see Table~\ref{tab:datasets}). We especially examine \juancrrr{CPU-DPU transfer} and \juancrrr{DPU-CPU transfer times}, in order to assess how they change as we increase the number of DPUs.
Figure~\ref{fig:64dpu_strong} shows execution time \juancr{and speedup scaling (versus DPU count)} results for 1, 4, 16, and 64 DPUs.
The speedup results (right y-axis of each plot) correspond to only the execution time \juanc{portion spent} on the DPU (\juanc{i.e., the "DPU" portion of each bar} in Figure~\ref{fig:64dpu_strong}).
The breakdown of execution time is the same as that done in Figure~\ref{fig:1dpu_strong} for \juancr{the single-DPU} results.
\begin{figure*}[h]
\includegraphics[width=1.0\linewidth]{figures/64dpu_strong-16-cr-350-2.pdf}
\vspace{-6mm}
\caption{Execution time (ms) of 16 benchmarks on one rank (1, 4, 16, and 64 DPUs, using strong scaling$^3$) (left y-axis), and speedup (\juanc{considering only the portion of execution time spent} on the DPU) provided by more DPUs normalized to the performance of 1 DPU (right y-axis). Inside a DPU, we use the best-performing number of tasklets from Figure~\ref{fig:1dpu_strong}.}
\label{fig:64dpu_strong}
\vspace{-5mm}
\end{figure*}
We make the following seven observations from Figure~\ref{fig:64dpu_strong}.
First, we observe that DPU performance scaling is linear \juancr{with DPU count} (i.e., the execution times on the DPU reduce linearly as we increase the number of DPUs) for
VA, GEMV, SpMV, SEL, UNI, TS, MLP, HST-S, HST-L, RED, SCAN-SSA (both kernels), SCAN-RSS (both kernels), and TRNS (both kernels) (speedups between
$3.1\times$ and $4.0\times$ when increasing the number of DPUs by 4).
\juancr{As a result, increasing the DPU count from 1 to 64 for these benchmarks produces speedups between 37$\times$ (SpMV) and 64$\times$ (TS, TRNS).}
Second, scaling of DPU performance is sublinear for three benchmarks \juancr{(BS, BFS, NW)}.
\juancr{Increasing the DPU count from 1 to 64 for these three benchmarks produces speedups between 8.3$\times$ (BFS) and 27$\times$ (BS).}
\juancr{For BS, the speedups are sublinear (speedups between \juancrr{$2.4\times$ and $3.9\times$} \juanc{with four times the} DPUs) due to load imbalance across DPUs}, because different DPUs finish at different times depending on the particular values they look for in sorted arrays.
For BFS, the speedups are lower ($1.7-2.7\times$ when increasing the number of DPUs by 4) \juancr{also} due to load imbalance across DPUs produced by the irregular topology of the \emph{loc-gowalla} graph~\cite{graphgowalla}.
In NW, the speedups are between \juancrr{$1.9\times$ and $3.1\times$} when multiplying the DPU count by 4.
In this benchmark, the parallelization factor in each iteration (i.e., number of active DPUs) depends on the size of the diagonal of the 2D score matrix that is processed, \juancrr{and the number of large 2D blocks in the diagonal}. When we increase the number of DPUs by 4, the parallelization factor in smaller diagonals is low (\juancrr{i.e., equal to only the number of blocks in these diagonals}), and only increases up to 4$\times$ in the larger diagonals \juancrr{(i.e., when there are enough blocks to use all available DPUs)}. As a result, the scalability of NW is sublinear.
Third, the overhead (if any) of inter-DPU synchronization (\juanc{as depicted by the "Inter-DPU" portion of each bar} in Figure~\ref{fig:64dpu_strong}) is low in 14 of the benchmarks (VA, GEMV, SpMV, SEL, UNI, BS, TS, MLP, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, TRNS). As a result, these benchmarks achieve higher performance when \juancr{we increase} the number of DPUs (even including the inter-DPU synchronization time).
There is no inter-DPU synchronization in
VA, GEMV, SpMV, BS, TS, MLP, and TRNS.
There is inter-DPU synchronization in HST-S and HST-L (for final histogram reduction), but its \juancr{overhead} is negligible.
\juancrr{The inter-DPU synchronization time is noticeable} in SEL, UNI, and RED (for final result merging) and in SCAN-SSA and SCAN-RSS (for the intermediate scan step in the host). \juancrr{For 64 DPUs, the inter-DPU synchronization times of SEL, UNI, RED, SCAN-SSA, and SCAN-RSS account for 53\%, 91\%, 48\%, 42\%, and 17\% of the execution times on the DPUs (not visible in Figure~\ref{fig:64dpu_strong}), respectively.}
\juancrr{Despite that,} we still obtain the best performance (including \juanc{portions of the} execution time \juanc{spent} on the DPUs, \juanc{i.e., "DPU",} and inter-DPU synchronization, \juanc{i.e., "Inter-DPU"}) with 64 DPUs \juancrr{for SEL, UNI, RED, SCAN-SSA, and SCAN-RSS}.
Fourth, we observe significantly higher overhead of inter-DPU synchronization for BFS and NW.
The \juancr{overall} performance (including \juanc{portions of the} execution time on the DPUs, \juanc{i.e., "DPU",} and inter-DPU synchronization, \juanc{i.e., "Inter-DPU"}) of 64 DPUs is only 5\% and 14\% higher than that of 16 DPUs for BFS and NW, respectively.
The reason in BFS is that, after each iteration, the CPU has to compute the union of the next frontier from all DPUs sequentially and redistribute it across the DPUs. Thus, the inter-DPU synchronization cost increases linearly with the number of DPUs.
In NW, the inter-DPU synchronization overhead is substantial due to a similar reason.
For each diagonal of the 2D score matrix, the host CPU (1) retrieves the results of the sub-blocks produced by all DPUs, and (2) sends the cells of the last row and the last column of each sub-block as input to the next diagonal (processed in the next iteration).
Fifth, we observe the \juancrrr{CPU-DPU transfer} and \juancrrr{DPU-CPU transfer} times decrease for
VA, GEMV, TS, MLP, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS, when we increase the number of DPUs in the strong scaling experiment for 1 rank.
These benchmarks use parallel \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers between the main memory of the host CPU and the MRAM banks.
Sixth, the \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU transfer times} do not decrease for BS and NW with increasing number of DPUs, even though \juanc{BS and NW} use parallel transfers.
BS distributes the values to search in a sorted array across the available DPUs, but the sorted array is replicated in each DPU. As a result, the total \juancrrr{CPU-DPU} time increases with the number of DPUs.
NW processes a diagonal in each iteration. For shorter diagonals, the algorithm does not need to use all available DPUs. Thus, having more available DPUs does not always mean more parallelism in \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers.
Seventh, the remaining benchmarks (SpMV, SEL, UNI, BFS) cannot use parallel transfers to \juancrrr{copy input data and/or retrieve results}.
In SEL and UNI, \juancrrr{DPU-CPU transfer times} \juancr{increase} when we increase the number of DPUs because we cannot use parallel transfers for retrieving results. In these two benchmarks, the size of the output in each DPU may differ as it depends on the element values of the input array.
SpMV and BFS cannot use parallel \juancrrr{CPU-DPU and DPU-CPU} transfers because the size of the inputs assigned to each DPU may be different (\juancr{e.g., different number of nonzero elements of different sparse rows in SpMV, different numbers of edges for different} vertices in BFS). As a result, we observe that \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU transfer times} do not reduce in SpMV and BFS when increasing the number of DPUs.
\pboxbegin{\ptask{pr}}
\textbf{Parallel CPU-DPU/DPU-CPU transfers inside a rank of UPMEM DRAM Processing Units are recommended for real-world workloads} when all transferred buffers \juanc{are of the same size}.
\pboxend
\vspace{-3mm}
\noindent\paragraph{\textbf{32 Ranks (256-2,048 DPUs).}} We evaluate strong scaling with 4, 8, 16, and 32 ranks. The size of the input is the maximum dataset size we can fit in four ranks (i.e., 256 DPUs), as shown in Table~\ref{tab:datasets}. We do not include \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU transfer times} in our performance analysis, because \juanc{these} transfers are \emph{not} simultaneous across ranks, as we mentioned in Section~\ref{sec:cpu-dpu}. Figure~\ref{fig:640dpu_strong} shows execution time \juancr{and speedup scaling (versus DPU count)} results for 256, 512, 1,024, and 2,048 DPUs, corresponding to 4, 8, 16, and 32 ranks.
The speedup results (right y-axis of each plot) correspond to only the execution time \juanc{portion spent} on the DPU (\juanc{i.e., the "DPU" portion of each bar} in Figure~\ref{fig:640dpu_strong}).
\begin{figure*}[h]
\includegraphics[width=1.0\linewidth]{figures/2048dpu_strong-16-cr-350-2.pdf}
\vspace{-6mm}
\caption{Execution time (ms) of 16 benchmarks on 4, 8, 16, and 32 ranks (256, 512, 1,024, and 2,048 DPUs, \juancr{using strong scaling$^3$}) (left y-axis), and speedup (\juanc{considering only the portion of execution time spent} on the DPU) provided by more DPUs normalized to the performance of 4 ranks (256 DPUs) (right y-axis). Inside a DPU, we use the best-performing number of tasklets from Figure~\ref{fig:1dpu_strong}.}
\label{fig:640dpu_strong}
\vspace{-4mm}
\end{figure*}
We make the following observations from Figure~\ref{fig:640dpu_strong}.
First, as in the experiments on one rank,
we observe that the execution times on the DPU (\juanc{i.e., the "DPU" portion of each bar} in Figure~\ref{fig:640dpu_strong}) reduce linearly with the number of DPUs (i.e., $\sim$2$\times$ when \juancr{we double} the number of DPUs, and $\sim$8$\times$ from 256 to 2,048 DPUs) for VA, GEMV, SEL, UNI, BS, TS, MLP, HST-S, HST-L, RED, SCAN-SSA (both kernels), SCAN-RSS (both kernels), and TRNS (both kernels).
For SCAN-SSA (Scan) and SCAN-RSS (Scan), we observe more than $8\times$ speedup \juancrr{when we scale from 256 to 2,048 DPUs}. The reason is that the amount of synchronization across tasklets (i.e., handshakes in Scan) reduces when we distribute the input array across more DPUs. However, the downside is that the inter-DPU communication cost increases, as we explain below.
Second, DPU performance scaling (\juanc{i.e., the "DPU" portion of each bar} in Figure~\ref{fig:640dpu_strong}) is sublinear for SpMV, BFS, and NW.
For \juancr{SpMV and BFS}, there is load imbalance across DPUs due to the irregular nature of graphs and sparse matrices.
For NW, we observe small speedups when \juancr{we double} the number of DPUs ($1.47\times$ from 256 to 512 DPUs, and $1.17\times$ from 512 to 1,024 DPUs), and almost no speedup (only 1\%) from 1,024 to 2,048 DPUs. As explained above, NW does not use all DPUs in all iterations, but only the number that is needed for the diagonal that is processed in each iteration. As a result, doubling the number of DPUs does not reduce the execution time \juanc{spent} on the DPU at the same rate.
\vspace{-0.5mm}
\gboxbegin{\rtask{ko}}
\textbf{Load balancing across DRAM Processing Units (DPUs) ensures linear reduction of the execution time \juanc{spent} on the DPUs} for a given problem size, when all available DPUs are used (as observed in strong scaling experiments).
\gboxend
Third, inter-DPU synchronization (as depicted \juanc{by the "Inter-DPU" portion of each bar} in Figure~\ref{fig:640dpu_strong}) imposes a small overhead (if any) for 12 of the benchmarks (VA, GEMV, SpMV, SEL, UNI, BS, TS, MLP, HST-S, HST-L, RED, and TRNS).
VA, GEMV, SpMV, BS, TS, MLP, and TRNS do not require inter-DPU synchronization.
For SEL, UNI, HST-S, HST-L, and RED, the inter-DPU synchronization involves \juancr{only} \juancrrr{DPU-CPU} transfers, since it is only used to merge final results at the end of execution.
The inter-DPU synchronization overhead increases with the number of DPUs, since the amount of partial results to merge increases. However, the inter-DPU synchronization cost is small, and a larger number of DPUs results in higher overall performance.
\gboxbegin{\rtask{ko}}
\textbf{The overhead of merging partial results from DRAM Processing Units in the host CPU is tolerable} across all \juancrrr{PrIM} benchmarks that need it.
\gboxend
Fourth, the inter-DPU synchronization imposes significant overhead when it requires more complex patterns (involving \juancr{both \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU}} transfers). We observe this for four benchmarks (\juancr{BFS, NW, SCAN-SSA, and SCAN-RSS}).
For NW, we observe that inter-DPU synchronization times are significantly higher than DPU times. If we compare these results to the results in Figure~\ref{fig:64dpu_strong}, we conclude that this benchmark's overall performance is greatly burdened by inter-DPU synchronization when using more than one rank.
SCAN-SSA and SCAN-RSS perform a more complex intermediate step in the CPU: (1) the CPU gathers partial results from the first kernel (Scan in SCAN-SSA, Reduce in SCAN-RSS) from the DPUs (via \juancrrr{DPU-CPU} transfers), (2) the CPU performs a scan operation, and (3) the CPU returns a value to be used by the second kernel (Add in SCAN-SSA, Scan in SCAN-RSS) to each DPU (via \juancrrr{CPU-DPU} transfers).
The significant increase in "\juancrrr{Inter-DPU}" from 1,024 to 2,048 DPUs is due to the dual-socket system configuration (Section~\ref{sec:sys-org}), since the CPU in one socket obtains lower memory bandwidth from remote MRAM banks (i.e., in the other socket).
For BFS, the trend is even worse. We observe that the huge increase in the inter-DPU synchronization time makes 256 DPUs the best choice for executing BFS.
Our observations for BFS, SCAN-SSA, and SCAN-RSS are \emph{against} the general programming recommendation of using as many working DPUs as possible (Section~\ref{sec:general-recommendations}). These three benchmarks show that the best-performing number of DPUs is \juanc{limited} by the inter-DPU synchronization overhead.
\gboxbegin{\rtask{ko}}
\textbf{Complex synchronization across DRAM Processing Units (i.e., inter-DPU synchronization involving two-way communication with the host CPU)
imposes significant overhead, which limits scalability \juanc{to more DPUs}.}
This is more noticeable when DPUs involved in the synchronization span multiple ranks. \gboxend
\subsubsection{\juan{\textbf{Weak Scaling Results}}}
\label{sec:weak}
Figure~\ref{fig:64dpu_weak} shows the weak scaling results inside a single rank for 1, 4, 16, and 64 DPUs.
In each DPU, we run the number of tasklets that produces the best performance in Section~\ref{sec:1dpu}.
The size of the dataset per DPU is the size shown in Table~\ref{tab:datasets}.
The time is broken down into execution time on the DPU ("DPU"), inter-DPU synchronization time ("\juancrrr{Inter-DPU}"), and \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU transfer times} ("\juancrrr{CPU-DPU}", "\juancrrr{DPU-CPU}"), similarly to the strong scaling results presented in Figures~\ref{fig:1dpu_strong} to~\ref{fig:640dpu_strong} in Section~\ref{sec:strong}.
\begin{figure*}[h]
\includegraphics[width=1.0\linewidth]{figures/64dpu_weak-16-cr-350-2.pdf}
\vspace{-5mm}
\caption{Execution time (ms) of 16 benchmarks on one rank (1, 4, 16, and 64 DPUs, \juancr{using weak scaling$^4$}). \juancr{Inside a DPU, we use the best-performing number of tasklets from Figure~\ref{fig:1dpu_strong}.}} \label{fig:64dpu_weak}
\end{figure*}
We make the following five observations from Figure~\ref{fig:64dpu_weak}.
First, we observe perfect \juancr{(i.e., linear)} weak scaling on the DPU for 14 benchmarks:
the execution time on the DPU (\juanc{i.e., the "DPU" portion of each bar} in Figure~\ref{fig:64dpu_weak}) remains constant for VA, GEMV, SpMV, SEL, UNI, BS, TS, MLP,
HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS, as we increase the number of DPUs (and the dataset size).
Since there is no direct communication between DPUs \juancr{in these} kernels, the even distribution of workload \juancr{(i.e., load balance) among DPUs leads to} performance scaling.
We observe a similar trend \juancr{of perfect weak scaling for BFS} even though there is some load imbalance across DPUs \juancr{in BFS}.
\gboxbegin{\rtask{ko}}
\textbf{Equally-sized problems assigned to different \juancr{DRAM Processing Units (DPUs)} \juancr{and little/no inter-DPU synchronization lead to} linear weak scaling} of the execution time \juanc{spent} on the DPUs (i.e., constant execution time when we increase the number of DPUs and the dataset size accordingly).
\gboxend
Second, NW does \emph{not} scale linearly (i.e., the execution time \juanc{spent} on the DPU is \emph{not} constant) because the size of the problem does \emph{not} increase linearly with the number of DPUs.
We \juancr{increase the lengths of the input sequences to NW} \emph{linearly} with the number of DPUs (see Table~\ref{tab:datasets}, weak scaling dataset). Thus, the size of the 2D score matrix increases \emph{quadratically} with the number of DPUs.
As a result, the execution times on the DPUs increase when we increase the number of DPUs.
However, the longest diagonal of the 2D score matrix increases linearly with the number of DPUs. The processing time of this diagonal shows linear weak scaling as we increase the number of DPUs. We show these experimental results in the Appendix (Section~\ref{app:nw}).
Third, among the benchmarks that require inter-DPU synchronization (SEL, UNI, BFS, NW, HST-S, HST-L, RED, SCAN-SSA, and SCAN-RSS), the synchronization overhead (\juanc{i.e., the "Inter-DPU" portion of each bar in Figure~\ref{fig:64dpu_weak}}) is negligible for SEL, UNI, HST-S, HST-L, RED, SCAN-SSA, and SCAN-RSS.
For NW, the inter-DPU synchronization time takes a significant fraction of the overall execution time,
and it increases with the number of DPUs because the total problem size (\juancr{and} thus, the number of iterations and the amount of inter-DPU synchronization) increases, as indicated above.
In BFS, the inter-DPU synchronization time increases linearly, as we explain \juancr{in Section~\ref{sec:strong} (Figures~\ref{fig:64dpu_strong} and~\ref{fig:640dpu_strong})} for strong scaling experiments. As a result, BFS obtains the best tradeoff between overall \juancr{execution time} (including \juanc{portions of the execution time spent on the DPUs, i.e., "DPU", and inter-DPU synchronization, i.e., "Inter-DPU"}) and number of DPUs \juancr{at} 16 DPUs (i.e., the ratio of overall execution time, \juanc{the "DPU" portions + the "Inter-DPU" portions}, over number of DPUs is lowest at 16 DPUs).
Fourth, \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU transfer times} increase slowly with the number of DPUs for the 13 benchmarks that use parallel transfers between main memory and MRAM banks (VA, GEMV, SEL (only \juancrrr{CPU-DPU}), UNI (only \juancrrr{CPU-DPU}), BS, TS, MLP,
HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS).
As observed from Figure~\ref{fig:cpudpu}, the sustained \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} bandwidths increase sublinearly with the number of DPUs. On average, the increase in sustained \juancrrr{CPU-DPU}/\juancrrr{DPU-CPU} bandwidth for these 13 benchmarks from 1 DPU to 64 DPUs is 20.95$\times$/23.16$\times$.
NW uses parallel \juancrrr{CPU-DPU and DPU-CPU} transfers, but the \juancrrr{CPU-DPU transfer} and \juancrrr{DPU-CPU transfer times} increase with the number of DPUs because the amount of transferred data increases (i.e., the total problem size increases, as described above \juancr{in the second observation from Figure~\ref{fig:64dpu_weak}}).
Fifth, \juancrrr{CPU-DPU transfer} and \juancrrr{DPU-CPU transfer times} increase linearly with the number of DPUs for the benchmarks that cannot use parallel transfers.
SEL and UNI employ serial \juancrrr{DPU-CPU} transfers, as we discuss above. This makes the \juancrrr{DPU-CPU transfer times} in these two benchmarks increase dramatically with the number of DPUs, dominating the entire execution time.
In SpMV and BFS, where we cannot use parallel transfers due to the irregular nature of datasets, \juancrrr{CPU-DPU transfer} and \juancrrr{DPU-CPU transfer times} also increase significantly.
In \juanc{full-blown} real-world applications, where SEL, UNI, SpMV, or BFS may be just one of \juanc{the multiple or many kernels executed by the application}, the CPU-DPU transfer and DPU-CPU transfer times \juanc{can} be amortized and their overhead alleviated.
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\textbf{Sustained bandwidth of parallel \juancrrr{CPU-DPU}/\juancrrr{DPU-CPU} transfers inside a rank \juancr{of DRAM Processing Units (DPUs)} increases sublinearly} with the number of DPUs.
\gboxend
\ignore{
\subsection{\juan{Energy Results}}
\label{sec:energy}
\juangg{We measure energy consumption for our 14 benchmarks in the strong scaling experiment with 1, 5, and 10 ranks.
In order to carry out the measurement, we obtain the energy consumed by the DIMMs connected to the memory controllers, which can be done in the x86 socket~\cite{guide2011intel}.
The measurements only include the energy of the PIM chips (DPUs and MRAM banks).}
\begin{figure*}[h]
\includegraphics[width=\linewidth]{figures/energy-10ranks-temp.pdf}
\vspace{-5mm}
\caption{Energy consumption (J) of 14 benchmarks on 64, 320, and 640 DPUs with the best performing number of tasklets per DPU (strong scaling).} \label{fig:energy-results}
\end{figure*}
\juangg{The main observation from Figure~\ref{fig:energy-results} is that the energy for each benchmark and number of ranks follows the same trends as the execution time shown in Figure~\ref{fig:640dpu_strong}. The reason is that, in the current setup, we measure the power consumed by all DIMMs in the system.}
\jgl{Move to appendix?}
}
\subsection{Comparison to CPU and GPU}
\label{sec:comparison}
We compare the UPMEM PIM architecture to a state-of-the-art CPU and a state-of-the-art GPU in terms of performance and energy consumption.
Our \juancr{goal} is to quantify the potential of the UPMEM PIM architecture as a general-purpose accelerator.
We use state-of-the-art CPU and GPU versions of PrIM benchmarks for comparison to our PIM implementations. The sources of the CPU and GPU versions of the benchmarks are listed in the Appendix (Table~\ref{tab:comparison}).
We compare the UPMEM-based PIM systems with 640 and 2,556 DPUs (Table~\ref{tab:pim-setups}) to an Intel Xeon E3-1225 v6 CPU~\cite{xeon-e3-1225} and an NVIDIA Titan V GPU~\cite{titanv} based on the Volta architecture~\cite{volta} for all our benchmarks.
Table~\ref{tab:pim-cpugpu} summarizes key characteristics of the CPU, the GPU, and the two UPMEM-based PIM systems.
\begin{table}[h]
\begin{center}
\captionof{table}{\juancr{Evaluated} CPU, GPU, and UPMEM-based PIM Systems.}
\vspace{-4mm}
\label{tab:pim-cpugpu}
\resizebox{1.0\linewidth}{!}{
\input{figures/tab-cpugpu}
}
\end{center}
\begin{flushleft}
\resizebox{0.4\linewidth}{!}{
\begin{tabular}{l}
$^\star Estimated\ GFLOPS = 3.3\ GHz \times 4\ cores \times 2\ instructions\ per\ cycle$. \\
$^\dagger Estimated\ TDP = \frac{Total\ DPUs}{DPUs/chip} \times 1.2\ W/chip$~\cite{devaux2019}.
\end{tabular}
}
\end{flushleft}
\end{table}
For our UPMEM-based PIM system performance measurements, we include the time spent in the DPU and the time spent for inter-DPU synchronization on the UPMEM-based PIM systems.
For our CPU and GPU performance measurements, we include only the kernel times (i.e., we do not include data transfers between the host CPU and the GPU in the GPU versions).
For energy measurements, we use Intel RAPL~\cite{rapl} on the CPU and NVIDIA SMI~\cite{smi} on the GPU.
In the UPMEM PIM systems, we obtain the energy consumed by the DIMMs connected to the memory controllers, which can be done in x86 sockets~\cite{guide2011intel}.
The measurements include only the energy of the PIM chips.
\subsubsection{Performance Comparison}
\label{sec:comparison-perf}
Figure~\ref{fig:comparison-perf} shows the speedup of the UPMEM-based PIM systems with 640 and 2,556 DPUs and the Titan V GPU over the Intel Xeon CPU.
\begin{figure*}[h]
\includegraphics[width=0.8\linewidth]{figures/comparison_cpugpu_wide-350-perf-1.pdf}
\vspace{-2mm}
\caption{Performance comparison between the UPMEM-based PIM systems with 640 and 2,556 DPUs, a Titan V GPU, and an Intel Xeon E3-1225 v6 CPU. Results are normalized to the CPU performance \juancr{(y-axis is log scale)}.
\juancr{There are two groups of benchmarks: (1) benchmarks that are more suitable to the UPMEM PIM architecture, and (2) benchmarks that are less suitable to the UPMEM PIM architecture.}} \label{fig:comparison-perf}
\end{figure*}
We make four key observations from Figure~\ref{fig:comparison-perf}.
First, the 2,556-DPU system and the 640-DPU system are on average 20.1$\times$ and 9.4$\times$ faster than the CPU. The highest speedup is for UNI: the 2,556-DPU system is 629.5$\times$ and the 640-DPU system is 234.4$\times$ faster than the CPU.
Even benchmarks that make heavy use of integer multiplication (GEMV, TS, and MLP) are much faster on the UPMEM-based PIM systems (18.7-86.0$\times$ faster on the 2,556-DPU system, and 10.3-23.2$\times$ \juancr{faster} on the 640-DPU system).
This observation reflects the large performance improvements that workloads running on a conventional system with a CPU can experience if we expand the system with DIMMs of PIM-enabled memory (\juancr{see} Figure~\ref{fig:scheme}).
Second, the UPMEM-based PIM systems outperform the CPU for all of the benchmarks except SpMV, BFS, and NW.
SpMV has three characteristics that make it less suitable for UPMEM-based PIM systems: (1) it operates on floating point data, (2) it uses multiplication heavily, and (3) it suffers from load imbalance due to the irregular nature of sparse matrices. Regarding the first two characteristics, we know from our analyses in Sections~\ref{sec:arith-throughput} and~\ref{sec:throughput-oi} that floating point multiplication is very costly because of the lack of native support. Regarding the third characteristic, we know from our strong scaling evaluation in Section~\ref{sec:strong} that load imbalance across DPUs causes sublinear scaling for SpMV.
BFS performs much worse on the UPMEM-based PIM systems than on the CPU because of the large overhead of inter-DPU synchronization via the host CPU, as we discuss in Section~\ref{sec:performance}.
Since the inter-DPU synchronization overhead of BFS increases linearly with the number of DPUs, the 2,556-DPU system is significantly slower than the 640-DPU system.\footnote{BFS can obtain better performance by running it using much fewer DPUs. The reason is that BFS performance does not scale with many DPUs, as shown in Sections~\ref{sec:strong} and~\ref{sec:weak} (Figures~\ref{fig:64dpu_strong}-\ref{fig:64dpu_weak}). However, we do not \juanc{run BFS using much fewer DPUs} as we study the full-blown system performance utilizing \emph{all} DPUs in this experiment.}
Note that the goal of these experiments is \emph{not} to show the performance of the best-performing number of DPUs for a given workload, but the performance of the full-blown systems with \juanc{all} 2,556 DPUs and 640 DPUs \juanc{active} for each workload.
NW is one order of magnitude slower on both UPMEM-based PIM systems than on the CPU \juancr{due to the inter-DPU synchronization overhead}. Unlike for BFS, the inter-DPU synchronization overhead of NW does not depend strongly on the number of DPUs. As a result, the 2,556-DPU system has the same performance as the 640-DPU system for this benchmark.
Third, the 2,556-DPU system is faster than the GPU for 10 benchmarks: VA, SEL, UNI, BS, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS.
\juancr{These 10 benchmarks are more suitable to the UPMEM PIM architecture due to three key characteristics:}
(1) streaming memory accesses, (2) no or little inter-DPU communication, and (3) no or little use of \juancrrr{integer} multiplication, \juancrrr{integer} division, or floating point operations.
The speedups of the 2,556-DPU system over the GPU for these benchmarks range between 6\% (for SCAN-SSA) and 97.9$\times$ (for BS), \juancr{with an average of} 2.68$\times$.
It is especially interesting that the 2,556-DPU system outperforms the Titan V for some of these benchmarks, which are traditionally considered GPU-friendly and are subject of GPU optimization studies, libraries, and reference implementations, such as VA~\cite{cudasamples}, SEL and UNI~\cite{gomezluna2015ds,bell2012thrust}, HST-S and HST-L~\cite{gomez2013atomics,gomez2013optimized,van2013simulation}, RED~\cite{degonzalo2019automatic,harris2007optimizing}, SCAN-SSA~\cite{hwukirk2016.scan, sengupta2008efficient,bell2012thrust}, SCAN-RSS~\cite{yan2013streamscan, hwukirk2016.scan, dotsenko2008fast,merrill2015cuda}, and TRNS~\cite{sung2014matrix, gomez2016matrix, catanzaro2014decomposition}.
In summary, the UPMEM PIM architecture outperforms \juanc{the state-of-the-art GPU} for workloads \juanc{that exhibit} the three key characteristics \juanc{that make them potentially suitable for execution on the UPMEM-based PIM system}.
Fourth, the 640-DPU system is generally slower than the GPU, but for the 10 benchmarks where the 2,556-DPU system is faster than the GPU (VA, SEL, UNI, BS, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS), the average performance of the 640-DPU system is within 69\% of the performance of the GPU.
Among these benchmarks, the 640-DPU system is faster than the GPU for two benchmarks: HST-S and BS.
The GPU version of histogram~\cite{gomez2013optimized,gomezluna2017chai} (the same one for HST-S and HST-L) uses atomic operations that burden the performance heavily~\cite{gomez2013atomics}.
As a result, the 640-DPU system is 1.89$\times$ faster than the GPU for HST-S.
For BS, the GPU version suffers from many random memory accesses, which greatly reduce the achievable memory bandwidth.
The 640-DPU system is 18.54$\times$ faster than the GPU for BS.
\gboxbegin{\rtask{ko}}
\textbf{The UPMEM-based PIM system \juanc{can outperform a state-of-the-art GPU}} on workloads with \textbf{three key characteristics}:
\begin{enumerate}[1.]
\item Streaming memory accesses
\item No or little inter-DPU synchronization
\item No or little use of integer multiplication, integer division, or floating point operations
\end{enumerate}
\juancr{These three key characteristics \juanc{make} a \textbf{workload \juanc{potentially suitable} to the UPMEM PIM architecture}.}
\gboxend
\subsubsection{Energy Comparison}
\label{sec:comparison-ener}
Figure~\ref{fig:comparison-ener} shows the energy savings of the UPMEM-based PIM system with 640 DPUs and the Titan V GPU over the Intel Xeon CPU.
At the time of writing, energy measurement is not supported on the 2,556-DPU system; we aim to include such measurements in an extended version of this work.
\begin{figure*}[h]
\includegraphics[width=0.8\linewidth]{figures/comparison_cpugpu_wide-350-ener-1.pdf}
\vspace{-2mm}
\caption{Energy comparison between the UPMEM-based PIM system with 640 DPUs, a Titan V GPU, and an Intel Xeon E3-1225 v6 CPU. Results are normalized to the CPU energy consumption \juancr{(y-axis is log scale)}.
\juancr{There are two groups of benchmarks: (1) benchmarks that are more suitable to the UPMEM PIM architecture, and (2) benchmarks that are less suitable to the UPMEM PIM architecture.}} \label{fig:comparison-ener}
\end{figure*}
We make three key observations from Figure~\ref{fig:comparison-ener}.
First, the 640-DPU system consumes, on average, 1.56$\times$ less energy than the CPU for all 16 benchmarks.
For 13 benchmarks (VA, GEMV, SEL, UNI, BS, TS, MLP, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS), the 640-DPU system provides \juancr{an average} energy savings \juancr{of 4.94$\times$} over the CPU.
The maximum energy savings \juancr{is} 39.14$\times$ for UNI.
Our experiments show that the 640-DPU system, featuring PIM-enabled memory with a capacity of 40 GB, provides outstanding energy savings over a state-of-the-art Intel Xeon CPU (with memory capacity of 32 GB) for 13 out of 16 benchmarks.
This energy savings comes from the lower execution times of these 13 benchmarks on the 640-DPU system (Figure~\ref{fig:comparison-perf}).
We expect that the energy savings of the 2,556-DPU system, with $\sim$6$\times$ more DPUs, 160 GB of PIM-enabled memory, and higher frequency (350 vs. 267 MHz), over the CPU will be even higher due to higher performance (thus, \juanc{lower} static energy) and less data movement.
Second, the 640-DPU system is only less energy efficient than the CPU for SpMV, BFS, and NW, \juancr{which is} in line with our observations about performance (Section~\ref{sec:comparison-perf}).
Third, compared to the GPU, the 640-DPU system consumes less energy for BS and HST-S, \juancr{since these are the two benchmarks for which the 640-DPU system outperforms the GPU (see Section~\ref{sec:comparison-perf}).}
\juancr{For the 2,556-DPU system, we expect energy results to follow the performance results in Section~\ref{sec:comparison-perf}.
The 10 benchmarks (VA, SEL, UNI, BS, HST-S, HST-L, RED, SCAN-SSA, SCAN-RSS, and TRNS) that run faster on the 2,556-DPU system than on the GPU will also likely consume less energy.
This is because the major cause of performance improvement and energy reduction is the same: the reduction in data movement between memory and processors that the UPMEM-based PIM systems provide.}
\gboxbegin{\rtask{ko}}
\textbf{The UPMEM-based PIM system \juanc{provides large} energy savings over \juanc{a state-of-the-art} CPU} due to higher performance (thus, \juanc{lower} static energy) and less data movement between memory and processors.
\juanc{\textbf{The UPMEM-based PIM system provides energy savings over a state-of-the-art CPU/GPU on workloads where it outperforms the CPU/GPU}.
This is because the source of both performance improvement and energy savings is the same: \textbf{the significant reduction in data movement between the memory and the processor cores}, which the UPMEM-based PIM system can provide for PIM-suitable workloads.}
\gboxend
\subsubsection{Discussion}
\label{sec:comparison-discussion}
These observations are useful for programmers to anticipate how much performance and energy savings they can get from the UPMEM hardware compared to traditional \juanc{CPU and GPU systems} for different types of workloads.
One limitation of this comparison is the difficulty of establishing a common control factor across all three types of systems (CPU, GPU, and UPMEM-based PIM system) to ensure a fair comparison.
To this end, the 640-DPU PIM system has comparable memory capacity to the CPU (40 GB vs. 32 GB).
However, the 2,556-DPU system has much higher memory capacity ($\sim$160 GB).
On the other hand, the \juanc{640-DPU} UPMEM-based PIM system and the GPU have comparable cost (the 640-DPU system being a little cheaper).
Other \juanc{hardware} characteristics, such as fabrication technology, process node, number of cores, or frequency (Table~\ref{tab:pim-cpugpu}), are very different \juanc{across the four systems that we evaluate in Section~\ref{sec:comparison}}.
\juancr{We note} that the UPMEM hardware is still maturing and is expected to run at a higher frequency in the near future (400-450 MHz instead of 350 or 267 MHz) and potentially be manufactured with a smaller technology node~\cite{comm-upmem}.
Hence, the results we report in this comparison may underestimate the full potential of the UPMEM-based PIM architecture.
\ignore{
\juan{Our first observation is that the UPMEM PIM system consumes on average $6.32\times$ less energy than the CPU.
We also observe that the GPU can achieve more energy savings, but we expect that this difference reduces as the technology node of the UPMEM chip decreases.}
\todo{Revise.}
\ie{Include comment about fairness and that UPMEM results are underestimated (see Comment 15)}
}
\section{Related Work}
To our knowledge, \juancr{this paper} provides the first comprehensive \juancr{characterization and} analysis of the first publicly-available real-world PIM architecture \juancr{along with} the first \juancr{open-source} benchmark suite for a real-world PIM architecture.
We briefly review related work on PIM architectures.
There are two main approaches to PIM~\cite{mutlu2019,mutlu2020modern,ghoseibm2019,ghose2019arxiv}: (1) \emph{processing-using-memory} (\emph{PUM}) and (2) \emph{processing-near-memory} (\emph{PNM}).
No prior work on PUM or PNM provides results from real commercial systems or a benchmark suite to evaluate PIM architectures.
\emph{\textbf{Processing using memory (PUM)}} exploits the existing memory architecture and the operational principles of the memory cells and circuitry to perform operations within each memory chip at low cost.
Prior works propose PUM mechanisms using SRAM~\cite{aga.hpca17,eckert2018neural,fujiki2019duality,kang.icassp14}, DRAM~\cite{seshadri.micro17,seshadri.arxiv16,seshadri2018rowclone,seshadri2013rowclone,angizi2019graphide,kim.hpca18,kim.hpca19,gao2020computedram,chang.hpca16,xin2020elp2im,li.micro17,deng.dac2018,hajinazarsimdram,rezaei2020nom,wang2020figaro,ali2019memory,seshadri2020indram, seshadri.bookchapter17,seshadri.thesis16,Seshadri:2015:ANDOR,seshadri.bookchapter17.arxiv,ferreira2021pluto,olgun2021quactrng},
PCM~\cite{li.dac16},
MRAM~\cite{angizi2018pima,angizi2018cmp,angizi2019dna},
or RRAM/memristive~\cite{levy.microelec14,kvatinsky.tcasii14,shafiee2016isaac,kvatinsky.iccd11,kvatinsky.tvlsi14,gaillardon2016plim,bhattacharjee2017revamp,hamdioui2015memristor,xie2015fast,hamdioui2017myth,yu2018memristive,puma-asplos2019, ankit2020panther,chi2016prime,song2018graphr} memories.
PUM mechanisms enable different types of operations such as data copy and initialization~\cite{chang.hpca16,seshadri2018rowclone,seshadri2013rowclone,aga.hpca17,rezaei2020nom,wang2020figaro, seshadri.bookchapter17},
bulk bitwise operations (e.g., a functionally-complete set of Boolean logic operations)~\cite{seshadri.micro17,seshadri.arxiv16,li.dac16,angizi2018pima,angizi2018cmp,angizi2019dna,aga.hpca17,li.micro17,mandelman.ibmjrd02,xin2020elp2im,gao2020computedram, seshadri2020indram, Seshadri:2015:ANDOR},
and simple arithmetic operations (e.g., addition, multiplication, implication)~\cite{levy.microelec14,kvatinsky.tcasii14,aga.hpca17,kang.icassp14,li.micro17,shafiee2016isaac,eckert2018neural,fujiki2019duality,kvatinsky.iccd11,kvatinsky.tvlsi14,gaillardon2016plim,bhattacharjee2017revamp,hamdioui2015memristor,xie2015fast,hamdioui2017myth,yu2018memristive,deng.dac2018,angizi2019graphide,ferreira2021pluto}.
A recent work, called SIMDRAM~\cite{hajinazarsimdram}, designs a framework for implementing and executing arbitrary operations in a \juancr{bit-serial} SIMD fashion inside DRAM arrays, \juancr{building on the Ambit substrate~\cite{seshadri.micro17,seshadri.arxiv16}}.
\emph{\textbf{Processing near memory (PNM)}} integrates processing elements (e.g., functional units, accelerators, simple processing cores, reconfigurable logic) near or inside the memory (e.g.,~\cite{syncron,fernandez2020natsa,cali2020genasm,alser2020accelerating,kim.arxiv17,kim.bmc18,ahn.pei.isca15,ahn.tesseract.isca15,boroumand.asplos18,boroumand2019conda,boroumand.arxiv17,boroumand2016pim,singh2019napel,asghari-moghaddam.micro16,DBLP:conf/sigmod/BabarinsaI15,farmahini2015nda,gao.pact15,DBLP:conf/hpca/GaoK16,gu.isca16,guo2014wondp,hashemi.isca16,cont-runahead,hsieh.isca16,kim.isca16,kim.sc17,DBLP:conf/IEEEpact/LeeSK15,liu-spaa17,morad.taco15,nai2017graphpim,pattnaik.pact16,pugsley2014ndc,zhang.hpdc14,zhu2013accelerating,DBLP:conf/isca/AkinFH15,gao2017tetris,drumond2017mondrian,dai2018graphh,zhang2018graphp,huang2020heterogeneous,zhuo2019graphq,santos2017operand,boroumand2021polynesia,boroumand2021mitigating,besta2021sisa}).
Many of these PNM works place PIM logic inside the logic layer of 3D-stacked memories~\cite{syncron,cali2020genasm,alser2020accelerating,kim.arxiv17,kim.bmc18,ahn.pei.isca15,ahn.tesseract.isca15,boroumand.asplos18,boroumand2019conda,boroumand.arxiv17,boroumand2016pim,singh2019napel,guo2014wondp,hsieh.isca16,kim.isca16,kim.sc17,liu-spaa17,nai2017graphpim,pattnaik.pact16,pugsley2014ndc,zhang.hpdc14,DBLP:conf/isca/AkinFH15,gao2017tetris,drumond2017mondrian,dai2018graphh,zhang2018graphp,huang2020heterogeneous,zhuo2019graphq,boroumand2021polynesia,boroumand2021mitigating}, at the memory controller~\cite{hashemi.isca16,cont-runahead}, on the DDRX DIMMs~\cite{asghari-moghaddam.micro16,alves2015opportunities,medal2019}, or in the same package as the CPU connected via silicon interposers~\cite{fernandez2020natsa,singh2020nero}.
Another body of recent works study and propose solutions to system integration challenges in PIM-enabled systems, such as memory coherence~\cite{boroumand.arxiv17,boroumand2016pim, boroumand2019conda}, virtual memory~\cite{impica,hajinazar2020virtual}, synchronization~\cite{syncron}, or PIM suitability of workloads~\cite{deoliveira2021}.
Several works explore the acceleration opportunities offered by the UPMEM PIM architecture for bioinformatics~\cite{lavenier2016,lavenier2020}, skyline computation~\cite{Zois2018}, or compression~\cite{nider2020}.
Our work is the first one that performs a comprehensive architecture characterization of the UPMEM PIM architecture and studies the PIM suitability of a large number of workloads.
We are also the first to openly \juanc{and freely} provide the first benchmark suite for real PIM systems.
A recent work~\cite{kwon202125} presents a real-world PIM system with programmable near-bank \juancr{computation} units, called FIMDRAM, based on HBM technology~\cite{jedec.hbm.spec,lee.taco16}. The FIMDRAM architecture, \juancr{designed specifically for} machine learning applications, implements a SIMD pipeline with simple multiply-and-accumulate units~\cite{shin2018mcdram,cho2020mcdram}.
Compared to the \juancr{more general-purpose} UPMEM PIM architecture, FIMDRAM is focused on a specific domain of applications (i.e., machine learning), and thus it may lack flexibility \juancr{to support} a wider range of applications. A comprehensive \juancr{characterization and} analysis of the FIMDRAM architecture, along the lines of our work, \juancr{can} greatly help \juanc{researchers, programmers, and architects} to understand \juancr{its} potential.
\section*{{\Large APPENDIX}}
This appendix presents some additional results for one of our microbenchmarks (Section~\ref{app:throughput-oi}) and four of the PrIM benchmarks (Section~\ref{sec:appendix-results}).
Section~\ref{sec:appendix-comparison} shows the sources of the CPU and GPU versions of PrIM benchmarks.
\section{Arithmetic Throughput versus Number of Tasklets}
\label{app:throughput-oi}
Figure~\ref{fig:ai-dpu-tasklets} presents arithmetic throughput results for different numbers of tasklets at different operational intensities.
This figure shows a different view of the experimental results presented in Figure~\ref{fig:ai-dpu}, with the \juanc{goal} of showing the variation in arithmetic throughput for different operational intensities.
{\centering
\centering
\vspace{2mm}
\includegraphics[width=1.0\linewidth]{figures/ai-dpu-wide4-tasklets-350-1.pdf}
\vspace{-8mm}
\captionof{figure}{Arithmetic throughput versus number of tasklets for different operational intensities of (a) 32-bit integer addition, (b) 32-bit integer multiplication, (c) 32-bit floating point addition, and (d) 32-bit floating point multiplication. The legend shows the operational intensity values (in OP/B). The y-axis is log scale.}
\label{fig:ai-dpu-tasklets}
}
\vspace{4mm}
We make two key observations from Figure~\ref{fig:ai-dpu-tasklets}.
First, for any data type and operation, the highest possible throughput is achieved at 11 tasklets, i.e., the number of tasklets to fully utilize the pipeline.
However, the operational intensity at which the highest throughput value is reached depends on the actual data type and operation.
For example, the highest throughput of 32-bit integer addition is achieved at $\frac{1}{4}$~OP/B, i.e., 1 addition per 32-bit element. For floating point multiplication, \juanc{the highest throughput} is achieved at $\frac{1}{128}$~OP/B, i.e., 1 multiplication every 32 32-bit elements.
Second, for lower operational intensities, the number of tasklets necessary to reach the saturation throughput is less than 11.
This happens in the memory-bound regions, where the MRAM access latency dominates the overall latency. This observation is in line with our observations for COPY and ADD benchmarks in Section~\ref{sec:mram-streaming}.
\section{Extended results for Needleman-Wunsch, image histogram, reduction, and scan}
\label{sec:appendix-results}
This section presents some additional results for four of the PrIM benchmarks.
First, we present an extended evaluation of NW (Section~\ref{app:nw}).
Second, we compare HST-S and HST-L for different histogram sizes (Section~\ref{app:histogram}).
Third, we show an evaluation of RED with three different mechanisms to perform local intra-DPU reduction (Section~\ref{app:reduction}).
Fourth, we compare SCAN-SSA and SCAN-RSS for different array sizes (Section~\ref{app:scan}).
\subsection{Needleman-Wunsch}
\label{app:nw}
We present additional results for the weak scaling experiment of NW.
In this experiment, we increase the length of the sequences to align proportionally to the number of DPUs. Thus, the size of the 2D score matrix increases quadratically with the number of DPUs.
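To make this explicit, if each DPU is assigned sequences of length $L$ and we use $d$ DPUs, the aligned sequences have length $L \times d$; thus, illustratively,
\begin{equation*}
Score\ matrix\ cells = (L \times d)^2, \qquad Longest\ diagonal\ length = L \times d,
\end{equation*}
so the total work grows quadratically with $d$ while the longest diagonal grows only linearly.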
Figure~\ref{fig:appendix_nw} shows weak scaling results of (a) the complete execution of NW (including all iterations) and (b) the execution of only the longest diagonal.
\begin{figure*}[h]
\includegraphics[width=0.6\linewidth]{figures/nw_comparison-350-1.pdf}
\vspace{-2mm}
\caption{Weak scaling evaluation of NW: (a) complete execution of NW, (b) execution of the longest diagonal.} \label{fig:appendix_nw}
\end{figure*}
We make two observations from Figure~\ref{fig:appendix_nw}.
First, the execution times on the DPUs for the complete execution (Figure~\ref{fig:appendix_nw}a) increase with the number of DPUs, since the size of the problem (the 2D score matrix) increases quadratically. We make the same observation in Section~\ref{sec:weak}.
Second, the execution times on the DPUs for the longest diagonal (Figure~\ref{fig:appendix_nw}b) remain flat \juanc{as the number of DPUs increases}. The reason is that the length of the longest diagonal increases linearly with the length of the sequences and the number of DPUs. As a result, we observe linear weak scaling for the longest diagonal.
\juanc{These results show (1) that a larger number of active DPUs is more beneficial for NW in the computation of the longest diagonals of the 2D score matrix, and (2) why we do not observe linear scaling for the complete NW.}
\subsection{Image Histogram}
\label{app:histogram}
We present results for different histogram sizes for our two versions of histogram (HST-S, HST-L).
Figure~\ref{fig:appendix_histo} shows the execution time results for histogram sizes between 64 and 4096. The input is the one specified in Table~\ref{tab:datasets}, which is an image of 12-bit depth (thus, maximum histogram size is 4096).
\begin{figure*}[h]
\includegraphics[width=0.5\linewidth]{figures/histo_comparison-350-1.pdf}
\vspace{-5mm}
\caption{\juanc{Execution times (ms) of} two versions of histogram (HST-L, HST-S) on 1 DPU.} \label{fig:appendix_histo}
\end{figure*}
The results show that HST-S is
$1.6-2.5\times$ faster than HST-L for histograms between 64 and 1024 bins.
The performance of HST-S gets worse when increasing the histogram size because the number of tasklets that can \juanc{run on a DPU} decreases. For example, for 512 bins, only 8 tasklets can be launched because of the limited amount of WRAM (each tasklet has its own local histogram).
For 4096 bins, HST-S can only launch 2 tasklets.
Beyond 2048 bins, HST-L is faster, as its execution time is independent of the histogram size.
\subsection{Reduction}
\label{app:reduction}
We compare three versions of RED that we introduce in Section~\ref{sec:reduction}.
Recall that RED has two steps. In the first step, each tasklet accumulates the values of an assigned chunk of the input array. In the second step, RED performs the final reduction of the local sums of all tasklets.
The difference between the three versions \juanc{is in how the second step is implemented}.
The first version uses a single tasklet to perform a sequential reduction in the second step \juanc{(SINGLE in Figures~\ref{fig:appendix_red_sync} to~\ref{fig:appendix_red_2M})}.
The other two versions implement a parallel tree-based reduction in the second step \juanc{(see Section~\ref{sec:reduction})}. The only difference \juanc{between the other two versions} is the synchronization primitive used for synchronization at the end of each tree level: (1) barriers for all tasklets \juanc{(BARRIER in Figures~\ref{fig:appendix_red_sync} to~\ref{fig:appendix_red_2M})}, or (2) handshakes between pairs of tasklets \juanc{(HANDS in Figures~\ref{fig:appendix_red_sync} to~\ref{fig:appendix_red_2M})}.
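The following is a minimal sketch of the barrier-based tree reduction (the BARRIER version); the function name, the \texttt{local\_sum} array layout (one 64-bit entry per tasklet in WRAM), and the power-of-two assumption on \texttt{NR\_TASKLETS} are our own.
\begin{lstlisting}[style=myC]
#include <stdint.h>
#include <defs.h>     // me(): tasklet ID
#include <barrier.h>  // BARRIER_INIT, barrier_wait

BARRIER_INIT(red_barrier, NR_TASKLETS);

// Second step of RED: tree-based reduction over the per-tasklet sums.
// Assumes NR_TASKLETS is a power of two.
void second_step(uint64_t *local_sum) {
    for (unsigned int stride = 1; stride < NR_TASKLETS; stride <<= 1) {
        barrier_wait(&red_barrier);          // synchronize at each tree level
        if ((me() & (2 * stride - 1)) == 0)  // tasklets active at this level
            local_sum[me()] += local_sum[me() + stride];
    }
    // After the loop, local_sum[0] holds the final result.
}
\end{lstlisting}
The HANDS version replaces the all-tasklet barrier at each tree level with handshakes between pairs of tasklets, which explains its better behavior as the number of tasklets grows (Figure~\ref{fig:appendix_red_sync}).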
Figure~\ref{fig:appendix_red_sync} shows the number of execution cycles needed to perform sequential (SINGLE) or the parallel tree-based (BARRIER, HANDS) reduction for 2 to 16 tasklets on one DPU.
\begin{figure*}[h]
\includegraphics[width=0.6\linewidth]{figures/reduction-tasklets-sync-350.pdf}
\vspace{-5mm}
\caption{\juanc{Effect of} sequential reduction (SINGLE) vs. parallel tree-based reductions (BARRIER, HANDS), \juanc{in the second step of the RED benchmark}.} \label{fig:appendix_red_sync}
\end{figure*}
We observe that the most efficient of the three versions is the sequential reduction (SINGLE).
However, it is only a few cycles faster \juanc{(6\% faster with 16 tasklets)} than the tree-based version with handshakes (HANDS).
We also observe the high cost of barriers when the number of tasklets increases.
These results indicate that synchronization primitives impose high overhead in the current implementation of the UPMEM PIM architecture.
Nevertheless, the relative weight of the final reduction is negligible when the input array is large.
Figure~\ref{fig:appendix_red_2K} shows the execution cycles of the three versions for an input array of 2K 64-bit elements with 2-16 tasklets on one DPU. The difference between the three versions is very small, but we still observe that SINGLE is \juanc{slightly} faster \juanc{(i.e., 2\% over HANDS, and 47\% over BARRIER)}.
\begin{figure*}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/reduction-tasklets-2K-350.pdf}
\vspace{-5mm}
\caption{\juanc{Execution cycles of} three versions of reduction of 2K 64-bit elements on 1 DPU.} \label{fig:appendix_red_2K}
\end{figure*}
For an array of 2M 64-bit elements (Figure~\ref{fig:appendix_red_2M}), the difference in performance \juanc{of the three versions} is completely negligible, since most of the execution cycles are spent in the first step of RED.
\begin{figure*}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/reduction-tasklets-2M-350.pdf}
\vspace{-5mm}
\caption{\juanc{Execution cycles of} three versions of reduction of 2M 64-bit elements on 1 DPU.} \label{fig:appendix_red_2M}
\end{figure*}
\subsection{Prefix Sum (Scan)}
\label{app:scan}
We compare the execution time of our two versions of scan, SCAN-SSA and SCAN-RSS, for different array sizes (2048, 4096, 8192, 16384, 65536 elements) \juanc{on the DPU}.
Figure~\ref{fig:appendix_scan} shows the execution time results. For both versions, the figure shows the breakdown of DPU kernel times (\juanc{"DPU Scan" + "DPU Add"} in SCAN-SSA, and \juanc{"DPU Reduction" + "DPU Scan"} in SCAN-RSS) and the intermediate scan in the host CPU ("\juancrrr{Inter-DPU}").
\begin{figure*}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/scan_comparison-350-1.pdf}
\vspace{-5mm}
\caption{Two versions of scan (SCAN-SSA, SCAN-RSS) on 1 DPU.} \label{fig:appendix_scan}
\end{figure*}
The main observation from these results is that SCAN-SSA runs faster for small arrays (2048-8192).
Scan kernel time and \juancrrr{Inter-DPU} time are very similar in both SCAN-SSA and SCAN-RSS, but the Add kernel is faster than the Reduction kernel for small sizes.
The reason is that the Reduction kernel is burdened by the overhead of intra-DPU synchronization (barrier) and the final reduction, where \juanc{only} a single tasklet works. This overhead \juanc{becomes} negligible for larger arrays. As a result, SCAN-RSS is faster for large arrays (more than 16384 elements).
\section{CPU and GPU versions of the benchmarks}
\label{sec:appendix-comparison}
Table~\ref{tab:comparison} shows the sources of the CPU and GPU versions of PrIM benchmarks, which we use for comparison purposes in Section~\ref{sec:comparison}.
{We provide these CPU and GPU versions as part of our PrIM benchmark suite~\cite{gomezluna2021repo}.}
\vspace{2mm}
\fboxsep0pt
\colorbox{white}{
\begin{minipage}{\textwidth}
\begin{center}
\captionof{table}{CPU and GPU versions of PrIM benchmarks.}
\vspace{-4mm}
\label{tab:comparison}
\resizebox{0.5\linewidth}{!}{
\input{figures/tab-comparison.tex}
}
\end{center}
\end{minipage}
}
\section{Performance Characterization of a UPMEM DPU}
\label{sec:microbench}
This section presents \juancrr{the first} performance characterization of a UPMEM DPU using microbenchmarks to assess various \juancrr{architectural} limits \juancr{and bottlenecks}.
Section~\ref{sec:arith-throughput-wram} evaluates the throughput of arithmetic operations and WRAM bandwidth of a DPU using a streaming \juancrrr{microbenchmark}.
Section~\ref{sec:mram-bandwidth} evaluates the \juancrr{sustained} bandwidth between MRAM and WRAM.
Section~\ref{sec:throughput-oi} evaluates the impact of the operational intensity of a workload on the \juancrrr{arithmetic} throughput of the DPU.
Finally, Section~\ref{sec:cpu-dpu} evaluates the bandwidth between the main memory of the host and \juancrrr{the MRAM banks}.
Unless otherwise stated, \juancr{we report experimental results} on the \juancrr{larger,} 2,556-DPU system presented in Section~\ref{sec:sys-org}.
\juancrr{All observations and trends identified in this section also apply to the \juancr{older 640-DPU system (we verified this experimentally)}.}
All microbenchmarks used in this section are publicly \juanc{and freely} available~\cite{gomezluna2021repo}.
\subsection{Arithmetic Throughput and WRAM Bandwidth}\label{sec:arith-throughput-wram}
The DPU pipeline is capable of performing one integer \juancrr{addition/subtraction} operation every cycle and up to one 8-byte \juancr{WRAM load/store} every cycle when the pipeline is full~\cite{devaux2019}.
Therefore, at 350 MHz, the theoretical peak arithmetic throughput is \juancrr{350 Millions of OPerations per Second (MOPS), assuming only integer addition operations are issued into the pipeline,} and the theoretical peak WRAM bandwidth is 2,800~MB/s.
In this section, we evaluate the arithmetic throughput and \juancr{sustained} WRAM bandwidth \juancrr{that can be achieved} by a streaming \juancr{microbenchmark (i.e., a benchmark with unit-stride access to memory locations)} and how \juancrr{the arithmetic throughput and WRAM bandwidth} vary with the number of tasklets deployed.
\subsubsection{\textbf{Microbenchmark Description}}\label{sec:wram-benchmark-desc}
To evaluate arithmetic throughput and WRAM bandwidth \juancr{in streaming workloads}, we implement \juancrr{a set of microbenchmarks~\cite{gomezluna2021repo}} where every tasklet loops over elements of an array in WRAM and performs read-modify-write operations.
We measure the time it takes \juancr{to perform} WRAM loads, arithmetic operations, WRAM stores, and loop control.
We do \emph{not} measure the time it takes \juancr{to perform} MRAM-WRAM DMA transfers \juancr{(we will study them separately in Section~\ref{sec:mram-bandwidth})}.
\noindent\paragraph{\textbf{Arithmetic Throughput.}} For arithmetic throughput, we examine the addition, subtraction, multiplication, and division operations for 32-bit integers, 64-bit integers, floats, and doubles.
Note that the throughput for unsigned integers is the same as that for signed integers.
As we indicate \juanc{at} the beginning of Section~\ref{sec:arith-throughput-wram}, the DPU pipeline is capable of performing one integer addition/subtraction operation every cycle, assuming that the pipeline is full~\cite{devaux2019}.
However, real-world workloads do not execute \juanc{\emph{only}} integer addition/subtraction operations. Thus, the theoretical peak arithmetic throughput of 350 MOPS is not realistic \juancr{for full execution of real workloads}.
Since the DPUs store operands in WRAM (Section~\ref{sec:dpu-architecture}), a realistic evaluation of arithmetic throughput should consider the accesses to WRAM to read source operands and write destination operands. One access to WRAM involves one WRAM address calculation and one load/store operation.
Listing~\ref{lst:ubench_arith} shows an example microbenchmark for \juancr{the} throughput evaluation of 32-bit integer addition. Listing~\ref{sublst:codea} shows our microbenchmark written in C. The operands are stored in \texttt{bufferA}, which we allocate in WRAM using \texttt{mem\_alloc}~\cite{upmem-guide} (line 2).
The \texttt{for} loop in line 3 goes through \juancr{each element of} \texttt{bufferA} and \juancr{adds} a scalar value \texttt{scalar} to each element.
In each iteration of the loop, we load one element of \texttt{bufferA} \juanc{into} a temporal variable \texttt{temp} (line 4), add \texttt{scalar} \juancr{to it} (line 5), and store the result \juanc{back} into the same position of \texttt{bufferA} (line 6).
Listing~\ref{sublst:codeb} shows the compiled code, which we can inspect using UPMEM's Compiler Explorer~\cite{upmem-explorer}.
The loop contains 6 instructions: WRAM address calculation (\texttt{lsl\_add}, line 3), WRAM load (\texttt{lw}, line 4), addition (\texttt{add}, line 5), WRAM store (\texttt{sw}, line 6), loop index update (\texttt{add}, line 7), and conditional branch (\texttt{jneq}, line 8).
For a 32-bit integer subtraction (\texttt{sub}), the number of instructions in the streaming loop is also 6, but for other operations and data types the number of instructions can be different \juanc{(as we show below)}.
\vspace{-1mm}
\begin{figure}[h]
\setcaptiontype{lstlisting}
\begin{minipage}{.53\textwidth}
\begin{lstlisting}[style=myC]
#define SIZE 256
int* bufferA = mem_alloc(SIZE * sizeof(int));
for(int i = 0; i < SIZE; i++){
    int temp = bufferA[i];
    temp += scalar;
    bufferA[i] = temp;
}
\end{lstlisting}
\vspace{-25pt}
\subcaption{C-based code.}
\label{sublst:codea}
\end{minipage}
\hspace{2mm}
\begin{minipage}{.41\textwidth}
\begin{lstlisting}[style=myC]
move r2, 0 // Initialize loop index
.LBB0_1: // Loop header
    lsl_add r3, r0, r2, 2 // WRAM address calculation
    lw r4, r3, 0 // WRAM load
    add r4, r4, r1 // Addition
    sw r3, 0, r4 // WRAM store
    add r2, r2, 1 // Loop index update
    jneq r2, 256, .LBB0_1 // Conditional branch
\end{lstlisting}
\vspace{-25pt}
\subcaption{Compiled code in UPMEM \juanc{DPU} ISA.}
\label{sublst:codeb}
\end{minipage}
\vspace{-2mm}
\caption{Microbenchmark for throughput evaluation of 32-bit integer addition~\cite{gomezluna2021repo}.}
\vspace{-1mm}
\label{lst:ubench_arith}
\end{figure}
\juancr{Given the instructions in the loop of the streaming microbenchmark (Listing~\ref{sublst:codeb})}, we can obtain the expected throughput of arithmetic operations.
\juancr{Only one out of the six instructions is an arithmetic operation (\texttt{add} in line 5 in Listing~\ref{sublst:codeb}).}
Assuming that the pipeline is full, the DPU issues (and retires) one instruction every cycle~\cite{devaux2019}. As a result, we need as many cycles as instructions in the streaming loop to perform one arithmetic operation. If the number of instructions \juancr{in the loop} is $n$ and the DPU frequency is $f$, we calculate the arithmetic throughput in operations per second (OPS) as expressed in Equation~\ref{eq:throughput}.
\begin{equation}
Arithmetic\ Throughput\ (in\ OPS) = \frac{f}{n}
\label{eq:throughput}
\end{equation}
For a 32-bit integer addition (Listing~\ref{lst:ubench_arith}), the expected arithmetic throughput on a DPU running at 350 MHz \juancr{is} 58.33 \juancr{millions of operations per second (MOPS). We verify this on real hardware in Section~\ref{sec:arith-throughput}.}
\noindent\paragraph{\textbf{WRAM Bandwidth.}}
\juanc{To evaluate sustained} WRAM bandwidth, we examine the four versions of the STREAM benchmark~\cite{mccalpin1995}, which are \juancrrr{COPY, ADD, SCALE, and TRIAD}, for 64-bit integers.
These \juancrrr{microbenchmarks} access two (\juancrrr{COPY, SCALE}) or three (\juancrrr{ADD, TRIAD}) arrays \juancrr{in a streaming manner (i.e., with unit-stride or sequentially)}.
The operations performed by \juancrrr{ADD, SCALE, and TRIAD} are addition, multiplication, and addition+multiplication, respectively.
\juancrr{In our experiments, we measure the \emph{sustained bandwidth} of WRAM, which is the average bandwidth that we measure over a relatively long period of time (i.e., while streaming through an entire array in WRAM).}
\juancrr{We can obtain the maximum theoretical WRAM bandwidth of our STREAM microbenchmarks, which depends on the number of instructions needed to execute the operations in each version of STREAM. Assuming that the DPU pipeline is full, we calculate the maximum theoretical WRAM bandwidth in bytes per second (B/s) with Equation~\ref{eq:bandwidth}, where $b$ is the total number of bytes read and written, $n$ is the number of instructions in a version of STREAM to read, modify, and write the $b$ bytes, and $f$ is the DPU frequency.}
\begin{equation}
WRAM\ Bandwidth\ (in\ B/s) = \frac{b \times f}{n}
\label{eq:bandwidth}
\end{equation}
For example, COPY executes one WRAM load (\texttt{ld}) and one WRAM store (\texttt{sd}) per 64-bit element.
These two instructions require 22 cycles to execute for a single tasklet.
When the pipeline is full (i.e., \juancr{with} 11 tasklets or more), $11 \times 16 = 176$ bytes are read and written in 22 cycles.
As a result, \juancr{$b = 176$ and $n = 22$, and thus,} the maximum theoretical WRAM bandwidth for COPY, \juanc{at $f$=350 MHz}, is
2,800 MB/s. \juancr{We verify this on real hardware in Section~\ref{sec:wram-bandwidth}.}
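For reference, the inner loop of the COPY microbenchmark is essentially the following (a sketch with hypothetical buffer names; per 64-bit element, it compiles down to one WRAM load and one WRAM store):
\begin{lstlisting}[style=myC]
// STREAM COPY over WRAM: bufferA and bufferB are uint64_t arrays
// allocated in WRAM with mem_alloc. With the loop unrolled, each
// element costs one ld and one sd instruction.
for (unsigned int i = 0; i < SIZE; i++)
    bufferB[i] = bufferA[i];
\end{lstlisting}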
\subsubsection{\textbf{Arithmetic Throughput}}\label{sec:arith-throughput}
Figure~\ref{fig:throughput-dpu} shows how the \juancr{measured} arithmetic throughput on one DPU \juancrr{(in MOPS)} varies with the number of tasklets. We use 1 to 24 tasklets (24 is the maximum number of hardware threads).
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/throughput-dpu-wide-350-2.pdf}
\vspace{-3mm}
\caption{Throughput of arithmetic operations (ADD, SUB, MUL, DIV) on one DPU for four different data types: \juanc{(a) INT32, (b) INT64, (c) FLOAT, (d) DOUBLE.}}
\label{fig:throughput-dpu}
\vspace{-5mm}
\end{figure}
We make four key observations from Figure~\ref{fig:throughput-dpu}.
First, the throughput of all arithmetic operations and data types saturates after 11 tasklets.
This observation is consistent with the description of the pipeline in Section~\ref{sec:dpu-architecture}.
Recall that \juancr{the DPU uses} \juancrr{fine-grained multithreading across tasklets} to fully utilize \juancr{its} pipeline.
Since instructions in the same tasklet are dispatched 11 cycles apart, 11 tasklets is the minimum number of tasklets needed to fully utilize the pipeline.
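This behavior can be summarized with a simple utilization model (an illustration of ours, not from the UPMEM documentation): with $T$ tasklets, the pipeline is busy in at most $min(T, 11)$ out of every 11 cycles, so
\begin{equation*}
Arithmetic\ Throughput(T) \approx \frac{f}{n} \times \frac{min(T,\ 11)}{11},
\end{equation*}
which reduces to Equation~\ref{eq:throughput} for $T \geq 11$.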
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\textbf{The arithmetic throughput of a DRAM Processing Unit saturates at 11 or more tasklets}.
\juancr{This observation is consistent for} different data types (INT32, INT64, UINT32, UINT64, FLOAT, DOUBLE) and operations (ADD, SUB, MUL, DIV).
\gboxend
Second, the throughput of addition/subtraction is
58.56 \juancrr{MOPS} for 32-bit integer values \juancrr{(Figure~\ref{fig:throughput-dpu}a)}, and
50.16 \juancrr{MOPS} for 64-bit integer values \juancrr{(Figure~\ref{fig:throughput-dpu}b)}.
\juancrr{The number of instructions inside the streaming loop for 32-bit integer additions/subtractions is 6 (Listing~\ref{lst:ubench_arith}).}
Hence, the expected throughput at
350 MHz is 58.33 \juancrr{MOPS (obtained with Equation~\ref{eq:throughput})}, which is close to \juancrr{what we measure (58.56 \juancrr{MOPS})}.
A loop \juancrr{with} 64-bit integer additions/subtractions contains 7 instructions: the same 6 instructions as the 32-bit version plus an addition/subtraction with carry-in bit (\texttt{addc/subc}) for the
\juancrr{upper 32} bits of the \juancrr{64-bit} operands.
Hence, the expected throughput at 350 MHz is 50 \juancrr{MOPS}, which is also close to \juancrr{what we measure (50.16 MOPS)}.
Third, the throughput of integer multiplication and division is significantly lower than that of integer addition and subtraction \juanc{(note the large difference in y-axis scale between Figure~\ref{fig:throughput-dpu}a,b and Figure~\ref{fig:throughput-dpu}c,d)}.
A major reason is that the DPU pipeline does not include a complete $32\times32$-bit multiplier \juanc{due to} hardware cost concerns and limited number of available metal layers~\cite{devaux2019}.
Multiplications and divisions of 32-bit operands are implemented \juancrr{using two instructions} (\texttt{mul\_step}, \texttt{div\_step})~\cite{upmem-guide}, which are based on bit shifting and addition.
With these \juancrr{instructions}, multiplication and division can take up to 32 \juancrr{cycles (32 \texttt{mul\_step} or \texttt{div\_step} instructions)} to perform, depending on the values of the operands.
\juancrr{In case multiplication and division take 32 cycles}, the expected throughput \juancrr{(\juanc{Equation~\ref{eq:throughput}})} is
10.94 \juancrr{MOPS}, which is similar \juancrr{to what we measure (10.27 MOPS for 32-bit multiplication and 11.27 MOPS for 32-bit division, \juanc{as shown in} Figure~\ref{fig:throughput-dpu}a)}.
For multiplication and division of 64-bit integer operands, programs call two \juancrr{UPMEM runtime} library functions (\texttt{\_\_muldi3}, \texttt{\_\_divdi3})~\cite{upmem-guide, llvm-builtin} with 123 and 191 instructions, respectively. The expected throughput for these \juancr{64-bit operations} is significantly lower than for 32-bit operands, as our measurements confirm (2.56 MOPS for 64-bit multiplication and 1.40 MOPS for 64-bit division, \juanc{as shown in} Figure~\ref{fig:throughput-dpu}b).
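To give intuition for this cost, the following plain-C sketch mirrors the shift-and-add technique that \texttt{mul\_step} enables; it is an illustration of the approach, not the UPMEM runtime's actual code.
\begin{lstlisting}[style=myC]
#include <stdint.h>

// Illustrative shift-and-add 32-bit multiplication: up to 32 iterations,
// mirroring the up-to-32 mul_step instructions executed on the DPU.
// It terminates early when no set bits remain in b, which is why the
// cost depends on the values of the operands.
uint64_t mul32x32(uint32_t a, uint32_t b) {
    uint64_t acc = 0;
    for (int i = 0; i < 32 && b != 0; i++, b >>= 1)
        if (b & 1)
            acc += (uint64_t)a << i;
    return acc;
}
\end{lstlisting}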
Fourth, the throughput of floating point operations \juancrr{(as shown in Figures~\ref{fig:throughput-dpu}c and ~\ref{fig:throughput-dpu}d)} is more than an order of magnitude lower than that of integer operations.
A major reason is that the DPU pipeline does \emph{not} feature native floating point ALUs.
The UPMEM runtime library emulates these operations \juancr{in} software~\cite{upmem-guide, llvm-builtin}.
As a result, for each 32-bit or 64-bit floating point operation, the number of instructions executed in the pipeline is between several tens (32-bit floating point addition) and more than 2000 (64-bit floating point division).
This explains the low throughput. \juancrr{We measure 4.91/4.59/1.91/0.34 MOPS for FLOAT add/sub/multiply/divide (Figure~\ref{fig:throughput-dpu}c) and 3.32/3.11/0.53/0.16 MOPS for DOUBLE add/sub/multiply/divide (Figure~\ref{fig:throughput-dpu}d).}
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\begin{itemize}[wide, labelsep=0.5em]
\item \textbf{\juancr{DRAM Processing Units (DPUs) provide native hardware} support for 32- and 64-bit integer addition and subtraction}, \juancr{leading to high throughput for these operations}.
\item \juancr{\textbf{DPUs do \emph{not} natively support 32- and 64-bit multiplication and division, and floating point operations}. These operations are emulated by the UPMEM runtime library, leading to much lower throughput.}
\end{itemize}
\gboxend
\subsubsection{\textbf{\juancrr{Sustained} WRAM Bandwidth}}
\label{sec:wram-bandwidth}
Figure~\ref{fig:wram-stream} shows how the \juancrr{sustained} WRAM bandwidth varies with the number of tasklets (from 1 to 16 tasklets).
In these experiments, we unroll the loop of the STREAM \juancrrr{microbenchmarks} in order to \juancrr{exclude} loop control \juancrr{instructions} and achieve the highest possible \juancr{sustained WRAM bandwidth. We make three major observations.}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/stream-wram-1dpu-350-1.pdf}
\vspace{-3mm}
\captionof{figure}{\juancrr{Sustained} WRAM bandwidth for streaming access patterns (COPY, ADD, SCALE, TRIAD).}
\label{fig:wram-stream}
\end{figure}
\vspace{-2mm}
\juancr{First,} similar to arithmetic throughput, we observe that WRAM bandwidth saturates after 11 tasklets, which is the number of tasklets needed to fully utilize the DPU pipeline.
\juancr{Second,} the maximum \juancr{measured} sustained WRAM bandwidth depends on the number of instructions needed to execute the operation.
\juancrr{For COPY, we measure 2,818.98 MB/s, which is close to the maximum theoretical WRAM bandwidth of 2,800 MB/s obtained with Equation~\ref{eq:bandwidth}} \juancr{(see Section~\ref{sec:wram-benchmark-desc})}.
ADD executes 5 instructions per 64-bit element: two WRAM loads (\texttt{ld}), one addition (\texttt{add}), one addition with carry-in bit (\texttt{addc}), and one WRAM store (\texttt{sd}).
In this case, $11 \times 24 = 264$ bytes are accessed in 55 cycles when the pipeline is full.
Therefore, the maximum \juancrr{theoretical WRAM} bandwidth for ADD is
1,680 MB/s, which is similar to \juancrr{what we measure}
(1,682.46 MB/s).
The \juancrr{maximum sustained WRAM} bandwidth \juancrr{for} SCALE and TRIAD is significantly smaller \juancrr{(42.03 and 61.66 MB/s, respectively)}, since \juancrr{these microbenchmarks} use the costly multiplication operation, \juancrr{which is a library function with 123 instructions (Section~\ref{sec:arith-throughput})}.
\juancr{Third, and importantly (but not shown in Figure~\ref{fig:wram-stream}),} \textbf{WRAM bandwidth is independent of the access pattern} (streaming, strided, random),\footnote{We have verified this observation using a microbenchmark (which we \juancc{also provide as part of our open source release~\cite{gomezluna2021repo}}), but do not show the \juancc{detailed} results here for brevity. This microbenchmark uses three arrays in WRAM, $a$, $b$, and $c$. Array $a$ is a list of addresses to copy from $b$ to $c$ (i.e., $c[a[i]] = b[a[i]]$). This list of addresses can be (1) unit-stride (i.e., $a[i] = a[i - 1] + 1$), (2) strided (i.e., $a[i] = a[i - 1] + stride$), or (3) random (i.e., $a[i] = rand()$). For a given number of tasklets and size of the arrays, we measure the \emph{same} execution time for \emph{any} access pattern (i.e., unit-stride, strided, or random), which verifies that WRAM bandwidth is independent of the access pattern.} since all 8-byte WRAM \juanc{loads and stores} take one cycle \juancrr{when the DPU pipeline is full, same as any other native instruction executed in the pipeline~\cite{devaux2019}}.
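The core of the access-pattern microbenchmark described in the footnote reduces to the following loop (with the same array roles as in the footnote):
\begin{lstlisting}[style=myC]
// a[] holds WRAM indices that are unit-stride, strided, or random;
// the copy from b[] to c[] through a[] takes the same time in all cases.
for (unsigned int i = 0; i < SIZE; i++)
    c[a[i]] = b[a[i]];
\end{lstlisting}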
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\textbf{The sustained bandwidth provided by the DRAM Processing Unit's internal Working memory (WRAM) is independent of the \juancr{memory} access pattern} (either streaming, strided, or random access pattern).
\textbf{All 8-byte WRAM \juanc{loads and stores} take one cycle}, when the DRAM Processing Unit's pipeline is full (i.e., with 11 or more tasklets).
\gboxend
\vspace{-2mm}
\subsection{MRAM Bandwidth \juancr{and Latency}}\label{sec:mram-bandwidth}
\juancr{Recall that, before a DPU can operate on data with WRAM load/store instructions, it must first transfer that data from its associated MRAM bank into WRAM via a DMA engine.}
This section evaluates the bandwidth that can be sustained from MRAM, including read and write bandwidth (Section~\ref{sec:mram-read-write}), streaming access bandwidth (Section~\ref{sec:mram-streaming}), and strided/random access bandwidth (Section~\ref{sec:mram-strided-random}).
\subsubsection{\textbf{Read and Write \juancr{Latency and} Bandwidth}}\label{sec:mram-read-write}
In this experiment, we measure the latency of a single DMA transfer of different sizes for a single tasklet, and compute the corresponding \juancrr{MRAM} bandwidth.
These DMA transfers are performed via the \texttt{mram\_read(mram\_source, wram\_destination, SIZE)} and \texttt{mram\_write(wram\_source, mram\_destination, SIZE)} functions, where \texttt{SIZE} is the transfer size in bytes and must be a multiple of 8 between 8 and 2,048 according to
\juancrr{UPMEM SDK 2021.1.1}~\cite{upmem-guide}.
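For illustration, the following is a minimal single-tasklet DPU kernel sketch that stages one MRAM block in WRAM using these functions; the array name, block size, and in-WRAM update are ours for illustration, while the \texttt{mram\_read}/\texttt{mram\_write} signatures and headers follow the UPMEM SDK as quoted above.
\begin{verbatim}
#include <mram.h>    // mram_read(), mram_write(), __mram_ptr
#include <alloc.h>   // mem_alloc(): WRAM heap allocator
#include <stdint.h>

#define BLOCK 2048   // transfer size in bytes: a multiple of 8 between 8 and 2,048

// Hypothetical MRAM-resident array declared by the DPU program
__mram_noinit uint64_t mram_array[4096];

int main() {
    // Stage one MRAM block into a WRAM buffer via the DMA engine
    uint64_t *wram_buf = (uint64_t *) mem_alloc(BLOCK);
    mram_read((__mram_ptr void *) mram_array, wram_buf, BLOCK);

    // Operate on the staged data with ordinary WRAM loads/stores
    for (unsigned i = 0; i < BLOCK / sizeof(uint64_t); i++)
        wram_buf[i] += 1;

    // Write the updated block back to MRAM
    mram_write(wram_buf, (__mram_ptr void *) mram_array, BLOCK);
    return 0;
}
\end{verbatim}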
\noindent\paragraph{\textbf{Analytical Modeling.}}
We can \juancr{analytically} model the MRAM access latency \juancrr{(in cycles) using the linear expression in Equation~\ref{eq:mramlatency}},
where $\alpha$ is the fixed cost of a \juancr{DMA} transfer, $\beta$ \juancrr{represents the variable cost (i.e., cost per byte)}, and $size$ is the transfer size \juancrr{in bytes}.
\begin{equation}
MRAM\ Latency\ (in\ cycles) = \alpha + \beta \times size
\label{eq:mramlatency}
\end{equation}
After modeling the MRAM access latency \juancr{using Equation~\ref{eq:mramlatency}}, we can \juancr{analytically} model the MRAM bandwidth (in B/s) using Equation~\ref{eq:mrambandwidth}, where $f$ is the DPU frequency.
\begin{equation}
MRAM\ Bandwidth\ (in\ B/s) = \frac{size \times f}{MRAM\ Latency} = \frac{size \times f}{\alpha + \beta \times size}
\label{eq:mrambandwidth}
\end{equation}
\noindent\paragraph{\textbf{Measurements.}}
Figure~\ref{fig:mram-bandwidth} shows how the measured MRAM read and write latency and bandwidth vary with transfer size \juanc{and how well the measured latency follows the analytical model we develop above.}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/mram-bandwidth-tl1-wide-350-1.pdf}
\vspace{-2mm}
\captionof{figure}{MRAM read and write latency \juancrr{(log scale)} and bandwidth (log scale) for \juancrr{data} transfer sizes between 8 and 2,048 bytes.
The black dashed line represents latency estimates with a linear model \juancrr{(Equation~\ref{eq:mramlatency})}.}
\label{fig:mram-bandwidth}
\vspace{-4mm}
\end{figure}
In our measurements, we find that $\alpha$ is $\sim$$77$ cycles for \texttt{mram\_read} and $\sim$$61$ cycles for \texttt{mram\_write}. For both types of transfers, the value $\beta$ is 0.5~cycles/B. \juancr{The inverse of $\beta$ is the maximum theoretical MRAM bandwidth (assuming the fixed cost $\alpha = 0$), which results in 2 B/cycle.}
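To make the model concrete, the following host-side C sketch (our illustration, not part of the UPMEM SDK) tabulates the modeled \texttt{mram\_read} latency and bandwidth from these measured constants; for example, it reproduces the 141-cycle latency of 128-byte reads discussed below.
\begin{verbatim}
#include <stdio.h>

int main(void) {
    // Measured model constants for mram_read at a DPU frequency of 350 MHz
    const double alpha = 77.0;   // fixed cost (cycles)
    const double beta  = 0.5;    // variable cost (cycles/byte)
    const double f     = 350e6;  // DPU frequency (Hz)

    for (int size = 8; size <= 2048; size *= 2) {
        double latency = alpha + beta * size;         // Equation (latency model)
        double bw_mbps = (size * f / latency) / 1e6;  // Equation (bandwidth model)
        printf("%5d B: %7.1f cycles, %7.1f MB/s\n", size, latency, bw_mbps);
    }
    return 0;
}
\end{verbatim}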
The latency values estimated with \juancr{our analytical model in Equation~\ref{eq:mramlatency} (as shown by the black dashed lines} in Figure~\ref{fig:mram-bandwidth}) accurately match the latency measurements (light blue \juancr{lines} in Figure~\ref{fig:mram-bandwidth}).
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\begin{itemize}[wide, labelsep=0.5em]
\item \textbf{The \juancr{DRAM Processing Unit's Main memory (MRAM) bank} access latency increases linearly with the transfer size.
\item The maximum theoretical MRAM bandwidth is 2 bytes per cycle.}
\end{itemize}
\gboxend
\juancrr{We make four observations from Figure~\ref{fig:mram-bandwidth}.}
First, we observe that read and write accesses to MRAM are symmetric.
The latency and bandwidth of read and write transfers are very similar for a given \juanc{data} transfer size.
Second, we observe
that the sustained MRAM bandwidth (both read and write) increases with \juanc{data} transfer size.
\juancrr{The maximum sustained MRAM bandwidth we measure is 628.23 MB/s for read and 633.22 MB/s for write transfers (both for 2,048-byte transfers).
Based on this observation, a general recommendation to maximize \juancr{MRAM} bandwidth utilization is to \textbf{use large DMA transfer sizes when all the accessed data is going to be used}.
According to Equation~\ref{eq:mrambandwidth}, the theoretical maximum MRAM bandwidth is
700 MB/s at \juanc{a} DPU frequency of 350 MHz (assuming no fixed transfer cost, i.e., $\alpha = 0$).
Our measurements are within
12\% of this theoretical maximum.}
\vspace{1mm}
\pboxbegin{\ptask{pr}}
For data movement between the \juancr{DRAM Processing Unit's Main memory (MRAM) bank and the internal Working memory (WRAM)}, \textbf{use large DMA transfer sizes when all the accessed data is going to be used}.
\pboxend
\juancrr{Third, we observe that MRAM} latency changes
\juancrr{slowly} between 8-byte and 128-byte transfers.
According to Equation~\ref{eq:mramlatency}, the read latency for 128 bytes is 141 cycles \juancr{and the read latency for 8 bytes is 81 cycles. In other words, latency increases by only 74\% while transfer size increases by 16$\times$}.
The reason is that, for small \juanc{data transfer sizes}, the fixed cost ($\alpha$) of the transfer latency dominates the variable cost ($\beta \times size$).
\juanc{For large data transfer sizes, the fixed cost ($\alpha$) does \emph{not} dominate the variable cost ($\beta \times size$), and in fact the opposite starts becoming true.}
\juanc{We observe that, for} read transfers, $\alpha$ (77 cycles) represents 95\% of the latency for 8-byte reads and 55\% of the latency for 128-byte reads.
\juancrr{Based on this observation, one recommendation for programmers} is to \textbf{fetch more bytes than necessary within a \juancrr{128-byte} limit when using \juancrr{small data transfer sizes}}.
\juancr{Doing so increases} the probability of finding data in WRAM \juancr{for} later accesses, \juancr{eliminating future MRAM accesses}.
The program can simply check if the desired data has been fetched in a previous MRAM-WRAM transfer, before issuing a new \juancrr{small data} transfer.
\vspace{1mm}
\pboxbegin{\ptask{pr}}
For small transfers between the \juancr{DRAM Processing Unit's Main memory (MRAM) bank and the internal Working memory (WRAM)}, \textbf{fetch more bytes than necessary within a 128-byte limit}. \juancr{Doing so increases the \juan{likelihood} of finding data in WRAM for} later accesses \juancrrr{(i.e., the program can check \juanc{whether the desired data is} in WRAM before issuing a new MRAM access)}.
\pboxend
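A minimal sketch of how a tasklet might implement this recommendation with a one-chunk software cache in WRAM is shown below; all names and the caching policy are ours for illustration, not part of the UPMEM SDK.
\begin{verbatim}
#include <mram.h>
#include <defs.h>    // __dma_aligned
#include <stdint.h>

#define CHUNK 128    // fetch granularity: the 128-byte limit discussed above

// One-chunk software cache in WRAM (single-tasklet sketch; names illustrative)
static __dma_aligned uint8_t wram_chunk[CHUNK];
static uint32_t cached_off = (uint32_t) -1;

// Return a WRAM pointer to the 8-byte element at byte offset 'off' of an MRAM
// array, issuing a DMA transfer only if the surrounding chunk is not cached yet.
uint64_t *get_element(__mram_ptr uint8_t *mram_base, uint32_t off) {
    uint32_t chunk_off = off & ~(CHUNK - 1);   // align down to chunk boundary
    if (chunk_off != cached_off) {             // desired data not in WRAM yet
        mram_read(mram_base + chunk_off, wram_chunk, CHUNK);
        cached_off = chunk_off;
    }
    return (uint64_t *) &wram_chunk[off - chunk_off];
}
\end{verbatim}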
\juancrr{Fourth, MRAM bandwidth} scales almost linearly between 8 and 128 bytes \juancrr{due to the slow MRAM latency increase.}
\juancrr{After 128 bytes, MRAM bandwidth begins to saturate.}
The reason the \juancrr{MRAM} bandwidth saturates \juancrr{at large data transfer sizes \juancr{is related} to the inverse relationship of bandwidth and latency (Equation~\ref{eq:mrambandwidth}).
The fixed cost ($\alpha$) of the transfer latency \juancr{becomes} negligible with respect to the variable cost ($\beta \times size$) as the \juanc{data} transfer size increases.
For example, $\alpha$ for read transfers (77 cycles) represents \juanc{only} 23\%, 13\%, and 7\% of the MRAM latency for 512-, 1,024-, and 2,048-byte read transfers, respectively.
As a result, the MRAM read bandwidth increases by only 13\% and 17\% for 1,024- and 2,048-byte transfers over 512-byte transfers.}
Based on this observation, \textbf{the recommended data transfer size, when all the accessed data is going to be used, depends on a program's WRAM usage}, since WRAM has a limited size (only 64 KB). For example, if each tasklet of a DPU program needs to allocate 3 temporary WRAM buffers for data from 3 different arrays stored in MRAM, using 2,048-byte data transfers \juancr{requires that the size of each WRAM buffer is 2,048 bytes.
This limits the number of tasklets to 10, which is less than the recommended minimum of 11 tasklets (Sections~\ref{sec:general-recommendations} and~\ref{sec:arith-throughput}), since \juancr{$\frac{64~\mathrm{KB}}{3 \times 2{,}048~\mathrm{B}} \approx 10.7 < 11$}.}
In such a case, using 1,024-byte data transfers is preferred, since the bandwidth of 2,048-byte transfers is only 4\% higher than that of 1,024-byte transfers, according to our measurements (\juanc{shown in Figure~\ref{fig:mram-bandwidth}}).
\vspace{1mm}
\pboxbegin{\ptask{pr}}
\textbf{Choose the data transfer size \juancr{between the DRAM Processing Unit's Main memory (MRAM) bank and the internal Working memory (WRAM)} based on the program's WRAM usage}, as it imposes a tradeoff between the sustained MRAM bandwidth and the number of tasklets that can run in the \juancr{DRAM Processing Unit (which is dictated by the limited WRAM capacity)}.
\pboxend
\subsubsection{\juancrr{Sustained} Streaming Access Bandwidth}\label{sec:mram-streaming}
In this experiment, we use the same four versions of the STREAM benchmark~\cite{mccalpin1995} described in Section~\ref{sec:wram-benchmark-desc}, but include the MRAM-WRAM DMA transfer time in our measurements.
We also add another version of \juancr{the copy benchmark}, COPY-DMA, which copies data from MRAM to WRAM and back without performing any WRAM loads/stores in the \juancrr{DPU} core.
We use 1,024-byte DMA transfers.
We scale the number of tasklets from 1 to 16.
The tasklets collectively stream 2M 8-byte elements \izzat{(total of 16 MB)}, which are divided evenly across the tasklets.
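A sketch of the per-tasklet COPY-DMA kernel follows; the array names are ours for illustration, \texttt{NR\_TASKLETS} is the usual compile-time tasklet count, and \texttt{me()} returns the tasklet ID.
\begin{verbatim}
#include <mram.h>
#include <defs.h>    // me(); NR_TASKLETS comes from the build (-DNR_TASKLETS)
#include <alloc.h>
#include <stdint.h>

#define XFER 1024                  // 1,024-byte DMA transfers, as in this experiment
#define TOTAL_BYTES (16 << 20)     // 2M 8-byte elements = 16 MB

__mram_noinit uint8_t src[TOTAL_BYTES];   // illustrative MRAM array names
__mram_noinit uint8_t dst[TOTAL_BYTES];

int main() {
    uint8_t *buf = (uint8_t *) mem_alloc(XFER);   // per-tasklet WRAM staging buffer
    uint32_t per_tasklet = TOTAL_BYTES / NR_TASKLETS;
    uint32_t base = me() * per_tasklet;

    // COPY-DMA: MRAM -> WRAM -> MRAM, with no WRAM loads/stores in the DPU core
    for (uint32_t off = 0; off < per_tasklet; off += XFER) {
        mram_read((__mram_ptr void *)(src + base + off), buf, XFER);
        mram_write(buf, (__mram_ptr void *)(dst + base + off), XFER);
    }
    return 0;
}
\end{verbatim}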
Figure~\ref{fig:mram-stream} shows how the MRAM streaming access bandwidth varies with the number of tasklets.
\vspace{-1mm}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/stream-mram-1dpu-350-1.pdf}
\vspace{-3mm}
\captionof{figure}{\juancrr{Sustained} MRAM bandwidth for streaming access patterns.}
\label{fig:mram-stream}
\vspace{-2mm}
\end{figure}
We make four key observations.
\juancrr{First,} the \juancrr{sustained MRAM} bandwidth of COPY-DMA is
624.02 MB/s, which is close to the \juancrr{theoretical maximum} bandwidth
(700 MB/s derived in Section~\ref{sec:dpu-architecture}).
The measured aggregate \juancrr{sustained} bandwidth for 2,556 DPUs is 1.6~TB/s.
In the 640-DPU system, we measure \juancrr{the sustained MRAM bandwidth to be} 470.50~MB/s \juancrr{per DPU} \juancr{(theoretical maximum = 534 MB/s), resulting in aggregate sustained MRAM bandwidth of 301~GB/s} for 640 DPUs.
\juancrr{Second, the MRAM} bandwidth of COPY-DMA saturates with two tasklets.
\juancrr{Even though} the DMA engine can perform \juancrr{only one data} transfer at a time~\cite{comm-upmem}, using two or more tasklets in COPY-DMA guarantees that there is always a DMA request enqueued to keep the DMA engine busy when a previous DMA request completes, \juancrr{thereby} achieving \juancrr{the highest MRAM} bandwidth.
\juancrr{Third, the MRAM} bandwidth for COPY and ADD saturates at 4 and 6 tasklets, respectively, i.e., \juancrr{earlier than} the 11 tasklets needed to fully utilize the pipeline.
This observation indicates that \juancr{these microbenchmarks are limited by access to MRAM (and not the instruction pipeline)}.
When the COPY benchmark uses fewer than 4 tasklets, the latency of pipeline instructions (i.e., WRAM loads/stores) is longer than the latency of MRAM accesses (i.e., MRAM-WRAM and WRAM-MRAM DMA transfers).
After 4 tasklets, this trend flips, and the latency of MRAM accesses becomes longer. The reason is that the MRAM accesses are serialized, such that the MRAM access latency increases linearly with the number of tasklets.
Thus, after 4 tasklets, the overall latency is \juancrr{dominated by} the MRAM access latency, which hides the pipeline latency.
\juancrr{As a result, the sustained MRAM bandwidth of COPY saturates with 4 tasklets at the highest MRAM bandwidth, same as COPY-DMA.}
Similar observations apply to the ADD benchmark with 6 tasklets.
\juancrr{Fourth,} the \juancrr{sustained MRAM} bandwidth of SCALE and TRIAD is \juancrr{approximately one order of} magnitude smaller than that of COPY-DMA, COPY, and ADD.
\juancr{In addition}, \juancrr{SCALE and TRIAD's MRAM} bandwidth saturates at 11 tasklets, i.e., the number of tasklets needed to fully utilize the pipeline.
This observation indicates that \juancrr{SCALE and TRIAD performance} \juanc{is} limited by pipeline throughput, not MRAM \juancrr{access}.
Recall that SCALE and TRIAD use costly multiplications, which are based on the \texttt{mul\_step} \juancrr{instruction}, as explained in Section~\ref{sec:arith-throughput}.
As a result, instruction execution in the pipeline has \juancr{much} higher latency than MRAM access.
Hence, it makes sense that \juancrr{SCALE and TRIAD} are bound by pipeline throughput, and thus the \juancrr{maximum sustained WRAM} bandwidth of SCALE and TRIAD (Figure~\ref{fig:wram-stream}) is the same as \juancrr{the maximum sustained MRAM} bandwidth (Figure~\ref{fig:mram-stream}).
\vspace{1mm}
\gboxbegin{\rtask{ko}}
\begin{itemize}[wide, labelsep=0.5em]
\item \textbf{When the access latency \juancr{to a DRAM Processing Unit's Main memory (MRAM) bank for a streaming} benchmark (COPY-DMA, COPY, ADD) \juancr{is larger than} the pipeline latency} (i.e., execution \juanc{latency} of arithmetic operations and WRAM accesses), \textbf{the performance of the \juancr{DRAM Processing Unit (DPU)} saturates at a number of tasklets (i.e., software threads) \juanc{smaller} than 11. This is a memory-bound workload.}
\item \textbf{When the pipeline latency for a \juancr{streaming} benchmark (SCALE, TRIAD) \juancr{is larger than the MRAM access latency}, the performance of a DPU saturates at 11 tasklets. This is a compute-bound workload.}
\end{itemize}
\gboxend
\subsubsection{\juancrr{Sustained} Strided and Random Access Bandwidth}\label{sec:mram-strided-random}
\juancrr{We evaluate the sustained MRAM bandwidth of strided and random access patterns.}
\juancrr{To evaluate strided access bandwidth in MRAM, we write a new microbenchmark that accesses MRAM in a strided manner.
The microbenchmark copies elements from one array into another at a constant stride (i.e., a constant distance between consecutive memory accesses).
We implement two versions of the microbenchmark, \emph{coarse-grained DMA} and \emph{fine-grained DMA}, to test both coarse-grained and fine-grained MRAM access.}
\juancrr{In coarse-grained DMA, the microbenchmark accesses via DMA} a large contiguous segment (1,024~B) of the array in MRAM, and the strided access happens in WRAM.
\juancrr{The coarse-grained DMA} approach resembles what \juanc{state-of-the-art} CPU hardware does (i.e., reads large cache lines \juancr{from main memory} and strides through them in the cache).
\juancrr{In fine-grained DMA, the microbenchmark transfers via DMA} only the data \juancr{that will be used by the microbenchmark from} MRAM.
\juancrr{The fine-grained DMA} approach results in more DMA requests, but less total \juancr{amount of} data transferred \juancrr{between MRAM and WRAM}.
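The core loops of the two versions look roughly as follows (a single-tasklet sketch; array names, bounds, and the static WRAM buffers are ours for illustration, with \texttt{stride} given in 8-byte elements).
\begin{verbatim}
#include <mram.h>
#include <defs.h>
#include <stdint.h>

#define SEG 1024                            // coarse-grained DMA segment size (bytes)
#define ELEMS_PER_SEG (SEG / sizeof(uint64_t))

__mram_noinit uint64_t src[2 << 20];        // 2M 8-byte elements, as in our experiment
__mram_noinit uint64_t dst[2 << 20];

// Coarse-grained DMA: fetch a whole 1,024-byte segment; stride through it in WRAM
void strided_copy_coarse(uint32_t first, uint32_t last, uint32_t stride) {
    static __dma_aligned uint64_t in[ELEMS_PER_SEG], out[ELEMS_PER_SEG]; // static:
    for (uint32_t e = first; e < last; e += ELEMS_PER_SEG) {  // tasklet stacks are small
        mram_read((__mram_ptr void *)&src[e], in, SEG);   // fetch the whole segment
        for (uint32_t i = 0; i < ELEMS_PER_SEG; i += stride)
            out[i] = in[i];                               // strided access in WRAM
        mram_write(out, (__mram_ptr void *)&dst[e], SEG); // write the segment back
    }
}

// Fine-grained DMA: one 8-byte transfer per element that is actually used
void strided_copy_fine(uint32_t first, uint32_t last, uint32_t stride) {
    __dma_aligned uint64_t elem;
    for (uint32_t e = first; e < last; e += stride) {
        mram_read((__mram_ptr void *)&src[e], &elem, sizeof(elem));
        mram_write(&elem, (__mram_ptr void *)&dst[e], sizeof(elem));
    }
}
\end{verbatim}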
\juancrr{To evaluate random access bandwidth in MRAM}, we implement the GUPS benchmark~\cite{luszczek_hpcc2006}, which performs read-modify-write operations on random positions of an array.
\juancrr{We use only} fine-grained DMA for random access, since \juancrr{random} memory accesses in GUPS do not benefit from fetching large chunks of data, \juancrr{because they are \emph{not} spatially correlated}.
In \juancr{our} experiments, we scale the number of tasklets from 1 to 16.
The tasklets collectively access arrays in MRAM with (1) coarse-grained strided access, (2) fine-grained strided access, or (3) fine-grained random access.
\juancr{Each array contains} 2M 8-byte elements (total of 16MB), which are divided evenly across the tasklets.
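The random-access version performs GUPS-style read-modify-write operations; a sketch is shown below (the random index generator and the modify step are ours for illustration, and a real version would keep per-tasklet generator state).
\begin{verbatim}
#include <mram.h>
#include <defs.h>
#include <stdint.h>

#define N (2 << 20)                  // 2M 8-byte elements (16 MB)
__mram_noinit uint64_t table[N];

// Simple xorshift generator (illustrative; any PRNG works)
static uint32_t rng_state = 1;
static uint32_t rng_next(void) {
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 17;
    rng_state ^= rng_state << 5;
    return rng_state;
}

// Fine-grained random access: one 8-byte read-modify-write per update
void gups(uint32_t updates) {
    __dma_aligned uint64_t val;
    for (uint32_t u = 0; u < updates; u++) {
        uint32_t idx = rng_next() % N;   // random, not spatially correlated
        mram_read((__mram_ptr void *)&table[idx], &val, sizeof(val));
        val ^= (uint64_t) idx;           // the "modify" step
        mram_write(&val, (__mram_ptr void *)&table[idx], sizeof(val));
    }
}
\end{verbatim}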
Figure~\ref{fig:mram-patterns} shows how the \juancrr{sustained MRAM bandwidth varies with access pattern (strided and random access)} as well as with the number of tasklets.
\vspace{-2mm}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/strided-gups-mram-linear-350-1.pdf}
\vspace{-6mm}
\caption{\juancrr{Sustained} MRAM bandwidth for (a) coarse-grained strided and (b) fine-grained strided and random access patterns.}
\label{fig:mram-patterns}
\vspace{-2mm}
\end{figure}
\juancrr{We make four key observations.}
\juancrr{First, we measure maximum sustained MRAM bandwidth to be
622.36 MB/s for coarse-grained DMA (with 16 tasklets and a stride of 1, Figure~\ref{fig:mram-patterns}a), and
\juancr{72.58 MB/s} for fine-grained DMA (with 16 tasklets, Figure~\ref{fig:mram-patterns}b).}
\juancr{This difference in the sustained MRAM bandwidth values of coarse-grained DMA and fine-grained DMA is related to the difference in MRAM bandwidth for different transfer sizes (\juanc{as we analyze in} Section~\ref{sec:mram-read-write}).
While coarse-grained DMA uses 1,024-byte transfers, fine-grained DMA uses 8-byte transfers.}
\juancrr{Second, we observe that the sustained MRAM bandwidth of coarse-grained DMA (Figure~\ref{fig:mram-patterns}a)} decreases as the stride \juancrr{becomes larger}.
\juancr{This is due to the effective utilization of the transferred data, which decreases for larger strides (e.g., a stride of 4 means that only one fourth of the transferred data is effectively used).}
Third, the coarse-grained DMA approach has higher \juanc{sustained MRAM} bandwidth for \juancrr{smaller strides} while the fine-grained DMA approach has higher \juanc{sustained MRAM} bandwidth for larger strides.
The larger the stride in coarse-grained DMA, \juancr{the larger the amount of} fetched data \juancr{that} remains unused, causing fine-grained DMA to become more efficient with larger strides.
In these experiments,
the coarse-grained DMA approach achieves higher sustained MRAM bandwidth \juancr{than the fine-grained DMA approach} for strides between 1 and 8. For a stride of 16 or larger, the fine-grained DMA approach achieves higher sustained MRAM bandwidth.
This
is because with larger strides, the fraction of \juanc{transferred} data that is actually used by the microbenchmark becomes smaller (i.e., effectively-used MRAM bandwidth becomes smaller).
With a stride of 16 \juancr{and coarse-grained DMA, the microbenchmark} uses only one sixteenth of the fetched data.
As a result, we measure \juanc{the sustained MRAM bandwidth to be} 38.95 MB/s for coarse-grained DMA, which is only one sixteenth of the maximum sustained MRAM bandwidth of 622.36 MB/s, and is lower than the sustained MRAM \juancr{bandwidth of fine-grained DMA
(72.58 MB/s)}.
Fourth, the maximum sustained MRAM bandwidth for random access is 72.58 MB/s (with 16 tasklets, \juanc{as shown in} Figure~\ref{fig:mram-patterns}b). This bandwidth value is \juancr{very} similar to the maximum MRAM bandwidth of the fine-grained DMA approach for strided access (e.g., 72.58 MB/s with 16 tasklets and stride 4,096, \juanc{as shown in} Figure~\ref{fig:mram-patterns}b), since \juanc{our microbenchmark uses} fine-grained DMA for random access.
Based on these observations, we recommend that programmers \textbf{use the coarse-grained DMA approach for workloads with small \juancr{strides} and the fine-grained DMA approach for workloads with large \juancr{strides or} random access patterns}.
\pboxbegin{\ptask{pr}}
\begin{itemize}[wide, labelsep=0.5em]
\item For strided access patterns with a \textbf{stride smaller than 16 8-byte elements, fetch a large contiguous chunk} (e.g., 1,024 bytes) \juancr{from a DRAM Processing Unit's Main memory (MRAM) bank}.
\item For \juancr{strided access patterns with} \textbf{larger strides and random access} patterns, fetch \textbf{only the data elements that are needed} \juancr{from an MRAM bank}.
\end{itemize}
\pboxend
\subsection{\textbf{Arithmetic Throughput versus Operational Intensity}}\label{sec:throughput-oi}
Due to its fine-grained multithreaded architecture~\cite{ddca.spring2020.fgmt,henessy.patterson.2012.fgmt,burtonsmith1978,smith1982architecture,thornton1970}, a DPU overlaps instruction execution \juanc{latency} in the pipeline and MRAM access \juanc{latency}~\cite{upmem-guide,devaux2019}.
As a result, the overall DPU performance is determined by the dominant \juanc{latency} (either instruction execution \juanc{latency} or MRAM access \juanc{latency}).
We observe this behavior in our experimental results in Section~\ref{sec:mram-streaming}, where the dominant latency (pipeline latency or MRAM access latency) determines the
sustained MRAM bandwidth for \juancr{different} versions of \juancrrr{the STREAM benchmark~\cite{mccalpin1995}}.
To further understand the DPU architecture, we design a new microbenchmark where we vary the \juancr{number} of pipeline instructions with respect to the \juancr{number} of MRAM accesses, and measure performance in terms of arithmetic throughput (in \juanc{MOPS}, as defined in Section~\ref{sec:wram-benchmark-desc}).
\juancr{By varying the number of pipeline instructions per MRAM access, we move from microbenchmark configurations where the MRAM access latency dominates (i.e., \emph{memory-bound regions}) to microbenchmark configurations where the pipeline latency dominates (i.e., \emph{compute-bound regions}).}
Our microbenchmark includes MRAM-WRAM DMA transfers, WRAM load/store accesses, and a variable number of arithmetic operations.
The number of MRAM-WRAM DMA transfers in the microbenchmark is constant, and thus the total MRAM latency is \juancr{also} constant. However, the latency of instructions executed in the pipeline varies with the variable number of arithmetic operations.
\juancr{Our} experiments aim to show how arithmetic throughput varies with operational intensity.
We define \emph{operational intensity} as the number of arithmetic operations performed per byte accessed from MRAM \juancr{(OP/B)}.
As explained in Section~\ref{sec:arith-throughput}, an arithmetic operation in the UPMEM PIM architecture takes multiple instructions to execute.
The experiment is inspired by the roofline model~\cite{roofline}, a performance analysis methodology that shows the performance of a program (arithmetic instructions executed per second) as a function of the \emph{arithmetic intensity} (arithmetic instructions executed per byte accessed from memory) of the program, \juancr{as compared to} the peak performance of the machine (determined by the compute throughput and the L3 and DRAM memory bandwidth).
Figure~\ref{fig:ai-dpu} shows results of arithmetic throughput versus operational intensity for representative data types and operations: (a) 32-bit integer addition, (b) 32-bit \juancr{integer} multiplication, (c) 32-bit floating point addition, and (d) 32-bit floating point multiplication.
Results for other data types (64-bit integer and 64-bit floating point) and arithmetic operations (subtraction and division) follow similar trends.
We change the operational intensity from very low values
($\frac{1}{2048}$ operations/byte, i.e., one operation \juancr{for} every 512 32-bit elements \juancr{fetched}) to high values (8 operations/byte, i.e., 32 operations \juancr{for} every 32-bit element \juancr{fetched}), \juancr{and} measure the resulting throughput for different numbers of tasklets (from 1 to 16).
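To illustrate the structure of this microbenchmark, a sketch of its per-tasklet kernel is shown below; the array and macro names are ours, and the modulo indexing merely keeps the example self-contained. Varying \texttt{OPS\_PER\_XFER} sweeps the operational intensity (here $256/1024 = \frac{1}{4}$~OP/B).
\begin{verbatim}
#include <mram.h>
#include <defs.h>
#include <stdint.h>

#define XFER 1024
#define OPS_PER_XFER 256   // vary this: OP/B = OPS_PER_XFER / XFER

__mram_noinit uint32_t in[4 << 20];    // illustrative MRAM arrays
__mram_noinit uint32_t out[4 << 20];

// Fixed number of MRAM-WRAM transfers; variable number of 32-bit additions
// (pipeline instructions) per transferred block.
uint32_t kernel(uint32_t first, uint32_t last) {
    static __dma_aligned uint32_t buf[XFER / sizeof(uint32_t)];
    uint32_t acc = 0;
    for (uint32_t e = first; e < last; e += XFER / sizeof(uint32_t)) {
        mram_read((__mram_ptr void *)&in[e], buf, XFER);
        for (uint32_t r = 0; r < OPS_PER_XFER; r++)       // pipeline instructions
            acc += buf[r % (XFER / sizeof(uint32_t))];    // 32-bit integer additions
        mram_write(buf, (__mram_ptr void *)&out[e], XFER);
    }
    return acc;
}
\end{verbatim}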
\vspace{-2mm}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{figures/ai-dpu-wide4-frac-350-1.pdf}
\vspace{-6mm}
\captionof{figure}{\juancr{Arithmetic} throughput versus operational intensity for (a) 32-bit integer addition, (b) 32-bit integer multiplication, (c) 32-bit floating point addition, and (d) 32-bit floating point multiplication. The number inside each dot indicates the number of tasklets.
\juancr{Both x- and y-axes are log scale.}}
\label{fig:ai-dpu}
\vspace{-2mm}
\end{figure}
We make four key observations from Figure~\ref{fig:ai-dpu}.
First, the four plots in Figure~\ref{fig:ai-dpu} show (1) \juancr{the} memory-bound region (where arithmetic throughput increases with operational intensity) and (2) \juancr{the} compute-bound region (where arithmetic throughput is flat at its maximum value) for each number of tasklets. For a \juancr{given} number of tasklets, the transition between the memory-bound \juancr{region} and the compute-bound region occurs when the latency of instruction execution in the pipeline surpasses the MRAM latency. We refer to the operational intensity value where the transition between \juancr{the} memory-bound region and \juancr{the} compute-bound region happens as the \emph{throughput saturation point}.
Second, arithmetic throughput saturates at low (e.g.,
$\frac{1}{4}$~OP/B for integer addition, i.e., 1 \juancr{integer} addition \juancr{for} every 32-bit element \juancr{fetched}) or very low (e.g.,
$\frac{1}{128}$~OP/B for floating point multiplication, i.e., 1 multiplication \juancr{for} every 32 32-bit elements \juancr{fetched}) operational intensity.
This result demonstrates that \textbf{\juanc{the} DPU is fundamentally a compute-bound processor} designed for workloads with low data reuse.
\gboxbegin{\rtask{ko}}
\textbf{The arithmetic throughput of a \juancr{DRAM Processing Unit (DPU)} saturates at low or very low operational intensity} (e.g., 1 integer addition per 32-bit element).
\juancr{Thus, \textbf{\juanc{the} DPU is fundamentally a compute-bound processor}.}
We expect \textbf{most real-world workloads to be compute-bound in the UPMEM PIM architecture}.
\gboxend
Third, the throughput saturation point is lower for data types and operations that require more instructions per operation. For example, the throughput for 32-bit multiplication (Figure~\ref{fig:ai-dpu}b), which requires up to 32 \texttt{mul\_step} instructions (Section~\ref{sec:arith-throughput}), saturates at $\frac{1}{32}$~OP/B, while the throughput for 32-bit addition (Figure~\ref{fig:ai-dpu}a), which is natively supported (it requires a single \texttt{add} instruction), saturates at $\frac{1}{4}$~OP/B.
Floating point operations saturate earlier than integer operations, since they require from several tens to hundreds of instructions: 32-bit floating point addition (Figure~\ref{fig:ai-dpu}c) and multiplication (Figure~\ref{fig:ai-dpu}d) saturate at $\frac{1}{64}$ and $\frac{1}{128}$~OP/B, respectively.
Fourth, we observe that in the compute-bound regions (i.e., after the saturation points), \juancr{arithmetic} throughput saturates with 11 tasklets, which is the number of tasklets needed to fully utilize the pipeline.
On the other hand, in the memory-bound region, throughput saturates with fewer tasklets because the memory bandwidth limit is reached before the pipeline is fully utilized.
For example, at very low operational intensity values ($\leq$
$\frac{1}{64}$~OP/B), the throughput of 32-bit integer addition saturates with just two tasklets, which is consistent with the observation in Section~\ref{sec:mram-streaming} that COPY-DMA bandwidth saturates with two tasklets.
However, an operational intensity of
$\frac{1}{64}$~OP/B is extremely low, as it entails only one addition for every 64~B accessed (16 32-bit integers).
We expect higher operational intensity (e.g., greater than $\frac{1}{4}$~OP/B) in most real-world workloads~\cite{roofline,deoliveira2021} and, thus, \juancr{arithmetic throughput to saturate with 11 tasklets in real-world workloads}.
In the Appendix (Section~\ref{app:throughput-oi}), we present a different view of these results, where we show how arithmetic throughput varies with the number of tasklets at different operational intensities.
\subsection{\textbf{CPU-DPU Communication}}\label{sec:cpu-dpu}
The host CPU and the DPUs in PIM-enabled memory communicate via the memory bus.
The host CPU can access MRAM banks to (1) transfer input data from main memory to MRAM \juancrrr{(i.e., CPU-DPU)}, and (2) transfer results \juancr{back} from MRAM to main memory \juancrrr{(i.e., DPU-CPU)}, as Figure~\ref{fig:scheme} shows. We call these data transfers \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers, respectively.
As explained in Section~\ref{sec:sys-org}, these data transfers can be \emph{serial} (i.e., \juancr{performed} sequentially across multiple MRAM banks) or \emph{parallel} (i.e., \juancr{performed} concurrently across multiple MRAM banks).
The UPMEM SDK~\cite{upmem-guide} provides functions for serial and parallel transfers.
For serial transfers, \texttt{dpu\_copy\_to} copies a buffer from \juancr{the host} main memory to a specific MRAM bank \juancrrr{(i.e., CPU-DPU)}, and \texttt{dpu\_copy\_from} copies a buffer \juancr{from} one MRAM bank to \juancr{the host} main memory \juancrrr{(i.e., DPU-CPU)}.
For parallel transfers, a program needs to use two functions.
First, \texttt{dpu\_prepare\_xfer} prepares the parallel transfer by assigning different buffers to specific MRAM banks.
Second, \texttt{dpu\_push\_xfer} launches the actual transfers to execute in parallel. One argument of \texttt{dpu\_push\_xfer} defines whether the parallel data transfer happens from \juancr{the host} main memory to the MRAM banks (i.e., \juancrrr{CPU-DPU}) or from the MRAM banks to \juancr{the host} main memory (i.e., \juancrrr{DPU-CPU}).
Parallel transfers have the limitation (in UPMEM SDK 2021.1.1~\cite{upmem-guide}) that the transfer sizes to all MRAM banks involved in the same parallel transfer need to be the same.
A special case of parallel \juancrrr{CPU-DPU} transfer (\texttt{dpu\_broadcast\_to}) broadcasts the same buffer from main memory to all MRAM banks.
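As an illustration of the parallel-transfer API described above, the following host-side sketch distributes a different buffer to each MRAM bank; the MRAM symbol name (\texttt{"mram\_array"}) and the buffer setup are ours for illustration.
\begin{verbatim}
#include <dpu.h>
#include <stdint.h>

#define XFER_BYTES (32 << 20)  // 32 MB per MRAM bank, as in our second experiment

// Parallel CPU-DPU transfer: copy a different host buffer to each DPU's MRAM bank.
// 'buffers' holds one XFER_BYTES-sized buffer per DPU.
void parallel_copy_to(struct dpu_set_t set, uint8_t **buffers) {
    struct dpu_set_t dpu;
    uint32_t each_dpu;
    // Step 1: assign a (possibly different) buffer to every DPU in the set
    DPU_FOREACH(set, dpu, each_dpu) {
        DPU_ASSERT(dpu_prepare_xfer(dpu, buffers[each_dpu]));
    }
    // Step 2: launch all transfers at once; every DPU must use the same size
    DPU_ASSERT(dpu_push_xfer(set, DPU_XFER_TO_DPU, "mram_array", 0,
                             XFER_BYTES, DPU_XFER_DEFAULT));
}
\end{verbatim}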
In this section, we measure the sustained bandwidth of all types of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers between the \juancr{host} main memory and MRAM banks.
We perform two different experiments.
The first experiment transfers \juancr{a} buffer of \juancr{varying} size to/from a single MRAM bank. Thus, we obtain the sustained bandwidth of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers of different \juancr{sizes} for one MRAM bank.
In this experiment, we use \texttt{dpu\_copy\_to} and \texttt{dpu\_copy\_from} and vary the transfer size from 8 bytes to 32 MB.
The second experiment transfers buffers of size 32 MB per MRAM bank from/to a set of 1 to 64 MRAM banks within the same rank.
We experiment with both serial and parallel transfers (\texttt{dpu\_push\_xfer}), including broadcast \juancrrr{CPU-DPU transfers} \juancr{(\texttt{dpu\_broadcast\_to})}. Thus, we obtain the sustained bandwidth of serial/parallel/broadcast \juancrrr{CPU-DPU} transfers and serial/parallel \juancrrr{DPU-CPU} transfers for a number of MRAM banks in the same rank between 1 and 64.
We do not experiment with more than one rank, since we observe in preliminary experiments that the UPMEM SDK 2021.1.1~\cite{upmem-guide} only parallelizes transfers to MRAM banks within the same rank, not across ranks. Future releases of UPMEM SDK may include improvements to parallelize across ranks.
Figure~\ref{fig:cpudpu} presents the sustained bandwidth results of both experiments.
\vspace{-1mm}
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/cpudpu-350-2.pdf}
\vspace{-3mm}
\caption{Sustained bandwidth (log scale \juancr{x- and y-axes}) of (a) \juancrrr{CPU-DPU} (\juancr{host} main memory to one MRAM bank) and \juancrrr{DPU-CPU} (one MRAM bank to \juancr{host} main memory) transfers of different sizes for one DPU, and (b) serial/parallel/broadcast \juancrrr{CPU-DPU} (\juanc{host} main memory to several MRAM banks) and serial/parallel \juancrrr{DPU-CPU} (several MRAM banks to \juanc{host} main memory) transfers of 32 MB for a set of 1-64 DPUs within one rank.}
\label{fig:cpudpu}
\vspace{-1mm}
\end{figure}
We make seven key observations.\footnote{Note that our measurements of and observations about CPU-DPU and DPU-CPU transfers are \juanc{both} platform-dependent (i.e., measurements and observations may change for a different host CPU) and UPMEM SDK-dependent (i.e., the implementation of CPU-DPU/DPU-CPU transfers may change in future releases of the UPMEM SDK). For example, our bandwidth measurements on the 640-DPU system \juanc{(not shown)} differ from \juanc{those on} the 2,556-DPU system (\juanc{but we find the trends we observe to be similar on both systems}).}
First, sustained bandwidths of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers for a single DPU (Figure~\ref{fig:cpudpu}a) are similar for transfer sizes between 8 and 512 bytes.
For transfer sizes greater than 512 bytes, sustained bandwidth of \juancrrr{CPU-DPU} transfers is higher than that of \juancrrr{DPU-CPU} transfers.
For the largest transfer size we \juancr{evaluate} (32 MB), \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} bandwidths are
0.33 GB/s and 0.12 GB/s, respectively.
Second, the sustained bandwidths of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers for a single DPU (Figure~\ref{fig:cpudpu}a) increase linearly between 8 bytes and 2 KB, and tend to saturate for larger \juancr{transfer} sizes.
\vspace{2mm}
\gboxbegin{\rtask{ko}}
\textbf{Larger \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers between the host main memory and the DRAM Processing Unit's Main memory (MRAM) banks result in higher sustained bandwidth.}
\gboxend
Third, for one rank (Figure~\ref{fig:cpudpu}b) the sustained bandwidths of serial \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers remain flat for different numbers of DPUs.
Since these transfers are executed serially, latency increases proportionally with the number of DPUs (hence, the total amount of data transferred). As a result, the sustained bandwidth does not increase.
Fourth, the sustained bandwidth of the parallel transfers increases with the number of DPUs, reaching the highest sustained bandwidth \juancr{values at} 64 DPUs.
The maximum sustained \juancrrr{CPU-DPU} bandwidth that we measure is
6.68 GB/s, while the maximum sustained \juancrrr{DPU-CPU} bandwidth is
4.74 GB/s.
However, we observe that the increase in sustained bandwidth \juancr{with DPU count} is sublinear. The sustained \juancrrr{CPU-DPU} bandwidth for 64 DPUs is 20.13$\times$ higher than that for one DPU, while the sustained \juancrrr{DPU-CPU} bandwidth for 64 DPUs is 38.76$\times$ that for one DPU.
\gboxbegin{\rtask{ko}}
\textbf{The sustained bandwidth of parallel \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers between the host main memory and the DRAM Processing Unit's Main memory (MRAM) banks increases with the number of DRAM Processing Units inside a rank.}
\gboxend
Fifth, we observe large differences between sustained bandwidths of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers for both serial and parallel transfers.
These differences are due to different implementations of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers in UPMEM SDK 2021.1.1~\cite{comm-upmem}.
While \juancrrr{CPU-DPU} transfers use x86 AVX write instructions~\cite{avx2011intel}, which are asynchronous, \juancrrr{DPU-CPU} transfers use AVX read instructions~\cite{avx2011intel}, which are synchronous.
As a result, \juancrrr{DPU-CPU} transfers cannot sustain \juancr{as many memory accesses as \juancrrr{CPU-DPU} transfers, which results} in lower sustained bandwidths of both serial and parallel \juancrrr{DPU-CPU} transfers than the \juancrrr{CPU-DPU} transfer counterparts.
Sixth, sustained bandwidth of broadcast \juancrrr{CPU-DPU} transfers reaches up to 16.88 GB/s.
One reason why this maximum sustained bandwidth is significantly higher \juancr{than} that of parallel \juancrrr{CPU-DPU} transfers is better locality in the cache hierarchy of the host CPU~\cite{comm-upmem}.
While a broadcast transfer copies the \emph{same} buffer to \emph{all} MRAM banks, which increases temporal locality in the CPU cache hierarchy, a parallel \juancrrr{CPU-DPU} transfer copies \emph{different} buffers to \emph{different} MRAM banks. \juancr{These buffers are more likely to miss in the CPU cache hierarchy and need to be fetched from main memory into CPU caches before being copied to MRAM banks.}
Seventh, in all our experiments across an entire rank, the sustained bandwidth is lower than the theoretical maximum bandwidth of DDR4-2400 DIMMs (19.2 GB/s)~\cite{jedec2012ddr4}. We attribute this bandwidth loss to the transposition library~\cite{devaux2019} that the UPMEM SDK uses to map entire 64-bit words onto the same MRAM bank (Section~\ref{sec:sys-org}).
\gboxbegin{\rtask{ko}}
\textbf{The sustained bandwidth of parallel \juancrrr{CPU-DPU} transfers between the host main memory and the DRAM Processing Unit's Main memory (MRAM) banks is higher than the sustained bandwidth of parallel \juancrrr{DPU-CPU} transfers between the MRAM banks and the host main memory} due to different implementations of \juancrrr{CPU-DPU} and \juancrrr{DPU-CPU} transfers in the UPMEM runtime library.
\textbf{The sustained bandwidth of broadcast \juancrrr{CPU-DPU} transfers (i.e., the same buffer is copied to multiple MRAM banks) is higher than that of parallel \juancrrr{CPU-DPU} transfers (i.e., different buffers are copied to different MRAM banks)} due to higher temporal locality in the CPU cache hierarchy.
\gboxend
\section{Introduction}
Perovskites with the general chemical formula ABX$_3$ have attracted great attention for dielectric, optoelectronic, and solar cell applications due to their superb ferroelectric, piezoelectric, superconductive and photovoltaic properties~\cite{jaffe1971piezoelectric, C6TC04830G, zhang2015advantages}. During the last decade, in the field of solar cell applications, hybrid lead-halide perovskites, namely CH$_3$NH$_3$PbX$_3$ and CH(NH$_2$)$_2$PbX$_3$ (X = Cl, Br and I), have achieved great success owing to their small band gap, long carrier mobility, low manufacturing cost and high power conversion efficiency~\cite{deschler2014high, shi2015low, C4EE02988G, bi2016charge, C5CC08643D, C7TA00434F, C8CC01982G}. However, due to the presence of organic molecules, these perovskites are unstable against heat, light and moisture, and their efficiency therefore degrades with time in practical use~\cite{park2019intrinsic}. Moreover, the presence of toxic lead makes these materials hazardous for the environment~\cite{babayigit2016toxicity}. These shortcomings have hindered their practical application.
In search of alternative perovskites that can alleviate the limitations of lead-halide perovskites, chalcogenide perovskites with S- or Se-anions have been proposed for photovoltaic applications~\cite{sun2015chalcogenide}. Several prototypical chalcogenide perovskites (viz. SrHfS$_3$~\cite{hanzawa2019material} and AZrS$_3$ (A = Sr, Ca, Ba)~\cite{lee2005synthesis, majumdar2020emerging, perera2016chalcogenide}, along with their related phases) have been synthesized successfully. Amongst them, BaZrS$_3$ consists of earth-abundant elements and has a moderate band gap ($\sim$1.82 eV~\cite{li2019band}) ideal for photovoltaics. Moreover, it also exhibits ambipolar doping~\cite{meng2016alloying} and is stable under different environmental conditions~\cite{ravi2021colloidal}. In order to optimize solar absorption, doping at the Ba/Zr-sites has been attempted in this material~\cite{wei2020ti, perera2016chalcogenide, kuhar2017sulfide}. However, such doped/alloyed configurations seem to lack stability~\cite{sun2018chalcogenide}. Thin films of BaZrS$_3$ have also been reported, which has directed research towards its new phases, known as Ruddlesden-Popper (RP) phases~\cite{comparotto2020chalcogenide}. Tremendous efforts have been invested to tune their electrical and optical properties~\cite{li2019band}. The RP phases, as derivatives of the perovskite structure, are evolving as semiconductors for optoelectronic applications~\cite{song2020structure, ghosh2020charge}.
Their general formula is A$_{\textrm{n+1}}$B$_{\textrm{n}}$X$_{\textrm{3n+1}}$, where perovskite blocks of unit-cell thickness ``n'' are separated by an AX rock-salt layer along the [001] direction. Alternate perovskite blocks are displaced in the in-plane direction by half a unit cell. The RP phases are included in the broad category of ``2D perovskites'' owing to their layered structural arrangement (see Fig.~\ref{1}).
Note that several studies referring to layered perovskites as ``2D perovskites'' exist in the literature, where periodic stacking of perovskite layers results in a bulk structure. Their material properties can be tuned either by substitution or by dimensional reduction~\cite{gill2020understanding, gill2021high, qiu20192d}. Due to quantum confinement effects~\cite{stanford2018emerging}, a considerable change in bulk physical properties (such as bulk modulus, elastic modulus, charge-carrier properties and optical properties) can be seen on reducing the dimension of a material~\cite{lee2012quantum, kohn1965self}. Research in this field is evolving rapidly~\cite{ghosh2020charge, tian2020two,yu2019theoretical,hohenberg1964inhomogeneous}.
In optoelectronic materials, exciton formation greatly influences the charge-separation properties; hence, excitonic parameters such as the exciton binding energy (E$_\textrm{B}$) act as important descriptors for optoelectronic applications. Solar cell performance depends upon the fraction of excitons that thermally dissociate into electrons and holes, giving rise to free charge carriers. In addition, the concept of polarons has been used to explain multiple photo-physical phenomena in these materials~\cite{park2018excited}. Polaronic effects have been suggested to play an important role in excitation dynamics and carrier transport. The separation of free charge is also influenced by the carrier mobility; hence, understanding the effect of electron-phonon coupling in terms of polaron mobility is important. A systematic study of the excitonic and polaronic effects in the RP phases of BaZrS$_3$ is hitherto unknown. The present Letter, therefore, explores the excitonic properties along with the polaronic effect in the RP phases Ba$_{\textrm{n+1}}$Zr$_{\textrm{n}}$S$_{\textrm{3n+1}}$ (n=[1-3]) within the framework of many-body perturbation theory. The electron-phonon coupling is also taken into account using the Fr\"ohlich model to compute the polaron mobility.
The exciton binding energy is defined as the energy required to separate an exciton into an individual electron and hole. Theoretically, the exciton binding energy (E$_\textrm{B}$) is calculated as the difference between the energy of the bound electron-hole (e-h) pair (i.e., the BSE gap) and that of the unbound e-h pair (i.e., the GW gap). In order to determine the optical response of the Ba$_{\textrm{n+1}}$Zr$_{\textrm{n}}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases, we have calculated the imaginary part of the dielectric function (Im ($\epsilon$)). Initially, we have benchmarked the exchange-correlation ($\epsilon_{\textrm{xc}}$) functional for our systems. Since single-shot GW (G$_0$W$_0$) calculations depend strongly on the starting point, we need to validate a suitable starting point for the G$_0$W$_0$ calculation. Note that the spin-orbit coupling (SOC) effect is negligible in these systems (see Fig. S3); hence, we have excluded SOC in our calculations (see Fig. S5 in SI). The band gaps of Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ are considerably underestimated by PBE, with values of 0.61 eV, 0.42 eV and 0.34 eV, respectively. On the other hand, the values obtained with the default parameters of HSE06 (viz. exact exchange = 25$\%$ and screening parameter 0.2 \AA$^{-1}$) are 1.39 eV, 1.18 eV and 1.08 eV, respectively. The HSE06 numbers are in good agreement with the experimental findings~\cite{li2019band}. Further, the peak position, which is underestimated by PBE, is improved by G$_0$W$_0$@PBE. Notably, the quasiparticle gaps computed using G$_0$W$_0$@PBE are overestimated in comparison to the experimental band gap, since they do not take into account the exciton binding energy. However, the gaps are improved by employing BSE on top of G$_0$W$_0$@PBE. We find the optical peak position of Ba$_2$ZrS$_4$ to be 2.11 eV with G$_0$W$_0$@PBE (see Fig. \ref{2}(a)) and 2.17 eV with G$_0$W$_0$@HSE06 (see Fig. S4 in SI). Since the results obtained with HSE06 as the starting point of G$_0$W$_0$ deviate more from the experimental results, we have performed all further calculations using G$_0$W$_0$@PBE. Although G$_0$W$_0$@PBE gives a better result than G$_0$W$_0$@HSE06, it is still far from the experimental optical band gap of 1.33 eV.
\begin{figure}[h]
\centering
\includegraphics[width=0.80\textwidth]{RP-0.jpg}
\caption{Optimized crystal structure of Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) Ruddlesden-Popper phases (RP phases).}
\label{1}
\end{figure}
However, the gap improves when we solve the BSE, because the BSE takes into account the excitonic effect, which is ignored in the G$_0$W$_0$ calculation. Therefore, to obtain accurate optical spectra, we have performed BSE@G$_0$W$_0$@PBE to incorporate e-h interactions. Similarly, we have performed G$_0$W$_0$@PBE and BSE@G$_0$W$_0$@PBE calculations to capture the optical and excitonic effects for Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$, respectively.
We find that the BSE peak positions for Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ are 1.71 eV, 1.49 eV and 1.43 eV, respectively, whereas G$_0$W$_0$@PBE places the peaks at 2.11 eV, 1.82 eV and 1.69 eV, respectively (see Fig. \ref{2}(a-c)).
It should be noted here that these numbers depend strongly on the k-mesh, and it is very challenging (even with the fastest supercomputers) to converge the BSE calculation so as to obtain the excitonic peak, and thereby E$_\textrm{B}$, with absolute accuracy. In Fig.~\ref{2}, the red shift of the BSE@G$_0$W$_0$@PBE peak relative to the G$_0$W$_0$@PBE peak signifies the excitonic effect in the considered RP phases. The computed E$_\textrm{B}$ of the first bright exciton of the Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ RP phases is found to be 0.40 eV, 0.33 eV and 0.26 eV, respectively. The obtained E$_\textrm{B}$ values are somewhat on the larger side. In a solar cell, for easy dissociation of excitons into free charge carriers (viz. e and h) at room temperature, a low E$_\textrm{B}$ is desirable ($k_\textrm{B}$T = 26 meV at T = 300 K). Thus, the discrepancy of the BSE peak position from the experimental value~\cite{li2019band} may lead to incorrect E$_\textrm{B}$ values. Unfortunately, we have already used the densest k-mesh that is computationally feasible for the BSE@G$_0$W$_0$@PBE calculations; a denser k-mesh is not tractable for the superstructures of the RP phases in G$_0$W$_0$@PBE and BSE@G$_0$W$_0$@PBE, solely due to computational limitations. This prevents us from estimating E$_\textrm{B}$ accurately for the given systems directly from BSE. Therefore, to compute E$_\textrm{B}$, we have employed a combined state-of-the-art method comprising the Wannier-Mott model and density functional perturbation theory (DFPT), as explained later. However, one can estimate the error bar in the BSE peak position via a parameterized model for the dielectric screening, i.e., the model-BSE (mBSE) approach. The mBSE approach is computationally much cheaper than BSE but has similar accuracy for the first peak position (see Fig. S6 in SI), which enables us to sample the Brillouin zone with a denser k-mesh. The mBSE calculations show that a denser k-mesh sampling indeed red-shifts the BSE peak by $\sim$0.3 eV (see the mBSE calculation details in section V of the SI).
Now, from the above analysis, despite the small uncertainty in the exact BSE peak position and the corresponding E$_\textrm{B}$ values, it is certain that the first two excitons in the considered RP phases are bright, and that several dark excitons also exist below the second bright exciton of these systems. From the above studies, the trend of E$_\textrm{B}$ for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ is also expected to be robust, i.e., n=1 $>$ n=2 $>$ n=3. From Fig. \ref{2}(a), we observe a double-peak character of the excitonic peak of Ba$_2$ZrS$_4$, which may arise from two nearby transitions.
Moreover, only Ba$_2$ZrS$_4$ exhibits a direct band gap, whereas for the other two higher RP phases in the series the band gap becomes indirect in nature. In Fig.~\ref{4}, we show the broadening of the excitonic peak. The lifetime ($\tau$) of an exciton is inversely proportional to this broadening, and therefore the qualitative trend of the exciton lifetime for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) is $\tau_{n=3}$ $>$ $\tau_{n=2}$ $>$ $\tau_{n=1}$. Note that here the broadening is computed with the mBSE approach and reflects the electron-hole interaction; it does not include the electron-phonon coupling effect.
\begin{figure}[htp]
\centering
\includegraphics[width=0.80\textwidth]{RP-1-new1.jpg}
\caption{Imaginary part (Im ($\epsilon$)) of the dielectric function for (a) Ba$_2$ZrS$_4$, (b) Ba$_3$Zr$_2$S$_7$, (c) Ba$_4$Zr$_3$S$_{10}$ using single shot GW (G$_0$W$_0$) and BSE. Imaginary part (Im($\varepsilon$)) of dielectric function for (d) Ba$_2$ZrS$_4$ (e) Ba$_3$Zr$_2$S$_7$ and (f) Ba$_4$Zr$_3$S$_{10}$ along E $||$ xy direction and imaginary part (Im($\varepsilon$)) of dielectric function for (g) Ba$_2$ZrS$_4$ (h) Ba$_3$Zr$_2$S$_7$ and (i) Ba$_4$Zr$_3$S$_{10}$ along E $||$ z direction, using G$_0$W$_0$ and BSE.}
\label{2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.50\textwidth]{RP-3.jpg}
\caption{Full width at half maximum (FWHM) of exciton peak using mBSE approach with dense k-mesh. Broadening of exciton peak is mainly due to electron-hole interaction.}
\label{4}
\end{figure}
The Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases have a tetragonal structure and exhibit optical anisotropy. Hence, it is necessary to study their optical and excitonic properties along the E$\vert\vert$xy (i.e., in-plane, along the x- or y-direction) and E$\vert\vert$z (i.e., out-of-plane, along the z-direction) directions. We observe anisotropy in Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$, which can greatly affect their performance in practical applications. Therefore, it is of paramount importance to understand the anisotropic effect in the optical and excitonic properties of the Ba$_{\textrm{n+1}}$Zr$_{\textrm{n}}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases. In Fig. \ref{2}(d-i), we show the optical and excitonic contributions of the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases along the different directions, viz. x, y and z.
Employing the Shockley-Queisser (SQ) criterion~\cite{shockley1961detailed} for solar cells and other optoelectronic devices, we can remark that the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases are optically active in-plane (i.e., along the x- and y-directions) and optically inactive out-of-plane (i.e., along the z-direction). These systems possess similar optical as well as excitonic properties along the x- and y-directions (see Fig. \ref{2}(d-f)). However, along the z-direction, their optical and excitonic spectra are not only blue-shifted, but the features of the G$_0$W$_0$ and BSE peaks also differ considerably from those along the in-plane direction (see Fig. \ref{2}(g-i)). It is well known that the exciton lifetime is inversely proportional to the width of the exciton peak; hence, a change in the shape of the exciton peak greatly influences the excitonic parameters along different directions as well. The d-orbital contribution to the valence band maximum (VBM) and conduction band minimum (CBm) could be responsible for the optical anisotropy in these systems. Here, the Zr 4d$_{xy}$ orbital contributes the most to the CBm (see Fig. S7 in SI), whereas in bulk BaZrS$_3$ the CBm has d$_{yz}$ and d$_{xz}$ contributions. This is why no significant anisotropy is observed in the latter~\cite{manish-chalco}.
We have used Wannier-Mott model~\cite{waters2020semiclassical} for a simple screened Coulomb potential. According to this model, the E$_\textrm{B}$ for screened interacting electron-hole (e-h) pair is given by:
\begin{equation}
\begin{split}
\textrm{E}_\textrm{B}=\left(\frac{\mu}{\epsilon_\textrm{eff}^2}\right)\textrm{R}_\infty
\label{eq1}
\end{split}
\end{equation}
\noindent where $\mu$ is the reduced mass in units of the electron rest mass, $\epsilon_\textrm{eff}$ is the effective dielectric constant (which includes the electronic as well as the ionic contribution) and R$_\infty$ is the Rydberg constant. The reduced masses of Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ in units of the electron rest mass are 0.32, 0.26 and 0.34, respectively (for more information regarding the calculation of the reduced mass and the effective masses of electrons and holes, see section VII in SI). However, in the above expression, $\epsilon_\textrm{eff}$ is still unknown for these systems. It has been reported that lattice relaxation can influence the exciton binding energy~\cite{freysoldt2014first}. For example, if $\omega_{\textrm{LO}}$ is the longitudinal optical phonon frequency and E$_\textrm{B}$ $<<$ $\hbar\omega_{\textrm{LO}}$, one needs to consider the effect of lattice relaxation to compute $\epsilon_\textrm{eff}$. However, if E$_\textrm{B}$ $>>$ $\hbar\omega_{\textrm{LO}}$, the effect of lattice relaxation can be ignored, as in such cases $\epsilon_\textrm{eff}$ $\rightarrow$ $\epsilon_e$, where $\epsilon_e$ is the static high-frequency dielectric constant, which mainly consists of the electronic contribution.
Therefore, for $\epsilon_{\textrm{eff}}$, a value intermediate between the static high-frequency electronic dielectric constant $\epsilon_e$ and the static low-frequency ionic dielectric constant $\epsilon_i$ should be considered.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{RP-2-new.jpg}
\caption{Electronic (Im ($\varepsilon_e$) and Re ($\varepsilon_e$)) (a)-(c) and ionic (Im ($\varepsilon_i$) and Re ($\varepsilon_i$)) (d)-(f) contribution to dielectric function for Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$. Red and black color correspond to real (Re($\varepsilon$)) and imaginary (Im($\varepsilon$)), respectively.}
\label{3}
\end{figure}
In Fig. \ref{3}(a-c) and Fig. \ref{3}(d-f), we show the electronic and ionic contributions to the dielectric function, respectively, computed using the DFPT approach. The static real part of the ionic dielectric constant for Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ is 39.35, 31.19 and 57.59, respectively. As per Fig. \ref{3}, the considerable magnitude of the static low-frequency ionic dielectric constant is attributed to the occurrence of optically active phonon modes below 10 meV, which shows the ionic nature of the RP phases. Using the electronic and ionic contributions to the dielectric constant and Equation \ref{eq1}, we have calculated upper and lower bounds for E$_\textrm{B}$ (see Table~\ref{Table1}). The effective value of the dielectric constant, and hence the binding energy, lies between the upper and lower bounds listed in Table~\ref{Table1}.
We observe that the upper- and lower-bound values of E$_\textrm{B}$ for the considered RP phases are comparable with those of APbX$_3$ (A = MA, FA; X = I, Br) perovskites~\cite{basera2020capturing, Jain2021exciton}.
It should be noted that in bulk BaZrS$_3$ the ionic contribution to the dielectric constant has been observed to be negligible~\cite{manish-chalco}, whereas in the RP phases it plays a significant role.
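As an illustration of Equation~\ref{eq1}, plugging in the tabulated Ba$_2$ZrS$_4$ values ($\mu$ = 0.32, $\epsilon_e$ = 8.94, $\epsilon_i$ = 39.25) with R$_\infty$ = 13.606 eV gives
\[
\textrm{E}_{\textrm{Bu}} \approx \frac{0.32}{(8.94)^2}\times 13606\ \textrm{meV} \approx 54\ \textrm{meV}, \qquad
\textrm{E}_{\textrm{Bl}} \approx \frac{0.32}{(39.25)^2}\times 13606\ \textrm{meV} \approx 2.8\ \textrm{meV},
\]
in agreement with the tabulated bounds (56.00 and 2.9 meV) to within the rounding of $\mu$.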
\begin{table}[htbp]
\caption{Electronic and ionic contribution to the dielectric constant for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases, where $\epsilon_e$ and $\epsilon_i$ correspond to the static value of electronic and ionic dielectric constant, respectively. E$_{\textrm{Bu}}$ and E$_{\textrm{Bl}}$ correspond to upper and lower bound of exciton binding energy, respectively.}
\begin{center}
\begin{tabular}[c]{|c|c|c|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ & $\epsilon_e$ & E$_{\textrm{Bu}}$ (meV) & $\epsilon_i$ & E$_{\textrm{Bl}}$ (meV)\\ \hline
Ba$_2$ZrS$_4$ & 8.94 & 56.00& 39.25 & 2.9 \\ \hline
Ba$_3$Zr$_2$S$_7$ & 8.32 & 50.14& 31.19 & 3.49 \\ \hline
Ba$_4$Zr$_3$S$_{10}$ & 8.51 & 63.45 & 57.59 & 1.39 \\ \hline
\end{tabular}
\label{Table1}
\end{center}
\end{table}
Recently, Ming \textit{et al.} have reported the effect of strain on the band gap and octahedron rotation for Ba$_2$ZrS$_4$~\cite{ming2020octahedron}. According to their report, a significant change in the band gap and octahedron rotation is observed with the application of strain. However, in our case of the Ba$_2$ZrS$_4$ RP phase, a monotonic change in the band gap (but not as large as in the case of Ming \textit{et al.}) has been observed on applying up to $\pm7\%$ strain along the b-axis and c-axis (see Fig. S9 in SI). The effects of strain on the band gap along the a- and b-directions are symmetric. Further, the effect of strain on the band gap in the out-of-plane direction is more significant than in the in-plane direction. We have also noticed a slight octahedral tilt under the application of strain along the b-axis. In the case of strain along the c-axis, a small rotation of the octahedra about the c-axis has been observed. Here, we have calculated the elastic modulus of the RP phases, which is given by
\begin{equation}
\begin{split}
\textrm{C}_{\textrm{3D}} = \left(\frac{1}{\textrm{V}}\right)\frac{\partial^2\textrm{E}}{\partial s^2}
\label{eq2}
\end{split}
\end{equation}
where V, s and E correspond to the volume of the unit cell, the strain and the total energy, respectively. We observe that the elastic modulus of the considered RP phases is larger than that of the 2D RP phases~\cite{ming2020octahedron}. In order to compute the carrier mobility, we have used the deformation potential model~\cite{deformation, takagi1994universality, bruzzone2011ab, qiao2014high}. According to this model, the mobility of a charge carrier is defined as:
\begin{equation}
\begin{split}
\mu_{\textrm{DP}} = \frac{(8\pi)^{\frac{1}{2}}\hbar^4e\textrm{C}_{\textrm{3D}}}{3(m^*)^{5/2}(k_\textrm{B}\textrm{T})^{3/2}\textrm{E}_{l}^2}
\label{eq3}
\end{split}
\end{equation}
where T is the temperature, m$^*$ is the effective mass of the charge carrier and \textit{e} is the elementary charge.
$\textrm{E}_l = \Delta\textrm{V}/(\Delta\textrm{l}/\textrm{l})$ is the deformation potential, where $\Delta$V is the shift of the band edge (VBM or CBm) under the fractional change $\Delta$l/l of the lattice constant (for more details regarding the calculation of C$_{\textrm{3D}}$ and E$_l$, see section VIII in SI).
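To make the unit handling of equation \ref{eq3} explicit, a short Python sketch is given below; it converts C$_{\textrm{3D}}$ from eV\,\AA$^{-3}$ and E$_l$ from eV to SI units and, with the Table \ref{Table2} inputs for the electron in Ba$_2$ZrS$_4$, reproduces a mobility of $\sim$2.2$\times$10$^4$ cm$^2$\,V$^{-1}$\,s$^{-1}$. The function name and structure are ours, shown only as an illustration.

\begin{verbatim}
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # J s
E_CH = 1.602176634e-19   # C
ME   = 9.1093837015e-31  # kg
KB   = 1.380649e-23      # J/K

def mu_dp_cm2(C3D_eV_A3, El_eV, mstar_me, T=300.0):
    """Deformation-potential mobility in cm^2 V^-1 s^-1."""
    C3D = C3D_eV_A3 * E_CH / 1e-30          # eV/A^3 -> J/m^3
    El  = El_eV * E_CH                      # eV -> J
    ms  = mstar_me * ME
    mu  = (math.sqrt(8.0 * math.pi) * HBAR**4 * E_CH * C3D
           / (3.0 * ms**2.5 * (KB * T)**1.5 * El**2))
    return mu * 1e4                         # m^2/(V s) -> cm^2/(V s)

# Electron in Ba2ZrS4 (Table 2 inputs, m* from section VII of the SI)
print(f"mu_DP = {mu_dp_cm2(1.23, 6.36, 0.18):.2e} cm^2/Vs")  # ~2.2e4
\end{verbatim}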
In Table \ref{Table2}, we list the values of C$_{\textrm{3D}}$, E$_l$ and $\mu_{\textrm{DP}}$ for the electron and hole of the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases. As the elastic modulus decreases down the column (see Table \ref{Table2}), the softness of the RP phases increases with increasing n in Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]). The electron mobility also decreases with increasing n in Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]).
\begin{table}[htbp]
\caption{Elastic modulus, deformation potential and predicted carrier mobility of Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases.}
\begin{center}
\begin{tabular}[c]{|c|c|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ & C$_{3\textrm{D}}$ (eV \AA$^{-3}$) & E$_l$ (eV) & $\mu_{\textrm{DP}}$ (cm$^2$ V$^{-1}$ s$^{-1}$) \\ \hline
Ba$_2$ZrS$_4$ (e)& 1.23 & 6.36 & 2.16$\times$10$^4$ \\ \hline
Ba$_2$ZrS$_4$ (h)& 1.23 & 6.61 & 2.32$\times$10$^3$ \\ \hline
Ba$_3$Zr$_2$S$_7$ (e)& 0.92 & 6.41 & 8.63$\times$10$^3$ \\ \hline
Ba$_3$Zr$_2$S$_7$ (h)& 0.92 & 6.39 & 4.72$\times$10$^1$ \\ \hline
Ba$_4$Zr$_3$S$_{10}$ (e)& 0.67 & 6.72 & 4.22$\times$10$^3$ \\ \hline
Ba$_4$Zr$_3$S$_{10}$ (h)& 0.67 & 6.73 & 8.27$\times$10$^1$ \\ \hline
\end{tabular}
\label{Table2}
\end{center}
\end{table}
\begin{table*}[htbp]
\caption{Polaron parameters for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases. }
\begin{center}
\begin{adjustbox}{width=0.70\textwidth}
\begin{tabular}[c]{|c|c|c|c|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_{\textrm{n}}$S$_{\textrm{3n+1}}$ & 1/$\epsilon^*$ & $\alpha$ & $\textrm{m}_\textrm{P}$/$\textrm{m}^*$ & l$_{\textrm{P}}$ (\AA) & $\mu_{\textrm{P}}$ (cm$^2$V$^{-1}$s$^{-1}$)\\ \hline
Ba$_2$ZrS$_4$ & 0.09 & 1.84& 1.39 & 354.30 & 164.75 \\ \hline
Ba$_3$Zr$_2$S$_7$ & 0.08 & 2.36 & 1.53 & 307.12& 117.20 \\ \hline
Ba$_4$Zr$_3$S$_{10}$ &0.10 & 1.77 & 1.37 & 292.49 & 76.39 \\ \hline
\end{tabular}
\end{adjustbox}
\label{Table3}
\end{center}
\end{table*}
After analyzing the specific free volume (for details see section IX in SI), we find that the study of electron-phonon coupling is important in these materials. The presence of polarization in the RP phases also motivates a study of polarons.
We have examined the electron-phonon interaction in our systems using a mesoscopic model, viz. Frohlich's model~\cite{frohlich1954electrons, feynman1955slow, hellwarth1999mobility} for polarons. The dressed ``quasiparticles'', formed when a charge carrier (electron or hole) is screened by the polarization of the surrounding lattice, are known as polarons. Frohlich introduced a dimensionless parameter to quantify the coupling of an electron to the polar lattice vibrations. This parameter is known as the Frohlich coupling constant~\cite{biaggio1997band, frohlich1954electrons, feynman1955slow}.
\begin{equation}
\begin{split}
\alpha = \frac{1}{\epsilon^*}\sqrt{\frac{\textrm{R}_\textrm{y}}{\textrm{ch}\omega_{\textrm{LO}}}}\sqrt{\frac{\textrm{m}^*}{\textrm{m}_\textrm{e}}}
\label{eq4}
\end{split}
\end{equation}
where the coupling constant $\alpha$ quantifies the electron-phonon coupling, m$^*$ is the effective mass of the electron, m$_\textrm{e}$ is the rest mass of the electron, h is Planck's constant, c is the speed of light, $\omega_{\textrm{LO}}$ (in cm$^{-1}$) is the optical phonon frequency, $1/\epsilon^*$ is the ionic screening parameter ($1/\epsilon^*$ = $1/\epsilon_{\infty}$ - $1/\epsilon_{\textrm{static}}$, where $\epsilon_{\textrm{static}}$ and $\epsilon_{\infty}$ are the static and high-frequency dielectric constants) and $\textrm{R}_\textrm{y}$ is the Rydberg energy. We observe that the electron-phonon coupling constants of the considered RP phases (Table \ref{Table3}) are smaller than that of bulk BaZrS$_3$~\cite{manish-chalco}. Further, using the extended form of Frohlich's polaron theory given by Feynman, the effective mass of the polaron (m$_{\textrm{P}}$)~\cite{feynman1955slow} is defined as:
\begin{equation}
\begin{split}
\textrm{m}_{\textrm{P}} = \textrm{m}^* \left(1 + \frac{\alpha}{6} + \frac{\alpha^2}{40} + \cdots\right)
\label{eq5}
\end{split}
\end{equation}
where m$^*$ is the effective mass calculated from the band structure calculations (see section VII in SI). The polaron radius~\cite{sendner2016optical} can be calculated as follows:
\begin{equation}
\begin{split}
\textrm{l}_\textrm{P} = \sqrt{\frac{h}{2c\textrm{m}^*\omega_{\textrm{LO}}}}
\label{eq6}
\end{split}
\end{equation}
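The chain from Eq. \ref{eq4} to Eqs. \ref{eq5} and \ref{eq6} can be reproduced with a few lines of Python, sketched below with the Ba$_2$ZrS$_4$ electron parameters ($1/\epsilon^*$ = 0.09, m$^*$ = 0.18\,m$_\textrm{e}$, $\omega_{\textrm{LO}}$ = 1.34 THz). The helper functions are ours; small deviations from Table \ref{Table3} stem from the rounding of the tabulated inputs.

\begin{verbatim}
import numpy as np

H  = 6.62607015e-34                  # Planck constant (J s)
ME = 9.1093837015e-31                # electron rest mass (kg)
RY = 13.605693 * 1.602176634e-19     # Rydberg energy (J)

def frohlich_alpha(inv_eps_star, mstar_me, nu_LO_Hz):
    """Dimensionless Frohlich coupling constant; c*omega_LO is passed
    directly as a frequency nu_LO in Hz."""
    return inv_eps_star * np.sqrt(RY / (H * nu_LO_Hz)) * np.sqrt(mstar_me)

def polaron_mass_ratio(alpha):
    """m_P/m* to second order in alpha (Feynman expansion)."""
    return 1.0 + alpha / 6.0 + alpha**2 / 40.0

def polaron_radius_A(mstar_me, nu_LO_Hz):
    """Polaron radius in Angstrom, with c*omega_LO written as nu_LO."""
    return np.sqrt(H / (2.0 * mstar_me * ME * nu_LO_Hz)) * 1e10

# Electron in Ba2ZrS4: 1/eps* = 0.09, m* = 0.18 m_e, nu_LO = 1.34 THz
a = frohlich_alpha(0.09, 0.18, 1.34e12)
print(f"alpha = {a:.2f}, m_P/m* = {polaron_mass_ratio(a):.2f}, "
      f"l_P = {polaron_radius_A(0.18, 1.34e12):.0f} A")
\end{verbatim}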
Polaron mobility according to the Hellwarth polaron model is defined as follows:
\begin{equation}
\begin{split}
\mu_{\textrm{P}} = \frac{(3\sqrt{\pi}e)}{2\pi \textrm{c}\omega_{\textrm{LO}}\textrm{m}^*\alpha} \frac{\textrm{sinh}{(\beta/2)}}{\beta^{5/2}}\frac{w^3}{v^3}\frac{1}{K}
\label{eq7}
\end{split}
\end{equation}
where $\beta$ = hc$\omega_{\textrm{LO}}$/$k_{\textrm{B}}$T, e is the elementary charge, m$^*$ is the effective mass of the charge carrier, and $\textit{w}$ and $\textit{v}$ are temperature-dependent variational parameters. $\textit{K}$ is a function of $\textit{v}$, $\textit{w}$ and $\beta$~\cite{hellwarth1999mobility}, defined as follows:
\begin{equation}
\begin{split}
K(a,b) = \int_0^\infty du\, [u^2 + a^2 - b\cos(vu)]^{-3/2}\cos(u)
\label{eq8}
\end{split}
\end{equation}
Here, a$^2$ and b are calculated as:
\begin{equation}
\begin{split}
a^2 = (\beta /2)^2 + \frac{(v^2 - w^2)}{w^2v}\, \beta\coth(\beta v/2)
\label{eq9}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
b = \frac{(v^2 - w^2)}{w^2v}\frac{\beta}{\sinh(\beta v/2)}
\label{eq10}
\end{split}
\end{equation}
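A compact numerical sketch of Eqs. \ref{eq7}-\ref{eq10} is given below in Python, with the integral $K$ evaluated using \texttt{scipy}. The variational parameters $v$ and $w$ must come from minimising the Feynman variational free energy, which is not reproduced here, so the values in the example call are placeholders rather than the fitted parameters behind Table \ref{Table3}.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

E_CH = 1.602176634e-19   # C
ME   = 9.1093837015e-31  # kg
H    = 6.62607015e-34    # J s
KB   = 1.380649e-23      # J/K

def hellwarth_mobility(alpha, mstar_me, nu_LO_Hz, v, w, T=300.0):
    """Polaron mobility (cm^2 V^-1 s^-1) in the Hellwarth scheme.
    v, w: Feynman variational parameters, assumed known."""
    beta = H * nu_LO_Hz / (KB * T)
    r = (v**2 - w**2) / (w**2 * v)
    a2 = (beta / 2.0)**2 + r * beta / np.tanh(beta * v / 2.0)
    b = r * beta / np.sinh(beta * v / 2.0)
    # K(a, b): the oscillatory integrand decays as u^-3, so a large
    # finite upper bound is sufficient.
    K, _ = quad(lambda u: np.cos(u) / (u**2 + a2 - b * np.cos(v * u))**1.5,
                0.0, 200.0, limit=500)
    m = mstar_me * ME
    mu = (3.0 * np.sqrt(np.pi) * E_CH / (2.0 * np.pi * nu_LO_Hz * m * alpha)
          * np.sinh(beta / 2.0) / beta**2.5 * (w**3 / v**3) / K)
    return mu * 1e4  # m^2/(V s) -> cm^2/(V s)

# Illustrative call for Ba2ZrS4 (alpha, m*, nu_LO from the text); v and w
# here are placeholder values, not the parameters used in this work.
print(f"{hellwarth_mobility(1.84, 0.18, 1.34e12, v=3.4, w=2.7):.1f}")
\end{verbatim}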
We have used the lowest LO phonon frequencies, i.e., 1.34, 1.46 and 1.36 THz for Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$, respectively, in our calculations. $\mu_{\textrm{P}}$ gives the upper limit of the charge carrier mobility, under the assumption that the charge carrier (electron) interacts only with the optical phonons. A significant change in the charge carrier mobility can be seen on comparing the electron mobility without its interaction with the optical phonons (see Table~\ref{Table2}) and with this interaction included (see Table~\ref{Table3}). In Table \ref{Table3}, the ionic screening parameter is indicative of the ionicity of a system. On comparing our results with the hybrid inorganic-organic halide perovskites (the ionic screening of MAPbI$_3$, MAPbBr$_3$ and MAPbCl$_3$ is 0.17, 0.18 and 0.22, respectively~\cite{sendner2016optical}), we can say that Ba$_2$ZrS$_4$, Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$ are less ionic than MAPbX$_3$ (X = Cl, Br, I). Also, the obtained coupling constant is comparable to or larger than that of MAPbX$_3$ (see Table \ref{Table3}). The lowering of the charge carrier mobility on the inclusion of LO phonon modes indicates that the optical phonon modes dominate over the acoustic phonon modes in these materials. Note that, in the absence of experimental data, these results may serve as a guideline for further research. Moreover, for qualitative analysis, our results are very informative for understanding the charge transport properties of these RP phases. From Table \ref{Table3}, we can clearly see that on increasing n in Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]), i.e., down the column, the polaron mobility decreases, and for the bulk BaZrS$_3$~\cite{majumdar2020emerging} phase it is very small. In view of this, the considered RP phases are expected to be better optical materials than their bulk phase.
In conclusion, we have reported the electronic and excitonic properties of the RP phases Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) using Many Body Perturbation Theory. The exciton binding energy decreases on increasing the thickness of the perovskite layer. A double-peak character is observed in the first excitonic peak computed along the in-plane direction of Ba$_2$ZrS$_4$. Only Ba$_2$ZrS$_4$ possesses a direct band gap, and with increasing n in Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) the band gap becomes increasingly indirect. Using the Wannier-Mott approach, we have obtained the upper and lower bounds of E$_\textrm{B}$ from the electronic and ionic contributions to the dielectric constant, respectively.
We observe that the ionic contribution is more significant in Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ than in bulk BaZrS$_3$.
The charge carrier mobility, computed using the deformation potential model, is maximal in Ba$_2$ZrS$_4$. Further, the electron-phonon coupling constant is smaller for the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ RP phases than for bulk BaZrS$_3$. From our polaron study, we conclude that the optical phonon modes dominate over the acoustic phonon modes in these systems. A large discrepancy is noticed between the charge carrier mobility (which includes only acoustic phonon modes in the electron-phonon coupling) and the polaron mobility (which additionally includes the optical phonon modes). This underlines the dominant role of the optical phonon modes in the electron-phonon coupling, which must be taken into account to understand the charge transport properties of the RP phases.
\section{Computational Methods}
We have executed a systematic study to explore the optical, electronic and excitonic properties using Density Functional Theory (DFT)~\cite{kohn1965self,hohenberg1964inhomogeneous} and beyond-DFT approaches under the framework of Many Body Perturbation Theory~\cite{jiang2012electronic, fuchs2008efficient, basera2019self}. All calculations are performed with Projector Augmented Wave (PAW) potentials as implemented in the Vienna $\textit{Ab initio}$ Simulation Package (VASP)~\cite{kresse1996efficiency,kresse1999ultrasoft}. The PAW potentials of the elements Ba, Zr and S contain ten, twelve and six valence electrons, respectively. The Ba$_{\textrm{n+1}}$Zr$_{\textrm{n}}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases have tetragonal structures with space group I4/mmm [139]. All the structures are optimized using the Generalized Gradient Approximation (GGA) in the PBE~\cite{perdew1996generalized} exchange-correlation ($\epsilon_{\textrm{xc}}$) functional until the forces are smaller than 0.001 eV/\AA. A $\Gamma$-centered 2$\times$2$\times$2 k-mesh sampling is employed for the optimization calculations (optimized structures are shown in Fig. \ref{1}). The electronic self-consistency loop convergence is set to 0.01 meV, and the kinetic energy cutoff is set to 600 eV for the plane wave basis set expansion. To explore the optical properties and excitonic effects, the Bethe-Salpeter Equation (BSE) is solved. Initially, we have used a light 4$\times$4$\times$1 k-mesh for the energy calculation (see Fig. S1). The convergence criteria for the number of occupied and unoccupied bands in the BSE calculations are given in the SI (see Fig. S2). In order to obtain improved spectral features with a denser k-mesh, we have employed the model-BSE (mBSE)~\cite{bokdam2016role} approach. Following this, we have performed Density Functional Perturbation Theory (DFPT)~\cite{gajdovs2006linear} calculations with a 12$\times$12$\times$1 k-mesh to discern the role of the ionic contribution to the dielectric function along with the electronic contribution. Note that for the GW and BSE calculations we have used a converged number of bands (NBANDS), i.e., 800. Lastly, by employing the Frohlich model approach~\cite{frost2017calculating}, we have studied the polaron effect in our systems.
\begin{acknowledgement}
DG acknowledges UGC, India, for the senior research fellowship [grant no. [1268/(CSIR-UGC NET JUNE 2018)]]. AS acknowledges IIT Delhi for the financial support. MJ acknowledges CSIR, India, for the senior research fellowship [grant no. [09/086(1344)/2018-EMR-I]]. SB acknowledges the financial support from SERB under core research grant (grant no. CRG/2019/000647). We acknowledge the High Performance Computing (HPC) facility at IIT Delhi for computational resources.
\end{acknowledgement}
\begin{suppinfo}
K-mesh convergence for PBE functional; Number of occupied (NO) and unoccupied (NV) bands convergence in BSE calculation; Band structure plot with PBE and PBE+SOC; BSE@G$_0$W$_0$@HSE06 for Ba$_2$ZrS$_4$; model-BSE (mBSE) approach; Projected density of states (PDOS); Effective mass of electron (e) and hole (h); Calculation of deformation potential energy and elastic modulus; Strength of electron-phonon coupling.
\end{suppinfo}
\begin{center}
{\Large \bf Supplemental Material}\\
\end{center}
\begin{enumerate}[\bf I.]
\item k-mesh convergence for PBE functional
\item Number of occupied (NO) and unoccupied (NV) bands convergence in BSE calculation
\item Band structure plot with PBE and PBE+SOC
\item BSE@G$_0$W$_0$@HSE06 for Ba$_2$ZrS$_4$
\item model-BSE (mBSE) approach
\item Projected density of states (PDOS)
\item Effective mass of electron (e) and hole (h)
\item Calculation of deformation potential energy and elastic modulus
\item Strength of electron-phonon coupling
\end{enumerate}
\vspace*{12pt}
\clearpage
\section{k-mesh convergence for PBE functional}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-RP-1.jpg}
\caption{Real (Re($\varepsilon$)) and imaginary (Im($\varepsilon$)) part of dielectric function for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases using PBE exchange-correlation $\epsilon_{xc}$ functional.}
\label{S1}
\end{figure}
Figure \ref{S1}(a-c) shows the variation of the real part (Re($\varepsilon$)) and Figure \ref{S1}(d-f) the imaginary part (Im($\varepsilon$)) of the dielectric function for the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases. On increasing the k-mesh, a significant change in Re($\varepsilon$) can be seen. However, no shift in the first peak position of the Im($\varepsilon$) part of the dielectric function is observed on increasing the k-mesh. Hence, the light 4$\times$4$\times$1 k-mesh is sufficient to compute the quasiparticle band gap.
\section{Number of occupied (NO) and unoccupied (NV) bands convergence in BSE calculation}
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{rp-no-nv.jpg}
\caption{Variation of imaginary part (Im($\varepsilon$)) of dielectric function with number of occupied (NO) and unoccupied (NV) bands using BSE for Ba$_2$ZrS$_4$.}
\label{S2}
\end{figure}
\newpage
\section{Band structure plot with PBE and PBE+SOC}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-band1.jpg}
\caption{Band structure of (a) Ba$_2$ZrS$_4$, (b) Ba$_3$Zr$_2$S$_7$ using PBE and PBE+SOC exchange-correlation $\epsilon_{\textrm{xc}}$ functional.}
\label{S3}
\end{figure}
\newpage
\section{BSE@G$_0$W$_0$@HSE06 for Ba$_2$ZrS$_4$}
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{rp-hse-s4.jpg}
\caption{Imaginary part (Im ($\epsilon$)) of the dielectric function for Ba$_2$ZrS$_4$ using G$_0$W$_0$@HSE06 and BSE@G$_0$W$_0$@HSE06.}
\label{S4}
\end{figure}
\newpage
\section{model-BSE (mBSE) approach}
To compute the exciton energy and E$_\textrm{B}$ precisely, one needs to accurately calculate the optical spectra, or optical gap, using the conventional BSE@G$_0$W$_0$ approach. However, an inconsistency is observed in the BSE exciton peak due to an insufficiently dense k-mesh (approx. 8$\times$8$\times$1). We cannot use a denser k-mesh because it is computationally very expensive. This results in an incorrect E$_\textrm{B}$ value. Therefore, to overcome this issue, a less expensive but robust mBSE approach was proposed. In this model, the convergence of the optical spectra as a function of the k-mesh density is performed. This method is generally based on two approximations:
(i) Using Eq.~\ref{eqS1}, the RPA static screening W is replaced by a simple analytical model. Here, the dielectric function is replaced by the local model function:
\begin{equation}
\begin{split}
\varepsilon^{-1}_{\textrm{G,G}}(|\textrm{q+G}|)=1-(1-\varepsilon^{-1}_{\infty})\exp\left(-\dfrac{|\textrm{q+G}|^2}{4\lambda^2}\right)
\label{eqS1}
\end{split}
\end{equation}\\
where $\varepsilon_\infty$ is the static ion-clamped dielectric function in the high-frequency limit. $\varepsilon_{\infty}^{-1}$ is calculated either from DFPT or from G$_0$W$_0$. q and G are the wave vector and the reciprocal lattice vector, respectively. $\lambda$ is the screening length parameter, calculated by fitting $\varepsilon^{-1}$ at small wave vectors with respect to $|\textrm{q+G}|$ (see Figure~\ref{S5}(a-c)). The parameters obtained for the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases are collected in Table~\ref{TableS1}.
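The extraction of $\lambda$ from Eq. \ref{eqS1} is a simple one-dimensional curve fit; a minimal Python sketch using \texttt{scipy} is shown below. The data arrays are synthetic placeholders standing in for the \textit{ab initio} $\varepsilon^{-1}(|\textrm{q+G}|)$ values, so the script only illustrates the fitting procedure.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model_inv_eps(q, eps_inf_inv, lam):
    """Model inverse dielectric function used in mBSE."""
    return 1.0 - (1.0 - eps_inf_inv) * np.exp(-q**2 / (4.0 * lam**2))

# q = |q+G| (A^-1) and eps^-1 from DFPT/G0W0 at small wave vectors;
# the arrays below are placeholders for the ab initio data.
q = np.linspace(0.0, 2.0, 30)
eps_inv = (model_inv_eps(q, 0.14, 1.17)
           + np.random.default_rng(0).normal(0.0, 0.003, q.size))

popt, _ = curve_fit(model_inv_eps, q, eps_inv, p0=[0.1, 1.0])
print(f"eps_inf^-1 = {popt[0]:.2f}, lambda = {popt[1]:.2f} A^-1")
\end{verbatim}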
\newpage
\begin{table}[htbp]
\caption{The calculated inverse of the static ion-clamped dielectric function $\varepsilon_{\infty}^{-1}$ and the screening length parameter $\lambda$ (\AA$^{-1}$) used in mBSE (Eq. \ref{eqS1}) for the Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases.}
\begin{center}
\begin{tabular}[c]{|c|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ & $\varepsilon_{\infty}^{-1}$ (PBE) & $\lambda$ (PBE) \\ \hline
Ba$_2$ZrS$_4$ & 0.14 & 1.17 \\ \hline
Ba$_3$Zr$_2$S$_7$ & 0.15 & 1.19 \\ \hline
Ba$_4$Zr$_3$S$_{10}$ & 0.15& 1.17 \\ \hline
\end{tabular}
\label{TableS1}
\end{center}
\end{table}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-RP-5.jpg}
\caption{Variation of the inverse of the dielectric function $\varepsilon^{-1}$ with respect to $|\textrm{q+G}|$ for (a) Ba$_2$ZrS$_4$, (b) Ba$_3$Zr$_2$S$_7$, and (c) Ba$_4$Zr$_3$S$_{10}$, respectively. The red curve is obtained by fitting based on Eq. \ref{eqS1}. The mBSE calculated spectra with different k-meshes for (d) Ba$_2$ZrS$_4$, (e) Ba$_3$Zr$_2$S$_7$ and (f) Ba$_4$Zr$_3$S$_{10}$, respectively.}
\label{S5}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.60\textwidth]{mBSE-bse-comp.jpg}
\caption{Imaginary part (Im($\varepsilon$)) of the dielectric function for Ba$_3$Zr$_2$S$_7$ using BSE and mBSE.}
\label{S6}
\end{figure}
Here, in Figure~\ref{S6}, we show that the imaginary part of the dielectric function calculated with BSE@GW@PBE matches the one calculated with the model BSE (mBSE) method, where we have chosen PBE as the starting point. Therefore, we can say that the excitonic features (i.e., the first peak) are always retained by the mBSE approach. Notably, both calculations are performed using a 4$\times$4$\times$1 k-mesh with the same starting point.
\newpage
\section{Projected density of states (PDOS)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-RP-dos.jpg}
\caption{Partial density of states (pDOS) for (a) Ba$_2$ZrS$_4$ and (b) BaZrS$_3$. Orbital contribution of S and Zr in VBM and CBm, respectively for (c) Ba$_2$ZrS$_4$ and (d) BaZrS$_3$. }
\label{S7}
\end{figure}
\newpage
\section{Effective mass of electron (e) and hole (h)}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-band2.jpg}
\caption{Band structure of Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]).}
\label{S8}
\end{figure}
To compute the effective masses, we have obtained the band structure using the PBE $\epsilon_{xc}$ functional (see Figure \ref{S8}). Here, we have calculated the effective masses of the electron (e) and hole (h) along the symmetric path $\Gamma$ $\rightarrow$ Z for Ba$_2$ZrS$_4$. In the case of Ba$_3$Zr$_2$S$_7$ and Ba$_4$Zr$_3$S$_{10}$, the effective masses of e and h are calculated along $\Gamma$ $\rightarrow$ Z and Z $\rightarrow$ $\Gamma$, respectively. All the effective and reduced masses are tabulated in Table \ref{TableS2}. Note that, in Figure \ref{S8}, the observed flatbands along the high-symmetry path $\Gamma$ $\rightarrow$ X are along the z-direction. While calculating the effective masses, we have excluded the high-symmetry path along the z-direction.
\begin{table}[htbp]
\caption{Effective masses of electron, hole and reduced mass (in term of rest mass of electron (m$_\textrm{e}$)) for Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]) RP phases.}
\begin{center}
\begin{tabular}[c]{|c|c|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ & m$_{\textrm{e}}^{*}$ & m$_{\textrm{h}}^{*}$ & $\mu$ \\ \hline
Ba$_2$ZrS$_4$ & 0.18 & 0.43& 0.32 \\ \hline
Ba$_3$Zr$_2$S$_7$ & 0.23 & 1.85 & 0.26 \\ \hline
Ba$_4$Zr$_3$S$_{10}$ & 0.26 & 1.25 & 0.34 \\ \hline
\end{tabular}
\label{TableS2}
\end{center}
\end{table}
\newpage
\section{Calculation of deformation potential energy and elastic modulus}
According to deformation potential model the carrier mobility is defined as:
\begin{equation}
\begin{split}
\mu_{\textrm{DP}} = \frac{(8\pi)^{\frac{1}{2}}\hbar^4e\textrm{C}_{\textrm{3D}}}{3(m^*)^{5/2}(k_\textrm{B}\textrm{T})^{3/2}\textrm{E}_{1}^2}
\label{eqS2}
\end{split}
\end{equation}
where $k_{\textrm{B}}$, T, and m$^*$ correspond to the Boltzmann constant, the temperature (i.e., 300 K), and the effective mass, respectively. In the above equation, E$_1$ is the deformation potential constant along the y-direction for the electron or hole. It is given by:
\begin{equation}
\begin{split}
\textrm{E}_1 = \frac{\Delta \textrm{E}_\textrm{i}}{\Delta \textrm{l}/\textrm{l}_0}
\label{eqS3}
\end{split}
\end{equation}
where $\Delta \textrm{E}_\textrm{i}$ denotes the change in the energy, or shift in the position, of the VBM or CBm under uniaxial strain along the y-direction. l$_0$ is the lattice constant along the transport direction. $\Delta \textrm{l}$ denotes the change, or deformation, in the lattice constant l$_0$ on application of uniaxial strain. The elastic modulus C$_{3\textrm{D}}$ is computed using (E - E$_0$)/V$_0$ = C$_{3\textrm{D}}$($\Delta \textrm{l}/\textrm{l}_0)^2/2$, where E$_0$ and E are the total energies of the undeformed and deformed systems, respectively. V$_0$ represents the equilibrium volume of the system. Note that we have chosen a strain range from -1.0\% to +1.0\% to obtain the fitted values of C$_{3\textrm{D}}$ and E$_1$ (see Figure \ref{S10}).
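The quadratic fit that yields C$_{3\textrm{D}}$ can be sketched in a few lines of Python; the strain-energy data below are synthetic placeholders for the DFT total energies, so the script only demonstrates the fitting step.

\begin{verbatim}
import numpy as np

# Placeholder strain/energy data (strain from -1% to +1%); in practice
# E(s) comes from DFT total energies of the deformed cells.
s = np.linspace(-0.01, 0.01, 9)
V0 = 1000.0                     # equilibrium cell volume (A^3), placeholder
E = 0.5 * 1.23 * V0 * s**2      # synthetic parabola with C_3D = 1.23 eV/A^3

# Fit (E - E0)/V0 = C_3D * s^2 / 2: read off the quadratic coefficient
coeff = np.polyfit(s, (E - E.min()) / V0, 2)
print(f"C_3D = {2.0 * coeff[0]:.2f} eV/A^3")
\end{verbatim}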
\begin{figure}[h!]
\centering
\includegraphics[width=0.60\textwidth]{strain-band.jpg}
\caption{Band gap variation with strain along b- and c-direction for Ba$_2$ZrS$_4$ using PBE $\epsilon_{\textrm{xc}}$.}
\label{S9}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{SI-RP-10.jpg}
\caption{Deformation potential and elastic modulus of RP phases.}
\label{S10}
\end{figure}
\newpage
\section{Strength of electron-phonon coupling}
For a qualitative idea of the electron-phonon coupling strength, we have calculated the specific free volume.
It has been reported in the literature that the electron-phonon coupling depends on the free volume (unoccupied space)~\cite{he2018unravelling}. The lattice free volume is defined as the difference between the unit cell volume and the volume of the constituent ions. Following this, the ratio between the unoccupied volume and the total volume is defined as the specific free volume (see Table~\ref{TableS3}). From Table \ref{TableS3}, we can clearly see that the specific free volume is quite large, and hence the electron-phonon coupling is expected to be prominent and important in these systems.
\begin{table}[h!]
\caption{Specific free volume of Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ (n=[1-3]).}
\begin{center}
\begin{tabular}[c]{|c|c|} \hline
Ba$_{\textrm{n+1}}$Zr$_\textrm{n}$S$_{\textrm{3n+1}}$ & Specific free volume (\%)\\ \hline
Ba$_2$ZrS$_4$ & 94.33 \\ \hline
Ba$_3$Zr$_2$S$_7$ & 94.17 \\ \hline
Ba$_4$Zr$_3$S$_{10}$ & 94.10 \\ \hline
\end{tabular}
\label{TableS3}
\end{center}
\end{table}
\section{Introduction}
The inner core region of relaxed clusters of galaxies shows a complex structure of different gas phases.
Most of the gas mass is collisionally ionised and cools via thermal bremsstrahlung and X-ray line emission, settling onto the central Brightest Cluster Galaxy (BCG).
The low temperature and high density of the cool core are indicative of a short radiative cooling time of $<1\rm\, Gyr$ (e.g. \citealt{1994ARA&A..32..277F, 2006MNRAS.366..417F}; \citealt{2004ApJ...612..817B}; \citealt{2006ApJ...648..164M}; \citealt{2014MNRAS.444.1236P}; \citealt{2019MNRAS.485.1757L}).
This predicts a massive cooling flow in the most massive clusters, such as A1835 and the Phoenix cluster (e.g. \citealt{1996MNRAS.283..263A}; \citealt{2012Natur.488..349M, 2014ApJ...784...18M}).
However, spectral evidence from the Reflection Grating Spectrometers (RGS) onboard \textit{XMM-}Newton only supports a mild cooling rate in cool cores, typically less than 10 per cent of the predicted rate in the absence of heating (e.g. \citealt{2001A&A...365L..99K}; \citealt{2003ApJ...590..207P}; \citealt{2019MNRAS.485.1757L}).
This requires a heating process, which in turn needs an energy source and an efficient mechanism to transport the energy throughout the core.
The central AGN interacts with its host environment.
For cool core clusters at a low Eddington fraction, AGN feedback operates in the kinetic mode, where gas accretion generates powerful relativistic jets which inflate bubbles (\citealt{2012ARA&A..50..455F}; \citealt{2012NJPh...14e5023M}).
Bubbles rise buoyantly with a mechanical power similar to the cooling rate in the absence of heating (e.g. \citealt{2002MNRAS.332..729C}; \citealt{2004ApJ...607..800B}; \citealt{2006MNRAS.373..959D}; \citealt{2006ApJ...652..216R}; \citealt{2012MNRAS.421.1360H}).
The temperature map of clusters is roughly isotropic, which suggests heating occurs away from the jet axis.
The mode of such energy transfer is still under debate.
While gravity waves can propagate energy azimuthally, they cannot transport it radially.
An alternative mode of powerful sound waves can provide the required velocity for radial energy transport (\citealt{2003MNRAS.344L..43F,2017MNRAS.464L...1F}), but a suitable energy dissipation mechanism needs to be developed.
Turbulent heating of the gas has also been of interest for this purpose.
The Hitomi Soft X-ray Spectrometer (SXS) made an accurate measurement of the level of turbulence at 164$\pm$10 km\,$\rm s^{-1}$ in the Perseus cluster (\citealt{2016Natur.535..117H}).
The energy density of isotropic turbulence is only 4 per cent of the thermal energy density there, which is too low for the turbulence to reach the full cooling core.
Turbulence alone is therefore insufficient to offset radiative cooling.
It is possible to measure an upper limit to the level of turbulence using the RGS in many X-ray peaked clusters (e.g. \citealt{2010MNRAS.402L..11S, 2011MNRAS.410.1797S,2013MNRAS.429.2727S}).
Since the RGS is a slitless spectrometer, the spectral lines are broadened by the spatial extent of the source in addition to other broadening processes.
The spatial broadening follows the RGS dispersion law,
$\Delta\lambda = 0.138\Delta\theta/m$\,\AA,
where $\Delta\lambda$ is the broadening in wavelength, $\Delta\theta$ is the angular offset from the central source in arcmin and $m$ is the spectral order.
It contributes significantly to the total line width in nearby sources (e.g. \citealt{2015A&A...575A..38P}).
To obtain a conservative limit, this artificial broadening can be corrected for by using the surface brightness profile of the European Photon Imaging Camera (EPIC) image (e.g. \citealt{2018MNRAS.480.4113P}; \citealt{2018MNRAS.478L..44B}).
A tight 90 per cent upper limit of 244 km\,$\rm s^{-1}$ is found in A1835 and 246 km\,$\rm s^{-1}$ in A2204 (\citealt{2018MNRAS.478L..44B}).
These limits are similar to the level of turbulence in the Perseus cluster found by \citet{2016Natur.535..117H}.
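For intuition, the dispersion law above is easily translated into the velocity width mimicked by a given source extent; the short Python sketch below (our own, purely illustrative) performs this conversion.

\begin{verbatim}
C_KMS = 2.99792458e5  # speed of light (km/s)

def rgs_broadening_angstrom(dtheta_arcmin, order=1):
    """Slitless spatial broadening from the RGS dispersion law (Angstrom)."""
    return 0.138 * dtheta_arcmin / order

def equivalent_velocity(dtheta_arcmin, wavelength_A, order=1):
    """Velocity width (km/s) mimicked by a source extent at a given line."""
    return C_KMS * rgs_broadening_angstrom(dtheta_arcmin, order) / wavelength_A

# A 0.5 arcmin extent at a 15 A line (first order) mimics ~1380 km/s
print(f"{equivalent_velocity(0.5, 15.0):.0f} km/s")
\end{verbatim}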
Another interesting feature is the presence of H$\alpha$ emission in most cool core clusters. Many studies of the inner cluster core have shown that H$\alpha$ emission in the form of filaments is spatially aligned with the soft X-ray emitting gas and the two gas phases are likely mixing (e.g. \citealt{2003MNRAS.344L..48F,2006MNRAS.366..417F,2016MNRAS.461..922F}; \citealt{2005MNRAS.363..216C}).
No strong evidence of significant cooling below $\sim$1 keV suggests that the soft X-ray component is likely not cooling radiatively, but is mixing and powering the observed optical/IR emission due to the atomic and partially-ionised gas (\citealt{2003MNRAS.344L..48F}).
This situation can occur if the hot X-ray component interpenetrates the cold H$\alpha$ nebula and creates fast and energetic particles (\citealt{2011MNRAS.417..172F}).
The fast particles can then heat and excite the cold gas, powering the observed nebulosity (\citealt{2009MNRAS.392.1475F}).
In a previous work, we have shown that the thermal energy of the radiative cooling gas is sufficient as the power source for the optical/UV nebula in clusters with a cooling rate below $\sim$10 $\rm M_{\odot}\rm\,yr^{-1}$, but the most luminous clusters are likely powered by hotter gas or otherwise (\citealt{2019MNRAS.485.1757L}).
\citet{2013MNRAS.436..526C} argued that buoyant bubbles stretch fluid elements to form gaseous filaments with amplified magnetic field.
The release of magnetic energy then allows dissipation in the filaments.
Alternatively, H$\alpha$ filaments can also be powered by cosmic rays (\citealt{2018ApJ...858...64R}).
The origin and fate of the molecular gas is another mystery.
A massive cold molecular gas reservoir is often present in the core, seen via CO lines from a component at $\sim$50 K and/or NIR H$_2$ lines at $\sim$2000 K (\citealt{2001MNRAS.328..762E, 2002MNRAS.337...49E}; \citealt{2003A&A...412..657S}; \citealt{2009MNRAS.395.1355W}; \citealt{2019A&A...631A..22O}; \citealt{2019MNRAS.490.3025R}).
Star formation of up to hundreds of $\rm M_{\odot}\rm\,yr^{-1}$ in the most massive clusters is a major consumer of the molecular gas deposit.
At the higher rates, the observed molecular gas reservoir will be depleted by star formation in $10^8$-$10^9$ yr if not replenished (e.g. \citealt{2018ApJ...853..177P}).
On the other hand, this timescale is comparable to the central radiative cooling time, which suggests the molecular gas cools from the hot X-ray atmosphere (e.g. \citealt{2019MNRAS.490.3025R}).
The molecular gas filaments have a smaller spatial extent and are often embedded in the H$\alpha$ nebula and hence the soft X-ray gas (e.g. \citealt{2019A&A...631A..22O}; \citealt{2019MNRAS.490.3025R}).
Surprisingly, the RGS spectra have revealed that the molecular gas mass is comparable to the X-ray gas mass emitting below 1 keV in a sample of nearby luminous clusters, e.g. 2A0335+096, A2052, A3581 (\citealt{2020MNRAS.497.1256L}).
These two gas phases are likely intermingled and the structural integrity is held by magnetic fields.
Both of our targets, RXCJ1504 and A1664, are remarkably luminous in both the X-ray and optical bands, and possess a massive molecular gas reservoir.
RXCJ1504 is one of the most massive low redshift cool core clusters at $z=0.2153$ with $M_{500}=1.25\times 10^{15}\rm M_{\odot}$ (\citealt{2011A&A...534A.109P}), a high X-ray bolometric luminosity of $4.1\times10^{45}\rm\,erg\,s^{-1}$ and a classical cooling rate\footnote{In this work, we use the definition of the classical cooling rate as the ratio of the gas mass enclosed in a radius with a radiative cooling time of 7.7 Gyr to that cooling time (see e.g. \citealt{2018ApJ...858...45M}).} of 1500-1900 $\rm M_{\odot}\rm\,yr^{-1}$ (\citealt{2005ApJ...633..148B}).
\citet{2011A&A...525L..10G} discovered a minihalo of 140 kpc in radius at the centre of the cluster confined to the cool core.
This suggests a tight connection between the X-ray emitting cool core and the relativistic plasma.
It also has an H$\alpha$ luminosity of $3.2\times10^{43}\rm\,erg\,s^{-1}$, making it one of the most luminous optical nebulae.
The observed UV flux indicates a strong star formation rate of 130 $\rm M_{\odot}\rm\,yr^{-1}$ (\citealt{2010MNRAS.406..354O}).
The inner 5 kpc of the cool core contains most gas from the massive molecular gas reservoir of 1.9$\pm$0.1$\times10^{10}$ $\rm M_{\odot}$ (\citealt{2018ApJ...863..193V}).
The kinematics of the molecular gas is complex as revealed by ALMA CO observations (e.g. \citealt{2018ApJ...863..193V}).
\citet{2018ApJ...863..193V} infer a turbulent velocity of 335$\pm$15 km\,$\rm s^{-1}$ for that gas, and the central molecular filament shows a velocity range of $\sim$260$\pm$11 km\,$\rm s^{-1}$ in RXCJ1504.
A dynamically young gas structure also shows an offset velocity of $\sim$\,-250 km\,$\rm s^{-1}$ from the rest of the gas in the BCG.
A1664 is one of the first cool core clusters in which CO emission was observed (e.g. \citealt{2001MNRAS.328..762E}).
It has a redshift of $z=0.1283$ with $M_{500}=4.06\times 10^{14}\rm M_{\odot}$ (\citealt{2011A&A...534A.109P}).
It has a classical cooling rate of 100$\pm$10 $\rm M_{\odot}\rm\,yr^{-1}$ (\citealt{2018ApJ...858...45M}), and hosts a bright H$\alpha$ nebula with a luminosity of $1.5\times10^{42}\rm\,erg\,s^{-1}$ (\citealt{2006MNRAS.371...93W}).
The star formation rate is estimated to be 14 $\rm M_{\odot}\rm\,yr^{-1}$ in IR or 4.3 $\rm M_{\odot}\rm\,yr^{-1}$ in FUV (\citealt{2010ApJ...719.1619O}).
The BCG has a total molecular gas mass of 1.1$\pm$0.1$\times\,10^{10}$ $\rm M_{\odot}$ (\citealt{2014ApJ...784...78R}).
The molecular gas is also seen disturbed within 10 kpc of the core (\citealt{2014ApJ...784...78R}).
The CO(1-0) and CO(3-2) lines are well resolved into two Gaussian components with a velocity difference of $\sim$590 km\,$\rm s^{-1}$.
On a larger scale of $\sim$50 kpc, cold fronts are observed in the X-ray atmosphere produced by sloshing (\citealt{2019ApJ...875...65C}), and it is possible that core sloshing can affect lower temperature gas.
If the X-ray and molecular gas are related, they are likely sharing a similar velocity structure (for theoretical modelling, see e.g. \citealt{2017MNRAS.466..677G}).
At the present time, the \textit{XMM-}Newton RGS can place the most accurate constraint on the velocity of the soft X-ray emitting gas.
The dispersive nature of the RGS means the spectral resolution is improved with lower photon energies, and surpasses the Hitomi/XRISM resolution below 1 keV (12.4 \AA) for point-like and extended sources below 1 arcmin of spatial extent.
In this work, we present recent deep \textit{XMM-}Newton/RGS observations of these two X-ray luminous clusters: RXCJ1504.1-0248 (RXCJ1504) and Abell 1664 (A1664).
We measure radiative cooling rates and place constraints on turbulent velocity at the 90 per cent confidence level, which is an important proxy for heat propagation.
The structure of the paper is as follows.
Section \ref{sec:2} provides observational details of the clusters and the data reduction procedures.
Section \ref{sec:3} introduces the spectral models used to measure the cooling rate and place the upper limit on turbulent velocity.
Section \ref{sec:4} discusses the implications of our results and we try to correct for intrinsic absorption of the clusters.
We assume the following cosmological parameters: $H_{0} = 73 \rm \ km\,s^{-1}\,Mpc^{-1}$, $\Omega_{\rm M} = 0.27$, $\Omega_{\rm \Lambda} = 0.73$.
\section{Observations and Data reduction}
\label{sec:2}
The \textit{XMM-}Newton observatory observed each of the clusters RXCJ1504 and A1664 for two orbits (PI Fabian).
The observational details are listed in Table \ref{tab:1}.
RXCJ1504 was observed between 15-Aug-2019 and 17-Aug-2019 and between 09-Feb-2020 and 10-Feb-2020. The offset of the roll-angle of the pointings between the observations is 171.65 degrees.
A1664 was observed between 28-Jan-2020 and 29-Jan-2020 and between 30-Jan-2020 and 31-Jan-2020.
The offset of the roll-angle of the pointings is 0.65 degrees.
Here we used data from the RGS and the EPIC onboard \textit{XMM-}Newton.
We follow the standard data reduction procedure with the latest \textit{XMM-Newton} Science Analysis System v 17.0.0.
We extract the first and second order RGS spectra by the SAS task \textit{rgsproc}.
The second order spectra possess twice the spectral resolution and hence are used for turbulent velocity measurements.
We set the \textit{xpsfincl} mask to include 90$\%$ of the point spread function.
This is equivalent to a narrow 0.9 arcmin region.
We use template background files based on count rates in CCD 9 to produce background-subtracted spectra.
To achieve the highest S/N ratio, the RGS 1 and 2 spectra of both observations are stacked using the task \textit{rgscombine} and then processed by the task \textit{trafo} to be analysed by SPEX.
We check that the pointing of both observations is consistent to avoid spurious broadening of emission lines.
The spatial broadening of the RGS spectra is corrected by the surface brightness profile of the MOS image.
The MOS cameras are aligned with the associated RGS detectors and have slightly better spatial resolution than the pn detectors.
We only use MOS1 images from the earlier observation for each object (0840580101 for RXCJ1504 and 0840580301 for A1664).
The images are produced by the SAS task \textit{emproc}.
We extract the surface brightness profiles in the 0.5-1.8 keV energy band using the task \textit{rgsvprof}.
We used SPEX version 3.05.00 for spectral analysis with its default proto-Solar abundances of \citet{2009M&PSA..72.5154L} and ionisation balance (\citealt{2017A&A...601A..85U}).
The spectral fitting uses C-statistic (C-stat) minimisation, which is appropriate for low count statistics and approaches $\chi^2$ in the high-count limit.
We adopt the 68.3 per cent confidence level (1$\sigma$ uncertainty, $\Delta C=1$) for measurements.
For upper/lower limits, we only quote the 90 per cent confidence level ($\Delta C=2.71$) uncertainty, unless otherwise stated.
\begin{table*}
\centering
\caption{Observational details for RXCJ1504.1-0248 and A1664.}
\label{tab:1}
\begin{tabular}{ c c c c c c c c}
\hline
\hline Name & Redshift & $D_L$ (Mpc) & Scale (kpc/arcsec) & Obsid & Total RGS clean time (ks) & $N_{\rm H} (10^{20}\rm\,cm^{-2})$ \\
\hline
RXCJ1504.1-0248 & 0.21530 & 1030 & 3.36 & 0840580101/201 & 220 & 8.34 \\
A1664 & 0.12832 & 579 & 2.20 & 0840580301/401 & 222 & 12.8 \\
\hline
\end{tabular}
The redshifts are taken from the NED database (https://ned.ipac.caltech.edu/).
The total Galactic column density $N_{\rm H}$ is taken from the UK Swift Science Data Centre (see http://www.swift.ac.uk/analysis/nhtot/; \citealt{2005A&A...440..775K}; \citealt{2013MNRAS.431..394W}).\\
\end{table*}
\section{Results}
\label{sec:3}
The stacked RGS spectra are binned by a factor of 3 to be consistent with the spectral resolution and preserve most spectral information.
We fit the first order spectra over the 7-28 \AA\,band where the background is lower than the continuum.
We include the 7-20 \AA\, band for the second order spectra to use the most spectral information.
We use the collisional ionisation equilibrium component (\textit{cie}) and the cooling flow component (\textit{cf}) available in SPEX to construct our cooling flow models as described in \citet{2019MNRAS.485.1757L}. The \textit{cie} component represents a plasma with a free temperature $T$ and emission measure $EM=n_{\rm e}n_{\rm H}V$, where $n_{\rm e}$ and $n_{\rm H}$ are electron and proton densities and $V$ is the volume of the emitting gas.
We use the default value of $n_{\rm e}$ in SPEX.
It is typically used in single-temperature and two-temperature models to describe a cluster.
The \textit{cf} component consists of a set of \textit{cie} components and calculates the differential emission measure to match that of the required cooling rate
\begin{equation}
\label{equ:0}
\frac{d EM(T)}{d T}= \frac{5\dot{M}k}{2\mu m_{\rm H}\Lambda (T)}
\end{equation}
\noindent where $k$ is the Boltzmann constant, $\mu$ is the mean particle weight, $m_{\rm H}$ is the proton mass and $\Lambda (T)$ is the cooling function.
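For illustration, Eq. \ref{equ:0} can be evaluated numerically; the Python sketch below does so with the cooling function crudely approximated by thermal bremsstrahlung alone ($\Lambda \approx 1.4\times10^{-27}\sqrt{T}\rm\,erg\,cm^{3}\,s^{-1}$, neglecting line emission), so it is an order-of-magnitude sketch rather than the SPEX implementation.

\begin{verbatim}
import numpy as np

KB   = 1.380649e-16   # erg/K
MH   = 1.6726e-24     # g
MU   = 0.6            # mean particle weight (assumed)
MSUN = 1.989e33       # g
YR   = 3.156e7        # s

def lambda_brem(T):
    """Free-free cooling function (erg cm^3 s^-1); line emission neglected."""
    return 1.4e-27 * np.sqrt(T)

def dEM_dT(mdot_msun_yr, T):
    """Differential emission measure d(EM)/dT (cm^-3 K^-1) of an
    isobaric cooling flow."""
    mdot = mdot_msun_yr * MSUN / YR
    return 5.0 * mdot * KB / (2.0 * MU * MH * lambda_brem(T))

# e.g. 180 Msun/yr of gas cooling through kT = 2 keV (T ~ 2.3e7 K)
print(f"{dEM_dT(180.0, 2.32e7):.2e} cm^-3 K^-1")
\end{verbatim}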
The maximum temperature of the \textit{cf} component is coupled to \textit{cie} component and we assume both components have the same abundances.
To reduce the number of free parameters, we fix the Ne/Fe and Mg/Fe ratios for both clusters.
We set Ne/Fe=0.8 and Mg/Fe=0.75 in RXCJ1504 and Ne/Fe=Mg/Fe=0.6 in A1664.
These ratios are measured from a 1\textit{cie}+1\textit{cf} model (Model 2) and do not change with additional \textit{cf} components.
The abundances of the other elements are coupled to Fe.
The \textit{cie} and \textit{cf} components are modified by redshift, cold Galactic absorption with solar abundances and spatial broadening (\textit{lpro}; \citealt{2015A&A...575A..38P}).
The \textit{lpro} component uses the surface brightness profile as the input.
The scale factor $s$ and the wavelength shift $\Delta\lambda$ are the free parameters.
The scale factor fits the amount of line broadening, and the wavelength shift corrects the centroid of the emission.
\subsection{Cooling flow analysis}
To construct the cooling flow models, we first model the hot plasma ($>2$ keV) in the multi-phased intracluster medium (ICM) with a \textit{cie} component (Model 1).
Three cooling flow models are then considered combining \textit{cie} and \textit{cf} component: complete (one-stage), one-stage with a free minimum temperature and two-stage models.
We define the `complete' cooling rate as the rate measured from the \textit{cie} temperature down to the minimum temperature of the \textit{cf} component of 0.01 keV (Model 2).
This minimum temperature of 0.01 keV is the lowest possible value allowed in SPEX.
This one-stage cooling flow model is often sufficient for the spectra of clusters and groups with low statistics and a low \textit{cie} temperature (e.g. \citealt{2019MNRAS.485.1757L}), but not necessarily for RXCJ1504.1-0248 and A1664.
We then free the minimum temperature of the \textit{cf} component to include the possibility that the ICM stops cooling radiatively in X-rays at a higher temperature (Model 3).
This also leads to a `two-stage' cooling flow model that has two \textit{cf} components, where the cooling rates are measured between the \textit{cie} temperature and 0.7 keV and between 0.7 keV and 0.01 keV, respectively.
We refer to the cooling rate between 0.7 keV and 0.01 keV as the residual cooling rate in this work (Model 4).
In a previous work, \citet{2019MNRAS.485.1757L} discussed the effect of the transition temperature between two cooling flow components on the cooling rates in the two-stage model.
For a high transition temperature up to 0.9 keV, the cooling rate above the transition temperature is likely increased by 20 per cent.
For a low transition temperature, the cooling rate is likely decreased by 10 per cent, while the residual cooling rate is over-predicted due to a narrow temperature range.
We found that the transition temperature of 0.7 keV is suitable for fitting the {Fe\,\scriptsize{XVII}} lines and their forbidden-to-resonance line ratio.
It is also consistent with the one-stage cooling flow model with a free minimum temperature.
The cooling rates, \textit{cie} temperatures and O and Fe abundances of three cooling flow models are detailed in Table~\ref{tab:RXCJcf} and~\ref{tab:A1664cf}.
We show the stacked RGS spectra in Fig.~\ref{fig:RXCJ1504} and ~\ref{fig:A1664} with the best fit cooling flow models.
In RXCJ1504, we find that both the one-stage cooling flow model with a free minimum temperature (Model 3) and the two-stage model (Model 4) yield the minimum C-stat for the same number of degrees of freedom (DoF).
The transition temperature of the two-stage model is consistent with the free minimum temperature of the one-stage model.
The other fit parameters are also consistent between these two models.
We hence conclude a cooling rate down to 0.7 keV of 180$\pm$40 $\rm M_{\odot}\rm\,yr^{-1}$ and a residual cooling rate below 0.7 keV of less than 53 $\rm M_{\odot}\rm\,yr^{-1}$ at the 90 per cent confidence level.
The {Fe\,\scriptsize{XVII}} resonance line is seen in the spectrum and mixed with a broad feature at 15 \AA\,in rest wavelength (18.3 \AA\,in observed wavelength).
This indicates a cooling flow is present at around 0.7 keV.
There are several possibilities for the nature of the broad feature.
First, the {Fe\,\scriptsize{XVII}} resonance line is suppressed in the line of sight and re-emitted from the outer region.
The spatial extent of the gas emitting the {Fe\,\scriptsize{XVII}} resonance line is then larger, which results in a broader line at 15 \AA\, because the RGS detectors are slitless.
Second, gaseous neutral iron in the interstellar medium has two deep and broad absorption edges at 17.2 \AA\ and 17.5 \AA.
However, most iron is in dust, which has a different edge shape and position.
A spectral modelling of the iron edge which does not account for dust might introduce some systematic effects including a spurious bump around 18 \AA\,(see, e.g., \citealt{2006ApJ...648.1066J}; \citealt{2013A&A...551A..25P}).
The {O\,\scriptsize{VII}} triplet is not observed.
Due to the high continuum emission of the hot gas, the mass of cold gas at 0.2 keV is difficult to detect.
The distinction between the complete and two-stage models is statistically significant.
While some fit parameters are consistent such as the \textit{cie} temperature and metallicities,
the two-stage model gives 3.6 times higher cooling rate above 0.7 keV than the complete cooling rate.
By comparison in \citet{2019MNRAS.485.1757L}, we find such a ratio of 2.2, 1.7, 3.8 in A262, Centaurus and M87, respectively, all of which show a significant statistical improvement in the two-stage model.
Given the 20-40 per cent uncertainty on the measurements,
this ratio is broadly consistent with other nearby cool core clusters (\citealt{2019MNRAS.485.1757L}).
In A1664, we find that the cooling flow models are improved by using a second line broadening component for the \textit{cf} components.
The second \textit{lpro} component uses the same surface brightness profile, and we fit its scale factor and wavelength shift in the same way as for the \textit{lpro} component of the \textit{cie}.
The extra line broadening component improves the C-stat by 5 in comparison with using just one broadening component in the complete cooling model.
We find that all the cooling flow models provide a similar fit to the spectrum with the same C-stat and consistent cooling rate.
As a result, we only report a complete, one-stage, cooling rate of 34$\pm$6 $\rm M_{\odot}\rm\,yr^{-1}$ from the current data (Model 2).
\begin{figure}
\includegraphics[width=1\columnwidth]{RXCJ1504_cf_spectrum.png}
\caption{Stacked first order RGS spectrum of RXCJ1504.1-0248 in rest wavelength.
The RGS spectrum is shown in black and the best fit two-stage cooling model is seen in blue.
The background is seen in grey.
Strong emission lines are labelled in red.
The spectrum is overbinned by a factor of 6 for plotting purposes.
\label{fig:RXCJ1504}
}
\end{figure}
\begin{figure}
\includegraphics[width=1\columnwidth]{A1664_cf_spectrum.png}
\caption{As Fig.~\ref{fig:RXCJ1504}, the stacked RGS spectrum of A1664 with the best fit complete cooling flow model in red in rest wavelength.
\label{fig:A1664}
}
\end{figure}
\begin{table*}
\centering
\caption{XMM/RGS fit parameters for RXCJ1504.}
\label{tab:RXCJcf}
\begin{tabular}{l c c c c c c r}
\hline
\hline
& Model 1 & Model 2 & Model 3 & Model 4\\
\hline
Spectral components& 1\textit{cie} & 1\textit{cie}+1\textit{cf} & 1\textit{cie}+1\textit{cf} & 1\textit{cie}+2\textit{cf}\\
C-stat/DoF & 841/682 & 833/681 & 818/680 & 818/680 \\
Fe/H & 0.68$\pm$0.06&0.74$\pm$0.07& 0.76$\pm$0.08& 0.76$\pm$0.07 \\
O/H & 0.41$\pm$0.06&0.39$\pm$0.07& 0.44$\pm$0.07& 0.44$\pm$0.07 \\
$T_{\rm H}$ (keV) & 5.4$\pm$0.2 &5.6$\pm$0.2 & 6.3$\pm$0.4 & 6.3$\pm$0.4, 0.7 \\
$T_{\rm min}$ (keV) & n/a &0.01 & 0.7$\pm$0.1 & 0.7, 0.01 \\
$\dot{M}_{\rm H}$ ($\rm M_{\odot}\rm\,yr^{-1}$)& n/a& 50$\pm$20& 190$\pm$60& 180$\pm$40 \\
$\dot{M}_{\rm C}$ ($\rm M_{\odot}\rm\,yr^{-1}$)& n/a& n/a & n/a & $<$53 \\
\hline
\end{tabular}
\\
Model 1 is the single-temperature (1\textit{cie}) model, model 2 is the complete cooling (1\textit{cie}+1\textit{cf}) model, model 3 is the one-stage model with a free minimum temperature and model 4 is the two-stage (1\textit{cie}+2\textit{cf}) model.
$T_{\rm H}$ and $T_{\rm min}$ are the \textit{cie} temperature and the minimum temperature of the associated \textit{cf} component, respectively. $\dot{M}_{\rm H}$ is the cooling rate between $T_{\rm H}$ and $T_{\rm min}$. $\dot{M}_{\rm C}$ is the residual cooling rate between 0.7 and 0.01 keV in the two-stage model.
\end{table*}
\begin{table*}
\centering
\caption{XMM/RGS fit parameters for A1664. Models, parameters and labels are the same as Table \ref{tab:RXCJcf}.}
\begin{tabular}{l c c c c c c r}
\hline
\hline
& Model 1 & Model 2 & Model 3 & Model 4\\
\hline
Spectral components& 1\textit{cie} & 1\textit{cie}+1\textit{cf} & 1\textit{cie}+1\textit{cf} & 1\textit{cie}+2\textit{cf}\\
C-stat/DoF & 929/680 & 909/677 & 909/676 & 909/676 \\
Fe/H & 0.37$\pm$0.03&0.49$\pm$0.05& 0.49$\pm$0.06& 0.50$\pm$0.06 \\
O/H & 0.23$\pm$0.05&0.28$\pm$0.06& 0.58$\pm$0.06& 0.29$\pm$0.06 \\
$T_{\rm H}$ (keV) & 1.95$\pm$0.07&2.2$\pm$0.1 & 2.2$\pm$0.1 & 2.1$\pm$0.1, 0.7 \\
$T_{\rm min}$ (keV) & n/a &0.01 & $<$0.7 & 0.7, 0.01 \\
$\dot{M}_{\rm H}$ ($\rm M_{\odot}\rm\,yr^{-1}$)& n/a& 34$\pm$6& 34$\pm$6& 40$\pm$20 \\
$\dot{M}_{\rm C}$ ($\rm M_{\odot}\rm\,yr^{-1}$)& n/a& n/a & n/a & 30$\pm$10 \\
\hline
\label{tab:A1664cf}
\end{tabular}
\end{table*}
\subsection{Turbulence}
\subsubsection{Spatial broadening}
The observed line broadening in RGS spectra is the sum of thermal broadening, turbulent motion and spatial broadening.
Thermal broadening is already calculated in the thermal components such as \textit{cie} or \textit{cf}.
To place constraints on the turbulent velocity, we need to estimate the level of spatial broadening.
Since the scale of the hot ICM is much larger than that of the cool core, using the full spatial profile over-predicts the contribution of spatial broadening and hence underestimates the turbulence.
We follow the method used in \citet{2018MNRAS.478L..44B} and \citet{2018MNRAS.480.4113P} for a more accurate estimate of spatial broadening due to the cool gas.
The SPEX task \textit{rgsvprof} gives a cumulative flux of the surface brightness profile of the MOS images as a function of wavelength.
This can be inverted into a Gaussian shaped profile as expected from the image.
Such a profile can be modelled by the sum of three Gaussians.
The central narrowest Gaussian represents the coolest gas in the core.
The bremsstrahlung continuum from the hot ICM is seen in the broadest outer Gaussian.
The remaining intermediate Gaussian provides the transition between the ICM and the cool core.
As we try to measure the turbulence in the cool core, the central and intermediate Gaussians are the relevant components in the estimation of spatial broadening.
The surface brightness profiles of RXCJ1504.1-0248 and A1664 are seen in Fig. ~\ref{fig:Surface_brightness}.
We find that the profile of A1664 is skewed and the asymmetry is seen in the DETY direction of the MOS 1 image.
The separation between the centre of the central and intermediate Gaussian is 0.045$\pm 0.001$ \AA.
From the RGS dispersion law, such a wavelength separation corresponds to a physical separation of 70 kpc.
This means the intermediate Gaussian component is indeed at the rim of the cool core.
We reconstruct two profiles of cumulative flux that can be used in SPEX, which include either using the central Gaussian alone or using both the central and intermediate Gaussians.
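The three-Gaussian decomposition is a standard least-squares fit; a minimal Python sketch is given below, with synthetic placeholder data standing in for the \textit{rgsvprof} output.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sig):
    return a * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def three_gauss(x, a1, m1, s1, a2, m2, s2, a3, m3, s3):
    # central (cool core) + intermediate (rim) + broad (hot ICM)
    return (gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)
            + gauss(x, a3, m3, s3))

# x: cross-dispersion offset (arcmin), sb: surface brightness; both
# would come from the extracted profile and are placeholders here.
x  = np.linspace(-2.5, 2.5, 200)
sb = three_gauss(x, 1.0, 0.0, 0.1, 0.4, 0.3, 0.4, 0.2, 0.0, 1.5)

p0 = [1.0, 0.0, 0.1, 0.3, 0.2, 0.5, 0.1, 0.0, 1.5]  # initial guesses
popt, _ = curve_fit(three_gauss, x, sb, p0=p0)
print("central sigma = %.3f arcmin" % abs(popt[2]))
\end{verbatim}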
\begin{figure*}
\includegraphics[width=1\columnwidth]{RXCJ1504_SB_profile.png}
\includegraphics[width=1\columnwidth]{A1664_SB_profile.png}
\caption{Left panel: The surface brightness profile of RXCJ1504 (Data) and the components of three-Gaussian fit. Right panel: The surface brightness profile and Gaussian components of A1664.
\label{fig:Surface_brightness}
}
\end{figure*}
\subsubsection{Turbulent velocity measurements}
We simultaneously fit the first and the second order spectra to measure the turbulent velocity.
The observed spatial profile is replaced by the profiles reconstructed from the Gaussian approximations in the \textit{lpro} component.
To conserve the RGS dispersion law, we then set the scale factor of the \textit{lpro} to $s=1$ for the first order and $s=0.5$ for the second order spectra.
We also fit the wavelength shift parameter in the \textit{lpro} component to adjust for redshift.
We use the single-temperature (1\textit{cie}) model and fit the micro-turbulent velocity ($v_{\rm mic}$) of the \textit{cie} components.
The 1-dimensional turbulent velocity is then $v_{\rm 1D}$=$v_{\rm mic}$/$\sqrt{2}$.
The fit parameters between the two sectors (first and second order spectra) are then coupled.
We summarise the velocity limits in Table~\ref{tab:turbulence}.
The total line width due to turbulence and spatial broadening is calculated by using the full spatial profile and setting the scale factor to 0.
The most accurate velocity limit is measured by simultaneously fitting the first and second order spectra.
We find that correcting the spatial broadening both using the central Gaussian alone and using both the central and intermediate Gaussians give consistent velocity limits.
We report that the best 90 per cent upper limits for RXCJ1504 and A1664 are both about 300 km\,$\rm s^{-1}$.
\begin{table*}
\centering
\caption{Turbulent velocity limits for RXCJ1504 and A1664.}
\label{tab:turbulence}
\begin{tabular}{ c c c c c c c c c}
\hline
\hline
& & 1st order spectrum & & & 1st and 2nd order spectra& \\
& & Total width & Central & Central $\&$ intermediate& Total width & Central & Central $\&$ intermediate\\
\hline
RXCJ1504 &v$_{\rm 1D}$ (km\,$\rm s^{-1}$) & 600$\pm$100 & 500$\pm$100 & $<$260& 550$\pm$90 & 300$\pm$100 & $<$310\\
&C-stat/DoF & 840/682 & 839/682 & 839/682 & 3111/2093 & 3082/2093 & 3087/2093 \\
\hline
A1664 &v$_{\rm 1D}$ (km\,$\rm s^{-1}$) & 800$\pm$200 & $<$530 & $<$420 & 700$\pm$100 & $<$300 & $<$320 \\
&C-stat/DoF & 941/680 & 931/680 & 938/680 & 2934/2089 & 2919/2089 & 2941/2089 \\
\hline
\end{tabular}
\\
The columns of total width represent the line width measurements without the correction of line broadening.
The next column 'Central' is the turbulent velocity measured by correcting the spatial broadening using the central Gaussian only.
The velocity limits in the last column 'Central $\&$ intermediate' are corrected by using both the central and intermediate Gaussians in the spatial profile.
\end{table*}
\section{Discussion}
\label{sec:4}
\subsection{Soft X-ray and cooler gas}
It is possible to achieve a measurement of the cooling rate at 3$\sigma$ or better with the two-stage model (Model 4) for many nearby, $z<0.01$, clusters (\citealt{2019MNRAS.485.1757L}), but only for a few at redshifts similar to our targets or higher (e.g. \citealt{2015A&A...580A...6T}; \citealt{2018MNRAS.480.4113P}).
From the analysis of the deep observations of the luminous clusters RXCJ1504 and A1664,
we can provide reliable measurements of the cooling rate at the 4-5$\sigma$ confidence level.
Both targets are already well studied in other energy bands, as well as through spatially resolved analysis with Chandra (e.g. RXCJ1504: \citealt{2005ApJ...633..148B}; \citealt{2009ApJS..182...12C}; \citealt{2010MNRAS.406..354O}; \citealt{2011MNRAS.410.1797S}; \citealt{2018ApJ...863..193V}; A1664: \citealt{2001MNRAS.328..762E}; \citealt{2006MNRAS.371...93W}; \citealt{2009ApJ...697..867K}; \citealt{2010ApJ...719.1619O}; \citealt{2014ApJ...784...78R}; \citealt{2019ApJ...875...65C}).
It is then of great interest to understand the role of such X-ray cooling rate in cluster evolution at intermediate redshifts.
To be more precise, in this section, we discuss the connection between the soft X-ray emitting gas and the cooler materials including the H$\alpha$ nebula, molecular gas reservoir, gas consumed by star formation activities and AGN accretion.
We are currently analysing the archival spectra of luminous clusters at intermediate redshifts ($0.1<z<0.6$) with known optical nebulae and will report the results elsewhere.
The H$\alpha$ nebulae of RXCJ1504 and A1664 are exceedingly luminous for intermediate redshift clusters.
To power these partially ionised nebulae, at least 15 times the observed H$\alpha$ luminosity is required once the other UV/IR emission from the same gas is included (\citealt{2003MNRAS.344L..48F}; \citealt{2009MNRAS.392.1475F}).
\citet{2010ApJ...719.1619O} and \citet{2010MNRAS.406..354O} reported that the energy of stellar photoionisation is comparable to the H$\alpha$ luminosity in our targets.
This means that additional sources of energy are required to power the remaining emission.
\citet{2003MNRAS.344L..48F} suggested that the soft X-ray gas can provide sufficient energy for the nebulae.
This is supported by the spatial coincidence between the soft X-ray components and the H$\alpha$ nebula (e.g. Perseus: \citealt{2003MNRAS.344L..48F,2006MNRAS.366..417F}; Centaurus: \citealt{2005MNRAS.363..216C}, \citealt{2016MNRAS.461..922F}; A1795: \citealt{2001MNRAS.321L..33F}, \citealt{2005MNRAS.361...17C}).
Since most soft X-ray gas stops cooling radiatively below 0.7 keV as seen in the spectra,
it can release a significant amount of energy if it continues to cool non-radiatively.
For nearby clusters, we found that the energy of the 0.7 keV gas is sufficient only for the less luminous nebulae, while the most luminous nebulae require a much warmer gas (\citealt{2019MNRAS.485.1757L}).
\citet{2008ApJ...681.1035O} reached the same conclusion for the 1 keV gas powering 5 times the IR luminosity.
To convert the energy required by the nebulae in our clusters into a mass inflow rate, we assume 15 times the H$\alpha$ luminosity is released by the inflowing gas,
\begin{equation}
\label{equ:0p5}
15\,L_{\rm H\alpha}= \frac{3}{2}\, \dot M_{\rm neb} \left(\frac{kT}{\mu m_p}\right),
\end{equation}
which simplifies to
\begin{equation}
\label{equ:1}
\dot M_{\rm neb}= 0.99 \times (\frac{L_{\rm H \alpha}}{10^{40}\,\rm erg\,\rm s^{-1}})(\frac{kT}{\rm 1keV})^{-1}\rm M_{\odot}\,\rm yr^{-1}.
\end{equation}
For our targets, $\dot{M}_{\rm neb}$ is 4.6$\times10^{3}\rm M_{\odot}\rm\,yr^{-1}$ for RXCJ1504 and 2.1$\times10^{2}\rm M_{\odot}\rm\,yr^{-1}$ for A1664.
Both of these values are much larger than the observed cooling rate between the ICM temperature and 0.7 keV.
They are also 2-3 times the classical cooling rate predicted in the absence of heating.
The cooling flow at 0.7 keV is therefore insufficient to power the observed H$\alpha$ nebulae.
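For reference, the conversion in equation~\ref{equ:1} can be verified with the short Python sketch below; the constants are standard cgs values, and the input luminosity is the $10^{40}\rm\,erg\,s^{-1}$ normalisation point of the formula rather than a measured value.
\begin{verbatim}
# Sketch: mass inflow rate needed to power a nebula requiring
# 15 x L_Halpha, released by gas at temperature kT (cgs units).
KEV  = 1.602e-9    # 1 keV in erg
M_P  = 1.673e-24   # proton mass, g
MU   = 0.6         # mean particle mass in units of m_p
MSUN = 1.989e33    # solar mass, g
YR   = 3.156e7     # year, s

def mdot_neb(L_halpha, kT_keV):
    mdot = 15.0 * L_halpha * MU * M_P / (1.5 * kT_keV * KEV)  # g/s
    return mdot * YR / MSUN                                   # Msun/yr

print(mdot_neb(1e40, 1.0))   # ~0.99 Msun/yr, the coefficient above
\end{verbatim}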
Alternatively, if the nebulae are powered by warmer gas flowing in at the observed radiative cooling rate, equation~\ref{equ:1} implies gas temperatures of 25 keV and 6.3 keV for RXCJ1504 and A1664, respectively.
Such a temperature is much higher than that of the hot ICM in RXCJ1504 and is not expected in the cool core.
Therefore, other sources of significant energy are required to power the H$\alpha$ nebulae of at least RXCJ1504 in addition to stellar photoionisation and the soft X-ray cooling flow (see Section \ref{sec:4.4} for additional energy in an alternative cooling flow model).
The gas properties of RXCJ1504 compare well to those of the Phoenix cluster at $z=0.596$.
It has a similar molecular gas mass of 2$\times 10^{10}$ $\rm M_{\odot}$ (\citealt{2017ApJ...836..130R}) embedded in an optical line-emitting nebula with an H$\alpha$ luminosity of 8.52$\pm$0.5$\times10^{43}\rm\,erg\,s^{-1}$ (\citealt{2014ApJ...784...18M}).
For the Phoenix cluster, \citet{2018MNRAS.480.4113P} reported a cooling rate of 350$^{+150}_{-120}$ $M_{\odot}\rm\,yr^{-1}$ below 2 keV at 68 per cent confidence level.
Both the H$\alpha$ luminosity and the cooling rate are twice those measured in RXCJ1504.
We calculate $\dot{M}_{\rm neb}$ to be 4.3$\times10^{3}\rm M_{\odot}\rm\,yr^{-1}$ for the 2 keV gas.
This means the soft X-ray gas and stellar photoionisation are also insufficient as the power source.
However, the Phoenix cluster hosts a starburst of 500-800 $M_{\odot}\rm\,yr^{-1}$, which is comparable to the observed cooling rate at the 1$\sigma$ confidence level (\citealt{2013ApJ...765L..37M,2014ApJ...784...18M}).
This suggests the molecular gas reservoir is likely growing slowly at the young age of the cluster.
Although the energy produced by the cooling rates does not match the energy required by the H$\alpha$ nebulae, the fate of the mass of the cooling gas still needs to be accounted for.
The condensation of X-ray cooling gas is strongly linked to both the massive molecular gas reservoir and the star formation in the BCG, which are only present when the radiative cooling time falls below a Gyr (\citealt{2008ApJ...687..899R}; \citealt{2018ApJ...853..177P}).
\citet{2019MNRAS.490.3025R} and \citet{2020MNRAS.497.1256L} found that the mass of the soft X-ray gas is consistent with the molecular gas mass in the inner 10 kpc.
If the X-ray cooling flow is indeed a major source of gas for the molecular gas reservoir and then star formation, we can calculate the timescale for forming the reservoir.
Without any star formation activity and AGN gas accretion, the molecular gas requires $10^8$ yr to accumulate in RXCJ1504 and 3.2$\times10^8$ yr in A1664.
However, both of our targets are extreme star-forming clusters and may host strong AGN activity.
Our results show that RXCJ1504 is cooling at 10 per cent and A1664 at 34 per cent of the classical cooling rate predicted in the absence of heating.
This means most radiative cooling is suppressed by AGN feedback.
The amount of heating required can be deduced from the luminosity of the cooling flow component above 0.7 keV, which is available in SPEX.
This gives at least 2.25$\times10^{45}\rm\,erg\,s^{-1}$ and 2.1$\times10^{43}\rm\,erg\,s^{-1}$ for RXCJ1504 and A1664, respectively.
Such energy is about a third of the mechanical power in A1664, hence AGN feedback, with its larger power of 6.8$\times10^{43}\rm\,erg\,s^{-1}$ (\citealt{2018ApJ...853..177P}), is sufficient to balance the cooling.
However, the required energy is 10 times larger than the mechanical power from the AGN in RXCJ1504,
which suggests the AGN has been much more active than now observed.
\citet{2010MNRAS.406..354O} found the same conclusion in RXCJ1504 using the 3$\sigma$ upper limit of the cooling rate measured from archival EPIC/RGS spectra.
Assuming an accretion efficiency of 0.1, the required energy is equivalent to a black hole growth rate of 0.39 $\rm M_{\odot}\rm\,yr^{-1}$ and 0.012 $\rm M_{\odot}\rm\,yr^{-1}$ for RXCJ1504 and A1664, respectively.
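As a cross-check, the conversion from heating power to black hole growth rate, $\dot M_{\rm BH}=L/(0.1\,c^2)$, can be reproduced with the minimal sketch below, using the required heating power of RXCJ1504 derived above.
\begin{verbatim}
# Sketch: black hole growth rate implied by a given power,
# assuming an accretion efficiency of 0.1.
C    = 2.998e10   # speed of light, cm/s
MSUN = 1.989e33   # solar mass, g
YR   = 3.156e7    # year, s

def mdot_bh(L, eff=0.1):
    return L / (eff * C**2) * YR / MSUN   # Msun/yr

print(mdot_bh(2.25e45))   # ~0.39 Msun/yr (RXCJ1504)
\end{verbatim}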
If the AGN is powered by Bondi accretion from the X-ray emitting gas, it requires a cool component of about 0.5 keV in RXCJ1504 (\citealt{1952MNRAS.112..195B}; \citealt{2010MNRAS.406..354O}).
Although our cooling flow models find most gas is above 0.7 keV, the detection of the {Fe\,\scriptsize{XVII}} resonance line shows that it is likely to have some cool gas at around 0.5 keV.
\citet{2020MNRAS.497.1256L} measured the mass of the 0.7 keV gas in nearby cool core clusters to be $10^8-10^9\rm\,M_{\odot}$.
RXCJ1504 is likely to have a higher gas mass at this temperature, since the luminosity of the 0.7 keV gas in the two-temperature model is 9 times larger than that of 2A0335+096, which has the largest gas mass below 1 keV.
Such a cool gas can fuel the AGN on the timescale of a few $10^9$ yr.
The ratio of the black hole growth rate to the star formation rate is 0.003 and 0.00086 for RXCJ1504 and A1664, respectively.
The relation between black hole growth and star formation is in good agreement with that found in other clusters (\citealt{2006ApJ...652..216R}).
Finally, the ratio of the radiative cooling rate to the star formation rate is between 1.5 and 2.5, which is smaller than most moderate star forming clusters but consistent with more luminous clusters such as A1835 (\citealt{2006ApJ...652..216R}; \citealt{2008ApJ...681.1035O}; \citealt{2019MNRAS.485.1757L}).
We find a net mass deposition rate of 50 $\rm M_{\odot}\rm\,yr^{-1}$ in RXCJ1504 and 20-30 $\rm M_{\odot}\rm\,yr^{-1}$ in A1664.
These increase the molecular gas formation timescale by 2-3 times.
Nevertheless, these timescales are consistent with the typical radiative cooling time of cool core clusters (e.g. A1664, \citealt{2009ApJ...697..867K}), which suggests a strong link between the X-ray cooling gas and the molecular gas.
\subsection{Turbulence versus heat propagation}
The archival \textit{XMM-}Newton observations of A1664 (ObsID: 0302030201/0302030201) did not point at the centre of the cluster and no turbulent velocity measurement was made by previous works.
For RXCJ1504, previous spectroscopic analyses and Monte Carlo simulations of turbulence found velocities of $670^{+600}_{-360}$ km\,$\rm s^{-1}$ and $1310^{+570}_{-670}$ km\,$\rm s^{-1}$ at the 68 per cent confidence level, respectively (\citealt{2011MNRAS.410.1797S,2013MNRAS.429.2727S}).
Our results using the new data give a much tighter limit (300 km\,$\rm s^{-1}$).
The turbulent velocity of our targets is comparable to the velocity measured in many bright clusters, e.g. $<$211 km\,$\rm s^{-1}$ in A1835 (\citealt{2013MNRAS.429.2727S}; \citealt{2018MNRAS.478L..44B}), $<$400 km\,$\rm s^{-1}$ in 2A0335+096 (\citealt{2015A&A...575A..38P}), $\sim$164 km\,$\rm s^{-1}$ in Perseus (\citealt{2016Natur.535..117H}) and $<$370 km\,$\rm s^{-1}$ in the Phoenix cluster (\citealt{2018MNRAS.480.4113P}).
It is worth noting that resonant scattering can place constraints on the level of turbulence in elliptical galaxies in galaxy groups (\citealt{2012A&A...539A..34D}; \citealt{2017MNRAS.472.1659O}).
For elliptical galaxies with a temperature below 1 keV, strong {Fe\,\scriptsize{XVII}} lines are usually seen in the RGS spectra.
\citet{2017MNRAS.472.1659O} measured a mean turbulent velocity of 107$\pm17$ km\,$\rm s^{-1}$ in 13 elliptical galaxies.
This is lower than the upper limit in our targets, but individual galaxies can have a higher turbulence (e.g. NGC 5044, \citealt{2012A&A...539A..34D}).
The temperature around the BCG in clusters is typically above 1.5-2 keV and {Fe\,\scriptsize{XVII}} lines are not detected in all clusters.
It is also difficult to measure the {Fe\,\scriptsize{XVII}} resonance-to-forbidden ratio due to the high continuum.
In the case of RXCJ1504, the {Fe\,\scriptsize{XVII}} forbidden line is redshifted to the RGS chip gap and therefore not detected by the RGS.
Placing constraints on turbulence through resonant scattering of Fe L lines in clusters would be ideal, but this is beyond the scope of this work.
We can calculate the adiabatic sound speed $c_{\rm s}=\sqrt{\gamma kT/\mu \rm m_{\rm p}}$, where $\gamma$ is the adiabatic index (5/3 for an ideal monatomic gas), $\mu = 0.6$ is the mean particle mass and $\rm m_{\rm p}$ is the proton mass.
This gives a sound speed of 1300 km\,$\rm s^{-1}$ and 750 km\,$\rm s^{-1}$ for RXCJ1504 and A1664, respectively.
In this work, we calculate the 1-D Mach number for turbulence $M=V_{\rm 1D}/c_{\rm s}$.
The turbulent velocity then has a Mach number $M$ of 0.23 in RXCJ1504 and 0.4 in A1664.
To calculate the ratio of the energy density in turbulence to the thermal energy of the plasma, we follow equation 11 in \citet{2009MNRAS.398...23W} and obtain
\begin{equation}
\label{equ:2}
\frac{\epsilon_{\rm turb}}{\epsilon_{\rm therm}}=\gamma M^2.
\end{equation}
We find that the energy density ratio is less than 8.9 per cent in RXCJ1504, which is comparable to the ratio of 4 per cent in Perseus (\citealt{2016Natur.535..117H}) and 13 per cent in A1835 (\citealt{2010MNRAS.402L..11S}).
In A1664, the turbulence energy is less than 27 per cent of thermal energy.
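These numbers follow directly from the sound speed, the Mach number and equation~\ref{equ:2}; a short sketch is given below, where the input temperatures are illustrative values consistent with the sound speeds quoted above rather than the fitted ICM temperatures.
\begin{verbatim}
# Sketch: sound speed, 1-D Mach number and turbulent-to-thermal
# energy density ratio gamma * M^2.
import math
KEV   = 1.602e-9    # 1 keV in erg
M_P   = 1.673e-24   # proton mass, g
GAMMA = 5.0 / 3.0
MU    = 0.6

def turbulence(kT_keV, v1d_kms):
    cs = math.sqrt(GAMMA * kT_keV * KEV / (MU * M_P)) / 1e5  # km/s
    mach = v1d_kms / cs
    return cs, mach, GAMMA * mach**2

print(turbulence(6.4, 300))  # ~1300 km/s, M ~ 0.23, ratio ~ 0.09
print(turbulence(2.1, 300))  # ~ 750 km/s, M ~ 0.40, ratio ~ 0.27
\end{verbatim}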
\citet{2018MNRAS.478L..44B} and \citet{2018MNRAS.480.4113P} calculated the minimum propagation velocity required to balance radiative cooling as a function of radius in 4 cool core clusters.
The gas properties of RXCJ1504 are similar to those of the Phoenix cluster and A1835, while the core of 1664 is similar to that of A2204.
The upper limits of turbulent velocity of 300 km\,$\rm s^{-1}$ are lower than required in both clusters at more than 15 kpc from the core.
It is clear that the energetics of the turbulent motion of hot gas can not fully balance radiative cooling throughout the cool core.
We now discuss the problem of energy transport and dissipation.
\citet{2018ApJ...865...53Z} argued that the large ($\sim$ 10 per cent) surface brightness fluctuations in the X-ray images are isobaric and/or isothermal on spatial scales of 10-60 kpc and are likely associated with slow gas motions and bubbles of relativistic plasma (X-ray cavities).
Bubbles tend to propagate along an axis but heating is also needed in directions away from that axis.
This requires a faster propagation than turbulence alone.
Internal waves or g-modes (buoyancy waves) are invoked on energetic grounds, but these waves do not propagate fast enough.
An alternative is to invoke time variability.
\citet{2020MNRAS.494.5507F} presents a time-dependent 1D simulation of heating in cool core clusters, with AGN outbursts reaching $10^{46}$ erg\,$\rm s^{-1}$ on Gyr timescales.
The central density and temperature profiles make large excursions on this timescale with energy advected during the outbursts.
However, \citet{2014MNRAS.438.2341P}, \citet{2017ApJ...851...66H} and \citet{2018ApJ...862...39B} show a universal inner entropy shape, which would not be seen if the central gas properties were cycling up and down.
The issue of how the energy is replaced or flows remains open, with sound waves remaining a possibility.
\subsection{Blueshifted component in A1664}
The best fit cooling flow models of A1664 show that the spectrum is well fitted by two \textit{lpro} components (see section 3.1).
This is also seen in some other clusters, e.g. Centaurus, where the {Fe\,\scriptsize{XVII}} lines are narrower than emission lines from hot-gas (\citealt{2016MNRAS.461.2077P}; \citealt{2019MNRAS.485.1757L}).
We find that although the scale factor is consistent in the \textit{lpro} components in A1664, the wavelength shift is different.
To understand the nature of this shift, we adopt a two-temperature model (2 \textit{cie}) and each \textit{cie} is associated with a separate \textit{lpro} component.
We find that the cooler component has a temperature of 0.80$\pm0.08$ keV and is blueshifted by 0.046$^{+0.049}_{-0.017}$ \AA\ from the hot gas.
Such a difference can be achieved by either a blueshifted gas component or the different centroids of the hot and cool gas phase.
We extract the surface brightness profile of the MOS1 image in 0.5-1 keV and 1-3 keV energy bands.
These bands cover most emission seen in the core RGS spectrum.
The profiles are shown in Fig.~\ref{fig:A1664_SB_energy}.
We find that the centroids of different gas phases are separated by 0.0017$\pm0.0006$ \AA.
This only accounts for 4 per cent of the observed wavelength shift.
Therefore, the blueshift is due to the motion of the cool gas.
Assuming the shift is driven by the {Fe\,\scriptsize{XVII}} resonance line, we estimate a blueshift velocity of 750$^{+800}_{-280}$ km\,$\rm s^{-1}$.
Independently, we also decouple the redshift of the two \textit{cie} components and convolve both with the same \textit{lpro} component.
We find a consistent blueshifted velocity of 1000$^{+500}_{-300}$ km\,$\rm s^{-1}$.
Note that the difference between the roll-angles of the pointings of the two observations is small. By simultaneously fitting the spectra of the individual observations, we find fit parameters consistent with those from the stacked spectrum of both observations.
A different line-of-sight velocity of different gas phases is seen in other clusters.
By decoupling the redshift, \citet{2018MNRAS.480.4113P} found a velocity of 1000$\pm400$ km\,$\rm s^{-1}$ in the Phoenix cluster.
A similar velocity of $\sim$1000 km\,$\rm s^{-1}$ is found in the non-cool-core Coma cluster, while gas in the Perseus cluster is more relaxed at 480$\pm210$ km\,$\rm s^{-1}$ (e.g. \citealt{2020A&A...633A..42S}).
It is possible to drive cool gas by sloshing due to minor mergers (e.g. \citealt{2006ApJ...650..102A}).
The shift in velocity is then seen in spatial coincidence with cold fronts (\citealt{2020A&A...633A..42S}).
In A1664, the molecular gas system in the centre is divided into two roughly equal clumps with a velocity separation of 600 km\,$\rm s^{-1}$ (\citealt{2014ApJ...784...78R}).
The blueshifted component is seen at a velocity of 571$\pm 7$ km\,$\rm s^{-1}$ in CO(3-2) along our line of sight, with a FWHM of 190$\pm 20$ km\,$\rm s^{-1}$.
Given the large uncertainty in the X-ray measurements, the velocities of the molecular and X-ray gas are consistent within 1$\sigma$.
Although it is unclear whether the blueshifted molecular gas lies in front or behind the BCG along the line of sight, the system is only a few kpc from the core in the transverse direction.
This molecular gas is likely embedded in the soft X-ray gas cloud so these two gas phases may be related.
The 0.8 keV gas has a sound speed of 460 km\,$\rm s^{-1}$.
The molecular gas will be shocked unless it comoves with the soft X-ray emitting gas.
\begin{figure}
\includegraphics[width=1\columnwidth]{A1664_SB_energy.png}
\caption{The energy dependent surface brightness profile of A1664.
\label{fig:A1664_SB_energy}
}
\end{figure}
\subsection{Embedded multilayer cooling flow in RXCJ1504}
\label{sec:4.4}
\begin{figure*}
\includegraphics[width=2\columnwidth]{Embedded_cf_schematic.png}
\caption{The schematic diagram of embedded cooling flow with three absorbing sheets and three cooling flow sheets. In the cluster, the blue columns represent sheets of absorbing gas and red columns represent sheets of radiative cooling flow. The black lines are the boundary between the columns. Blue arrows originated from the cluster represent emission from the associated sheet of cooling flow.
\label{fig:Embedded_cf_schematics}
}
\end{figure*}
Large amounts of cold obscuring material are present in the cores of some galaxy clusters, e.g. the Centaurus cluster (A3526) (\citealt{2005MNRAS.363..216C}; \citealt{2006MNRAS.370...63S}).
This suggests that absorption intrinsic to the target cluster is likely important and can reduce the observed emission from a radiative cooling flow.
\citet{2008MNRAS.385.1186S} reported a factor of 3 larger cooling rate in the Centaurus cluster if there is a $4\times10^{21} \rm cm^{-2}$ column density intrinsic to the cluster.
This represents a significant amount of extra intrinsic luminosity available for powering emission due to cold gas.
In this section, we reintroduce a simple multilayer, intrinsically absorbed, cooling flow model, which was first proposed by \citet{1997MNRAS.286..583A}.
The schematic diagram of the model is shown in Fig.~\ref{fig:Embedded_cf_schematics}.
For simplicity, we assume the cool core consists of several parallel sheets of material.
Identical sheets of radiatively cooling gas in X-rays are placed in-between identical sheets of absorbing gas.
The absorbing gas is assumed to be cold and neutral.
An X-ray cooling sheet is absorbed by all absorbing sheets along the line-of-sight.
This means the cooling sheet closest to the observer is absorbed once, and the furthest cooling sheet is absorbed three times in Fig.~\ref{fig:Embedded_cf_schematics}.
The physical depth of these sheets is irrelevant in this work.
We assume each cooling gas sheet is emitting a flux $F_{\lambda}$.
The fraction of the emitted energy transmitted through one sheet of absorbing gas is $f_{\lambda}$.
We can write this transmission fraction as
\begin{equation}
\label{equ:2p5}
f_{\lambda}=e^{-\sigma(E)\Delta n_{\rm H}},
\end{equation}
where $\sigma(E)$ is the absorption cross-section and $\Delta n_{\rm H}$ is the column density of one sheet of absorbing gas.
The total observed flux is then
\begin{equation}
\label{equ:3}
F_{\rm tot}=f_{\lambda}F_{\lambda}+f_{\lambda}^2F_{\lambda}+...+f_{\lambda}^{n_{\rm sheet}}F_{\lambda}
=F_{\lambda}\sum_{m=1}^{n_{\rm sheet}}f_{\lambda}^m,
\end{equation}
where $n_{\rm sheet}$ is the number of sheets of absorbing gas components.
Since absorption is a multiplicative process, it is possible to use the geometric series and equation \ref{equ:3} becomes
\begin{equation}
\label{equ:4}
F_{\rm tot}=F_{\lambda}\,\frac{f_{\lambda}\left(1-f_{\lambda}^{n_{\rm sheet}}\right)}{1-f_{\lambda}}.
\end{equation}
This means that only $n_{\rm sheet}$ and the total column density $n_{\rm H, tot}$ are additional free parameters.
Note that $n_{\rm H, tot}=n_{\rm sheet}\Delta n_{\rm H}$.
In the large $n_{\rm sheet}$ limit, Equation \ref{equ:2p5} can be expanded and Equation \ref{equ:4} rewritten as
\begin{equation}
\label{equ:5}
F_{\rm tot}=F_{\lambda}n_{\rm sheet}\frac{1-e^{-\sigma (E)\Delta n_{\rm H}}}{\sigma (E)\Delta n_{\rm H}}.
\end{equation}
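To illustrate the behaviour of equations~\ref{equ:3}-\ref{equ:5}, a minimal numerical sketch of the mean transmitted fraction per cooling sheet is given below; the total optical depth $\sigma(E)\,n_{\rm H,tot}$ is an arbitrary illustrative value.
\begin{verbatim}
# Sketch: mean transmission per cooling sheet,
# F_tot / (n_sheet * F_lambda), for n_sheet absorbing sheets
# at fixed total column density.
import math

tau_tot = 1.0   # sigma(E) * n_H,tot (illustrative)

def mean_transmission(n_sheet):
    f = math.exp(-tau_tot / n_sheet)   # one-sheet transmission
    return f * (1 - f**n_sheet) / ((1 - f) * n_sheet)

for n in (1, 2, 10, 100):
    print(n, mean_transmission(n))   # converges for large n_sheet
\end{verbatim}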
Unfortunately, the geometric series implementation of absorption components is not yet available in SPEX.
Nevertheless, it is possible to implement a brute-force model combining the existing spectral component of absorption and cooling flow.
First, we select the number of absorbing sheets and choose a total column density.
Each absorption component has the same column density of $n_{\rm H, tot}/n_{\rm sheet}$ and is fixed in the spectral fitting.
We assume the temperature of the absorbing gas is 0.5 eV, and the abundances are coupled to the X-ray cooling gas.
The X-ray cooling gas is modelled by $n_{\rm sheet}$ \textit{cf} components.
Each of these \textit{cf} components will be modified by a different number of absorption components before Galactic absorption.
We couple the fit parameters of all \textit{cf} components to one \textit{cf} component.
Then we measure the cooling rate from the \textit{cie} temperature down to 0.01 keV as the complete cooling flow model described in section 3.1.
The intrinsic absorption corrected cooling rate is then the total cooling rate of all \textit{cf} components.
This reconstructs the multilayer cooling flow model, which is equivalent to the complete cooling flow model with intrinsic absorption.
We use a 15$\times$15 grid in the $n_{\rm sheet}$-$n_{\rm H, tot}$ parameter space.
We apply our model to RXCJ1504 and fit for minimal C-stat for each pair of $n_{\rm sheet}$ and $n_{\rm H, tot}$.
The improvement of C-stat from the complete cooling model is seen in Fig.~\ref{fig:RXCJ1504_embeddedcf_cstat}, where we have included the contour at 68 per cent, 90 per cent and 95 per cent confidence levels.
We search for the total column density that gives the minimum C-stat at any given number of sheets of absorbing gas.
We find a valley of minimal C-stats in the parameter space.
The absolute minimum of C-stat occurs for 1 absorption component with a column density of $6\times10^{21} \rm cm^{-2}$.
However, the difference between the C-stats is less than 0.3 on this valley.
This means $n_{\rm sheet}$ and $n_{\rm H, tot}$ are highly degenerate.
\citet{1997MNRAS.286..583A} found that the multilayer cooling flow model typically overpredicts the intrinsic column density by a factor of 1.5-3 in a sample of low redshift clusters.
In conjunction with the valley of minimal C-stats in Fig.~\ref{fig:RXCJ1504_embeddedcf_cstat}, this suggests that the true value of the intrinsic column density corresponds to a complex multilayer model with $n_{\rm sheet}\sim 10$.
We can compare the intrinsic absorption corrected cooling rate between the simplest 1 absorbing sheet model and the 10 sheets model.
For the 1 sheet model, we measure a cooling rate of 430$\pm90$ $\rm M_{\odot}\rm\,yr^{-1}$.
This is 8 times higher than the complete cooling rate without intrinsic absorption (see Model 2 in Table \ref{tab:RXCJcf}).
For the 10 sheet model with a total column density of $1.5\times10^{22} \rm cm^{-2}$, the cooling rate is 520$\pm30$ $\rm M_{\odot}\rm\,yr^{-1}$.
The intrinsic absorption corrected cooling rate is consistent between 1 sheet and 10 sheets model at the 1$\sigma$ level.
An order-of-magnitude increase in the cooling rate above 0.7 keV in RXCJ1504 can contribute $\sim$40 per cent of the energy required to power the UV/optical line-emitting nebula, but not all of it.
In other massive clusters with $\dot M_{\rm neb}>100\rm M_{\odot}\rm\,yr^{-1}$ (such as 2A0335+096, A1835 and A2597, see \citealt{2019MNRAS.485.1757L}), 10 times the cooling rate means the soft X-ray emitting gas can power the UV/optical nebula alone.
We also apply the embedded cooling flow model to A1664.
For 10 sheets of absorbing gas with a total column density of $3\times10^{21} \rm cm^{-2}$, we only measure a cooling rate of 31$\pm7$ $\rm M_{\odot}\rm\,yr^{-1}$.
This is consistent with the cooling rate without intrinsic absorption.
No significant change of the cooling rate is detected for other combinations of $n_{\rm sheet}$ and $n_{\rm H, tot}$.
Note that the effect of embedded absorption on the optical line-emitting gas is explored by \citet{2021arXiv210309842P}.
We also need to re-examine the role of the absorption-corrected cooling rate in the AGN feedback.
The corrected rate is 24 per cent of the rate predicted in the absence of heating.
This only slightly reduces the amount of heating from the AGN and the black hole growth rate.
On the other hand, the cooling rate is 3.3 times higher than the star formation rate.
The difference between the cooling and star formation rates is 300 $\rm M_{\odot}\rm\,yr^{-1}$.
This suggests the molecular gas reservoir is growing 6 times faster than in the unabsorbed cooling model, with a formation timescale of less than $10^8$ yr.
In future work, it will be interesting to reduce the degeneracy between intrinsic column density and the multilayer structure of embedded cooling flow model.
Consideration will be needed for scattering of resonance lines (see studies of cool X-ray emitting gas in groups and elliptical galaxies by \citealt{2016MNRAS.461.2077P}, \citealt{2017MNRAS.472.1659O}).
Such lines can be absorbed by the cold gas as they scatter around in the plasma.
The observational situation will be improved with the future high-spectral-resolution mission XRISM (\citealt{2020arXiv200304962X}), with its non-dispersive calorimeter, and later by the X-IFU of Athena (\citealt{2018SPIE10699E..1GB}).
\begin{figure}
\includegraphics[width=1\columnwidth]{RXCJ1504_embedded_cf_cstat.png}
\caption{The improvement of C-stat over the complete cooling model without intrinsic absorption. The yellow curve represents the maximum improvement of C-stat at each number of sheets of absorbing gas.
\label{fig:RXCJ1504_embeddedcf_cstat}
}
\end{figure}
\section{Conclusions}
We have performed a multiphase cooling flow analysis on deep \textit{XMM-}Newton RGS observations of two X-ray luminous cool core clusters, RXCJ1504 at $z=0.2153$ and A1664 at $z=0.1283$.
The cooling rates are measured to be 180$\pm$40 $\rm M_{\odot}\rm\,yr^{-1}$ and 34$\pm$6 $\rm M_{\odot}\rm\,yr^{-1}$ for RXCJ1504.1-0248 and A1664, respectively.
These are higher than the observed star formation rates in both clusters.
We place an upper limit on the residual cooling rate below 0.7 keV of 53 $\rm M_{\odot}\rm\,yr^{-1}$ at the 90 per cent confidence level in RXCJ1504.1-0248.
The energy of the cooling gas is insufficient to power the UV/optical line-emitting nebula in both clusters and additional sources of energy are required.
If the molecular gas reservoir is accumulating mass from the condensation of the radiatively cooling gas, the formation timescale is 1-3$\times10^8$ yr from the observed cooling rate, but is likely longer due to the high star formation activity.
We also place a tight constraint on turbulence in the core.
An upper limit of 300 km\,$\rm s^{-1}$ of 1-D turbulent velocity at 90 per cent confidence level is measured in both clusters.
These velocities correspond to a Mach number of 0.23 and 0.4 for RXCJ1504.1-0248 and A1664, respectively.
The energy density of turbulence is equivalent to 8.9 per cent and 27 per cent of the thermal energy density, which is inadequate to fully transfer AGN heating throughout the cool core.
We find the cool component of 0.80$\pm 0.08$ keV is blueshifted from the systemic velocity of the cluster at 750$^{+800}_{-280}$ km\,$\rm s^{-1}$ in A1664.
This is consistent with the velocity of the blueshifted component in the molecular gas,
but we cannot rule out an origin within a sloshing cold front for the blueshifted X-ray gas.
We reintroduce a multilayer, intrinsically-absorbed, cooling flow model.
In RXCJ1504.1-0248, we find that the cooling rate increases to 520$\pm30$ $\rm M_{\odot}\rm\,yr^{-1}$ using the 10 absorbing sheet model.
This is an order of magnitude higher than the cooling rate measured without intrinsic absorption.
The intrinsically absorbed cooling rate of A1664 is unaffected and consistent with the current measurement.
In the future, XRISM and Athena will help to unveil the connection between molecular and X-ray emitting gas phases and determine the influence of intrinsic absorption on cooling flows.
\section*{Acknowledgements}
This work is based on observations obtained with XMM-\textit{Newton},
an ESA science mission funded by ESA Member States and USA (NASA).
HL thanks the co-authors for their contributions and the referee for their support.
\section*{Data Availability}
No new data were generated or analysed in support of this research.
\bibliographystyle{mnras}
\section{Missing Proofs in Section \ref{sec:shallow-linear}}\label{app:shallow-linear}
\paragraph{Proofs of Proposition \ref{prop:basic}.}
\begin{proof}
Lipschitz continuous follows from
\[ \|\nabla L(\mathbf{w})\| \leq \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \left\|\frac{y(\mathbf{x})}{1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}}\mathbf{x} \right\| \leq \mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\left\|\mathbf{x}\right\|\leq \sqrt{tr(\Sigma)}.\]
Convexity follows from
\[ \nabla^2L(\mathbf{w})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{1}{1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} \frac{e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} {1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}}\mathbf{x} \mathbf{x}^{\top}\succeq \mathbf{0}. \]
Moreover, for any $\mathbf{u}\in\mathbb{R}^d$ with $\|\mathbf{u}\|=1$,
\[ \mathbf{u}^\top\nabla^2L(\mathbf{w})\mathbf{u}= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{1}{1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} \frac{e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} {1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}}|\mathbf{x}^{\top}\mathbf{u}|^2\leq \frac{1}{4} \mathbb{E} |\mathbf{x}^{\top}\mathbf{u}|^2 \leq \frac{1}{4}\lambda_{\max}(\Sigma). \]
However, when $\mathbf{w}=k\mathbf{v}$ with $k \to \infty$, if $P(\{ \mathbf{x}:\mathbf{v}^{\top}\mathbf{x}=0\})=0$, then
\[ \limsup\limits_{k\rightarrow +\infty, \mathbf{w} = k\mathbf{v}} \|\nabla^2 L(\mathbf{w})\| \leq \limsup\limits_{k\rightarrow +\infty, \mathbf{w} = k\mathbf{v}} \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{1}{1+e^{k|\mathbf{v}^\top \mathbf{x}|}}\frac{e^{k|\mathbf{v}^\top \mathbf{x}|}} {1+e^{k|\mathbf{v}^\top \mathbf{x}|}}\|\mathbf{x}\|^2=0. \]
Hence
\[ \lim_{k\rightarrow +\infty, \mathbf{w} = k\mathbf{v}}\nabla^2L(\mathbf{w})=\mathbf{0}. \]
\end{proof}
\paragraph{Proofs of Proposition \ref{prop:grad-norm-prop}.}
\begin{proof}
Suppose $\|\mathbf{w}\|\leq M$, and notice that
\[ -\mathbf{v}^\top\nabla L(\mathbf{w})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|\mathbf{v}^\top\mathbf{x}|} {1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} \geq 0. \]
\[ -\mathbf{v}^{\top}\nabla L(\mathbf{w})\geq \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|\mathbf{v}^\top\mathbf{x}|}{1+e^{\|\mathbf{w}\|\cdot\|\mathbf{x}\|}}\geq \frac{1}{1+e^{MR}}\mathbb{E}|\mathbf{v}^\top\mathbf{x}|\mathbbm{1}_{\{\|\mathbf{x}\| \leq R\}}\geq \frac{r}{1+e^{MR}}P\left(\|\mathbf{x}\| \leq R,|\mathbf{v}^\top\mathbf{x}|\geq r\right). \]
Let $\epsilon:=P\left(|\mathbf{v}^\top\mathbf{x}|>0\right)>0$; then there exist $R>r>0$ such that $P\left(\|\mathbf{x}\| \leq R,|\mathbf{v}^\top\mathbf{x}|\geq r\right)\geq\epsilon/2$, hence $-\mathbf{v}^{\top}\nabla L(\mathbf{w}(t))\geq 0.5r\epsilon/(1+e^{MR})>0$.
Hence, $\partial (\mathbf{v}^\top\mathbf{w}(t))/\partial t > 0$ and $\mathbf{v}^\top(\mathbf{w}(n+1)-\mathbf{w}(n)) > 0$, showing that $\mathbf{v}^{\top}\mathbf{w}(t)$ and $\mathbf{v}^{\top}\mathbf{w}(n)$ are increasing.
If $\|\mathbf{w}(t)\|\leq M$, then $\mathbf{v}^{\top}\mathbf{w}(t)\leq M$. Hence, $\mathbf{v}^{\top}\mathbf{w}(t)$ converges, showing that $\lim_{t\rightarrow \infty}\mathbf{v}^{\top}\nabla L(\mathbf{w}(t))=0$.
Applying the same argument again, we obtain a contradiction.
If $\|\mathbf{w}(n)\|\leq M$, then $\mathbf{v}^{\top}\mathbf{w}(n)\leq M$. Hence, $\mathbf{v}^{\top}\mathbf{w}(n)$ converges, showing that $\lim_{n \rightarrow \infty}\eta_n\mathbf{v}^{\top}\nabla L(\mathbf{w}(n))=0$. Since $\eta_n\geq\eta_{-}>0$, we get $\lim_{n \rightarrow \infty}\mathbf{v}^{\top}\nabla L(\mathbf{w}(n))=0$, and the same argument as in the gradient flow case applies.
\end{proof}
\paragraph{Proofs of Proposition \ref{prop:grad-prop}.}
\begin{proof}
Since $\mathcal{D}$ is spherically symmetric, we can assume $\mathbf{v}=\mathbf{e}_1$, then
\[ \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{\text{sgn}(\mathbf{v}^{\top} \mathbf{x})}{2} x_i =\frac{1}{2} \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} |x_i| \mathbbm{1}_{\{i=1\}}. \]
Hence,
\[ \nabla L(\mathbf{0}) = -\frac{1}{2} \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} |x_1| \mathbf{v}. \]
In addition,
\[ \left(-\nabla L(r\mathbf{v})\right)_i = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\frac{\text{sgn}(\mathbf{v}^{\top} \mathbf{x})}{1+e^{r|\mathbf{v}^{\top} \mathbf{x}|}} x_i = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|x_1|}{1+e^{r|x_1|}}\mathbbm{1}_{\{i=1\}}. \]
Hence,
\[ \nabla L(r\mathbf{v}) = -\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|x_1|}{1+e^{r|x_1|}}\mathbf{v}. \]
Finally,
\[ \|\nabla L(\mathbf{w})\| = \|\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y}{1+e^{y\mathbf{w}^{\top}\mathbf{x}}}\mathbf{x}\| = \|\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w},\mathbf{v}}} \frac{y}{1+e^{y\mathbf{w}^{\top}\mathbf{x}}}\mathbf{x}\| \leq \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w},\mathbf{v}}} \|\mathbf{x}\| = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{2}} \|\mathbf{x}\|=c_0. \]
The last equality uses the property that $\mathcal{D}$ is spherically symmetric, so that $\mathcal{D}_{\mathbf{w},\mathbf{v}}$ is identically distributed for any $\mathbf{w},\mathbf{v}$; we take the first two dimensions as an example.
\end{proof}
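The identity $\nabla L(\mathbf{0}) = -\tfrac{1}{2}\mathbb{E}|x_1|\,\mathbf{v}$ is easy to verify by Monte Carlo; below is a minimal sketch for standard Gaussian inputs, for which $\mathbb{E}|x_1|=\sqrt{2/\pi}$.
\begin{verbatim}
# Sketch: Monte Carlo check that grad L(0) = -(1/2) E|x_1| v
# for spherically symmetric data with labels y = sgn(v^T x).
import numpy as np
rng = np.random.default_rng(0)
d, n = 5, 1_000_000
v = np.zeros(d); v[0] = 1.0
x = rng.standard_normal((n, d))
y = np.sign(x @ v)
grad0 = -np.mean((y / 2.0)[:, None] * x, axis=0)  # gradient at w = 0
print(grad0)                          # ~ -0.3989 * e_1
print(-0.5 * np.sqrt(2.0 / np.pi))    # -(1/2) E|x_1|
\end{verbatim}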
\paragraph{Proofs of Lemma \ref{lemma:angle_grad}.}
\begin{proof}
We denote by $ \mathcal{D}_{\mathbf{w}, \mathbf{v}} $ the marginal distribution of $ \mathbf{x} $ on the $ 2 $-dimensional subspace $ \text{span}\{\mathbf{w},\mathbf{v}\} $, and, with a slight abuse of notation, we still write $ \mathbf{w}, \mathbf{v} \in \mathbb{R}^2 $ for the representations of $\mathbf{w}, \mathbf{v}$ in the subspace $ \text{span}\{\mathbf{w},\mathbf{v}\} $, meaning that we only consider the case in $\mathbb{R}^2$:
\[ \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}}\frac{|\mathbf{v}^{\top}\mathbf{x}|-\text{sgn}(\mathbf{v}^{\top}\mathbf{x}) \left(\overline{\mathbf{w}}^{\top} \mathbf{v}\right) \left(\overline{\mathbf{w}}^{\top}\mathbf{x}\right)}{1+e^{\text{sgn}(\mathbf{v}^{\top}\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}}. \]
Additionally, the expression above is invariant to rotating the coordinate frame, so we can assume without loss of generality
that $\overline{\mathbf{w}} = (1,0)^{\top}$, $\mathbf{v} = (v_1, v_2)^{\top}$, then we only need to consider
\[ \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}}g(\mathbf{x}) := \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}} \frac{\text{sgn}(\mathbf{v}^{\top}\mathbf{x})\left(v_2x_2\right)} {1+e^{\text{sgn}(\mathbf{v}^{\top}\mathbf{x})\|\mathbf{w}\|x_1}}. \]
\begin{enumerate}
\item If $|v_1x_1| > |v_2x_2|$, then $\text{sgn}(\mathbf{v}^{\top}\mathbf{x})=\text{sgn}(v_1x_1)$,
\[ g(x_1,x_2)+ g(x_1,-x_2)=0. \]
\item If $|v_2x_2| \geq |v_1x_1|$, then $\text{sgn}(\mathbf{v}^{\top}\mathbf{x})=\text{sgn}(v_2x_2)$,
\[ g(x_1, x_2)+ g(x_1, -x_2) = \frac{|v_2x_2|}{1+e^{\|\mathbf{w}\|x_1}} +\frac{|v_2x_2|}{1+e^{-\|\mathbf{w}\|x_1}} = |v_2x_2|\geq 0. \]
\end{enumerate}
Therefore $\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}}g(x_1,x_2)\geq 0$. Moreover, if $v_2 \neq0$ and $P(|v_2x_2|\geq|v_1x_1| > 0) > 0$, then $\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}g(x_1,x_2)> 0$. Otherwise, $v_2=0$, then $\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}}g(x_1,x_2) = 0$, i.e., the angle between $\mathbf{w}$ and $\mathbf{v}$ would not change. \\
More precisely,
\begin{equation*}
\begin{aligned}
\mathbb{E}_{\mathbf{x}\sim\mathcal{D_{\mathbf{w}, \mathbf{v}}}}g(x_1,x_2) &=\frac{1}{2}\mathbb{E}_{\mathbf{x} \sim\mathcal{D}_{\mathbf{w}, \mathbf{v}}}|v_2x_2|\mathbbm{1}_{\{|v_2x_2| \geq |v_1x_1|\}} \\
&=\frac{|v_2|}{4\pi} \int_{0}^{\infty}r\int_{|\tan\theta(\mathbf{w}, \mathbf{v}) \cdot \tan\theta|\geq 1}|\sin\theta| d\theta dF(r) \\
&=\frac{\sin^2\theta(\mathbf{w}, \mathbf{v})}{\pi}\int_{0}^{\infty}r dF(r)\\
&=\frac{c_0\sin^2\theta(\mathbf{w}, \mathbf{v})}{\pi}.
\end{aligned}
\end{equation*}
where $F(r)$ is the cumulative distribution function of $r := \|\mathbf{x}\|_2$.
Since the analysis is restricted to the plane $\text{span}\{\mathbf{w}, \mathbf{v}\}$, we can also obtain the projection of $\nabla L(\mathbf{w})$ orthogonal to $\mathbf{w}$ exactly.
If $\theta(\mathbf{w}, \mathbf{v})=0$, equality holds, otherwise, from the first result in Lemma \ref{lemma:angle_grad}, we obtain
\[ -\mathbf{v}^{\top}\left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right) \nabla L(\mathbf{w}) = \frac{c_0}{\pi}\mathbf{v}^{\top} \left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right) \mathbf{v}. \]
Notice that $\nabla L(\mathbf{w}) \in \text{span}\{\mathbf{w}, \mathbf{v}\}$, and
\[ \left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right)\left(\frac{c_0}{\pi}\mathbf{v}+\nabla L(\mathbf{w})\right)\perp \mathbf{w}, \mathbf{v}. \]
Hence
\[ \left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right)\left(\frac{c_0}{\pi}\mathbf{v}+\nabla L(\mathbf{w})\right)=\mathbf{0}. \]
\end{proof}
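The closed form $\mathbb{E}\,g = c_0\sin^2\theta(\mathbf{w}, \mathbf{v})/\pi$ can likewise be checked numerically; the sketch below uses a 2-dimensional standard Gaussian, for which $c_0=\mathbb{E}\|\mathbf{x}\|=\sqrt{\pi/2}$, with an arbitrary angle and norm.
\begin{verbatim}
# Sketch: Monte Carlo check of E g = c_0 sin^2(theta) / pi,
# with w_bar = (1, 0) and v = (cos a, sin a); independent of ||w||.
import numpy as np
rng = np.random.default_rng(1)
n, a, wnorm = 2_000_000, 0.7, 1.3
x = rng.standard_normal((n, 2))
s = np.sign(np.cos(a) * x[:, 0] + np.sin(a) * x[:, 1])
g = s * np.sin(a) * x[:, 1] / (1.0 + np.exp(s * wnorm * x[:, 0]))
c0 = np.sqrt(np.pi / 2.0)
print(g.mean(), c0 * np.sin(a)**2 / np.pi)
\end{verbatim}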
\paragraph{Proofs of Lemma \ref{lemma:norm-change}.}
\begin{proof}
$1)$. The first proposition follows from
\[ N(\mathbf{w})=-\mathbf{w}^{\top} \nabla L(\mathbf{w}) = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}{1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}}. \]
Using the fact that $x/(1+e^x) < 0.3$ for all $x\in\mathbb{R}$, we get $N(\mathbf{w})\leq 0.3$.
Using spherical symmetry, we may assume that $\overline{\mathbf{w}} = (1,0)^{\top}$ and $\mathbf{v} = (\cos\alpha,\sin\alpha)^{\top}$; then
\[ N(\mathbf{w})= \|\mathbf{w}\|\, \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_{\mathbf{w},\mathbf{v}}}\, g(\mathbf{x}), \qquad g(\mathbf{x}):=\frac{\text{sgn}(\mathbf{v}^{\top}\mathbf{x})\, x_1}{1+e^{\|\mathbf{w}\|\text{sgn}(\mathbf{v}^{\top}\mathbf{x})x_1}}.\]
$2)$ and $3)$.
\begin{itemize}
\item If $|v_1x_1| > |v_2x_2|$, then $\text{sgn}(\mathbf{v}^{\top}\mathbf{x})=\text{sgn}(v_1x_1)$,
\[ g(x_1, x_2) + g(x_1, -x_2)= \frac{2 \; \text{sgn}(v_1)|x_1|}{1+e^{\|\mathbf{w} \|\text{sgn}(v_1)|x_1|}}. \]
\item If $|v_2x_2| \geq |v_1x_1|$, then $\text{sgn}(\mathbf{v}^{\top}\mathbf{x})=\text{sgn}(v_2x_2)$,
\[ g(x_1, x_2) + g(x_1, -x_2) = \frac{1-e^{\|\mathbf{w}\|x_1}}{1+e^{\|\mathbf{w}\|x_1}}x_1 \leq 0. \]
\end{itemize}
As $\theta(\mathbf{w}, \mathbf{v})$ decreases (with $0\leq\theta(\mathbf{w}, \mathbf{v})<\pi/2$), the region in the first case, whose contribution is positive, becomes larger while the region in the second case becomes smaller, so $N(\mathbf{w})$ increases. When $\theta(\mathbf{w}, \mathbf{v})\geq \pi/2$, we have $v_1\leq 0$ and both cases contribute non-positively, showing $N(\mathbf{w})\leq 0$.
In addition, when $r\leq \|\mathbf{w}\|\leq R$, we have
\[ N(\mathbf{w})\geq \int_{|v_1x_1| > |v_2x_2|} \frac{2|x_1|}{1+e^{R|x_1|}}d\mathbf{x}-\int_{|v_1x_1| \leq |v_2x_2|} \frac{e^{r|x_1|}-1}{1+e^{r|x_1|}}|x_1|d\mathbf{x}. \]
Therefore, when $\theta(\mathbf{w},\mathbf{v}) \to 0$, so that $v_2\to0$ and $v_1\to 1$, we obtain
\[ \liminf_{\theta(\mathbf{w},\mathbf{v}) \to 0} N(\mathbf{w})\geq \int \frac{2|x_1|}{1+e^{R|x_1|}}d\mathbf{x}>0. \]
Hence there exists $\theta_0$ (depending on $r,R$) such that, whenever $\theta(\mathbf{w},\mathbf{v}) < \theta_0$, $N(\mathbf{w})>0$ holds for any $r\leq \|\mathbf{w}\|\leq R$.
$4)$. We fix $\theta(\mathbf{w}, \mathbf{v})$ or $\overline{\mathbf{w}}$, and consider
\[ \overline{N}(r) := \frac{N(\mathbf{w})}{\|\mathbf{w}\|} = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{y(\mathbf{x})\overline{\mathbf{w}}^{\top}\mathbf{x}}{1+e^{y(\mathbf{x}) r \overline{\mathbf{w}}^{\top}\mathbf{x}}}. \]
Then when $r \to 0$, we obtain
\[ \lim_{r \to 0}\overline{N}(r) = \frac{1}{2} \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} y(\mathbf{x})\overline{\mathbf{w}}^\top\mathbf{x}=\frac{c_0|\cos\alpha|}{\pi}. \]
In addition,
\[ \left|\frac{\partial \overline{N}(r)}{\partial r}\right| = \left|\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} -\frac{\left(\overline{\mathbf{w}}^{\top}\mathbf{x}\right)^2e^{y(\mathbf{x})r\overline{\mathbf{w}}^{\top}\mathbf{x}}}{\left(1+e^{y(\mathbf{x}) r\overline{\mathbf{w}}^{\top}\mathbf{x}}\right)^2}\right| \leq \frac{1}{4} \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \left(\overline{\mathbf{w}}^{\top}\mathbf{x}\right)^2 \leq \frac{c_0}{4}. \]
Then $\overline{N}(r)$ is $c_0/4$-Lipschitz continuous, hence when $\|\mathbf{w}\| \leq 2|\cos\alpha|/\pi$, we obtain
\[ N(\mathbf{w})\cos\alpha>0. \]
\end{proof}
\paragraph{Proofs of Lemma \ref{lemma:norm-increase-linear}.}
\begin{proof}
From Lemma \ref{lemma:norm-change} and the condition $\partial \|\mathbf{w}(t)\|^{2}/\partial t=0$, we have $\mathbf{w}(t)^{\top}\nabla L(\mathbf{w}(t))=0$, so
\[ \frac{\partial \cos\theta(t)}{\partial t} = -\frac{1}{\|\mathbf{w}(t)\|}\left(\mathbf{v}-\mathbf{v}^\top\overline{\mathbf{w}}(t) \overline{\mathbf{w}}(t)\right)^\top \nabla L(\mathbf{w}(t)) = -\frac{1}{\|\mathbf{w}(t)\|}\,\mathbf{v}^\top \nabla L(\mathbf{w}(t)) \geq 0. \]
\end{proof}
\paragraph{Proofs of Theorem \ref{thm:linear-flow}.}
\begin{proof}
Based on Proposition \ref{prop:grad-norm-prop}, Lemma \ref{lemma:norm-change} and \ref{lemma:norm-increase-linear}, $T$ always exists and $\|\mathbf{w}(t)\|$ first decreases when $t \leq T$, then increases when $t\geq T$. Therefore, when $t \leq T$
\[ \frac{\partial}{\partial t} \cos(\theta(t)) = \frac{1}{\|\mathbf{w}(t)\|} \frac{c_0\sin^2\theta(t)}{\pi} \geq \frac{1}{\|\mathbf{w}(0)\|} \frac{c_0\sin^2\theta(t)}{\pi}. \]
We obtain
\[ \frac{1}{2}\ln\frac{1+\cos\theta(t)}{1-\cos\theta(t)} - \frac{1}{2}\ln\frac{1+\cos\theta(0)}{1-\cos\theta(0)}\geq \frac{c_0}{\|\mathbf{w}(0)\|\pi}t. \]
Thus
\[ \cos\theta(t)\geq 1-\frac{2}{e^{A_1t+B_1}+1}, t\leq T. \]
When $t\geq T$, based on Lemma \ref{lemma:norm-change},
\[ \frac{\partial}{\partial t}\|\mathbf{w}(t)\|^2 = 2N(\mathbf{w}(t)) \leq 0.6.\]
Hence $\|\mathbf{w}(t)\|^2\leq 0.6(t-T)+\|\mathbf{w}(T)\|^2$.
\[ \frac{\partial}{\partial t} \cos(\theta(t)) = \frac{1}{\|\mathbf{w}(t)\|} \frac{c_0\sin^2\theta(t)}{\pi} \geq \frac{1}{\sqrt{0.6(t-T)+\|\mathbf{w}(T)\|^2}} \frac{c_0\sin^2\theta(t)}{\pi}. \]
We obtain
\[ \frac{1}{2}\ln\frac{1+\cos\theta(t)}{1-\cos\theta(t)} - \frac{1}{2}\ln\frac{1+\cos\theta(T)}{1-\cos\theta(T)}\geq \frac{2c_0}{0.6\pi}\left(\sqrt{0.6(t-T)+\|\mathbf{w}(T)\|^2}-\|\mathbf{w}(T)\|\right). \]
Thus
\[ \cos\theta(t)\geq 1-\frac{2}{e^{A_2\sqrt{t+C_2}+B_2}+1}. \]
Note that in the second part, we can choose any $t_0 \geq T$ as the initial point to obtain a similar convergence rate.
\end{proof}
\paragraph{Proofs of Theorem \ref{thm:negative-angle}.}
First, we need a lemma below:
\begin{lemma}\label{lemma:gd-update}
\[ \|\mathbf{w}(n+1)\|^2 = \left(\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n+1)\right)^2 + \left(\frac{c_0\eta_n}{\pi}\right)^2\sin^2\theta(n). \]
\end{lemma}
\begin{proof}
\begin{equation*}
\begin{aligned}
\|\mathbf{w}(n+1)\|^2 &= \|\mathbf{w}(n)\|^2-2\eta_n \mathbf{w}(n)^{\top}\nabla L(\mathbf{w}_n)+\eta_n^2\|\nabla L(\mathbf{w}(n))\|^2 \\ &= \left(\|\mathbf{w}(n)\|- \eta_n\overline{\mathbf{w}}(n)^{\top}\nabla L(\mathbf{w}(n))\right)^2 + \eta_n^2\left\| \left(I-\overline{\mathbf{w}}(n) \; \overline{\mathbf{w}}(n)^{\top}\right) \nabla L(\mathbf{w}(n)) \right\|^2 \\
&\overset{(1)}{=} \left(\|\mathbf{w}(n)\|- \eta_n\overline{\mathbf{w}}(n)^{\top}\nabla L(\mathbf{w}(n))\right)^2 + \left(\frac{c_0\eta_n}{\pi}\right)^2 \left\|\left(I-\overline{\mathbf{w}}(n) \; \overline{\mathbf{w}}(n)^{\top}\right)\mathbf{v} \right\|^2 \\
&= \left(\|\mathbf{w}(n)\|- \eta_n\overline{\mathbf{w}}(n)^{\top}\nabla L(\mathbf{w}(n))\right)^2 + \left(\frac{c_0\eta_n}{\pi}\right)^2 \sin^2\theta(n) \\
&= \left( \overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n+1) \right)^2 + \left(\frac{c_0\eta_n}{\pi}\right)^2 \sin^2\theta(n).
\end{aligned}
\end{equation*}
The equality in $(1)$ follows from Lemma \ref{lemma:angle_grad}.
\end{proof}
Now we turn back to the proof of Theorem \ref{thm:negative-angle}.
\begin{proof}
While $\theta(n)\geq\pi/2$, Lemma \ref{lemma:norm-change} gives $-\mathbf{w}(n)^{\top}\nabla L(\mathbf{w}(n))\leq 0$, leading to
\[ \|\mathbf{w}(n+1)\|^2 = \|\mathbf{w}(n)\|^2-2\eta_n \mathbf{w}(n)^{\top}\nabla L(\mathbf{w}(n)) + \eta_n^2\|\nabla L(\mathbf{w}(n))\|^2 \leq \|\mathbf{w}(n)\|^2 + \eta_n^2 c_0^2. \]
From Lemma \ref{lemma:gd-update}, $\|\mathbf{w}(n+1)\|\geq\left|\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n+1)\right| \geq \|\mathbf{w}(n)\|-\eta_n\overline{\mathbf{w}}(n)^{\top}\nabla L(\mathbf{w}(n))$.
\begin{equation*}
\begin{aligned}
\cos\ &\theta(n+1)-\cos\theta(n) = \frac{1}{\|\mathbf{w}(n+1)\|} \left( \mathbf{v}^{\top}\left(\mathbf{w}(n)-\eta_n\nabla L(\mathbf{w}(n))\right)-\|\mathbf{w}(n+1)\| \mathbf{v}^{\top}\overline{\mathbf{w}}(n)\right) \\
&\geq \frac{1}{\|\mathbf{w}(n+1)\|} \left( \mathbf{v}^{\top}\left(\mathbf{w}(n)-\eta_n\nabla L(\mathbf{w}(n))\right)-\left( \|\mathbf{w}(n)\|-\eta_n \overline{\mathbf{w}}(n)^{\top}\nabla L(\mathbf{w}(n))\right) \mathbf{v}^{\top}\overline{\mathbf{w}}(n)\right) \\
&= \frac{1}{\|\mathbf{w}(n+1)\|}\left( -\eta_n\left( \mathbf{v}-(\overline{\mathbf{w}}(n)^{\top}\mathbf{v}) \overline{\mathbf{w}}(n)\right)^{\top}\nabla L(\mathbf{w}(n))\right) \\
&= \frac{1}{\|\mathbf{w}(n+1)\|}\frac{c_0 \eta_n \sin^2\theta(n)}{\pi} \\
&\geq \frac{\eta_n}{\pi\sqrt{A+\sum_{i=0}^{n}\eta_i^2}}\sin^2\theta(n) > 0.
\end{aligned}
\end{equation*}
Thus $\theta(n)$ decreases as $n$ increases, i.e., $\frac{\pi}{2} \leq \theta(n+1) \leq \theta(n)$ and $0 \geq \cos\theta(n+1) \geq \cos\theta(n) \geq \cos\theta(0) > -1$. Then we obtain
\begin{equation*}
\begin{aligned}
\cos\theta(n+1)-\cos\theta(n) & \geq\frac{\eta_n(1-\cos\theta(n)) (1+\cos\theta(n))}{\pi\sqrt{A+\sum_{i=0}^{n}\eta_n^2}} \geq \left(1-\cos\theta(n)\right)\frac{B\eta_n}{\sqrt{A+\sum_{i=0}^{n}\eta_i^2}}.
\end{aligned}
\end{equation*}
Rearranging the terms, we get
\[ \left(1-\frac{B\eta_n}{\sqrt{A+\sum_{i=0}^{n}\eta_i^2}}\right)\left(1-\cos\theta(n)\right) \geq 1-\cos\theta(n+1), \]
showing that
\[ \ln\left(1-\cos\theta(n)\right) -\ln\left(1-\cos\theta(n+1) \right)
\geq -\ln\left(1-\frac{B\eta_n}{\sqrt{A+\sum_{i=0}^{n}\eta_i^2}} \right) \geq \frac{B\eta_n}{\sqrt{A+\sum_{i=0}^{n}\eta_i^2}}. \]
Hence,
\[ \cos \theta(n) \geq 1-\left(1-\cos\theta(0)\right) e^{-BS_n^-}. \]
Since $S_n^- \to \infty $, after some finite number of steps $T$ we have $\cos\theta(T) \geq 0$, giving $\mathbf{v}^\top\mathbf{w}(T)\geq 0$.
\end{proof}
\paragraph{Proofs of Theorem \ref{thm:suff}.}
\begin{proof}
We use the following upper bound on $\|\mathbf{w}(n+1)\|^2$:
\[ \|\mathbf{w}(n+1)\|^2 = \|\mathbf{w}(n)\|^2-2\eta_n \mathbf{w}(n)^{\top}\nabla L(\mathbf{w}_n) + \eta_n^2\| \nabla L(\mathbf{w}_n) \|^2 \leq \|\mathbf{w}(n)\|^2+0.6\eta_n+c_0^2\eta_n^2. \]
Therefore, from Lemma \ref{lemma:gd-update},
\begin{equation*}
\begin{aligned}
\cos \; & \theta\left(n{+}1\right){-}\cos\theta\left(n\right)
= \frac{\mathbf{v}^{\top}\mathbf{w}(n+1)-\|\mathbf{w}(n+1)\|\mathbf{v}^{\top} \overline{\mathbf{w}}(n)}{\|\mathbf{w}(n+1)\|} \\
&= \frac{\left(\mathbf{v}-\mathbf{v}^{\top}\overline{\mathbf{w}}(n)\overline{\mathbf{w}}(n)\right)^{\top} \mathbf{w}(n+1)-\left(\|\mathbf{w}(n+1)\|-\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n+1)\right) \mathbf{v}^{\top}\overline{\mathbf{w}}(n)}{\|\mathbf{w}(n+1)\|}\\
&= \frac{1}{\|\mathbf{w}(n+1)\|}\left({-}\eta_n\left(\mathbf{v}{-}(\overline{\mathbf{w}}(n)^{\top}\mathbf{v}) \overline{\mathbf{w}}(n)\right)^{\top}\nabla L(\mathbf{w}(n)){-}\frac{\|\mathbf{w}(n{+}1)\|^2{-}\left(\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n{+}1)\right)^2}{\|\mathbf{w}(n{+}1)\|+\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n{+}1)} \cos\theta(n) \right) \\
&= \frac{1}{\|\mathbf{w}(n+1)\|}\left(\frac{c_0 \eta_n \sin^2\theta(n)}{\pi}- \frac{c_0^2\eta_n^2\sin^2\theta(n)\cos\theta(n)/\pi^2}{\|\mathbf{w}(n+1)\|+\overline{\mathbf{w}}_n^{\top} \mathbf{w}(n+1)}\right)\\
&\geq \frac{1}{\|\mathbf{w}(n+1)\|}\frac{\delta c_0 \eta_n \sin^2\theta(n)}{(1+\delta)\pi}\\
&\geq \frac{\delta \eta_n}{(1+\delta)\pi\sqrt{A+\sum_{i=0}^{n}\eta_i^2+C\eta_i}}\sin^2\theta(n). \\
\end{aligned}
\end{equation*}
Hence, similarly, we obtain
\[ \cos\theta(n) \geq 1-\left(1-\cos\theta(0)\right) e^{-BS_n^+}. \]
\end{proof}
\paragraph{Proofs of Theorem \ref{thm:angle-convergence}.}
\begin{proof}
Set $R_1:=\eta_{+}c_0+\eta_{+}c_0/\pi$, $R_2:=R_1+\eta_{+}c_0$, $\mathcal{T}_1=\{n: \|\mathbf{w}(n)\| < R_1\}, \mathcal{T}_2=\{n: \|\mathbf{w}(n)\| < R_1, \|\mathbf{w}(n+k)\| \geq R_1, \|\mathbf{w}(n+K)\| > R_2, 1\leq k \leq K, 2\leq K\}$.
Notice that $\|\mathbf{w}(n+1)\|-\|\mathbf{w}(n)\|\leq \eta_{+}c_0$, hence the definition of $\mathcal{T}_2$ is meaningful.
\begin{enumerate}
\item If $|\mathcal{T}_1| < \infty$, then there exists $n_0$, such that $\|\mathbf{w}(n)\|\geq R_1$ for all $n\geq n_0$, showing logarithmic convergence from $n_0$.
\item Otherwise, $|\mathcal{T}_1|=\infty$; since $\|\mathbf{w}(n)\|$ is unbounded, we also have $|\mathcal{T}_2|=\infty$. We list $\mathcal{T}_2=\{n_{1}, n_{2}, n_{3}, \dots\}$, with corresponding sequence length $K_{i}$ for each $n_i$.
Every subsequence $\{n_i{+}1,\dots,n_i{+}K_i\}$ gives linear convergence for $1-\cos\theta(n)$ with rate only related to $c_0, \eta_{+}, R_2$.
Additionally, notice that $\mathbf{v}^{\top}\mathbf{w}(n)=\|\mathbf{w}(n)\|\cos\theta(n)$ is increasing, giving that $\|\mathbf{w}(n_{i+1}+1)\|\cos\theta(n_{i+1}+1)\geq \|\mathbf{w}(n_i+K_i)\|\cos\theta(n_i+K_i)$.
From Theorem \ref{thm:negative-angle}, we only need to consider $\cos\theta(n) \geq 0$.
Since $\|\mathbf{w}(n_{i+1})\|\leq R_1$, we have $\|\mathbf{w}(n_{i+1}+1)\|\leq R_2< \|\mathbf{w}(n_i+K_i)\|$, showing that $\cos\theta(n_{i+1}+1)\geq \cos\theta(n_i+K_i)$. Therefore, combining all sequences $\{n_i{+}1,\dots,n_i{+}K_i\}_{i=1}^{\infty}$ guarantees the directional convergence of gradient descent.
\end{enumerate}
\end{proof}
\paragraph{Proofs of Corollary \ref{corr:angle-convergence}.}
\begin{proof}
Based on Theorem \ref{thm:angle-convergence}, there exists $n_0$ such that $\|\mathbf{w}(n_0)\|\cos\theta(n_0)\geq R_1$, so we have $\|\mathbf{w}(n)\|\geq \|\mathbf{w}(n)\|\cos\theta(n)\geq \|\mathbf{w}(n_0)\|\cos\theta(n_0)\geq R_1, \forall \; n>n_0$.
Hence, from Theorem \ref{thm:suff} and its Remark, we obtain logarithmic convergence of $1-\cos\theta(n)$ from $n_0$.
\end{proof}
\subsection{Initialization Methods under Gradient Descent} \label{app:init}
\begin{enumerate}
\item Notice that $c_0$ is a dimension-free constant related only to the marginal distribution of $\mathcal{D}$. When $\eta_+ < \kappa \frac{\pi}{c_0(1+\pi)}$ for some small constant $\kappa>0$ and $\mathbf{w}(0)\sim\mathcal{N}(0, \tau^2I_d)$, we have
\[ P(\mathbf{w}^{\top}(0)\mathbf{v}\geq R_1) \geq 1-\Phi(\kappa/\tau) \geq \frac{1}{2}-\frac{\kappa}{\tau\sqrt{2\pi}}, \]
where $\Phi(\cdot)$ is the c.d.f. of standard Gaussian distribution. Therefore, if $\eta_+$ is small or $\|\mathbf{w}(0)\|$ is large, then with constant probability, we have logarithmic convergence rate from the initial point.
\item Set $c_1= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}}|x_1|$ and $c_2= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}}x_1^2$. If $\|\mathbf{w}(0)\| \leq r:=c_1/c_2$, $\eta_0\geq 4\eta_+(c_0+c_0/\pi)/c_1+4/c_2$ and $\eta_+ \geq \eta_n$, then $\mathbf{v}^\top\mathbf{w}(1)\geq R_1$, using $\lambda_{\max}(\Sigma)=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \|\mathbf{x}\|^2/d=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}x_1^2$ and
\begin{equation*}
\begin{aligned}
\mathbf{v}^\top\mathbf{w}(1) &=\mathbf{v}^\top\left(-\eta_0 \nabla L(\mathbf{0})+\eta_0\left(\nabla L(\mathbf{0})-\nabla L(\mathbf{w}(0))\right)+\mathbf{w}(0)\right) \\
& \geq \frac{c_1\eta_0}{2}-\frac{\eta_0c_2r}{4}-r \\
& \geq \frac{c_1\eta_0}{4}-\frac{c_1}{c_2} \\
&\geq \eta_+(c_0+c_0/\pi)=R_1.
\end{aligned}
\end{equation*}
We can see that if we initialize with a small $\mathbf{w}(0)$, the condition for convergence can be satisfied after one update step. If the data are normalized, i.e., $\mathbf{x} \sim \mathcal{N}(0, I_d)$, then $c_0=\sqrt{\pi/2}, c_1=\sqrt{2/\pi}, c_2=1$; thus we only need $\eta_0 \geq 7\eta_++4$ and $\|\mathbf{w}(0)\|\leq \sqrt{2/\pi}$ to obtain a logarithmic convergence rate from $\mathbf{w}(1)$. If we initialize $\mathbf{w}(0) \sim \mathcal{N}(0, \frac{1}{d\pi}I_d)$, then with high probability $\mathbf{w}(0)$ satisfies the requirement.
\end{enumerate}
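A minimal gradient descent sketch illustrating the directional convergence analysed above is given below; the dimension, sample size, step size and initialization scale are illustrative choices consistent with the Gaussian setting of this subsection.
\begin{verbatim}
# Sketch: GD on the logistic loss with y = sgn(v^T x), x ~ N(0, I_d);
# cos(theta(n)) -> 1 while ||w(n)|| keeps growing.
import numpy as np
rng = np.random.default_rng(2)
d, n = 10, 200_000
v = np.zeros(d); v[0] = 1.0
X = rng.standard_normal((n, d))
y = np.sign(X @ v)

def grad(w):
    m = y * (X @ w)
    s = 0.5 * (1.0 - np.tanh(m / 2.0))   # numerically stable 1/(1+e^m)
    return -np.mean((y * s)[:, None] * X, axis=0)

w = rng.standard_normal(d) * np.sqrt(1.0 / (d * np.pi))  # small init
for t in range(2001):
    w -= 0.5 * grad(w)
    if t % 500 == 0:
        print(t, np.linalg.norm(w), (w @ v) / np.linalg.norm(w))
\end{verbatim}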
\section{Missing Proofs in Section \ref{sec:deep-linear}}\label{app:deep-linear}
\paragraph{Proofs of Lemma \ref{lemma:deep_linear_grad}.}
\begin{proof}
From the induced flow on $\mathbf{w}_e$, we obtain
\begin{equation*}
\begin{aligned}
- \left(\mathbf{v}-\left(\overline{\mathbf{w}}_e^{\top} \mathbf{v}\right) \overline{\mathbf{w}}_e\right)^{\top} & \|\mathbf{w}_e\|^{2-\frac{2}{N}} \left(\frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}}+(N-1)\overline{\mathbf{w}}_e \overline{\mathbf{w}}_e^\top \frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}} \right) \\
&= -\|\mathbf{w}_e\|^{2-\frac{2}{N}} \left(\mathbf{v}-\left(\overline{\mathbf{w}}_e^{\top} \mathbf{v}\right) \overline{\mathbf{w}}_e\right)^{\top} \frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}} \\
&= \|\mathbf{w}_e\|^{2-\frac{2}{N}} \frac{c_0\sin^2\theta(\mathbf{w}_e, \mathbf{v})}{\pi}.
\end{aligned}
\end{equation*}
Therefore we have
\[ \frac{\partial \cos\theta(\mathbf{w}_e(t), \mathbf{v})}{\partial t} = \frac{1}{\|\mathbf{w}_e(t)\|} \left(\mathbf{v}-\left(\overline{\mathbf{w}}_e(t)^{\top} \mathbf{v}\right) \overline{\mathbf{w}}_e(t)\right)^{\top} \frac{\partial \mathbf{w}_e(t)}{\partial t} = \frac{c_0\sin^2\theta(\mathbf{w}_e(t), \mathbf{v})}{\pi}\|\mathbf{w}_e(t)\|^{1-\frac{2}{N}}.\]
\end{proof}
\paragraph{Proofs of Proposition \ref{prop:norm-increase}.}
First, we derive the induced norm variation flow:
\begin{lemma} \label{lemma:deep_linear_norm}
If $\mathbf{w}_e(t) \neq \mathbf{0}$, then
\[ \frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}}{\partial t} = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{2y\mathbf{w}_e(t)^\top\mathbf{x}}{1+e^{y\mathbf{w}_e(t)^\top\mathbf{x}}}. \]
\end{lemma}
We first prove Lemma \ref{lemma:deep_linear_norm}.
\begin{proof}
\begin{equation*}
\begin{aligned}
\frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}}{\partial t} &=\frac{2}{N}\|\mathbf{w}_e(t)\|^{\frac{2}{N}-1}\overline{\mathbf{w}}_e(t)^\top\frac{\partial \mathbf{w}_e(t)}{\partial t} \\
&= -\frac{2}{N} \mathbf{w}_e(t)^\top \left(\frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}} +(N-1)\overline{\mathbf{w}}_e(t)\overline{\mathbf{w}}_e(t)^\top \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}} \right)\\
&= -2\mathbf{w}_e(t)^\top \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{2y\mathbf{w}_e(t)^\top\mathbf{x}}{1+e^{y\mathbf{w}_e(t)^\top\mathbf{x}}}.
\end{aligned}
\end{equation*}
\end{proof}
We now turn to the proof of Proposition \ref{prop:norm-increase}.
\begin{proof}
From Lemma \ref{lemma:deep_linear_norm} and the condition $\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}/\partial t=0$, we obtain
\[ \mathbf{w}_e(t)^\top \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}=0. \]
Therefore,
\[ \frac{\partial \mathbf{w}_e(t)}{\partial t} = -\|\mathbf{w}_e(t)\|^{2-\frac{2}{N}} \left(\frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}} +(N-1)\overline{\mathbf{w}}_e(t)\overline{\mathbf{w}}_e(t)^\top \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}} \right) = -\|\mathbf{w}_e(t)\|^{2-\frac{2}{N}} \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}. \]
\[ \frac{\partial \cos\theta(\mathbf{w}_e(t), \mathbf{v})}{\partial t} = \frac{1}{\|\mathbf{w}_e(t)\|} \left(\mathbf{v}-\left(\overline{\mathbf{w}}_e(t)^{\top} \mathbf{v}\right) \overline{\mathbf{w}}_e(t)\right)^{\top} \frac{\partial \mathbf{w}_e(t)}{\partial t}= -\|\mathbf{w}_e(t)\|^{1-\frac{2}{N}}\mathbf{v}^\top\frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}\geq 0. \]
\end{proof}
\paragraph{Proof of Lemma \ref{lemma:deep-linear-norm}.}
\begin{proof}
Lower bound comes from
\[ \frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}}{\partial t} =\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{2y\mathbf{w}_e(t)^\top\mathbf{x}}{1+e^{y\mathbf{w}_e(t)^\top\mathbf{x}}}\geq -2c_0\|\mathbf{w}_e\|. \]
Rearranging, we have
\[ \frac{2}{2-N} \frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}-1}}{\partial t} \geq -2c_0. \]
Integrating both sides over $[0, t]$, we get
\[
\|\mathbf{w}_e(t)\|^{1-\frac{2}{N}} \geq \left(\|\mathbf{w}_e(0)\|^{\frac{2}{N}-1} + (N-2)c_0t\right)^{-1}.
\]
Upper bound comes from
\[ \frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}}{\partial t} =\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{2y\mathbf{w}_e(t)^\top\mathbf{x}}{1+e^{y\mathbf{w}_e(t)^\top\mathbf{x}}} \leq 0.6, \]
where we use $\max_{z\in\mathbb{R}} 2z/(1+e^{z}) \approx 0.557 \leq 0.6$. Integrating over $[0, t]$ gives
\[ \|\mathbf{w}_e(t)\| \leq \left(\|\mathbf{w}_e(0)\|^{\frac{2}{N}} + 0.6t\right)^{\frac{N}{2}}. \]
\end{proof}
\paragraph{Proof of Theorem \ref{thm:deep-linear-convergence1}.}
Similar to the linear predictor case, we have the following monotonicity of the induced norm:
\begin{prop}[Restatement of Proposition \ref{prop:norm-increase}]
If $\mathbf{w}_e(t)\neq \mathbf{0}$ and $\partial \|\mathbf{w}_e(t)\|^2/\partial t=0$, then $\partial \cos\theta(\mathbf{w}_e(t), \mathbf{v})/\partial t \geq 0$.
\end{prop}
\begin{proof}
\[ 0 = \mathbf{w}_e(t)^\top\frac{\partial \mathbf{w}_e(t)}{\partial t} = -N\|\mathbf{w}_e(t)\|^{2-\frac{2}{N}} \, \mathbf{w}_e(t)^\top \frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}. \]
Then
\[ \frac{\partial \mathbf{w}_e(t)}{\partial t} = -\|\mathbf{w}_e(t)\|^{2-\frac{2}{N}}\frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}} . \]
Therefore
\[ \frac{\partial \cos\theta(\mathbf{w}_e(t), \mathbf{v})}{\partial t} = \frac{1}{\|\mathbf{w}_e(t)\|}\left(\mathbf{v}-\left(\mathbf{v}^\top\overline{\mathbf{w}}_e(t)\right) \overline{\mathbf{w}}_e(t)\right)^\top \frac{\partial \mathbf{w}_e(t)}{\partial t} = \frac{\mathbf{v}^\top}{\|\mathbf{w}_e(t)\|} \frac{\partial \mathbf{w}_e(t)}{\partial t} = -\|\mathbf{w}_e(t)\|^{1-\frac{2}{N}}\,\mathbf{v}^\top\frac{d L^1(\mathbf{w}_e(t))}{d\mathbf{w}}\geq 0. \]
\end{proof}
From Proposition \ref{prop:norm-increase}, once $\mathbf{w}_e(t)$ drops into the norm-increasing area $S:=\{\mathbf{w}: \partial \|\mathbf{w}_e(t)\|^2/\partial t \geq 0 \}$, it always stays in this area: on the boundary curve $\partial S$ the angle becomes smaller, and by Lemma \ref{lemma:norm-change} the weight $\mathbf{w}_e(t)$ still lies in $S$. \\
Now we give the proof of Theorem \ref{thm:deep-linear-convergence1}.
\begin{proof}
Using Lemma \ref{lemma:deep_linear_grad} and the lower bound of $\|\mathbf{w}_e\|$ in Lemma \ref{lemma:deep-linear-norm}:
\[ \frac{1}{2}\ln\frac{1+\cos\theta(t)}{1-\cos\theta(t)} - \frac{1}{2}\ln\frac{1+\cos\theta(0)}{1-\cos\theta(0)}\geq \frac{c_0}{\pi}\ln\frac{At+B}{B}. \]
Finally, we get
\[ \cos\theta(t)\geq 1-\frac{2}{C(At/B+1)^\alpha+1}. \]
Notice that the above convergence holds for the whole training period, hence we always have the directional convergence.
Now we show the existence of $T$ when $\partial \|\mathbf{w}_e(0)\|^2/\partial t < 0 $. If $\partial \|\mathbf{w}_e(t)\|^2/\partial t < 0 $ held for all $t$, then $\|\mathbf{w}_e(t)\|$ would be decreasing and hence convergent.
\begin{itemize}
\item If $\|\mathbf{w}_e(t)\| \to 0$: the trajectory can never converge to the origin, which is a saddle point. Indeed, by the directional convergence shown above, $\cos\theta(t)$ always increases and becomes positive at some time $t_0>0$, with $\cos\theta(t) \geq \delta>0$ for all $t\geq t_0$. Therefore, if $\mathbf{w}_e(t)$ converged to the origin, then by Property 4 in Lemma \ref{lemma:norm-change}, for small enough $\|\mathbf{w}_e(t_1)\|$ we would have $N(\mathbf{w}_e(t_1))\geq 0$, so $\partial \|\mathbf{w}_e(t)\|^2/\partial t\geq 0$ would hold for all $t\geq t_1$, contradicting $\|\mathbf{w}_e(t)\| \to 0$.
\item If $\|\mathbf{w}_e(t)\| \to \epsilon > 0$, then $\|\mathbf{w}_e(0)\| \geq \|\mathbf{w}_e(t)\| \geq \epsilon $. Since $\cos\theta(t)\to 1$, Property 3 in Lemma \ref{lemma:norm-change} yields $\partial \|\mathbf{w}_e(t)\|^2/\partial t >0 $, a contradiction.
\end{itemize}
Therefore, there exists some time $T$ such that $\partial \|\mathbf{w}_e(T)\|^2/\partial t = 0 $.
For the second part, using Proposition \ref{prop:norm-increase}, we have
\[ \frac{\partial \|\mathbf{w}_e(t)\|^{\frac{2}{N}}}{\partial t} \geq 0, \ \forall t \geq T. \]
Hence $\|\mathbf{w}_e(t)\|\geq \|\mathbf{w}_e(T)\| $. Therefore,
\[ \frac{1}{2}\ln\frac{1+\cos\theta(t)}{1-\cos\theta(t)} - \frac{1}{2} \ln\frac{1+\cos\theta(T)}{1-\cos\theta(T)}\geq \frac{c_0}{\pi}\|\mathbf{w}_e(T)\|^{2-\frac{2}{N}}(t-T). \]
\[ \cos\theta(t)\geq 1-\frac{2}{1+e^{A(t-T)+B}}, t\geq T. \]
For the third part, using the upper bound of $\|\mathbf{w}_e\|$ in Lemma \ref{lemma:deep-linear-norm}:
\[ \frac{1}{2}\ln\frac{1+\cos\theta(t)}{1-\cos\theta(t)} - \frac{1}{2}\ln\frac{1+\cos\theta(0)}{1-\cos\theta(0)}\geq \frac{2c_0}{0.6N\pi}\left(\left(0.6t+D\right)^{\frac{N}{2}}-D^{\frac{N}{2}}\right). \]
Finally, we obtain
\[ \cos\theta(t)\geq 1-\frac{2}{e^{F[\left(0.6t+D\right)^{N/2}-D^{N/2}]+E}+1}. \]
\end{proof}
\section{Missing Proofs in Section \ref{sec:shallow-non-linear}}\label{app:shallow-nonlinear}
\paragraph{Proof of Proposition \ref{prop:grad-update}.}
\begin{proof}
Let $\epsilon = 1-P\left(\mathbf{x}=\mathbf{0}\right)>0$; we can choose $M, m > 0$ such that $P(0<\|\mathbf{x}\|\leq M, |\mathbf{v}^\top\mathbf{x}| \geq m)\geq \epsilon/2$. Suppose $\|\mathbf{w}_1\| \leq R_1$ and $\|\mathbf{w}_2\| \leq R_2$, and set $M_1 = \sup_{|t|\leq R_1M}\sigma(t)$, $M_2 = \sup_{|t|\leq R_2M}\sigma(t)$; then
$ |y(\mathbf{x})\phi(\mathbf{x})| \leq |\sigma(\mathbf{w}_1^\top\mathbf{x})| + |\sigma(\mathbf{w}_2^\top\mathbf{x})| \leq M_1+M_2 $.
\[ -\mathbf{v}^\top\nabla_{1}L(\mathbf{w}) = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|\mathbf{v}^\top\mathbf{x}|\sigma'(\mathbf{w}^{\top}_1\mathbf{x})}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \geq \frac{0.5m\gamma(R_1M)}{1+e^{M_1+M_2}}P(0<\|\mathbf{x}\|\leq M, |\mathbf{v}^\top\mathbf{x}| \geq m) \geq \frac{0.25m\epsilon\gamma(R_1M)}{1+e^{M_1+M_2}}>0. \]
In addition, the same argument holds for $\mathbf{v}^\top\nabla_{2}L(\mathbf{w})$.
From the previous argument, we already know that $\mathbf{v}^\top\mathbf{w}_1(t)$ and $\mathbf{v}^\top\mathbf{w}_1(n)$ are increasing, while $\mathbf{v}^\top\mathbf{w}_2(t)$ and $\mathbf{v}^\top\mathbf{w}_2(n)$ are decreasing. If $\|\mathbf{w}_1(t)\|$ and $\|\mathbf{w}_2(t)\|$ (or $\|\mathbf{w}_1(n)\|$ and $\|\mathbf{w}_2(n)\|$) were bounded, then $\mathbf{v}^\top\mathbf{w}_1(t)$ (or $\mathbf{v}^\top\mathbf{w}_1(n)$) would converge, and a contradiction follows as in the linear predictor case (Proposition \ref{prop:grad-norm-prop}), since
\[ \mathbf{v}^\top \nabla_{1} L(\mathbf{w}(t)) \to 0, \quad \eta_n\mathbf{v}^\top \nabla_{1} L(\mathbf{w}(n)) \to 0. \]
\end{proof}
\begin{corollary}\label{cor:angle-inv}
When $\cos\theta_1(t_0)\geq0$ for some $t_0\geq 0$, then $\cos\theta_1(t)>0$ for all $t>t_0$. Similarly, when $\cos\theta_2(t_0)\leq0$ for some $t_0\geq 0$, then $\cos\theta_2(t)<0$ for all $t>t_0$.
\end{corollary}
\begin{prop}\label{prop:para}
For $i=1,2$, if $\mathbf{w}_i\neq\mathbf{0}$ and $\mathbf{w}_i \parallel \mathbf{v}$, then $\left(\mathbf{v}-(\overline{\mathbf{w}}_i^\top\mathbf{v})\overline{\mathbf{w}}_i\right)^\top\nabla_{i} L(\mathbf{w}) = 0$.
\end{prop}
\begin{proof}
If $\mathbf{w}_i\neq\mathbf{0}$ and $\mathbf{w}_i \parallel \mathbf{v}$, then $\overline{\mathbf{w}}_i=\pm\mathbf{v}$, so $\mathbf{v}-(\overline{\mathbf{w}}_i^\top\mathbf{v})\overline{\mathbf{w}}_i=\mathbf{0}$.
\end{proof}
\paragraph{Proof of Lemma \ref{lemma:ac-angle-grad}.}
\begin{proof}
Since $\mathbf{x}\sim\mathcal{D}$ has a spherically symmetric distribution and $\mathbf{w}_1$, $\mathbf{w}_2$, $\mathbf{v}$ lie in the same plane, we only need to consider the dynamics in $\mathbb{R}^2$. In addition, w.l.o.g. we can assume $\mathbf{w}_1=(r_1, 0)^\top$, $\mathbf{v}=(v_1,v_2)^\top=(\cos\alpha, \sin\alpha)^\top$, $\mathbf{w}_2=(a, b)^\top$; then
\[\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)^\top\left(\mathbf{w}_2-(\overline{\mathbf{w}}_1^\top\mathbf{w}_2)\overline{\mathbf{w}}_1\right)=v_2b.\]
\begin{equation*}
\begin{aligned}
-\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)^\top\nabla_{1} L(\mathbf{w}) &= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{\mathbf{x}})\sigma'(\mathbf{w}^{\top}_1\mathbf{x})\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)\mathbf{x}}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \\
&= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} \frac{y(\mathbf{x})\sigma'(r_1x_1)v_2x_2}{1+e^{y(\mathbf{x})\left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)}}
\end{aligned}
\end{equation*}
Set $g(x_1, x_2) := \dfrac{y(\mathbf{x})\sigma'(r_1x_1)v_2x_2}{1+e^{y(\mathbf{x})\left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)}} $. As in the proof of the linear case, we need to consider
\[ \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} g(x_1, x_2) = \int_{|v_1x_1| \geq |v_2x_2|}g(x_1, x_2)d\mathbf{x} + \int_{|v_1x_1| < |v_2x_2|} g(x_1, x_2) d\mathbf{x} \triangleq I_1 + I_2. \]
\begin{enumerate}
\item When $|v_1x_1| > |v_2x_2|$,
\begin{equation*}
\begin{aligned}
g(x_1, x_2) + g(x_1, -x_2) &= \frac{\text{sgn}(v_1x_1)\sigma'(r_1x_1)v_2x_2} {1+e^{\text{sgn}(v_1x_1) \left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)}}-\frac{\text{sgn}(v_1x_1) \sigma'(r_1x_1)v_2x_2} {1+e^{\text{sgn}(v_1x_1)\left(\sigma(r_1x_1)-\sigma(ax_1-bx_2) \right)}} \\
&= \frac{\sigma'(r_1x_1)\text{sgn}(v_1x_1)v_2x_2 e^{\text{sgn}(v_1x_1)\sigma(r_1x_1)} \left(e^{-\text{sgn}(v_1x_1)\sigma(ax_1-bx_2)}-e^{-\text{sgn}(v_1x_1)\sigma(ax_1+bx_2)}\right)
} {\left(1+e^{\text{sgn}(v_1x_1)\left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)}\right) \left(1+e^{\text{sgn}(v_1x_1)\left(\sigma(r_1x_1)-\sigma(ax_1-bx_2)\right)}\right)}.
\end{aligned}
\end{equation*}
\[ \text{sgn}(g(x_1, x_2) + g(x_1, -x_2)) = \text{sgn}(v_1x_1v_2x_2 \cdot v_1x_1bx_2) = \text{sgn}(v_2b). \]
Hence when $ v_2b \geq 0$, $g(x_1, x_2) + g(x_1, -x_2) \geq 0$.
\item When $|v_1x_1| \leq |v_2x_2|$,
\begin{equation*}
\begin{aligned}
g(x_1, x_2) + g(x_1, -x_2) &= \frac{|v_2x_2|\sigma'(r_1x_1)} {1+e^{\text{sgn}(v_2x_2)\left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)}}+ \frac{|v_2x_2|\sigma'(r_1x_1)} {1+e^{-\text{sgn}(v_2x_2)\left(\sigma(r_1x_1)-\sigma(ax_1-bx_2)\right)}}.
\end{aligned}
\end{equation*}
When $ v_2b \geq 0$,
\[y\left(\sigma(r_1x_1)-\sigma(ax_1+bx_2)\right)-y\left(\sigma(r_1x_1)-\sigma(ax_1-bx_2)\right)=y\left(\sigma(ax_1-bx_2)-\sigma(ax_1+bx_2)\right)\leq 0. \]
Combining this with the fact that $x+y \leq 0$ implies
\[ \frac{1}{1+e^x}+\frac{1}{1+e^y}\geq 1, \]
we obtain $g(x_1, x_2) + g(x_1, -x_2) \geq |v_2x_2|\sigma'(r_1x_1)$.
\end{enumerate}
From the above discussion,
\begin{equation*}
\begin{aligned}
\mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2} g(x_1, x_2) &\geq \frac{1}{2}\int_{|v_1x_1| \leq |v_2x_2|} |v_2x_2|\sigma'(r_1x_1) d\mathbf{x} \\
&= \frac{|\sin\alpha|}{4\pi}\int_{0}^{\infty} \int_{|\tan\theta\tan\alpha|\geq 1} |r\sin\theta|\sigma'(r_1r\cos\theta) d\theta dF(r) \\ &= \frac{|\sin\alpha|}{2\pi}\int_{0}^{\infty} \int_{\pi/2 + \tilde{\alpha} \geq \theta \geq \pi/2-\tilde{\alpha}} r\sin\theta\sigma'(r_1r\cos\theta) d\theta dF(r) \\
&= \frac{\sin^2\alpha}{2\pi} \int_{0}^{\infty} \frac{1}{r_1|\sin\alpha|}\left(\sigma(r_1r|\sin\alpha|)-\sigma(-r_1r|\sin\alpha|)\right)dF(r) \\
&= \frac{\sin^2\alpha}{2\pi} \nu(\|\mathbf{w}_1\sin\alpha\|), \\
\end{aligned}
\end{equation*}
Therefore,
\[ -\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)^\top\nabla_{1} L(\mathbf{w}) \geq \frac{\nu(\|\mathbf{w}_1\sin\theta_1\|)}{2\pi} \sin^2 \theta_1. \]
When applying the argument to $\mathbf{w}_2$, notice that
$L(\mathbf{w}_2, \mathbf{w}_1, -\mathbf{v}) = L(\mathbf{w}_1, \mathbf{w}_2, \mathbf{v}) =\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\ln\left(1+e^{-\text{sgn}(\mathbf{v}^\top\mathbf{x})\left( \sigma(\mathbf{w}_1^\top\mathbf{x})-\sigma(\mathbf{w}_2^\top\mathbf{x}) \right)}\right)$, so we can view $\mathbf{w}_2$ as $\mathbf{w}_1$ with target direction $-\mathbf{v}$. Using the fact that $\sin\theta_2=\sin(\pi-\theta_2)$, we obtain the same result for $\mathbf{w}_2$.
\end{proof}
\paragraph{Proof of Proposition \ref{prop:unbounded}.}
\begin{proof}
We only need to show that the case in which exactly one weight is unbounded is impossible. Suppose $\|\mathbf{w}_2(t)\|\leq R_2$ but $\|\mathbf{w}_1(t)\|$ is unbounded; we consider $\mathbf{w}_1(t) \neq \mathbf{0}$ in the following analysis.
Let $\epsilon = 1-P\left(\mathbf{v}^\top\mathbf{x}=0\right)>0$; then $P(\mathbf{v}^\top\mathbf{x} \cdot \mathbf{w}_1^\top\mathbf{x} < 0, \mathbf{w}_2^\top\mathbf{x} > 0) = 0.5 P( \mathbf{v}^\top\mathbf{x} \cdot \mathbf{w}_1^\top\mathbf{x} < 0) =\theta_1\epsilon/(2\pi)$. Therefore, we can choose $M, m(\theta_1) > 0$ with $\lim_{\theta\to 0}m(\theta) = 0$ such that $P(\|\mathbf{x}\|\leq M, |\mathbf{v}^\top\mathbf{x}| \geq m, \mathbf{v}^\top\mathbf{x} \cdot \mathbf{w}_1^\top\mathbf{x} < 0, \mathbf{w}_2^\top\mathbf{x} > 0) \geq \theta_1\epsilon/(4\pi)$.
Meanwhile, when $\mathbf{v}^\top\mathbf{x} \cdot \mathbf{w}_1^\top\mathbf{x} < 0$, we have $y(\mathbf{x}) \cdot \sigma(\mathbf{w}_1^\top \mathbf{x}) < \sigma(0)$. Denote $M_2 = \sup_{|t|\leq R_2M}\sigma(t)$ and $\mathcal{T} = \{\|\mathbf{x}\|\leq M, |\mathbf{v}^\top\mathbf{x}| \geq m, \mathbf{v}^\top\mathbf{x} \cdot \mathbf{w}_1^\top\mathbf{x} < 0, \mathbf{w}_2^\top\mathbf{x} > 0\}$. We finally obtain
\[ \mathbf{v}^\top\nabla_{2}L(\mathbf{w}) = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|\mathbf{v}^\top\mathbf{x}|\sigma'(\mathbf{w}^{\top}_2\mathbf{x})}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \geq \mathbb{E}_{\mathbf{x}\in\mathcal{T}} \frac{|\mathbf{v}^\top\mathbf{x}| \sigma'(\mathbf{w}^{\top}_2\mathbf{x})}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \geq
\frac{m(\theta_1)\gamma(R_2M)\theta_1\epsilon}{4\pi\left(1+e^{\sigma(0)+M_2}\right)} > 0. \]
Therefore $\mathbf{v}^\top\mathbf{w}_2(t)$ cannot converge unless $\theta_1(t) \to 0$. However, when $\mathbf{w}_1(t) \to r_1(t)\mathbf{v}$ with $r_1(t)>0$,
\[ \mathbf{v}^\top\nabla_{2}L(\mathbf{w}) \to\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|\mathbf{v}^\top\mathbf{x}|\sigma'(\mathbf{w}^{\top}_2\mathbf{x})}{1+e^{y(\mathbf{x})\left( \sigma(r_1(t)\mathbf{v}^\top\mathbf{x})-\sigma(\mathbf{w}_2^\top\mathbf{x}) \right) }} \geq \mathbb{E}_{\mathbf{v}^\top\mathbf{x}<0<\mathbf{w}_2^\top\mathbf{x}} \frac{|\mathbf{v}^\top\mathbf{x}| \sigma'(\mathbf{w}^{\top}_2\mathbf{x})}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} = \mathbb{E}_{\mathbf{v}^\top\mathbf{x}<0<\mathbf{w}_2^\top\mathbf{x}} \frac{|\mathbf{v}^\top\mathbf{x}|}{1+e^{\mathbf{w}^{\top}_2\mathbf{x}}}. \]
Then $\mathbf{w}_2(t)\to r_2(t)\mathbf{v}, r_2(t)>0$. Unfortunately,
\[ \mathbf{v}^\top\nabla_{2}L(\mathbf{w}) \to \mathbb{E}_{\mathbf{v}^\top\mathbf{x}>0} \frac{|\mathbf{v}^\top\mathbf{x}| }{1+e^{(r_1(t)-r_2(t))|\mathbf{v}^\top\mathbf{x}|}}\leftarrow -\mathbf{v}^\top\nabla_{1}L(\mathbf{w}). \]
Then $\partial r_1(t) / \partial t + \partial r_2(t) / \partial t \to 0$, so $r_2(t)$ would eventually decrease at the same rate as $r_1(t)$ increases, which is impossible since $r_2(t)>0$ stays bounded while $r_1(t) \to \infty$. A similar argument holds for gradient descent.
\end{proof}
\paragraph{Proof of Lemma \ref{lemma:relu-norm}.}
\begin{proof}
\begin{equation*}
\begin{aligned}
\frac{\partial \left(\|\mathbf{w}_1(t)\|^2+\|\mathbf{w}_2(t)\|^2\right)}{\partial t} &= 2\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{\mathbf{x}}) \left(\mathbf{w}_1^{\top}\mathbf{x} \sigma'(\mathbf{w}^{\top}_1 \mathbf{x})-\mathbf{w}_2^{\top}\mathbf{x}\sigma'(\mathbf{w}^{\top}_2 \mathbf{x})\right)}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \\
&= 2\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{\mathbf{x}}) \left(\sigma(\mathbf{w}^{\top}_1 \mathbf{x})-\sigma(\mathbf{w}^{\top}_2 \mathbf{x})\right)}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \\
&= 2\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{\mathbf{x}}) \phi(\mathbf{x})}{1+e^{y(\mathbf{x})\phi(\mathbf{x})}} \leq 0.6,
\end{aligned}
\end{equation*}
where the second equality uses $z\sigma'(z)=\sigma(z)$ for the ReLU activation, and the last inequality uses $\max_{z\in\mathbb{R}} 2z/(1+e^{z}) \approx 0.557 \leq 0.6$.
\end{proof}
\paragraph{Proof of Theorem \ref{thm:diff-init}.}
\begin{proof}
For ReLU activation, $\nu(z) = \mathbb{E}_{\mathbf{x} \sim\mathcal{D}_2} \left(\sigma(z\|\mathbf{x}\|)-\sigma(-z\|\mathbf{x}\|) \right) / z = \mathbb{E}_{\mathbf{x} \sim\mathcal{D}_2} \|\mathbf{x}\|=c_0$.
Based on Proposition \ref{prop:para}, $\mathbf{w}_1$ and $\mathbf{w}_2$ always remain in different half-planes separated by $\mathbf{v}$.
Set $\mathcal{T} := \{t: \cos\theta_1(t)+\cos\theta_2(t)=0 \ \text{and} \ \partial \left(\cos\theta_1(t) + \cos\theta_2(t)\right) / \partial t \neq 0\}$. Since $\cos\theta_1(t)+\cos\theta_2(t)$ is
continuous in $t$, for any $t\in\mathcal{T}$ there exists $\epsilon_t>0$ such that $\cos\theta_1(t_s)+\cos\theta_2(t_s)\neq 0$ for all $t_s\in (t-\epsilon_t, t+\epsilon_t)\setminus\{t\}$; therefore, $\mathcal{T}$ is a countable set, and we write $\mathcal{T}=\{t_1,t_2,\dots\}$ with $t_0=0$.
We first consider the case $\mathcal{T} \neq \emptyset$. For each $t_i \in \mathcal{T}$, suppose $\cos\theta_1(t_i)+\cos\theta_2(t_i)$ reverses from positive to negative at time $t_i$. Then for $t_{i-1}\leq t \leq t_i$, $\mathbf{w}_2(t)$ and $\mathbf{v}$ lie in the same half-plane separated by $\mathbf{w}_1(t)$, so
\[ \frac{\partial \cos\theta_1(t)}{\partial t} \geq \frac{c_0}{2\pi\|\mathbf{w}_1(t)\|}\sin^2\theta_1(t). \]
Based on Lemma \ref{lemma:relu-norm},
\[ \|\mathbf{w}_i(t)\|^2 \leq \|\mathbf{w}_1(t)\|^2+\|\mathbf{w}_2(t)\|^2\leq 0.6 t+\|\mathbf{w}_1(0)\|^2+\|\mathbf{w}_2(0)\|^2, \ i=1, 2. \]
Set $r(t)^2 := \|\mathbf{w}_1(t)\|^2+\|\mathbf{w}_2(t)\|^2$; then
\[ \frac{\partial \cos\theta_1(t)}{\partial t} \geq \frac{c_0\sin^2 \theta_1(t)}{2\pi\sqrt{0.6 t+r(0)^2}} . \]
We obtain
\begin{equation}\label{eq:update1}
-\ln\tan\frac{\theta_1(t)}{2}+\ln\tan\frac{\theta_1(t_{i-1})}{2}\geq \frac{c_0}{0.6\pi} \left(\sqrt{0.6t+r(0)^2}-\sqrt{0.6t_{i-1}+r(0)^2}\right).
\end{equation}
Similarly, if $\cos\theta_1(t_i)+\cos\theta_2(t_i)$ reverses from negative to positive at time $t_i$, then
\begin{equation}\label{eq:update2}
-\ln\tan\frac{\pi-\theta_2(t)}{2}+\ln\tan\frac{\pi-\theta_2(t_{i-1})}{2} \geq \frac{c_0}{0.6\pi} \left(\sqrt{0.6t+r(0)^2}-\sqrt{0.6t_{i-1}+r(0)^2}\right).
\end{equation}
Since $\theta_2(t_i)+\theta_1(t_i)=\pi$ for all $i\geq 1$, Eq. \ref{eq:update1} or \ref{eq:update2} holds for each $i\geq 1$. Therefore, for any $t>0$, summing over all intervals $[t_{i-1}, t_i]$ with $t_{i}<t$, we obtain
\[ -\ln\tan\frac{\theta_i(t)}{2}+\ln\tan\frac{\max\{\theta_1(0),\pi-\theta_2(0)\}}{2} \geq \frac{c_0}{0.6\pi} \left(\sqrt{0.6t+r(0)^2}-r(0)\right), \quad i=1 \text{ or } 2. \]
\[ (-1)^{i-1}\cos\theta_i(t) \geq 1-\frac{2}{e^{A\sqrt{t+B}+C}+1}, \]
with $A = 2c_0/(\pi\sqrt{0.6})$, $B=r(0)^2/0.6$, and $C=-2c_0r(0)/(0.6\pi)+2\ln\tan\frac{\max\{\theta_1(0),\pi-\theta_2(0)\}}{2}$, where the choice of $i$ depends on the position of the weights at time $t$.
Now we consider $\mathcal{T}=\emptyset$, and suppose $\cos\theta_1(0) + \cos\theta_2(0) \geq 0$. Then $\mathbf{w}_2(t)$ and $\mathbf{v}$ always lie in the same half-plane separated by $\mathbf{w}_1(t)$. Hence, we still have
\[ \cos\theta_1(t) \geq 1-\frac{2}{e^{A\sqrt{t+B}+C}+1}. \]
\end{proof}
\paragraph{Proof of Theorem \ref{thm:diff-init2}.}
\begin{proof}
From Proposition \ref{prop:grad-update}, $\mathbf{w}_2(t)^\top\mathbf{v}$ is decreasing for any $\mathbf{w}_1(t)$ and $\mathbf{w}_2(t)$, and the origin is not a stationary point; thus we always have $\mathbf{w}_2(t_0)^\top\mathbf{v} \leq 0$ for some $t_0\geq 0$.
Now from Corollary \ref{cor:angle-inv} and Proposition \ref{prop:para}, $\mathbf{w}_2(t)^\top\mathbf{v}\leq 0, \ \forall t>t_0$.
When $\mathbf{w}_1 = r_1\mathbf{v}$ with $r_1 \geq 0$: in the following expectation, using the spherically symmetric distribution assumption, we may take $\mathbf{w}_2=(r_2,0)^\top$ and $\mathbf{v}=(\cos\theta_2, \sin\theta_2)^\top$; then
\begin{equation*}
\begin{aligned}
\left(\mathbf{v}-(\overline{\mathbf{w}}_2^\top\mathbf{v})\overline{\mathbf{w}}_2\right)^\top\nabla_{2} L(\mathbf{w}) &= \mathbb{E}_{\mathbf{x}\sim\mathcal{D}_2, x_1>0} \frac{y(\mathbf{x})v_2x_2}{1+e^{y(\mathbf{x}) \left(\sigma(\mathbf{w}_1^\top\mathbf{x})-r_2x_1\right)}} \\
&=\left(\mathbb{E}_{\mathbf{v}^\top\mathbf{x}\geq 0, x_1\geq 0} \frac{v_2x_2}{1+e^{ r_1\mathbf{v}^\top\mathbf{x}-r_2x_1}}-\mathbb{E}_{\mathbf{v}^\top\mathbf{x}\leq 0, x_1\geq 0} \frac{v_2x_2}{1+e^{r_2x_1}}\right) \\
&\overset{(1)}{\geq} - \mathbb{E}_{\mathbf{v}^\top\mathbf{x}\leq 0, x_1\geq 0} \frac{v_2x_2}{1+e^{r_2x_1}} \\
&=-\frac{1}{2\pi}\int_{0}^\infty \int_{-\pi/2}^{\alpha-\pi/2}\frac{v_2r\sin\theta}{1+e^{rr_2\cos\theta}}d\theta dF(r)\\
&=-\frac{\sin\theta_2}{2\pi r_2} \int_{0}^\infty \ln\frac{1+e^{-rr_2\sin\theta_2}}{2} dF(r) \geq 0.
\end{aligned}
\end{equation*}
The inequality $(1)$ holds since $v_1\leq 0$ (due to $\mathbf{w}_2(t)^\top\mathbf{v} \leq 0$) gives $v_2x_2\geq v_1x_1+v_2x_2\geq 0$ on the integration region. Moreover, once $r_1 \gg r_2$, the inequality becomes tight.
We write `$\gtrsim$' for an inequality that holds up to multiplicative and additive constants. Invoking the above analysis, we find that when $r_2(t)\sin\theta_2(t)$ is large,
\[ -\frac{\partial\cos\theta_2(t)}{\partial t} \gtrsim \frac{\sin\theta_2(t)}{2\pi r_2^2} \gtrsim \frac{\sin\theta_2(t)}{t} \ \Rightarrow \ \ln\left(1+\cos\theta_2(t)\right) \lesssim -\Theta(\ln t) \ \Rightarrow \ 1+\cos\theta_2(t) \lesssim e^{-\Theta(\ln t)}. \]
When $r_2(t)\sin\theta_2(t)$ is small,
\[ -\frac{\partial\cos\theta_2(t)}{\partial t} \gtrsim \frac{\sin^2\theta_2(t)}{2\pi r_2} \ \Rightarrow \ -\cos\theta_2(t) \gtrsim 1-e^{-\Theta(\sqrt{t})}. \]
Due to the effect of $\mathbf{w}_1(t)$ and the norm of $\mathbf{w}_2(t)$, the weight $\mathbf{w}_2$ may go through complex directional training dynamics, with $1+\cos\theta_2(t)$ ranging between $e^{-\Theta(\ln t)}$ and $e^{-\Theta(\sqrt{t})}$.
\end{proof}
\paragraph{Proof of Theorem \ref{thm:same-init}.}
\begin{proof}
Notice that when $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ lie in the same half-plane separated by $\mathbf{v}$ and $\theta_1(0) > \theta_2(0)$, the conditions in Lemma \ref{lemma:ac-angle-grad} are satisfied for both $\mathbf{w}_1$ and $\mathbf{w}_2$; thus $\theta_2(t)$ increases and $\theta_1(t)$ decreases. In addition, for $\theta_1(t)$, we have
\[ \frac{\partial \cos\theta_1(t)}{\partial t} \geq \frac{c_0}{2\pi \|\mathbf{w}_1\|} \sin^2 \theta_1(t). \]
Based on Lemma \ref{lemma:relu-norm},
\[ \|\mathbf{w}_1(t)\|^2+\|\mathbf{w}_2(t)\|^2 \leq 0.6 t+\|\mathbf{w}_1(0)\|^2+\|\mathbf{w}_2(0)\|^2. \]
Hence, with $r_0^2 := \|\mathbf{w}_1(0)\|^2+\|\mathbf{w}_2(0)\|^2$,
\[ \frac{\partial \cos\theta_1(t)}{\partial t} \geq \frac{c_0}{2\pi\sqrt{0.6t+r_0^2}} \sin^2 \theta_1(t). \]
We obtain
\[ \cos\theta_1(t)\geq 1-\frac{2}{e^{A\sqrt{t+B}+C}+1}, \]
for some constants $A, B, C$ related to the initialization. Similarly, $\theta_2(t)$ has the same convergence rate.
Therefore, $\theta_1(t_0)=\theta_2(t_0)$ for some $t_0 = \mathcal{O}\left(\ln^2\left(\frac{1}{ \cos\theta_2(0)-\cos\theta_1(0)}\right)\right)$.
\end{proof}
\paragraph{Proof of Theorem \ref{thm:gd-relu}.}
\begin{proof}
Motivated by the linear network setting, we present the analysis for $\mathbf{w}_1$ in the case where $\mathbf{w}_2$ and $\mathbf{v}$ lie in the same half-plane separated by $\mathbf{w}_1$.
For the first step, we need to guarantee positive improvement towards the target direction $\mathbf{v}$ (or $-\mathbf{v}$). Notice that
\begin{equation*}
\begin{aligned}
\|\mathbf{w}_1(n+1)\|^2 &= \|\mathbf{w}_1(n)\|^2-2\eta_n \mathbf{w}_1(n)^{\top}\nabla_{1} L(\mathbf{w}(n)) +
\eta_n^2\|\nabla_{1} L(\mathbf{w}(n)) \|^2 \\
&= \left(\|\mathbf{w}_1(n)\|- \eta_n\overline{\mathbf{w}}_1(n)^{\top}\nabla_{1} L(\mathbf{w}(n)) \right)^2 + \eta_n^2\left\| \left(I-\overline{\mathbf{w}}_1(n) \; \overline{\mathbf{w}}_1(n)^{\top}\right) \nabla_{1} L(\mathbf{w}(n)) \right\|^2 \\
&= \left( \overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n+1) \right)^2 + \eta_n^2\left\| \left(I-\overline{\mathbf{w}}_1(n) \; \overline{\mathbf{w}}_1(n)^{\top}\right) \nabla_{1} L(\mathbf{w}(n)) \right\|^2.
\end{aligned}
\end{equation*}
We denote $a\overline{\mathbf{w}_1(n)^\perp} := \left(I-\overline{\mathbf{w}}_1(n) \; \overline{\mathbf{w}}_1(n)^{\top}\right) \mathbf{v}$, $b\overline{\mathbf{w}_1(n)^\perp} := -\left(I-\overline{\mathbf{w}}_1(n) \; \overline{\mathbf{w}}_1(n)^{\top}\right) \nabla_{1} L(\mathbf{w}(n))$, $\alpha_i(n)=\theta(\mathbf{w}_i(n+1), \mathbf{w}_i(n))$.
Then $a^2=\sin^2\theta_1(n)$, and from Lemma \ref{lemma:ac-angle-grad}, $ab\geq c_0\sin^2\theta_1(n) / (2\pi)$. Then we can simplify the angle difference between two steps:
\begin{equation*}
\begin{aligned}
\cos \; & \theta_1(n{+}1){-}\cos\theta_1(n)
= \frac{\mathbf{v}^{\top}\mathbf{w}_1(n+1)-\|\mathbf{w}_1(n+1)\|\mathbf{v}^{\top} \overline{\mathbf{w}}_1(n)}{\|\mathbf{w}_1(n+1)\|} \\
&= \frac{\left(\mathbf{v}-\mathbf{v}^{\top}\overline{\mathbf{w}}_1(n)\overline{\mathbf{w}}_1(n)\right)^{\top} \mathbf{w}_1(n+1)-\left(\|\mathbf{w}_1(n+1)\|-\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n+1)\right) \mathbf{v}^{\top}\overline{\mathbf{w}}_1(n)}{\|\mathbf{w}_1(n+1)\|} \\
&= \frac{1}{\|\mathbf{w}_1(n+1)\|}\left({-}\eta_n\left(\mathbf{v}{-}(\overline{\mathbf{w}}_1(n)^{\top}\mathbf{v}) \overline{\mathbf{w}}_1(n)\right)^{\top}\nabla_1 L(\mathbf{w}(n)){-}\frac{\|\mathbf{w}_1(n{+}1)\|^2{-}\left(\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n{+}1)\right)^2}{\|\mathbf{w}_1(n{+}1)\|+\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n{+}1)} \cos\theta_1(n) \right) \\
&=\frac{1}{\|\mathbf{w}_1(n+1)\|}\left(\eta_nab-\frac{\eta^2_n b^2\cos\theta_1(n)}{\|\mathbf{w}_1(n+1)\|+\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n+1)}\right).
\end{aligned}
\end{equation*}
If $\theta_1(n)\geq \pi/2$, we obtain
\[ \cos\theta_1(n{+}1){-}\cos\theta_1(n) \geq \frac{1}{\|\mathbf{w}_1(n+1)\|}\frac{c_0 \eta_n \sin^2\theta_1(n)}{2\pi}.\]
Otherwise, $\theta_1(n) < \pi/2$. In order to derive
\[ \cos\theta_1(n{+}1){-}\cos\theta_1(n) \geq \frac{1}{\|\mathbf{w}_1(n+1)\|}\frac{c_0 \delta \eta_n \sin^2\theta_1(n)}{2(1+\delta)\pi}, \]
we need to find some large enough $\delta>0$ to satisfy
\begin{equation}\label{eq:suff2}
\|\mathbf{w}_1(n+1)\|+\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n+1) \geq \frac{(1+\delta)\eta_n b\cos\theta_1(n)}{a}.
\end{equation}
Notice the following relations:
\[ \|\mathbf{w}_1(n+1)\|+\overline{\mathbf{w}}_1(n)^{\top} \mathbf{w}_1(n+1) = \|\mathbf{w}_1(n+1)\|(1+\cos\alpha_1(n)),\]
\[ \frac{\eta_n b\cos\theta_1(n)}{a} = \sin\alpha_1(n)\|\mathbf{w}_1(n+1)\|/\tan\theta_1(n). \]
Therefore, Eq. \ref{eq:suff2} is equivalent to
\[ \tan\theta_1(n) \geq \frac{(1+\delta)\sin\alpha_1(n)}{1+\cos\alpha_1(n)}=(1+\delta)\tan\frac{\alpha_1(n)}{2}. \]
From the condition that $\mathbf{w}_1(n)$ never reaches the other half-plane separated by $\mathbf{v}$, we already have $\alpha_1(n)\leq\theta_1(n)<\pi/2$. Hence $\tan\theta_1(n)\geq \tan\alpha_1(n)$.
Taking
\[ 1+\delta = \tan\theta_1(n)/\tan\frac{\alpha_1(n)}{2} \geq \tan\alpha_1(n)/\tan\frac{\alpha_1(n)}{2} = \frac{2}{1-\tan^2\frac{\alpha_1(n)}{2}} \geq 2\]
satisfies the requirement and shows that
\[ \cos\theta_1(n{+}1){-}\cos\theta_1(n) \geq \frac{1}{\|\mathbf{w}_1(n+1)\|}\frac{c_0 \eta_n \sin^2\theta_1(n)}{4\pi}. \]
Second, we obtain the upper bound of $\|\mathbf{w}_1(n)\|^2+\|\mathbf{w}_2(n)\|^2$ using Lemma \ref{lemma:relu-norm}:
\begin{equation*}
\begin{aligned}
\|\mathbf{w}_1(n+1)\|^2 + \|\mathbf{w}_2(n+1)\|^2 &= \|\mathbf{w}_1(n)\|^2 + \|\mathbf{w}_2(n)\|^2 - 2\eta_n \mathbf{w}_1(n)^{\top}\nabla_{1} L(\mathbf{w}(n)) - 2 \eta_n \mathbf{w}_2(n)^{\top}\nabla_{2} L(\mathbf{w}(n)) \\
& \quad +\eta_n^2\|\nabla_{1} L(\mathbf{w}(n)) \|^2 + \eta_n^2 \|\nabla_{2} L(\mathbf{w}(n)) \|^2 \\
&\leq \|\mathbf{w}_1(n)\|^2 + \|\mathbf{w}_2(n)\|^2 + 0.6 \eta_n + 2c_0^2\eta_n^2.
\end{aligned}
\end{equation*}
We obtain the improvement for angle $\theta_1(n)$: if $\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v}) \overline{\mathbf{w}}_1\right)^\top\left(\mathbf{w}_2-(\overline{\mathbf{w}}_1^\top\mathbf{w}_2)\overline{\mathbf{w}}_1\right)\geq 0$, then
\begin{equation}\label{eq:gd-improve}
\cos\theta_1(n{+}1){-}\cos\theta_1(n) \geq \frac{1}{\|\mathbf{w}_1(n+1)\|}\frac{c_0 \eta_n \sin^2\theta_1(n)}{4\pi} \geq \frac{B\eta_n \left(1-\cos\theta_1(n)\right)}{\sqrt{\|\mathbf{w}_1(0)\|^2 + \|\mathbf{w}_2(0)\|^2 +\sum_{i=0}^{n} 0.6 \eta_i +2c_0^2\eta_i^2}}.
\end{equation}
A similar argument holds for $\mathbf{w}_2$ if $\mathbf{w}_1$ and $\mathbf{v}$ lie in different half-planes separated by $\mathbf{w}_2$: if \\ $\left(\mathbf{v}-(\overline{\mathbf{w}}_2^\top\mathbf{v}) \overline{\mathbf{w}}_2\right)^\top\left(\mathbf{w}_1-(\overline{\mathbf{w}}_2^\top\mathbf{w}_1)\overline{\mathbf{w}}_2\right)\leq 0$, then
\begin{equation}\label{eq:gd-improve2}
-\cos\theta_2(n{+}1){+}\cos\theta_2(n) \geq \frac{1}{\|\mathbf{w}_2(n+1)\|}\frac{c_0 \eta_n \sin^2\theta_2(n)}{4\pi} \geq \frac{B\eta_n \left(1+\cos\theta_2(n)\right)}{\sqrt{\|\mathbf{w}_1(0)\|^2 + \|\mathbf{w}_2(0)\|^2 +\sum_{i=0}^{n} 0.6 \eta_i +2c_0^2\eta_i^2}}.
\end{equation}
Finally, we use the same proof technique as in Theorem \ref{thm:diff-init} for the interlaced steps
\[ \mathcal{T} := \{n: \left(\cos\theta_1(n)+\cos\theta_2(n)\right)\left( \cos\theta_1(n+1)+\cos\theta_2(n+1)\right)<0 \}. \]
At least one weight always satisfies the improvement in Eq. \ref{eq:gd-improve} or \ref{eq:gd-improve2}. Hence, the corresponding weight satisfies
\[ \cos\theta_1(n) \geq 1-(1-\cos\theta_1(0))e^{-BS_n}, \quad -\cos\theta_2(n) \geq 1-(1+\cos\theta_2(0))e^{-BS_n}. \]
\end{proof}
\section{Additional Results}\label{app:add-res}
Additional lemmas for future work:
\begin{lemma}[Lemma \ref{lemma:norm-change} improved version]
If $\mathbf{x}\sim\mathcal{U}\left(\mathcal{S}^{1}\right)$ (uniform distribution in the unit circle), if $N(\mathbf{w})>0$ and $\|\mathbf{w}\|\sin \theta(\mathbf{w}, \mathbf{v}) \geq 2$, then $\|\mathbf{w}\|\sin \theta(\mathbf{w}, \mathbf{v}) \leq 2 \ln\left(\frac{4\pi}{\theta(\mathbf{w}, \mathbf{v})}+1\right)$.
\end{lemma}
\begin{proof}
We assume $\mathbf{w}=(r,0)^\top$ and $\mathbf{v}=(\cos\alpha,\sin\alpha)^\top$; then $0\leq \alpha\leq \pi/2$ since $N(\mathbf{w})>0$. We obtain
\begin{equation*}
\begin{aligned}
N(\mathbf{w})&=\frac{1}{2\pi}\int_{|v_1x_1| \geq |v_2x_2|}\frac{|x_1|}{1+e^{r|x_1|}} d\theta + \frac{1}{2\pi}\int_{|v_1x_1| \leq v_2x_2}\frac{1-e^{rx_1}}{1+e^{rx_1}}x_1 d\theta \\
& = \frac{1}{\pi}\int_{0 \leq \theta \leq \pi/2-\alpha}\frac{2\cos\theta}{1+e^{r\cos\theta}} d\theta - \frac{1}{\pi}\int_{\pi/2 \geq \theta \geq \pi/2-\alpha} \frac{e^{r\cos\theta}-1}{e^{r\cos\theta}+1}\cos\theta d\theta \\
& \leq \frac{1}{\pi}\int_{r\cos\theta\geq r\sin\alpha\geq 2} \frac{2\cos\theta}{e^{r\cos\theta}+1} d\theta -\frac{1}{\pi}\int_{\pi/2-\alpha/2 \geq \theta \geq \pi/2-\alpha} \frac{e^{r\cos\theta}-1}{e^{r\cos\theta}+1} \cos\theta d\theta \\
& \leq \frac{\pi/2-\alpha}{\pi}\frac{2\sin\alpha}{1+e^{r\sin\alpha}} -\frac{\alpha}{2\pi} \frac{e^{r\sin(\alpha/2)}-1}{e^{r\sin(\alpha/2)}+1} \sin\frac{\alpha}{2} \\
\end{aligned}
\end{equation*}
Hence
\[ \frac{2\pi\sin\alpha}{1+e^{r\sin\alpha}}\geq \frac{e^{r\sin(\alpha/2)}-1}{e^{r\sin(\alpha/2)}+1} \cdot \alpha\sin\frac{\alpha}{2}. \]
Using the fact $\sin\alpha\geq\sin\frac{\alpha}{2} \geq \frac{1}{2}\sin\alpha$, we obtain
\[ 4\pi \geq 4\pi\cos\frac{\alpha}{2} \geq \alpha \left(1+e^{r\sin\alpha}\right)\frac{e^{r\sin(\alpha/2)}-1}{e^{r\sin(\alpha/2)}+1} \geq \alpha \left(e^{r\sin(\alpha/2)}-1\right). \]
\[ r\sin\alpha \leq 2r\sin\frac{\alpha}{2}\leq 2\ln\left(\frac{4\pi}{\alpha}+1\right). \]
\end{proof}
\begin{prop}
If $\mathbf{w}_1, \mathbf{w}_2$ are in the same half-plane separated by $\mathbf{v}$ and $\theta_2 \geq \theta_1 + \pi/2$, then
\[ \frac{\partial \cos\theta_1(t)}{\partial t}\geq 0, \ \text{if} \ \sin(\theta_2(t)-\theta_1(t)) \leq \frac{\ln\left(2e^{r_1(t)\sin\theta_1(t)}-1\right)}{r_1(t)}. \]
\[ \frac{\partial \cos\theta_2(t)}{\partial t} \leq 0, \ \text{if} \ \sin(\theta_2(t)-\theta_1(t)) \leq \frac{\ln\left(2e^{r_2(t)\sin\theta_2(t)}-1\right)}{r_2(t)}. \]
\end{prop}
\begin{proof} [Sketch]
When $\theta_2 \geq \theta_1 + \pi/2$ and $\theta_1, \theta_2$ are fixed, we can show that for any $r_1, r_2>0$,
\[ \frac{\partial^2 \cos\theta_1(t)}{\partial t\partial r_2} \leq 0, \ \frac{\partial^2 \cos\theta_2(t)}{\partial t\partial r_1} \geq 0.\]
Then we consider $r_1, r_2 \to \infty$ to obtain the minimum.
\end{proof}
\section{Experiments}\label{sec:exp}
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{paper-fig/linear}
\includegraphics[width=0.48\textwidth]{paper-fig/deep-linear}
\caption{Linear network simulation when $\mathbf{x}\sim\mathcal{U}(\mathcal{S}^1), d=2, \mathbf{v}=(0,1)^\top$. Left two: running SGD with batch size $1000$ from $\mathbf{w}(0)=(0.6,-0.8)^\top$ for $30000$ iterations and constant learning rate $\eta=10^{-3}$. We show our lower bounds in Theorem \ref{thm:linear-flow} at $n{=}0$ and $n{=}5000$. Right two: training a 4-layer deep linear network using SGD with batch size $1000$ from $\mathbf{w}_e(0)=(0.6,-0.8)^\top$ for $30000$ iterations and constant learning rate $\eta=10^{-3}$. We also show our lower and upper bounds in Theorem \ref{thm:deep-linear-convergence1} at $n{=}0$ and $n{=}18000$.}
\label{fig:linear}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\linewidth]{paper-fig/relu_weight_diff_init}
\vspace{-10pt}
\includegraphics[width=0.7\linewidth]{paper-fig/relu_weight_same_init}
\caption{Two-layer network with ReLU activation simulation when $\mathbf{x}\sim\mathcal{U}(\mathcal{S}^1), d=2, \mathbf{v}=(1,0)^\top$. Left three: $\mathbf{w}_1(0)=(3,4)^\top, \mathbf{w}_2(0)=(4, -3)^\top$, which stay in different half-planes separated by $\mathbf{v}$. We run SGD with batch size $1000$ for $2{\times} 10^4$ iterations and constant learning rate $\eta=10^{-2}$.
Right three: $\mathbf{w}_1(0)=(9,1)^\top, \mathbf{w}_2(0)=(9,7)^\top$, which stay in the same half-plane separated by $\mathbf{v}$. We run SGD with batch size $1000$ for $2{\times} 10^5$ iterations and constant learning rate $\eta=10^{-2}$. In each experiment, we plot the angle variation, the loss and weight norms, and the optimization trajectory, in sequence.}
\label{fig:relu}
\vspace{-10pt}
\end{figure*}
In this section we conduct some experiments to verify our theoretical analyses. Figure \ref{fig:linear} shows the optimization period for (deep) linear networks. We plot several lower bounds (and upper bounds) from our theorems. Although we do not prove convergence for stochastic gradient descent, our lower bounds for gradient flow still roughly match the directional convergence in practice, and the weight norm $\|\mathbf{w}(n)\|$ indeed goes through a decreasing and then an increasing period. Moreover, our analysis gives a rigorous account of the weight-norm dynamics, particularly for deep linear networks.
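For reference, a minimal script in the spirit of the linear-network run in Figure \ref{fig:linear} (a sketch of the stated setup, not the exact experiment code) is given below; it also logs the weight norm, so the decreasing-then-increasing phase is visible.
\begin{verbatim}
import numpy as np

# Linear predictor, x ~ U(S^1), v = (0,1), w(0) = (0.6,-0.8),
# SGD with batch 1000, eta = 1e-3, 30000 iterations (cf. Figure fig:linear).
rng = np.random.default_rng(0)
v, w = np.array([0.0, 1.0]), np.array([0.6, -0.8])
eta, batch, iters = 1e-3, 1000, 30_000
norms = []
for n in range(iters):
    ang = rng.uniform(0.0, 2.0 * np.pi, batch)
    x = np.stack([np.cos(ang), np.sin(ang)], axis=1)    # uniform on S^1
    y = np.where(x @ v >= 0, 1.0, -1.0)                 # y = sgn(v^T x)
    z = y * (x @ w)
    w -= eta * (-(y / (1.0 + np.exp(z)))[:, None] * x).mean(axis=0)
    norms.append(np.linalg.norm(w))
print("final cos(theta):", w @ v / np.linalg.norm(w))
print("norm minimized at iteration:", int(np.argmin(norms)))
\end{verbatim}
Since $\cos\theta(0)=-0.8<0$, the logged norm first shrinks and then grows, consistent with the two-phase behavior in Theorem \ref{thm:linear-flow}.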
When a non-linear activation is added to the decision function, the training dynamics become more complicated. We show one example in the left three graphs of Figure \ref{fig:relu}, which satisfies the different half-plane initialization in Lemma \ref{lemma:ac-angle-grad}.
Since the two weights are coupled with each other, our proof can only guarantee (logarithmic) directional convergence for one weight, such as $\cos\theta_1(n)$ in Figure \ref{fig:relu}.
Moreover, when applying (stochastic) gradient descent, we find that the weight may traverse the target direction, as the cusp in the first graph of Figure \ref{fig:relu} shows, but the recovery still holds in the end.
The remaining weight may undergo a complex period, including reversed directional variation at the beginning, as $\cos\theta_2(n)$ shows in the third graph of Figure \ref{fig:relu}, but the weight finally still reaches the target direction.
Additionally, we also show a peculiar trajectory for the remaining initialization, in which $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ lie in the same half-plane separated by $\mathbf{v}$.
As depicted in the right three graphs of Figure \ref{fig:relu}, both weights may move in a completely reversed direction in the initial phase and show multiple fluctuations in certain angles (such as $\theta_1(n)$), though both weights soon turn to the correct directional improvement and finally align with the target as well. Therefore, it is difficult to derive an exact directional convergence rate in this case.
\section{Introduction}
In recent years, deep neural networks have been successfully trained with simple gradient-based methods, despite the inherent non-convexity of the learning problem.
Meanwhile, implicit biases introduced by optimization algorithms play a crucial role in training neural networks.
Previous work rigorously analyzes the optimization of deep networks, leading to many exciting developments such as the neural tangent kernel \cite{jacot2018neural, du2019gradient, arora2019fine, allen2019convergence, zou2018stochastic}. These works are usually based on the over-parameterization regime, which makes the weights stay close to initialization and share dynamics similar to linear regression.
By contrast, \citet{lyu2019gradient} showed that if the data can be perfectly classified, the parameters are guaranteed to diverge in norm to infinity, meaning that the prediction surface can continually change during the training in classification tasks.
Moreover, \citet{ji2020directional} experimentally illustrated that even on simple data, the prediction surface continues to change after perfect classification is achieved, and even with large width it is not close to the maximum margin predictor from the neural tangent regime. These findings raise an issue that the properties about neural networks may be unstable if the prediction surface never stops changing.
\citet{lyu2019gradient, ji2020directional, ji2018gradient} already addressed this issue by guaranteeing stable (directional) convergence behavior of deep networks as training proceeds, despite the growth of weight vectors to infinity.
Their works focus on general homogeneous deep networks and exponential-type loss functions at the ``late training'' phase, meaning that the predictor has already obtained zero classification error.
Moreover, they only proved asymptotic directional convergence, i.e., the parameters converge in direction (to a KKT point of a natural max-margin problem), and used a finite-data setting to ensure a positive maximum margin. But realistic datasets are large-scale and may be inseparable, or separable only with a very small margin.
In this paper, we consider a more specific case: binary classification under the population logit loss (binary cross-entropy) with zero margin. For simplicity of analysis,
we assume that the distribution of the input data is spherically symmetric, which includes the standard Gaussian as a special case.
Furthermore, the networks we consider include (deep) linear networks and a two-layer non-linear network with only two hidden nodes and fixed second layer.
The paper and the main contributions are organized as follows:
\begin{itemize}
\item We present the analysis of linear networks (linear predictors) in Section \ref{sec:shallow-linear} and show that gradient flow gives logarithmic directional convergence for any initialization. In addition, we find that gradient descent provides the same convergence rate if the weights are initialized with a suitably large or small norm, under a bounded learning-rate sequence. Both findings show improved convergence bounds compared to the general empirical-loss, positive-margin setting, as expected from our benign distribution assumption.
\item In Section \ref{sec:deep-linear} we build on the previous results to study deep linear networks, showing (at least) two-phase convergence rates during optimization under gradient flow.
\item In Section \ref{sec:shallow-non-linear} we discuss a two-layer nonlinear network with only two hidden neurons and a fixed second layer whose weights have different signs, giving the simplest recovery guarantee in our setting. We find positive directional alignment for half of the possible initializations. Built on this result, we give exact logarithmic directional convergence for the ReLU activation under gradient flow, as well as under gradient descent with constrained learning rates.
\item Motivated by the above results, we conduct several experiments in Section \ref{sec:exp} on potential improvements of our current results, particularly on bad initializations that give slow convergence; the remaining complex initialization case for non-linear networks is left as future work.
\end{itemize}
In summary, we hope our work contributes to a better understanding of the optimization dynamics of gradient methods on deep neural networks in classification tasks.
\subsection{Related Work}
There is a rapidly growing literature analyzing directional convergence, as well as the optimization dynamics of neural network objectives; surveying all of it is well outside our scope. Thus, we only briefly survey the works most related to ours.
\paragraph{Optimization Dynamics of Neural Networks.}
A large amount of work focuses on training networks in the regression setting with the square loss.
\citet{yehudai2020learning} analyze a single neuron in a realizable setting under general families of input distributions and activations, showing that some assumptions on both the distribution and the activation function are necessary, and proving linear convergence under mild assumptions.
\citet{tian2019luck, yu2019student} analyze one-hidden-layer NNs with multiple nodes and ReLU activation in an over-parameterized setting (more student nodes than teacher nodes), showing local convergence under the assumption of small overlap between teacher nodes.
These recovery tasks do not suffer from the infinite-norm problem, but they come with more difficult learning requirements, including recovering the direction as well as the norm of the teacher weights.
There is also plenty of literature focusing on classification tasks under the cross-entropy or logit loss. \citet{li2018learning} consider learning a two-layer over-parameterized ReLU neural network with the cross-entropy loss via stochastic gradient descent; the main technique they adopt is that most activation patterns remain unchanged from initialization, which we can hardly use since, for a continuous data distribution, any change of the weights affects the activation pattern of some data.
\citet{cao2019generalization,Cao_2020} also analyze the cross-entropy loss with over-parameterized networks; similarly, the parameter range they focus on is a small neighborhood of the initialization, as most over-parameterized works do.
We analyze the binary cross-entropy loss in truly under-parameterized settings (two neurons or linear activation) to discover the concrete dynamics during training.
Thanks to the theories developed in previous works, we have an increasingly profound understanding of deep learning.
\paragraph{Directional Convergence under Separable Data.}
The study of separable data dates back to \citet{freund1997decision} in the literature on AdaBoost. A recent close line of work is the analysis of gradient descent for logistic regression when the data is separable \cite{soudry2018implicit, nacson2019stochastic, nacson2019convergence, ji2018risk}.
These works consider linear classifiers with smooth monotone loss functions, including the cross-entropy loss, optimized on linearly separable data similar to ours, but in the realistic setting of finite data and bounded learning rates.
Directional convergence has appeared in the literature \cite{gunasekar2018implicit, chizat2020implicit}, and established for linear predictors \cite{soudry2018implicit, nacson2019convergence, nacson2019stochastic, ji2020gradient, shamir2020gradient}.
\citet{ji2018gradient, ji2020directional, lyu2019gradient} extend linear classifiers to deep homogeneous networks using powerful techniques.
The findings build on the alignment of some weights of the neural network with a stationary point of the limiting margin-maximization objective under gradient methods.
We consider the objective with the logit loss and a benign input distribution, providing directional convergence rates under the population loss and an analysis of the whole training period.
\section{Preliminaries}
In this paper, we learn a predictor $\phi(\cdot, \mathbf{w})\colon \mathbb{R}^d \to \mathbb{R}$ with parameters $\mathbf{w}$,
and let $ \mathbf{x} \in \mathbb{R}^d $ be input features sampled from some unknown distribution $\mathcal{D}$ with a well-defined covariance matrix. We consider a binary classification
problem in which the label of each $\mathbf{x}$ is decided by a unit vector $\mathbf{v} \in \mathbb{R}^d$ with $\|\mathbf{v}\|=1$, so that $y(\mathbf{x})=\text{sgn}(\mathbf{v}^\top\mathbf{x}) \in\{{-}1, {+}1\}$\footnote{Here, $\text{sgn}(z)=1$ if $z\geq0$, otherwise $-1$.}.
We employ the population logit loss $ \ell(z) := \ln(1 + e^{-z}) $ (binary cross-entropy) with $z(\mathbf{x}, \mathbf{w})=y(\mathbf{x})\phi(\mathbf{x}, \mathbf{w})$. Thus, the objective is
\begin{equation*}
\min_{\mathbf{w}} L(\mathbf{w}) :=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\ln\left(1+e^{-y(\mathbf{x})\phi(\mathbf{x}, \mathbf{w})}\right).
\end{equation*}
We focus on the following standard gradient methods applied with $\nabla L(\mathbf{w})$:
\begin{itemize}
\item \textbf{Gradient flow}: We initialize $\mathbf{w}(0)$, and for every $ t>0 $ we let $\mathbf{w}(t)$ be the solution of the
differential equation: $ \dot{\mathbf{w}}(t) = -\nabla L(\mathbf{w}(t)) $.
\item \textbf{Gradient descent}: We initialize $\mathbf{w}(0)$ and set a sequence of positive learning rates $\{\eta_n\}_{n=1}^\infty$. At each iteration $n \geq 0$,
we take a single step in the negative gradient direction, that is, $\mathbf{w}(n+1)=\mathbf{w}(n)-\eta_n\nabla L(\mathbf{w}(n))$; a minimal Monte-Carlo sketch of both objects is given after this list.
\end{itemize}
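For concreteness, the following is a minimal Monte-Carlo sketch of the gradient computation and the gradient-descent iteration (an illustrative implementation, not the code used for the experiments in Section \ref{sec:exp}); here \texttt{phi}, \texttt{grad\_phi}, and \texttt{sample\_x} are placeholder callables for the predictor $\phi$, its parameter gradient $\partial\phi/\partial\mathbf{w}$, and a sampler for $\mathcal{D}$.
\begin{verbatim}
import numpy as np

def grad_L(w, phi, grad_phi, sample_x, v, batch=10_000):
    """Monte-Carlo estimate of grad L(w), L(w) = E ln(1 + exp(-y phi(x, w)))."""
    x = sample_x(batch)                       # x ~ D
    y = np.where(x @ v >= 0, 1.0, -1.0)       # y = sgn(v^T x)
    z = y * phi(x, w)
    coef = -y / (1.0 + np.exp(z))             # d/dz ln(1+e^{-z}) = -1/(1+e^z)
    return (coef[:, None] * grad_phi(x, w)).mean(axis=0)

def gradient_descent(w0, etas, **kwargs):
    """w(n+1) = w(n) - eta_n grad L(w(n)); gradient flow is the small-eta limit."""
    w = np.array(w0, dtype=float)
    for eta in etas:
        w -= eta * grad_L(w, **kwargs)
    return w
\end{verbatim}
For the linear predictor $\phi(\mathbf{x},\mathbf{w})=\mathbf{w}^\top\mathbf{x}$ of Section \ref{sec:shallow-linear}, one takes \texttt{phi = lambda x, w: x @ w} and \texttt{grad\_phi = lambda x, w: x}.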
\paragraph{Notation.} In this paper $\|\cdot\|$ denotes the $ \ell_2 $-norm, $\Sigma = \mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \mathbf{x} \mathbf{x}^{\top}$ is the population covariance matrix, and $\lambda_{\max}(A)$ is the maximum eigenvalue of the real symmetric matrix $A$.
We let $\overline{\mathbf{w}} := \frac{\mathbf{w}}{\|\mathbf{w}\|}$ whenever $\|\mathbf{w}\|\neq 0$. Given vectors $\mathbf{w}$ and $\mathbf{v}$, we let $ \theta(\mathbf{w}, \mathbf{v}) := \arccos\left(\frac{\mathbf{w}^\top\mathbf{v}}{\|\mathbf{w}\|\|\mathbf{v}\|}\right)\in[0, \pi] $ denote the angle between $\mathbf{w}$ and $\mathbf{v}$, $\theta(t):=\theta(\mathbf{w}(t), \mathbf{v})$ and $\theta(n):=\theta(\mathbf{w}(n), \mathbf{v})$.
Let $ \mathcal{D}_{\mathbf{w},\mathbf{v}} $ be the marginal distribution of $\mathbf{x}$ on the subspace spanned by $\mathbf{w}, \mathbf{v}$ (a distribution over $\mathbb{R}^2$), $\mathcal{D}_2:=\mathcal{D}_{\mathbf{e}_1,\mathbf{e}_2}$, and $c_0:=\mathbb{E}_{\mathbf{x}\sim \mathcal{D}_2}\|\mathbf{x}\|$. We call a complexity of $\mathcal{O}\left(\ln^\alpha(1/\epsilon)\right)$ for some $\alpha>0$ to reach $\epsilon$-error \textit{logarithmic} convergence.
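For example, for the standard Gaussian $\mathbf{x}\sim\mathcal{N}(0, I_d)$ the marginal $\mathcal{D}_2$ is $\mathcal{N}(0, I_2)$, so a direct computation gives
\[ c_0=\mathbb{E}_{\mathbf{x}\sim \mathcal{D}_2}\|\mathbf{x}\| = \int_0^\infty r \cdot r e^{-r^2/2}\, dr = \sqrt{\frac{\pi}{2}} \approx 1.25, \]
independently of the ambient dimension $d$; this is the value $c_0=\sqrt{\pi/2}$ used in the appendix.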
\section{Shallow Linear Networks} \label{sec:shallow-linear}
In this section, we begin with the case of classical linear predictors: $\phi(\mathbf{x}, \mathbf{w})=\mathbf{w}^\top\mathbf{x}$, and defer the proofs to Appendix \ref{app:shallow-linear}. Noting that
$ \nabla L(\mathbf{w}) = -\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{y(\mathbf{x}) \mathbf{x} }{1+e^{y(\mathbf{x})\mathbf{w}^{\top}\mathbf{x}}} $,
we have the following basic properties.
\begin{prop}\label{prop:basic}
The objective function $L(\mathbf{w})$ is $c$-Lipschitz continuous, $\frac{1}{4}\lambda_{\max}(\Sigma)$-smooth, and convex but not strongly convex if $P \{ \mathbf{x}:\mathbf{v}^{\top}\mathbf{x}=0\} =0$. Here $c:=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\left\|\mathbf{x}\right\|$.
\end{prop}
Generally speaking, gradient methods have no linear convergence for smooth, Lipschitz continuous, and convex but not strongly convex functions \cite{nesterov2013introductory}.
However, some invariant properties (such as Proposition \ref{prop:grad-norm-prop} below) make the training dynamics preserve linear directional convergence.
\begin{prop}\label{prop:grad-norm-prop}
Suppose $P\left(\{ \mathbf{x}:\mathbf{v}^{\top}\mathbf{x}=0\}\right)<1$. Then we have that $\mathbf{v}^\top\nabla L(\mathbf{w})< 0$ for any $\mathbf{w}\in\mathbb{R}^d$.
Therefore, $\mathbf{v}^\top\mathbf{w}(t)$ is increasing, and $\|\mathbf{w}(t)\|$ is unbounded for gradient flow. If the learning rates are lower bounded, i.e. $\eta_n\geq \eta_->0$, then $\mathbf{v}^\top\mathbf{w}(n)$ is increasing, and $\|\mathbf{w}(n)\|$ is unbounded.
\end{prop}
As mentioned in previous work, it is a common phenomenon that the norm of the weight vector diverges when the data are perfectly classified. We further verify this even under the population loss with a zero-margin data distribution.
\subsection{Spherically Symmetric Data Distributions}
Now we give convergence rates under gradient methods. In the following analysis, we adopt the assumption used in \citet{yehudai2020learning} that $\mathcal{D}$ is spherically symmetric, which includes the standard Gaussian as a special case. Under this assumption, we are able to reduce the scope of the optimization to the plane $\text{span}\{\mathbf{w}(0), \mathbf{v}\}$.
\begin{assume}\label{ass:distri}
Assume $\mathbf{x}\sim\mathcal{D}$ has a spherically symmetric distribution, i.e., for any orthogonal matrix $A: A \mathbf{x} \sim\mathcal{D}$.
\end{assume}
\begin{prop}\label{prop:grad-prop}
Under Assumption \ref{ass:distri}, we have $\|\nabla L(\mathbf{w})\| \leq c_0$ and $ \nabla L(r\mathbf{v}) = -\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \frac{|x_1|}{1+e^{r|x_1|}}\mathbf{v}, \forall \ r \in \mathbb{R}$.
\end{prop}
\begin{remark}
Proposition \ref{prop:grad-prop} shows that if $\mathbf{w}$ aligns with $ \mathbf{v}$ (including $\mathbf{w}=\mathbf{0}$), then gradient methods keep the iterate of the form $\mathbf{w} = r\mathbf{v}$ and drive $r\to +\infty$ by Proposition \ref{prop:grad-norm-prop}. Therefore, we only need to consider initial values with $\mathbf{w}(0) \neq\mathbf{0}$ and $\theta(0)\neq 0$ or $\pi$. Moreover, $c_0$ does not depend on the dimension $d$ under Assumption \ref{ass:distri}.
\end{remark}
The variation of $\theta(\mathbf{w}, \mathbf{v})$ is our main concern. In the gradient flow setting, it follows that for $\mathbf{w}(t)\neq \mathbf{0}$,
\[ \frac{\partial \cos\theta(t)}{\partial t} = {-}\frac{1}{\|\mathbf{w}(t)\|}\left(\mathbf{v}{-}\left(\overline{\mathbf{w}}(t)^{\top} \mathbf{v}\right) \overline{\mathbf{w}}(t)\right)^{\top} \nabla L(\mathbf{w}(t)).
\]
Our analysis separates the dynamics into the norm $\|\mathbf{w}(t)\|$ and the remaining directional part. We now present the lemma below to show the exact directional improvement:
\begin{lemma} \label{lemma:angle_grad}
Under Assumption \ref{ass:distri} and if $\mathbf{w} \neq \mathbf{0}$, then
\[ -\left(\mathbf{v}-\left(\overline{\mathbf{w}}^{\top} \mathbf{v}\right) \overline{\mathbf{w}}\right)^{\top} \nabla L(\mathbf{w}) = \frac{c_0\sin^2\theta(\mathbf{w}, \mathbf{v})}{\pi}. \]
Moreover, invoking from the above result, we have the following corollary:
\[ -\left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right)\nabla L(\mathbf{w}) = \frac{c_0}{\pi} \left(I-\overline{\mathbf{w}} \; \overline{\mathbf{w}}^{\top}\right)\mathbf{v}. \]
\end{lemma}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{paper-fig/norm_vary}
\caption{Verifying $\text{sgn}(N(\mathbf{w}))$ when $\mathbf{x}\sim\mathcal{U}(\mathcal{S}^1)$ and $\mathbf{v}=(0,1)^\top$. The sign of $N(\mathbf{w})$ at each $\mathbf{w}$ is estimated from $1000$ random samples. Red means positive sign, while green means negative sign.}
\label{fig:norm-change}
\end{figure}
\subsection{Gradient Flow}
From Lemma \ref{lemma:angle_grad}, we obtain that $\frac{\partial \cos\theta(t)}{\partial t} \geq 0$ when $\mathbf{w}(t) \neq 0$.
Therefore, to derive directional convergence, we need to understand the variation of $\|\mathbf{w}(t)\|$ during the optimization dynamics. We set $N(\mathbf{w}(t)):=-\mathbf{w}(t)^{\top} \nabla L(\mathbf{w}(t))=\frac{1}{2}\frac{\partial \|\mathbf{w}(t)\|^2}{\partial t}$. Then we have the following properties, as depicted in Figure \ref{fig:norm-change}; a direct Monte-Carlo check follows the lemma.
\begin{lemma}\label{lemma:norm-change}
Under Assumption \ref{ass:distri}, the following propositions hold:
1) $N(\mathbf{w})\leq 0.3$;
2) If $\theta(\mathbf{w}, \mathbf{v})\geq \pi/2$, then $N(\mathbf{w})\leq 0$;
3) If $\theta(\mathbf{w}, \mathbf{v})\leq \pi/2$ and $\|\mathbf{w}\|$ is fixed, then $N(\mathbf{w})$ increases when $\theta(\mathbf{w}, \mathbf{v})$ decreases.
More generally, if $r \leq \|\mathbf{w}\| \leq R$ (but $\|\mathbf{w}\|$ is not fixed), then there exists $\theta_0$ (related to $r,R$) such that $N(\mathbf{w})>0$ when $\theta(\mathbf{w}, \mathbf{v})<\theta_0$;
4) If $0 < \|\mathbf{w}\|\leq \frac{2|\cos\theta(\mathbf{w}, \mathbf{v})|}{\pi}$, then $N(\mathbf{w})\cos\theta(\mathbf{w}, \mathbf{v})>0$.
\end{lemma}
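As a side check (illustrative, mirroring the estimate behind Figure \ref{fig:norm-change}), the sign of $N(\mathbf{w})=\mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\frac{y(\mathbf{x})\mathbf{w}^\top\mathbf{x}}{1+e^{y(\mathbf{x})\mathbf{w}^\top\mathbf{x}}}$ can be estimated by direct sampling:
\begin{verbatim}
import numpy as np

# Estimate sign(N(w)) for x ~ U(S^1) and v = (0, 1), as in Figure fig:norm-change.
rng = np.random.default_rng(0)
v = np.array([0.0, 1.0])

def N_hat(w, samples=200_000):
    ang = rng.uniform(0.0, 2.0 * np.pi, samples)
    x = np.stack([np.cos(ang), np.sin(ang)], axis=1)   # uniform on the unit circle
    z = np.where(x @ v >= 0, 1.0, -1.0) * (x @ w)      # z = y * w^T x
    return np.mean(z / (1.0 + np.exp(z)))              # N(w) = -w^T grad L(w)

for w in ([0.0, 0.5], [0.5, 0.0], [0.0, -0.5]):        # theta = 0, pi/2, pi
    print(w, "sign(N) =", int(np.sign(N_hat(np.array(w)))))
\end{verbatim}
The three grid points above (chosen for illustration) give signs $+$, $-$, $-$, consistent with Properties 2 and 4.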
\begin{lemma} \label{lemma:norm-increase-linear}
Under Assumption \ref{ass:distri} and if $\partial \|\mathbf{w}(t)\|^{2}/\partial t=0$ for some $\mathbf{w}(t) \neq \mathbf{0}$, then $\partial \cos\theta(\mathbf{w}(t), \mathbf{v})/\partial t \geq 0$.
\end{lemma}
Therefore, based on Proposition \ref{prop:grad-norm-prop} and Lemmas \ref{lemma:norm-change} and \ref{lemma:norm-increase-linear}, $\|\mathbf{w}(t)\|$ first decreases (if $N(\mathbf{w}(0)) < 0$) and then increases to $+\infty$. Hence the directional convergence consists of the following two phases.
\begin{thm}\label{thm:linear-flow}
Under Assumption \ref{ass:distri}, we obtain the following two-phase directional convergence for $\mathbf{w}(0) \neq \mathbf{0}$ and $\theta(0)\neq \pi$. If $N(\mathbf{w}(0)) < 0$, then there exists some $T>0$ such that $N(\mathbf{w}(T)) = 0$, otherwise we set $T=0$. With such a $T$, we have that
\[ \cos\theta(t)\geq \left\{ \begin{array}{ll}
1-\frac{2}{e^{A_1t+B_1}+1}, & t \leq T, \\
1-\frac{2}{e^{A_2\sqrt{t-T+C_2}+B_2}+1}, & t \geq T.
\end{array} \right.
\]
Here $A_1 {=} \frac{2c_0}{\pi\|\mathbf{w}(0)\|}$, $\ B_1 {=} -2\ln\left|\tan\frac{\theta(0)}{2}\right|$,
$A_2=\frac{4c_0}{\sqrt{0.6}\pi}$, $B_2{=} -2\ln\left|\tan\frac{\theta(T)}{2}\right|-\frac{4c_0\|\mathbf{w}(T)\|}{0.6\pi}$, and $C_2=\frac{\|\mathbf{w}(T)\|^2}{0.6}$.
\end{thm}
We find that $\|\mathbf{w}\|$ increases much more slowly than $\theta$ decreases, showing that the objective is nearly strongly convex along the trajectory, which provides linear (or logarithmic) convergence. The time complexity to ensure $1-\cos \theta(t) \leq \epsilon$ is $\mathcal{O}\left(\ln^2(1/\epsilon)\right)$.
Moreover, our convergence bound gives faster directional convergence than \citet{nacson2019stochastic, soudry2018implicit}. The main difference comes from the structured data in Assumption \ref{ass:distri}, showing the benefit of data augmentation and preprocessing.
In addition, adding a regularization term $\lambda\|\mathbf{w}\|^2$ would accelerate the convergence, because $\|\mathbf{w}\|$ increases much more slowly and the regularized loss is strongly convex, though we focus on the original problem rather than the regularized one.
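To see the $\mathcal{O}\left(\ln^2(1/\epsilon)\right)$ complexity in Theorem \ref{thm:linear-flow} explicitly: in the second phase, $1-\cos\theta(t)\leq \frac{2}{e^{A_2\sqrt{t-T+C_2}+B_2}+1}\leq\epsilon$ as soon as
\[ \sqrt{t-T+C_2} \geq \frac{\ln(2/\epsilon)-B_2}{A_2}, \qquad \text{i.e.,} \qquad t-T = \mathcal{O}\left(\ln^2(1/\epsilon)\right). \]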
\subsection{Gradient Descent}
Now we turn to the gradient descent setting based on the previous results. The difficulty we encounter is that the choice of the learning rate may break the directional monotonicity in Lemma \ref{lemma:angle_grad}. Fortunately, the first phase directional convergence in the gradient flow still holds.
\begin{thm}\label{thm:negative-angle}
Under Assumption \ref{ass:distri}, if $\theta(0)\neq \pi$, $\mathbf{v}^{\top}\mathbf{w}(0) < 0$, and setting $S_n^- :=\sum_{k=0}^{n-1} \frac{\eta_k}{\sqrt{A+\sum_{i=0}^{k}\eta_i^2}}$, we have that
\[ \cos\theta(n)\geq 1-\left(1-\cos\theta(0)\right) e^{-BS_n^-}, \]
until $\cos\theta(n)\geq 0$. Here $A=\frac{\|\mathbf{w}(0)\|^2}{c_0^2}$ and $B = \frac{1+\cos\theta(0)}{\pi}$.
\end{thm}
\begin{remark}
Obviously, $S_n^-<n$. We list several choices of $\{\eta_n\}_{n=1}^\infty$ below; a numerical check follows the remark:
\[ S_n^-=\left\{
\begin{aligned}
&\Theta(n), & & \eta_n=\Theta(q^n), q>1; \\
&\Theta(\sqrt{n}), & & \eta_n=\Theta(n^{\alpha}), \alpha > -1/2; \\
&\Omega(\sqrt{n/\ln(n)}), & & \eta_n=\Theta(n^{\alpha}), \alpha = -1/2; \\
&\Theta(n^{\alpha+1}), & & \eta_n=\Theta(n^{\alpha}), -1 < \alpha < -1/2; \\
&\Theta(\ln(n)), & & \eta_n=\Theta(n^{\alpha}), \alpha = -1; \\
&<\infty, & & \eta_n=\Theta(n^{\alpha}), \alpha < -1.
\end{aligned}
\right.\]
\end{remark}
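These growth rates are easy to verify numerically; the short script below (illustrative, taking $A=1$) computes $S_n^-$ for three of the schedules above.
\begin{verbatim}
import numpy as np

# Numerically check S_n^- = sum_k eta_k / sqrt(A + sum_{i<=k} eta_i^2)
# with A = 1 for three of the schedules listed in the remark.
def S_minus(etas, A=1.0):
    return np.cumsum(etas / np.sqrt(A + np.cumsum(etas**2)))

n = np.arange(1, 100_001, dtype=float)
for label, etas in [("eta_n = 1       ", np.ones_like(n)),
                    ("eta_n = n^(-1/2)", n ** -0.5),
                    ("eta_n = n^(-1)  ", n ** -1.0)]:
    s = S_minus(etas)
    print(label, " S_n^- at n=10^3:", round(s[999], 2), " n=10^5:", round(s[-1], 2))
\end{verbatim}
The outputs grow like $\Theta(\sqrt{n})$, roughly $\sqrt{n/\ln n}$, and $\Theta(\ln n)$ respectively, matching the table.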
Hence, when the weight $\mathbf{w}(n)$ stays in the `wrong' region with $\theta(n) > \pi/2$, a larger learning rate gives faster directional convergence to the region $\{\mathbf{w}: \cos\theta(\mathbf{w},\mathbf{v})\geq 0\}$. Unfortunately, when $\theta(n)\leq \pi/2$, this phase becomes unstable and depends heavily on the learning rate. To inherit the directional monotonicity, we need a sufficient condition over the whole training period. Moreover, we will show that this condition can be satisfied when the weight norm is large enough compared to the learning rate.
\begin{thm}[A sufficient convergence condition] \label{thm:suff}
Under Assumption \ref{ass:distri}, if there exists a $\delta>0$ such that $\forall \; n\in\mathbb{N}$,
\begin{equation}\label{eq:suff}
\|\mathbf{w}(n{+}1)\|+\overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n{+}1) \geq \frac{(1 {+} \delta)c_0\eta_n \cos\theta(n)}{\pi},
\end{equation}
then there exist constants $A, B, C>0$ such that
\[ \cos\theta(n) \geq 1-\left(1-\cos\theta(0)\right) e^{-BS_n^+}. \]
Here $S_n^+ := \sum_{k=0}^{n-1} \frac{\eta_k}{\sqrt{A+\sum_{i=0}^{k}\eta_i^2+C\eta_i}}$, $ A=\frac{\|\mathbf{w}(0)\|^2}{c_0^2}$, $B = \frac{\delta(1+\cos\theta(0))}{\pi+\delta\pi}$, and $C=\frac{0.6}{c_0^2}$.
\end{thm}
\begin{remark}
When $\|\mathbf{w}(n)\|\geq \eta_n c_0+(1+\delta)c_0\eta_n /(2\pi)$, Eq.~(\ref{eq:suff}) is satisfied because
\begin{equation*}
\begin{aligned}
& \|\mathbf{w}(n+1)\|+ \overline{\mathbf{w}}(n)^{\top} \mathbf{w}(n+1) \\
& \geq 2\left( \|\mathbf{w}(n)\|-\eta_n\|\nabla L(\mathbf{w}(n))\|\right)\\
& \geq (1+\delta)c_0\eta_n \cos\theta(n)/\pi,
\end{aligned}
\end{equation*}
where the first inequality follows from $\mathbf{w}(n{+}1) = \mathbf{w}(n) - \eta_n\nabla L(\mathbf{w}(n))$ and the triangle inequality, and the second uses $\|\nabla L(\mathbf{w}(n))\| \leq c_0$ and $\cos\theta(n)\leq 1$.
Once $\eta_n \leq \eta_+$, the bound $\|\mathbf{w}(n)\|\geq R_1:=\eta_+ c_0 +c_0\eta_+ /\pi$ (corresponding to $\delta=1$) is enough to derive the sufficient convergence condition.
\end{remark}
\begin{remark}
Obviously, $S_n^+<n$. We list several choices of $\{\eta_n\}_{n=1}^\infty$:
\[ S_n^+=\left\{ \begin{aligned}
&\Theta(n), & & \eta_n=\Theta(q^n), q>1; \\
&\Theta(\sqrt{n}), & & \eta_n=\Theta(n^{\alpha}), \alpha \geq 0; \\
&\Theta(n^{(\alpha+1)/2}), & & \eta_n=\Theta(n^{\alpha}), -1 < \alpha < 0; \\
&\Theta(\sqrt{\ln(n)}), & & \eta_n=\Theta(n^{\alpha}), \alpha = -1; \\
&<\infty, & & \eta_n=\Theta(n^{\alpha}), \alpha < -1.
\end{aligned}
\right. \]
\end{remark}
To derive a more practical result from the sufficient convergence condition, we show directional convergence with bounded learning rates below. The idea is to exploit the increasing property of $\mathbf{w}^\top\mathbf{v}$, which provides enough directionally monotone update steps. Similar results have been shown in more general settings under benign initialization and positive-margin datasets \cite{ji2020directional, lyu2019gradient, nacson2019stochastic}.
\begin{thm}\label{thm:angle-convergence}
Under Assumption \ref{ass:distri}, for an arbitrary choice of learning rate sequence $\{\eta_n\}_{n=1}^{\infty}$ with $\eta_{+} \geq \eta_n \geq \eta_{-}>0$, there exists a subsequence $\{n_k\}_{k=1}^\infty$ along which $\cos\theta(n_k)$ converges linearly to $1$.
\end{thm}
\begin{remark}
In general, a $\beta$-smooth convex objective requires the learning rate constraint $\eta\leq \frac{2}{\beta}$ to guarantee convergence. In our setting, however, this constraint is unnecessary because the goal is to learn the direction rather than to decrease the loss function itself.
\end{remark}
From Theorem \ref{thm:angle-convergence}, we always have directional convergence for a bounded learning rate sequence. Thus, motivated by the unbounded weight norm and the increasing projection onto $\mathbf{v}$ from Proposition \ref{prop:grad-norm-prop}, we always reach a time at which $\|\mathbf{w}(n)\|$ is large enough, which yields the sufficient convergence condition and leads to logarithmic directional convergence.
\begin{corollary}\label{corr:angle-convergence}
Under the assumptions in Theorem \ref{thm:angle-convergence}, there exists $n_0 > 0$ such that $\|\mathbf{w}(n_0)\|\cos\theta(n_0)\geq \eta_+c_0+\eta_+c_0/\pi$, and gradient descent will give logarithmic convergence of $\cos\theta(n)$ to $1$ from $n_0$.
\end{corollary}
Compared to the previous fixed-learning-rate convergence results for linear predictors \cite{soudry2018implicit, nacson2019convergence, nacson2019stochastic}, we derive directional convergence under more flexible bounded learning rates.
Since our problem has an inductive bias at the origin, we also present in Appendix \ref{app:init} two initialization methods that guarantee convergence from the beginning.
\section{Deep Linear Networks}\label{sec:deep-linear}
Building on the results for the linear predictor and on previous work on deep linear networks, we extend our analysis to deep linear networks and leave details to Appendix \ref{app:deep-linear}. For an $N$-layer linear network $\phi(\mathbf{x}, \mathbf{w})=W_N\dots W_1\mathbf{x}$ where $\mathbf{w}=\left(W_N, \ldots, W_1\right)$, the objective becomes
\begin{equation*}
\min_{\mathbf{w}} L^N(W_N,\dots,W_1) :={\mathbb E}_{\mathbf{x}\sim\mathcal{D}}\ln\left[1{+}e^{-y(\mathbf{x})W_N\cdots W_1\mathbf{x}}\right].
\end{equation*}
Every such network represents a linear mapping given by $\mathbf{w}_e = \left(W_N\cdots W_1\right)^\top\in \mathbb{R}^d$:
\[ L^N(W_N, \dots, W_1){=} L^1(\mathbf{w}_e){=}\mathbb{E}_{\mathbf{x}\sim\mathcal{D}} \ln\left(1{+}e^{-y(\mathbf{x})\mathbf{w}_e^\top\mathbf{x}}\right). \]
A key tool for analyzing the induced flow for $\mathbf{w}_e$ is established
in Claim 2 of \citet{arora2018optimization}. If the initial balancedness conditions:
\[ W_{j+1}(0)^\top W_{j+1}(0) = W_j(0) W_j(0)^\top, j=1,\dots, N-1, \]
hold, then we have that
\[ \frac{\partial \mathbf{w}_e}{\partial t} {=} -\|\mathbf{w}_e\|^{2-\frac{2}{N}} \left(\frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}}{+}(N{-}1)\overline{\mathbf{w}}_e\overline{\mathbf{w}}_e^\top \frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}} \right). \]
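For intuition, the induced dynamics above can be discretized directly. The following sketch performs one explicit Euler step on $\mathbf{w}_e$; here \texttt{grad} stands for $\frac{d L^1(\mathbf{w}_e)}{d\mathbf{w}}$, supplied by the caller, and the step size \texttt{dt} is an illustrative choice.
\begin{verbatim}
import numpy as np

def induced_euler_step(w_e, grad, N, dt=1e-2):
    # one Euler step of
    # dw_e/dt = -||w_e||^(2-2/N) (grad + (N-1) wbar wbar^T grad),
    # assuming w_e != 0
    norm = np.linalg.norm(w_e)
    wbar = w_e / norm
    drift = grad + (N - 1) * wbar * (wbar @ grad)
    return w_e - dt * norm ** (2.0 - 2.0 / N) * drift
\end{verbatim}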
Similarly, we can establish the monotone directional improvement in the following lemma.
\begin{lemma} \label{lemma:deep_linear_grad}
Under Assumption \ref{ass:distri} and initial balancedness condition, if $\mathbf{w}_e(t) \neq \mathbf{0}$, then
\[ \frac{\partial \cos\theta(\mathbf{w}_e(t), \mathbf{v})}{\partial t} = \frac{c_0\sin^2\theta(\mathbf{w}_e(t), \mathbf{v})}{\pi}\|\mathbf{w}_e(t)\|^{1-\frac{2}{N}}.\]
\end{lemma}
The main difference from the shallow linear predictor is that the dependence on $\|\mathbf{w}_e\|$ is reversed: a large $\|\mathbf{w}_e\|$ gives faster convergence for the deep linear network when $N \geq 3$, while for $N=1$ the effect is clearly the opposite. Thanks to the similar expression for the gradient of the induced weight norm, we still have at least two-phase directional convergence. The only potential difficulty is that $\mathbf{w}_e(t)$ may converge to the stationary point at the origin (at which the angle is not well defined). Fortunately, this cannot happen by Lemma \ref{lemma:deep-linear-norm}.
\begin{lemma}\label{lemma:deep-linear-norm}
Under Assumption \ref{ass:distri} and initial balancedness condition, if $\mathbf{w}_e(0) \neq \mathbf{0}$, and $N > 2$, then
\[ \left(\|\mathbf{w}_e(0)\|^{\frac{2}{N}} + 0.6t\right)^{\frac{N}{2}}\geq \|\mathbf{w}_e(t)\|, \]
\[ \|\mathbf{w}_e(t)\|\geq \left(\|\mathbf{w}_e(0)\|^{\frac{2}{N}-1} + (N-2)c_0t\right)^{-\frac{N}{N-2}}>0. \]
\end{lemma}
\begin{thm}\label{thm:deep-linear-convergence1}
Under Assumption \ref{ass:distri} and initial balancedness condition, if $N>2$, $\mathbf{w}_e(0) \neq \mathbf{0}$ and $\theta(0)\neq \pi$, then we obtain two-phase convergence as follows.
If $\partial \|\mathbf{w}_e(0)\|^2/\partial t < 0$, then there exists $T>0$ such that $\partial \|\mathbf{w}_e(T)\|^2/\partial t = 0$, otherwise, we set $T=0$. With such a $T$, it holds that
\[ \cos\theta(t)\geq \left\{ \begin{array}{ll}
1-\frac{2}{C_1(A_1t/B_1+1)^\alpha+1}, & t\leq T, \\
1-\frac{2}{1+e^{A_2(t-T)+B_2}}, & t\geq T.
\end{array} \right.
\]
Here $A_1{=}(N{-}2)c_0$, $B_1{=}\|\mathbf{w}_e(0)\|^{\frac{2}{N}-1}$, $C_1{=}\frac{1+\cos\theta(0)}{1-\cos\theta(0)}$, $A_2{=}\frac{2c_0\|\mathbf{w}_e(T)\|^{2-\frac{2}{N}}}{\pi}$, $B_2{=}-2\ln \left|\tan\frac{\theta(T)}{2}\right|$, $\alpha=2c_0/\pi$.
In addition, we have the upper bound that
\[ \cos\theta(t) \leq 1-\dfrac{2}{e^{F[\left(0.6t+D\right)^{N/2}-D^{N/2}]+E}+1}, \]
where $D{=}\|\mathbf{w}_e(0)\|^{\frac{2}{N}}$, $E{=}-2\ln \left|\tan\frac{\theta(0)}{2}\right|$, $F=\frac{4c_0}{0.6N\pi}$.
\end{thm}
\begin{remark}
As $N$ increases, $A_1=(N-2)c_0$ also increases. Thus the lower bound of $\cos\theta(t)$ increases and $\cos\theta(t)$ converges faster, which is consistent with the implicit acceleration of deeper networks in \citet{arora2018optimization}.
In addition, we can see that $\|\mathbf{w}_e(t)\|$ behaves similarly to the case $N=1$, in that it first decreases (if at all) and then increases, showing that if we start with $\mathbf{w}_e(0) \neq k\mathbf{v}$ for $k\leq 0$, we never converge to the origin, by the proof of Theorem \ref{thm:deep-linear-convergence1}. When $\theta(0) = \pi$, then $\theta(t) = \pi$ for all $t\geq 0$, and $\mathbf{w}_e(t) \to \mathbf{0}$ but never reaches the origin.
\end{remark}
\begin{remark}
Note that $\cos\theta(t) \to 1$ except when $\mathbf{w}_e(0) =k\mathbf{v}$ for some $k\leq 0$. We can also guarantee $\|\mathbf{w}_e(t)\| \to \infty$ because $\mathbf{v}^\top\mathbf{w}_e(t)$ is increasing after some time (though not always). Hence, a larger norm of $\mathbf{w}_e(t)$ gives a faster convergence rate. However, a larger norm also slows down the growth of the weight norm itself, eventually at a negative exponential rate, which would result in $\|\mathbf{w}_e(t)\|=\Theta(\ln(t))$ as in \citet{soudry2018implicit, nacson2019convergence}. We may treat this as a third phase, but by then the direction of $\mathbf{w}_e(t)$ has essentially reached the target.
\end{remark}
\section{Conclusion}
In this work, we have studied the behavior of gradient flow and gradient descent on separable data with zero margin under population loss in binary linear classification tasks.
We have proven logarithmic directional convergence for (deep) linear networks and for shallow non-linear networks with constrained initialization, which is much faster than in the general finite-sample setting.
The main mechanism behind our proofs is the slow growth of the weight norm compared to the rapid change of the angle toward the target.
Echoing the `lazy training' phenomenon observed in previous work on over-parameterization, we have found a similar interplay between the norm and the direction of the weights in under-parameterized classification tasks under benign data distributions.
Moreover, we have found that a large (even unbounded) learning rate in the early training phase yields rapid directional improvement for linear activations. Non-linear activations, however, disturb the stable improvement of the direction because of the interaction among the weights, though the learning rate decay broadly used in practice may serve as a valid substitute.
We hope that our specific view of directional convergence brings a better understanding of the optimization dynamics of gradient methods on neural networks in classification tasks.
\pagebreak
\section{Two-layer Non-linear Network with Two Hidden Nodes} \label{sec:shallow-non-linear}
Despite our directional understanding of (deep) linear networks, nonlinear networks are conceptually more difficult.
A popular line of recent developments shows how gradient methods on highly over-parameterized neural networks can learn various target functions, e.g., under the cross-entropy loss, in polynomial time. Here we discuss a simple, under-parameterized setting.
Using one hidden neuron with a nonlinear activation $\sigma(\cdot)$ would constrain the classification probability and may not guarantee the recovery of $\mathbf{v}$. We therefore consider the case of two hidden neurons. Moreover, we fix the second layer with different signs to describe the optimization dynamics of the first-layer weights. Notice that homogeneity may be broken if we do not use a homogeneous activation. Now the classifier becomes
$\phi(\mathbf{x}) = \sigma(\mathbf{w}^{\top}_1\mathbf{x})-\sigma(\mathbf{w}^{\top}_2\mathbf{x})$.
Then the objective becomes:
$\min_{\mathbf{w}_1, \mathbf{w}_2} L(\mathbf{w}_1, \mathbf{w}_2) := \mathbb{E}_{\mathbf{x}\sim\mathcal{D}}\ln\left(1+e^{-y(\mathbf{x})\phi(\mathbf{x})}\right)$.
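For concreteness, here is a minimal sketch of this two-neuron model together with a Monte Carlo estimate of the population loss, again assuming (for illustration only) Gaussian inputs with labels $y(\mathbf{x})=\mathrm{sign}(\mathbf{v}^\top\mathbf{x})$ as an instance of Assumption \ref{ass:distri}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def phi(x, w1, w2, sigma=lambda z: np.maximum(z, 0.0)):
    # difference of two hidden neurons; second layer fixed to (+1, -1)
    return sigma(x @ w1) - sigma(x @ w2)

def pop_loss(w1, w2, v, m=50000):
    # Monte Carlo estimate of E ln(1 + exp(-y * phi(x)))
    x = rng.standard_normal((m, len(v)))
    y = np.sign(x @ v)
    return np.mean(np.log1p(np.exp(np.clip(-y * phi(x, w1, w2), -30, 30))))
\end{verbatim}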
We thus make the following assumption on the activation to guarantee recovery of the target.
\begin{assume}\label{assume:activation} The activation
$\sigma\colon \mathbb{R}\to\mathbb{R}$ is monotonically non-decreasing and satisfies $\inf_{0 \leq z\leq M} \sigma'(z) \geq \gamma(M)>0$ for all $M > 0$, a.e.\ (note that, by the Lebesgue differentiation theorem, $\sigma'(z)$ is defined a.e.).
\end{assume}
Assumption \ref{assume:activation} for the activation function covers most activations used in practice such as ReLU and standard sigmoidal activations (for which the derivative in any bounded interval is lower bounded by a positive constant).
We denote $\nabla_{i} L(\mathbf{w}) {:=} \nabla_{\mathbf{w}_i} L(\mathbf{w}_1, \mathbf{w}_2)$ for $i=1, 2$.
Similarly, we consider gradient flow and gradient descent methods. To simplify the proofs, we make the following initialization assumption:
\begin{assume}\label{assume:same_plane}
Assume that $\mathbf{w}_1(0)$, $\mathbf{w}_2(0)$, and $\mathbf{v}$ are in the same plane, i.e., $\mathbf{v} \in \mathrm{span}\{\mathbf{w}_1(0), \mathbf{w}_2(0)\}$.
\end{assume}
Assumption \ref{assume:same_plane} on the initial weights guarantees that $\mathbf{w}_i(t)$ (or $\mathbf{w}_i(n)$) stay in the plane determined by the initial weights, because $\nabla_i L(\mathbf{w})\in \text{span}\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{v}\}$ when Assumption \ref{ass:distri} holds. Thus the gradient and the iterative methods are well defined. We denote $\theta_i=\theta(\mathbf{w}_i, \mathbf{v})$, $\theta_i(t)=\theta(\mathbf{w}_i(t), \mathbf{v})$, $\theta_i(n)=\theta(\mathbf{w}_i(n), \mathbf{v}), \ i=1,2$, as described earlier. Before deriving our results, we collect some intuitions shared with the linear case from the previous sections.
\begin{prop}\label{prop:grad-update}
Under Assumption \ref{assume:activation} and if $P\left(\mathbf{x}=\mathbf{0}\right)<1$, then $\mathbf{v}^\top\nabla_1 L(\mathbf{w})<0$ and $\mathbf{v}^\top\nabla_{2} L(\mathbf{w})>0$.
Hence $\mathbf{v}^\top\mathbf{w}_1(t)$ is increasing, $\mathbf{v}^\top\mathbf{w}_2(t)$ is decreasing, and $\|\mathbf{w}_1(t)\|$ or $\|\mathbf{w}_2(t)\|$ is unbounded. Similarly, $\mathbf{v}^\top\mathbf{w}_1(n)$ is increasing, $\mathbf{v}^\top\mathbf{w}_2(n)$ is decreasing, and $\|\mathbf{w}_1(n)\|$ or $\|\mathbf{w}_2(n)\|$ is unbounded when a lower bounded learning rate sequence $\eta_n\geq\eta_->0$ is applied.
\end{prop}
In addition, we emphasize that recovery happens only when $\mathbf{w}_1{-}\mathbf{w}_2=k\mathbf{v}$ for some $k>0$, by the monotonicity of the activation.
However, when using an activation that is partially zero (such as ReLU), we need a more rigorous analysis of recovery.
At first glance, one would choose $\mathbf{w}_1{-}\mathbf{w}_2$ as the quantity to track, but the experiments in Section \ref{sec:exp} reveal a complex process.
On second thought, we instead find monotonicity of $\theta_1(t)$ and $\theta_2(t)$ in some scenarios.
We make use of Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane} to prove the following key technical lemma, which implies that the weights move toward the `correct' direction for at least half of the possible configurations of $\mathbf{w}_1, \mathbf{w}_2$ and $\mathbf{v}$. We defer the proofs to Appendix \ref{app:shallow-nonlinear}.
\begin{lemma} \label{lemma:ac-angle-grad}
Under Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane}, suppose $\mathbf{w}_1 \neq \mathbf{0}$ and $\left(\mathbf{v}-(\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)^\top\left(\mathbf{w}_2-(\overline{\mathbf{w}}_1^\top\mathbf{w}_2)\overline{\mathbf{w}}_1\right)\geq 0$, which means that $\mathbf{w}_2$ and $\mathbf{v}$ are in the same half-plane separated by $\mathbf{w}_1$. Then we have that
\[ {-} \left(\mathbf{v} {-} (\overline{\mathbf{w}}_1^\top\mathbf{v})\overline{\mathbf{w}}_1\right)^\top\nabla_{1} L(\mathbf{w}) \geq \frac{\nu(\|\mathbf{w}_1\sin\theta_1\|)}{2\pi} \sin^2 \theta_1, \]
where $\nu(z) := \mathbb{E}_{\mathbf{x} \sim\mathcal{D}_2} \left[\sigma(z\|\mathbf{x}\|)-\sigma(-z\|\mathbf{x}\|) \right] / z ,\; z > 0$.
Similarly, if $\left(\mathbf{v} {-} (\overline{\mathbf{w}}_2^\top\mathbf{v}) \overline{\mathbf{w}}_2\right)^\top\left(\mathbf{w}_1 {-} (\overline{\mathbf{w}}_2^\top\mathbf{w}_1) \overline{\mathbf{w}}_2\right)\leq 0$ and $\mathbf{w}_2 \neq \mathbf{0}$, which means that $\mathbf{w}_1$ and $\mathbf{v}$ are in different half-planes separated by $\mathbf{w}_2$, then
\[ \left(\mathbf{v}-(\overline{\mathbf{w}}_2^\top\mathbf{v})\overline{\mathbf{w}}_2\right)^\top\nabla_{2} L(\mathbf{w}) \geq \frac{\nu(\|\mathbf{w}_2\sin\theta_2\|)}{2\pi} \sin^2 \theta_2. \]
\end{lemma}
Lemma \ref{lemma:ac-angle-grad} establishes the positive (or negative) increment of the angle between the weight and the target. We note that the optimization analysis for general activations may suffer from exponentially small tails (as for sigmoid and tanh), yielding a slow directional convergence rate.
In addition, if $\sigma(z)$ is bounded (in which case it no longer fits the statistical model of logistic regression) and strictly monotonically increasing, recovery happens when $\mathbf{w}_1{-}\mathbf{w}_2$ aligns with the positive direction of $\mathbf{v}$.
An interesting question for future work is to obtain the directional convergence of $\mathbf{w}_1{-}\mathbf{w}_2$ under bounded and increasing activations. Here we focus on unbounded activations and total alignment for recovery.
\subsection{Convergence for ReLU Activation}
In this section we consider the standard ReLU function, which is broadly employed in many neural networks. More precisely, since the ReLU function is not differentiable at $0$, we impose the convention that $\sigma'(0)$ is some fixed positive number so as to meet Assumption \ref{assume:activation}. With such a ReLU function, we are able to provide logarithmic directional convergence as in the linear case, using strengthened properties of the objective and the optimization dynamics.
\begin{prop}\label{prop:unbounded}
Under Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane}, if $\sigma(z)=\max\{0,z\}$, then $\|\mathbf{w}_1(t)\|$ and $\|\mathbf{w}_2(t)\|$ are unbounded. Moreover, $\|\mathbf{w}_1(n)\|$ and $\|\mathbf{w}_2(n)\|$ are unbounded when a lower bounded learning rate sequence $\eta_n\geq\eta_->0$ is applied.
\end{prop}
\begin{lemma}\label{lemma:relu-norm}
$\partial \left(\|\mathbf{w}_1(t)\|^2+\|\mathbf{w}_2(t)\|^2\right)/\partial t \leq 0.6$.
\end{lemma}
\vspace{-8pt}
\paragraph{Gradient Flow.} Now we give the directional convergence derived from Lemmas \ref{lemma:ac-angle-grad} and \ref{lemma:relu-norm} when $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ are initialized in different half-planes separated by $\mathbf{v}$.
\begin{thm} \label{thm:diff-init}
Under Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane}, if $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ lie in different half-planes separated by $\mathbf{v}$, then for any $t>0$, there exists $i\in\{1,2\}$ such that
\[ (-1)^{i-1}\cos\theta_i(t)\geq 1-\frac{2}{e^{A\sqrt{t+B}+C}+1}. \]
Here $A{=}\frac{2c_0}{\pi\sqrt{0.6}}, B{=}\frac{r(0)^2}{0.6}$, $r(0)^2{=}\|\mathbf{w}_1(0)\|^2{+}\|\mathbf{w}_2(0)\|^2$, and $C=-\frac{2c_0r(0)}{0.6\pi}+2\ln\tan\frac{\max\{\theta_1(0),\pi-\theta_2(0)\}}{2}$.
\end{thm}
\begin{remark}
Theorem \ref{thm:diff-init} gives a convergence guarantee for at least one of the weights $\mathbf{w}_1(t)$ and $\mathbf{w}_2(t)$. If we know the inductive bias introduced by the learning task, this is enough to finish the learning task. Informally, we describe the complex process of the case in which one weight has already converged but the other has not. That is,
\end{remark}
\begin{thm} \label{thm:diff-init2}
Under Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane}, if $\theta_1(0) = 0$, then for some $t_0\geq0$, $\mathbf{w}_2(t_0)^\top\mathbf{v}\leq 0$; and if $\mathbf{w}_2(t_0)\neq \mathbf{0}$, then for $t>t_0$, $1+\cos\theta_2(t)$ may go through convergence rates ranging between $-\Theta(\ln t)$ and $e^{-\Theta(\sqrt{t})}$.
\end{thm}
For the remaining initialization case, in which $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ are initialized in the same half-plane separated by $\mathbf{v}$, we only show that $\theta_1(t) \leq \theta_2(t)$ holds after some time, as follows; the remaining dynamics of directional convergence appear complex, without distinct phases, as we illustrate experimentally in Section \ref{sec:exp}.
\begin{thm}\label{thm:same-init}
Under Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane}, if $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ lie in the same half-plane separated by $\mathbf{v}$, and $\theta_1(0) > \theta_2(0)$, then $\theta_1(t)$ decreases and $\theta_2(t)$ increases. Moreover, $\theta_1(t) \leq \theta_2(t)$ after time $\mathcal{O}\left(\ln^2\delta \right)$, where $\delta = \cos\theta_2(0)-\cos\theta_1(0)$.
\end{thm}
\paragraph{Gradient Descent.} We close our theoretical discussion by extending the gradient flow result to the gradient descent method. On account of the unresolved initialization case in Theorem \ref{thm:same-init}, we are only able to give directional convergence for constrained learning rates. The main idea follows the linear setting with a more rigorous analysis.
\begin{thm}\label{thm:gd-relu}
Suppose that Assumptions \ref{ass:distri}, \ref{assume:activation} and \ref{assume:same_plane} hold, that $\mathbf{w}_1(0)$ and $\mathbf{w}_2(0)$ lie in different half-planes separated by $\mathbf{v}$, and that $\{\eta_n\}_{n=1}^\infty$ is such that $\mathbf{w}_i(n)$ never reaches the other half-plane separated by $\mathbf{v}$. Then for each step $n$, there exists $i\in\{1,2\}$ such that
\[ (-1)^{i-1}\cos\theta_i(n) \geq 1-(1+(-1)^i\cos\theta_i(0))e^{-BS_n}, \]
where $ S_n = \sum_{k=0}^{n-1} \frac{\eta_k}{\sqrt{\|\mathbf{w}_1(0)\|^2 + \|\mathbf{w}_2(0)\|^2+\sum_{i=0}^{k}2\eta_i^2c_0^2+0.6\eta_i}}$ and $B=c_0\min\{1-\cos\theta_1(0), 1+\cos\theta_2(0)\}/(4\pi)$.
\end{thm}
\begin{remark}
We may wonder when $\{\eta_n\}_{n=1}^\infty$ guarantees that $\mathbf{w}_i(n)$ never reaches the other half-plane separated by $\mathbf{v}$. Here we show that choosing $\eta_n=\Theta(1/n)$ may guarantee this condition. Notice that in this case $S_n=\Theta(\sqrt{\ln(n)})$. Taking $\mathbf{w}_1(n)$ as an example, we have $\cos\theta_1(n) = 1- \mathcal{O}(e^{-B_1S_n})$. Then $\sin\theta_1(n) = \mathcal{O}(e^{-B_1S_n/2})=\mathcal{O}(e^{-c\sqrt{\ln(n)}})$ and $\|\mathbf{w}_1(n)\|=\mathcal{O}(\sqrt{\ln(n)})$. Hence $ \|\mathbf{w}_1(n)\|\sin\theta_1(n) = \mathcal{O}(\sqrt{\ln(n)}e^{-c\sqrt{\ln(n)}})$. We note that $\sqrt{\ln(n)}e^{-c\sqrt{\ln(n)}}\gg 1/n$, showing that $\mathbf{w}_1(n)$ may stay in the same half-plane separated by $\mathbf{v}$ with directional convergence $e^{-\Omega(\sqrt{\ln(n)})}$.
Moreover, learning rate decay can also be used as in practice: at the beginning, $\|\mathbf{w}_1\|\sin\theta_1(n)$ and $\|\mathbf{w}_2\|\sin\theta_2(n)$ are large, so we can use a constant learning rate to obtain $S_n = \Theta(\sqrt{n})$, and then decay the learning rate to reduce oscillation around the target direction.
We believe this condition can be removed given a more elaborate understanding of the remaining initialization case.
\end{remark}
\section{Introduction}
Indexing data structures traditionally play a central role in algorithms on strings and in information retrieval. Due to constantly growing volumes of data in applications, the attention of researchers in the last decades was naturally attracted to small-space indexes. In this paper we study two closely related small-space indexing data structures: a sparse suffix tree and a longest common extension (LCE) index. We investigate them in a general framework of (deterministic) locally consistent parsings that was developed by Cole and Vishkin~\cite{ColeVishkin}, Je{\.z}~\cite{Jez,Jez2,Jez3,Jez4}, and others~\cite{AlstrupBrodalRauhe,FischerIKoppl,GanczorzGawrychowskiJezKociumaka,GawrychowskiEtAl,GoldbergPlotkinShannon,MelhornSundarUhrig,NishimotoEtAl,SahinalpVishkin} (the list is not exhaustive) and was recently revitalized in the works of Birenzwige et al.~\cite{BirenzwigeEtAl} and Kempa and Kociumaka~\cite{KempaKociumaka}, where two new potent concepts of partitioning and synchronizing sets were introduced.
The sparse suffix tree for a given set of $b$ suffixes of a string is a compacted trie built on these suffixes. It can be viewed as the suffix tree from which all suffixes not from the set were removed (details follow). The tree takes $\Oh(b)$ space on top of the input string and can be easily constructed in $\Oh(n)$ time from the suffix tree, where $n$ is the length of the string. One can build the suffix tree in $\Oh(n)$ time~\cite{Farach} provided the letters of the string are sortable in linear time. However, if no more than $\Oh(b)$ space is available on top of the input, then in general there is not enough space for the full suffix tree and the problem thus becomes much more difficult. The $\Oh(b)$ bound is optimal since the tree itself takes $\Oh(b)$ space. This construction problem with restricted $\Oh(b)$ space was posed by K{\"a}rkk{\"a}inen and Ukkonen~\cite{KarkkainenUkkonen}, who showed how to solve it in linear time for the case of evenly spaced $b$ suffixes. In a series of works~\cite{BilleEtAl3,BirenzwigeEtAl,FischerIKoppl,GawrychowskiKociumaka,IEtAl,KarkkainenSandersBurkhardt}, the problem was settled for the case of randomized algorithms: an optimal linear-time $\Oh(b)$-space Monte Carlo construction algorithm for the sparse suffix tree was proposed by Gawrychowski and Kociumaka~\cite{GawrychowskiKociumaka} and an optimal linear-time $\Oh(b)$-space Las-Vegas algorithm was described by Birenzwige et al.~\cite{BirenzwigeEtAl}. The latter authors also presented the best up-to-date deterministic solution that builds the sparse suffix tree within $\Oh(b)$ space in $\Oh(n \log\frac{n}b)$ time~\cite{BirenzwigeEtAl} (all logarithms in the paper are in base~2 unless explicitly stated otherwise). All these solutions assume (as we do too) that the input string is readonly and its alphabet is $\{0,1,\ldots,n^{\Oh(1)}\}$; the case of rewritable inputs is apparently very different, as was shown by Prezza~\cite{Prezza}.
An LCE index preprocesses a readonly input string of length $n$ so that one can answer queries $\lce(p,q)$, for any positions $p$ and $q$, computing the length of the longest common prefix of the suffixes starting at $p$ and $q$. It is well known that the queries can be answered in $\Oh(1)$ time provided $\Oh(n)$ space is used~\cite{HarelTarjan}. In~\cite{BilleEtAl2} Bille et al.~presented an LCE index that, for any given user-defined parameter $b$, occupies $\Oh(b)$ space on top of the input string and answers queries in $\Oh(\frac{n}{b})$ time. In~\cite{Kosolobov} it was proved that this time-space trade-off is optimal provided $b \ge \Omega(n / \log n)$ (it is conjectured that the same trade-off lower bound holds for a much broader range of values $b$; a weaker trade-off appears in~\cite{BilleEtAl4,BrodalDavoodiRao}). In view of these lower bounds, it is natural to ask how fast one can construct, for any parameter $b$, an LCE index that can answer queries in $\Oh(\frac{n}{b})$ time using $\Oh(b)$ space on top of the input. The space $\Oh(b)$ is optimal for this query time and the construction algorithm should not exceed it. The issue with the data structure of~\cite{BilleEtAl2} is that its construction time is unacceptably slow, which motivated a series of works trying to solve this problem. As in the case of sparse suffix trees, the problem was completely settled in the randomized setting: an optimal linear-time $\Oh(b)$-space Monte Carlo construction algorithm for an LCE index with $\Oh(\frac{n}{b})$-time queries was presented by Gawrychowski and Kociumaka~\cite{GawrychowskiKociumaka} and a Las-Vegas construction with the same time and space was proposed by Birenzwige et al.~\cite{BirenzwigeEtAl} provided $b \ge \Omega(\log^2 n)$. The best deterministic solution is also presented in~\cite{BirenzwigeEtAl} and runs in $\Oh(n\log\frac{n}{b})$ time answering queries in slightly worse time $\Oh(\frac{n}{b}\sqrt{\log^* n})$ provided $b \ge \Omega(\log n)$ (the previous best solution was from~\cite{TanimuraEtAl} and it runs in $\Oh(n\cdot\frac{n}{b})$ time but, for some exotic parameters $b$, has slightly better query time); see Table~\ref{tbl:results}. The input string is readonly in all these solutions and the alphabet is $\{0,1,\ldots,n^{\Oh(1)}\}$; the case of rewritable inputs differs~\cite{Prezza}. A related (but different) problem of LCE indexes in compressed space was studied in~\cite{I,TanimuraEtAl2}.
For a broad range of values $b$, we settle both construction problems, for sparse suffix trees and LCE indexes, in $\Oh(b)$ space in the deterministic case. Specifically, given a readonly string of length $n$ over the alphabet $\{0,1,\ldots,n^{\Oh(1)}\}$, we present two algorithms: one that constructs the sparse suffix tree, for any user-defined set of $b$ suffixes such that $b \ge \Omega(\log^2 n)$, in $\Oh(n \log_b n)$ time using $\Oh(b)$ space on top of the input; and another that constructs an LCE index with $\Oh(\frac{n}{b})$-time queries, for any parameter $b$ such that $b \ge \Omega(\log^2 n)$, in $\Oh(n \log_b n)$ time using $\Oh(b)$ space on top of the input. This gives us optimal $\Oh(b)$-space solutions with $\Oh(\frac{1}{\epsilon} n) = \Oh(n)$ time when $b \ge n^\epsilon$, for constant $\epsilon > 0$, which arguably includes most interesting cases. The comparison of our result for an LCE index to best known solutions is given in Table~\ref{tbl:results}.
\begin{table}[ht]
\caption{LCE indexes deterministically constructible in $\Oh(b)$ space, for $b \ge \Omega(\log^2 n)$.}
\begin{center}
\begin{tabular}{r|c|l}\hline
Query time & Construction in $\Oh(b)$ space & Algorithm \\\hline
$\Oh(\frac{n}{b} \log\min\{b,\frac{n}{b}\})$ & $\Oh(n\cdot \frac{n}{b})$ & \cite{TanimuraEtAl} \\
$\Oh(\frac{n}{b} \sqrt{\log^* n})$ & $\Oh(n\log\frac{n}{b})$ & \cite{BirenzwigeEtAl} \\
$\bm{\Oh(\frac{n}{b})}$ & $\bm{{\Oh(n\log_b n)}}$ & Theorem~\ref{thm:main-theorem} \\\hline
\end{tabular}
\label{tbl:results}
\end{center}
\end{table}
In order to achieve these results, we develop a new algorithm that, for any given parameter $b \ge \Omega(\log^2 n)$, constructs a so-called $\tau$-partitioning set of size $\Oh(b)$ with $\tau = \frac{n}{b}$. This result is of independent interest.
\subparagraph{Techniques.}
The core of our solution is a version of locally consistent parsing developed by Birenzwige et al.~\cite{BirenzwigeEtAl}, the so-called $\tau$-partitioning sets (unfortunately, we could not adapt the neater $\tau$-synchronizing sets from~\cite{KempaKociumaka} to the deterministic case). It was shown by Birenzwige et al.~that an $\Oh(b)$-space construction of a sparse suffix tree or an LCE index can be performed in linear time provided a $\tau$-partitioning set of size $\Oh(b)$ with $\tau = \frac{n}{b}$ is given. We define a variant of $\tau$-partitioning sets and, for completeness, repeat the argument of Birenzwige et al.~with minor adaptations to our case. The bulk of the text is devoted to the description of an $\Oh(b)$-space algorithm that builds a (variant of) $\tau$-partitioning set of size $\Oh(b)$ with $\tau = \frac{n}{b}$ in $\Oh(n\log_b n)$ time provided $b \ge \Omega(\log^2 n)$, which is the main result of the present paper.
Our solution combines two well-known approaches to deterministic locally consistent parsings: the \emph{deterministic coin tossing} introduced by Cole and Vishkin~\cite{ColeVishkin} and developed in~\cite{AlstrupBrodalRauhe,FischerIKoppl,GanczorzGawrychowskiJezKociumaka,GawrychowskiEtAl,GoldbergPlotkinShannon,MelhornSundarUhrig,NishimotoEtAl,SahinalpVishkin}, and the \emph{recompression} invented by Je{\.z}~\cite{Jez4} and studied in~\cite{I,Jez,Jez2,Jez3}. The high-level idea is first to use Cole and Vishkin's technique that constructs a $\tau$-partitioning set of size $\Oh(b\log^* n)$ where $\tau =\frac{n}{b}$ (in our algorithm the size is actually $\Oh(b\log\log\log n)$ since we use a ``truncated'' version of Cole and Vishkin's bit reductions); second, instead of storing the set explicitly, which is impossible in $\Oh(b)$ space, we construct a string $R$ of length $\Oh(b\log^* n)$ in which every letter corresponds to a position of the set and occupies $o(\log\log n)$ bits so that $R$ takes $o(b\log^* n \log\log n)$ bits in total and, thus, can be stored in $\Oh(b)$ machine words of size $\Oh(\log n)$ bits; third, Je{\.z}'s recompression technique is iteratively applied to the string $R$ until $R$ is shortened to a length $\Oh(b)$; finally, the first technique generating a $\tau$-partitioning set is performed again but this time we retain and store explicitly those positions that correspond to surviving letters of the string $R$. There are many hidden obstacles on this path and, because of the resulting internal complications in the actual scheme, our solution in its present form is, unfortunately, of purely theoretical value (in contrast, existing randomized results on synchronizing sets~\cite{DinklageEtAl,KempaKociumaka} seem quite practical).
The paper is organized as follows. In Section~\ref{sec:partitioning-sets} we define $\tau$-partitioning sets and, in essence, repeat an argument from~\cite{BirenzwigeEtAl} showing how one can use them in order to build a sparse suffix tree or an LCE index. Section~\ref{sec:vishkin-process} describes the first stage of construction of a $\tau$-partitioning set that is based on a modification of Cole and Vishkin's technique. Section~\ref{sec:time-improvement} improves the running time of this first stage from $\Oh(n \log\tau)$ to $\Oh(n\log_b \tau)$. In Section~\ref{sec:recompression} the second stage based on a modification of Je{\.z}'s recompression technique is presented. Section~\ref{sec:small-tau} describes separately the case of very small $\tau$.
\section{Partitioning Sets with Applications}
\label{sec:partitioning-sets}
Let us fix a readonly string $s$ of length $n$ whose letters $s[0], s[1], \ldots, s[n{-}1]$ are from a polynomially bounded alphabet $\{0,1,\ldots,n^{\Oh(1)}\}$. We use $s$ as the input in our algorithms. As is standard, the algorithms are in the word-RAM model, their space is measured in $\Theta(\log n)$-bit machine words, and each $s[i]$ occupies a separate word. We write $s[i..j]$ for the \emph{substring} $s[i]s[i{+}1]\cdots s[j]$, assuming it is empty if $i > j$; $s[i..j]$ is called a \emph{suffix} (resp., \emph{prefix}) of $s$ if $j = n - 1$ (resp., $i = 0$). For any string $t$, let $|t|$ denote its length. We say that $t$ \emph{occurs} at position $i$ in $s$ if $s[i..i{+}|t|{-}1] = t$. Denote $[i..j] = \{k \in \mathbb{Z} \colon i\le k\le j\}$, $(i..j] = [i..j]\setminus \{i\}$, $[i..j) = [i..j]\setminus\{j\}$, $(i..j) = [i..j)\cap (i..j]$. A~number $p \in [1..|t|]$ is called a \emph{period} of $t$ if $t[i] = t[i-p]$ for each $i \in [p..|t|)$. For brevity, denote $\log\log\log n$ by $\log^{(3)} n$. We assume that $n$, the length of $s$, is sufficiently large: larger than $2^{\max\{16,c\}}$, where $c$ is a constant such that $n^c$ upper-bounds the alphabet. We need the following well-known periodicity lemma.
\begin{lemma}[see~\cite{FineWilf}]
If two substrings $s[i..i']$ and $s[j..j']$ with periods $p$ and $q$, respectively, overlap on at least $p + q$ letters (i.e., $\min\{i',j'\} - \max\{i,j\} \ge p + q$), then the minimal period of $s[\min\{i,j\}..\max\{i',j'\}]$ is at most $\mathop{\mathrm{gcd}}(p,q)$.\label{lem:fine-wilf}
\end{lemma}
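For reference, the minimal period can be computed directly from the definition; the naive sketch below runs in quadratic time (the construction in later sections instead relies on a linear-time $\Oh(1)$-space algorithm~\cite{CrochemoreRytter2}).
\begin{verbatim}
def minimal_period(t: str) -> int:
    # smallest p in [1..|t|] with t[i] == t[i-p] for all i in [p..|t|)
    n = len(t)
    return next(p for p in range(1, n + 1)
                if all(t[i] == t[i - p] for i in range(p, n)))
\end{verbatim}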
Given an integer $\tau \in [4..n/2]$, a set of positions $S \subseteq [0..n)$ is called a~\mbox{\emph{$\tau$-partitioning set}} if it satisfies the following properties:
\begin{enumerate}[label=(\alph*)]
\item if $s[i{-}\tau..i{+}\tau] = s[j{-}\tau..j{+}\tau]$ for $i,j \in [\tau..n{-}\tau)$, then $i \in S$ iff $j \in S$;
\item if $s[i..i{+}\ell] = s[j..j{+}\ell]$, for $i,j \in S$ and some $\ell \ge 0$, then, for each $d \in [0..\ell{-}\tau)$, $i + d \in S$ iff $j + d \in S$;
\item if $i,j \in S$ with $i < j$, $(i..j) \cap S = \emptyset$, and $j - i > \tau$, then $s[i .. j]$ has a period at most $\tau / 4$.
\end{enumerate}
Our definition is inspired by the \emph{forward synchronized \mbox{$(\tau,\tau)$-partitioning} sets} from~{\cite[Def.~3.1 and~6.1]{BirenzwigeEtAl}} but slightly differs; nevertheless, we retain the term ``partitioning'' to avoid inventing unnecessary new terms for very close concepts. In the definition, (a), (b), and (c) state, respectively, that $S$ is locally consistent, forward synchronized, and dense: the choice of positions depends only on short substrings around them, long enough equal right ``contexts'' of positions from $S$ are ``partitioned'' identically, and $S$ has a position every $\tau$ letters unless a long range with small period is encountered. In our construction of $S$ a certain converse of~(c) will also hold: whenever a substring $s[i..j]$ has a period at most $\tau / 4$, we will have $S \cap [i + \tau .. j - \tau] = \emptyset$ (see Lemma~\ref{lem:main-lemma}). This converse is not in the definition since it is unnecessary for our applications and we will use auxiliary $\tau$-partitioning sets not satisfying it. The definition also implies the following convenient property of ``monotonicity''.
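To make the definition concrete, the following brute-force sketch checks properties (a) and (c) for a candidate set on a short string; property (b) can be verified analogously by enumerating matching pairs $i, j \in S$ and is omitted for brevity. It reuses \texttt{minimal\_period} from the sketch above and is meant only as an executable restatement of the definition, not as part of the construction.
\begin{verbatim}
def check_partitioning(s: str, S: set, tau: int) -> bool:
    n = len(s)
    # (a): the choice at i depends only on s[i-tau..i+tau]
    for i in range(tau, n - tau):
        for j in range(tau, n - tau):
            if (s[i-tau:i+tau+1] == s[j-tau:j+tau+1]
                    and (i in S) != (j in S)):
                return False
    # (c): gaps longer than tau are allowed only over small-period runs
    pos = sorted(S)
    for i, j in zip(pos, pos[1:]):
        if j - i > tau and minimal_period(s[i:j+1]) > tau // 4:
            return False
    return True
\end{verbatim}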
\begin{lemma}
For any $\tau' \ge \tau$, every $\tau$-partitioning set is also \mbox{$\tau'$-partitioning}.\label{lem:monotone-synch}
\end{lemma}
Due to~(c), for some strings all $\tau$-partitioning sets have size at least $\Omega(n / \tau)$. In the remaining sections we devise algorithms that construct a $\tau$-partitioning set of $s$ with size $\Oh(n / \tau)$ (matching the lower bound) using $\Oh(n / \tau)$ space on top of~$s$; for technical reasons, we assume that $\Omega(\log^2 n)$ space is always available, i.e., $n / \tau \ge \Omega(\log^2 n)$, which is a rather mild restriction. Thus, we shall prove the following main theorem.
\begin{theorem}
For any string of length $n$ over an alphabet $[0..n^{\Oh(1)}]$ and any $\tau \in [4..\Oh(n / \log^2 n)]$, one can construct in $\Oh(n\log_b n)$ time and $\Oh(b)$ space on top of the string a $\tau$-partitioning set of size $\Oh(b)$, for $b = n / \tau$.
\label{thm:main-theorem}
\end{theorem}
Let us sketch how one can use the $\tau$-partitioning set from Theorem~\ref{thm:main-theorem} for linear construction of small-space indexes.
\paragraph{LCE index and sparse suffix tree.}
A \emph{longest common extension (LCE) index} is a data structure on $s$ that, given a pair of positions $p$ and $q$, answers the \emph{LCE query} $\lce(p,q)$ computing the length of the longest common prefix of $s[p..n{-}1]$ and $s[q..n{-}1]$. Such an index can be stored in $\Oh(b)$ space on top of $s$ with $\Oh(\frac{n}b)$ query time~\cite{BilleEtAl2} and this trade-off is optimal, at least for $b \ge \Omega(\frac{n}{\log n})$~\cite{Kosolobov}.
Given $b$ suffixes $s[i_1..n{-}1], s[i_2..n{-}1],\ldots,s[i_b..n{-}1]$, their \emph{sparse suffix tree} \cite{KarkkainenUkkonen} is a compacted trie on these suffixes in which all edge labels are stored as pointers to corresponding substrings of $s$. Thus, the tree occupies $\Oh(b)$ space.
Our construction scheme for these two indexes is roughly as follows: given a \mbox{$\tau$-partitioning} set $S$ with $\tau = \frac{n}{b}$ and size $\Oh(b) = \Oh(n / \tau)$, we first build the sparse suffix tree $T$ for the suffixes $s[j..n{-}1]$ with $j \in S$, then use it to construct an LCE index, and, using the index, build the sparse suffix tree for arbitrarily chosen $b$ suffixes. We elaborate on this scheme below; our exposition, however, is rather sketchy and some details are omitted since the scheme is essentially the same as in~\cite{BirenzwigeEtAl} and is given here mostly for completeness.
\medskip
To construct the sparse suffix tree $T$ for all $s[j..n{-}1]$ with $j \in S$, we apply the following lemma. Its cumbersome formulation is motivated by its subsequent use in Section~\ref{sec:time-improvement}. In the special case when $m = n$ and $\sigma = n^{\Oh(1)}$, which is of primary interest for us now, the lemma states that $T$ can be built in $\Oh(n)$ time.
\begin{lemma}
Given an integer $\tau \ge 4$ and a read-only string $s$ of length $m$ over an alphabet $[0..\sigma)$, let $S$ be an ``almost'' $\tau$-partitioning set of size $b = \Theta(m / \tau)$: it satisfies properties (a) and (b), but not necessarily (c). The sparse suffix tree $T$ for all suffixes $s[j..m{-}1]$ with $j \in S$ can be built in $\Oh(m + m \min\{\log_b \sigma, \frac{\log b}{\tau}\})$ time and $\Oh(m / \tau)$ space on top of the space required for $s$.
\label{lem:sst-special}
\end{lemma}
\begin{proof}
Denote by $j_k$ the $k$th position in $S$ (so that $j_1 < \cdots < j_{b}$). Assume that the letters $s[i]$ with $i \ge m$ are equal to the special letter $-1$; with this condition, substrings $s[j..j']$ with $j' \ge m$ are well defined. In one pass, we collect all substrings $s[j_k{+}d\tau .. j_k{+}d\tau{+}2\tau]$ with $j_k \in S$ and $d \in [0..(j_{k+1} - j_k) / \tau)$ into a set $C$ (each substring is identified by its starting position), defining $j_{b+1} = m$ to cover all $s$ by the substrings. We order $C$ by starting positions from left to right: $C = \{s[i_h .. i_h{+}2\tau] \colon 1 \le h \le |C|\}$, where $i_1 < i_2 < \cdots < i_{|C|}$. Observe that $i_{h+1} - i_h \le \tau$, for any $h \in [1..|C|)$. Since the number of substrings in $C$ is $b + \Oh(m / \tau) = \Oh(b)$ and their total length is $\Oh(b\tau + m) = \Oh(m)$, they can be sorted in $\Oh(b)$ space and either in $\Oh(m\log_b \sigma)$ time using the radix sort, or in $\Oh(m + (m/\tau) \log(m/\tau)) = \Oh(m + m\frac{\log b}{\tau})$ time using the ternary tree~\cite{BentleySedgewick}. Let $r_h$ be the rank of $s[i_h..i_h{+}2\tau]$ in the sorted order (equal strings have equal ranks).
We build a suffix tree $T'$ for the string $r_1 r_2 \cdots r_{|C|}$ in $\Oh(|C|) = \Oh(b)$ time and space~\cite{Farach}. All suffixes $r_h r_{h+1} \cdots r_{|C|}$ such that $i_h \not\in S$ are then removed from $T'$. By property~(b), the equality $r_h = r_{h'}$, for any $h, h' \in [1..|C|]$ such that $i_h \in S$ and $i_{h'} \in S$, implies that, for all $d \in [0..\tau]$, $i_h + d \in S$ iff $i_{h'} + d \in S$. Hence, $s[i_h .. i_h{+}2\tau] = s[i_{h'} .. i_{h'}{+}2\tau]$ and $i_{h+1} - i_h = i_{h'+1} - i_{h'}$ since $i_{h+1} - i_h \le \tau$, for $h \in [1..|C|)$. We inductively deduce from this that if $r_h r_{h+1} \cdots r_{h+\ell-1} = r_{h'} r_{h'+1} \cdots r_{h'+\ell-1}$, for $\ell \ge 1$ and $h, h' \in [1..|C|]$ such that $i_h \in S$ and $i_{h'} \in S$, then $s[i_h .. i_{h+\ell-1}{+}2\tau] = s[i_{h'} .. i_{h'+\ell-1}{+}2\tau]$ and, for all $d \in [0 .. i_{h+\ell-1} - i_h + \tau]$, $i_h + d \in S$ iff $i_{h'} + d \in S$. Therefore, if $s[i_h .. t{-}1] = s[i_{h'} .. t'{-}1]$ and $s[t] < s[t']$, for some $t, t'$, and $i_h, i_{h'} \in S$, then $r_h r_{h+1} \cdots r_{h+\ell-1} = r_{h'} r_{h'+1} \cdots r_{h'+\ell-1}$ and $r_{h+\ell} < r_{h'+\ell}$, where $\ell \ge 0$ is the smallest non-negative integer such that $t \in [i_{h+\ell} .. i_{h+\ell}{+}2\tau]$. The tree $T'$ is transformed into $T$ as follows. For each node in $T'$ that corresponds to a string $r_h r_{h+1} \cdots r_{h+\ell-1}$ and for each pair of its lexicographically adjacent outgoing edges labelled by $r_{h+\ell}$ and $r_{h'+\ell}$, we find in $\Oh(\tau)$ time the first mismatched positions $t$ and $t'$ in $s[i_{h+\ell}..i_{h+\ell}{+}2\tau]$ and $s[i_{h'+\ell}..i_{h'+\ell}{+}2\tau]$, and, using this information, create a corresponding node of $T$. The total running time of the transformation is $\Oh(|C|\tau) = \Oh(m)$.
\end{proof}
For our LCE index, we equip $T$ with a lowest common ancestor (LCA) data structure~\cite{HarelTarjan}, which allows us to compute $\lce(p,q)$ in $\Oh(1)$ time for $p, q \in S$, and we preprocess an array $N[0..b{-}1]$ such that $N[i] = \min\{j \ge i\tau \colon j \in S\}$ for $i \in [0..b)$, which allows us to calculate $\min\{j \ge p \colon j \in S\}$, for any $p$, in $\Oh(\tau)$ time by traversing $j_k, j_{k+1}, \ldots$ in $S$, for $j_k = N[\lfloor p / \tau \rfloor]$. In order to answer an arbitrary query $\lce(p,q)$, we first calculate $p' = \min\{j \ge p + \tau \colon j\in S\}$ and $q' = \min\{j \ge q + \tau \colon j\in S\}$ in $\Oh(\tau)$ time. If either $p' - p \le 2\tau$ or $q' - q \le 2\tau$, then by the local consistency of $S$, $s[p..n{-}1]$ and $s[q..n{-}1]$ either differ in their first $3\tau$ positions, which is checked na{\"i}vely, or $s[p..p'] = s[q..q']$ and the answer is given by $p' - p + \lce(p', q')$ using $T$. If $\min\{p' - p, q' - q\} > 2\tau$, then the strings $s[p{+}\tau..p']$ and $s[q{+}\tau..q']$ both have periods at most $\tau / 4$ due to property~(c); we compare $s[p..p{+}2\tau]$ and $s[q..q{+}2\tau]$ na{\"i}vely and, if there are no mismatches, then, due to periodicity, $s[p{+}\tau .. p']$ and $s[q{+}\tau .. q']$ have a common prefix of length $\ell = \min\{p' - p, q' - q\} - \tau$; hence, the problem is reduced to $\lce(p + \ell, q + \ell)$, which can be solved as described above since either $p' - (p + \ell) \le 2\tau$ or $q' - (q + \ell) \le 2\tau$. We thus proved the following theorem.
\begin{theorem}
For any string of length $n$ over an alphabet $[0..n^{\Oh(1)}]$ and any $b \ge \Omega(\log^2 n)$, one can construct in $\Oh(n\log_b n)$ time and $\Oh(b)$ space on top of the string an LCE index that can answer LCE queries in $\Oh(n / b)$ time.\label{thm:sparse-lce}
\end{theorem}
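For clarity, we summarize the query procedure just described as a runnable Python sketch with naive stand-ins: \texttt{next\_in\_S} scans $S$ directly (the real index uses the array $N$ in $\Oh(\tau)$ time), \texttt{lce\_S} is a direct scan (the real index answers it in $\Oh(1)$ time via the LCA structure on $T$), and boundary handling near the end of the string is elided. The first \texttt{return} in each branch relies on properties (a)--(c) of $S$, as argued above.
\begin{verbatim}
def make_lce(s, S, tau):
    S = sorted(S)
    def next_in_S(p):
        return next(j for j in S if j >= p)      # min{j >= p : j in S}
    def naive_scan(p, q, limit):
        l = 0
        while (l < limit and p + l < len(s) and q + l < len(s)
               and s[p + l] == s[q + l]):
            l += 1
        return l
    lce_S = lambda p, q: naive_scan(p, q, len(s))
    def lce(p, q):
        pp, qq = next_in_S(p + tau), next_in_S(q + tau)
        if pp - p <= 2 * tau or qq - q <= 2 * tau:
            l = naive_scan(p, q, 3 * tau)
            if l < 3 * tau:
                return l
            return (pp - p) + lce_S(pp, qq)      # here s[p..pp] == s[q..qq]
        l = naive_scan(p, q, 2 * tau)
        if l < 2 * tau:
            return l
        ell = min(pp - p, qq - q) - tau          # skip periodic prefix
        return ell + lce(p + ell, q + ell)
    return lce
\end{verbatim}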
Let us consider the construction of a sparse suffix tree for $b$ suffixes $s[i_1..n{-}1], s[i_2..n{-}1], \ldots, s[i_b..n{-}1]$. Denote by $j_k$ the $k$th position in a given $\tau$-partitioning set $S$ of size $\Oh(b)$ with $\tau = \frac{n}{b}$ (so that $j_1 < \cdots < j_{|S|}$). For all $\ell \in [1..b]$, we compute, in $\Oh(b\tau) = \Oh(n)$ time using the array $N$, indices $k_\ell$ such that $j_{k_\ell} = \min\{j \ge i_\ell + \tau \colon j \in S\}$. Then, we sort all strings $s[i_\ell..i_\ell{+}4\tau]$ in $\Oh(n)$ time as in the proof of Lemma~\ref{lem:sst-special} and assign to them ranks $r_\ell$. Let $\bar{r}_k$ be the rank of $s[j_k..n{-}1]$ among the suffixes $s[j..n{-}1]$ with $j \in S$; $\bar{r}_k$ is obtained from $T$. Suppose that $j_{k_\ell} \le i_\ell + 3\tau$, for all $\ell \in [1..b]$. By property~(a), the equality $r_\ell = r_{\ell'}$, for any $\ell, \ell' \in [1..b]$, implies that $j_{k_\ell} - i_\ell = j_{k_{\ell'}} - i_{\ell'}$ when $j_{k_\ell} - i_\ell \le 3\tau$. Then, we sort the suffixes $s[i_\ell..n{-}1]$ with $\ell \in [1..b]$ in $\Oh(b)$ time using the radix sort on the corresponding pairs $(r_\ell, \bar{r}_{k_\ell})$. The sparse suffix tree can be assembled from the sorted suffixes in $\Oh(b\tau) = \Oh(n)$ time using the LCE index to calculate longest common prefixes of adjacent suffixes.
Suppose that $j_{k_\ell} > i_\ell + 3\tau$, for some $\ell \in [1..b]$. Then, by property~(c), the minimal period of $s[i_\ell{+}\tau..j_{k_\ell}]$ is at most $\tau / 4$. Denote this period by $p_{\ell}$. We compute $p_{\ell}$ in $\Oh(\tau)$ time using a linear $\Oh(1)$-space algorithm~\cite{CrochemoreRytter2} and, then, find the leftmost position $t_{\ell} > j_{k_\ell}$ breaking this period: $s[t_\ell] \ne s[t_\ell{-}p_\ell]$. As $j_{k_\ell} - p_\ell > i_\ell + 2 \tau > j_{k_\ell-1}$, $s[j_{k_\ell}{-}\tau..j_{k_\ell}{+}\tau] \ne s[j_{k_\ell}{-}p_\ell{-}\tau..j_{k_\ell}{-}p_\ell{+}\tau]$ (otherwise $j_{k_\ell} - p_\ell \in S$ by property~(a)) and, hence, $t_\ell \in (j_{k_\ell}..j_{k_\ell}{+}\tau]$. Therefore, the computation of $t_\ell$ takes $\Oh(\tau)$ time. Thus, all $p_\ell$ and $t_\ell$ can be calculated in $\Oh(b\tau) = \Oh(n)$ total time. We then sort the strings $s[t_\ell..t_\ell{+}\tau]$ in $\Oh(n)$ time and assign to them ranks $\tilde{r}_\ell$. For each suffix $s[i_\ell..n{-}1]$ with $\ell \in [1..b]$, we associate the tuple $(r_\ell,0,0,\bar{r}_{k_\ell})$ if $j_{k_\ell} \le i_\ell + 3\tau$, and the tuple $(r_\ell,d_\ell,\tilde{r}_\ell,\bar{r}_{k_\ell})$ if $j_{k_\ell} > i_\ell + 3\tau$, where $d_\ell = \pm(t_{\ell} - i_\ell - n)$ with plus if $s[t_{\ell}] < s[t_{\ell} - p_{\ell}]$ and minus otherwise. We claim that the order of the suffixes $s[i_\ell..n{-}1]$ is the same as the order of their associated tuples and, hence, the suffixes can be sorted by sorting the tuples in $\Oh(n)$ time using the radix sort. We then assemble the sparse suffix tree as above using the LCE index. We do not dive into the proof of the claim since it essentially repeats similar arguments from~\cite{BirenzwigeEtAl}, to which we refer for details.
\begin{theorem}
For any string of length $n$ over an alphabet $[0..n^{\Oh(1)}]$ and any $b \ge \Omega(\log^2 n)$, one can construct in $\Oh(n\log_b n)$ time and $\Oh(b)$ space on top of the string a sparse suffix tree for arbitrarily chosen $b$ suffixes.\label{thm:sst}
\end{theorem}
\section{\boldmath Refinement of Partitioning Sets}
\label{sec:vishkin-process}
In this section we describe a process that takes the trivial partitioning set $[0..n)$ and iteratively refines it in $\lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$ phases removing some positions so that, after the $k$th phase, the set is $(2^{k+3}\lfloor\log^{(3)} n\rfloor)$-partitioning and has size $\Oh(n / 2^k)$; moreover, it is ``almost'' $2^{k+3}$-partitioning, satisfying properties~(a) and~(b) but not necessarily~(c). In particular, the set after the last phase is $\frac{\tau}{2}$-partitioning (note Lemma~\ref{lem:monotone-synch}) and has size $\Oh(\frac{n}{\tau} \log^{(3)} n)$. Each phase processes all positions of the currently refined set from left to right and, in an almost online fashion, chooses which of them remain in the set. Rather than performing the phases one after another, which requires $\Oh(n)$ space, we run them simultaneously feeding the positions generated by the $k$th phase to the $(k{+}1)$th phase. Thus, the resulting set is produced in one pass. (It, however, has size $\Oh(\frac{n}{\tau}\log^{(3)} n)$, which is still too large to be stored in $\Oh(n / \tau)$ space; this issue is addressed in Section~\ref{sec:recompression}.) Let us elaborate on the details of this process.
Throughout this section, we assume that $\tau \ge 2^5\log^{(3)} n$ and, hence, the number of phases is non-zero. Consider the $k$th phase, for $k \ge 1$. Its input is a set $S_{k-1}$ produced by the $(k{-}1)$th phase; for $k=1$, $S_0 = [0..n)$. Denote by $j_h$ the $h$th position in $S_{k-1}$ (so that $j_1 < \cdots < j_{|S_{k-1}|}$). The phase processes $j_1, j_2, \ldots$ from left to right and decides which of them to put into the new set $S_k \subseteq S_{k-1}$ under construction. The decision for $j_h$ is based on the distances $j_{h} - j_{h-1}$ and $j_{h+1} - j_{h}$, on the substrings $s[j_{h+\ell}..j_{h+\ell}{+}2^k]$ with $\ell \in [-1..4]$, and on certain numbers $v_{h-1}, v_{h}, v_{h+1}$ computed for $j_{h-1}$, $j_{h}$, $j_{h+1}$. Let us define these numbers.
For any distinct integers $x, y \ge 0$, denote by $\lbit(x,y)$ the index of the lowest bit in which the bit representations of $x$ and $y$ differ (the lowest bit has index~$0$); e.g., $\lbit(1,0) = 0$, $\lbit(2,8) = 1$, $\lbit(8,0) = 3$. It is well known that $\lbit(x,y)$ can be computed in $\Oh(1)$ time provided $x$ and $y$ occupy $\Oh(1)$ machine words~\cite{Willard}. Denote $\vbit(x,y) = 2\lbit(x,y) + a$, where $a$ is the bit of $x$ with index $\lbit(x,y)$; e.g., $\vbit(8,0) = 7$ and $\vbit(0,8) = 6$. Note that the bit representation of the number $\vbit(x,y)$ is obtained from that of $\lbit(x,y)$ by appending $a$.
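For concreteness, here is a direct sketch of these two primitives, using Python integers in place of machine words (the constant-time word-RAM implementation is via~\cite{Willard}); the assertions restate the examples above.
\begin{verbatim}
def lbit(x: int, y: int) -> int:
    # index of the lowest bit where x and y differ (requires x != y)
    d = x ^ y
    return (d & -d).bit_length() - 1

def vbit(x: int, y: int) -> int:
    # 2*lbit(x, y) plus the bit of x at that index
    l = lbit(x, y)
    return 2 * l + ((x >> l) & 1)

assert lbit(1, 0) == 0 and lbit(2, 8) == 1 and lbit(8, 0) == 3
assert vbit(8, 0) == 7 and vbit(0, 8) == 6
\end{verbatim}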
Let $w$ be the number of bits in an $\Oh(\log n)$-bit machine word sufficient to represent letters from the alphabet $[0..n^{\Oh(1)}]$ of $s$. For each $j_h$, denote $s_h = \sum_{i=0}^{2^k} s[j_h{+}i] 2^{wi}$. Each number $s_h$ takes $(2^k{+}1)w$ bits and its bit representation coincides with that of the string $s[j_h..j_h{+}2^k]$. The numbers $s_h$ are introduced merely for convenience of the exposition; they are never discerned from their corresponding substrings $s[j_h..j_h{+}2^k]$ in the algorithm. For each $j_h$, define $v'_h = \vbit(s_h, s_{h+1})$ if $j_{h+1} - j_h \le 2^{k-1}$ and $s_h \ne s_{h+1}$, and $v'_h = \infty$ otherwise. Observe that $\lbit(s_h, s_{h+1}) = w\ell + \lbit(s[j_h{+}\ell], s[j_{h+1}{+}\ell])$, where $\ell = \lce(j_h, j_{h+1})$; i.e., $\lbit(s_h, s_{h+1})$ is given by an LCE query in the bit string of length $wn$ obtained from $s$ by substituting each letter with its $w$-bit representation. Define:
$$
v''_h = \vbit(v'_h, v'_{h+1}),\,\,
v'''_h = \vbit(v''_h, v''_{h+1}),\,\,
v_h = \vbit(v'''_h, v'''_{h+1}),
$$
assuming $\vbit(x,y) = \infty$ if either $x = \infty$ or $y = \infty$.
For each $j_h$, let $R(j_h)$ denote a predicate that is true iff $j_{h+1} - j_h \le 2^{k-1}$ and $s_h = s_{h+1}$; to verify whether $R(j_h)$ holds, we always first check the condition $j_{h+1} - j_h \le 2^{k-1}$ and only then $s_h = s_{h+1}$.
\medskip
\noindent \textbf{Refinement rule.}
\emph{The $k$th phase decides to put a position $j_h$ into $S_k$ either if $\infty > v_{h-1} > v_{h}$ and $v_{h} < v_{h+1}$ (i.e., $v_{h-1} \ne \infty$ and $v_{h}$ is a local minimum of the sequence $v_1,v_2,\ldots$), or in three ``boundary'' cases: (i)~$j_{h+1} - j_h > 2^{k-1}$ or $j_h - j_{h-1} > 2^{k-1}$; (ii)~$R(j_{h-1})$ does not hold while $R(j_h)$, $R(j_{h+1})$, $R(j_{h+2})$ hold; (iii)~$R(j_h)$ holds but $R(j_{h+1})$ does not.}
\medskip
For now, assume that the numbers $\lbit(s_h, s_{h+1})$, required to calculate $v'_h$ and $R(j_h)$, are computed by the na{\"i}ve comparison of $s[j_h..j_h{+}2^k]$ and $s[j_{h+1}..j_{h+1}{+}2^k]$ in $\Oh(2^k)$ time (we will change it later). Thus, the process is well defined. The trick with local minima and $\vbit$ reductions is, in essence, as in the deterministic approach of Cole and Vishkin to locally consistent parsings~\cite{ColeVishkin}. In what follows we derive some properties of this approach in order to prove that the $k$th phase indeed produces a $(2^{k+3}\lfloor\log^{(3)} n\rfloor)$-partitioning set.
It is convenient to interpret the $k$th phase as follows (see Fig.~\ref{fig:hills}): the sequence $j_1, j_2, \ldots$ is split into maximal disjoint contiguous regions such that, for any pair of adjacent positions $j_h$ and $j_{h+1}$ inside each region, the distance $j_{h+1} - j_h$ is at most $2^{k-1}$ and $R(j_h) = R(j_{h+1})$. Thus, the regions are of two types: all-$R$ ($\{j_{16}, \ldots, j_{20}\}$ in Fig.~\ref{fig:hills}) and all-non-$R$ ($\{j_{1}, \ldots, j_{15}\}$ or $\{j_{21},\ldots,j_{25}\}$ in Fig.~\ref{fig:hills}). By case~(i), for each long gap $j_{h+1} - j_h > 2^{k-1}$ between regions, we put both $j_h$ and $j_{h+1}$ into $S_k$. In each all-$R$ region, we put into $S_k$ its last position due to case~(iii) and, if the length of the region is at least $3$, its first position by case~(ii). In each all-non-$R$ region, we put into $S_k$ all local minima $v_{h}$ such that $v_{h-1} \ne \infty$. Only all-non-$R$ regions have positions $j_h$ with $v_h \ne \infty$; moreover, as it turns out, only their last three or four positions $j_h$ have $v_h = \infty$ whereas, for other $j_h$, $v_h \ne \infty$ and $v_h \ne v_{h+1}$. Lemmas~\ref{lem:all-R} and~\ref{lem:all-non-R} describe all this in detail.
\begin{figure}[!htb]
\center
\includegraphics{hills}
\caption{The $k$th phase. The heights of the dashed lines over $j_h$ are equal to $v_h$. Encircled positions are put into $S_k$: they are local minima of $v_h$, or are at the ``boundaries'' of all-$R$ regions, or form a gap of length ${>}2^k$. In the figure $R(j_{16}),\ldots, R(j_{20})$ hold and $R(j_{21})$ does not hold.}\label{fig:hills}
\end{figure}
\begin{lemma}
Let $j_{p}, j_{p+1}, \ldots, j_{q}$ be a maximal contiguous region of $j_1, j_2, \ldots$ such that, for all $h \in [p..q]$, $R(j_h)$ holds. Then, always $j_{q} \in S_k$, and $j_p \in S_k$ if $q - p \ge 2$ or $j_p - j_{p-1} > 2^{k-1}$; all other positions $j_h$ in the region do not belong to $S_k$. The string $s[j_p..j_q{+}2^k]$ has a period at most $2^{k-1}$.\label{lem:all-R}
\end{lemma}
\begin{proof}
Local minima of $v_h$ are not in the region since, for $h \in [p..q]$, $R(j_h)$ implies $s_h = s_{h+1}$ and $v_h = \infty$. As for the ``boundary'' cases: (iii)~always gives $j_q \in S_k$; (i)~can affect only $j_p$ if $j_p - j_{p-1} > 2^{k-1}$; (ii)~holds for $j_p$ iff $q - p \ge 2$.
For any $h \in [p..q]$, since $s[j_h..j_h{+}2^k] = s[j_{h+1}..j_{h+1}{+}2^k]$ and $j_{h+1} - j_h \le 2^{k-1}$, the string $s[j_h..j_{h+1}{+}2^k]$ has period $j_{h+1} - j_h \le 2^{k-1}$. Then, for $h \in (p..q]$, $s[j_{h-1}..j_{h}{+}2^k]$ and $s[j_h..j_{h+1}{+}2^k]$ both have periods at most $2^{k-1}$ and overlap on $2^k$ letters. Hence, by Lemma~\ref{lem:fine-wilf}, their minimal periods coincide and are at most $2^{k-1}$. Applying this reasoning for all $h \in (p..q]$, we deduce that the minimal period of $s[j_p..j_q{+}2^k]$ is at most $2^{k-1}$.
\end{proof}
The key observation of Cole and Vishkin is in the following lemma.
\begin{lemma}[{see \cite{ColeVishkin}}]
Given a string $a_1 a_2 \cdots a_m$ over an alphabet $[0..2^u)$ such that $a_i \ne a_{i+1}$ for any $i \in [1..m)$, the string $b_1 b_2 \cdots b_{m-1}$ such that $b_i = \vbit(a_i, a_{i+1})$, for $i \in [1..m)$, satisfies $b_i \ne b_{i+1}$, for any $i \in [1..m{-}1)$, and $b_i \in [0..2u)$.\label{lem:vishkin}
\end{lemma}
\begin{proof}
Consider $b_i$ and $b_{i+1}$. Denote $\ell = \lbit(a_i, a_{i+1})$ and $\ell' = \lbit(a_{i+1}, a_{i+2})$. As $a_i,a_{i+1} \in [0..2^u)$, we have $\ell \in [0..u)$. Hence, $b_i \le 2\ell + 1 \le 2u - 1$, which proves $b_i \in [0..2u)$. If $b_i = b_{i+1}$, then $\ell = \ell'$ and the bits with indices $\ell$ and $\ell' = \ell$ in $a_i$ and $a_{i+1}$ coincide; on the other hand, by the definition of $\ell = \lbit(a_i, a_{i+1})$, $a_i$ and $a_{i+1}$ must differ in this bit, which is a contradiction.
\end{proof}
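For concreteness, the functions $\lbit$ and $\vbit$ admit the following Python rendering; we assume here the convention $\vbit(x, y) = 2\lbit(x, y) + b$, where $b$ is the bit of $x$ with index $\lbit(x, y)$ (the exact packing convention is immaterial for the lemma), and check the conclusion on a toy sequence.
\begin{verbatim}
def lbit(x, y):
    """Index of the lowest bit in which x and y differ (x != y)."""
    return ((x ^ y) & -(x ^ y)).bit_length() - 1

def vbit(x, y):
    l = lbit(x, y)
    return 2 * l + ((x >> l) & 1)

def reduce_once(a):
    """One round: adjacent-distinct values over [0..2^u) become
    adjacent-distinct values over [0..2u)."""
    return [vbit(a[i], a[i + 1]) for i in range(len(a) - 1)]

a = [5, 9, 2, 7, 2, 13]            # adjacent letters differ
b = reduce_once(a)                 # b = [5, 1, 0, 1, 0]
assert all(b[i] != b[i + 1] for i in range(len(b) - 1))
\end{verbatim}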
\begin{lemma}
Let $j_{p}, j_{p+1}, \ldots, j_q$ be a maximal contiguous region of $j_1, j_2, \ldots$ such that, for all $h \in [p..q]$, $R(j_h)$ does not hold and, if $h < q$, $j_{h+1} - j_h \le 2^{k-1}$. Then, $v_h \ne \infty$ for $h \in [p..q{-}4]$, $v_h = \infty$ for $h \in (q{-}3..q]$, and $v_{q-3}$ may be $\infty$ or not. Further, for $h \in [p..q{-}3]$, we have $v_h \ne v_{h+1}$ whenever $v_h \ne \infty$. For $h \in (p..q)$, $j_h \in S_k$ iff $\infty > v_{h-1} > v_{h}$ and $v_{h} < v_{h+1}$; $j_p \in S_k$ iff $j_{p} - j_{p-1} > 2^{k-1}$; $j_q \in S_k$ iff $j_{q+1} - j_q > 2^{k-1}$.\label{lem:all-non-R}
\end{lemma}
\begin{proof}
We have either $v'_q = \infty$ (if $j_{q+1} - j_q > 2^{k-1}$) or $v'_{q+1} = \infty$ (if $R(j_{q+1})$ holds). Since the numbers $v_h$ are obtained via four $\vbit$ reductions and, for any $x$, $\vbit(x,\infty) = \infty$, we have $v_h = \infty$, for $h \in (q{-}3..q]$, and also $v_{q-3} = \infty$ if $v'_q = \infty$. Since $s_h \ne s_{h+1}$ for all $h \in [p..q)$, the claim for the other $v_h$ with $h \in [p..q{-}4]$ is proved by four applications of Lemma~\ref{lem:vishkin} to the sequence $s_{p}, s_{p+1}, \ldots, s_q$. The criterion for $j_h \in S_k$ follows directly from the description of the $k$th phase.
\end{proof}
The goal of the fourfold $\vbit$ reduction for $v_h$ is to make $v_h$ small enough so that local minima occur often and, thus, the resulting set $S_k$ is not too sparse.
\begin{lemma}
For any $v_h \ne \infty$ in the $k$th phase, we have $v_h \in [0..2\log^{(3)} n{+}3)$.\label{lem:v-reduction}
\end{lemma}
\begin{proof}
Since $v'_h \in [0..2nw) = [0..\Oh(n\log n))$, we deduce from Lemma~\ref{lem:vishkin} that $v''_h \in [0..\Oh(\log n))$, $v'''_h \in [0..2\log\log n + \Oh(1))$, and, due to the inequality $\log(x+\delta) \le \log x + \frac{\delta\log e}{x}$, $v_h \in [0..2\log^{(3)} n + 3)$, for sufficiently large $n$.
\end{proof}
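The shrinking chain $v'_h \to v''_h \to v'''_h \to v_h$ can also be observed numerically. The following snippet (under the same $\vbit$ convention as assumed earlier) applies four reduction rounds to random $64$-bit values with distinct adjacent elements; the value ranges after the successive rounds are $<128$, $<14$, $<8$, and $<6$, mirroring the chain in the proof.
\begin{verbatim}
import random

def vbit(x, y):
    l = ((x ^ y) & -(x ^ y)).bit_length() - 1
    return 2 * l + ((x >> l) & 1)

random.seed(1)
a = [random.getrandbits(64) for _ in range(10000)]
a = [x for i, x in enumerate(a) if i == 0 or x != a[i - 1]]
for _ in range(4):
    a = [vbit(a[i], a[i + 1]) for i in range(len(a) - 1)]
print(max(a))                      # prints a value below 6
\end{verbatim}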
\begin{lemma}
Let $S_{k-1}$ and $S_k$ be the sets generated by the $(k{-}1)$th and $k$th phases. Then, any range $j_\ell, j_{\ell+1}, \ldots, j_m$ of at least $8\log^{(3)} n + 12$ consecutive positions from $S_{k-1}$ such that $v_h \ne \infty$, for all $h \in [\ell..m]$, has a position from $S_k$.\label{lem:local-density}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:v-reduction}, all the numbers $v_\ell, v_{\ell+1}, \ldots, v_m$ lie in the range $[0..2\log^{(3)} n{+}3)$ and, by Lemma~\ref{lem:all-non-R}, adjacent numbers among them are distinct. A sequence of numbers with distinct adjacent elements that has no internal local minimum consists of an increasing run followed by a decreasing run, each of length at most $2\log^{(3)} n + 3$. Hence, since $m - \ell + 1 \ge 8\log^{(3)} n + 12 > 2(2\log^{(3)} n + 3)$, there is an index $h \in (\ell..m)$ such that $\infty > v_{h-1} > v_h$ and $v_h < v_{h+1}$, and the position $j_h$ is put into $S_k$.
\end{proof}
\begin{lemma}
For any $i, i' \in [0..n]$, we have $|S_k \cap [i..i')| \le 2^6\lceil(i' - i) / 2^{k}\rceil$ and, in particular, $|S_k| \le n / 2^{k-6}$.\label{lem:local-sparsity}
\end{lemma}
\begin{proof}
Let us prove by induction on $k$ that the range $[0..n)$ of positions of the string $s$ can be partitioned into disjoint non-empty blocks $B_1, B_2, \ldots, B_m$ such that $B_i = [b_i..b_{i+1})$, where $b_1 = 0$, $b_{m+1} = n$, and $b_1 < b_2 < \cdots < b_{m+1}$, and the blocks are of two types:
\begin{enumerate}
\item a block $B_i$ is \emph{normal} if $2^{k-5} \le |B_i| \le 4\cdot 2^{k-5}$ and $|B_i \cap S_k| \le 2$;
\item a block $B_i$ is \emph{skewed} if $|B_i| \ge 8\cdot 2^{k-5}$, $|B_i \cap S_k| \le 3$, and $[b_i..b_{i+1}{-}4\cdot 2^{k-5}] \cap S_k = \emptyset$, i.e., all positions of $B_i \cap S_k$ are concentrated in the suffix of $B_i$ of length $4\cdot 2^{k-5}$ (hence the name ``skewed'').
\end{enumerate}
The lemma easily follows from the claim since every range $[i..i')$ can intersect at most $\lceil(i' - i) / 2^{k-5}\rceil$ normal blocks and at most $\lceil(i' - i) / (8\cdot 2^{k-5})\rceil$ skewed blocks.
The base of the induction is $k \le 6$; it is trivial since the range $[0..n)$ can always be split into blocks of length $2$. Suppose that the inductive hypothesis holds for $k \ge 6$ and let us construct a partitioning of the range $[0..n)$ into blocks for $k + 1$.
First, we greedily unite blocks into disjoint pairs and singletons from left to right as follows: given a block $B_i$ such that the blocks $B_1 B_2 \cdots B_{i-1}$ were already united into pairs and singletons, we consider the following cases: (1)~if $B_i$ is a skewed block, then it forms a singleton and we analyze $B_{i+1}$ next; (2)~if both $B_i$ and $B_{i+1}$ are normal blocks, then $B_i$ and $B_{i+1}$ are united into a pair and we consider $B_{i+2}$ next; (3)~if $B_i$ is a normal block and $B_{i+1}$ is a skewed block, then we cut from $B_{i+1}$ a new normal block $B' = [b_{i+1}..b_{i+1}{+}2^{k-5})$ of length $2^{k-5}$, which necessarily contains no positions of $S_k$ due to the properties of skewed blocks, and we unite $B_i$ with $B'$ and analyze the (already cut) block $B_{i+1}$ next; the length of the skewed block $B_{i+1}$ is reduced by $2^{k-5}$ after the cutting and might become less than $8\cdot 2^{k-5}$, but we still regard it as skewed; this causes no issue because the block $B_{i+1}$ will anyway be dissolved into normal blocks in a moment. After the construction of all the pairs and singletons, we proceed as follows.
We consider all the produced pairs and singletons of blocks from left to right. A singleton block $B_i$ is always a skewed block and its length is at least $7\cdot 2^{k-5}$ (a prefix of length $2^{k-5}$ could be cut from the block). Let $S_k = \{j_1 < \cdots < j_{|S_k|}\}$. It follows from the refinement rule that any two consecutive positions $j_h$ and $j_{h+1}$ from $S_{k}$ can both belong to the set $S_{k+1}$ only if either $j_{h+1} - j_h > 2^{k}$, or $j_{h} - j_{h-1} > 2^{k}$ (and $v_{h+1}$ is a local minimum that is not preceded by $\infty$ in the latter case). In both cases there is a gap of length at least $2^{k}$ between either $j_h$ and $j_{h+1}$, or $j_{h-1}$ and $j_{h}$. Therefore, since only the last $4\cdot 2^{k-5} = 2^{k-3}$ positions of the skewed block $B_i$ may contain positions of $S_k$, the refinement rule in the case $|B_i \cap S_k| = 3$ necessarily removes from $S_k$ either the second or the last position of $S_k \cap B_i$ when producing $S_{k+1}$. Thus, we have $|B_i \cap S_{k+1}| \le 2$. We then split the skewed block $B_i$ into a series of new normal blocks all of which, except possibly the last one, do not contain positions from $S_{k+1}$: the last block is $(b_{i+1}{-}4\cdot 2^{k-5}..b_{i+1}]$ and it has length $4\cdot 2^{k-5} = 2\cdot 2^{k-4}$; the remaining prefix of $B_i$ is $[b_i..b_{i+1}{-}4\cdot 2^{k-5}]$ (the boundary $b_i$ used here can differ from the original $b_i$ if the skewed block $B_i$ was cut and, thus, $b_i$ was increased) and it has a length at least $7\cdot 2^{k-5} - 4\cdot 2^{k-5}= 3\cdot 2^{k-5}$, the prefix is split into normal blocks arbitrarily (recall that the length of a normal block for the inductive step $k+1$ must be at least $2^{k-4}$ and at most $4\cdot 2^{k-4}$).
Consider a pair of normal blocks $B_i$ and $B_{i+1}$. For simplicity, we denote by $B_{i+1}$ the block following $B_i$ even if it was created by cutting the skewed block (actual $B_{i+1}$) that followed $B_i$ in the initial partitioning. The length of the united block $B_i B_{i+1}$ is at least $2^{k-4}$ and at most $4\cdot 2^{k-4}$ so that it could have formed a new normal block if at most two positions of $S_{k+1}$ belonged to it. By the observation above, if two consecutive positions $j_h$ and $j_{h+1}$ were retained in $S_{k+1}$ by the refinement rule, then there is a gap of length at least $2^{k}$ between either $j_h$ and $j_{h+1}$, or $j_{h-1}$ and $j_{h}$. Therefore, since $|B_iB_{i+1}| \le 4\cdot 2^{k-4} < 2^{k}$, the united block $B_i B_{i+1}$ contains at most three positions of $S_{k+1}$, i.e., one of the positions from $S_k \cap B_iB_{i+1}$ must be discarded from $S_{k+1}$. In case $|S_{k+1} \cap B_iB_{i+1}| \le 2$, we simply form a new normal block $B_iB_{i+1}$. The case $|S_{k+1} \cap B_iB_{i+1}| = 3$ is more interesting; it occurs when each of the blocks $B_i$ and $B_{i+1}$ contains two positions from $S_k$ and the refinement procedure retains the two positions from $S_k \cap B_i$ and removes the first position of $S_k \cap B_{i+1}$. By the observation above, we must have a gap before the block $B_i$ in this case: $S_{k+1} \cap [b_i{-}2^{k}{+}|B_i|..b_i) = \emptyset$. Since $|B_i| \le 4\cdot 2^{k-5} = 2^{k-3}$, we hence obtain $S_{k+1} \cap [b_i{-}7\cdot 2^{k-3}..b_i) = \emptyset$. Therefore, all newly produced blocks that are entirely contained in the range $[b_i{-}7\cdot 2^{k-3}..b_i)$ are empty, i.e., they do not contain positions of $S_{k+1}$ (recall that we process pairs and singletons of blocks from left to right and, so, only newly constructed blocks are located to the left of $B_i$). Let $\hat{B}_j, \hat{B}_{j+1}, \ldots, \hat{B}_{\ell}$ be a maximal sequence of consecutive newly constructed blocks to the left of $B_i$ such that $\hat{B}_{\ell}$ is a block immediately preceding $B_i$ (we use the notation $\hat{B}_j$ for the newly constructed blocks to distinguish them from the ``old'' blocks $B_1, B_2, \ldots$). We unite all blocks $\hat{B}_j, \hat{B}_{j+1}, \ldots, \hat{B}_{\ell}$ with $B_i$ and $B_{i+1}$ thus producing a new skewed block whose length is at least $|B_i B_{i+1}| + 7\cdot 2^{k-3} - 4\cdot 2^{k-4}$ (the negative term is because some block to the left of $B_i$ may only partially intersect the ``empty'' range $[b_i{-}7\cdot 2^{k-3}..b_i)$), which is at least $2^{k-4} + 5\cdot 2^{k-3} = 11\cdot 2^{k-4} \ge 8\cdot 2^{k-4}$.
\end{proof}
We are ready to prove that $S_k$ is a $(2^{k+3}\lfloor\log^{(3)} n\rfloor)$-partitioning set and, moreover, it is almost a \mbox{$2^{k+3}$-partitioning} set in a sense.
\begin{lemma}
The $k$th phase generates a $(2^{k+3}\lfloor\log^{(3)} n\rfloor)$-partitioning set $S_k$. Moreover, $S_k$ is almost \mbox{$2^{k+3}$-partitioning:} for $\tau = 2^{k+3}$, it satisfies properties (a) and (b) but not necessarily (c), i.e., if $(i..j) \cap S_k = \emptyset$, for $i,j \in S_k$ such that $2^{k+3} < j{-}i \le 2^{k+3}\lfloor\log^{(3)} n\rfloor$, then $s[i..j]$ does not necessarily have period ${\le}2^{k+2}$.\label{lem:2k-partitioning}
\end{lemma}
\begin{proof}
The proof is by induction on $k$. The case $k = 0$ is trivial. Let $k > 0$ and $S_{k-1} = \{j_1 < \cdots < j_{|S_{k-1}|}\}$. We first establish property~(a) for $S_k$ with $\tau = 2^{k+3}$: if $s[i{-}2^{k+3}..i{+}2^{k+3}] = s[j{-}2^{k+3}..j{+}2^{k+3}]$ for $i,j \in [2^{k+3}..n{-}2^{k+3})$, then $i \in S_k$ iff $j \in S_k$. Fix $i$ and $j$ such that $s[i{-}2^{k+3}..i{+}2^{k+3}] = s[j{-}2^{k+3}..j{+}2^{k+3}]$. It suffices to show that $i \in S_k$ implies $j \in S_k$; the converse direction is symmetric. So, let $i \in S_k$.
By the inductive hypothesis, for any $\ell \in [-2^{k+2}..2^{k+2}]$, we have $i + \ell \in S_{k-1}$ iff $j + \ell \in S_{k-1}$. In particular, $i$ and $j$ both belong to $S_{k-1}$. Let $i = j_h$ and $j = j_{h'}$, for some $h$ and $h'$. Therefore, when $i \in S_k$ due to the ``boundary'' case~(i), i.e., either $j_h - j_{h-1} > 2^{k-1}$ or $j_{h+1} - j_{h} > 2^{k-1}$, then $j_{h'} - j_{h'-1} > 2^{k-1}$ or $j_{h'+1} - j_{h'} > 2^{k-1}$, respectively, as $2^{k-1} < 2^{k+2}$; thus, $j \in S_k$.
Now suppose that $i \in S_k$ because $\infty > v_{h-1} > v_h$ and $v_h < v_{h+1}$, where $v_{h-1}, v_h, v_{h+1}$ are as defined in the transformation of $S_{k-1}$ into $S_k$. By Lemma~\ref{lem:all-non-R}, whenever $v_h \ne \infty$, then $j_{h+\ell} - j_{h+\ell-1} \le 2^{k-1}$, for all $\ell \in [1..4]$, and the value of $v_h$ is derived using the substrings $s[j_{h+\ell}..j_{h+\ell}{+}2^k]$ with $\ell \in [0..4]$. Since also $v_{h-1} \ne \infty$, the same lemma gives $j_h - j_{h-1} \le 2^{k-1}$; hence, $j_{h+\ell} - j_{h+\ell-1} \le 2^{k-1}$, for all $\ell \in [0..4]$. Since $j_{h+4} - j_h \le 4\cdot 2^{k-1} < 2^{k+2}$, it follows from the inductive hypothesis that $j_{h+\ell} - j_{h+\ell-1} = j_{h'+\ell} - j_{h'+\ell-1}$, for all $\ell \in [0..4]$, and $s[j_{h+\ell} .. j_{h+\ell}{+}2^k] = s[j_{h'+\ell} .. j_{h'+\ell}{+}2^k]$, for all $\ell \in [-1..4]$. Hence, $v_{h'-1} = v_{h-1}$ and $v_{h'} = v_{h}$. In the same fashion, the case $v_{h+1} \ne \infty$ implies that $j_{h+5} - j_h \le 5\cdot 2^{k-1} < 2^{k+2}$ and $s[j_{h+5} .. j_{h+5}{+}2^k] = s[j_{h'+5} .. j_{h'+5}{+}2^k]$, thus inferring $v_{h'+1} = v_{h+1}$. By Lemma~\ref{lem:all-non-R}, in case $v_{h+1} = \infty$ either $j_{h+5} - j_{h+4} > 2^{k-1}$, which gives $j_{h'+5} - j_{h'+4} > 2^{k-1}$ since $j_{h+4} + 2^{k-1} - j_h < 2^{k+2}$, or $s[j_{h+4} .. j_{h+4}{+}2^k] = s[j_{h+5} .. j_{h+5}{+}2^k]$ and $j_{h+5} - j_{h+4} \le 2^{k-1}$, which gives $s[j_{h'+4} .. j_{h'+4}{+}2^k] = s[j_{h'+5} .. j_{h'+5}{+}2^k]$; thus, both alternatives imply $v_{h'+1} = \infty$. Therefore, $j$ (${=}j_{h'}$) belongs to $S_k$ because $v_{h'}$ is a local minimum. Analogously, we deduce that when $i \in S_k$ since $R(j_{h-1})$ does not hold but $R(j_h)$, $R(j_{h+1})$, $R(j_{h+2})$ hold, then the same is true for $R(j_{h'-1})$ and $R(j_{h'})$, $R(j_{h'+1})$, $R(j_{h'+2})$, respectively, and, thus, $j \in S_k$. The remaining case, when $i \in S_k$ since $R(j_h)$ holds but $R(j_{h+1})$ does not hold, is similar.
Let us establish property~(b) for $S_k$ with $\tau = 2^{k+3}$: if $s[i..i{+}\ell] = s[j..j{+}\ell]$, for $i,j \in S_k$ and some $\ell \ge 0$, then, for each $d \in [0..\ell{-}2^{k+3})$, $i + d \in S_k$ iff $j + d \in S_k$. In view of property~(a), the only interesting case is when $\ell > 2^{k+3}$ and $d \in (0..2^{k+3})$. Given $i + d \in S_k$, let us show that $j + d \in S_k$; the converse direction is symmetric. By the inductive hypothesis, we have $i + d' \in S_{k-1}$ iff $j + d' \in S_{k-1}$, for any $d' \in [0..\ell{-}2^{k+2})$; moreover, $s[i{+}d' .. i{+}d'{+}2^k] = s[j{+}d' .. j{+}d'{+}2^k]$, for such $d'$, since $2^k < 2^{k+2}$. In particular, $i + d$ and $j + d$ both belong to $S_{k-1}$. Let $i + d = j_h$ and $j + d = j_{h'}$, for some $h$ and $h'$. The remaining case analysis is exactly as in the proof of property~(a): the only strings of interest to the ``left'' of $i + d$ and $j + d$ in the analysis are $s[j_{h-1} .. j_{h-1}{+}2^k]$ and $s[j_{h'-1} .. j_{h'-1}{+}2^k]$, respectively, and they coincide since $j_{h} - j_{h-1} = j_{h'} - j_{h'-1} \le d$; all strings to the ``right'' are addressed using the inductive hypothesis.
It remains to establish property~(c) for $S_k$ with $\tau = 2^{k+3}\lfloor\log^{(3)} n\rfloor$: if $i,j \in S_k$ with $i < j$, $(i..j) \cap S_k = \emptyset$, and $j - i > \tau$, then $s[i..j]$ has a period at most $\tau / 4$. We shall actually prove a stronger claim that $s[i..j]$ has a period at most $2^{k-1}$. We assume that $n > 2^{16}$ so that $\tau > 2^{k+3}$. If $(i..j) \cap S_{k-1} = \emptyset$, then the property follows from the inductive hypothesis. Assume that the set $(i..j) \cap S_{k-1}$ is not empty and denote by $j_r, j_{r+1}, \ldots, j_t$ all its positions. Note that $j_\ell \not\in S_k$, for all $\ell \in [r..t]$, and $i = j_{r-1}$ and $j = j_{t+1}$. We have $j_{\ell} - j_{\ell-1} \le 2^{k-1}$, for all $\ell \in [r..t{+}1]$, since otherwise $j_{\ell-1}, j_\ell \in S_k$. The latter implies that $2 + t - r \ge (j - i) / 2^{k-1} > \tau / 2^{k-1} = 2^4\lfloor\log^{(3)} n\rfloor$.
Suppose that $R(j_{u})$ holds, for some $u \in [r..t]$, and let $u$ be the smallest such $u$. By Lemma~\ref{lem:all-R}, the last position of the all-$R$ region containing $j_u$ belongs to $S_k$ and its other positions, except possibly the first one, do not belong to $S_k$. Hence, $R(j_\ell)$ holds, for all $\ell \in [u..t{+}1]$, and $j$ ($=j_{t+1}$) is this last position. Suppose that $t + 1 - u \ge 2$. Then, by Lemma~\ref{lem:all-R}, the first position in the all-$R$ region containing $j_u$ belongs to $S_k$ too. Hence, this first position must be $j_{r-1} = i$. Therefore, the string $s[i..j]$ has a period at most $2^{k-1}$ due to Lemma~\ref{lem:all-R}.
Suppose that $R(j_r)$ does not hold, i.e., $j_r$ belongs to an all-non-$R$ region according to Lemma~\ref{lem:all-non-R}. Then, by the above argument, $R(j_u)$ may hold only for $u = t$ in $[r..t]$ (so that $t + 1 - u < 2$) and, therefore, all $j_r, j_{r+1}, \ldots, j_{t-1}$ are in the same all-non-$R$ region. By Lemma~\ref{lem:all-non-R}, we have $v_\ell \ne \infty$ when $\ell \in [r..t{-}5]$. Hence, $(t - 5) - r + 1 < 8\log^{(3)} n + 12$ since otherwise, by Lemma~\ref{lem:local-density}, $j_\ell \in S_k$, for some $\ell \in [r..t{-}5]$. But this contradicts the inequality $t - r > 2^4\lfloor\log^{(3)} n\rfloor - 2$ derived above, assuming $n$ is large enough. Thus, this case (when $R(j_r)$ does not hold) is impossible.
\end{proof}
\section{Speeding up the Refinement Procedure}
\label{sec:time-improvement}
Since, for any $k$, $|S_k| \le n / 2^{k-6}$ by Lemma~\ref{lem:local-sparsity}, it is evident that the algorithm of Section~\ref{sec:vishkin-process} takes $\Oh(|S_0| + |S_1| + \cdots) = \Oh(n)$ time plus the time needed to calculate the numbers $v'_h$, for all positions (from which the numbers $v_h$ are derived). For a given $k \ge 1$, denote by $j_h$ the $h$th position in $S_{k-1}$. For each $j_h$, the number $v'_h$ can be computed by checking whether $j_{h+1} - j_h > 2^{k-1}$ (in this case $v'_h = \infty$), and, if $j_{h+1} - j_h \le 2^{k-1}$, by the na{\"i}ve comparison of $s[j_h..j_h{+}2^k]$ and $s[j_{h+1}..j_{h+1}{+}2^k]$ in $\Oh(2^k)$ time. Thus, all numbers $v'_h$ for the set $S_{k-1}$ can be computed in $\Oh(2^k |S_{k-1}|) = \Oh(n)$ time, which leads to $\Oh(n\log\tau)$ total time for the whole algorithm.
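A minimal sketch of this na{\"i}ve computation follows; we assume that letters fit in $W$ bits, that the numbers $s_h$ pack the letters in the little-endian fashion of their definition, and that the string is padded on the right so that every accessed position exists.
\begin{verbatim}
INF = float('inf')
W = 8                              # assumed bits per letter

def naive_v_prime(s, js, k):
    """v'_h for the adjacent pairs of positions js of S_{k-1};
    s is a bytes-like string with sufficient right padding."""
    res = []
    for h in range(len(js) - 1):
        i, j = js[h], js[h + 1]
        if j - i > 2 ** (k - 1):
            res.append(INF)
            continue
        lcp = 0                    # first mismatching letter: O(2^k) scan
        while lcp <= 2 ** k and s[i + lcp] == s[j + lcp]:
            lcp += 1
        if lcp > 2 ** k:           # s_h = s_{h+1}, i.e., R(js[h]) holds
            res.append(INF)
        else:
            x, y = s[i + lcp], s[j + lcp]
            lb = ((x ^ y) & -(x ^ y)).bit_length() - 1
            res.append(2 * (W * lcp + lb) + ((x >> lb) & 1))
    return res
\end{verbatim}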
This na{\"i}ve approach can be sped up if one can perform the LCE queries that compare $s[j_h..j_h{+}2^k]$ and $s[j_{h+1}..j_{h+1}{+}2^k]$ faster; in fact, if one can do this in $\Oh(1)$ time, the overall time becomes linear. To this end, we exploit the online nature of the procedure. The algorithm runs simultaneously $\lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$ phases: the $k$th phase takes positions from the set $S_{k-1}$ produced by the $(k-1)$th phase and decides which of them to feed to the $(k+1)$th phase, i.e., to put into $S_k$ (the ``top'' phase feeds the positions to an external procedure described in the following section). To make the decision for a position $j_h \in S_{k-1}$, the $k$th phase needs to know the distance $j_h - j_{h-1}$ and the distances $j_{h+\ell} - j_h$ to the positions $j_{h+\ell}$ with $\ell \in [1..5]$ such that $j_{h+\ell} - j_h \le 5\cdot 2^{k-1}$. Then, the $k$th phase calculates $\min\{2^k + 1, \lce(j_{h + \ell - 1}, j_{h + \ell})\}$, for all $\ell \in [0..5]$ such that $j_{h + \ell} - j_{h + \ell - 1} \le 2^{k-1}$ and $j_{h + \ell} - j_h \le 5\cdot 2^{k-1}$, and, based on the distances and the LCE values, computes $v_{h-1}, v_{h}, v_{h+1}$ and decides the fate of $j_h$.
Thus, for any prefix $s[0..d]$, once the positions $S_{k-1} \cap [0..d]$ are known to the $k$th phase, it reports all positions from the set $S_k \cap [0..d{-}5\cdot 2^{k-1}]$ and no position from the set $S_{k-1} \cap [0..d{-}6\cdot 2^{k-1}]$ will be accessed by an LCE query of the $k$th phase in the future. We deduce from this that after the processing of the prefix $s[0..d]$ by the whole algorithm, the $k$th phase reports all positions from the set $S_{k} \cap [0..d{-}5\sum_{k'=0}^{k-1} 2^{k'}] \supseteq S_{k} \cap [0..d{-}5\cdot 2^{k}]$ and no LCE query in the $k$th phase will access positions from the set $S_{k-1} \cap [0..d{-}6\cdot 2^{k}]$ in the future.
\begin{lemma}
Suppose we run the described $\lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$ phases on a string $s$ of length $n$ from left to right. Then, for any $k \ge 1$ and $d \ge 0$, after processing the prefix $s[0..d]$, the $k$th phase reports all positions from $S_k \cap [0..d{-}5\cdot 2^k]$ to the $(k+1)$th phase and will not perform queries $\lce(j,j')$ on positions $j, j' \in S_{k-1}$ such that $\min\{j,j'\} \le d - 6\cdot 2^k$ in the future.\label{lem:dist-process}
\end{lemma}
\paragraph{The case \boldmath $\tau < \sqrt{n}$.}
Each phase in this scheme uses $\Oh(1)$ space so that the overall space is $\Oh(\log\tau)$, which, as we assumed, is always available. Suppose that $\tau < \sqrt{n}$. We have $b = \Theta(\frac{n}{\tau}) = \Omega(\sqrt{n})$ additional space for the algorithm in this case, which can be utilized to speed up LCE queries, the bottleneck of the scheme, as follows. Consider the (overlapping) substrings $C_i = s[i\lfloor\sqrt{n}\rfloor..(i + 3)\lfloor\sqrt{n}\rfloor - 1]$, for $i \in [0..n / \lfloor\sqrt{n}\rfloor{-}3]$. We build in linear time for $C_0$ an LCE data structure~\cite{HarelTarjan} using $\Oh(|C_0|) = \Oh(\sqrt{n})$ space that can answer LCE queries in $\Oh(1)$ time. Then, the algorithm processes the prefix $s[0.. 2\lfloor\sqrt{n}\rfloor{-}1]$ from left to right feeding the positions $[0.. 2\lfloor\sqrt{n}\rfloor)$ to the first and all subsequent phases and computing queries $\min\{2^k + 1, \lce(j, j')\}$, with $j, j' \in [0.. 2\lfloor\sqrt{n}\rfloor)$, emerging in the phases along the way in $\Oh(1)$ time; the latter is possible since $2^k \le \frac{\tau}{2^4\log^{(3)} n} < \frac{1}{8}\sqrt{n}$ for all $k \le \lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$, assuming $n > 2^{16}$, and hence, the strings $s[j..j{+}2^k]$ and $s[j'..j'{+}2^k]$ in the queries are substrings of $C_0$. Since $6\cdot 2^k < \sqrt{n}$ for $k \le \lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$, it follows from Lemma~\ref{lem:dist-process} that any subsequent LCE queries $\lce(j, j')$ in the algorithm that emerge after the processing of the prefix $s[0.. 2\lfloor\sqrt{n}\rfloor{-}1]$ can be performed only on positions $j$ and $j'$ such that $\min\{j,j'\} \ge \sqrt{n}$, i.e., the positions $j$ and $j'$ and the corresponding substrings $s[j..j{+}2^k]$ and $s[j'..j'{+}2^k]$ in the queries will be inside the string $C_1$ in the next $\sqrt{n}$ steps. Accordingly, to continue the execution of the algorithm, we build an LCE data structure for $C_1$ in place of the structure for $C_0$ and continue the run feeding the positions $[2\lfloor\sqrt{n}\rfloor..3\lfloor\sqrt{n}\rfloor)$ to the first and all subsequent phases. We continue this procedure analogously: on a generic step, after feeding the positions $[i\lfloor\sqrt{n}\rfloor..(i+1)\lfloor\sqrt{n}\rfloor)$ using an LCE data structure for $C_{i-1}$, we construct an LCE data structure for $C_i$ in its place in $\Oh(|C_i|)$ time and feed the positions $[(i+1)\lfloor\sqrt{n}\rfloor..(i+2)\lfloor\sqrt{n}\rfloor)$ to the algorithm. The overall running time is $\Oh(n + \sum_i |C_i|) = \Oh(n)$ and the occupied space is $\Oh(\sqrt{n}) = \Oh(b)$.
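The block-sliding scheme itself can be outlined as follows. In this schematic Python sketch, \texttt{NaiveLCE} is merely a stand-in exposing the interface of the suffix-tree-plus-LCA structure (its queries are, of course, not $\Oh(1)$), and \texttt{feed} abstracts passing a position together with an LCE oracle to the first phase.
\begin{verbatim}
import math

class NaiveLCE:
    """Stand-in for the O(1)-query LCE structure over one block."""
    def __init__(self, s, offset, length):
        self.block = s[offset:offset + length]
        self.offset = offset
    def lce(self, i, j):           # i, j are absolute positions
        i -= self.offset; j -= self.offset
        r = 0
        while (i + r < len(self.block) and j + r < len(self.block)
               and self.block[i + r] == self.block[j + r]):
            r += 1
        return r

def scan_with_blocks(s, feed):
    n = len(s)
    step = math.isqrt(n)
    struct = NaiveLCE(s, 0, 3 * step)       # structure for C_0
    for p in range(n):
        if p >= 2 * step and p % step == 0:
            i = p // step                   # entering stripe i: rebuild
            struct = NaiveLCE(s, (i - 1) * step, 3 * step)  # for C_{i-1}
        feed(p, struct.lce)
\end{verbatim}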
\paragraph{The case \boldmath $\tau \ge \sqrt{n}$.}
Let us generalize this idea to the case $\tau \ge \sqrt{n}$. Denote $b = \frac{n}{\tau}$. We have only $\Oh(b)$ space with $b \le \sqrt{n}$ and, hence, cannot resort to the above described scheme since LCE data structures for substrings of length $\Oh(b)$ are not enough to answer queries of the form $\min\{2^k + 1, \lce(j, j')\}$ when $2^k$ exceeds $b$. The key idea is that the queries $\lce(j, j')$ in a $k'$th phase can use only positions $j$ and $j'$ such that $j, j' \in S_{k}$, for all $k \in [0..k')$; therefore, the full suffix tree that lies at the core of the LCE data structure~\cite{HarelTarjan} used for the substrings $C_i$ is unnecessary: we can use a sparse suffix tree built only for those suffixes whose starting positions are from a set $S_{k}$, for some $k < k'$. When the sparse suffix tree is equipped with an LCA data structure~\cite{HarelTarjan}, one can compute the queries $\lce(j,j')$, for $j, j' \in S_{k'-1}$, in $\Oh(1)$ time.
Denote $\hat{b} = \lfloor\frac{b}{\log n}\rfloor$. Our new scheme evenly splits all $\lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$ phases into ``levels'' containing $\log\hat{b}$ phases. Observe that, since $b = \Omega(\log^2 n)$, we have $\log\hat{b} = \Theta(\log b)$. The scheme has $\lfloor\log(\frac{\tau}{2^4\log^{(3)} n}) / \log\hat{b}\rfloor = \Oh(\log_{\hat{b}} n)$ ``levels'', which is $\Oh(\log_b n)$ as $\log\hat{b} = \Theta(\log b)$. For integer $p \ge 0$, the $p$th level takes positions from the set $S_k$ produced by the previous level, for $k = p\lfloor\log\hat{b}\rfloor$, and produces the set $S_{k'}$, for $k' = (p + 1)\lfloor\log\hat{b}\rfloor$. The $p$th level receives positions of the set $S_{k}$ from left to right; for $p = 0$, the received positions are $0, 1, \ldots, n{-}1$. When sufficiently many positions of $S_{k}$ are collected (at most $\Oh(\hat{b})$), we temporarily pause the procedure of the previous level, process the collected chunk of positions, thus producing some positions of $S_{k'}$ (from left to right), and then continue collecting positions of $S_{k}$ until the next chunk is ready, which is again processed analogously, and so on. Let us describe this in detail.
For integer $p \ge 0$, fix $k = p\lfloor\log\hat{b}\rfloor$ and $k' = (p + 1)\lfloor\log\hat{b}\rfloor$. The $p$th level works as follows.
By analogy to the case $\tau < \sqrt{n}$, we consecutively consider substrings $C_i = s[i\hat{b}\cdot 2^{k+3}..(i + 3)\hat{b}\cdot 2^{k+3} - 1]$, for $i \in [0..n / (\hat{b}\cdot 2^{k+3}) - 3]$. First, we collect all positions of $S_k$ produced by the $(p{-}1)$th level that are less than $3\hat{b}\cdot 2^{k+3}$ (i.e., span the substring $C_0$) and, then, temporarily pause the generation of new positions from $S_k$ (i.e., pause the levels $p-1, p-2, \ldots, 0$). Denote $Q_i = S_k \cap [i\hat{b}\cdot 2^{k+3}..(i + 3)\hat{b}\cdot 2^{k+3})$. By Lemma~\ref{lem:local-sparsity}, we have $|Q_i| \le \Oh(\hat{b})$. Since, by Lemma~\ref{lem:2k-partitioning}, the set $S_k$ is ``almost'' $2^{k+3}$-partitioning (it satisfies properties~(a) and~(b)), we can construct on the string $C_0$ a sparse suffix tree, for all its suffixes $s[j..|C_0|{-}1]$ starting at positions $j \in Q_0$, in $\Oh(|C_0| + |C_0|\min\{\log_{\hat{b}} n, \frac{\log\hat{b}}{2^k}\})$ time using Lemma~\ref{lem:sst-special}. This time can be upper-bounded by $\Oh(|C_0| \log_b n)$ for $p = 0$ (note that $\log_{\hat{b}} n = \Theta(\log_b n)$ since $b \ge \Omega(\log^2 n)$), and by $\Oh(|C_0| + |C_0|\frac{\log\hat{b}}{\hat{b}^p}) = \Oh(|C_0|)$ for $p \ge 1$ (recall that $k = p\lfloor\log\hat{b}\rfloor$). The sparse suffix tree is equipped, in $\Oh(\hat{b})$ time, with an LCA data structure~\cite{HarelTarjan} that allows us to compute LCE queries for substrings of $C_0$ starting at positions from $Q_0$ in $\Oh(1)$ time.
We build the set $S_{k'}$ as follows: we first process the positions $S_k \cap [0..2\hat{b}\cdot 2^{k+3})$ by the usual procedure of the phases $k + 1, k + 2, \ldots, k'$ using the sparse suffix tree of $C_0$ in order to answer LCE queries of the form $\min\{2^{k'} + 1, \lce(j,j')\}$ emerging along the way in $\Oh(1)$ time. This is possible since $2^{k'} \le \hat{b}\cdot 2^k$ and the queries $\lce(j,j')$ access only positions $j, j' \in Q_0 \cap [0..2\hat{b}\cdot 2^{k+3})$ so that $s[j..j{+}2^{k'}]$ and $s[j'..j'{+}2^{k'}]$ are substrings of $C_0$. By Lemma~\ref{lem:dist-process}, we shall report in this way all positions $S_{k'} \cap [0..2\hat{b}\cdot 2^{k+3} - 5\cdot 2^{k'}] \supseteq S_{k'} \cap [0 .. (1 + \frac{3}{8})\hat{b}\cdot 2^{k+3}]$ (note that since $2^{k'} \le \hat{b}\cdot 2^k$, we have $2\hat{b}\cdot 2^{k+3} - 5\cdot 2^{k'} \ge 2\hat{b}\cdot 2^{k+3} - 5\hat{b}\cdot 2^k = (1 + \frac{3}{8})2^{k+3}\hat{b}$). Then, we resume the procedures of the previous temporarily paused levels $p - 1, p-2, \ldots, 0$ that generate $S_k$ and collect all positions from $S_k \cap [3\hat{b}\cdot 2^{k+3}..4\hat{b}\cdot 2^{k+3})$. Once they are collected and, therefore, the set $Q_1$ is known, the previous levels $p - 1, p-2, \ldots, 0$ are again paused. We construct the sparse suffix tree of $C_1$, for all suffixes starting at positions from $Q_1$, in place of the tree for $C_0$ using Lemma~\ref{lem:sst-special}, which takes $\Oh(|C_1|\log_b n)$ or $\Oh(|C_1|)$ time depending on whether $p = 0$ or not. Further, we feed the positions $S_k \cap [2\hat{b}\cdot 2^{k+3}..3\hat{b}\cdot 2^{k+3})$ to the phases $k + 1, k + 2, \ldots, k'$ using the suffix tree to answer LCE queries. This is possible since, by Lemma~\ref{lem:dist-process}, all substrings $s[j..j{+}2^{k'}]$ that are accessed by LCE queries at this point have $j \ge 2\hat{b}\cdot 2^{k+3} - 6\cdot 2^{k'} > \hat{b}\cdot 2^{k+3}$. Thus, we report the rest of the set $S_{k'} \cap [0..3\hat{b}\cdot 2^{k+3} - 5\cdot 2^{k'}] \supseteq S_{k'} \cap [0 .. (2 + \frac{3}{8})\hat{b}\cdot 2^{k+3}]$ as a result. Analogously, we continue the described procedure for $C_2, C_3, \ldots$: for $C_i$, we collect positions from $S_k \cap [(i+2)\hat{b}\cdot 2^{k+3}..(i+3)\hat{b}\cdot 2^{k+3})$, thus constructing $Q_i$, then temporarily pause the previous levels, construct a sparse suffix tree for the suffixes of $C_i$ starting at positions from $Q_i$, and feed the positions $S_k \cap [(i + 1)\hat{b}\cdot 2^{k+3}..(i + 2)\hat{b}\cdot 2^{k+3})$ to the phases $k + 1, k + 2, \ldots, k'$, thus reporting $S_{k'} \cap [0 .. (i + 1 + \frac{3}{8})\hat{b}\cdot 2^{k+3}]$ in the end; then, we move on to $C_{i+1}$.
Each level uses $\Oh(\hat{b})$ space. Since the number of levels is $\Oh(\log_b n)$, the total space is $\Oh(\hat{b}\log_b n) = \Oh(b)$. The overall running time of the first level ($p=0$) is $\Oh(\sum_i |C_i|\log_b n) = \Oh(n\log_b n)$, the time for each subsequent level ($p > 0$) is $\Oh(\sum_i |C_i|) = \Oh(n)$. Thus, the total time is $\Oh(n \log_b n)$.
\section{Recompression}
\label{sec:recompression}
Let $S$ be the set produced by the last phase of the procedure described in Sections~\ref{sec:vishkin-process} and~\ref{sec:time-improvement}. By Lemma~\ref{lem:2k-partitioning}, $S$ is a $\frac{\tau}2$-partitioning set of size $\Oh(\frac{n}{\tau} \log^{(3)} n)$. Throughout this section, we assume that $\tau \ge (\log^{(3)} n)^4$ so that the size of $S$ is at most $\Oh(\frac{n}{(\log^{(3)} n)^3})$; the case $\tau < (\log^{(3)} n)^4$ is discussed in Section~\ref{sec:small-tau}. In what follows, we describe an algorithm that removes some positions from $S$, transforming it into a $\tau$-partitioning set of size $\Oh(n / \tau)$.
Instead of storing $S$ explicitly, which is impossible in $\Oh(n / \tau)$ space, we construct a string $R$ related to $S$, of length $\Oh(\frac{n}{\tau} \log^{(3)} n)$ over a small alphabet, such that $R$ can be packed into $\Oh(n / \tau)$ machine words. Positions of $S$ are represented, in a way, by letters of $R$. The construction of $R$ is quite intricate, which is necessary in order to guarantee that letters of $R$ corresponding to close positions of $S$ (namely, positions at a distance at most $\tau / 2^5$) are necessarily distinct even if the letters are not adjacent in $R$. This requirement is stronger than the requirement of distinct adjacent letters seen, for instance, in Lemma~\ref{lem:vishkin}, but it is achieved by similar means using $\vbit$ reductions as in Section~\ref{sec:vishkin-process}. We then apply to $R$ a variant of the iterative process called \emph{recompression}~\cite{Jez4} that removes some letters, thus shrinking the length of $R$ to $\Oh(n / \tau)$. Then, the whole procedure of Sections~\ref{sec:vishkin-process}--\ref{sec:time-improvement} that generated $S$ is performed again but, this time, we discard all positions of $S$ corresponding to removed positions of the string $R$ and store the remaining positions explicitly in a set $S^* \subseteq S$. We show that $S^*$ is $\tau$-partitioning and has size $\Oh(n / \tau)$. Let us elaborate on the details.
\medskip
The algorithm starts with an empty string $R$ and receives positions of $S$ from left to right, appending to the end of $R$ new letters corresponding to the received positions. It is more convenient to describe the algorithm as if it acted in two stages: the first stage produces a $\frac{3}{4}\tau$-partitioning set $S' \subseteq S$, for which a condition converse to property~(c) holds (thus, some positions of $S$ are discarded already at this early stage), and the second stage, for each position of $S'$, appends to the end of $R$ a letter of $\Oh((\log^{(3)} n)^2)$ bits. Both stages act in an almost online fashion and, hence, can in fact be executed simultaneously in one pass without the need to store the auxiliary set $S'$. The separation is merely for ease of exposition.
\paragraph{\boldmath The first stage.}
The goal of the construction of the set $S' \subseteq S$ is to get rid of ``close'' positions $i,j \in S$ such that $s[i..i{+}\tau/2] = s[j..j{+}\tau/2]$ and of all positions between such $i$ and $j$. The set $S'$ is defined as the set $S$ from which we exclude all positions $h \in S$ for which there exist $i,j \in S$ such that $i < h \le j$, $j - i \le \tau / 4$, and $s[i..i{+}\tau/2] = s[j..j{+}\tau/2]$. The algorithm generating $S'$ is as follows.
We consider all positions of $S$ from left to right and, for each $i \in S$, process every $j \in (i..i{+}\tau/4] \cap S$ by comparing $s[i..i{+}\tau/2]$ with $s[j..j{+}\tau/2]$. If $s[i..i{+}\tau/2] = s[j..j{+}\tau/2]$, then we traverse all positions of the set $(i..j] \cap S$ from right to left marking them for removal until an already marked position is encountered. Since the marking procedure works from right to left, every position is marked at most once. The position $i$ is put into $S'$ iff it was not marked previously. During the whole process, we maintain a ``look-ahead'' queue that stores the positions $(i..i{+}\tau/4] \cap S$ and indicates which of them were marked for removal.
Due to Lemma~\ref{lem:local-sparsity}, the size of the set $(i..i{+}\tau/4] \cap S$ is $\Oh(\log^{(3)} n)$. Therefore, the look-ahead queue takes $\Oh(\log^{(3)} n)$ space, which is $\Oh(n / \tau)$ since $n / \tau \ge \log^2 n$, and $\Oh(\log^{(3)} n)$ comparisons are performed for each $i$. Hence, if every comparison takes $\Oh(1)$ time, the set $S'$ is constructed in $\Oh(|S|\log^{(3)} n) = \Oh(\frac{n}{\tau}(\log^{(3)} n)^2)$ time, which is $\Oh(n)$ since $\tau \ge (\log^{(3)} n)^4$. Thus, it remains to explain how the comparisons can be performed.
Similar to the algorithm of Section~\ref{sec:time-improvement}, we consecutively consider substrings $C'_i = s[i\tau..(i + 3)\tau - 1]$, for $i \in [0..n / \tau - 3]$: when all positions from a set $S \cap [i\tau..(i + 3)\tau)$ are collected, we use the algorithm of Lemma~\ref{lem:sst-special} to build a sparse suffix tree for all suffixes of the string $C'_i$ whose starting positions are from $S$; the tree, endowed with an LCA data structure~\cite{HarelTarjan}, is used in the procedure for deciding which of the positions from the set $S \cap [(i + \frac{1}2)\tau .. (i + \frac{3}2)\tau)$ (or $S \cap [0 .. \frac{3}{2}\tau)$ if $i = 0$) should be marked for removal. Thus, after processing the last string $C'_i$, all positions of $S$ are processed and $S'$ is generated. By Lemma~\ref{lem:local-sparsity}, the number of suffixes in the sparse suffix tree for $C'_i$ is $\Oh(\log^{(3)} n)$ and, therefore, the tree occupies $\Oh(\log^{(3)} n) \le \Oh(n / \tau)$ space and its construction takes $\Oh(\tau + \log^{(3)} n \cdot \log \log^{(3)} n)$ time by Lemma~\ref{lem:sst-special}, which is $\Oh(\tau)$ since $\tau \ge (\log^{(3)} n)^4$. Thus, the total construction time for all the trees in the stage is $\Oh(\frac{n}{\tau}\tau) = \Oh(n)$ and the space used is $\Oh(\log^{(3)} n)$ since, at every moment, at most one tree is maintained.
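The following Python sketch summarizes the first stage, assuming that \texttt{eq\_context(i, j)} decides whether $s[i..i{+}\tau/2] = s[j..j{+}\tau/2]$ in $\Oh(1)$ time (in the actual algorithm, via the sparse suffix trees of the substrings $C'_i$); the look-ahead queue is modeled here by index arithmetic on the sorted list of positions.
\begin{verbatim}
def first_stage(S, tau, eq_context):
    """Build S' from the sorted positions S of the tau/2-partitioning
    set, marking runs of positions for removal as described above."""
    marked = [False] * len(S)
    S_prime = []
    for a in range(len(S)):
        i = S[a]
        b = a + 1                  # look-ahead over (i..i+tau/4]
        while b < len(S) and S[b] - i <= tau // 4:
            if eq_context(i, S[b]):
                c = b              # mark (i..S[b]] right to left until
                while c > a and not marked[c]:   # a marked one is met
                    marked[c] = True
                    c -= 1
            b += 1
        if not marked[a]:
            S_prime.append(i)
    return S_prime
\end{verbatim}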
\begin{lemma}
The set $S'$ is $\tau$-partitioning. Also a converse of property~(c) holds for $S'$: if a substring $s[i..j]$ has a period at most $\tau / 4$, then $S' \cap [i + \frac{3}{4}\tau .. j - \frac{3}{4}\tau] = \emptyset$. Moreover, $S'$ is almost $\frac{3}{4}\tau$-partitioning: it satisfies properties~(a) and~(b) with $\frac{3}{4}\tau$ in place of $\tau$, but does not necessarily satisfy~(c).\label{lem:periodicity-gap}
\end{lemma}
\begin{proof}
For property~(a) of $S'$ (with $\frac{3}{4}\tau$ in place of $\tau$), consider $p$ and $q$ such that $s[p{-}\frac{3}{4}\tau .. p{+}\frac{3}{4}\tau] = s[q{-}\frac{3}{4}\tau..q{+}\frac{3}{4}\tau]$. It suffices to show that if $p \in S'$, then $q \in S'$. Since $S$ is a \mbox{$\frac{\tau}2$-partitioning} set and $S' \subseteq S$, the inclusion $p \in S'$ implies $q \in S$, by property~(a) of $S$. The position $q$ could be excluded from $S'$ only if there exist $r \in (0..\tau/4]$ and $r' \in [0..\tau/4]$ such that $r' + r \le \tau/4$, $q - r \in S$, $q + r' \in S$, and $s[q{-}r..q{-}r{+}\frac{\tau}{2}] = s[q{+}r'..q{+}r'{+}\frac{\tau}{2}]$. Then, since $p - \frac{3}{4}\tau \le p - r - \frac{\tau}{2}$ and $p + r' + \frac{\tau}{2} \le p + \frac{3}{4}\tau$, we have $p - r \in S$ and $p + r' \in S$ by property~(a) of $S$. Therefore, $p$ must be excluded from $S'$ too after the comparison of the equal substrings $s[p{-}r..p{-}r{+}\frac{\tau}{2}]$ and $s[p{+}r'..p{+}r'{+}\frac{\tau}{2}]$, which contradicts the assumption $p \in S'$.
For property~(b) of $S'$ (with $\frac{3}{4}\tau$ in place of $\tau$), consider $p,q \in S'$ such that $s[p..p{+}\ell] = s[q..q{+}\ell]$, for some $\ell \ge 0$. By contradiction, assume that there exists $d \in [0..\ell{-}\frac{3}{4}\tau)$ such that $p + d \in S'$ whereas $q + d \not\in S'$, and $d$ is the smallest such integer. Denote $p' = p + d$ and $q' = q + d$. Since $p' \in S' \subseteq S$, we have $q' \in S$, by property~(b) of $S$. Then, as above, $q'$ could be excluded from $S'$ only if there are $r \in (0..\tau/4]$ and $r' \in [0..\tau/4]$ such that $r' + r \le \tau/4$, $q' - r \in S$, $q' + r' \in S$, and $s[q'{-}r..q'{-}r{+}\frac{\tau}{2}] = s[q'{+}r'..q'{+}r'{+}\frac{\tau}{2}]$. Hence, all positions $S \cap (q'{-}r .. q'{+}r']$ were excluded from $S'$, i.e., $S' \cap (q'{-}r .. q'{+}r'] = \emptyset$. Since $q \in S'$, we have $q' - r \ge q$. Since $d + r < \ell - \frac{\tau}{2}$ and $q' - r, q' + r' \in S$, we obtain $p' - r \in S$ and $p' + r' \in S$, by property~(b) of $S$. Therefore, the positions $S \cap (p'{-}r .. p'{+}r']$ should all have been excluded from $S'$ as well, including $p'$ itself, which is a contradiction.
For property~(c) of $S'$ (with $\tau$), consider $p, q \in S'$ such that $q - p > \frac{3}{4}\tau$ and $(p..q) \cap S' = \emptyset$. We are to show that $s[p..q]$ has a period at most $\tau / 4$ (property~(c) does not necessarily hold for $\frac{3}{4}\tau$, only for $\tau$). If $(p..q) \cap S = \emptyset$, then the string $s[p..q]$ has a period at most $\tau / 8$ (${<}\tau/4$) because of property~(c) of $S$. Suppose that $(p..q) \cap S \ne \emptyset$. Denote by $D$ the set of all pairs of positions $(i,j)$ such that $i,j \in S$, $0 < j - i \le \tau /4$, and $s[i..i{+}\frac{\tau}{2}] = s[j..j{+}\frac{\tau}{2}]$. The set $S'$ is generated from $S$ by removing, for each pair $(i,j) \in D$, the positions $S \cap (i..j]$. Therefore, all positions of $S$ between $p$ and $q$ are covered by half-intervals $(i..j]$ with $(i,j) \in D$. Further, there is a subcover of $S \cap (p..q)$ consisting of interleaving half-intervals $\{(i_t..j_t]\}_{t=1}^m$ from $D$, i.e., $i_1 < i_2 < \cdots < i_m$, $S \cap (p..q) \subseteq (i_1..j_m]$, and, for any $t \in [1..m)$, $i_{t+1} \le j_t$. Since, for each $(i,j) \in D$, the string $s[i..j{+}\frac{\tau}{2}]$ has a period at most $\tau / 4$, the period of the substring $s[i_1..j_m{+}\frac{\tau}{2}]$ is at most $\tau / 4$ by Lemma~\ref{lem:fine-wilf}. It is easy to see that $i_1 = p$. Hence, if $j_m + \frac{\tau}{2} \ge q$, then property~(c) for $S'$ is established: $s[p..q]$ has a period at most $\tau / 4$. Otherwise (if $j_m + \frac{\tau}{2} < q$), we have $S \cap (j_m..q) = \emptyset$ and, therefore, $s[j_m .. q]$ has a period at most $\tau / 8$, by property~(c) of $S$. Then, $s[p..q]$ has a period at most $\tau / 4$ by Lemma~\ref{lem:fine-wilf} since it is covered by two overlapping substrings $s[p..j_m{+}\frac{\tau}{2}]$ and $s[j_m..q]$ with periods at most $\tau / 4$.
For the converse of property~(c), consider a substring $s[i..j]$ whose minimal period $\pi$ is at most $\tau / 4$. Let $q \in S \cap [i + \frac{3}{4}\tau .. j - \frac{3}{4}\tau]$. Denote $p = q - \pi$. Due to periodicity of $s[i..j]$, we obtain $s[q{-}\frac{\tau}{2} .. q{+}\frac{\tau}{2}] = s[p{-}\frac{\tau}{2} .. p{+}\frac{\tau}{2}]$. By property~(a) of $S$, $p \in S$. Since $q \in (p .. p{+}\tau/4]$, the procedure generating $S'$ must have compared the strings $s[p..p{+}\frac{\tau}{2}]$ and $s[q..q{+}\frac{\tau}{2}]$ during the analysis of the position $p$ and, as a result, could not put $q$ into $S'$. Hence, $S' \cap [i + \frac{3}{4}\tau .. j - \frac{3}{4}\tau] = \emptyset$.
\end{proof}
\paragraph{The second stage.}
We consider all positions of $S'$ from left to right and, for each $p \in S'$, append to the end of the (initially empty) string $R$ a new carefully constructed letter $a_p$ occupying $\Oh((\log^{(3)} n)^2)$ bits. Thus, the string $R$ will have length $|S'|$ and will take $\Oh(|S'|(\log^{(3)} n)^2) = \Oh(\frac{n}{\tau} (\log^{(3)} n)^3)$ bits of space, which can be stored in $\Oh(n / \tau)$ machine words of size $\Oh(\log n)$ bits. The main property of $R$ that is of interest for us is that any two letters of $R$ corresponding to close positions of $S'$ are distinct; namely, we will prove the following lemma:
\begin{restatable}{lem}{lemDistinctLetters}
For any $p, \bar{p} \in S'$, if $0 < \bar{p} - p \le \tau / 2^5$, then $a_p \ne a_{\bar{p}}$.\label{lem:r-letters-equality}
\end{restatable}
Consider $p \in S'$. We now describe an algorithm generating an $\Oh((\log^{(3)} n)^2)$-bit letter $a_p$ for $p$, which will be appended to the string $R$.
Let $m = |S' \cap (p..p{+}\tau/2^5]|$. Denote by $p_1, p_2, \ldots, p_{m}$ all positions of $S' \cap (p..p{+}\tau/2^5]$ in the increasing order. By Lemma~\ref{lem:local-sparsity}, $m \le \Oh(\log^{(3)} n)$ and, hence, there is enough space to store them. By construction, $s[p..p{+}\frac{\tau}{2}] \ne s[p_j..p_j{+}\frac{\tau}{2}]$, for each $j \in [1..m]$. One can compute the longest common prefix of $s[p..p{+}\frac{\tau}{2}]$ and $s[p_j..p_j{+}\frac{\tau}{2}]$, for any $j \in [1..m]$, in $\Oh(1)$ time using a sparse suffix tree with an LCA data structure~\cite{HarelTarjan} built in the first stage for a substring $C'_i$ such that $p \in [i\tau..(i+\frac{3}{2})\tau)$. (In the simultaneous run of the stages, we handle $p$, which was reported by the first stage after processing $C'_i$, only when $C'_{i+1}$ was processed too in order to have $p_1, p_2, \ldots, p_m$ prepared; thus, the first stage maintains two sparse suffix trees: one for a substring $C'_{i+1}$ currently under analysis and one for $C'_{i}$, retained for its use in the second stage.)
Denote $\ell = 2^6\lceil\log^{(3)} n\rceil$. Recall that $S$ is produced by the $k$th phase of the procedure of Section~\ref{sec:vishkin-process}, for $k = \lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$, and hence, by Lemma~\ref{lem:local-sparsity}, the size of any set $S \cap [i..j]$, for $i \le j$, is at most $2^6\lceil (j - i + 1) / 2^{k}\rceil$. Therefore, since $S' \subseteq S$ and $m$ is the size of the set $S' \cap (p..p{+}\tau/2^5]$, we obtain $m \le 2^6 (\tau / 2^5) / \frac{\tau}{2\cdot 2^4\log^{(3)} n} \le \ell$.
Let $w$ be the number of bits in an $\Oh(\log n)$-bit machine word sufficient to represent letters from the alphabet $[0..n^{\Oh(1)}]$ of $s$. For each $p_j$, denote $t_j = \sum_{i=0}^{\tau/2} s[p_j{+}i] 2^{wi}$; similarly, for $p$, denote $t = \sum_{i=0}^{\tau/2} s[p{+}i] 2^{wi}$. As in an analogous discussion in Section~\ref{sec:vishkin-process}, we do not distinguish the numbers $t_j$ and $t$ from their corresponding substrings in $s$ and use them merely in the analysis. The intuition behind our construction is that the numbers $t, t_1, t_2, \ldots, t_m$, in principle, could have been used as the letters of $R$ corresponding to the positions $p, p_1, p_2, \ldots, p_m$ since $t, t_1, t_2, \ldots, t_m$ are pairwise distinct (due to the definition of $S'$) but, unfortunately, they occupy too much space ($\Oh(w\tau)$ bits each). One has to reduce the space for the letters while retaining their distinctness. The tool capable of achieving this was already developed in Section~\ref{sec:vishkin-process}: it is the $\vbit$ reduction, a trick from Cole and Vishkin's deterministic locally consistent parsing~\cite{ColeVishkin}.
We first generate for $p$ a tuple of $\ell$ numbers $\langle w'_1, w'_2, \ldots, w'_\ell\rangle$: for $j \in [1..\ell]$, $w'_j = \vbit(t, t_j)$ if $j \le m$, and $w'_j = \infty$ otherwise. Since the longest common prefix of substrings $s[p..p{+}\frac{\tau}{2}]$ and $s[p_j..p_j{+}\frac{\tau}{2}]$, for $j \in [1..m]$, can be calculated in $\Oh(1)$ time, the computation of the tuple takes $\Oh(\ell) = \Oh(\log^{(3)} n)$ time. By Lemma~\ref{lem:vishkin}, each number $w'_j$ occupies less than $\lceil\log w + \log\tau + 1\rceil$ bits. Thus, we can pack the whole tuple into $\ell\lceil\log w + \log\tau + 1\rceil$ bits encoding each value $w'_j$ into $\lceil\log w + \log\tau + 1\rceil$ bits and representing $\infty$ by setting all bits to~$1$. We denote this chunk of $\ell\lceil\log w + \log\tau + 1\rceil$ bits by $\bar{t}$. In the same way, for each $p_i$ with $i \in [1..m]$, we generate a tuple $\langle w'_{i,1}, w'_{i,2}, \ldots, w'_{i,\ell}\rangle$ comparing $s[p_i..p_i{+}\tau/2]$ to $s[q..q{+}\tau/2]$, for each $q \in S' \cap (p_i..p_i{+}\tau/2^5]$, and using the $\vbit$ reduction; the tuple is packed into a chunk $\bar{t}_i$ of $\ell\lceil\log w + \log\tau + 1\rceil$ bits. For each $j \in [1..m]$, the number $w'_j$ is not equal to $\infty$ and, thus, due to Lemma~\ref{lem:vishkin}, differs from the number $w'_{j,j}$. Therefore, all the tuples---and, hence, their corresponding numbers $\bar{t}, \bar{t}_1, \bar{t}_2, \ldots, \bar{t}_m$---are pairwise distinct.
The numbers $\bar{t}, \bar{t}_1, \bar{t}_2, \ldots, \bar{t}_m$, however, are still too large to serve as letters of $R$. We therefore repeat the same $\vbit$ reduction but now for the numbers $\bar{t}, \bar{t}_1, \bar{t}_2, \ldots, \bar{t}_m$ in place of $t, t_1, t_2, \ldots, t_m$, thus generating a tuple $\langle w''_1, w''_2, \ldots, w''_\ell\rangle$: for $j \in [1..\ell]$, $w''_j = \vbit(\bar{t}, \bar{t}_j)$ if $j \le m$, and $w''_j = \infty$ otherwise. The computation of $\vbit(\bar{t}, \bar{t}_j)$ takes $\Oh(\ell)$ time since $\bar{t}$ occupies $\ell$ machine words of size $\Oh(\log n)$ bits. It follows from Lemma~\ref{lem:vishkin} that the tuple $\langle w''_1, w''_2, \ldots, w''_\ell\rangle$ can be packed into a chunk $\bar{\bar{t}}$ of $\ell\lceil\log\ell + \log\lceil\log w + \log\tau + 1\rceil + 1\rceil$ bits (i.e., $\Oh(\log^{(3)} n \cdot \log \log n)$ bits), which already fits into one machine word. We perform analogous reductions for the positions $p_1, p_2, \ldots, p_m$, generating $m$ tuples $\langle w''_{i,1}, w''_{i,2}, \ldots, w''_{i,\ell}\rangle$, for $i\in [1..m]$, packed into new chunks $\bar{\bar{t}}_1, \bar{\bar{t}}_2, \ldots, \bar{\bar{t}}_m$, respectively. Note that, in order to produce a tuple $\langle w''_{i,1}, w''_{i,2}, \ldots, w''_{i,\ell}\rangle$, for $i \in [1..m]$, that is packed into $\bar{\bar{t}}_i$, we use not only the numbers $\bar{t}_i, \bar{t}_{i+1}, \ldots, \bar{t}_m$ corresponding to positions $p_i, p_{i+1}, \ldots, p_m$ but also similarly computed numbers at other positions from $S' \cap (p_i..p_i{+}\tau/2^5]$, if any.
By the same argument that proved the distinctness of $\bar{t}, \bar{t}_1, \bar{t}_2, \ldots, \bar{t}_m$, one can easily show that $\bar{\bar{t}}, \bar{\bar{t}}_1, \bar{\bar{t}}_2, \ldots, \bar{\bar{t}}_m$ are pairwise distinct. But they are still too large to be used as letters of $R$. Then again, we repeat the same reductions at positions $p, p_1, p_2, \ldots, p_m$ but now for the numbers $\bar{\bar{t}}, \bar{\bar{t}}_1, \bar{\bar{t}}_2, \ldots, \bar{\bar{t}}_m$ in place of $\bar{t}, \bar{t}_1, \bar{t}_2, \ldots, \bar{t}_m$, thus generating new chunks $\bar{\bar{\bar{t}}}, \bar{\bar{\bar{t}}}_1, \bar{\bar{\bar{t}}}_2, \ldots, \bar{\bar{\bar{t}}}_m$. Finally, once more, we do the $\vbit$ reduction for the numbers $\bar{\bar{\bar{t}}}, \bar{\bar{\bar{t}}}_1, \bar{\bar{\bar{t}}}_2, \ldots, \bar{\bar{\bar{t}}}_m$, generating a tuple $\langle w_1, w_2, \ldots, w_\ell\rangle$ such that, for $j \in [1..\ell]$, $w_j = \vbit(\bar{\bar{\bar{t}}}, \bar{\bar{\bar{t}}}_j)$ if $j \le m$, and $w_j = \infty$ otherwise.
Using the same reasoning as in the proof of Lemma~\ref{lem:v-reduction}, one can deduce from Lemma~\ref{lem:vishkin} that the tuple $\langle w_1, w_2, \ldots, w_\ell\rangle$ fits into a chunk of $\ell \cdot 2 \log \log^{(3)} n \le 2^6 \lceil\log^{(3)} n\rceil^2$ bits (the inequality holds provided $n > 2^{16}$) encoding each value $w_j$ into $\lceil\log^{(3)} n\rceil$ bits and representing $\infty$ by setting all $\lceil\log^{(3)} n\rceil$ bits to~$1$. Denote by $a_p$ this chunk of $2^6\lceil\log^{(3)} n\rceil^2$ bits that encodes the tuple. We treat $a_p$ as a new letter of $R$ that corresponds to the position $p$ and we append $a_p$ to the end of $R$. Lemma~\ref{lem:r-letters-equality} follows then straightforwardly by construction.
For a given $p \in S'$, the calculation of the numbers $\bar{t}, \bar{\bar{\bar{t}}}, a_p$ takes $\Oh(\ell^2)$ time. The calculation of the number $\bar{\bar{t}}$ requires $\Oh(\ell^3)$ time since each $\vbit$ reduction $\vbit(\bar{t}, \bar{t}_j)$ for it takes $\Oh(\ell)$ time. Therefore, the total time for the computation of the string $R$ is $\Oh(|S'|\ell^3) = \Oh(\frac{n}{\tau} (\log^{(3)} n)^4)$, which is $\Oh(n)$ since $\tau \ge (\log^{(3)} n)^4$.
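One level of this chain of reductions can be sketched as follows (schematic Python: \texttt{num} maps each position of $S'$ to its current number, say $t$ or $\bar{t}$, and \texttt{width} is the per-entry bit width appropriate for the level; all-ones encodes $\infty$). Iterating the map four times, with the widths dictated by Lemma~\ref{lem:vishkin}, yields the letters $a_p$.
\begin{verbatim}
def vbit(x, y):
    l = ((x ^ y) & -(x ^ y)).bit_length() - 1
    return 2 * l + ((x >> l) & 1)

def reduction_level(S, tau, ell, width, num):
    """Return the packed tuple (the next-level number) for every p in
    the sorted list S; num[p] != num[q] holds for close p and q."""
    INF = (1 << width) - 1         # all bits set encodes "infinity"
    out = {}
    for a, p in enumerate(S):
        chunk, slot, b = 0, 0, a + 1
        while b < len(S) and S[b] - p <= tau // 32:  # (p..p+tau/2^5]
            w = vbit(num[p], num[S[b]])
            assert w < INF         # ensured by the Cole-Vishkin bound
            chunk |= w << (slot * width)
            slot += 1
            b += 1
        while slot < ell:          # pad the tuple up to length ell
            chunk |= INF << (slot * width)
            slot += 1
        out[p] = chunk
    return out
\end{verbatim}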
\begin{lemma}
If $s[p..p{+}\frac{7}{8}\tau] = s[q..q{+}\frac{7}{8}\tau]$, for $p, q \in S'$, then $a_p = a_q$.\label{lem:r-letters-locality}
\end{lemma}
\begin{proof}
Since $S'$ is an almost $\frac{3}{4}\tau$-partitioning set, as stated in Lemma~\ref{lem:periodicity-gap}, it follows from property~(b) for $S'$ that the positions $p$ and $q$ have a common ``right context'' of length $\tau / 8$: more formally, for any $d \in [0..\tau/8]$, we have $s[p{+}d .. p{+}d{+}\tau/2] = s[q{+}d .. q{+}d{+}\tau/2]$, and $p + d \in S'$ iff $q + d \in S'$.
In order to produce $a_p$, our algorithm first consecutively computed numbers $\bar{t}, \bar{\bar{t}}, \bar{\bar{\bar{t}}}$ for $p$. Denote by $p_1, p_2, \ldots, p_m$ all positions of $S' \cap (p..p{+}\tau/2^5]$ in the increasing order. By construction, the number $\bar{t}$ depends on a ``right context'' of $p$ of length $\tau/2^5$: $\bar{t}$ is produced by comparing $s[p..p{+}\tau/2]$ to all strings $s[p_1..p_1{+}\tau/2], \ldots,$ $s[p_m..p_m{+}\tau/2]$ and, thus, $\bar{t}$ coincides with analogously computed numbers at any other positions $r$ such that, for all $d \in [0..\tau/2^5]$, $s[p{+}d .. p{+}d{+}\tau/2] = s[r{+}d .. r{+}d{+}\tau/2]$ and $p + d \in S'$ iff $r + d \in S'$. It remains to show that the numbers $\bar{\bar{t}}, \bar{\bar{\bar{t}}}, a_p$ in the same sense depend on ``right contexts'' of $p$ of lengths $2\tau/2^5, 3\tau/2^5, 4\tau/2^5 = \tau/8$, respectively. The proof is similar for all three cases. Consider, for instance, the number $\bar{\bar{\bar{t}}}$, assuming that the claim holds for $\bar{\bar{t}}$. We obtain $\bar{\bar{\bar{t}}}$ by comparing $\bar{\bar{t}}$ to numbers $\bar{\bar{t}}_1, \bar{\bar{t}}_2, \ldots, \bar{\bar{t}}_m$ computed for $p_1, p_2, \ldots, p_m$, respectively. By the assumption, the number $\bar{\bar{t}}_m$ corresponding to the rightmost of the positions, $p_m$, depends on a ``right context'' of $p_m$ of length $2\tau/2^5$. Therefore, since $p_m - p \le \tau/2^5$, we obtain the claimed dependency of $\bar{\bar{\bar{t}}}$ on a ``right context'' of $p$ with length $\tau/2^5 + 2\tau/2^5 = 3\tau/2^5$.
\end{proof}
\newcommand{\LB}{6}
\newcommand{\LBB}{5}
\newcommand{\BB}{11}
\newcommand{\BBB}{10}
\newcommand{\LBplusBBB}{16}
\paragraph{Recompression.}
If the distance between any pair of adjacent positions of $S'$ is at least $\tau / 2^{\LB}$, then the size of $S'$ is at most $2^{\LB} n / \tau$ and, due to Lemma~\ref{lem:periodicity-gap}, $S'$ can be used as the resulting $\tau$-partitioning set of size $\Oh(n / \tau)$. Unfortunately, this might not be the case and, in general, we have to ``sparsify'' $S'$.
There is a natural one-to-one correspondence between $S'$ and positions of $R$. Using a technique of Je{\.z}~\cite{Jez4} called \emph{recompression} (see also~\cite{I}), we can remove in $\Oh(|R|)$ time some letters of $R$, reducing by a factor of $\frac{4}{3}$ the number of pairs of adjacent letters $R[i], R[i{+}1]$ whose corresponding positions in $S'$ are at a distance at most $\tau / 2^{\LB}$ (see below how the information about distances is stored). We perform such reductions until the length of $R$ becomes at most $2^{14} \cdot n / \tau$. The positions of $S'$ corresponding to the remaining letters will constitute a $\tau$-partitioning set of size $\Oh(n / \tau)$. In order to guarantee that this subset of $S'$ is $\tau$-partitioning, we have to execute the recompression reductions gradually, increasing the distances that are of interest for us: first, we get rid of adjacent pairs with distances at most $\tau / \log^{(3)} n$ between them, then the threshold is increased to $2\tau / \log^{(3)} n$, then $2^2\tau / \log^{(3)} n$, $2^3\tau / \log^{(3)} n$, and so on until (most) adjacent pairs with distances at most $2^{\log^{(4)} n - \LB} \tau / \log^{(3)} n = \tau / 2^{\LB}$ between them are removed in the last recompression reductions. The details follow.
Since it is impossible to store in $\Oh(n / \tau)$ space the precise distances between positions of $S'$, the information about distances needed for recompression is encoded as follows. For each $i \in [0..|R|)$ and a position $p \in S'$ corresponding to the letter $R[i]$, we store an array of numbers $M_i[0..\lceil\log^{(4)} n\rceil]$ such that, for $j \in [0..\lceil\log^{(4)} n\rceil]$, $M_i[j]$ is equal to the size of the set $S' \cap (p..p{+}\tau / 2^j]$. By Lemma~\ref{lem:local-sparsity}, we have $|S' \cap (p..p{+}\tau]| \le \Oh(\log^{(3)} n)$ and, hence, each number $M_i[j]$ occupies $\Oh(\log^{(4)} n)$ bits. Therefore, all the arrays $M_i$ can be stored in $\Oh(|R|(\log^{(4)} n)^2) \le \Oh(\frac{n}{\tau}\log^{(3)} n \cdot (\log^{(4)} n)^2)$ bits, which fits into $\Oh(\frac{n}{\tau})$ machine words of size $\Oh(\log n)$ bits. All arrays $M_i$ are constructed in a straightforward way in $\Oh(|R|\log^{(3)} n) = \Oh(\frac{n}{\tau}(\log^{(3)} n)^2)$ time (which is $\Oh(n)$ since $\tau \ge (\log^{(3)} n)^4$) during the left-to-right pass over $S'$ that generated the string $R$.
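The arrays admit, for instance, the following direct construction (a sketch; \texttt{pos} lists the positions of $S'$ in increasing order and $J = \lceil\log^{(4)} n\rceil$; the scans over $j$ can obviously be shared, which gives the stated time bound).
\begin{verbatim}
def build_M(pos, tau, J):
    """M[i][j] = size of S' intersected with (pos[i]..pos[i]+tau/2^j]."""
    M = []
    for i, p in enumerate(pos):
        row = []
        for j in range(J + 1):
            cnt, k = 0, i + 1
            while k < len(pos) and pos[k] - p <= tau >> j:
                cnt += 1
                k += 1
            row.append(cnt)
        M.append(row)
    return M
\end{verbatim}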
Our algorithm consecutively considers all numbers $j \in [\LB..\lceil\log^{(4)} n\rceil]$ in decreasing order (i.e., starting with $j = \lceil\log^{(4)} n\rceil$). For each $j$, the algorithm iteratively performs a recompression procedure reducing the number of adjacent pairs of letters $R[i], R[i{+}1]$ whose corresponding positions from $S'$ are at a distance at most $\tau / 2^j$ until $R$ shrinks to a length at most $2^{j+\BBB} \cdot \frac{n}{\tau}$.
Thus, $|R| \le 2^{\LBplusBBB}\cdot \frac{n}{\tau}$ after the last recompression reductions, for $j = \LB$. Let us describe the recompression procedure; for completeness, we repeat, with proofs, some key observations of Je{\.z}~\cite{Jez4} for our adaptation of recompression.
Fix $j \in [\LB..\lceil\log^{(4)} n\rceil]$. To preserve property~(c) of the $\tau$-partitioning set $S'$ during the sparsifications, we impose an additional restriction: a letter $R[i]$ cannot be removed if either $i = 0$ or the distance between the position $p \in S'$ corresponding to $R[i]$ and the predecessor of $p$ in $S'$ is larger than $\tau / 2^5$, i.e., if $M_{i-1}[5] = 0$.
The processing of the number $j$ starts with checking whether $|R| \le 2^{j+\BBB} \cdot \frac{n}{\tau}$. If so, we skip the processing of $j$ and move to $j - 1$ (provided $j > \LB$). Suppose that $|R| > 2^{j+\BBB} \cdot \frac{n}{\tau}$. Denote $\sigma = 2^{2^6\lceil\log^{(3)} n\rceil^2}$, the size of the alphabet $[0..\sigma)$ of $R$. Then, the algorithm creates an array $P[0..\sigma{-}1][0..\sigma{-}1]$ filled with zeros, which occupies $\Oh(\sigma^2) = \Oh(2^{2^7(\log^{(3)} n)^2}) = o(\log n)$ space, and collects in $P$ statistics on pairs of adjacent letters of $R$ whose corresponding positions in $S'$ are at a distance at most $\tau / 2^j$ and whose first letter may be removed: namely, we traverse all $i \in [1..|R|)$ and, if $M_i[j] \ne 0$ and $M_{i-1}[5] \ne 0$, then we increase by one the number $P[R[i]][R[i{+}1]]$. It follows from Lemma~\ref{lem:r-letters-equality} that $R[i] \ne R[i+1]$ when $M_i[j] \ne 0$ (this is the only place where this lemma is used but this observation is critical for the analysis in Lemma~\ref{lem:recompression} that follows).
The idea proposed by Je{\.z} is to view $P$ as the adjacency matrix of a weighted digraph with $\sigma$ vertices and to construct an (approximately) maximal directed cut for it; then, the set $[0..\sigma)$ is split into two disjoint subsets $\acute{\Sigma}$ and $\grave{\Sigma}$ according to this cut, and we mark for removal from $R$ all indices $i \in [1..|R|{-}1)$ for which the following conditions hold: $M_i[j] \ne 0$, $M_{i-1}[5] \ne 0$, $R[i] \in \acute{\Sigma}$, and $R[i{+}1] \in \grave{\Sigma}$. The larger the cut, the more pairs $R[i], R[i{+}1]$ such that $M_i[j] \ne 0$ and $M_{i-1}[5] \ne 0$ are marked for removal.
The marking can be organized using a bit array of length $|R|$. Let us describe how the sets $\acute{\Sigma}$ and $\grave{\Sigma}$ are built.
The splitting into $\acute{\Sigma}$ and $\grave{\Sigma}$ is performed consecutively for the letters $0, 1, \ldots, \sigma - 1$ by the following standard greedy algorithm. We start with empty sets $\acute{\Sigma}$ and $\grave{\Sigma}$. Consider a letter $a \in [0..\sigma)$, assuming that the letters $0, 1, \ldots, a{-}1$ were already distributed among $\acute{\Sigma}$ and $\grave{\Sigma}$. Consider two numbers $c_0 = \sum_{b \in \grave{\Sigma}} (P[a][b] + P[b][a])$ and $c_1 = \sum_{b \in \acute{\Sigma}} (P[b][a] + P[a][b])$: for the current assignment, $c_0$ is the total weight of edges that will be added to the (undirected) cut in case $a \in \acute{\Sigma}$, and $c_1$ is the weight that will be added to the (undirected) cut in case $a \in \grave{\Sigma}$. We put $a$ into $\acute{\Sigma}$ if $c_0 \ge c_1$, and into $\grave{\Sigma}$ otherwise. In the end, we check whether $\sum_{a \in \acute{\Sigma}, b \in \grave{\Sigma}} P[a][b] < \sum_{a \in \grave{\Sigma}, b \in \acute{\Sigma}} P[a][b]$ and, if so, switch $\acute{\Sigma}$ and $\grave{\Sigma}$. Thus, the initialization of $\acute{\Sigma}$ and $\grave{\Sigma}$ takes $\Oh(\sigma^2) = o(\log^2 n)$ time.
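The greedy construction of the cut admits a direct transcription; the Python sketch below (names are ours) mirrors the description above, and its quadratic loops match the stated $\Oh(\sigma^2)$ bound.
\begin{verbatim}
def greedy_directed_cut(P, sigma):
    # Split [0..sigma) into (acute, grave) so that the total weight of
    # edges directed from acute to grave is at least 1/4 of the total
    # edge weight of the digraph with adjacency matrix P.
    acute, grave = set(), set()
    for a in range(sigma):
        # weight added to the undirected cut by either choice for a
        c0 = sum(P[a][b] + P[b][a] for b in grave)
        c1 = sum(P[b][a] + P[a][b] for b in acute)
        (acute if c0 >= c1 else grave).add(a)
    # orient the cut: keep the direction carrying the larger weight
    fwd = sum(P[a][b] for a in acute for b in grave)
    bwd = sum(P[a][b] for a in grave for b in acute)
    return (acute, grave) if fwd >= bwd else (grave, acute)
\end{verbatim}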
In one right-to-left pass, we update the values in all arrays $M_i$ according to the removal marks: for each $i \in [0..|R|)$ and $j'\in [0..\lceil\log^{(4)} n\rceil]$, the new value of $M_i[j']$ is the number of indices among $i + 1, i + 2, \ldots, i + M_i[j']$ that were not marked for removal, i.e., $M_i[j']$ becomes the number of positions in the set $S' \cap (p..p{+}\tau/2^{j'}]$ whose corresponding letters $R[i']$ will remain in $R$, where $p \in S'$ is the position corresponding to $R[i]$. Since $M_i[j'] \le M_{i+1}[j'] + 1$, for $i \in [0..|R|{-}1)$, the pass updating $M$ can be executed in $\Oh(|R|\log^{(4)} n)$ time.
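Equivalently, the update can be expressed via a prefix count over the removal marks, as in the sketch below; the right-to-left pass of the text achieves the same effect in place, without the auxiliary array (which matters for the space bounds).
\begin{verbatim}
def update_after_marking(M, marked):
    # Recompute each M[i][j] as the number of unmarked indices among
    # i+1, ..., i+M[i][j]; `marked` is a boolean list of length |R|.
    m = len(M)
    unmarked = [0] * (m + 1)  # unmarked[k] = #unmarked among 0..k-1
    for k in range(m):
        unmarked[k + 1] = unmarked[k] + (0 if marked[k] else 1)
    for i in range(m):
        for j in range(len(M[i])):
            M[i][j] = unmarked[i + 1 + M[i][j]] - unmarked[i + 1]
\end{verbatim}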
Finally, we delete letters $R[i]$ and arrays $M_i$, for all indices $i$ marked for removal, thus shrinking the length of $R$ and the storage used for $M_i$. The procedure overall takes $\Oh(|R|\log^{(4)} n)$ time, where $|R|$ is the length of $R$ before the shrinking.
\begin{lemma}
If, for $j \in [\LB..\lceil\log^{(4)} n\rceil]$, before the recompression procedure there were $d$ non-zero numbers $M_i[j]$ with $i \in [1..|R|)$ such that $M_{i-1}[5] \ne 0$, then the arrays $M_i$ modified by the procedure, for all $i$ corresponding to unremoved positions of $R$, contain at most $\frac{3}{4}d$ non-zero numbers $M_i[j]$ such that $M_{i-1}[5]{\ne}0$.\label{lem:recompression}
\end{lemma}
\begin{proof}
The proof repeats an argument from~\cite{Jez4} and~\cite[Lemma 7]{I}. Consider an undirected weighted graph $G$ corresponding to the digraph encoded in the adjacency matrix $P$. By construction of $P$, we have $d = \sum_{a\ne b} P[a][b]$, which follows from Lemma~\ref{lem:r-letters-equality} that guarantees $R[i] \ne R[i+1]$ when $M_i[j] \ne 0$.
Thus, $d$ is the sum of weights of all edges in $G$.
Putting a letter $a$ into either $\acute{\Sigma}$ or $\grave{\Sigma}$, we add to the cut at least half of the total weight of all edges connecting $a$ to the letters $0,1,\ldots,a{-}1$.
Therefore, the cut of $G$ induced by $\acute{\Sigma}$ and $\grave{\Sigma}$ has a weight at least $\frac{1}{2} d$. The edges in the cut might be directed both from $\acute{\Sigma}$ to $\grave{\Sigma}$ and in the other direction.
Switching $\acute{\Sigma}$ and $\grave{\Sigma}$, if needed, we ensure that the direction from $\acute{\Sigma}$ to $\grave{\Sigma}$ has the maximal total weight, which is obviously at least $\frac{1}{4} d$. According to this cut, we mark for removal from $R$ at least $\frac{1}{4} d$ letters $R[i]$ such that $M_i[j] \ne 0$. Hence, the number of non-zero values $M_i[j]$ such that $M_{i-1}[5] \ne 0$ is reduced by at least $\frac{1}{4} d$, which gives the result of the lemma since new non-zero values cannot appear after the deletions.
\end{proof}
Suppose, for a fixed $j \in [\LB..\lceil\log^{(4)} n\rceil]$, the algorithm has performed one iteration of the recompression. Denote by $S''$ the set of all positions from $S'$ that ``survived'' the recompression for $j$ and, thus, have a corresponding letter in the updated string $R$. There is a one-to-one correspondence between $S''$ and the letters of $R$. For each $i \in [0..|R|)$ and $j' \in [0..\lceil\log^{(4)} n\rceil]$, the number $M_i[j']$ in the modified arrays $M_i$ is the size of the set $S'' \cap (p..p{+}\tau/2^{j'}]$, for the position $p \in S''$ corresponding to $i$. We can therefore apply the recompression procedure again, further shrinking the length of $R$. The algorithm first again checks whether $|R| > 2^{j+\BBB} \cdot \frac{n}{\tau}$ and, if so, repeats the recompression. For the given fixed $j$, we do this iteratively until $|R| \le 2^{j+\BBB} \cdot \frac{n}{\tau}$. During this process, the number of zero values $M_i[j]$ in the arrays $M_i$ is always at most $2^j \cdot \frac{n}{\tau}$ since the equality $M_i[j] = 0$ implies that $S''' \cap (p..p{+}\tau/2^j] = \emptyset$, for a set $S'''\subseteq S'$ of size $|R|$ defined by analogy to the definition of $S''$ and for the position $p \in S'''$ corresponding to $i$. Therefore, due to Lemma~\ref{lem:recompression}, the condition $|R| \le 2^{j+\BBB} \cdot \frac{n}{\tau}$ will eventually be satisfied. Furthermore, we now show that, for each $j$, the condition $|R| \le 2^{j+\BBB}\cdot \frac{n}{\tau}$ holds after at most three iterations of the recompression.
Given $j \in [\LB..\lceil\log^{(4)} n\rceil)$, the length of $R$ before the first iteration of the recompression for $j$ is at most $2^{j+\BB} \cdot \frac{n}{\tau}$ since this is the condition under which the shrinking iterations stopped for $j + 1$. The same bound holds for $j = \lceil\log^{(4)} n\rceil$: the initial length of $R$ is at most $2^{\BB} \cdot\frac{n}{\tau} \log^{(3)} n$ (which is upper-bounded by $2^{j+\BB} \cdot\frac{n}{\tau}$) since $S' \subseteq S$ and $S$ is produced by the $k$th phase of the procedure of Section~\ref{sec:vishkin-process}, for $k = \lfloor\log\frac{\tau}{2^4\log^{(3)} n}\rfloor$, so that the size of $S$, by Lemma~\ref{lem:local-sparsity}, is at most $2^6\lceil n / 2^{k}\rceil \le 2^6 n / \frac{\tau}{2\cdot 2^4\log^{(3)} n} = 2^{11} \frac{n}{\tau} \log^{(3)} n$. Fix $j \in [\LB..\lceil\log^{(4)} n\rceil]$. Since the number of zero values $M_i[j]$ is always at most $2^j \cdot n / \tau$ and the number of indices with $M_{i-1}[5] = 0$ is at most $2^5\cdot \frac{n}{\tau}$, three iterations of the recompression for $j$ performed on a string $R$ with initial length $r$ shrink $R$ to a length at most $(\frac{3}{4})^3 r + 2^j \cdot \frac{n}{\tau} + 2^5\cdot \frac{n}{\tau} \le (\frac{3}{4})^3 r + 2\cdot 2^{j} \cdot \frac{n}{\tau}$, by Lemma~\ref{lem:recompression}. Putting $r = 2^{j+\BB} \cdot \frac{n}{\tau}$, we estimate the length of $R$ after three iterations for $j$ from above by $((\frac{3}{4})^3 2^{\BB} + 2)2^j \cdot \frac{n}{\tau} < 2^{j+\BBB} \cdot \frac{n}{\tau}$. That is, for each $j$, three iterations are enough to reduce the length of $R$ to at most $2^{j+\BBB} \cdot \frac{n}{\tau}$.
Thus, the total running time of all recompression procedures is $\Oh(\sum_{j=\lceil\log^{(4)} n\rceil}^{\LB} 2^{j+\BB} \cdot\frac{n}{\tau} \log^{(4)} n) = \Oh(\frac{n}{\tau}\log^{(4)} n)$, which is $\Oh(n)$ since $\tau \ge (\log^{(3)} n)^4$. Observe that the most time consuming part is in recalculations of the arrays $M_i$, each taking $\Oh(|R|\log^{(4)} n)$ time, all other parts take $\Oh(|R|)$ time, i.e., $\Oh(\sum_{j=\lceil\log^{(4)} n\rceil}^{\LB} 2^{j+\BB} \cdot\frac{n}{\tau}) = \Oh(\frac{n}{\tau})$ time is needed for everything without the recalculations. The length of $R$ in the end is at most $2^{\LBplusBBB}\cdot n / \tau$, which is a condition under which shrinking iterations stopped for $j = \LB$.
Finally, we create a bit array $E$ of the same length as the original string $R$ that marks by $1$ those letters that survived all the iterations. Additional navigational data structures that allow us to produce the array $E$ in linear time are straightforward. We then again perform the whole ``semi-online'' algorithm that generates the set $S'$ (from which the string $R$ was constructed) but, this time, we discard all positions of $S'$ that correspond to unmarked indices in $E$ (i.e., positions whose corresponding letters in the original string $R$ did not survive the recompression iterations) and we store all positions corresponding to marked indices of $E$ explicitly in an array $S^*$. Since at most $2^{\LBplusBBB}\cdot n / \tau$ indices in $E$ are marked by~$1$, the size of $S^*$ is $\Oh(n / \tau)$.
\begin{lemma}
The set $S^*$ is $\tau$-partitioning; also a converse of property~(c) holds for $S^*$: if a substring $s[i..j]$ has a period at most $\tau / 4$, then $S^* \cap [i + \tau .. j - \tau] = \emptyset$.\label{lem:main-lemma}
\end{lemma}
\begin{proof}
Since $S^* \subseteq S'$, the converse of property~(c) is inherited from the set $S'$, which satisfies it by Lemma~\ref{lem:periodicity-gap}; we relax the condition $S^* \cap [i + \frac{3}{4}\tau .. j - \frac{3}{4}\tau] = \emptyset$ slightly, for aesthetic reasons.
For $h \in [\LBB..\lceil\log^{(4)} n\rceil]$, denote by $S_h$ the set of positions from $S'$ whose corresponding letters remained in $R$ after the algorithm has performed all recompression procedures for all $j > h$.
In particular, $S_{\lceil\log^{(4)} n\rceil} = S'$ and $S_{\LBB} = S^*$. Note that the size of each set $S_h$ is at most $2^{h+\BB} \cdot \frac{n}{\tau}$.
For property~(a) of $S^*$, consider $p$ and $q$ such that $s[p{-}\tau..p{+}\tau] = s[q{-}\tau..q{+}\tau]$. Let us show by induction that, for each $h \in [\LBB..\lceil\log^{(4)} n\rceil]$ and each $d$ such that $|d| \le \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$, we have $p + d \in S_h$ iff $q + d \in S_h$. In particular, for $h = \LBB$, $p \in S^*$ iff $q \in S^*$, which is precisely the claim of property~(a). The base of the induction is $h = \lceil\log^{(4)} n\rceil$: since, as stated in Lemma~\ref{lem:periodicity-gap}, $S'$ is an almost $\frac{3}{4}\tau$-partitioning set, we have $p + d \in S'$ iff $q + d \in S'$, for any $d$ such that $|d| \le \frac{1}{8}\tau$. By Lemma~\ref{lem:r-letters-locality}, letters of $R$ corresponding to positions $p + d$ and $q + d$ such that $p + d \in S'$ and $q + d \in S'$ coincide provided $|d| \le \frac{1}{8}\tau$ since $s[p{+}d..p{+}d{+}\frac{7}{8}\tau] = s[q{+}d..q{+}d{+}\frac{7}{8}\tau]$.
Fix $h \in [\LB..\lceil\log^{(4)} n\rceil]$ and suppose, by the inductive hypothesis, that $p + d \in S_h$ iff $q + d \in S_h$, for any $d$ such that $|d| \le \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$. We are to prove the inductive step: $p + d \in S_{h-1}$ iff $q + d \in S_{h-1}$, for any $d$ such that $|d| \le \frac{1}{8}\tau - \frac{8}{2^{h}}\tau$. Let $R$ be the string obtained by the algorithm after performing all recompression procedures for all $j > h$. Thus, there is a one-to-one correspondence between the positions of $S_h$ and the letters of the string $R$. Consider a (contiguous) sequence of letters $a_{i} a_{{i}+1} \cdots a_{m}$ of $R$ corresponding to all positions $p + d \in S_h$ such that $|d| \le \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$, and, analogously, a sequence $a_{{i}'} a_{{i}'+1} \cdots a_{m'}$ corresponding to positions $q + d \in S_h$. By the inductive hypothesis and due to Lemma~\ref{lem:r-letters-locality}, the sequences coincide. The algorithm performs on the string $R$ at most three recompression reductions removing some of the letters until the positions of $S_h$ corresponding to the remaining letters constitute the set $S_{h-1}$. Denote by $r$ and $r'$ the positions of $S_h$ corresponding to the letters $a_m$ and $a_{m'}$, respectively. A discrepancy in the processing of the sequences by the first iteration of recompression may occur only in their last letters: for instance, the letter $a_m$ will be removed whereas $a_{m'}$ will be retained (see an example in Figure~\ref{fig:discrepancies}). Let us analyze this particular case (other cases are similar). This may happen only if $a_m \in \acute{\Sigma}$ and $a_m$ is followed in $R$ by a letter $a \in \grave{\Sigma}$ and the distance between $r$ and the position of $S_h$ corresponding to $a$ is at most $\tau / 2^h$; at the same time, either $a_{m'}$ ($=a_m$) is followed by a different letter $b \in \acute{\Sigma}$ ($\ne a$) or the distance between $r'$ and the position of $S_h$ corresponding to this following letter is larger than $\tau / 2^h$. We therefore deduce that the distance from $r$ to $p + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$ is less than $\tau / 2^h$ (for otherwise the letter $a$ following $a_m$ would have to be a part of the sequence $a_{i} a_{{i}+1} \cdots a_{m}$), which implies that $r - p > \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau - \frac{1}{2^h} \tau = \frac{1}{8}\tau - \frac{5}{2^h}\tau$.
Thus, we have shown that two sequences resulting from $a_{i} a_{{i}+1} \cdots a_{m}$ and $a_{{i}'} a_{{i}'+1} \cdots a_{m'}$ after the recompression coincide in all letters whose corresponding positions from $S_h$ are at a distance at most $\frac{1}{8}\tau - \frac{5}{2^h}\tau$ from $p$ and $q$, respectively. Exactly the same argument can be applied for the second recompression. A discrepancy in the processing of the resulting sequences by the second iteration of recompression (if any) may again occur only in last letters of the sequences: for instance, the last letter $c$ of the first sequence (the letter $a_{m-1}$ in the example of Figure~\ref{fig:discrepancies}) will be retained whereas the corresponding letter $c$ of the second sequence (the letter $a_{m'-1}$ in the example)
will be removed.
Denote by $r''$ the position of $S_h$ corresponding to the removed letter $c$. By analogy to the argument used for the first iteration, we deduce that the distance between $r''$ and $r'$ is at most $\tau / 2^h$. Observe that $r - p = r' - q$. Therefore, $r'' - q \ge r' - q - \frac{1}{2^h}\tau > \frac{1}{8}\tau - \frac{5}{2^h}\tau - \frac{1}{2^h}\tau = \frac{1}{8}\tau - \frac{6}{2^h}\tau$.
Thus, two sequences resulting from the second recompression coincide in all letters whose corresponding positions from $S_h$ are at a distance at most $\frac{1}{8}\tau - \frac{6}{2^h}\tau$ from $p$ and $q$, respectively. Analogously, we deduce that the two sequences resulting from the third recompression (if any) coincide in all letters whose corresponding positions from $S_h$ are at a distance at most $\frac{1}{8}\tau - \frac{6}{2^h}\tau - \frac{1}{2^h}\tau = \frac{1}{8}\tau - \frac{7}{2^h}\tau$ from $p$ and $q$, respectively. This proves the inductive claim since $\frac{1}{8}\tau - \frac{7}{2^h}\tau \ge \frac{1}{8}\tau - \frac{8}{2^h}\tau$ and the set $S_{h-1}$ consists of all positions from $S_h$ that correspond to the letters of $R$ that remained after the (at most) three recompressions.
\newcommand{\rn}[2]{%
\tikz[remember picture,baseline=(#1.base)]\node [inner sep=0] (#1) {$#2$};%
}
\begin{figure}[hbt]
$$
\begin{array}{rlcl}
\text{before 1st recompression:} & \cdots a_{m-5} a_{m-4} {a_{m-3}} a_{m-2} \rn{m1}{a_{m-1}} a_{m} ~a & ~ & \cdots a_{m'-5} a_{m'-4} {a_{m'-3}} a_{m'-2} \rn{m2}{a_{m'-1}} a_{m'} ~b\\
\text{after 1st recompression:} & \cdots {a_{m-5}} a_{m-4} \phantom{a_{m-3}} a_{m-2} a_{m-1} \phantom{a_{m}} ~a & ~ & \cdots {a_{m'-5}} a_{m'-4} \phantom{a_{m'-3}} a_{m'-2} {a_{m'-1}} a_{m'} ~b\\
\text{after 2nd recompression:} & \cdots \phantom{a_{m-5}} a_{m-4} \phantom{a_{m-3}} a_{m-2} a_{m-1} \phantom{a_{m}} ~a & ~ & \cdots \phantom{a_{m'-5}} a_{m'-4} \phantom{a_{m'-3}} a_{m'-2} \phantom{a_{m'-1}} a_{m'} ~b\\
\text{after 3rd recompression:} & \cdots \phantom{a_{m-5}} a_{m-4} \phantom{a_{m-3}} \phantom{a_{m-2}} a_{m-1} \phantom{a_{m}} ~a & ~ & \cdots \phantom{a_{m'-5}} a_{m'-4} \phantom{a_{m'-3}} a_{m'-2} \phantom{a_{m'-1}} a_{m'} ~b
\end{array}
$$
\begin{tikzpicture}[overlay, remember picture]
\node[anchor=south] at (m1) {$\overbrace{\phantom{aa}}^{{\le}\frac{\tau}{2^h}}~\overbrace{\phantom{aa}}^{{\le}\frac{\tau}{2^h}} ~\overbrace{\phantom{aa}}^{{\le}\frac{\tau}{2^h}}$};
\node[anchor=south] at (m2) {$\overbrace{\phantom{aa}}^{{\le}\frac{\tau}{2^h}} ~\overbrace{\phantom{aa}}^{{\le}\frac{\tau}{2^h}} ~\overbrace{\phantom{aa}}^{~}$};
\end{tikzpicture}
\caption{A schematic example of three iterations of recompression on equal sequences $a_{{i}} a_{{i}+1} \cdots a_{m}$ and $a_{{i}'} a_{{i}'+1} \cdots a_{m'}$ ($m - {i} = m' -{i}'$ and $a_{{i} + t} = a_{{i}' + t}$, for any $t \in [0..m{-}{i}]$). The overbraces designate restrictions on distances between positions of $S_h$ corresponding to letters.}\label{fig:discrepancies}
\end{figure}
For property~(b) of $S^*$, consider $p, q \in S^*$ such that $s[p..p{+}\ell] = s[q..q{+}\ell]$, for some $\ell \ge 0$. Clearly, only the case $\ell > \tau$ is interesting. Denote $\tilde{\ell} = \ell - \tau$. It suffices to prove that, for each $h \in [\LBB..\lceil\log^{(4)} n\rceil]$ and each $d$ such that $0 \le d < \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$, we have $p + d \in S_h$ iff $q + d \in S_h$. In particular, for $h = \LBB$, it is precisely the claim of property~(b): for any $d \in [0..\tilde{\ell}) = [0..\ell{-}\tau)$, $p + d \in S^*$ iff $q + d \in S^*$.
The proof is essentially by the same inductive argument as for property~(a). The base of the induction, $h = \lceil\log^{(4)} n\rceil$, follows from Lemma~\ref{lem:periodicity-gap} where it is stated that $S'$ ($= S_{\lceil\log^{(4)} n\rceil}$) is an almost $\frac{3}{4}\tau$-partitioning set. By Lemma~\ref{lem:r-letters-locality}, the letters of $R$ corresponding to positions $p + d$ and $q + d$ such that $p + d \in S'$ and $q + d \in S'$ coincide provided $0 \le d < \tilde{\ell} + \frac{1}{8}\tau$ since $s[p{+}d..p{+}d{+}\frac{7}{8}\tau] = s[q{+}d..q{+}d{+}\frac{7}{8}\tau]$. The proof of the inductive step is very similar to the proof for property~(a) above; we therefore only briefly sketch it without diving into details.
Fix $h \in [\LB..\lceil\log^{(4)} n\rceil]$ and suppose, by the inductive hypothesis, that $p + d \in S_h$ iff $q + d \in S_h$, for any $d$ such that $0 \le d < \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$. Let $R$ be the string produced by the algorithm after performing all recompression procedures for all $j > h$. There is a one-to-one correspondence between $S_h$ and the letters of $R$. Consider a (contiguous) sequence of letters $a_i a_{i+1} \cdots a_{m}$ of $R$ corresponding to all positions $p + d \in S_h$ such that $0 \le d < \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$, and, analogously, a sequence $a_{i'} a_{i'+1} \cdots a_{m'}$ corresponding to positions $q + d \in S_h$. By the inductive hypothesis and due to Lemma~\ref{lem:r-letters-locality}, the sequences coincide. The algorithm performs on $R$ at most three recompressions removing some letters until the positions of $S_h$ corresponding to the remaining letters constitute the set $S_{h-1}$. Discrepancies occurring in the two sequences after the recompressions may affect only the last positions, namely those at a distance at most $3\tau / 2^h$ from the ``right borders'' $p + \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$ and $q + \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau$ of the sequences, respectively.
Therefore, all positions from $S_h \cap [p..\infty)$ and $S_h \cap [q..\infty)$ that were at a distance at least $3\tau / 2^h$ to the left of the ``right borders'' are ``synchronized'', i.e., such a position $p + d \in S_h$ is removed from $S_h$ iff $q + d \in S_h$ is removed too. The ``synchronized'' positions are exactly $p + d \in S_h$ and $q + d \in S_h$ such that $0 \le d < \tilde{\ell} + \frac{1}{8}\tau - \frac{8}{2^{h+1}}\tau - \frac{3}{2^h}\tau = \tilde{\ell} + \frac{1}{8}\tau - \frac{7}{2^{h}}\tau$, so that $p + d \in S_{h-1}$ iff $q + d \in S_{h-1}$. Since $\frac{1}{8}\tau - \frac{7}{2^{h}}\tau \ge \frac{1}{8}\tau - \frac{8}{2^{h}}\tau$ and the set $S_{h-1}$ is formed by the positions whose letters remain in $R$ after the (at most) three recompressions, this proves the inductive step.
For property~(c) of $S^*$, consider $p, q \in S^*$ such that $q - p > \tau$ and $S^* \cap (p..q) = \emptyset$. By construction, a recompression procedure performed on a string $R$ may delete a letter $R[i]$ only if the distance from the position $r$ corresponding to $R[i]$ to the positions $r'$ and $r''$ of $S'$ corresponding to the letters $R[i{-}1]$ and $R[i{+}1]$ is at most $\tau / 2^5$. Further, if $R[i]$ is removed, then neither $R[i{-}1]$ nor $R[i{+}1]$ can be removed in the same iteration of recompression. The distance between $r'$ and $r''$ is at most $\tau / 2^4$. Therefore, it is impossible that there was a position of $S'$ from $(p..q)$ that got removed. Thus, $S' \cap (p..q) = \emptyset$. Since $S'$ is a $\tau$-partitioning set, we deduce that the substring $s[p..q]$ has a period at most $\tau / 4$ by property~(c) of $S'$.
\end{proof}
\section{\boldmath Small $\tau$}
\label{sec:small-tau}
Assume that $\tau < (\log^{(3)} n)^4$. If $\tau \ge 2^5 \log^{(3)} n$, we perform the procedure of Sections~\ref{sec:vishkin-process}--\ref{sec:time-improvement} generating a \mbox{$\frac{\tau}{2}$-partitioning} set $S$ of size $\Oh(\frac{n}{\tau}\log^{(3)} n)$. If $\tau < 2^5 \log^{(3)} n$, we put $S = [0..n)$, which is a \mbox{$\frac{\tau}{2}$-partitioning} set of size $\Oh(\frac{n}{\tau}\log^{(3)} n)$ in this case. All this is done in $\Oh(n)$ time and $\Oh(\frac{n}{\tau})$ space (provided the set $S$ is not stored explicitly), which are the time and space bounds we aim for. The problem is in the remaining stages of the algorithm. Specifically, it is in the following ``slow'' parts:
\begin{enumerate}[label=(\roman*)]
\item the generation of the subset $S' \subseteq S$ requires $\Oh(n + |S| \log^{(3)} n)$ time;
\item the construction of the string $R$ takes $\Oh(n + |S'|(\log^{(3)} n)^3)$ time;
\item the initialization of the arrays $M_i$, for all $i \in [0..|S'|)$, requires $\Oh(|S'|\log^{(3)} n)$ time;
\item each update of the arrays $M_i$ before shrinking the string $R$ takes $\Oh(|R|\log^{(4)} n)$ time.
\end{enumerate}
If we optimize all these four bottlenecks to run, respectively, in $\Oh(n)$, $\Oh(n)$, $\Oh(|S'|)$, and $\Oh(|R|)$ time, then the running time of the whole algorithm will be $\Oh(n)$. We do all four optimizations by the Four Russians trick~\cite{ArlazarovDinicKronrodFaradzev}. The idea of the trick is that if one has to perform complicated queries on chunks of $c$ bits, then instead of computing the queries explicitly each time, we can precalculate a table of size $2^c$ with answers for every possible chunk. What are the ``chunks'' and the ``queries'' in our case? The easiest part to analyze is~(iv), so we start with it as an example. Let us describe how one can update the arrays $M_i$ in $\Oh(|R|)$ time before shrinking $R$.
\subparagraph{Part (iv).} Suppose that we have marked some letters of $R$ for removal using a bit array $A[0..|R|{-}1]$ of length $|R|$: $R[i]$ is marked iff $A[i] = 1$. Every array $M_i[0..\lceil\log^{(4)} n\rceil]$ occupies $\Oh((\log^{(4)} n)^2)$ bits (each entry is of size $\Oh(\log^{(4)} n)$ bits). The na{\"i}ve updating procedure considers every entry $M_i[j]$ and counts which of the letters $R[i{+}1], R[i{+}2], \ldots, R[i{+}M_i[j]]$ were not marked for removal, assigning the result to $M_i[j]$. Recall that each number $M_i[j]$, for $j \in [0..\lceil\log^{(4)} n\rceil]$, is at most $\Oh(\log^{(3)} n)$. Therefore, the new value for $M_i[j]$ is determined by the old value $M_i[j]$ and the subarray $A[i..i{+}\Oh(\log^{(3)} n)]$, for an appropriate constant under the big-O. By storing the bit array $A$ in a sequence of $\Oh(1 + |R| / \log n)$ machine words of size $\Theta(\log n)$ bits, we can retrieve and pack the subarray $A[i..i{+}\Oh(\log^{(3)} n)]$ in $\Oh(1)$ time into one machine word and can concatenate the subarray to the bit representation of the whole $M_i$. The bit representation of $M_i$ takes $\Oh((\log^{(4)} n)^2)$ bits and, thus, the resulting chunk after the concatenation occupies $\Oh((\log^{(4)} n)^2) + \Oh(\log^{(3)} n) = \Oh(\log^{(3)} n)$ bits. We view this chunk storing the concatenated bit representations of $M_i[0..\lceil\log^{(4)} n\rceil]$ and $A[i..i{+}\Oh(\log^{(3)} n)]$ as an integer number $x$ with $\Oh(\log^{(3)} n)$ bits. It is clear that the chunk determines the state of the updated array $M_i$. Therefore, we can, in advance of the whole algorithm, consider all possible valid chunks that encode in the same way arrays $M[0..\lceil\log^{(4)} n\rceil]$ with entries of size $\Oh(\log^{(4)} n)$ bits concatenated with bit arrays of length $\Oh(\log^{(3)} n)$, and we can precalculate the updated arrays $M$ in a table $B$ so that the updated array $M_i$ is already stored in the entry $B[x]$. Thus, we simply read $B[x]$, which contains a bit representation for the updated array $M_i$ occupying $\Oh((\log^{(4)} n)^2)$ bits, and we rewrite $M_i$ with the content of $B[x]$. All this is done in $\Oh(1)$ time since all the bit representations take only $\Oh(1)$ machine words. The size of the table $B$ is $\Oh(2^{\Oh(\log^{(3)} n)} \cdot (\log^{(4)} n)^2) = o(\log n)$ bits and, hence, all precalculations can be performed in $o(n)$ time with $\Oh(1)$ space (the space is measured in $\Oh(\log n)$-bit machine words).
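To isolate the mechanics of the trick, the toy sketch below tabulates a popcount query over $c$-bit chunks; the table $B$ above is built in exactly the same spirit, only its chunks encode a pair (array $M_i$, piece of $A$) and its entries store updated arrays.
\begin{verbatim}
def precompute_popcount_table(c):
    # Four Russians: tabulate the answers for all 2**c chunks once,
    # then answer each query by a single table lookup.
    return [bin(x).count("1") for x in range(1 << c)]

table = precompute_popcount_table(8)
print(table[0b10110010])  # -> 4, in O(1) time per chunk
\end{verbatim}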
\subparagraph{Part (i).}
Let us describe how the set $S'$ is produced in $\Oh(n)$ time from the set $S$, which is fed to our algorithm from left to right in an online fashion. The na{\"i}ve procedure, for each position $p \in S$, collects all positions from the set $S \cap (p..p{+}\tau/4]$ and compares $s[p..p{+}\tau/2]$ to each of the substrings $s[q..q{+}\tau/2]$ with $q \in S \cap (p..p{+}\tau/4]$; if $s[p..p{+}\tau/2] = s[q..q{+}\tau/2]$, then all the positions $S \cap (p..q]$ are marked for removal and, thus, will not be reported into $S'$. The idea of a faster solution is that all the comparisons in this algorithm, for a given $p$, are performed inside a very short substring $s[p..p{+}\tau/4+\tau/2]$ (recall that $\tau < (\log^{(3)} n)^4$); we can therefore pack this whole substring into one small chunk that fits into one machine word, specifying which of its positions are from $S$, and then perform the marking for removal in $\Oh(1)$ time using a precomputed answer. The issue with this idea is that the alphabet can be quite large, so that the packing is impossible.
A solution is to ``reduce'' the alphabet for the substring $s[p..p{+}\tau/4+\tau/2]$ by sorting all letters and substituting them by their ordering numbers. The sorting can be performed in linear time using fusion trees~\cite{FredmanWillard,PatrascuThorup2}. More precisely, during the left-to-right processing of $S$, we consecutively consider (overlapping) substrings $s[i\tau..(i{+}2)\tau]$, for $i \in [0..n/\tau-2]$. Using the fusion tree, we can sort all letters of a given substring $s[i\tau..(i{+}2)\tau]$ in $\Oh(\tau)$ time, assigning to them their ordering numbers, i.e., reducing the alphabet to $[0..2\tau]$ (see~\cite{FredmanWillard,PatrascuThorup2}; note that we have $\Omega(n / \tau) = \Omega(n / \log n)$ space). Denote by $\hat{s}_i$ the string $s[i\tau..(i{+}2)\tau]$ with letters substituted by the ordering numbers (so that all new letters are from $[0..2\tau]$). Each string $\hat{s}_i$ occupies $\Oh(\tau\log\tau) = o((\log^{(3)} n)^5)$ bits and, thus, can be packed into one machine word. Now, in order to process $p \in S$, we first check whether the substring $s[p..p{+}\tau/4{+}\tau/2]$ is contained in the last preprocessed substring $\hat{s}_i$. If not, we set $i = \lfloor p / \tau\rfloor$ and preprocess $s[i\tau..(i{+}2)\tau]$ in $\Oh(\tau)$ time, generating $\hat{s}_i$. Note that in this way we never preprocess the same substring $s[i\tau..(i{+}2)\tau]$ twice since the positions $p \in S$ are fed to our algorithm from left to right. Next, using standard bit operations on the machine word containing $\hat{s}_i$, we retrieve the substring $s[p..p{+}\tau/4{+}\tau/2]$ from the string $s[i\tau..(i{+}2)\tau]$ encoded in $\hat{s}_i$, in which the alphabet is ``reduced'', i.e., all letters are substituted with numbers from $[0..2\tau]$. The retrieved substring occupies $\Oh(\tau\log\tau)$ bits and is stored in one machine word. We concatenate to the bit representation of this substring a bit array $a$ of length $\tau/4$ that indicates which of the positions $p + 1, p + 2, \ldots, p + \tau/4$ belong to $S$: for $h \in (0..\tau/4]$, we have $p + h \in S$ iff $a[h - 1] = 1$. The bit array $a$ is easy to maintain in one machine word during the execution of the algorithm using the bit shift operations. Thus, we obtain a chunk of $\Oh(\tau\log\tau) + \tau/4 = o((\log^{(3)} n)^5)$ bits that encodes the concatenated bit representations of the substring $s[p..p{+}\tau/4{+}\tau/2]$, with a ``reduced'' alphabet, and of the bit array $a$, indicating all positions from $S \cap (p..p{+}\tau/4]$. We view this chunk as an integer number $x$ with $o((\log^{(3)} n)^5)$ bits.
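The reduction itself is plain rank-substitution, as in the sketch below (with ordinary sorting standing in for the $\Oh(\tau)$-time fusion tree).
\begin{verbatim}
def reduce_alphabet(chunk):
    # Replace each letter of the chunk s[i*tau..(i+2)*tau] by its rank
    # among the distinct letters of the chunk, so that the new
    # alphabet is [0..2*tau].
    rank = {b: r for r, b in enumerate(sorted(set(chunk)))}
    return [rank[c] for c in chunk]
\end{verbatim}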
Clearly, the chunk $x$ determines which of the positions from $S \cap (p..p{+}\tau/4]$ should be marked for removal during the processing of $p$. Therefore, we can, in advance of the whole algorithm, consider all possible valid chunks that encode in the same way strings $t[0..\tau/2+\tau/4]$ over the alphabet $[0..2\tau]$ concatenated with bit arrays of length $\tau/4$, and we can precalculate which of the positions from $S \cap (p..p{+}\tau/4]$ will be marked for removal in a table $C$ so that the entry $C[x]$ stores a bit array $c$ of length $\tau/4$ that ``masks'' positions for removal: for $h\in (0..\tau/4]$, $c[h - 1] = 0$ iff the position $q \in S \cap (p..p{+}\tau/4]$ such that $|S \cap (p..q]| = h$ must be marked for removal because there exists $q' \in S \cap [q..p{+}\tau/4]$ such that $s[p..p{+}\tau/2] = s[q'..q'{+}\tau/2]$. Thus, we simply read $C[x]$ and apply the masking array stored in $C[x]$ to mark for removal some positions from $S$ after $p$.
Thus, the processing of $p\in S$ is done in $\Oh(1)$ time since all the bit representations take only $\Oh(1)$ machine words. The total time for processing all substrings $s[i\tau..(i{+}2)\tau]$, for $i \in [0..n/\tau-2]$, is $\Oh(\frac{n}{\tau}\tau) = \Oh(n)$. The size of the table $C$ is $\Oh(2^{o((\log^{(3)} n)^5)} \cdot \tau) = o(\log n)$ bits and, hence, all precalculations can be performed in $o(n)$ time with $\Oh(1)$ space. As in the original algorithm from Section~\ref{sec:recompression}, the computation of $S'$ is executed in an online manner, reporting its positions from left to right without storing them (only a few are stored for internal purposes).
\subparagraph{Part (ii).}
Let us describe now how the string $R$ can be computed from the set $S'$, which is fed to our algorithm from left to right in an online fashion. The idea is very similar to what was done for Part~(i) to produce the set $S'$ from $S$. The algorithm starts with an empty string $R$ and considers all $p \in S'$ from left to right, generating, for each $p \in S'$, a letter $a_p$ that is then appended to the end of the string $R$. As is evident from Lemma~\ref{lem:r-letters-locality}, the construction of $a_p$ requires only the substring $s[p..p{+}\frac{7}{8}\tau]$ and the positions $S' \cap (p..p{+}\frac{7}{8}\tau]$. At first glance, the same trick can be applied as in Part~(i): we consecutively consider substrings $s[i\tau..(i{+}2)\tau]$, for $i \in [0..n/\tau-2]$, reducing their alphabets to $[0..2\tau]$ and encoding them into chunks $\hat{s}_i$ of $\Oh(\tau\log\tau)$ bits; when a position $p \in S'$ arrives, we retrieve the substring $s[p..p{+}\frac{7}{8}\tau]$ with a reduced alphabet from the chunk $\hat{s}_i$ with $i = \lfloor p / \tau\rfloor$, constructing $\hat{s}_i$ from scratch if it was not built previously, and, then, we concatenate to the chunk a bit array of length $\Oh(\tau)$ indicating which of the positions from $(p..p{+}\frac{7}{8}\tau]$ belong to $S'$. Unfortunately, this scheme does not work since the alphabet reductions lose some essential information required to construct the letters $a_p$. The precalculations therefore are more involved.
Denote by $p_1, p_2, \ldots, p_m$ all the positions from $S' \cap (p..p{+}\tau/2^5]$ in increasing order. Recall that the procedure of Section~\ref{sec:recompression} first generates for $p$ a tuple of numbers $\langle w'_1, w'_2, \ldots, w'_\ell\rangle$: for $j \in [1..\ell]$, $w'_j = \vbit(t, t_j)$ if $j \le m$, and $w'_j = \infty$ otherwise, where $t = \sum_{i=0}^{\tau/2} s[p{+}i] 2^{wi}$ and $t_j = \sum_{i=0}^{\tau/2} s[p_j{+}i] 2^{wi}$, for $j \in [1..m]$. The numbers $t$ and $t_j$ simply represent the substrings $s[p..p{+}\tau/2]$ and $s[p_j..p_j{+}\tau/2]$. In order to compute $\vbit(t, t_j)$, it suffices to find the length $L$ of the longest common prefix of $s[p..p{+}\tau/2]$ and $s[p_j..p_j{+}\tau/2]$ and, then, to compute the lowest bit at which the numbers $s[p{+}L]$ and $s[p_j{+}L]$ differ and which of these numbers has 0 and which has 1 in this differing bit. While the length $L$ can be computed on the substring $s[p..p{+}\frac{7}{8}\tau]$ with a reduced alphabet, the information about the bits at which the numbers $s[p{+}L]$ and $s[p_j{+}L]$ differ is lost after the alphabet reduction. However, it turns out that this information can be stored too in small space, without the need to preserve the numbers $s[p{+}L]$ and $s[p_j{+}L]$ themselves.
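When the two letters themselves are available, the lowest differing bit is a one-line computation (sketched below for intuition); the point of the next paragraph is how to recover exactly this information after the letters have been replaced by their ordering numbers.
\begin{verbatim}
def lowest_differing_bit(x, y):
    # Index of the lowest bit at which x and y differ, and whether x
    # carries the 0 there: the data that vbit(t, t_j) needs about the
    # letters s[p+L] and s[p_j+L].
    d = x ^ y  # assumes x != y
    bit = (d & -d).bit_length() - 1  # lowest set bit of x XOR y
    return bit, (x >> bit) & 1 == 0

print(lowest_differing_bit(0b10110, 0b10010))  # -> (2, False)
\end{verbatim}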
Consider the procedure reducing the alphabet for a substring $s[i\tau..(i{+}2)\tau]$ with $i \in [0..n/\tau-2]$. The procedure sorts all letters, assigning to them ordering numbers from $[0..2\tau]$. Denote by $b_0, b_1, \ldots, b_{k-1}$ all distinct letters of $s[i\tau..(i{+}2)\tau]$ in increasing order before the reduction. Each letter $b_i$ is an $\Oh(\log n)$-bit number that is mapped by the reduction to its index $i$. Each number $b_i$ can be represented as a bit string $\bar{b}_i$ of length $\Oh(\log n)$ in which the bits are written from the lowest to the highest. We can construct a compacted trie on the strings $\bar{b}_i$. Since $k \le 2\tau + 1$, the compacted trie can be stored in $\Oh(\tau\log\log n)$ bits: every edge in the trie stores the length of the bit string written on it (which takes $\Oh(\log\log n)$ bits), each internal node contains pointers to its children (taking $\Oh(\log\tau)$ bits since the number of nodes is $\Oh(\tau)$), and each leaf stores the index $i$ of the corresponding number $b_i$ (taking $\Oh(\log\tau)$ bits). The compacted trie can be built in $\Oh(k)$ time using a fusion tree~\cite{FredmanWillard}; in fact, the fusion tree on the numbers $b_0, b_1, \ldots, b_{k-1}$ implicitly constructs precisely this trie (see also~\cite{ChanLarsenPatrascu,GrossiOrlandiRamanRao} where this is emphasized more explicitly). Our idea is that we do not have to store the numbers $b_0, b_1, \ldots, b_{k-1}$ in addition to the trie in order to find the lowest bit at which two numbers $b_i$ and $b_{i'}$ differ: this bit corresponds precisely to the position in the compacted trie at which the corresponding strings $\bar{b}_i$ and $\bar{b}_{i'}$ diverge.
By analogy to the solution for Part~(i), when processing $p \in S'$, we first retrieve the substring $s[p..p{+}\frac{7}{8}\tau]$ with a reduced alphabet from the string $s[i\tau..(i{+}2)\tau]$ with $i = \lfloor p / \tau\rfloor$ encoded in $\hat{s}_i$. Then, we concatenate to the bit representation of this substring a bit array $a$ of length $\lfloor\frac{7}{8}\tau\rfloor$ that indicates which of the positions $p + 1, p + 2, \ldots, p + \lfloor\frac{7}{8}\tau\rfloor$ belong to $S'$: for $h \in (0..\frac{7}{8}\tau]$, we have $p + h \in S'$ iff $a[h - 1] = 1$. The bit array $a$ is easy to maintain in one machine word during the execution of the algorithm using the bit shift operation. Next, we concatenate to the resulting bit chunk a bit representation of the compacted trie on the bit strings $\bar{b}_0, \bar{b}_1, \ldots, \bar{b}_{k-1}$ (without storing the letters $b_0, b_1, \ldots, b_{k-1}$ themselves), which adds another $\Oh(\tau\log\log n)$ bits. Thus, we obtain a chunk of $\Oh(\tau\log\tau) + \frac{7}{8}\tau + \Oh(\tau\log\log n) = \Oh((\log^{(3)} n)^4\log\log n)$ bits that encodes the concatenated bit representations of the substring $s[p..p{+}\frac{7}{8}\tau]$ with a reduced alphabet, of the bit array $a$ indicating all positions from $S' \cap (p..p{+}\frac{7}{8}\tau]$, and of the compacted trie. We view this chunk as an integer number $x$ with $\Oh((\log^{(3)} n)^4\log\log n)$ bits.
The chunk $x$ determines the letter $a_p$. Indeed, in order to compute the letter, we first have to compute the numbers $w'_j = \vbit(t, t_j)$: this can be done by first computing the length $L$ of the longest common prefix of the corresponding substrings of $s[p..p{+}\frac{7}{8}\tau]$ at positions $p$ and $p_j$ and, then, by finding the lowest differing bit of the numbers $s[p{+}L]$ and $s[p_j{+}L]$, which can be performed using the compacted trie; similar computations should be executed for other positions from $S' \cap (p..p{+}\frac{7}{8}\tau]$ but all of them involve only substrings of the string $s[p..p{+}\frac{7}{8}\tau]$, due to Lemma~\ref{lem:r-letters-locality}. All further computations can be executed as described in Section~\ref{sec:recompression}. Instead of performing all this from scratch, we can in advance consider all possible valid chunks that encode in the same way strings $t[0..\frac{7}{8}\tau]$ over the alphabet $[0..2\tau]$ concatenated with bit arrays of length $\frac{7}{8}\tau$ and with compacted tries on $\Oh(\tau)$ bit strings of length $\Oh(\log n)$ (without storing the $\Oh(\tau)$ strings themselves); for each such chunk, we precalculate the resulting letter in a table $D$ so that the entry $D[x]$ stores $a_p$. Thus, we simply read $D[x]$ and append it to $R$.
The processing of $p\in S'$ is done in $\Oh(1)$ time since all the bit representations take only $\Oh(1)$ machine words. The total time for processing all substrings $s[i\tau..(i{+}2)\tau]$, for $i \in [0..n/\tau-2]$, is $\Oh(\frac{n}{\tau}\tau) = \Oh(n)$. The size of the table $D$ is $\Oh(2^{\Oh((\log^{(3)} n)^4\log\log n)} \cdot (\log^{(3)} n)^2)$ bits, which fits into $\Oh(n/\tau)$ space provided $\tau < (\log^{(3)} n)^4$, as can be easily seen since $\log(n/\tau) = \Theta(\log n)$ whereas the logarithm of the space for $D$ is $o(\log n)$. Hence, all precalculations can be performed in $o(n)$ time within $\Oh(n / \tau)$ space.
\subparagraph{Part (iii).}
It remains to describe how all the arrays $M_i[0..\lceil\log^{(4)} n\rceil]$ can be initialized in $\Oh(|S'|)$ time. Given a letter $R[i]$ and a position $p \in S'$ corresponding to it, recall that $M_i[j]$, for $j \in [0..\lceil\log^{(4)} n\rceil]$, is equal to the size of the set $S' \cap (p..p{+}\tau/2^j]$. We receive positions $p$ of $S'$ from left to right and, using the bit shift operation, maintain along the way a bit array $a$ of length $\tau$ that is stored in a machine word $x$ of size $\Oh(\log n)$ bits such that $a$ indicates which of the positions of $(p..p{+}\tau]$ are from $S'$, i.e., for $h \in (0..\tau]$, we have $p + h \in S'$ iff $a[h - 1] = 1$. Obviously, the array $a$ determines the content of $M_i$. The array $M_i$ occupies $\Oh((\log^{(4)} n)^2)$ bits and, therefore, its bit representation can be stored into one machine word. We hence can precompute a table $F$ of size $\Oh(2^{\tau}\cdot (\log^{(4)} n)^2) = o(\log n)$ bits such that $F[x]$ stores the bit representation of the array $M_i$. The content of $F[x]$ is then copied in $\Oh(1)$ time in place of $M_i$. The table $F$ can be straightforwardly precalculated in $o(n)$ time and $\Oh(1)$ space.
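A sketch of the precomputation of $F$ under the encoding just described follows (here the arrays $M$ are kept as Python lists rather than packed into machine words; enumerating all $2^\tau$ masks is feasible precisely because $\tau$ is small in this section).
\begin{verbatim}
def precompute_F(tau, levels):
    # F[x] is the array M decoded from the tau-bit mask x, where bit
    # h-1 of x says whether p+h is in S'; then M[j] counts the set
    # bits among the lowest floor(tau/2**j) bits of x.
    F = []
    for x in range(1 << tau):
        F.append([bin(x & ((1 << (tau >> j)) - 1)).count("1")
                  for j in range(levels + 1)])
    return F
\end{verbatim}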
\bibliographystyle{elsart-num-sort}
\section*{Acknowledgement}
This work was partially supported by the National Natural Science Foundation of China under Grant Nos. 61872369 and 61832017, Beijing Academy of Artificial Intelligence (BAAI) under Grant No. BAAI2020ZJ0301, Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098, the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China under Grant Nos. 18XNLG22 and 19XNQ047. Xin Zhao is the corresponding author.
\section{Conclusion}
In this paper, we have presented a novel coherence-enhanced text planning model for review generation. Our core idea is to utilize KG subgraphs and their correlations to enhance local and global coherence, respectively. KG subgraphs characterize the semantic structure of intra-sentence entities, which naturally enforces local coherence since entities are tightly associated within a subgraph, while the subgraph sequence captures complicated inter-sentence correlations of entities to improve global coherence. The two kinds of coherence have been modeled in a unified, principled text planning approach based on the HKG. Furthermore, we developed a supervised copy mechanism to verbalize sentences based on KG subgraphs, further enhancing local coherence via copied threading words. We have conducted extensive experiments on three real-world review datasets. The experimental results have demonstrated the effectiveness of our model on the review generation task across a range of evaluation metrics.
As future work, we will consider more kinds of external knowledge (e.g., WordNet) and investigate how our model could be applied to other domains.
\section{Experiments}
In this section, we conduct the evaluation experiments for our approach on the review generation task. We first set up the experiments, and then report the results and detailed analysis.
\subsection{Experimental Setup}
\subsubsection{Construction of the Datasets} We use three datasets from different domains for evaluation: the \textsc{Amazon} Electronic and Book datasets~\cite{HeM16} and the \textsc{IMDb} Movie dataset~\cite{imdb}.
We remove users and items occurring fewer than five times, discard reviews containing more than 100 tokens, and keep only the 30,000 most frequent words in the vocabulary for the three datasets.
All the text is lowercased and tokenized using NLTK.
In order to obtain KG information for these items, we adopt the public KB4Rec~\cite{zhao2019kb4rec} dataset to construct the aligned linkage between Freebase~\cite{freebase} (March 2015 version) entities and online items from the three domains.
Starting with the aligned items as seeds, we include their one-hop neighbors from Freebase as our KG data.
We keep the reverse relations and remove the triples with non-Freebase strings.
Note that we only retain the items linked with Freebase entities in our datasets.
The statistics of three datasets after preprocessing are summarized in Table~\ref{tab-data}.
Furthermore, for each domain, we randomly split it into training, validation and test sets with a ratio of 8:1:1.
To construct the entity-word links in HKG, we employ the Stanford NER package to identify entity mentions in review text, and extract aspect words by following~\cite{NiM18}.
We select 489, 442 and 440 aspect words that frequently co-occur with entity mentions in review sentences for the three domains, respectively.
The user-item links in HKG can be constructed according to user-item interactions in review datasets.
Finally, we extract the top 30, 30, and 35 frequent subgraph schemas using gSpan algorithm~\cite{YanH02} for the three domains, respectively.
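A sketch of the filtering steps above is given below (the field names, the single filtering pass, and the \texttt{<unk>} replacement for out-of-vocabulary words are our illustrative assumptions; the paper does not specify these details).
\begin{verbatim}
from collections import Counter
from nltk.tokenize import word_tokenize

def preprocess(reviews, min_count=5, max_len=100, vocab_size=30000):
    # reviews: list of dicts with "user", "item", "text" fields (assumed)
    users = Counter(r["user"] for r in reviews)
    items = Counter(r["item"] for r in reviews)
    kept = []
    for r in reviews:
        if users[r["user"]] < min_count or items[r["item"]] < min_count:
            continue
        tokens = word_tokenize(r["text"].lower())
        if len(tokens) <= max_len:
            kept.append({**r, "tokens": tokens})
    counts = Counter(t for r in kept for t in r["tokens"])
    vocab = {w for w, _ in counts.most_common(vocab_size)}
    for r in kept:
        r["tokens"] = [t if t in vocab else "<unk>" for t in r["tokens"]]
    return kept
\end{verbatim}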
\ignore{After preprocessing the review datasets, we adopt the public KB4Rec~\cite{ZXW18} and RuleRec~\cite{MaZCJWLMR19} datasets to construct the aligned linkage between Freebase~\cite{freebase} entities and online items from the three domains. Freebase stores facts by triples of the form $\langle head, relation, tail \rangle$, and we use the last public version released on March 2015. We keep the aligned entities and their one-hop neighboring entities. We removed relations like \emph{<book,author,written\_book>} which just reverses the head and tail compared to the relations \emph{<book.written\_book.author>}. We also remove relations that end up with non-freebase string, \emph{e.g.,}\xspace like \emph{<film.film.rottentomatoes\_id>}. }
\ignore{
\begin{table}[t]
\centering\small
\caption{KG relations in the three domains. We omit their reverse relations here.}
\begin{tabular}{ c | l }
\toprule[1pt]
\textbf{Datasets} & \textbf{Relations} \\
\midrule[0.7pt]
\textsc{Electronic} & \tabincell{l}{price, service, laptop, sound, case, storage, \\company, category, brand, software, battery, \\peripheral, advertising, camera, manufacturer} \\
\midrule[0.7pt]
\textsc{Book} & \tabincell{l}{genre, subject, author, edition, character, language} \\
\midrule[0.7pt]
\textsc{Movie} & \tabincell{l}{genre, actor, director, writer, producer, music, \\country, language}\\
\bottomrule[1pt]
\end{tabular}
\label{tab-relations}
\end{table}}
\begin{table}[tbp]
\centering\small
\caption{Statistics of our datasets after preprocessing.}
\begin{tabular}{c|l|r r r}
\toprule[1pt]
\multicolumn{2}{c|}{\textbf{Dataset}}& \textbf{Electronic} & \textbf{Book} & \textbf{Movie}\\
\midrule[0.7pt]
\multirow{3}{*}{\tabincell{c}{Review}}&\#Users & 50,473& 71,156& 47,096\\
&\#Items &12,352& 25,045& 21,125\\
&\#Reviews & 221,722& 853,427& 1,152,925\\
\midrule[0.7pt]
\multirow{3}{*}{\tabincell{c}{Knowledge\\Graph}}&\#Entities &30,310 &105,834 &247,126 \\
&\#Relations &30 &12 &16 \\
&\#Triplets &129,254 &300,416 &1,405,348 \\
\bottomrule[1pt]
\end{tabular}%
\label{tab-data}%
\end{table}
\subsubsection{Baseline Methods} We consider the following baselines as comparison:
\textbullet~\emph{Attr2Seq}~\citep{ZhouLWDHX17}: It adopts an attention-enhanced attribute-to-sequence framework to generate reviews with input attributes.
\textbullet~\emph{A2S+KG}: We incorporate the KG embeddings of items as additional inputs into \emph{Attr2Seq}.
\textbullet~\emph{ExpanNet}~\citep{NiM18}: It adopts an encoder-decoder architecture to generate personalized reviews by introducing aspect words.
\textbullet~\emph{A-R2S}~\citep{NiLM19}: It employs a reference-based Seq2Seq model with aspect-planning in order to cover different aspects.
\textbullet~\emph{A-R2S+KG}: We incorporate the KG entities of items as external inputs into \emph{A-R2S}.
\textbullet~\emph{SeqGAN}~\citep{YuZWY17}: It regards the generative model as a stochastic parameterized policy and uses Monte Carlo search to approximate the state-action value. The discriminator is a binary classifier that evaluates the sequence and guides the learning process of the generator.
\textbullet~\emph{LeakGAN}~\citep{GuoLCZYW18}: It is designed for long text generation through the leaked mechanism. The generator is built upon a hierarchical reinforcement learning architecture and the discriminator is a CNN-based feature extractor.
\textbullet~\emph{ACF}~\citep{LiZWS19}: It decomposes the review generation process into three different stages by designing an aspect-aware coarse-to-fine generation model. The aspect semantics and syntactic characteristics are considered in the process.
\textbullet~\emph{KCGNN}~\citep{li2020knowledge}: It proposes a KG-enhanced review generation model based on capsule graph neural network for capturing user preference at both aspect and word levels.
\textbullet~\emph{PHVM}~\citep{ShaoHWXZ19}: It adopts a planning-based hierarchical variational model to capture the inter-sentence coherence of texts.
Among these baselines, \emph{Attr2Seq}, \emph{ExpanNet}, \emph{A-R2S}, \emph{ACF} and \emph{KCGNN} are five recently proposed review generation models; \emph{SeqGAN} and \emph{LeakGAN} are GAN-based text generation models; \emph{A2S+KG} and \emph{A-R2S+KG} incorporate the pre-trained KG item embeddings from DistMult~\cite{YangYHGD14a} and the KG entities of items, respectively; \emph{PHVM} is the state-of-the-art text planning model. We implement it by converting the KG into a list of $\langle relation, entity \rangle$ pairs (\emph{e.g.,}\xspace $\langle actor, Burton \rangle$) about items (\emph{e.g.,}\xspace movie $Sleepy$) as inputs. We employ the validation set to optimize the parameters of each method.
\ignore{
\begin{table}
\centering\small
\caption{Parameter settings of the two modules in our model.}
\begin{tabular}{c | l}
\toprule[1pt]
\textbf{Modules} & \textbf{Settings} \\
\midrule[0.7pt]
Subgraph & \tabincell{l}{$d_E=512$, FFN-embed.-size=$1024$, batch-size=$128$,\\
Attention-heads=$8$, Attention-blocks=$6$,\\
init.-learning-rate=$0.002$, Adam optimizer}\\
\midrule[0.7pt]
Sentence & \tabincell{l}{$d_W=512$, FFN-embed.-size=$1024$, batch-size=$64$,\\
Attention-heads=$8$, Attention-blocks=$6$,\\
init.-learning-rate=$0.002$, Adam optimizer}\\
\bottomrule[1pt]
\end{tabular}
\label{tab-parameters}
\end{table}}
\begin{table*}[tb]
\renewcommand\arraystretch{1.1}
\small
\begin{center}
\caption{Performance comparisons of different methods for automatic review generation under three domains. ``*'' denotes the improvement is statistically significant compared with the best baseline (t-test with p-value $< 0.05$). ``-'' denotes this metric is not applicable to this method, since the generated text contains very few entity mentions.}
\begin{tabular}{c|l | c c | c c c c c}
\toprule[1pt]
\multirow{2}[2]*{\textbf{Datasets}} & \multirow{2}[2]*{\textbf{Models}} & \multicolumn{2}{c|}{\textbf{Coherence}} & \multicolumn{5}{c}{\textbf{Generation}} \\
\cmidrule[0.7pt]{3-9}
& & \textbf{Sen-Sim} & \textbf{ECR} & \textbf{BLEU-1} & \textbf{BLEU-4} & \textbf{ROUGE-1} & \textbf{ROUGE-2} & \textbf{ROUGE-L} \\
\midrule[0.7pt]
\multirow{4}[6]{*}{\textsc{Electronic}}
& Attr2Seq & 0.671 & - & 24.28 & 0.88 & 0.263 & 0.043 & 0.214 \\
& A2S+KG & 0.652 & 1.63 & 25.62 & 0.93 & 0.271 & 0.049 & 0.223 \\
& ExpanNet & 0.653 & - & 26.56 & 0.95 & 0.290 & 0.052 & 0.262 \\
& A-R2S & 0.667 & - & 27.04 & 1.15 & 0.309 & 0.065 & 0.279 \\
& A-R2S+KG & 0.650 & 7.01 & 29.28 & 1.69 & 0.322 & 0.067 & 0.288 \\
& SeqGAN & 0.625 & - & 25.18 & 0.84 & 0.265 & 0.043 & 0.220 \\
& LeakGAN & 0.666 & - & 25.66 & 0.92 & 0.267 & 0.050 & 0.236 \\
%
%
& ACF & 0.686 & - & 28.22 & 1.04 & 0.315 & 0.066 & 0.280 \\
& KCGNN & 0.688 & 16.50 & \underline{29.88} & 1.83 & 0.323 & \underline{0.078} & 0.295 \\
& PHVM & \underline{0.690} & \underline{17.56} & 29.40 & \underline{1.93} & \underline{0.325} & 0.072 & \underline{0.301} \\
& CETP & \textbf{0.707*} & \textbf{20.85*} & \textbf{31.51*} & \textbf{3.12*} & \textbf{0.338*} & \textbf{0.084*} & \textbf{0.312*} \\
\midrule[0.7pt]
\multirow{4}[6]{*}{\textsc{Book}}
& Attr2Seq & 0.677 & - & 26.93 & 1.14 & 0.259 & 0.047 & 0.223 \\
& A2S+KG & 0.672 & 2.25 & 27.69 & 1.42 & 0.268 & 0.053 & 0.236 \\
& ExpanNet & 0.708 & - & 26.52 & 1.49 & 0.301 & 0.054 & 0.271 \\
& A-R2S & 0.695 & - & 28.34 & 1.82 & 0.318 & 0.075 & 0.283 \\
& A-R2S+KG & 0.671 & 9.07 & 29.00 & 2.06 & 0.321 & 0.077 & 0.295 \\
& SeqGAN & 0.633 & - & 26.89 & 1.24 & 0.255 & 0.053 & 0.246 \\
& LeakGAN & 0.663 & - & 28.79 & 1.94 & 0.274 & 0.060 & 0.285 \\
& ACF & 0.715 & - & 28.96 & 2.11 & 0.317 & 0.068 & 0.291 \\
& KCGNN & 0.733 & 16.66 & \underline{30.66} & \underline{3.08} & \underline{0.332} & 0.080 & 0.306 \\
& PHVM & \underline{0.740} & \underline{18.79} & 29.33 & 2.46 & 0.319 & \underline{0.085} & \underline{0.307} \\
& CETP & \textbf{0.761*} & \textbf{23.89*} & \textbf{31.93*} & \textbf{3.89} & \textbf{0.341*} & \textbf{0.095*} & \textbf{0.317*} \\
\midrule[0.7pt]
\multirow{4}[6]{*}{\textsc{Movie}}
& Attr2Seq & 0.629 & - & 26.57 & 1.55 & 0.271 & 0.050 & 0.222 \\
& A2S+KG & 0.619 & 14.96 & 27.02 & 1.67 & 0.278 & 0.053 & 0.235 \\
& ExpanNet & 0.651 & - & 27.93 & 2.00 & 0.310 & 0.063 & 0.266 \\
& A-R2S & 0.649 & - & 29.01 & 2.12 & 0.314 & 0.074 & 0.306 \\
& A-R2S+KG & 0.617 & 20.06 & 30.05 & 2.95 & 0.325 & 0.077 & 0.313 \\
& SeqGAN & 0.641 & - & 27.07 & 1.63 & 0.274 & 0.052 & 0.221 \\
& LeakGAN & 0.672 & - & 28.10 & 2.29 & 0.302 & 0.064 & 0.271 \\
& ACF & 0.709 & - & 29.46 & 2.40 & 0.322 & 0.076 & 0.303 \\
& KCGNN & 0.766 & 23.34 & \underline{31.39} & \underline{3.55} & \underline{0.341} & 0.096 & 0.327 \\
& PHVM & \underline{0.770} & \underline{24.70} & 30.29 & 3.02 & 0.331 & \underline{0.098} & \underline{0.328} \\
& CETP & \textbf{0.794*} & \textbf{28.96*} & \textbf{32.37*} & \textbf{4.39*} & \textbf{0.353*} & \textbf{0.114*} & \textbf{0.345*} \\
\bottomrule[1pt]
\end{tabular}
\label{tab:main-results}
\end{center}
\vspace{-0.04cm}
\end{table*}
\paratitle{Evaluation Metrics}. To evaluate the performance of review generation,
we adopt two automatic \emph{generation metrics}, including BLEU-1/4 and ROUGE-1/2/L.
BLEU~\cite{PapineniRWZ02} measures the ratios of the co-occurrences of $n$-grams between the generated and real reviews;
ROUGE~\cite{Lin04} counts the overlapping $n$-grams between generated and real reviews.
Furthermore, to evaluate the coherence of generated reviews, we adopt two automatic \emph{coherence metrics}, including Sen-Sim proposed in~\cite{LapataB05} (measuring discourse coherence as an average cosine similarity between any two sentences from the discourse based on sentence embeddings from BERT~\cite{DevlinCLT19}) and Entity Co-occurrence Ratio~(abbreviated as \emph{ECR}, modified based on BLEU-2 and computing the ratio of co-occurrences of entity pairs between generated and real reviews).
Compared with~\cite{LapataB05}, which represents the sentence embedding as the mean of the embeddings of the words in the sentence, BERT adds a special token ``\texttt{[CLS]}'' as the first token of every sentence, and the final representation of this token is used as the sentence embedding. We also tried other models (\emph{e.g.,}\xspace Word2Vec and ELMo) to acquire sentence embeddings. These models achieve similar results to BERT.
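For reproducibility, a minimal sketch of Sen-Sim as we read the description above is given below (it assumes the Hugging Face \texttt{transformers} library; the model name and the \texttt{[CLS]} pooling follow the description, while everything else is an illustrative choice).
\begin{verbatim}
import itertools
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def sen_sim(sentences):
    # Average pairwise cosine similarity between the [CLS] embeddings
    # of the sentences of one review (assumes >= 2 sentences).
    with torch.no_grad():
        enc = tokenizer(sentences, padding=True, truncation=True,
                        return_tensors="pt")
        cls = model(**enc).last_hidden_state[:, 0]
    pairs = list(itertools.combinations(range(len(sentences)), 2))
    sims = [torch.nn.functional.cosine_similarity(cls[i], cls[j], dim=0)
            for i, j in pairs]
    return float(sum(sims) / len(sims))
\end{verbatim}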
\subsection{Main Results}
Table~\ref{tab:main-results} presents the results of different methods on the review generation task.
First, by incorporating the KG embeddings or KG entities, A2S+KG and A-R2S+KG achieve better results on the generation metrics but worse results on the coherence metrics than the original Attr2Seq and A-R2S. This observation implies that, although KG data is useful for text generation, we should incorporate it more carefully with text coherence in mind. Besides, among the GAN-based methods, LeakGAN gives better results on both kinds of metrics than SeqGAN. A major reason is that LeakGAN is specially designed for generating long text, while SeqGAN may not be effective at capturing long-range semantic dependencies in text generation.
Second, ACF, KCGNN and PHVM overall outperform most of the baselines on both kinds of metrics.
ACF adopts a coarse-to-fine three-stage generation process considering aspect semantics and syntactic patterns, and KCGNN is a KG-enhanced generation model capturing user preference on KG attributes. This shows that both aspect semantics and KG information are helpful for review generation, especially KG information. As the most relevant comparison with our model, the recently proposed PHVM yields the best performance among all baselines. It introduces KG entities and verbalizes coherent sentences conditioned on attribute-level planning (a sequence of entity groups).
\ignore{Second, LeakGAN and ACF outperform most of baselines for generation and coherence metrics due to the special long text modeling. Specifically, ACF adopts a more powerful three-stage generation process by considering aspect semantics and syntactic patterns.
Among all baselines, the most competitive and direct comparison with our model is PHVM, which realizes coherent sentences conditioned on attribute-level planning (a sequence of item groups).
}
Finally, we compare the proposed CETP with the baseline methods. It is clear that CETP outperforms all the baselines by a large margin.
The major difference between our model and the baselines lies in our text planning mechanism based on KG subgraphs in the generation process, which simultaneously improves the global and local coherence of texts.
PHVM lacks the modeling of multi-grained correlations between entity groups and also neglects the intrinsic structure of an entity group.
In contrast, the other baselines do not explicitly model the coherence of text or incorporate external KG data.
\begin{table}[t]
\centering
\caption{Ablation analysis on \textsc{Movie} dataset.}
\begin{tabular}{ l c c c }
\toprule[1pt]
\textbf{Models} & \textbf{ECR} & \textbf{BLEU-1} & \textbf{ROUGE-1} \\
\midrule[0.7pt]
CETP & 28.96 & 32.37 & 0.342 \\
\midrule[0.7pt]
w/o HKG, w KG & 28.91 & 31.93 & 0.335 \\
w/o SA & 25.60 & 30.96 & 0.317 \\
w/o NA & 26.00 & 31.34 & 0.330\\
w/o Copy & 25.20 & 31.03 & 0.326 \\
\bottomrule[1pt]
\end{tabular}
\label{tab:ablation-results}
\end{table}
\subsection{Detailed Analysis}
Next, we conduct detailed analysis experiments on our model.
We only report the results on the \textsc{Movie} dataset, since the findings on the three datasets are similar. We select the two best baselines, \emph{KCGNN} and \emph{PHVM}, for comparison.
\subsubsection{Ablation Analysis} Our model has three novel designs: HKG incorporation, subgraph-based text planning and a supervised copy mechanism.
Table~\ref{tab:ablation-results} shows the results when we ablate these designs. Here, we consider four variants for comparison: (1) \emph{w/o HKG, w KG} keeps the original KG links while removing the interaction and co-occurrence links of the HKG; (2) \emph{w/o SA} removes the subgraph-level attention (Eq.~\ref{eq-subgraph-level-attention}) in subgraph-based text planning; (3) \emph{w/o NA} removes the node-level attention (Eq.~\ref{eq-node-level-attention}) in subgraph-based text planning; (4) \emph{w/o Copy} removes the supervised copy mechanism when generating words (Section~\ref{sec-supervised-copy}).
In Table~\ref{tab:ablation-results}, we can see that removing the user and word nodes (together with their associated links) gives worse results than CETP, which shows that user-item interactions and entity-word co-occurrences are useful for review generation in terms of capturing user preference and entity-keyword associations.
Second, the variants dropping the subgraph- and node-level attention are worse than CETP, especially the one dropping the subgraph-level attention.
This shows that our model benefits from the subgraph-based text planning, which improves content selection, arrangement and ordering.
Finally, removing the supervised copy mechanism also greatly degrades the final performance of our model.
In our model, the supervised copy mechanism explicitly guides the switch between generation and copy by selecting highly related entities or words from the planned subgraph, which has a significant effect on the final coherence performance.
\subsubsection{Human Evaluation} Above, we have performed automatic evaluation experiments for our model and the baselines. For text generation models, it is also important to conduct human evaluation to further verify effectiveness.
Following previous work~\cite{LiZWS19,NiLM19}, we also conduct human evaluation on the generated reviews.
We randomly choose 200 samples from the test set.
A sample contains the input contexts (\emph{i.e.,}\xspace user, item and rating) and the texts generated by the different models.
Three experienced e-commerce users were asked to score the texts along four dimensions: coherence, relevance, fluency and informativeness.
\emph{Coherence} evaluates how coherent the content is, considering both intra- and inter-sentence correlations~\cite{ShaoHWXZ19}.
\emph{Relevance} measures how relevant the generated review is to the input contexts.
\emph{Fluency} measures how likely the generated review is to have been written by a human.
\emph{Informativeness} measures how much specific or diverse information the generated text provides.
The scoring adopts a 5-point Likert scale~\cite{likert1932technique}, ranging from 1~(``very terrible'') to 5~(``very satisfying'').
For each method, we average the scores from the three human judges over the 200 inputs.
The results in Table \ref{tab:human-results} show that CETP produces more coherent texts, which further verifies the effectiveness of the subgraph-based text planning. It is also worth noting that CETP performs better in terms of fluency, since KG subgraphs enforce more fluent and logical expressions.
The informativeness of CETP is slightly worse than that of PHVM, possibly because PHVM applies a greedier strategy to copy entities from the KG, while our model adopts a more conservative strategy that incorporates only highly relevant KG entities.
The Cohen's kappa coefficients for the four dimensions are 0.78, 0.71, 0.75 and 0.69, respectively, indicating high agreement among the three human judges.
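With three judges, a single agreement score per dimension can be obtained, for instance, by averaging the pairwise Cohen's kappa values; the sketch below illustrates this convention (the helper name and the use of scikit-learn are our assumptions, not necessarily the exact procedure used here).
\begin{verbatim}
# Pairwise Cohen's kappa averaged over the three judges (illustrative).
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def mean_pairwise_kappa(scores):
    """scores: three equal-length lists of Likert ratings (1-5),
    one list per judge, for a single evaluation dimension."""
    kappas = [cohen_kappa_score(a, b) for a, b in combinations(scores, 2)]
    return sum(kappas) / len(kappas)
\end{verbatim}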
\begin{table}[t]
\centering
\caption{Human evaluation on \textsc{Movie} dataset.}
\begin{tabular}{l c c c c }
\toprule[1pt]
\textbf{Metrics} & \textbf{Gold} & \textbf{CETP} & \textbf{KCGNN} & \textbf{PHVM} \\
\midrule[0.7pt]
Coherence & 4.22 & \underline{3.51} & 3.10 & 3.18 \\
Relevance & 4.22 & \underline{3.42} & 3.34 & 3.33 \\
Fluency & 4.54 & \underline{3.49} & 3.15 & 3.08 \\
Informativeness & 4.33 & 2.97 & 2.95 & \underline{3.03} \\
\bottomrule[1pt]
\end{tabular}
\label{tab:human-results}
\end{table}
\begin{figure}[h]
\centering
\subfigure[Tuning the amount of KG data.]{\label{fig-movie-varing-kg}
\centering
\includegraphics[width=0.22\textwidth]{movie_kg_sparsity.pdf}
}
\subfigure[Tuning the KG embedding size.]{\label{fig-movie-varing-es}
\centering
\includegraphics[width=0.22\textwidth]{movie_kg_embed.pdf}
}
\centering
\caption{Performance tuning on \textsc{Movie} dataset.}
\label{fig-parameter-tuning-movie}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.98\textwidth]{case.pdf}
\caption{Subgraph schema visualization and sample reviews generated by CETP on \textsc{Movie} dataset. The two reviews are about the movies \emph{``The Visitor''} and \emph{``Moneyball''} from the same user. The capital letters $A$, $G$, $D$, $L$, $M$ and $C$ denote the relations of \emph{actor}, \emph{genre}, \emph{director}, \emph{language}, \emph{music} and \emph{co-occurrence}, respectively.}
\label{fig-case}
\end{figure*}
\subsection{Performance Sensitivity Analysis}
\label{sec-performance}
In our paper, we have shown that KG data is very helpful to our model on both the generation and coherence metrics. Here, we examine how it affects the final performance.
\subsubsection{Tuning the amount of KG data} The amount of available KG information directly affects the performance of KG-enhanced methods. Here we examine how different methods perform with varying amounts of KG data. We select A-R2S+KG, KCGNN and PHVM as comparison methods.
We take 40\%, 60\%, 80\% and 100\% of the available KG data to generate four new KG training datasets.
We use them together with the original review data to train our model and report the performance on the test set; the KG test set is fixed as the original. As shown in Figure~\ref{fig-movie-varing-kg}, the performance of CETP gradually improves as the amount of KG data increases, and CETP achieves a consistent improvement over the other baselines with more than 40\% of the KG data.
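The split construction itself is straightforward; a minimal sketch (the helper name and the fixed seed are our assumptions) is:
\begin{verbatim}
# Building the 40/60/80/100% KG training splits (illustrative).
import random

def subsample_kg(triples, ratio, seed=42):
    """Keep `ratio` of the KG triples for training;
    the KG test set stays fixed as the original."""
    rng = random.Random(seed)
    return rng.sample(list(triples), int(len(triples) * ratio))
\end{verbatim}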
\subsubsection{Tuning the KG embedding size} For KG data, the embedding size is an important parameter to tune in real applications, since it restricts the capacity for encoding KG information. Here, we vary the embedding size in the set \{64, 128, 256, 512\} and conduct an evaluation similar to that for the amount of KG data. As we can see from Figure~\ref{fig-movie-varing-es}, CETP is substantially better than the other baselines for all four embedding sizes, which indicates the effectiveness of our model in extracting and encoding useful KG information. Another observation is that the performance of CETP gradually increases with the embedding size, and the change is relatively stable.
\subsection{Qualitative Analysis}
Previous experiments have demonstrated the effectiveness of our model in generating semantically coherent review text. In this part, we qualitatively analyze why our model performs well.
In Figure~\ref{fig-case}(a), we present the top 13 frequent subgraph schemas and their distributions from gold reviews and generated reviews of CETP and PHVM.
As we can see, the distribution of subgraph schemas from CETP is closer to the real distribution (smaller MAE and RMSE) than that of PHVM, indicating the effectiveness of our text planning mechanism.
Furthermore, Figure~\ref{fig-case}(b) and \ref{fig-case}(c) present two movie reviews and corresponding text plans generated by CETP for a sample user.
Note that, in Figures~\ref{fig-case}(b) and \ref{fig-case}(c), each generated sentence is verbalized from a generated subgraph (on a colored background), and the two are labelled with the same number. The numbers represent the order of the subgraphs and sentences in the sequences generated by our model.
As we can see, for global coherence, CETP can capture inter-sentence entity distributions and generate similar aspect and content sketches compared with real reviews (\emph{e.g.,}\xspace \emph{romance film (genre)}$ \rightarrow$ \emph{thomas mccarthy (director)}$ \rightarrow$ \emph{richard jenkins (actor)}), due to the effective text planning mechanism based on KG subgraphs.
For local coherence, the sentences are well verbalized through the intra-sentence correlation between entities in subgraphs (\emph{e.g.,}\xspace \emph{thomas mccarthy} and \emph{richard kind}) and the connection of threading words (\emph{e.g.,}\xspace \emph{stars} and \emph{performance}).
Besides, CETP can capture the relations and entities preferred by the user for the two movies (\emph{e.g.,}\xspace \emph{genre} and \emph{romance film}). This implies that the user-augmented KG data can provide important semantics for learning user preference.
\section{Introduction}
With the development of e-commerce in recent years, online reviews play a crucial role in reflecting real customer experiences. Online review information is useful both to users interested in a certain product and to sellers concerned about increasing their revenue. However, many users find it tedious to write review text, and a large proportion of users do not post online reviews~\cite{bareket2016strategic}. To ease the process of review writing, the task of review generation has been proposed and has received wide attention from both the research and industry communities~\cite{ZhouLWDHX17,NiM18,LiZWS19}. Review generation aims to automatically produce review text conditioned on some necessary context inputs (\emph{e.g.,}\xspace users, items and ratings), and potentially influences many applications, such as explanation generation for recommendation~\cite{LiZC20} and automatic scientific reviewing for papers~\cite{BartoliLMT16}. Existing methods mainly make extensions based on sequential neural networks (\emph{e.g.,}\xspace recurrent neural networks), including attribute awareness~\cite{ZhouLWDHX17}, aspect enrichment~\cite{NiM18}, and length enhancement~\cite{LiZWS19}. However, these studies do not explicitly utilize factual information about items and tend to generate dull and uninformative review text.
To enrich the generated content, we consider incorporating an external knowledge graph (KG) to improve review generation.
By associating KG entities with e-commerce items, we can obtain rich semantic relations about items from the KG. Indeed, there has also been growing interest in utilizing KG data in other text generation tasks, such as dialog systems~\cite{niu2019knowledge} and document summarization~\cite{AmplayoLH18}.
These approaches typically learn to copy entities or triples from the KG when necessary, which improves the informativeness of the generated content to a certain extent. However, they lack an overall mechanism for selecting and arranging the incorporated KG data, which is likely to cause text incoherence~\cite{GroszJW95}, such as content discontinuity and logical confusion.
Figure~\ref{fig-example} presents a comparison between a coherent review and an incoherent review in terms of content continuity. As we can see, although review 2 has incorporated factual information from KG, the entire organization is poor and the review content is discontinuous in terms of semantic structure, \emph{i.e.,}\xspace the lead actor and actress, \emph{Leonardo Dicaprio} and \emph{Kate Winslet}, are separated by the genre \emph{romance film}.
To address the above issue, we focus on improving \emph{entity-centric coherence} of the generated reviews by leveraging the semantic structure of KGs.
According to \cite{GroszJW95}, entity-centric coherence refers to the property that the entities of a text are closely correlated with one another, and the entity correlations among sentences can be used to create coherence patterns for the text.
Also, it has been widely recognized that entity graphs (or subgraphs) are a powerful form to characterize the coherent structure in natural language~\cite{GuinaudeauS13}.
Compared with the traditional entity-grid method~\cite{BarzilayL05} which is restricted to capturing coherent transitions between adjacent sentences, the KG-based entity graphs can easily span the entire text and capture the semantic correlations between different sentences.
Following the literature of linguistics~\cite{mani1998using}, we consider two kinds of coherence, namely \emph{global} and \emph{local coherence}. \emph{Global coherence} captures how entities distribute among different sentences through inter-sentence correlations~\cite{ElsnerAC07}.
\emph{Local coherence} means that the intra-sentence entities (or keywords) are closely correlated through semantic relations or threading words~\cite{BarzilayL05}.
Our main idea is to utilize KG subgraphs to enhance local coherence, and their correlations to enhance global coherence.
To develop our model, we derive the generation plan before generating the text.
Such an approach is called \emph{text planning} in text generation~\cite{MoryossefGD19,HuaW19} and refers to the process of selecting, arranging and ordering the content to be produced.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{example.pdf}
\caption{Comparison of coherent and incoherent review texts in terms of content continuity. Review 1 first comments on the movie genre, followed by a sentence about the two actors; Review 2 introduces the two actors in two separate sentences, interleaved by another sentence on the genre.}
\label{fig-example}
\end{figure}
To this end, in this paper, we propose a novel \textbf{C}oherence \textbf{E}nhanced \textbf{T}ext \textbf{P}lanning model (CETP) for review generation.
We utilize KGs to capture coherence patterns and generate coherent review text.
Based on an augmented KG with user-item interaction and entity-keyword co-occurrence, we design a two-level text plan for generating a document: (1) the document plan
is modeled as a sequence of sentence plans in order, and (2) the sentence plan is modeled as an entity-based subgraph from KG.
In our model, local coherence is naturally enforced by KG subgraphs, since entities from a KG subgraph are tightly associated by semantic relations. To enhance global coherence, we develop a hierarchical self-attentive architecture with both subgraph- and node-level attention to generate a coherent sequence of subgraphs.
For sentence realization, we develop a supervised copy mechanism for copying entities (or keywords) from the planned subgraph, which further improves local coherence by enhancing intra-sentence entity correlations via threading words.
To the best of our knowledge, we are the first to utilize a KG-based text planning model to enhance both global and local coherence for review generation.
For evaluation, we construct three review datasets by associating KG entities with e-commerce items to obtain semantic attributes about items. Extensive experiments demonstrate the effectiveness of our model.
\section{The Proposed Approach}
In this section, we present the proposed \underline{C}oherence \underline{E}nhanced \underline{T}ext \underline{P}lanning model, named \emph{CETP}, for the review generation task. We first introduce the two-level text plan for selecting, arranging and ordering the contents in the output text, namely \emph{document plan} and \emph{sentence plan}.
As discussed earlier, a document plan is modeled as a sequence of sentence plans in order and a sentence plan is modeled as an entity-based subgraph from KG. Subgraphs naturally enforce the \emph{local coherence} of entities in a sentence, since they are originally connected and associated with relations in HKG. Furthermore, subgraph sequence can capture the overall distribution and arrangement of entities, which helps improve \emph{global coherence}.
Based on the two-level text plan, we adopt a supervised copy mechanism for sentence realization by copying entities (or keywords) from the planned subgraph. This step further improves local coherence by enhancing the intra-sentence entity correlation via threading words.
Figure~\ref{fig-model} presents examples for illustrating the basic notations and coherence enhancement in our model.
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{model.pdf}
\caption{Illustrative examples for our approach: (a) subgraph schema (the capital letters $I$, $A$, $G$ and $C$ denote the relations of \emph{interaction}, \emph{actor}, \emph{genre} and \emph{co-occurrence}, respectively); (b) subgraph; (c) heterogeneous knowledge graph; and (d) the overview of the proposed CETP model. The document plan and sentence plan can enhance the global and local coherence, respectively.}
\label{fig-model}
\end{figure*}
\subsection{Text Plan Generation}
In this step, we study how to generate the text plan, \emph{i.e.,}\xspace a subgraph sequence $g^{1:m}={\langle g_1,\cdots,g_j,\cdots,g_m \rangle}$ defined in Section~\ref{sec-preliminary}.
\subsubsection{HKG Embedding}
\label{sec-graph-encoding}
We first learn node representations for the HKG.
Let $n$ ($n_j$ and $n_k$) denote a node placeholder for the HKG, associated with an embedding vector $\bm{v}_n \in \mathbb{R}^{d_E}$, where $d_E$ denotes the node embedding size. Node embeddings can be initialized with pre-trained KG embeddings or word embeddings~\cite{YangYHGD14a,MikolovSCCD13}. In order to capture the semantic correlations between nodes,
we propose to use a Relation-Enhanced Graph Transformer~\cite{VaswaniSPUJGKP17},
which applies a relation-enhanced multi-head attention (MHA) to obtain the node embedding $\bm{\hat{v}}_{n_j}$ for node $n_j$ as:
\begin{equation}\label{eq-encoder-multi-head}
\bm{\hat{v}}_{n_j} = \underset{n_k \in \mathcal{G}}{\text{MHA}}(\bm{q}_{n_j}, \bm{k}_{n_k}, \bm{v}_{n_k}),
\end{equation}
where $\text{MHA}(\bm{Q}, \bm{K}, \bm{V})$ follows the implementation of multi-head attention~\cite{VaswaniSPUJGKP17} taking a query $\bm{Q}$, a key $\bm{K}$ and a value $\bm{V}$ as input:
\begin{eqnarray}\label{eq-multi-head-define}
\text{MHA}(\bm{Q}, \bm{K}, \bm{V}) &=& \text{Concat}(\text{head}_1, ..., \text{head}_H)\bm{W}^O, \\
\text{head}_h &=& \text{Attn}(\bm{Q}\bm{W}^Q_h, \bm{K}\bm{W}^K_h, \bm{V}\bm{W}^V_h), \nonumber
\end{eqnarray}
where $\bm{W}^Q_h, \bm{W}^K_h, \bm{W}^V_h \in \mathbb{R}^{d_E \times d_H}$, and $d_H$ denotes the dimension of attention heads. For clarity, we write Eq.~\ref{eq-encoder-multi-head} in the form of vectors. The key point lies in that we incorporate the semantic relations between nodes into the \emph{query vector} ($\bm{q}_{n_j}$) and \emph{key vector} ($\bm{k}_{n_k}$):
\begin{eqnarray}\label{eq-encoder-relation}
\bm{q}_{n_j} &=& \bm{v}_{n_j} + \bm{v}_{n_j \to n_k}, \\
\bm{k}_{n_k} &=& \bm{v}_{n_k} + \bm{v}_{n_k \to n_j}, \nonumber
\end{eqnarray}
where $\bm{v}_{n_j \to n_k}$ and $\bm{v}_{n_k \to n_j}$ are encodings for the shortest relation path $n_j \to n_k$ and $n_k \to n_j$ between nodes $n_j$ and $n_k$, which are learned by summing the embeddings of the relations in the path.
Finally, we employ a residual connection and fully connected feed-forward network (FFN):
\begin{eqnarray}\label{eq-encoder-ffn}
\bm{\tilde{v}}_{n_j} &=& \bm{\hat{v}}_{n_j} + \text{FFN}(\bm{\hat{v}}_{n_j}), \\
\text{FFN}(\bm{x}) &=& \text{max}(0, \bm{x}^\top \bm{W}_1 + \bm{b}_1)\bm{W}_2 + \bm{b}_2, \nonumber
\end{eqnarray}
where $\bm{W}_1, \bm{W}_2, \bm{b}_1$ and $\bm{b}_2$ are trainable parameters, and FFN is a two-layer feed-forward network whose activation is the $\max(0,\cdot)$ (\emph{i.e.,}\xspace ReLU) shown above.
The attention block, \emph{i.e.,}\xspace MHA and FFN, can be stacked multiple times.
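Because the queries and keys in Eq.~\ref{eq-encoder-relation} are pair-dependent, this attention cannot be computed with a stock attention layer. The PyTorch sketch below shows a single-head version for one graph; the precomputed tensor \texttt{path\_enc}, holding the path encodings $\bm{v}_{n_j \to n_k}$, and the projection matrices are illustrative assumptions rather than our exact implementation.
\begin{verbatim}
# Single-head relation-enhanced attention over one HKG (sketch).
import torch
import torch.nn.functional as F

def relation_enhanced_attention(v, path_enc, Wq, Wk, Wv):
    """v: [n, d_e] node embeddings; path_enc: [n, n, d_e], where
    path_enc[j, k] encodes the shortest relation path n_j -> n_k
    (sum of the relation embeddings along the path)."""
    d_h = Wq.size(1)
    q = (v.unsqueeze(1) + path_enc) @ Wq                  # q[j,k] = W_q (v_j + v_{j->k})
    k = (v.unsqueeze(0) + path_enc.transpose(0, 1)) @ Wk  # k[j,k] = W_k (v_k + v_{k->j})
    scores = (q * k).sum(-1) / d_h ** 0.5                 # [n, n] attention logits
    attn = F.softmax(scores, dim=-1)                      # each node attends to all nodes
    return attn @ (v @ Wv)                                # [n, d_h] updated node embeddings
\end{verbatim}
A multi-head version would apply this computation once per head and concatenate the results, followed by the residual-plus-FFN block of Eq.~\ref{eq-encoder-ffn}.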
\subsubsection{Subgraph Encoder}
\label{sec-two-level-attention}
Since a sentence plan corresponds to a connected subgraph from HKG, it naturally enforces the local coherence of its nodes.
The major difficulty lies in how to enhance global coherence in text planning.
Let $\bm{v}_{g_{j}} \in \mathbb{R}^{d_{E}}$ denote the embedding of the current subgraph $g_{j}$, initialized by an average pooling of the embeddings of all nodes in $g_{j}$ as $\bm{v}_{g_{j}} = \text{AvgPooling}(\bm{\tilde{v}}_{n_k})$, where $\bm{\tilde{v}}_{n_k}$ is the embedding of node $n_k$ in $g_{j}$ learned with Graph Transformer in Section~\ref{sec-graph-encoding}. Specifically, at the first step, the subgraph $g_0$ is initialized as a \textsc{Start} graph without any nodes.
As shown in Figure~\ref{fig-model}(d), the nodes in previous subgraphs have close semantic correlations with the nodes in subsequent subgraphs. Therefore, we introduce two kinds of multi-head attention to enhance global coherence in the subgraph representations.
\paratitle{Subgraph-level Attention}. To make content globally coherent, the basic idea is to refer to previous subgraphs when learning the embedding of current subgraph. For this purpose, we propose a subgraph-level multi-head attention
and obtain a subgraph-enhanced subgraph embedding $\bm{v}^G_{g_j}$ as:
\begin{equation}\label{eq-subgraph-level-attention}
\bm{v}^G_{g_{j}} = \underset{k < j}{\text{MHA}}(\bm{v}_{g_{j}}, \bm{v}_{g_k}, \bm{v}_{g_k}),
\end{equation}
where $\bm{v}_{g_k}$ denotes the embeddings of previous subgraphs $g_1, \cdots, g_{j-1}$. In the subgraph-level multi-head attention mechanism, the embeddings for previous subgraphs are considered as key and value vectors, and the embedding of the current subgraph acts as the query vector. In this way, it incorporates the information of previous subgraphs for encoding the current subgraph.
\paratitle{Node-level Attention}. Subgraph-level attention cannot directly reflect the fine-grained entity correlations between different subgraphs. Hence, we further propose to use node-level multi-head attention by considering the effect of nodes from \emph{previous subgraphs}.
The node-enhanced subgraph embedding $\bm{v}^N_{g_j}$ is given as:
\begin{equation}\label{eq-node-level-attention}
\bm{v}^N_{g_j} = \underset{n_z \in g_{k}, \forall k < j}{\text{MHA}}(\bm{v}^G_{g_j}, \bm{\tilde{v}}_{n_z}, \bm{\tilde{v}}_{n_z}),
\end{equation}
where $\bm{\tilde{v}}_{n_z}$ is the learned embedding of node $n_z$ in previous subgraphs $g_1, \cdots, g_{j-1}$ computed as Eq.~\ref{eq-encoder-ffn}.
Similar to Eq.~\ref{eq-subgraph-level-attention}, the node embeddings in previous subgraphs are considered as key and value vectors, and the embedding of the current subgraph acts as the query vector.
Hence, the node information of previous subgraphs has been incorporated for encoding the current subgraph.
With the two kinds of multi-head attention, we have enhanced the global coherence by capturing inter-sentence correlations, since
the information of previous subgraphs and their nodes can be injected into current subgraph.
Finally, we also apply a residual connection and fully connected feed-forward network (FFN) to the node-enhanced subgraph embedding $\bm{v}^N_{g_{j}}$ (similar to Eq.~\ref{eq-encoder-ffn}) and obtain the final subgraph embedding $\bm{\tilde{v}}_{g_{j}}$.
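A compressed sketch of this hierarchical attention, written with PyTorch's stock \texttt{MultiheadAttention} in place of Eqs.~\ref{eq-subgraph-level-attention} and \ref{eq-node-level-attention} (the class name is ours, and the residual FFN and block stacking are omitted), is:
\begin{verbatim}
# Subgraph- and node-level attention for one plan step (sketch).
import torch
import torch.nn as nn

class GlobalCoherenceEncoder(nn.Module):
    def __init__(self, d_e, n_heads):
        super().__init__()
        self.subgraph_mha = nn.MultiheadAttention(d_e, n_heads)
        self.node_mha = nn.MultiheadAttention(d_e, n_heads)

    def forward(self, v_gj, prev_graphs, prev_nodes):
        """v_gj: [1, 1, d_e] current subgraph embedding (avg-pooled nodes);
        prev_graphs: [j-1, 1, d_e]; prev_nodes: [num_prev_nodes, 1, d_e]."""
        # Subgraph level: previous subgraphs serve as keys and values.
        v_g, _ = self.subgraph_mha(v_gj, prev_graphs, prev_graphs)
        # Node level: nodes of previous subgraphs serve as keys and values.
        v_n, _ = self.node_mha(v_g, prev_nodes, prev_nodes)
        return v_n
\end{verbatim}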
\subsubsection{Subgraph Decoder}
After obtaining the final embedding of current subgraph $\bm{\tilde{v}}_{g_{j}}$, we further utilize it to generate the next subgraph, \emph{i.e.,}\xspace $g_{j+1}$.
General graph generation is a challenging task in deep learning~\cite{TrivediFBZ19,TaheriGB19}.
However, our task has several unique characteristics that make the generation task simpler.
Recall that each subgraph $g_j$ is associated with a subgraph schema $s_j$, and a subgraph schema can be instantiated into different subgraphs (Section~\ref{sec-preliminary}). Thus, to generate a subgraph, we first generate the subgraph schema and then fill its empty slots with entities or keywords. A review sentence usually contains only a few entities, and the number of frequent subgraph schemas in the corpus is indeed small.
We treat schema generation as a classification task over the frequent schema set, which is pre-extracted from training data. Once the schema has been determined, we utilize the relations in the schema as constraints and the entity (keyword) probability predicted by our model as selection criterion. Figure~\ref{fig-selection} presents an example for the process of subgraph generation.
To enhance the personalized characteristics of subgraphs, following \emph{Attr2Seq}~\cite{ZhouLWDHX17}, we apply a standard attention mechanism~\cite{BahdanauCB14} on context information $\langle u,i,a \rangle$, and obtain a context vector $\bm{\tilde{c}}_j$ for encoding information of users, items and ratings.
Finally, we stack the attention block, \emph{i.e.,}\xspace subgraph- and node-level attention, multiple times and compute the selection probabilities for a subgraph schema and an entity (or keyword) node as:
\begin{eqnarray}
\text{Pr}(s_{j+1} | g_1,...,g_j) &=& \text{softmax}(\bm{W}_4 [\bm{\tilde{v}}_{g_j}; \bm{\tilde{c}}_j] + \bm{b}_4), \label{eq-schema-prob}\\
\text{Pr}(n_{j+1} | g_1,...,g_j) &=& \text{softmax}(\bm{W}_5 [\bm{\tilde{v}}_{g_j}; \bm{\tilde{c}}_j] + \bm{b}_5), \label{eq-node-prob}
\end{eqnarray}
where $\bm{W}_4, \bm{W}_5, \bm{b}_4$ and $\bm{b}_5$ are trainable parameters, and $s_{j+1}$ and $n_{j+1}$ denote a subgraph schema and an entity (or keyword), respectively.
In practice, we select the most probable subgraph schema according to Eq.~\ref{eq-schema-prob}. Then, we collect all the entities that satisfy the requirements of the subgraph schema. Finally, each empty slot is filled with the most probable node according to Eq.~\ref{eq-node-prob}.
Although there might be other combinatorial optimization strategies, our method empirically works well and is more efficient.
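A minimal sketch of this constrained decoding step follows; the schema objects, the \texttt{candidates} helper enumerating the nodes that satisfy a slot's relation constraints, and the greedy slot order are our assumptions for illustration.
\begin{verbatim}
# Greedy subgraph decoding: pick a schema, then fill its slots (sketch).
def decode_subgraph(schema_logits, node_logits, schemas, candidates):
    """schemas: frequent schemas pre-extracted from training data; each
    schema lists its empty slots with their relation constraints."""
    schema = schemas[schema_logits.argmax().item()]   # most probable schema
    filled = []
    for slot in schema.slots:
        pool = candidates(slot)                       # nodes allowed by the relations
        best = max(pool, key=lambda n: node_logits[n].item())
        filled.append(best)                           # most probable node per slot
    return schema, filled
\end{verbatim}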
\subsection{Sentence Realization}
Given the inferred subgraph $g_{j}$, we next study how to generate the words of the $j$-th sentence, \emph{i.e.,}\xspace $\langle w_{j,1},\cdots,w_{j,t},\cdots,w_{j,n_j} \rangle$.
\subsubsection{Base Sentence Decoder}
\label{sec-base-sentence-decoder}
The base sentence generation module adopts Transformer decoder in GPT-2~\cite{radford2019language} by stacking multiple self-attention blocks (similar to Eq.~\ref{eq-encoder-multi-head}\textasciitilde Eq.~\ref{eq-encoder-ffn}). Based on GPT-2, we can obtain the embedding $\bm{\tilde{v}}_{w_{j,t}} \in \mathbb{R}^{d_W}$ for the current word $w_{j,t}$ in the $j$-th sentence, where $d_W$ denotes the embedding size. Also, similar to Eq.~\ref{eq-schema-prob}-\ref{eq-node-prob}, we follow \emph{Attr2Seq}~\cite{ZhouLWDHX17} to encode
context information $\langle u,i,a \rangle$ into a context vector $\bm{\tilde{c}}_{j,t}$ with attention mechanism.
We generate the next word via a softmax probability function:
\begin{equation}\label{eq-base-prob}
\text{Pr}_{1}(w_{j,t+1}|w_{j,1},...,w_{j,t}) = \text{softmax}(\bm{W}_6 [\bm{\tilde{v}}_{w_{j,t}}; \bm{\tilde{c}}_{j,t}] + \bm{b}_6),
\end{equation}
where $\bm{W}_6$ and $\bm{b}_6$ are trainable parameters.
\subsubsection{Supervised Copy Mechanism}
\label{sec-supervised-copy}
To verbalize KG subgraphs, we introduce a supervised copy mechanism that copies nodes from the subgraph.
The predictive probability of a word $w$ can be decomposed into two parts, either generating a word from the vocabulary or copying a node from the subgraph:
\begin{eqnarray}\label{eq-sum-prob}
&&\text{Pr}(w_{j,t+1}=w|w_{j,1},...,w_{j,t}, g_j) \\
&=& \lambda_{j,t} \cdot \text{Pr}_{1}(w|w_{j,1},...,w_{j,t}) + (1-\lambda_{j,t}) \cdot \text{Pr}_{2}(w|g_j),\nonumber
\end{eqnarray}
where $\text{Pr}_{1}(w|w_{j,1},...,w_{j,t})$ is the generative probability from the base sentence decoder (Eq.~\ref{eq-base-prob}), and
$\text{Pr}_{2}(w|g_j)$ is the copy probability defined as:
\begin{equation}\label{eq-copy-prob}
\text{Pr}_{2}(w|g_j) = \frac{\exp(\tanh(\bm{W}_7 [\bm{\tilde{v}}_{w_{j,t}}; \bm{\tilde{c}}_{j,t}; \bm{\tilde{v}}_w]))}{\sum_{w' \in g_j}\exp(\tanh(\bm{W}_7 [\bm{\tilde{v}}_{w_{j,t}}; \bm{\tilde{c}}_{j,t}; \bm{\tilde{v}}_{w'}]))},
\end{equation}
where $\bm{W}_7$ is the trainable parameter and $\bm{\tilde{v}}_w$ is the embedding of an entity or a word node $w$ in the current subgraph $g_j$.
Note that we only copy entities or keywords from the predicted subgraph, which dramatically reduces the candidate set. Since subgraph generation has already considered local and global coherence, our candidate set is more meaningful and coherent. In Eq.~\ref{eq-sum-prob}, we use a dynamically learned coefficient $\lambda_{j,t}$ to control the combination between the two parts as:
\begin{equation}\label{eq-copy-alpha}
\lambda_{j,t} = \sigma (\bm{w}^\top_{gen}[\bm{\tilde{v}}_{w_{j,t}};\bm{\tilde{c}}_{j,t}]+ b_{gen}),
\end{equation}
where $\bm{w}_{gen}$ and $b_{gen}$ are trainable parameters. For each word, we add a binary indicator $d_{j,t}$ (0 for \emph{copy} and 1 for \emph{generate}) to provide a supervised signal for generation and copy. In addition to the word prediction loss, we incorporate a supervised indicator loss with binary cross entropy:
\begin{equation}\label{eq-super-loss}
\mathcal{L}_{si} = -\sum\limits_{j,t} d_{j,t} \log (\lambda_{j,t}) - (1-d_{j,t}) \log (1-\lambda_{j,t}).
\end{equation}
Different from the traditional copy mechanism, we utilize the loss in Eq.~\ref{eq-super-loss} to explicitly guide the switch between copy and generation during decoding, which further enhances local coherence via copying threading keywords from subgraphs.
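A compact sketch of the gate, the copy distribution and the indicator loss (Eqs.~\ref{eq-copy-prob}--\ref{eq-super-loss}) is given below; treating $\bm{W}_7$ and $\bm{w}_{gen}$ as single output vectors and the per-step decomposition are our simplifications for illustration.
\begin{verbatim}
# Generation/copy gate and mixed distributions (sketch).
import torch
import torch.nn.functional as F

def copy_step(h, c, node_embs, w_gen, b_gen, W7, p_gen):
    """h: decoder state of w_{j,t}; c: context vector; node_embs: [n, d_e]
    embeddings of entity/keyword nodes of the planned subgraph g_j;
    p_gen: [V] generative distribution of the base decoder."""
    state = torch.cat([h, c], dim=-1)
    lam = torch.sigmoid(state @ w_gen + b_gen)          # gate lambda_{j,t}
    feats = torch.cat([state.unsqueeze(0).expand(node_embs.size(0), -1),
                       node_embs], dim=-1)
    p_copy = F.softmax(torch.tanh(feats @ W7), dim=0)   # copy probability over g_j
    return lam, lam * p_gen, (1 - lam) * p_copy         # the two mixture terms

def indicator_loss(lam, d):
    """Supervised switch: d = 1 for generate, d = 0 for copy."""
    return F.binary_cross_entropy(lam, d)
\end{verbatim}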
\subsection{Discussion and Learning}
In this part, we present the model discussion and optimization.
\paratitle{Coherence}. For local coherence, we utilize KG subgraphs as sentence plans, since KG subgraphs are tightly associated semantic structures, which naturally enforce the intra-sentence correlations of entities. Supervised copy mechanism (Section~\ref{sec-supervised-copy}) further connects entities in sentences with the copied threading words from subgraphs. For global coherence, we utilize both subgraph- and node-level attention (Section~\ref{sec-two-level-attention}) to enhance inter-sentence correlations of entities.
Since not all sentences include entity mentions, we set up a special sentence plan that directly calls the base decoder of Section~\ref{sec-base-sentence-decoder} without the copy mechanism. To our knowledge, few studies consider both local and global coherence in text generation models. By incorporating KG data, our model provides a principled text planning approach for enhancing both kinds of coherence in the generated text.
\paratitle{Personalization}. Review generation requires capturing personalized user preferences and writing styles. We explicitly model personalization through the contextual embeddings of users, items, and ratings during the decoding of subgraphs and words in Eq.~\ref{eq-schema-prob}\textasciitilde \ref{eq-node-prob} and Eq.~\ref{eq-base-prob}\textasciitilde \ref{eq-copy-prob}, respectively. Another point is that the HKG embedding (Section~\ref{sec-graph-encoding}) involves user-item interactions, which can capture user preference over items and their associated attributes. In particular, given a $\langle user, item \rangle$ pair, we construct the HKG by involving the one-hop entities linked with the item in the KG and the keywords associated with these entities. Such a method naturally enforces the personalized preference over item attributes for a given user.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{selection.pdf}
\caption{An illustrative example for subgraph generation.}
\label{fig-selection}
\end{figure}
\paratitle{Optimization}. In our model, there are two sets of trainable parameters in subgraph generation and sentence generation, denoted by $\Theta^{(s)}$ and $\Theta^{(w)}$, respectively.
First, we optimize $\Theta^{(s)}$ according to the predictive loss for subgraph schemas and nodes based on cross entropy loss using Eq.~\ref{eq-schema-prob} and Eq.~\ref{eq-node-prob}.
Then, we optimize $\Theta^{(w)}$ according to the indicator loss in Eq.~\ref{eq-super-loss} and the word prediction loss, which sums the negative log-likelihood of individual words using Eq.~\ref{eq-sum-prob}.
We incrementally train the two parts, and fine-tune the shared or dependent parameters in the different modules.
For training, we directly use the real subgraphs and sentences to optimize the model parameters with the Adam optimizer~\cite{KingmaB14}. The same learning rate schedule as in \cite{VaswaniSPUJGKP17} is adopted in our training. To avoid overfitting, we adopt dropout with a ratio of 0.2.
During inference, we apply our model in a pipelined way: we first infer the subgraph sequence and then predict the sentences using the inferred subgraphs. For sentence generation, we apply beam search with a beam size of 8.
We set the maximum generation lengths for the subgraph and sentence sequences to 5 and 50, respectively.
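Schematically, the incremental training described above can be organized as follows; the method names on \texttt{model} are placeholders for the losses defined in the previous sections, not our actual code.
\begin{verbatim}
# Incremental two-stage training schedule (sketch).
def train(model, batches, opt_plan, opt_word, plan_epochs, word_epochs):
    for _ in range(plan_epochs):            # Stage 1: subgraph generation
        for batch in batches:
            loss = model.plan_loss(batch)   # schema + node cross entropy
            opt_plan.zero_grad(); loss.backward(); opt_plan.step()
    for _ in range(word_epochs):            # Stage 2: sentence realization
        for batch in batches:
            loss = model.word_nll(batch) + model.indicator_loss(batch)
            opt_word.zero_grad(); loss.backward(); opt_word.step()
\end{verbatim}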
\section{Problem Formulation}
\label{sec-preliminary}
In this section, we introduce the notations that will be used throughout the paper, and then formally define the task.
\paratitle{Basic Notations}.
Let $\mathcal{U}$, $\mathcal{I}$ and $\mathcal{A}$ denote a user set, an item set and a rating score set, respectively.
A review text is written by a user $u \in \mathcal{U}$ about an item $i \in \mathcal{I}$ with a rating score of $a \in \mathcal{A}$.
Formally, a review text is denoted by $w^{1:m}=\{\langle w_{j,1},\cdots,w_{j,t},\cdots,w_{j,n_j} \rangle \}_{j=1}^m$, consisting of $m$ sentences, where $w_{j,t}$ denotes the $t$-th word (from the vocabulary $\mathcal{V}$) of the $j$-th sentence and $n_j$ is the length of the $j$-th sentence. Besides, in our setting, a knowledge graph~(KG) $\mathcal{T}$ about item attributes is available for our task. Typically, it organizes facts as triples: $\mathcal{T}=\{\langle h,r,t \rangle \}$, where each triple states that relation $r$ holds between head entity $h$ and tail entity $t$. We assume that a KG entity can be aligned to an e-commerce item.
For instance, the Freebase movie entity ``\emph{Avatar}'' (with the Freebase ID \emph{m.0bth54}) has an entry as a movie item in IMDb (with the IMDb ID \emph{tt0499549}). Several studies~\cite{zhao2019kb4rec,li2020knowledge} have developed heuristic algorithms for item-to-entity alignment and released public linkage datasets.
We add user-item links according to their interactions, and entity-keyword links when they frequently co-occur in review sentences, in order to capture personalized entity preference and to enhance entity-word associations, respectively.
To unify the triple form, the two kinds of non-KG links are attached to two new relations, \emph{i.e.,}\xspace interaction and co-occurrence.
As shown in Figure~\ref{fig-model}(c), such an augmented KG is referred to as a \emph{Heterogeneous KG}~(HKG), denoted by $\mathcal{G} = \mathcal{T} \cup \{\langle u, r_{int}, i \rangle\} \cup \{\langle e, r_{co}, w \rangle \}$, where $r_{int}$ and $r_{co}$ denote the relations of user-item interaction and entity-word co-occurrence, respectively.
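Assembling the HKG thus amounts to a union of typed triples; a minimal sketch (the helper name and the plain-set representation are our assumptions) is:
\begin{verbatim}
# Heterogeneous KG as a union of typed triples (sketch).
def build_hkg(kg_triples, interactions, cooccurrences):
    """kg_triples: set of (h, r, t); interactions: (user, item) pairs;
    cooccurrences: (entity, word) pairs frequent in review sentences."""
    hkg = set(kg_triples)
    hkg |= {(u, "interaction", i) for (u, i) in interactions}
    hkg |= {(e, "co-occurrence", w) for (e, w) in cooccurrences}
    return hkg
\end{verbatim}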
\paratitle{Planning with HKG Subgraphs}.
To enhance the global and local coherence, we design a two-level text plan for selecting, arranging and ordering the contents of the output text, namely the \emph{document plan} and the \emph{sentence plan}. Specifically, the document plan is modeled as an ordered sequence of sentence plans, denoted by $g^{1:m}={\langle g_1,\cdots,g_j,\cdots,g_m \rangle}$. Each sentence plan is modeled as an entity-based subgraph $g_j$ (short for \emph{subgraph}) of the HKG, shown in Figure~\ref{fig-model}(b), specifying the relations and entities (or keywords) to be verbalized in each sentence. We further introduce the concept of a \emph{subgraph schema}, denoted by $s_j$ for subgraph $g_j$, as shown in Figure~\ref{fig-model}(a), which keeps the structure and relations of a subgraph while replacing its nodes with empty slots. Typically, a subgraph schema can be instantiated into different subgraphs by filling the empty slots with different entities or keywords.
\paratitle{Task Definition}.
The review generation task is concerned with automatically generating the review text $w^{1:m}$ for a rating record $\langle u, i, a\rangle$, based on any other available side information. Different from most previous works, we incorporate the HKG $\mathcal{G}$ as an available resource for review generation.
We utilize a KG-based text planning model to enhance the global and local coherence of the generated text.
\section{Related Work}
With the striking success of deep neural networks, automatic review generation has received much attention from the research community~\cite{ZhouLWDHX17,NiM18,LiZWS19}.
Typical methods extend the Sequence-to-Sequence framework~\cite{SutskeverVL14} by using available side information, including context information~\cite{ZhouLWDHX17}, sentiment score~\cite{ZangW17} and user-item interactions~\cite{NiLVM17}.
In order to alleviate the repetitiveness of texts caused by the RNN models, Generative Adversarial Nets (GAN) based approaches have been applied to text generation~\cite{YuZWY17,GuoLCZYW18}.
Moreover, several studies utilize aspect information of products or writing style of users with a more instructive generation process to generate personalized or controllable review text~\cite{NiM18,LiZWS19,li2020knowledge}.
Although various approaches have emerged, they seldom include structured attribute information about items, which makes the generated review text less informative.
In many applications~\cite{AmplayoLH18,li2020knowledge,niu2019knowledge}, various approaches utilize structured knowledge data (\emph{e.g.,}\xspace Freebase and DBpedia) in the text generation process in order to improve the informativeness and diversity of the generated content. However, most of these studies mainly learn to copy related entities or triples from the structured knowledge data while lacking an overall consideration of the semantic structure of text, which limits their ability to generate semantically coherent text.
Coherence is a key property of well-organized text. A variety of coherence analysis methods have been developed, such as the entity grid model~\cite{BarzilayL05}, the coherence pattern model~\cite{ParveenM016}, and neural network models~\cite{LiH14a}. However, these methods still require considerable experience or domain expertise to define or extract features. Other related approaches include the global graph model~\cite{GuinaudeauS13}, which projects entities into a global graph, and the HMM system~\cite{LouisN12}, in which the coherence between adjacent sentences is modeled by a hidden Markov framework and captured by topic transition rules.
Text planning is a critical component of traditional data-to-text systems~\cite{MoryossefGD19,HuaW19}. Typical methods are based on hand-crafted~\cite{DalianisH93} or automatically learnt rules~\cite{DuboueM03}, which are unable to capture the rich variation of texts. Recent neural approaches mainly rely on well-designed network architectures to learn from training data~\cite{ShaMLPLCS18,HuaW19}, which makes the planning process difficult to control. However, as demonstrated in~\cite{WisemanSR17}, existing neural methods are still problematic for text generation and often generate incoherent text. Moreover, these text planning methods seldom focus on the review generation task, lacking consideration of user-item interactions or personalized characteristics.
|
{
"timestamp": "2021-05-11T02:16:41",
"yymm": "2105",
"arxiv_id": "2105.03815",
"language": "en",
"url": "https://arxiv.org/abs/2105.03815"
}
|
\section{Introduction}
Consider the weakly-coupled competitive polyharmonic system of $\ell$ equations in $\mathbb{R}^N$,
\begin{equation} \label{s:intro}
\begin{cases}
(-\Delta)^m u_i = \mu_i|u_i|^{2_m^*-2}u_i + \sum\limits_{\substack{j=1 \\ j\neq i}}^\ell\lambda_{ij}\beta_{ij}|u_j|^{\alpha_{ij}}|u_i|^{\beta_{ij}-2}u_i,\quad i=1,\ldots,\ell, \\
u_i\in D^{m,2}(\mathbb{R}^N),\qquad i=1,\ldots,\ell,
\end{cases}
\end{equation}
where $m,N\in\mathbb{N}$, $N>2m$, $2_m^*:=\frac{2N}{N-2m}$ is the critical Sobolev exponent, $\mu_i>0$, $\lambda_{ij}=\lambda_{ji}<0$, $\alpha_{ij},\beta_{ij}>1$ satisfying $\alpha_{ij}=\beta_{ji}$ and $\alpha_{ij}+\beta_{ij}=2_m^*$, and $D^{m,2}(\mathbb{R}^N)$ is the completion of $\mathcal{C}_c^\infty(\mathbb{R}^N)$ with respect to the norm $\|\cdot\|$ induced by the scalar product
\begin{equation} \label{eq:scalar_product}
\langle u,v\rangle :=
\begin{cases}
\int_{\mathbb{R}^N}\Delta^\frac{m}{2}u\cdot\Delta^\frac{m}{2}v, &\text{for \ }m\text{ even},\\
\int_{\mathbb{R}^N}\nabla\Delta^\frac{m-1}{2}u\cdot\nabla\Delta^\frac{m-1}{2}v, &\text{for \ }m\text{ odd}.
\end{cases}
\end{equation}
In this paper we establish the existence of solutions to \eqref{s:intro} which are invariant under some groups of conformal diffeomorphisms, and study the behavior of least energy solutions as $\lambda_{ij}\to -\infty$. We show that the supports of the limiting profiles of their components are pairwise disjoint smooth domains and solve a nonlinear optimal partition problem in $\mathbb{R}^N$.
To state our results we introduce some notation. Fix $n_1,n_2\in\mathbb{N}$ with $n_1,n_2\geq 2$ and $n_1+n_2=N+1$, and set $\Gamma:=O(n_1)\times O(n_2)$. Each $\gamma\in\Gamma$ is an isometry of the unit sphere $\mathbb{S}^N$, and gives rise to a conformal diffeomorphism $\widetilde\gamma:\mathbb{R}^N\to\mathbb{R}^N$ given by $\widetilde\gamma x:=(\sigma\circ\gamma^{-1}\circ\sigma^{-1})(x)$, where $\sigma:\mathbb{S}^N\to\mathbb{R}^N\cup\{\infty\}$ is the stereographic projection. A subset $\Omega$ of $\mathbb{R}^N$ will be called \emph{$\Gamma$-invariant} if $\widetilde\gamma x\in\Omega$ for all $x\in\Omega$, and a function $u:\Omega\to\mathbb{R}$ will be said to be \emph{$\Gamma$-invariant} if $$|\det\widetilde\gamma'(x)|^\frac{1}{2^*_m}u(\widetilde\gamma x)=u(x)\qquad \text{for all \ }\gamma\in\Gamma, \ x\in\Omega.$$
If $\Omega$ is a $\Gamma$-invariant open subset of $\mathbb{R}^N$ we write $D_0^{m,2}(\Omega)$ for the closure of $\mathcal{C}_c^\infty(\Omega)$ in $D^{m,2}(\mathbb{R}^N)$ and we use $D_0^{m,2}(\Omega)^\Gamma$ and $D^{m,2}(\mathbb{R}^N)^\Gamma$ to denote the subspaces of $\Gamma$-invariant functions in $D_0^{m,2}(\Omega)$ and $D^{m,2}(\mathbb{R}^N)$ respectively. Consider the Dirichlet problem
\begin{equation} \label{eq:dirichlet}
\begin{cases}
(-\Delta)^m u = |u|^{2_m^*-2}u,\\
u\in D_0^{m,2}(\Omega)^\Gamma.
\end{cases}
\end{equation}
This problem has a least energy nontrivial solution (see Section \ref{sec:segregation}), whose energy will be denoted by $c^\Gamma_\Omega$, i.e.,
$$c^\Gamma_\Omega:=\inf\left\{\frac{m}{N}\|u\|^2:u\neq 0\text{ and } u\text{ solves }\eqref{eq:dirichlet}\right\}.$$
We consider partitions of $\mathbb{R}^N$ by $\Gamma$-invariant open subsets. More precisely, for $\ell\geq 2$, let
\begin{align*}
\mathcal{P}_\ell^\Gamma:=\{\{\Omega_1,\ldots,\Omega_\ell\}:&\,\Omega_i\neq\emptyset \text{ is a }\Gamma\text{-invariant open subset of }\mathbb{R}^N\text{ and }\Omega_i\cap \Omega_j=\emptyset\text{ if }i\neq j \}.
\end{align*}
A \emph{$(\Gamma,\ell)$-optimal partition} for $\mathbb R^N$ is a partition $\{\Omega_1,\ldots,\Omega_\ell\}\in\mathcal{P}_\ell^\Gamma$ such that
\begin{align}\label{op}
\sum_{i=1}^\ell c^\Gamma_{\Omega_i}=\inf_{\{\Theta_1,\ldots,\Theta_\ell\}\in\mathcal{P}_\ell^\Gamma}\sum_{i=1}^\ell c^\Gamma_{\Theta_i}.
\end{align}
We study a symmetric version of \eqref{s:intro}, namely
\begin{equation} \label{eq:system}
\begin{cases}
(-\Delta)^m u_i = \mu_i|u_i|^{2_m^*-2}u_i + \sum\limits_{\substack{j=1 \\ j\neq i}}^\ell\lambda_{ij}\beta_{ij}|u_j|^{\alpha_{ij}}|u_i|^{\beta_{ij}-2}u_i,\quad i=1,\ldots,\ell, \\
u_i\in D^{m,2}(\mathbb{R}^N)^\Gamma,\qquad i=1,\ldots,\ell,
\end{cases}
\end{equation}
where $\lambda_{ij}$, $\mu_i$, $\alpha_{ij}$ and $\beta_{ij}$ are as before.
Our first result asserts the existence of infinitely many fully nontrivial solutions of \eqref{eq:system}. A solution $(u_1,\ldots,u_\ell)$ to the system \eqref{eq:system} is called \emph{fully nontrivial} if each component $u_i$ is nontrivial. We refer to Definition \ref{def:least energy} for the notion of a least energy fully nontrivial solution.
\begin{theorem} \label{thm:existence}
The system \eqref{eq:system} has a least energy fully nontrivial solution and a sequence of fully nontrivial solutions which is unbounded in $[D^{m,2}(\mathbb{R}^N)]^\ell$.
\end{theorem}
Our next result describes the segregation behavior of least energy fully nontrivial solutions as $\lambda_{ij}\to -\infty$, showing that the supports of the limiting profiles of their components solve the optimal partition problem \eqref{op}. We write $\mathbb{S}^{d-1}$ and $\mathbb{B}^d$ for the unit sphere and the open unit ball in $\mathbb{R}^d$. The symbol $\cong$ stands for ``is $\Gamma$-diffeomorphic to''.
\begin{theorem} \label{thm:main}
For $i=1,\ldots,\ell$, fix $\mu_i=1$ and for each $i\neq j$, $k\in\mathbb{N}$, let $\lambda_{ij,k}<0$ be such that $\lambda_{ij,k}=\lambda_{ji,k}$ and $\lambda_{ij,k}\to -\infty$ as $k\to\infty$. Let $(u_{k,1},\ldots,u_{k,\ell})$ be a least energy fully nontrivial solution to the system \eqref{eq:system} with $\lambda_{ij}=\lambda_{ij,k}$. Then, after passing to a subsequence, we have that
\begin{itemize}
\item[$(a)$]$u_{k,i}\to u_{\infty,i}$ strongly in $D^{m,2}(\mathbb{R}^N)$, $u_{\infty,i}\in\mathcal{C}^{m-1}(\mathbb{R}^N)$, and $u_{\infty,i}\neq 0$. Let
\begin{align*}
\Omega_i:=\operatorname{int}\overline{\{x\in\mathbb{R}^N:u_{\infty,i}(x)\neq 0\}}\qquad \text{ for \ }i=1,\ldots,\ell.
\end{align*}
Then $u_{\infty,i}\in \mathcal{C}^{2m,\alpha}(\overline\Omega_i)$ is a least energy solution of \eqref{eq:dirichlet} in $\Omega_i$ for each $i=1,\ldots,\ell$.
\item[$(b)$]$\{\Omega_1,\ldots,\Omega_\ell\}\in\mathcal{P}_\ell^\Gamma$ is a $(\Gamma,\ell)$-optimal partition for $\mathbb R^N$.
\item[$(c)$]$\Omega_1,\ldots,\Omega_\ell$ are smooth and connected, $\overline{\Omega_1\cup\cdots\cup \Omega_\ell}=\mathbb{R}^N$ and, after relabeling, we have that
\begin{itemize}
\item[$(c_1)$] $\Omega_1\cong\mathbb{S}^{n_1-1}\times \mathbb{B}^{n_2}$, \ $\Omega_i\cong\mathbb{S}^{n_1-1}\times\mathbb{S}^{n_2-1}\times(0,1)$ if $i=2,\ldots,\ell-1$, and \ $\Omega_\ell\cong\mathbb{B}^{n_1}\times \mathbb{R}^{n_2-1}$,
\item[$(c_2)$] $\overline{\Omega}_i\cap \overline{\Omega}_{i+1}\cong\mathbb{S}^{n_1-1}\times\mathbb{S}^{n_2-1}$ and\quad $\overline{\Omega}_i\cap \overline{\Omega}_j=\emptyset$\, if\, $|j-i|\geq 2$.
\end{itemize}
\end{itemize}
\end{theorem}
Combining Theorems \ref{thm:existence} and \ref{thm:main} we obtain the following result.
\begin{theorem} \label{teo:partition}
For every group $\Gamma:=O(n_1)\times O(n_2)$ with $n_1,n_2\geq 2$ and $n_1+n_2=N+1$, there exists a $(\Gamma,\ell)$-optimal partition for $\mathbb R^N$ having the properties stated in $(c)$ above.
\end{theorem}
Theorem \ref{thm:existence} extends the multiplicity result in \cite{BaScWe} for a single polyharmonic equation to systems, and generalizes the results for systems involving the Laplacian in \cite{cp} and \cite[Theorem 1.2]{cs} to the higher-order case. In all of these results the symmetries play a crucial role in compensating for the lack of compactness inherent to critical problems. This fact was first used by W.Y. Ding in \cite{d}. For bounded domains with Dirichlet boundary conditions, critical polyharmonic systems with linear and subcritical coupling terms have been studied in \cite{BaGu,Mo}, whereas critical couplings were considered in \cite{GoRa}. See also \cite{Ta} for some results on weakly-coupled fourth-order Schrödinger equations.
For the proof of Theorem \ref{thm:existence} we follow the variational approach introduced in \cite{cs} which carries over immediately to higher-order operators and may be used to obtain existence and multiplicity results for other polyharmonic systems as well.
The connection between optimal partitions and competitive systems for the Laplacian was first noticed in \cite{ctv} and has been further developed in various papers considering different types of nonlinearities, couplings, and general smooth domains. Optimal partitions, and shape optimization problems in general, are difficult to study in the higher-order regime. The available results for the Laplacian involve the use of advanced tools such as Almgren-type monotonicity formulae, boundary Harnack principles, and Liouville theorems. The extension of all this machinery to the higher-order case seems out of reach. For general statements and a review of previously known results for the Laplacian we refer to \cite{sttz}.
In Theorem \ref{thm:main} we make strong use of the symmetries to obtain and fully describe the shape of the optimal partition. This result extends the main theorem in \cite{css}. As far as we know, it is the first result to exhibit and fully characterize an optimal partition for a higher-order elliptic operator.
The conformal invariance of the system \eqref{eq:system} allows translating it into the polyharmonic system on the standard sphere,
\begin{equation} \label{eq:system_sn}
\begin{cases}
\mathscr{P}^m_g v_i = \mu_i |v_i|^{2^*_m-2}v_i + \sum\limits_{\substack{j=1 \\ j\neq i}}^\ell\lambda_{ij}\beta_{ij}\vert v_j\vert^{\alpha_{ij}}\vert v_i\vert^{\beta_{ij}-2}v_i,\\
v_i\in H^m_g(\mathbb{S}^N),\\
v_i\text{ is }\Gamma\text{-invariant},\qquad i,j=1,\ldots,\ell,
\end{cases}
\end{equation}
with the same $\mu_i$, $\lambda_{ij},\alpha_{ij},\beta_{ij}$, where $\mathscr{P}^m_g$ is a conformally invariant operator generalizing the conformal Laplacian for $m=1$ and the Paneitz operator for $m=2$, see \eqref{Eq:JGM-Operator}. More precisely, $\bar v=(v_1,\ldots,v_\ell)$ is a solution of \eqref{eq:system_sn} iff $\bar u=(\iota(v_1),\ldots,\iota(v_\ell))$ solves \eqref{eq:system} where $\iota$ is defined in terms of the stereographic projection, see Proposition \ref{prop:equivalent_spaces1}. Theorems \ref{thm:existence}, \ref{thm:main} and \ref{teo:partition} translate into similar statements for the system \eqref{eq:system_sn}, which is interesting in itself.
The orbit space of the action of $\Gamma$ on $\mathbb{S}^N$ is one-dimensional, see \eqref{eq:orbit_map}. This allows us to translate the systems \eqref{eq:system_sn} and \eqref{eq:system} into a system of ODEs. However, the operator has a rather complicated expression and it is degenerate, in the sense that it involves sign-changing weights that can vanish at different points.
For $m=1$, the ODE approach was exploited in \cite{fp} to derive the existence of sign-changing solutions to the Yamabe equation on the sphere having precisely $\ell$ nodal domains for any $\ell\geq 2$, using a double-shooting method. This result does not extend easily to $m\geq 2$, because the corresponding ODE has a rather complicated expression, see Remark \ref{b:rmk} below. On the other hand, in \cite{css}, sign-changing solutions to the Yamabe problem with a prescribed number of nodal domains were constructed using an alternating sum of limiting profiles of \emph{positive} least energy solutions to \eqref{eq:system} with $m=1$. This method also fails when considering $m\geq 2$, since it is not known if the least energy solutions of \eqref{eq:system} are signed or sign-changing. Therefore, the existence of sign-changing solutions to the problem
\begin{equation} \label{eq:problem}
\mathscr{P}^m_g v = |v|^{2^*_m-2}v,\qquad v\in H^m_g(\mathbb{S}^N),
\end{equation}
with precisely $\ell$ nodal domains for any $\ell\geq 2$ remains an open question.
Problem \eqref{eq:problem} arises naturally in conformal geometry when seeking prescribed higher-order conformal invariants, called $Q$-curvatures, which generalize the scalar curvature \cite{GGS,FeGr,Ro}. For $m=1$ it is the Yamabe problem, and for $m=2$ it is the Paneitz problem \cite{DjHeLe,LePa}.
This paper is organized as follows. In Section \ref{Section:regularity} we use the symmetries to restore compactness and to derive regularity properties of the $\Gamma$-invariant functions. Next, in Section \ref{sec:system}, we describe the variational setting for the polyharmonic system and prove Theorem \ref{thm:existence}. Finally, in Section \ref{sec:segregation} we study the behavior of the least energy solutions to the system as $\lambda_{ij}\to-\infty$ and prove Theorem \ref{thm:main}. To simplify our presentation, two technical results are added in an Appendix.
\section{Compactness and regularity by symmetry}\label{Section:regularity}
Let $(\mathbb{S}^N,g)$ denote the unit sphere with its round metric. For $m\in\mathbb{N}$ and $N>2m$, the Sobolev space $H^m_g(\mathbb{S}^N)$ is the completion of $\mathcal{C}^\infty(\mathbb{S}^N)$ with respect to the norm induced by the inner product
\begin{equation}\label{Eq:Standard Norm}
\langle u, v\rangle_{H_g^m(\mathbb{S}^N)}:=
\begin{cases}
\int_{\mathbb{S}^N}( uv + \Delta_g^{m/2}u\cdot\Delta_g^{m/2}v) \;dV_g, & m \text{ even},\\
\int_{\mathbb{S}^N}( uv + \langle\nabla_g\Delta_g^{(m-1)/2}u,\nabla_g\Delta_g^{(m-1)/2}v\rangle_g) \;dV_g, & m \text{ odd},
\end{cases}
\end{equation}
where $\nabla_g$ is the gradient and $\Delta_g$ is the Laplace-Beltrami operator on $\mathbb{S}^N$. Consider the elliptic operator of order $2m$ on $\mathbb{S}^N$ given by
\begin{equation}\label{Eq:JGM-Operator}
\mathscr{P}^m_g:=\prod_{k=1}^m\left( -\Delta_g + c_k \right),\qquad c_k:=\frac{(N-2k)(N+2k-2)}{4}.
\end{equation}
This is a conformally invariant operator. For $m=1$ it is the conformal Laplacian and for $m=2$ it is the Paneitz operator. It yields an inner product
\begin{align}\label{norm}
\langle u,v\rangle_\ast = \int_{\mathbb{S}^N}u\mathscr{P}^m_gv\; \;dV_g,\quad u,v\in\mathcal{C}^\infty(\mathbb{S}^N),
\end{align}
and the induced norm $\Vert \cdot \Vert_\ast$ is equivalent to the standard norm given by \eqref{Eq:Standard Norm}, see \cite{BaScWe,Ro}.
The stereographic projection $\sigma:\mathbb{S}^N\smallsetminus\{p_0\}\rightarrow\mathbb{R}^N$ from the north pole $p_0$ is a conformal diffeomorphism and the coordinates of the standard metric $g$ in the chart given by $\sigma^{-1}$ are
\[
g_{ij} =\psi^{4/(N-2m)}\delta_{ij},
\]
where $\delta_{ij}$ is the Kronecker delta and $\psi\in D^{m,2}(\mathbb{R}^N)$ is
\[
\psi(x) := \left[ \frac{2}{ 1 + |x|^2 } \right]^{\frac{N-2m}{2}}.
\]
As the operators $\mathscr{P}^m_{g}$ and $(-\Delta)^m$ are conformally invariant, the stereographic projection yields the relation
\begin{equation}\label{eq:equivalent_operators1}
\mathscr{P}^m_{g}(u) = \psi^{1-2_m^\ast}(-\Delta)^m [\iota(u)],\qquad\text{where \ }\iota(u):=\psi(u\circ\sigma^{-1}),
\end{equation}
for every $u\in \mathcal{C}^\infty(\mathbb{S}^N)$, see \cite{Ro}.
\begin{proposition}\label{prop:equivalent_spaces1}
The map
$$\iota:(H_g^{m}(\mathbb{S}^N),\|\cdot\|_{*})\rightarrow (D^{m,2}(\mathbb{R}^N),\|\cdot\|),\qquad u\mapsto\iota(u):=\psi(u\circ\sigma^{-1}),$$
is an isometric isomorphism with inverse $\iota^{-1}v = \frac{1}{\psi\circ\sigma}\,v\circ\sigma$.
\end{proposition}
\begin{proof}
As $dV_{g}=\psi^{2_m^\ast}\;dx$, we derive from \eqref{eq:equivalent_operators1} that
\[
\langle u_1,u_2 \rangle_{\ast}=\int_{\mathbb{S}^N} u_1\mathscr{P}^m_{g} u_2\; dV_{g} = \int_{\mathbb{R}^N}\iota(u_1)(-\Delta)^m[\iota(u_2)] \;dx=\langle \iota(u_1),\iota(u_2) \rangle
\]
for any $u_1,u_2\in\mathcal{C}^\infty(\mathbb{S}^N)$. The proposition now follows by density.
\end{proof}
Set $\Gamma:=O(n_1)\times O(n_2)$, where $n_1,n_2\in\mathbb{N}$ with $n_1,n_2\geq 2$ and $n_1+n_2=N+1$. Then $\Gamma$ acts by linear isometries on the Sobolev spaces $H^{m}_g(\mathbb{S}^N)$ and $D^{m,2}(\mathbb{R}^N)$ as follows.
\begin{proposition}\label{prop:sobolev_action}
For every $\gamma\in O(N+1)$,
$$\gamma:(H^{m}_g(\mathbb{S}^N),\|\cdot\|_*)\to (H_g^m(\mathbb{S}^N),\|\cdot\|_*),\qquad\gamma u:=u\circ\gamma^{-1},$$
and
$$\gamma:D^{m,2}(\mathbb{R}^N)\to D^{m,2}(\mathbb{R}^N),\qquad \gamma v:=|\det \widetilde{\gamma}'|^{1/2_m^\ast} v\circ\widetilde{\gamma},$$
with $\widetilde\gamma:=\sigma\circ\gamma^{-1}\circ\sigma^{-1}$, are linear isometries.
\end{proposition}
\begin{proof}
The operator $\mathscr{P}_g^m$ is natural in the sense that it is invariant under changes of coordinates \cite{FeGr,Ro}. This implies, in particular, that $\gamma^\ast \mathscr{P}^m_{g}=\mathscr{P}^m_{g}\circ\gamma^\ast$ for every isometry $\gamma:\mathbb{S}^N\rightarrow \mathbb{S}^N$, where $\gamma^\ast$ denotes the pullback of tensors, see \cite{BaJu,Ro}. Therefore, if $u\in\mathcal{C}^\infty(\mathbb{S}^N)$ and $\gamma\in O(N+1),$
\[
\mathscr{P}^m_g(u\circ\gamma) = (\mathscr{P}^m_g\circ\gamma^\ast)(u)=(\gamma^\ast\circ \mathscr{P}^m_g)(u)=\mathscr{P}^m_g(u)\circ\gamma.
\]
Then, for $u\in\mathcal{C}^\infty(\mathbb{S}^N)$,
\[
\|\gamma u\|_{*}^2 =\int_{\mathbb{S}^N}(u\circ\gamma^{-1})\mathscr{P}^m_g(u\circ\gamma^{-1}) \;dV_g =\int_{\mathbb{S}^N}(u\circ\gamma^{-1})\mathscr{P}^m_g(u)\circ\gamma^{-1} \;dV_g = \int_{\mathbb{S}^N} u\mathscr{P}^m_gu \;dV_g =\Vert u\Vert^2_{*}.
\]
This shows, by density, that $\gamma:(H^{m}_g(\mathbb{S}^N),\|\cdot\|_*)\to (H^m_g(\mathbb{S}^N),\|\cdot\|_*)$ is a linear isometry.
By Proposition \ref{prop:equivalent_spaces1}, the composition $\iota\circ\gamma\circ\iota^{-1}:D^{m,2}(\mathbb{R}^N)\rightarrow D^{m,2}(\mathbb{R}^N)$ is a linear isometry for every $\gamma\in\Gamma$. So $\gamma v := (\iota\circ\gamma\circ\iota^{-1})v$ defines a linear action of $\Gamma$ on $D^{m,2}(\mathbb{R}^N)$. Setting $\widetilde\gamma:=\sigma\circ\gamma^{-1}\circ\sigma^{-1}$, we have that
\[
\gamma v = \frac{\psi}{\psi\circ\widetilde{\gamma}} v\circ\widetilde{\gamma}
= \vert \det \widetilde{\gamma}'\vert^{1/2_m^\ast} v\circ\widetilde{\gamma}
\]
for any $\gamma\in \Gamma$ and any $v\in D^{m,2}(\mathbb{R}^N)$, see identity (3.2) in \cite{cp}.
\end{proof}
Define
\begin{align*}
H^{m}_g(\mathbb{S}^N)^\Gamma:=&\{u\in H^{m}_g(\mathbb{S}^N):\gamma u=u\text{ for all }\;\gamma\in\Gamma\}, \\
D^{m,2}(\mathbb{R}^N)^\Gamma:=&\{v\in D^{m,2}(\mathbb{R}^N):\gamma v=v \text{ for all }\;\gamma\in\Gamma\}.
\end{align*}
Note that the map $\iota$ from Proposition \ref{prop:equivalent_spaces1} yields an isometric isomorphism
\begin{equation} \label{eq:iota}
\iota:H^{m}_g(\mathbb{S}^N)^\Gamma\to D^{m,2}(\mathbb{R}^N)^\Gamma.
\end{equation}
Let $L_g^{2^*_m}(\mathbb{S}^N)$ and $L^{2^*_m}(\mathbb{R}^N)$ denote the usual Lebesgue spaces. The crucial role played by the symmetries is reflected in the following compactness statement.
\begin{lemma}\label{Lemma:Sobolev}
The embeddings
\[
H^m_g(\mathbb{S}^N)^\Gamma\hookrightarrow L_g^{2^*_m}(\mathbb{S}^N)\qquad\text{and}\qquad D^{m,2}(\mathbb{R}^N)^\Gamma\hookrightarrow L^{2^*_m}(\mathbb{R}^N)
\]
are continuous and compact.
\end{lemma}
\begin{proof}
The statement for $\mathbb{S}^N$ follows from \cite[Lemma 3.2]{BaScWe}. The statement for $\mathbb{R}^N$ is obtained using the isometry \eqref{eq:iota} and noting that $\iota:L_g^{2^*_m}(\mathbb{S}^N)\to L^{2^*_m}(\mathbb{R}^N)$ is also an isometry.
\end{proof}
To study the regularity of functions belonging to $H^{m}_g(\mathbb{S}^N)^\Gamma$ and $D^{m,2}(\mathbb{R}^N)^\Gamma$ we turn our attention to the space of $\Gamma$-orbits of $\mathbb{S}^N$.
We write $\mathbb R^{N+1}\equiv\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$. Accordingly, points in $\mathbb{S}^N$ are written as $(x,y)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$. Let $q:\mathbb{S}^N\rightarrow[0,\pi]$ be given by
\begin{equation} \label{eq:orbit_map}
q:=\arccos\circ f,\qquad \text{where \ }f(x,y):=|x|^2-|y|^2.
\end{equation}
This is a quotient map identifying each $\Gamma$-orbit in $\mathbb{S}^N$ with a single point. It is called the $\Gamma$-\textit{orbit map} of $\mathbb{S}^N$. Note that the $\Gamma$-orbit space of $\mathbb{S}^N$ is one-dimensional and that
$$q^{-1}(0)\cong\mathbb{S}^{n_1-1},\qquad q^{-1}(t)\cong\mathbb{S}^{n_1-1}\times\mathbb{S}^{n_2-1}\text{ if }t\in(0,\pi),\qquad q^{-1}(\pi)\cong\mathbb{S}^{n_2-1}.$$
Let $\phi:(0,\pi)\rightarrow\mathbb{R}$ be given by
\begin{align}\label{H}
\phi(t):=\frac{2}{\sin t}[(n_1+n_2-2)\cos t + (n_2-n_1)]
\end{align}
and define $\mathscr{L}:\mathcal{C}^\infty(0,\pi)\rightarrow\mathcal{C}^\infty(0,\pi)$ by
\begin{align*}
\mathscr{L}:=4\frac{d^2}{\;dt^2} + \phi(t)\frac{d}{\;dt}.
\end{align*}
Set
\begin{align}\label{h}
h(t):=2 \vert\mathbb{S}^{n_1-1}\vert \vert\mathbb{S}^{n_2-1}\vert \cos^{n_1-1}(t/2)\sin^{n_2-1}(t/2),\quad t\in[0,\pi],
\end{align}
where $\vert\mathbb{S}^{n_i-1}\vert$ is the $(n_i-1)$-dimensional measure of the sphere $\mathbb{S}^{n_i-1}$ for $i=1,2$. For $\mathbf k=(k_0,\ldots,k_m)\in (0,\infty)^{m+1}$ and $w\in\mathcal{C}^\infty(0,\pi)$ define
\begin{equation}\label{h:norm}
\|w\|_{\mathbf k,h}:=\left(\sum_{\substack{i=0\\ i\ \textrm{even}}}^m \frac{k_i}{4}\int_0^\pi |\mathscr{L}^{i/2} w|^2 \;h \;dt+ \sum_{\substack{i=1\\ i\ \textrm{odd}}}^m k_i \int_0^\pi |(\mathscr{L}^{(i-1)/2}w)'|^2 \; h\;dt\right)^{1/2},
\end{equation}
where $\mathscr{L}^{i}$ denotes the $i$-fold composition of $\mathscr{L}$ and $(\mathscr{L}^{i}w)':=\frac{d}{dt}\Big((4\frac{d^2}{\;dt^2} + \phi(t)\frac{d}{\;dt})^{i}(w)\Big)$.
Note that the operator $\mathscr{P}_g^m$ can be written as
\begin{align*}
\mathscr{P}_g^m=\sum_{i=0}^ma_i(-\Delta_g)^{i}
\end{align*}
for some $a_i>0$. Given $\mathbf k:=(k_0,\ldots,k_m)\in(0,\infty)^{m+1}$, we consider the operator
\begin{equation*}
\mathscr{P}_{\mathbf k,g}^m:=\sum_{i=0}^m k_i(-\Delta_g)^{i},
\end{equation*}
and the norm
\begin{equation}\label{k}
\|u\|_{\mathbf k,\ast}:=\left(\int_{\mathbb{S}^N}u\mathscr{P}_{\mathbf k,g}^m u \ \;dV_g\right)^\frac{1}{2}\qquad \text{ for }u\in \mathcal{C}^\infty(\mathbb{S}^N).
\end{equation}
Note that $\|\cdot\|_*=\|\cdot\|_{\mathbf a,*}$ with $\mathbf a=(a_0,\ldots,a_m)$ as above. So $\|\cdot\|_{\mathbf k,*}$ is equivalent to $\|\cdot\|_*$.
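The coefficients $a_i$ above are just the elementary symmetric functions of $c_1,\ldots,c_m$ and are easy to generate. The following small computational sketch is ours, for illustration only; it assumes Python with \texttt{sympy} and treats $-\Delta_g$ as a commuting indeterminate \texttt{x}, which is legitimate since the factors in \eqref{Eq:JGM-Operator} commute:
\begin{verbatim}
import sympy as sp

def coefficients_a(m, N):
    # a_i with P^m_g = sum_i a_i (-Delta_g)^i; x stands for -Delta_g
    x = sp.symbols('x')
    c = [sp.Rational((N - 2*k) * (N + 2*k - 2), 4) for k in range(1, m + 1)]
    poly = sp.Integer(1)
    for ck in c:
        poly = sp.expand(poly * (x + ck))
    return [sp.Poly(poly, x).coeff_monomial(x**i) for i in range(m + 1)]

# m = 2, N = 5: expect a0 = c1*c2, a1 = c1 + c2, a2 = 1,
# matching the expressions used in Remark 2 below (see Remark \ref{b:rmk})
print(coefficients_a(2, 5))
\end{verbatim}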
\begin{lemma} \label{lem:isometry2}
For every $\mathbf k\in(0,\infty)^{m+1}$ and $w\in\mathcal{C}^\infty[0,\pi]$,
$$\|w\circ q\|_{\mathbf k,*}^2= \|w\|_{\mathbf k,h}^2.$$
\end{lemma}
\begin{proof}
Set $u:=w\circ q$. For $u_1,u_2\in \mathcal{C}^\infty(\mathbb{S}^N)$, observe that, if $i$ is even, then
\[
\int_{\mathbb{S}^N} u_1 (-\Delta_g)^{i}u_2\;dV_g = \int_{\mathbb{S}^N} \Delta_g^{i/2}u_1 \Delta_g^{i/2} u_2 \; \;dV_g,
\]
while, if $i$ is odd,
\[
\int_{\mathbb{S}^N} u_1 (-\Delta_g)^{i}u_2 \;dV_g
= \int_{\mathbb{S}^N} \langle\nabla_g \Delta_g^{(i-1)/2}u_1,\nabla_g \Delta_g^{(i-1)/2} u_2\rangle_g \; \;dV_g.
\]
Hence,
\begin{equation*}
\|u\|_{\mathbf k,\ast}^2=\sum_{\substack{i=0\\ i\ \textrm{even}}}^m k_i\int_{\mathbb{S}^N} |\Delta_g^{i/2}u|^2 \;dV_g + \sum_{\substack{i=1\\ i\ \textrm{odd}}}^m k_i \int_{\mathbb{S}^N} |\nabla_g \Delta_g^{(i-1)/2}u|_g^2 \;dV_g.
\end{equation*}
Note that, for the function $f$ defined in \eqref{eq:orbit_map}, the sets $M_+:= f^{-1}(1)$ and $M_-:= f^{-1}(-1)$ are submanifolds of $\mathbb{S}^{N}$ diffeomorphic to $\mathbb{S}^{n_1-1}$ and $\mathbb{S}^{n_2-1}$ respectively. As in \cite{fp}, we have that
\[
\vert \nabla_g f\vert_g^2 = 4(1 - f^2)\quad\text{ and } \quad\Delta_g f= -2(N+1)f + 2(n_1-n_2).
\]
Then, by the definition of $q$,
\[
|\nabla_g q|_g^2 = 4\qquad\text{and}\qquad \Delta_g q=\phi\circ q,
\]
so
\[
\Delta_{g}u=\Delta_g(w\circ q)=(w''\circ q) |\nabla_g q|_g^2+(w'\circ q)\Delta_gq=(\mathscr{L} w)\circ q \quad \text{ in }\mathbb{S}^N\smallsetminus (M_+\cup M_-)
\]
and, for each $i\in\mathbb{N}\cup\{0\}$,
\[
\Delta_g^i u = (\mathscr{L}^i w)\circ q \quad \text{and}\quad |\nabla_g \Delta_g^i u|_g^2 = 4\,|(\mathscr{L}^i w)'|^2\circ q \quad \text{ in }\mathbb{S}^N\smallsetminus (M_+\cup M_-).
\]
By \cite[Lemma 2.2]{fp},
\begin{equation*}
\int_{\mathbb{S}^N}|\Delta_g^{i}u|^2 \; \;dV_g
= \frac{1}{4}\int_0^\pi |\mathscr{L}^i w|^2 \; h\;dt\label{Eq:Norm even}\quad \text{and}\quad
\int_{\mathbb{S}^N}|\nabla_g\Delta_g^{i}u|_g^2 \; \;dV_g = \int_0^\pi |(\mathscr{L}^{i}w)'|^2 \;h \;dt.
\end{equation*}
Therefore,
\begin{equation*}
\|u\|^2_{\mathbf k,*}=\sum_{\substack{i=0\\ i\ \textrm{even}}}^m \frac{k_i}{4}\int_0^\pi |\mathscr{L}^{i/2} w|^2 \;h \;dt+ \sum_{\substack{i=1\\ i\ \textrm{odd}}}^m k_i \int_0^\pi |(\mathscr{L}^{(i-1)/2}w)'|^2 \;h \;dt=\|w\|^2_{\mathbf k,h},
\end{equation*}
as claimed.
\end{proof}
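The identities for $f$ used in the preceding proof are also easy to test numerically. The following sketch (ours; it assumes Python with \texttt{numpy}) checks $|\nabla_g f|_g^2=4(1-f^2)$ at a random point of $\mathbb{S}^N$, computing the intrinsic gradient as the ambient gradient minus its radial component:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 3, 4                              # so N + 1 = 7
p = rng.normal(size=n1 + n2)
p /= np.linalg.norm(p)                     # random point on the sphere
x, y = p[:n1], p[n1:]
f = x @ x - y @ y
grad = np.concatenate([2 * x, -2 * y])     # ambient gradient of f
grad_g = grad - (grad @ p) * p             # tangential projection
print(grad_g @ grad_g, 4 * (1 - f ** 2))   # the two values agree
\end{verbatim}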
For $\varepsilon>0$, let $H^{m}(\varepsilon,\pi-\varepsilon)$ denote the usual Sobolev space of order $m$ in the interval $(\varepsilon,\pi-\varepsilon)$.
\begin{lemma}\label{p}
For each $\varepsilon>0$, there are $\mathbf k=(k_0,\ldots, k_m)\in(0,\infty)^{m+1}$ and $A>0$, depending on $\varepsilon$, such that
\[
\Vert w\Vert_{\mathbf k,h}\geq A \Vert w\Vert_{H^{m}(\varepsilon,\pi-\varepsilon)}\qquad \text{for every \ }w\in\mathcal{C}^\infty[0,\pi].
\]
\end{lemma}
\begin{proof}
By Lemma \ref{bb:lemma}, for every $\varepsilon>0$ there are $\eta>0$ and $\mu>1$, depending on $\varepsilon$, such that, for $i\geq 2$ even
\begin{align}\label{i1}
\frac{1}{4}\,|\mathscr{L}^{i/2} w|^2 h&\geq \eta\left(|w^{(i)}|^2 - \mu\sum_{j=1}^{i - 1}|w^{(j)}|^2\right)\quad \text{ in }(\varepsilon,\pi-\varepsilon),
\end{align}
and for $i$ odd
\begin{align}\label{i2}
|(\mathscr{L}^{(i-1)/2} w)'|^2 h &\geq \eta
\left(|w^{(i)}|^2 -\mu\sum_{j=1}^{i-1}|w^{(j)}|^2\right)\quad \text{ in }(\varepsilon,\pi-\varepsilon).
\end{align}
Let $k_0:=1$, $k_i:=(2\mu)^{-i}$ for $i\geq 1$, and $\mathbf k:=(k_0,\ldots, k_m)\in(0,\infty)^{m+1}$. By \eqref{i1}, \eqref{i2},
\begin{align*}
&\Vert w\Vert^2_{\mathbf k,h}
=\sum_{\substack{i=0\\ i\ \textrm{even}}}^m \frac{k_i}{4}\int_0^\pi |\mathscr{L}^{i/2} w|^2 h\; \;dt
+ \sum_{\substack{i=1\\ i\ \textrm{odd}}}^m k_i \int_0^\pi |(\mathscr{L}^{(i-1)/2}w)'|^2 h\; \;dt\\
&\geq \eta\int_\varepsilon^{\pi-\varepsilon}
\Big[\Big(\sum_{i=0}^m k_i|w^{(i)}|^2\Big)
-\mu\Big(\sum_{i=0}^m k_i\sum_{j=1}^{i - 1}|w^{(j)}|^2\Big)\Big]\; \;dt\\
&= \eta\int_\varepsilon^{\pi-\varepsilon}\Big[
k_0|w|^2
+\Big(k_1- \mu \sum_{i=2}^mk_i\Big)|w^{(1)}|^2
+\sum_{i=2}^{m-1}\Big(k_i- \mu \sum_{j=i+1}^m k_{j}\Big)|w^{(i)}|^2 + k_m \vert w^{(m)}\vert^2\Big] \;dt\\
&= \eta\int_\varepsilon^{\pi-\varepsilon}\Big[
|w|^2
+\Big(\frac{1}{2\mu}-\sum_{i=2}^m \frac{1}{2^i\mu^{i-1}}\Big)|w^{(1)}|^2
+\sum_{i=2}^{m-1}\Big(\frac{1}{2^i\mu^i}-\sum_{j=i+1}^m \frac{1}{2^j\mu^{j-1}}\Big)|w^{(i)}|^2 +(2\mu)^{-m} \vert w^{(m)}\vert^2\Big] \;dt.
\end{align*}
For $i=1,\ldots,m-1$,
\begin{align*}
A_i:=\frac{1}{2^i\mu^i}-\sum_{j=i+1}^m \frac{1}{2^j\mu^{j-1}}=\frac{2^{-i} (\mu -1) \mu ^{-i}+2^{-m} \mu ^{1-m}}{2 \mu -1}>0,
\end{align*}
and the claim follows with $A:=\eta\min\{A_1,\ldots,A_{m-1},(2\mu)^{-m}\}>0$.
\end{proof}
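As a sanity check of the positivity of the weights $A_i$ appearing at the end of this proof, one can evaluate both the defining expression and the closed form with exact rational arithmetic. Here is a minimal sketch (ours; it assumes Python and takes, for concreteness, the integer value $\mu=2$):
\begin{verbatim}
from fractions import Fraction

def A_def(i, m, mu):
    # A_i as defined: k_i - mu * sum_{j>i} k_j with k_i = (2 mu)^{-i}
    return (Fraction(1, 2**i * mu**i)
            - sum(Fraction(1, 2**j * mu**(j - 1)) for j in range(i + 1, m + 1)))

def A_closed(i, m, mu):
    # the closed form stated in the proof
    return (Fraction(mu - 1, 2**i * mu**i)
            + Fraction(mu, 2**m * mu**m)) / (2 * mu - 1)

m, mu = 5, 2
for i in range(1, m):
    assert A_def(i, m, mu) == A_closed(i, m, mu) > 0
print([A_def(i, m, mu) for i in range(1, m)])
\end{verbatim}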
We have the following regularity result.
\begin{proposition} \label{prop:continuity}
Let $Z:=(\mathbb{S}^{n_1-1}\times\{0\})\, \cup\, (\{0\}\times\mathbb{S}^{n_2-1})\subset\mathbb{S}^N$. For every $u\in H^m_g(\mathbb{S}^N)^\Gamma$ there exists $\widetilde u\in \mathcal{C}^{m-1}(\mathbb{S}^N\smallsetminus Z)^\Gamma$ such that $u=\widetilde u$ a.e. in $\mathbb{S}^N$.
\end{proposition}
\begin{proof}
Fix $\varepsilon>0$ and let $\Theta_\varepsilon:=q^{-1}(\varepsilon,\pi-\varepsilon)$. Since the norm defined in \eqref{k} is equivalent to $\|\cdot\|_*$, Lemmas \ref{lem:isometry2} and \ref{p} yield a constant $C>0$, depending on $\varepsilon$, such that $\|u\|_*\geq C\|w\|_{H^m(\varepsilon,\pi-\varepsilon)}$ whenever $u=w\circ q$ with $w\in\mathcal{C}^\infty[0,\pi]$.
Therefore, the map
$$H^{m}_g(\Theta_\varepsilon)^\Gamma\to H^m(\varepsilon,\pi-\varepsilon),\qquad u\mapsto w,\quad\text{ \ where \ }u=w\circ q,$$
is continuous. Sobolev's theorem yields a continuous embedding $H^m(\varepsilon,\pi-\varepsilon)\hookrightarrow\mathcal{C}^{m-1}(\varepsilon,\pi-\varepsilon)$. Thus, for $u\in H^m_g(\mathbb{S}^N)^\Gamma$ and $w$ given by $u=w\circ q$, there exists $w_\varepsilon\in\mathcal{C}^{m-1}(\varepsilon,\pi-\varepsilon)$ such that $w=w_\varepsilon$ a.e. in $(\varepsilon,\pi-\varepsilon)$. So $u_\varepsilon:=w_\varepsilon\circ q\in\mathcal{C}^{m-1}(\Theta_\varepsilon)$ and $u=u_\varepsilon$ a.e. in $\Theta_\varepsilon$. The function $\widetilde u$ given by $\widetilde u(p):=u_\varepsilon(p)$ for $p\in\Theta_\varepsilon$ is well defined, because the functions $u_\varepsilon$ coincide on the overlaps of the sets $\Theta_\varepsilon$; it is of class $\mathcal{C}^{m-1}$ on $\mathbb{S}^N\smallsetminus Z=\bigcup_{\varepsilon>0}\Theta_\varepsilon$ and it coincides a.e. with $u$.
\end{proof}
\begin{remark}\label{b:rmk}
\emph{
Let $u\in H^m_g(\mathbb{S}^N)^\Gamma$. Since $u$ is $\Gamma$-invariant, there is $w:[0,\pi]\to \mathbb{R}$ such that $u=w\circ q$, with $q$ as in \eqref{eq:orbit_map}. As a consequence, problem \eqref{eq:problem} can be seen as an ODE.}
\emph{
In particular, if $m=1$, Lemma~\ref{lem:isometry2} yields that
\begin{align*}
\|u\|_*^2=\int_0^\pi \left(|w'(t)|^2 + \frac{c_1}{4}\, w(t)^2\right)h(t)\ dt,
\end{align*}
where the constant $c_1$ is as in \eqref{Eq:JGM-Operator}. As a consequence, $u\in H^m_g(\mathbb{S}^N)^\Gamma$ is a solution to the Yamabe equation \eqref{eq:problem} with $m=1$ iff $w$ solves the ODE
\begin{align*}
-(w'h)'+ \frac{c_1}{4}\, w\, h=-w''h-w'h'+ \frac{c_1}{4}\, w\, h = \frac{h}{4}|w|^{2^*_1-2}w\quad \text{ in }(0,\pi).
\end{align*}
A careful study of this ODE is performed in \cite{fp} to obtain existence of solutions to the Yamabe equation on the sphere with exactly $\ell$-nodal regions for any $\ell\in\mathbb{N}$. A similar analysis is much harder for $m\geq 2$, where the coefficients of the ODE are more complex. For instance, if $u\in H^2_g(\mathbb{S}^N)^\Gamma$ is a solution of \eqref{eq:problem} with $m=2$ and $w:[0,\pi]\to \mathbb{R}$ is such that $u=w\circ q$, then, by Lemma~\ref{lem:isometry2},
\begin{align*}
\|u\|_*^2&=\|w\|^2_{(a_0,a_1,1),h}
=\frac{a_0}{4}\int_0^\pi |w|^2 h\; \;dt
+ a_1 \int_0^\pi |w'|^2 h\; \;dt
+\frac{1}{4}\int_0^\pi |
4w'' + \phi(t)w'|^2 h\; \;dt\\
&=\int_0^\pi
\left(
4 w''(t)^2
+\left(\frac{1}{4} \phi(t)^2+a_1\right) w'(t)^2
+ 2 \phi(t) w'(t) w''(t)
+\frac{a_0}{4} w(t)^2
\right)h(t) \ \mathrm{d}t,
\end{align*}
where $a_0=c_1c_2$, $a_1=c_1+c_2$, and $c_1,c_2$ are given in \eqref{Eq:JGM-Operator}. The associated fourth-order ODE for \eqref{eq:problem} with $m=2$ is
\begin{align*}
&4 h\, w''''+ 8 h'\, w'''+C_1\, w''+C_2\, w'+ \frac{a_0}{4} h\, w(t)=\frac{h}{4}|w|^{2^*_2-2}w\quad \text{ in }(0,\pi),
\end{align*}
where
\begin{align*}
C_1(t)&:=4 h''(t)+2 \phi(t) h'(t)+2 h(t)\phi'(t)-\frac{1}{4} h(t) \phi(t)^2 - a_1 h(t),\\
C_2(t)&:=4 h'(t) \phi'(t)+ 2 \phi(t) h''(t)- \frac{1}{4} \phi(t)^2 h'(t)- a_1 h'(t)+2 h(t) \phi''(t)- \frac{1}{2} h(t) \phi(t) \phi'(t).
\end{align*}}
\end{remark}
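The leading coefficients of this fourth-order ODE can be checked symbolically from the energy expression in the remark. Here is a small sketch (ours; it assumes Python with \texttt{sympy}, whose \texttt{euler\_equations} implements the higher-order Euler--Lagrange operator); it recovers the coefficients $4h$ of $w''''$ and $8h'$ of $w'''$:
\begin{verbatim}
import sympy as sp
from sympy.calculus.euler import euler_equations

t, a0, a1 = sp.symbols('t a0 a1')
w, phi, h = sp.Function('w'), sp.Function('phi'), sp.Function('h')

# density of (1/2)||w||^2_{(a0,a1,1),h} for m = 2, as in the remark above
F = sp.Rational(1, 2) * (4 * w(t).diff(t, 2)**2
    + (phi(t)**2 / 4 + a1) * w(t).diff(t)**2
    + 2 * phi(t) * w(t).diff(t) * w(t).diff(t, 2)
    + a0 / 4 * w(t)**2) * h(t)

lhs = sp.expand(euler_equations(F, [w(t)], t)[0].lhs)
print(sp.simplify(lhs.coeff(w(t).diff(t, 4))))   # -> 4*h(t)
print(sp.simplify(lhs.coeff(w(t).diff(t, 3))))   # -> 8*Derivative(h(t), t)
\end{verbatim}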
\section{The polyharmonic system}
\label{sec:system}
We fix $\Gamma:=O(n_1)\times O(n_2)$ with $n_1,n_2\geq 2$ and $n_1+n_2=N+1$ and we study the system \eqref{eq:system}. Let $\mathcal{H}:=(D^{m,2}(\mathbb{R}^N)^\Gamma)^\ell$ with the norm
\begin{equation*}\label{Eq:NormProduct}
\Vert \bar{u}\Vert =\Vert(u_1,\ldots,u_\ell)\Vert = \Big(\sum_{i=1}^\ell \Vert u_i \Vert^2\Big)^{1/2},
\end{equation*}
and $\mathcal{J}:\mathcal{H}\rightarrow\mathbb{R}$ be the functional given by
\[
\mathcal{J}(\bar{u}):=\frac{1}{2}\sum_{i=1}^\ell\Vert u_i\Vert^2 - \frac{1}{2^*_m}\sum_{i=1}^\ell\int_{\mathbb{R}^N}\mu_i\vert u_i\vert^{2^*_m} - \frac{1}{2}\sum_{\substack{i,j=1 \\ j\neq i}}^\ell\int_{\mathbb{R}^N}\lambda_{ij}\vert u_j\vert^{\alpha_{ij}}\vert u_i\vert^{\beta_{ij}}.
\]
This is a $\mathcal{C}^1$-functional and, by the principle of symmetric criticality \cite{palais}, its critical points are the solutions of \eqref{eq:system}. Observe that the fully nontrivial critical points of $\mathcal{J}$ belong to the set
\[
\mathcal{N}:=\{\bar{u}\in\mathcal{H}\;:\; u_i\neq 0,\ \partial_i\mathcal{J}(\bar{u})u_i = 0, \text{ for each }i=1,\ldots,\ell\}.
\]
Note also that, for each $i$,
\begin{align*}
&\partial_i\mathcal{J}(\bar u)u_i=\|u_i\|^2 - \int_{\r^N} \mu_i|u_i|^{2_m^*} - \sum_{\substack{j=1 \\ j\neq i}}^\ell\int_{\r^N}\lambda_{ij}\beta_{ij}|u_j|^{\alpha_{ij}}|u_i|^{\beta_{ij}}.
\end{align*}
It is readily seen that
\begin{equation} \label{eq:energy_nehari}
\mathcal{J}(\bar u)=\frac{m}{N}\|\bar u\|^2\qquad\text{if \ }\bar u\in\mathcal{N}.
\end{equation}
\begin{lemma} \label{lem:away_from_0}
There exists $d_0>0$, independent of $\lambda_{ij}$, such that $\min_{i=1,\ldots,\ell}\|u_i\|\geq d_0$ if $\bar u=(u_1,\ldots,u_\ell)\in \mathcal{N}$. Thus, $\mathcal{N}$ is a closed subset of $\mathcal{H}$ and $\inf_\mathcal{N}\mathcal{J}>0$.
\end{lemma}
\begin{proof}
From $\lambda_{ij}<0$ and Sobolev's inequality we obtain
\begin{align*}
\|u_i\|^2\leq \int_{\r^N} \mu_i|u_i|^{2_m^*}\leq C\|u_i\|^{2_m^*}\quad \text{ for \ }\bar u\in \mathcal{N}, \ i=1,\ldots,\ell,
\end{align*}
with $C>0$ independent of $\bar u$ and of $\lambda_{ij}$. Since $u_i\neq 0$, this yields $\|u_i\|\geq C^{-1/(2_m^*-2)}=:d_0>0$. The remaining assertions follow immediately from this bound and \eqref{eq:energy_nehari}.
\end{proof}
\begin{definition} \label{def:least energy}
A fully nontrivial solution $\bar u$ to the system \eqref{eq:system} satisfying $\mathcal{J}(\bar u)=\inf_\mathcal{N}\mathcal{J}$ is called a least energy solution.
\end{definition}
To establish the existence of fully nontrivial critical points of $\mathcal{J}$ we follow the variational approach introduced in \cite{cs}.
Given $\bar{u}=(u_1,\ldots,u_\ell)$ and $\bar{s}=(s_1,\ldots,s_\ell)\in(0,\infty)^\ell$, we write
\[
\bar{s}\bar{u}:= (s_1u_1,\ldots,s_\ell u_\ell).
\]
Let $\mathcal S:=\{u\in D^{m,2}(\mathbb{R}^N)^\Gamma:\|u\|=1\}$, $\mathcal{T}:=\mathcal S^\ell$, and define
$$\mathcal{U}:=\{\bar{u}\in\mathcal{T}:\bar{s}\bar{u}\in\mathcal{N}\text{ \ for some \ }\bar s\in(0,\infty)^\ell\}.$$
\begin{lemma} \label{lem:U}
\begin{itemize}
\item[$(i)$] Let $\bar u\in\mathcal{T}$. If there exists $\bar s_{\bar u}\in(0,\infty)^\ell$ such that $\bar s_{\bar u}\bar u\in\mathcal{N}$, then $\bar s_{\bar u}$ is unique and satisfies
$$\mathcal{J}(\bar s_{\bar u}\bar u)=\max_{\bar s\in(0,\infty)^\ell}\mathcal{J}(\bar s\bar u).$$
\item[$(ii)$] $\mathcal{U}$ is a nonempty open subset of $\mathcal{T}$, and the map $\mathcal{U}\to(0,\infty)^\ell$ given by $\bar u\mapsto\bar s_{\bar u}$ is continuous.
\item[$(iii)$] The map $\mathcal{U}\to \mathcal{N}$ given by $\bar u\mapsto\bar s_{\bar u}\bar u$ is a homeomorphism.
\item[$(iv)$] If $(\bar u_n)$ is a sequence in $\mathcal{U}$ and $\bar u_n\to\bar u\in\partial\mathcal{U}$, then $|\bar s_{\bar u_n}|\to\infty$.
\end{itemize}
\end{lemma}
\begin{proof}
The same arguments used in the proof of \cite[Proposition 3.1]{cs} give the proof of this result.
\end{proof}
Define $\Psi:\mathcal{U}\to\mathbb{R}$ as
\begin{equation*}
\Psi(\bar u): = \mathcal{J}(\bar s_{\bar u}\bar u).
\end{equation*}
According to Lemma \ref{lem:U}, $\mathcal{U}$ is an open subset of the smooth Hilbert submanifold $\mathcal{T}$ of $\mathcal{H}$. If $\Psi$ is of class $\mathcal{C}^1$ we write $\|\Psi'(\bar u)\|_*$ for the norm of $\Psi'(\bar u)$ in the cotangent space $\mathrm{T}_{\bar u}^*(\mathcal{T})$ to $\mathcal{T}$ at $\bar u$, i.e.,
$$\|\Psi'(\bar u)\|_*:=\sup\limits_{\substack{\bar v\in\mathrm{T}_{\bar u}(\mathcal{U}) \\\bar v\neq 0}}\frac{|\Psi'(\bar u)\bar v|}{\|\bar v\|},$$
where $\mathrm{T}_{\bar u}(\mathcal{U})$ is the tangent space to $\mathcal{U}$ at $\bar u$.
Recall that a sequence $(\bar u_n)$ in $\mathcal{U}$ is called a $(PS)_c$\emph{-sequence for} $\Psi$ if $\Psi(\bar u_n)\to c$ and $\|\Psi'(\bar u_n)\|_*\to 0$, and $\Psi$ is said to satisfy the $(PS)_c$\emph{-condition} if every such sequence has a convergent subsequence. Similarly, a $(PS)_c$\emph{-sequence for} $\mathcal{J}$ is a sequence $(\bar u_n)$ in $\mathcal{H}$ such that $\mathcal{J}(\bar u_n)\to c$ and $\|\mathcal{J}'(\bar u_n)\|_{\mathcal{H}'}\to 0$, and $\mathcal{J}$ satisfies the $(PS)_c$\emph{-condition} if any such sequence has a convergent subsequence. Here $\mathcal{H}'$ denotes, as usual, the dual space of $\mathcal{H}$.
\begin{lemma} \label{lem:psi}
\begin{itemize}
\item[$(i)$] $\Psi\in\mathcal{C}^1(\mathcal{U},\mathbb{R})$,
\begin{equation*}
\Psi'(\bar u)\bar v = \mathcal{J}'(\bar s_{\bar u}\bar u)[\bar s_{\bar u}\bar v] \quad \text{for all } \bar u\in\mathcal{U} \text{ and }\bar v\in \mathrm{T}_{\bar u}(\mathcal{U}),
\end{equation*}
and there exists $d_0>0$ such that
$$d_0\,\|\mathcal{J}'(\bar s_{\bar u}\bar u)\|_{\mathcal{H}'}\leq\|\Psi'(\bar u)\|_*\leq |\bar s_{\bar u}|_\infty\|\mathcal{J}'(\bar s_{\bar u}\bar u)\|_{\mathcal{H}'}\quad \text{for all } \bar u\in\mathcal{U},$$
where $|\bar s|_\infty=\max\{|s_1|,\ldots,|s_\ell|\}$ if $\bar s=(s_1,\ldots,s_\ell)$.
\item[$(ii)$] If $(\bar u_n)$ is a $(PS)_c$-sequence for $\Psi$, then $(\bar s_{\bar u_n}\bar u_n)$ is a $(PS)_c$-sequence for $\mathcal{J}$.
\item[$(iii)$] $\bar u$ is a critical point of $\Psi$ if and only if $\bar s_{\bar u}\bar u$ is a critical point of $\mathcal{J}$ if and only if $\bar s_{\bar u}\bar u$ is a fully nontrivial solution of \eqref{eq:system}.
\item[$(iv)$] If $(\bar u_n)$ is a sequence in $\mathcal{U}$ and $\bar u_n\to\bar u\in\partial\mathcal{U}$, then $|\Psi(\bar u_n)|\to\infty$.
\item[$(v)$]$\bar{u}\in\mathcal{U}$ if and only if $-\bar{u}\in\mathcal{U}$, and $\Psi(\bar u)=\Psi(-\bar u)$.
\end{itemize}
\end{lemma}
\begin{proof}
These statements are proved arguing exactly as in \cite[Theorem 3.3]{cs}.
\end{proof}
\begin{lemma}
$\Psi$ satisfies the $(PS)_c$-condition for every $c\in\mathbb{R}$.
\end{lemma}
\begin{proof}
Let $(\bar{v}_n)$ be a $(PS)_c$-sequence for $\mathcal{J}$ with $\bar v_n\in\mathcal{N}$. Then
\[
\frac{m}{N}\Vert \bar{v}_n \Vert^2 = \mathcal{J}(\bar{v}_n) - \frac{1}{2_m^\ast} \mathcal{J}'(\bar{v}_n)\bar{v}_n \leq c(1 + \Vert \bar{v}_n\Vert)
\]
for some positive constant $c$ not depending on $\bar{v}_n$, so the sequence is bounded. A standard argument using Lemma \ref{Lemma:Sobolev}, as in \cite[Proposition 3.6]{cp}, shows that $(\bar{v}_n)$ contains a convergent subsequence. The statement of the lemma follows from Lemmas \ref{lem:psi}$(ii)$ and \ref{lem:U}$(iii)$.
\end{proof}
Given a nonempty subset $\mathcal{Z}$ of $\mathcal{T}$ such that $\bar{u}\in\mathcal{Z}$ if and only if $-\bar{u}\in\mathcal{Z}$, the \emph{genus of $\mathcal{Z}$}, denoted $\mathrm{genus}(\mathcal{Z})$, is the smallest integer $k\geq 1$ such that there exists an odd continuous function $\mathcal{Z}\rightarrow\mathbb{S}^{k-1}$ into the unit sphere $\mathbb{S}^{k-1}$ in $\mathbb{R}^k$. If no such $k$ exists, we define $\mathrm{genus}(\mathcal{Z})=\infty$; finally, we set $\mathrm{genus}(\emptyset)=0$.
\begin{lemma}
$\mathrm{genus}(\mathcal{U})=\infty$.
\end{lemma}
\begin{proof}
As in \cite[Lemma 3.2]{cp} one constructs $\Gamma$-invariant functions in $\mathcal{C}^\infty(\mathbb{R}^N)$ with disjoint supports. Then, arguing as in \cite[Lemma 4.5]{cs}, one shows that $\mathrm{genus}(\mathcal{U})=\infty$.
\end{proof}
\smallskip
\begin{proof}[Proof of Theorem \ref{thm:existence}]
Lemma \ref{lem:psi}$(iv)$ implies that $\mathcal{U}$ is positively invariant under the negative pseudogradient flow of $\Psi$, so the usual deformation lemma holds true for $\Psi$, see e.g. \cite[Section II.3]{s} or \cite[Section 5.3]{w}. As $\Psi$ satisfies the $(PS)_c$-condition for every $c\in\mathbb{R}$, standard variational arguments show that $\Psi$ attains its minimum on $\mathcal{U}$ at some $\bar u$. By Lemma \ref{lem:psi}$(iii)$ and the principle of symmetric criticality, $\bar s_{\bar u}\bar u$ is a least energy fully nontrivial solution of the system \eqref{eq:system}. Moreover, as $\Psi$ is even and $\mathrm{genus}(\mathcal{U})=\infty$, $\Psi$ has an unbounded sequence of critical values. Since $\Psi(\bar u)=\mathcal{J}(\bar s_{\bar u}\bar u)=\frac{m}{N}\|\bar s_{\bar u}\bar u\|^2$ by \eqref{eq:energy_nehari}, the system \eqref{eq:system} has an unbounded sequence of fully nontrivial solutions.
\end{proof}
\section{Segregation and optimal partitions}
\label{sec:segregation}
Let $\Gamma$ be as before and let $\Omega$ be a $\Gamma$-invariant open subset of $\mathbb{R}^N$. The solutions to the problem \eqref{eq:dirichlet} are the critical points of the energy functional $J_\Omega: D_0^{m,2}(\Omega)^\Gamma\rightarrow\mathbb{R}$ defined by
\begin{align*}
J_\Omega(v):=\frac{1}{2} \|v\|^2-\frac{1}{{2^*_m}}\int_\Omega|v|^{2^*_m}.
\end{align*}
The nontrivial ones belong to the Nehari manifold
\begin{align*}
\mathcal{M}_\Omega:=&\{v\in D^{m,2}_0(\Omega)^\Gamma:v\neq 0,\;J_\Omega'(v)v=0\} \\
=&\{v\in D^{m,2}_0(\Omega)^\Gamma:v\neq 0,\;\|v\|^2=\int_\Omega|v|^{2^*_m}\},
\end{align*}
which is a closed submanifold of $D^{m,2}_0(\Omega)^\Gamma$ of class $\mathcal{C}^2$ and a natural constraint for $J_\Omega$. A minimizer for $J_\Omega$ on $\mathcal{M}_\Omega$ is called a \emph{least energy $\Gamma$-invariant solution to \eqref{eq:dirichlet} in $\Omega$}. By standard arguments, using Lemma \ref{Lemma:Sobolev}, one sees that \eqref{eq:dirichlet} does have a least energy solution. So the quantity $c_\Omega^\Gamma$ defined in the introduction is
$$c_{\Omega}^\Gamma=\inf_{u\in\mathcal{M}_\Omega}J_\Omega(u).$$
We begin by establishing some properties of optimal partitions. Let
$$\widetilde{q}:=q\circ\sigma^{-1}:\mathbb{R}^N\to[0,\pi],$$
where $\sigma$ is the stereographic projection and $q$ is the $\Gamma$-orbit map of $\mathbb{S}^N$ defined in \eqref{eq:orbit_map}. So, writing $\mathbb{R}^{N}=\mathbb{R}^{n_1}\times\mathbb{R}^{n_2-1}$, one has that $\widetilde q^{\,-1}(0) = \mathbb{S}^{n_1-1}\times\{0\}$ and $\widetilde q^{\,-1}(\pi) = \{0\}\times \mathbb{R}^{n_2-1}$.
\begin{lemma} \label{lem:tori}
Let $\ell\geq 2$ and $\{\Theta_1,\ldots,\Theta_\ell\}\in\mathcal{P}_\ell^\Gamma$ be a $(\Gamma,\ell)$-optimal partition for problem \eqref{p}. Then, the following statements hold true.
\begin{itemize}
\item[$(i)$] There exist $a_1,\ldots,a_{\ell-1}\in(0,\pi)$ such that
$$(0,\pi)\smallsetminus\bigcup_{i=1}^\ell\widetilde{q}\,(\Theta_i)=\{a_1,\ldots,a_{\ell-1}\}.$$
Therefore, after reordering,
\begin{align*}
\Omega_1 :=& \ \Theta_1\cup(\mathbb{S}^{n_1-1}\times\{0\})=\widetilde{q}\,^{-1}[0,a_1),\\
\Omega_i :=& \ \Theta_i=\widetilde{q}\,^{-1}(a_{i-1},a_i)\quad\text{for }\; i=2,\ldots,\ell-1,\\
\Omega_\ell :=& \ \Theta_\ell\cup(\{0\}\times\mathbb{R}^{n_2-1})=\widetilde{q}\,^{-1}(a_{\ell-1},\pi].
\end{align*}
\item[$(ii)$] $\Omega_1,\ldots,\Omega_\ell$ are smooth and connected, they satisfy items $(c_1)$ and $(c_2)$ of \emph{Theorem}~\ref{thm:main}, $\Omega_1,\ldots,\Omega_{\ell-1}$ are bounded, $\Omega_\ell$ is unbounded, $\overline{\Omega_1\cup\cdots\cup \Omega_\ell}=\mathbb{R}^{N}$, and $\{\Omega_1,\ldots,\Omega_\ell\}\in\mathcal{P}_\ell^\Gamma$ is a $(\Gamma,\ell)$-optimal partition for problem \eqref{p}.
\end{itemize}
\end{lemma}
\begin{proof}
$(i):$ Let $a,b,c\in(0,\pi)$ with $a<b<c$ and set $\Lambda_1:=\widetilde{q}\,^{-1}(a,b)$,\; $\Lambda_2:=\widetilde{q}\,^{-1}(b,c)$,\; $\Lambda=\widetilde{q}\,^{-1}(a,c)$. As $\Lambda_i\subset\Lambda$, we have that $c_{\Lambda}^\Gamma \leq \min\{c_{\Lambda_1}^\Gamma,c_{\Lambda_2}^\Gamma\}$. We claim that
$$c_{\Lambda}^\Gamma < \min\{c_{\Lambda_1}^\Gamma,c_{\Lambda_2}^\Gamma\}. $$
Indeed, if $c_{\Lambda}^\Gamma=c_{\Lambda_1}^\Gamma$ then, taking a least energy $\Gamma$-invariant solution to \eqref{eq:dirichlet} in $\Lambda_1$ and extending it by $0$ in $\Lambda\smallsetminus\Lambda_1$ we obtain a least energy $\Gamma$-invariant solution $u$ to \eqref{eq:dirichlet} in $\Lambda$. Then, $u\in\mathcal{C}^{2m}(\Lambda)$ by \cite{l} and it vanishes in $\Lambda\smallsetminus\Lambda_1$, contradicting the unique continuation principle \cite{lin,protter}.
Therefore, if $\{\Theta_1,\ldots,\Theta_\ell\}\in\mathcal{P}_\ell^\Gamma$ is a $(\Gamma,\ell)$-optimal partition for problem \eqref{p}, then $(0,\pi)\smallsetminus\bigcup_{i=1}^\ell\widetilde{q}\,(\Theta_i)$ must consist of precisely $\ell-1$ points.
$(ii):$ Clearly, $\Omega_1,\ldots,\Omega_\ell$ are smooth and connected and satisfy statements $(c_1)$ and $(c_2)$ of Theorem \ref{thm:main}. Moreover, $\Omega_1,\ldots,$ $\Omega_{\ell-1}$ are bounded, $\Omega_\ell$ is unbounded, $\mathbb{R}^{N}=\overline{\Omega_1\cup\cdots\cup \Omega_\ell}$, and $\{\Omega_1,\ldots,\Omega_\ell\}\in\mathcal{P}_\ell^\Gamma$.
As $\Theta_i\subset\Omega_i$ we have that $c_{\Omega_i}^\Gamma\leq c_{\Theta_i}^\Gamma$ for all $i$. So, as $\{\Theta_1,\ldots,\Theta_\ell\}$ is a $(\Gamma,\ell)$-optimal partition, we conclude that $\{\Omega_1,\ldots,\Omega_\ell\}$ is a $(\Gamma,\ell)$-optimal partition.
\end{proof}
\smallskip
\begin{proof}[Proof of Theorem \ref{thm:main}] Fix $\mu_i=1$ in \eqref{eq:system} for each $i=1,\ldots,\ell$, and let $\lambda_{ij,k}\to-\infty$ as $k\to\infty$. To highlight the role of $\lambda_{ij,k}$, we write $\mathcal{J}_k$ and $\mathcal{N}_k$ for the functional and the set associated to the system \eqref{eq:system} with $\lambda_{ij}$ replaced by $\lambda_{ij,k}$, introduced in Section~\ref{sec:system}. Let $\bar u_k=(u_{k,1},\ldots,u_{k,\ell})\in\mathcal{N}_k$ be such that
$$c_k^\Gamma:= \inf_{\mathcal{N}_k} \mathcal{J}_k =\mathcal{J}_k(\bar u_k)=\frac{m}{N}\sum_{i=1}^\ell\|u_{k,i}\|^2.$$
Let
\begin{align*}
\mathcal{N}_0:=\{(v_1,\ldots,v_\ell)\in\mathcal{H}:\,&v_i\neq 0,\;\|v_i\|^2=\int_{\r^N}|v_i|^{{2^*_m}}, \text{ and }v_iv_j=0\text{ a.e. in }\mathbb{R}^N \text{ if }i\neq j\}.
\end{align*}
Then, $\mathcal{N}_0\subset\mathcal{N}_k$ for all $k\in\mathbb{N}$ and, therefore,
$$0<c_k^\Gamma\leq c_0^\Gamma:=\inf\left\{\frac{m}{N}\sum_{i=1}^\ell\|v_i\|^2:(v_1,\ldots,v_\ell)\in\mathcal{N}_0\right\}<\infty.$$
So, after passing to a subsequence, using Lemma~\ref{Lemma:Sobolev}, we get that $u_{k,i} \rightharpoonup u_{\infty,i}$ weakly in $D^{m,2}(\mathbb{R}^N)^\Gamma$, $u_{k,i} \to u_{\infty,i}$ strongly in $L^{{2^*_m}}(\mathbb{R}^N)$, and $u_{k,i} \to u_{\infty,i}$ a.e. in $\mathbb{R}^N$ for each $i=1,\ldots,\ell$. Moreover, as $\partial_i\mathcal{J}_k(\bar u_k)[u_{k,i}]=0$, we have for each $j\neq i$,
\begin{align*}
0\leq\int_{\r^N}\beta_{ij}|u_{k,j}|^{\alpha_{ij}}|u_{k,i}|^{\beta_{ij}}\leq \frac{1}{-\lambda_{ij,k}}\int_{\r^N}|u_{k,i}|^{{2^*_m}}\leq \frac{C}{-\lambda_{ij,k}}.
\end{align*}
Then, Fatou's lemma yields
$$0 \leq \int_{\r^N}|u_{\infty,j}|^{\alpha_{ij}}|u_{\infty,i}|^{\beta_{ij}} \leq \liminf_{k \to \infty} \int_{\r^N}|u_{k,j}|^{\alpha_{ij}}|u_{k,i}|^{\beta_{ij}} = 0.$$
Hence, $u_{\infty,j} u_{\infty,i} = 0$ a.e. in $\mathbb{R}^N$. By Lemma \ref{lem:away_from_0},
$$0<d_0 \leq \|u_{k,i}\|^2 \leq\int_{\r^N} |u_{k,i}|^{{2^*_m}}\qquad\text{for all \ }k\in\mathbb{N},\;i=1,\ldots,\ell,$$
and, as $u_{k,i} \to u_{\infty,i}$ strongly in $L^{{2^*_m}}(\mathbb{R}^N)$ and $u_{k,i} \rightharpoonup u_{\infty,i}$ weakly in $D^{m,2}(\mathbb{R}^N)$, we get
\begin{equation} \label{eq:comparison2}
0<\|u_{\infty,i}\|^2 \leq \int_{\r^N}|u_{\infty,i}|^{{2^*_m}}\qquad\text{for every \ }i=1,\ldots,\ell.
\end{equation}
Since $u_{\infty,i}\neq 0$, there is a unique $t_i\in(0,\infty)$ such that $\|t_iu_{\infty,i}\|^2 = \int_{\r^N}|t_iu_{\infty,i}|^{{2^*_m}}$. So $(t_1u_{\infty,1},\ldots,t_\ell u_{\infty,\ell})\in \mathcal{N}_0$. The inequality \eqref{eq:comparison2} implies that $t_i\in (0,1]$. Therefore,
\begin{align*}
c_0^\Gamma &\leq \frac{m}{N}\sum_{i=1}^\ell\|t_iu_{\infty,i}\|^2 \leq \frac{m}{N}\sum_{i=1}^\ell\|u_{\infty,i}\|^2\leq \frac{m}{N}\liminf_{k\to\infty}\sum_{i=1}^\ell\|u_{k,i}\|^2=\liminf_{k\to\infty} c_k^\Gamma \leq c_0^\Gamma.
\end{align*}
It follows that $u_{k,i} \to u_{\infty,i}$ strongly in $D^{m,2}(\mathbb{R}^N)^\Gamma$ and $t_i=1$, yielding
\begin{equation}\label{eq:limit}
\|u_{\infty,i}\|^2 = \int_{\r^N}|u_{\infty,i}|^{{2^*_m}},\qquad\text{and}\qquad\frac{m}{N}\sum_{i=1}^\ell\|u_{\infty,i}\|^2 =c_0^\Gamma.
\end{equation}
Set $Y_1:=\mathbb{S}^{n_1-1}\times\{0\}$, $Y_2:=\{0\}\times\mathbb{R}^{n_2-1}$, and $Y_0:=Y_1\cup Y_2$. Proposition~\ref{prop:continuity}, together with Lemma~\ref{lem:isometry2}, implies that $u_{\infty,i}|_{\mathbb{R}^N\smallsetminus Y_0}\in\mathcal{C}^{m-1}(\mathbb{R}^N\smallsetminus Y_0)$. Consequently,
\begin{align*}
\Theta_i:=\{x\in\mathbb{R}^N\smallsetminus Y_0:u_{\infty,i}(x)\neq 0\}
\end{align*}
is a $\Gamma$-invariant nonempty open subset of $\mathbb{R}^N$ and, as $u_{\infty,i}u_{\infty,j}=0$, we have that $\Theta_i\cap\Theta_j=\emptyset$ if $i\neq j$. We set $\Omega_i:=\operatorname{int}(\overline{\Theta_i})$. Then, every $\Omega_i$ is a nonempty $\Gamma$-invariant smooth open subset of $\mathbb{R}^N$, $\Omega_i\cap\Omega_j=\emptyset$ if $i\neq j$, and $u_{\infty,i}=0$ a.e. in $\mathbb{R}^N\smallsetminus\Omega_i$. By Lemma \ref{A}, $u_{\infty,i}\in D_0^{m,2}(\Omega_i)^\Gamma$ and, by \eqref{eq:limit}, $u_{\infty,i}\in\mathcal{M}_{\Omega_i}$ and
\begin{equation*}
\sum_{i=1}^\ell c_{\Omega_i}^\Gamma\leq\frac{m}{N}\sum_{i=1}^\ell\|u_{\infty,i}\|^2 = c_0^\Gamma \leq \inf_{(\Phi_1,\ldots,\Phi_\ell)\in\mathcal{P}_\ell^\Gamma}\;\sum_{i=1}^\ell c_{\Phi_i}^\Gamma.
\end{equation*}
This shows that $\{\Omega_1,\ldots,\Omega_\ell\}$ is a $(\Gamma,\ell)$-optimal partition for the system \eqref{eq:system} and that $u_{\infty,i}$ is a least energy $\Gamma$-invariant solution to \eqref{eq:dirichlet} in $\Omega_i$. Thus, by \cite{l}, $u_{\infty,i}\in\mathcal{C}^{2m,\alpha}(\overline\Omega_i)$ for $\alpha\in(0,1)$. This concludes the proof of statements $(a)$ and $(b)$. Statement $(c)$ follows from Lemma \ref{lem:tori}.
\end{proof}
\section{Introduction and main results}
Chen and Choi \cite{Chen:2014:AAF} presented a method to estimate Euler's number $e$, accurate to as many decimal places as desired. Their starting point was the well-known limit $\lim_{x\rightarrow \infty }g(x)=e$, where
\begin{equation}
g(x)=\left(1+\frac{1}{x}\right) ^{x}.
\label{g}
\end{equation}
Their method was based on an asymptotic expansion of $g(x)$ for large values of $x$, namely
\begin{equation}
g(x)\sim \sum_{j=0}^{\infty }\frac{c_{j}}{x^{j}}. \label{eu}
\end{equation}
We shall show that this asymptotic expansion
converges for $x>1$.
In \cite{Chen:2014:AAF} the authors proved that $c_{j}=e a_{j}$, where the $a_{j}$ are rational numbers which alternate in sign, and are given explicitly by
\begin{equation}
a_{j}=(-1)^{j}\ \sum \frac{1}{k_{1}!k_{2}!\cdots k_{j}!}\ \left( \frac{1}{2}
\right) ^{k_{1}}\left( \frac{1}{3}\right) ^{k_{2}}\cdots \left( \frac{1}{j+1}
\right) ^{k_{j}}.
\label{chen}
\end{equation}
Here for each $j$ the sum is taken over all possible combinations of nonnegative integers $k_{1},k_{2},k_{3},\cdots ,k_{j}$ that satisfy the
relation $\sum_{l=1}^{j} l k_{l}=j$. We remark that Ponomarenko \cite{Ponomarenko:2015:AFF} gave a much simpler proof of (\ref{chen}) using Fa\`{a} di Bruno's formula generalizing the chain rule to higher derivatives.
The number of terms in the sum (\ref{chen}) is the partition function $P(j)$ \cite[Sect. 27.14(i)]{NIST:DLMF}. Hardy and Ramanujan \cite{Hardy:1918:AFI} gave the asymptotic formula
\begin{equation}
P(j)\sim \frac{\exp (\pi \sqrt{2j/3})}{4j\sqrt{3}}
\end{equation}
as $j\rightarrow \infty $. It is therefore evident that the number of terms in the formula (\ref{chen}) grows exponentially in $\sqrt{j}$, and so is only practicable for small or moderate values of $j$.
We derive a new way of computing the coefficients $c_{j}$, by a simple recursion formula. We also provide a simple asymptotic approximation for the coefficients as $j\rightarrow{\infty}$, and this shows that in absolute value they approach the value 1.
Our main results read as follows. In \cref{sec2} we prove (\ref{main2}), and in \cref{sec3} we prove (\ref{main3}).
\begin{theorem}
For $x>1$ the
expansion (\ref{eu}) converges, where the coefficients ${c_{j}}$ are given
recursively by $c_{0}=e$ and
\begin{equation}
c_{j+1}=\frac{1}{j+1}{\sum_{l=0}^{j}(-1)^{j-l+1}\left( \frac{j-l+1}{j-l+2}
\right) c_{l}\quad (j=0,1,2,\cdots )}.
\label{main2}
\end{equation}
Moreover as $j\rightarrow \infty $
\begin{equation}
c_{j}=(-1)^{j}\left(1+\frac{1}{j}\right)
+\mathcal{O}\left(\frac{\ln(j)^{2}}{j^{2}}\right) .
\label{main3}
\end{equation}
\end{theorem}
\begin{remark}
Since $\lim_{x\rightarrow \infty}g(x)=e$ it is clear from (\ref{eu}) that $c_{0}=e$.
\end{remark}
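To illustrate the theorem, the recursion (\ref{main2}) is straightforward to implement. The following sketch (ours, assuming Python; it tracks the rational numbers $a_j=c_j/e$ exactly) reproduces the coefficients and illustrates both the convergence of (\ref{eu}) for $x>1$ and the approach of $|c_j|$ to $1+1/j$ asserted in (\ref{main3}):
\begin{verbatim}
from fractions import Fraction
from math import e

def a_coeffs(n):
    a = [Fraction(1)]                     # a_0 = 1, i.e. c_0 = e
    for j in range(n):
        s = sum((-1)**(j - l + 1) * Fraction(j - l + 1, j - l + 2) * a[l]
                for l in range(j + 1))
        a.append(s / (j + 1))
    return a

a = a_coeffs(12)                          # a_1 = -1/2, a_2 = 11/24, ...
x = 10.0
# partial sum of (eu) at x = 10 versus (1 + 1/10)^10
print(e * sum(float(aj) / x**j for j, aj in enumerate(a)), (1 + 1/x)**x)
for j in (4, 8, 12):                      # |c_j| versus 1 + 1/j
    print(j, abs(e * float(a[j])), 1 + 1/j)
\end{verbatim}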
\section{Proof of the recursion formula (\ref{main2})}
\label{sec2}
Define $z=1/x$ and $f(z)=g(1/z)$ so that (\ref{eu}) is written in the new form
\begin{equation}
f(z)=(1+z)^{1/z}={\sum_{j=0}^{\infty }c_{j}z^{j}},
\label{1.8.1}
\end{equation}
where we now regard $z$ as a complex variable. Moreover, on writing it as
\begin{equation}
f(z)=\exp \left\{ z^{-1}\ln (1+z)\right\} ,
\label{1.9.3}
\end{equation}
we note that it has a removable singularity at $z=0$, and therefore can be considered analytic at $z=0$ by assuming $f(0)=\lim_{z\rightarrow
0}f(z)=c_{0}=e$. We also see that it has one finite singularity (a logarithmic branch point) at $z=-1$. Therefore the radius of convergence of the series (\ref{1.8.1}) is 1, i.e. it converges for $\left\vert z\right\vert <1$. Thus (\ref{eu}) converges for $x>1$, as asserted. By taking the principal branch of the logarithm in (\ref{1.9.3}) we have that $f(z)$ is analytic on the cut plane $\mathbb{C} \setminus (-\infty ,-1]$.
Next, on taking the natural logarithm of both sides of (\ref{1.8.1}), we get
\begin{equation}
{\frac{1}{z}\ln (1+z)=\ln }\left\{ {\sum_{j=0}^{\infty }c_{j}z^{j}}\right\}.
\label{1.8.2}
\end{equation}
Expanding the $\ln (1+z)$ term by its Maclaurin expansion, valid for $\left\vert z\right\vert <1$, we arrive at
\begin{equation}
\sum_{j=0}^{\infty }(-1)^{j}\frac{z^{j}}{j+1}=\ln \left\{ \sum_{j=0}^{\infty
}c_{j}z^{j}\right\} .
\label{1.8.5}
\end{equation}
Next differentiate both sides with respect to $z$ (the constant $j=0$ term on the left, which equals $\ln c_{0}=1$, drops out) to yield
\begin{equation}
\sum_{j=1}^{\infty }(-1)^{j}\frac{jz^{j-1}}{j+1}=\sum_{j=1}^{\infty
}jc_{j}z^{j-1}\left[ \sum_{j=0}^{\infty }c_{j}z^{j}\right] ^{-1}.
\label{1.8.6}
\end{equation}
By shifting indices of the series starting at $j=1$ to start at $j=0$, and taking the series $\sum_{j=0}^{\infty }c_{j}z^{j}$ to the left-hand side, we see this is equivalent to
\begin{equation}
\sum_{j=0}^{\infty }c_{j}z^{j}\sum_{j=0}^{\infty
}d_{j}z^{j}=\sum_{j=0}^{\infty }(j+1)c_{j+1}z^{j}, \label{1.8.7}
\end{equation}
where $d_{j}$ is given by
\begin{equation}
d_{j}=(-1)^{j+1}\frac{j+1}{j+2}.
\label{1.8.8}
\end{equation}
We now use the Cauchy product, which is the discrete convolution of two infinite series. It is given by the formula \cite[Sect. 73]{Brown:2014:CVA}
\begin{equation}
\sum_{j=0}^{\infty }C_{j}\sum_{j=0}^{\infty }D_{j}=\sum_{j=0}^{\infty }\left[\sum_{l=0}^{j}C_{l}D_{j-l}\right].
\label{1.8.9}
\end{equation}
Applying this to the left-hand side of (\ref{1.8.7}), we combine both power series into the following single power series
\begin{equation}
\sum_{j=0}^{\infty }c_{j}z^{j}\sum_{j=0}^{\infty
}d_{j}z^{j}=\sum_{j=0}^{\infty }\left[ {\sum_{l=0}^{j}c_{l}d_{j-l}}\right]
z^{j}{.} \label{1.8.11}
\end{equation}
Finally substitute this into the left-hand side of (\ref{1.8.7}), and equate coefficients of $z^{j}$, to obtain
\begin{equation}
{\sum_{l=0}^{j}c_{l}d_{j-l}}=(j+1)c_{j+1}, \label{1.8.13}
\end{equation}
and then using (\ref{1.8.8}) this leads to (\ref{main2}).
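As an independent cross-check of the recursion just derived, one can compare its output with a direct Maclaurin expansion of (\ref{1.8.1}). Here is a sketch (ours; it assumes Python with \texttt{sympy}, whose series expansion handles the removable singularity at $z=0$):
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
n = 6
ser = sp.expand(sp.series(sp.exp(sp.log(1 + z) / z), z, 0, n).removeO())
from_series = [sp.simplify(ser.coeff(z, j) / sp.E) for j in range(n)]

a = [sp.Integer(1)]                       # the rationals a_j = c_j / e
for j in range(n - 1):
    a.append(sum((-1)**(j - l + 1) * sp.Rational(j - l + 1, j - l + 2) * a[l]
                 for l in range(j + 1)) / (j + 1))

print(from_series)   # [1, -1/2, 11/24, -7/16, 2447/5760, -959/2304]
print(a)             # identical
\end{verbatim}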
\section{Proof of the asymptotic approximation (\ref{main3})}
\label{sec3}
We will use the famous Cauchy integral formula \cite[Eq. 1.9.31]{NIST:DLMF} to obtain an integral representation for the coefficients $c_{j}$. If $C_r$ is the positively orientated circle $\{z:|z|=r\}$ for arbitrary $r \in (0,1)$ then from (\ref{1.8.1}) and (\ref{1.9.3})
\begin{equation}
c_{j}={\frac{1}{2\pi i}\oint_{C_r}\frac{(1+z)^{1/z}}{z^{j+1}}\,dz=\frac{1}{
2\pi i}\oint_{C_r}\frac{\exp \left\{ z^{-1}\ln (1+z)\right\} }{z^{j+1}}\,dz}.
\label{1.9.7}
\end{equation}
In the second integral of (\ref{1.9.7}) we rewrite the exponential term using the geometric series
\begin{equation}
\frac{1}{z}=-\frac{1}{1-(1+z)}
=-(1+\delta +\delta^{2}+\delta ^{3}+\cdots ),
\label{1.9.8}
\end{equation}
where $\delta =1+z$, assuming $0<|\delta |<1$. So from (\ref{1.9.3})
\begin{equation}
f(z)=\exp \left\{ -\ln (\delta )\left( 1+\sum_{j=1}
^{\infty }\delta^{j}\right) \right\}
= \frac{1}{\delta}\exp \left\{ -\ln (\delta
)\sum_{j=1}^{\infty }\delta ^{j}\right\} .
\label{1.9.9}
\end{equation}
Note that $\delta ^{j}\ln (\delta )\rightarrow 0$ as $\delta \rightarrow 0$ for $j=1,2,3,\cdots $ by L'Hopital's rule. So using the Maclaurin expansion of the exponential function along with (\ref{1.9.9}) this function has the expansion
\begin{equation}
f(z)=\frac{1}{\delta}\left( 1-v+\frac{v^{2}}{2!}-\frac{v^{3}}{3!}+\cdots \right),
\label{1.9.11}
\end{equation}
for $0<|\delta |<1$, where $v=\ln (\delta )\sum_{j=1}^{\infty }\delta ^{j}$. From this one deduces for small $\delta $ that
\begin{equation}
v=\delta \ln (\delta )+\delta ^{2}\ln (\delta )+{\mathcal{O}}\left\{ \delta
^{3}\ln (\delta )\right\},\,
v^{2}=\delta ^{2}\ln ^{2}(\delta )+{\mathcal{O}}\left\{ \delta ^{3}\ln
^{2}(\delta )\right\},
\label{1.9.13}
\end{equation}
and
\begin{equation}
v^{j}={\mathcal{O}}\left\{ \delta ^{3}\ln ^{3}(\delta )\right\} \quad
(j=3,4,5,\cdots ).
\label{1.9.15}
\end{equation}
Recalling $\delta =1+z$ we consequently have from (\ref{1.9.11})--(\ref{1.9.15})
\begin{equation}
f(z)=\left( 1+z\right) ^{-1}-\ln \left( 1+z\right) +R(z),
\label{1.9.15a}
\end{equation}
where
\begin{multline}
R(z)=(1+z)^{1/z}-\left( 1+z\right) ^{-1}+\ln (1+z) \\
=\left[ \tfrac{1}{2}\ln \left( 1+z\right) ^{2}-\ln \left( 1+z\right) \right]
\left( 1+z\right) +{\mathcal{O}}\left\{ \ln \left( 1+z\right) ^{3}\left(
1+z\right) ^{2}\right\} ,
\label{1.9.16}
\end{multline}
as $z\rightarrow -1$.
Now substitute (\ref{1.9.15a}) into (\ref{1.9.7}) to get
\begin{equation}
c_{j}=I_{1,j}+I_{2,j}+\eta_{j},
\label{I0}
\end{equation}
where
\begin{equation}
I_{1,j}=\frac{1}{2\pi i}\oint_{C_r}\frac{1}{z^{j+1}(1+z)}\,dz,\,
I_{2,j}=-\frac{1}{2\pi i}\oint_{C_r}\frac{\ln \left( 1+z\right) }{z^{j+1}}\,dz,
\label{I1}
\end{equation}
and
\begin{equation}
\eta_{j}=\frac{1}{2\pi i}\oint_{C_r}\frac{R(z)}{z^{j+1}}\,dz.
\label{1.10.4}
\end{equation}
The integrals in (\ref{I1}) can readily be evaluated by residue theory. For the first we have by the geometric series expansion
\begin{equation}
I_{1,j}=\underset{z=0}{\mathrm{Res}}
\left\{\frac{1}{z^{j+1}(z+1)}\right\}
=\underset{z=0}{\mathrm{Res}}
\left\{ \sum_{s=0}^{\infty }(-1)^{s}z^{s-j-1}\right\} ={(-1)^{j}}.
\label{1.9.20}
\end{equation}
Likewise for $I_{2,j}$ one finds that
\begin{equation}
I_{2,j}=-\underset{z=0}{\mathrm{Res}}
\left\{\frac{\ln(1+z)}{z^{j+1}} \right\} = \frac{(-1)^{j}}{j}.
\label{1.9.21}
\end{equation}
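Both residue evaluations are easily verified for small $j$; a quick sketch (ours, assuming Python with \texttt{sympy}):
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
for j in (1, 2, 3, 4):
    I1 = sp.residue(1 / (z**(j + 1) * (1 + z)), z, 0)
    I2 = -sp.residue(sp.log(1 + z) / z**(j + 1), z, 0)
    print(j, I1, I2)       # expect (-1)**j and (-1)**j / j
\end{verbatim}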
For the integral (\ref{1.10.4}) we make a change of variable $w=-\left( z+1\right)$ to obtain the following
\begin{equation}
\eta_{j}=\frac{(-1)^{j+1}}{2\pi i}\oint_{C_r^{\prime }}
\frac{R(-1-w)}{(1+w)^{j+1}}\,dw.
\label{1.10.7}
\end{equation}
The contour $C_r^{\prime }$ in the $w$ plane is now the circle $\{w:|w+1|=r\}$ for $0<r<1$, and is positively orientated. This lies in the left half plane and encircles $w=-1$. The integrand of (\ref{1.10.7}) has a branch point at $w=0$ and a pole at $w=-1$, and is analytic elsewhere in the $w$ plane having a cut along the non-negative real axis. So we can deform the contour to a new one, called $\Gamma_{\epsilon,\rho}$, as seen in \cref{fig1}.
\begin{figure}[ht!]
\centering
\includegraphics[width=130mm, scale=1, trim=0 270 0 220, clip]{Contour.eps}
\caption{Contour $\Gamma_{\epsilon,\rho}$ \label{fig:fig1}}
\label{fig1}
\end{figure}
This contour consists of circles $\gamma _{\epsilon
}$ and $\gamma _{\rho }$\ centered at $w=0,$ of radius $\epsilon $ and $\rho $ (respectively), where $0<\epsilon <1<\rho $, and horizontal line segments $l_{1}$ and $l_{2}$ with end points $w=\epsilon \pm i0$ and $w=\rho \pm i0$ above and below the cut.
Now from (\ref{1.9.16}) we see that the integrand of (\ref{1.10.7}) is $\mathcal{O}\{w\ln(w)^{2}\}$ as $w\rightarrow 0$, and $\mathcal{O}\{w^{-j-1}\ln (w)\}$ as $w\rightarrow \infty $. Hence the contributions of $\gamma _{\epsilon }$ and $\gamma_{\rho }$ vanish as $\epsilon \rightarrow {0}$ and $\rho \rightarrow \infty$. Then the only contribution will be along $l_{1}\cup l_{2}$, where now these lines extend from $0$ to $\infty $. We are therefore left with
\begin{equation}
\eta_{j}=\frac{(-1)^{j+1}}{2\pi i}\int_{l_{1}\cup l_{2}}\frac{R(-1-w)}{
(1+w)^{j+1}}\,dw.
\label{1.10.8}
\end{equation}
Next from (\ref{1.9.16}) $R(-1-w)=\mathcal{O}\{w\ln (w)^{2}\}$ uniformly for $w\in l_{1}\cup l_{2}$, and so from (\ref{1.10.8}) for unbounded $j$
\begin{equation}
\eta_{j}=\mathcal{O}\left\{ \int_{0}^{\infty }
\frac{w \ln(w)^{2}}{(1+w)^{j+1}}dw\right\}.
\label{1.10.10}
\end{equation}
We then let $1+w=e^{t}$, and consequently from (\ref{1.10.10}) obtain
\begin{equation}
\eta_{j}=\mathcal{O}\left\{ \int_{0}^{\infty }{e^{-jt}(e^{t}-1)\ln
(e^{t}-1)^{2}\,dt}\right\}.
\label{1.10.12}
\end{equation}
Now split the integral in (\ref{1.10.12}) into two integrals, one from $t=0$ to $t=1$ and the other from $t=1$ to $t=\infty$. For the first use $e^{t}-1=\mathcal{O}(t)$ for $0\leq t\leq 1$, and for the second use $\ln (e^{t}-1)=\mathcal{O}(t)$ for $1\leq t<\infty $. Thus as $j\rightarrow {\infty}$ we deduce that
\begin{multline}
\eta_{j}=\mathcal{O}\left\{ \int_{0}^{1}{e^{-jt}t\ln (t)^{2}\,dt}\right\}
+\mathcal{O}\left\{ \int_{1}^{\infty }e^{-(j-1)t}t^{2}\,dt\right\} \\
=\mathcal{O}\left( \frac{{\ln (j)}^{2}}{j^2}\right)
+\mathcal{O}\left(\frac{1}{je^{j}}\right)
=\mathcal{O}\left( \frac{{\ln (j)}^{2}}{j^2}\right),
\label{1.10.13}
\end{multline}
where the third $\mathcal{O}$ term comes from \cite[Chap. 9, Eq. (1.07)]{Olver:1997:ASF}, and the fourth $\mathcal{O}$ term comes from integration by parts. Finally, from (\ref{I0}), (\ref{1.9.20}), (\ref{1.9.21}) and (\ref{1.10.13}) we arrive at (\ref{main3}).
\section*{Acknowledgments}
TMD acknowledges financial support from Ministerio de Ciencia e Innovaci\'on, Spain,
projects MTM2015-67142-P (MINECO/FEDER, UE) and PGC2018-098279-B-I00 (MCIU/AEI/FEDER, UE).
\bibliographystyle{siamplain}
\section{Introduction} \label{sec:intro}
Blazar OJ 287, which hosts the most massive black hole among all blazars, is located at a redshift of 0.306 (\citealt{Mao2011}). It is classified as a BL Lac object in the 3FGL (\citealt{3FGL}) \& 4FGL (\citealt{4FGL}) catalogs. It is known for its strong radio, optical, X-ray, and $\gamma$-ray variability, on time scales ranging from hours to decades across the wavebands (\citealt{Goyal_2020}). OJ 287 attracted further attention after the detection of high optical activity recurring at a regular interval of about 12 years. In fact, this was the first AGN classified as a blazar whose periodic behaviour was identified, through the analysis of its century-long optical light curve. This challenges the general picture of blazars, in which flares or high-activity states occur randomly; in other words, blazars usually show highly stochastic variability across the electromagnetic spectrum. The periodic behavior observed in the optical light curve encouraged many authors to study this source in detail and to propose possible physical explanations for a periodicity that is new to a blazar. In the literature, \citet{Sillanpaa_1988} proposed a model of a binary black hole system in which the periodic variations are attributed to tidally induced mass inflows from the accretion disk into the black hole (BH). They suggested that during the periastron passage, i.e., when the secondary black hole passes close to the primary BH, an optical outburst is produced. It has been observed that flares sometimes appear somewhat earlier or later than their predicted time, which is difficult to explain from first principles in a binary BH model. Later, \citet{Lehto_1996} performed a detailed analysis of the substructure inside the major flares and modified the binary BH model to explain the sharp flares. In their model the secondary BH crosses the accretion disk of the primary BH and produces the major optical flares. Other possible models have also been proposed in the recent past. The study of \citet{Britzen_2018}, based on 15~GHz radio observations, suggests highly Doppler-boosted jet emission for the major flares and jet precession as the explanation of the periodicity in the light curve. Their model attributes the optical emission to synchrotron radiation of relativistic electrons inside the jet. Many further studies suggest that the BBH model is the more successful in explaining the double-peaked optical flares (\citealt{Dey_2018}).
In the binary BH model it is shown that the impact of the secondary BH onto the accretion disk of the primary creates two hot bubbles of gas, one on each side of the disk, which radiate in the optical through thermal bremsstrahlung after becoming optically transparent. Since the secondary BH crosses the disk twice in each orbit, a double-peaked flare is produced. The orbital period of the secondary BH is associated with the $\sim$12-year periodicity seen in the light curve.
A recent study by \citet{Dey_2018} confirmed the presence of 24 flares between 1888 and 2015 within the BBH model. The 2015 flare was also explained by \citet{Dey_2018} in the context of the binary black hole model. \citet{Valtonen_2019} estimated the values of the physical parameters of the impact of the secondary BH onto the accretion disk of the primary in the BBH model.
In the past decade many broadband studies have been carried out on this source.
In broadband analyses, this source is classified as a low-peaked BL Lac object: the synchrotron emission is constrained by the NIR to soft X-ray data, as the synchrotron peak of the spectral energy distribution (SED) lies below 10$^{14}$~Hz. The high-energy hump of the SED generally covers the X-ray and gamma-ray frequencies, with the peak at around 0.1~GeV to a few GeV (\citealt{abdo_2010}, \citealt{Kushwaha_2013}). Synchrotron self-Compton (SSC) is the most widely accepted mechanism for the second SED peak. However, for some flares it has been found that the SSC model is not sufficient to explain the high-energy part of the spectrum, and an additional external thermal photon field has been invoked. \citet{Kushwaha_2013} used an external photon field at a temperature of 250~K as the source of external seed photons, which are up-scattered by the highly relativistic electrons in the jet through the inverse-Compton mechanism.
In X-rays, this source has been studied in detail during several active states. The largest number of observations, with the longest durations, have been carried out with the {\it Swift~\/}-XRT telescope, which covered many bright X-ray flares of OJ 287. From the earlier X-ray flares it has been found that OJ 287 sometimes shows a significant change in the optical-UV and X-ray spectrum, which eventually leads to a shift of the synchrotron peak towards higher energy during the flare. A significant change in the X-ray spectrum and the emergence of a new HBL component are very rare in blazars. However, this kind of behavior has been seen in a few sources studied in the past (\citealt{Pian_1998}, \citealt{Raiteri15}, \citealt{Kapanadze_2018}, \citealt{kushwaha_2018a, kushwaha_2018b, Kushwaha_2020}).
In 2020, a bright optical-UV, and X-ray flare was reported in various ATels\footnote{http://www.astronomerstelegram.org/?read=13677}(\citealt{ATel13637}, \citealt{ATel13658}, \citealt{ATel13677}, \citealt{ATel13755}, \citealt{ATel13785}).
Recently, \citet{Komossa_2020} presented a detailed analysis of data from the {\it XMM-Newton~\/} and {\it NuSTAR~\/} telescopes. Many {\it XMM-Newton~\/} archival observations, along with the recent 2018 and 2020 observations, were analysed, and the spectrum in 2020 was found to behave very differently from the archival observations, including those of 2018.
They suggested a non-thermal origin of the 2020 flares.
We have performed broadband SED modeling of the flares during 2017--2020, which is presented in \citet{Prince-2021}.
In this work, we provide a detailed analysis of \textit{Swift}-XRT and UVOT data and report the emergence of a new HBL component, which eventually leads to a shift of the synchrotron peak towards higher energy. The synchrotron emission is constrained by the optical-UV and X-ray data. The emergence of an HBL component in low-peaked BL Lac blazars is very rare. We also carried out a Target-of-Opportunity (ToO) observation of the source using India's first space-based multi-wavelength mission, {\it AstroSat~\/} (\citealt{Agrawal_2017}). The results from the Soft X-ray focusing Telescope (SXT) and Large Area X-ray Proportional Counter (LAXPC) instruments of {\it AstroSat~\/} are also discussed in this work. At the end, we discuss the color-magnitude diagram, followed by the summary and conclusions.
\begin{figure*}
\centering
\includegraphics[scale=0.6]{historical-Xray-lc.eps}
\caption{Long-term X-ray light curve from {\it Swift~\/}-XRT. WT: Windowed Timing, PC: Photon Counting, Hard: 1.5--10.0 keV, Soft: 0.3--1.5 keV.}
\label{fig:historical}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.5]{soft-hard_lc.eps}
\caption{Light curves in the soft and hard X-ray bands. The lower panel shows the hardness ratio.}
\label{fig:soft-xray}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.5]{HR.eps}
\caption{Hardness ratio plotted against the total (0.3--10.0 keV) flux.}
\label{fig:HR}
\end{figure}
\section{X-ray Analysis}
\textbf{Swift-XRT:}
OJ 287 is continuously monitored by the Swift telescope under the project summarised in \citet{komossa2017}.
On February 3, 2017, an X-ray flare was observed by the {\it Swift~\/} telescope (\citealt{Gehrels_2004}), and the results were reported in ATel 10043 (\citealt{Grupe_2017}). It was reported as the brightest flare detected since {\it Swift~\/} monitoring of the source began. A long-term light curve is presented in \citet{Komossa_2020}, where multiple flares were observed in X-rays.
{\it Swift~\/} is a space-based telescope with three instruments on board, observing all kinds of Galactic and extragalactic sources in soft $\&$ hard X-rays, optical, and UV simultaneously. The working energy range of \textit{Swift}-XRT is 0.3--10.0~keV.
The BL Lac OJ 287 was observed by the \textit{Swift}-XRT telescope during multiple X-ray flaring episodes between November 2019 and May 2020.
We have studied the period between 2019 and 2020 in this work.
We followed the standard analysis procedure to reduce the raw \textit{Swift} data, as was done in \citet{Komossa_2020}.
The modeling is done for an energy range of 0.3--10.0~keV with the Galactic absorption column density $n_H$ = 3.04$\times$10$^{20}$~cm$^{-2}$ from \citet{Dickey1990}.
\textbf{AstroSat-SXT:}
The SXT observations were carried out in the Photon Counting (PC) mode, and the Level-1 data were reduced using \texttt{sxtpipeline1.4b} (Release Date: 2019-01-04) to generate the Level-2 data products (\citealt{Singh2016, Singh_2017}). The events from the various orbits were merged using the \texttt{SXTEVTMERGERTOOL} in Julia.
The merged event list was then used for the temporal and spectral analysis. The merged event files were processed with the tool \texttt{Xselect}, where a source region of 10 arcsec was chosen around the source, with a blank-sky region for the background. The response files and the background spectra are provided by the SXT-POC (Payload Operation Center) team. Finally, the grouped spectra were fitted in \texttt{Xspec} in the 0.3--10 keV band, and the $\chi^2$ statistic was used to obtain the best fit.
\textbf{AstroSat-LAXPC:}
LAXPC (\citealt{Yadav2016}) is designed to observe sources over a very wide X-ray range, from 3 keV to 80 keV. It has three identical proportional counters (LAXPC10, LAXPC20, \& LAXPC30) on board the {\it AstroSat~\/} satellite. The data used in this work are taken from the LAXPC20 detector; the LAXPC30 detector is switched off due to gain instability issues. The data reduction and further processing of the Level-1 data were carried out using the LAXPC data analysis pipeline version 3.1\footnote{Data analysis software was obtained from http://astrosat-ssc.iucaa.in/?q=laxpcData}. The pipeline code combines data from multiple orbits and also filters out overlapping segments between orbits. Using the standard pipeline tool \texttt{laxpc\_make\_event}, we generated the cleaned event file. A good time interval (GTI) window was applied during the processing using the \texttt{laxpc\_make\_stdgti} tool, in order to exclude the time intervals corresponding to Earth occultation, SAA passages, and the standard elevation-angle screening criteria. Since blazars are faint sources for LAXPC, we adopted the faint-source pipeline for the spectral analysis. In order to reduce the background contribution from all LAXPC layers, only the top detector layer was used for the timing and spectral analysis of the LAXPC20 data.
\textbf{NuSTAR:} We also searched the archive for {\it NuSTAR~\/} observations.
A ToO observation with {\it NuSTAR~\/} (\citealt{Harrison_2013}) was performed during the high X-ray flux state on 2020-05-04, starting at 20:36:09 UTC, with an effective exposure time of 29.5 ks, by \citet{Komossa_2020}. The data reduction was performed using the latest {\it NuSTAR~\/} data analysis software (NuSTARDAS) version 1.9.2 provided with \texttt{HEASOFT}. To extract the source and background spectra, circular regions of 20$''$ and 50$''$ were chosen around the source and away from the source, respectively.
\section{Optical and UV Observations}
Having the {\it Swift~\/} Ultraviolet/Optical Telescope (UVOT, \citealt{Roming_2005}) on board along with {\it Swift~\/}-XRT has the advantage of providing simultaneous observations in the optical and UV bands.
{\it Swift~\/}-UVOT has also observed OJ 287 in all six available filters (U, V, B, W1, M2, and W2), simultaneously with the X-ray observations. A long-term optical-UV light curve is presented in \citet{Komossa_2020}. We have used the archival data for the period 2019--2020 and followed the standard analysis procedure to extract the magnitudes and fluxes, as was done in \citet{Komossa_2020}.
We considered a region of 5 arcsec around the source as the source region, and a region away from the source as the background region.
The magnitudes are corrected for Galactic extinction using the reddening E(B-V) = 0.0241 from \citet{Schlafly_2011} and the zero points from \citet{Breeveld_2011}. The magnitudes are then converted into fluxes using the conversion factors estimated by \citet{Poole_2008} and the ratios of extinction to reddening from \citet{Giommi_2006}.
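For concreteness, the sketch below illustrates this dereddening and magnitude-to-flux step; the extinction-to-reddening ratios and zero-point flux densities are illustrative placeholders, not the exact values of \citet{Poole_2008} and \citet{Giommi_2006}.
\begin{verbatim}
E_BV = 0.0241  # reddening towards OJ 287 (Schlafly & Finkbeiner 2011)

# A_lambda / E(B-V) per filter, and zero-point flux densities in
# erg cm^-2 s^-1 A^-1; both sets of numbers are placeholders.
ext_ratio = {"V": 3.1, "B": 4.1, "U": 4.9}
f_zero    = {"V": 3.63e-9, "B": 6.35e-9, "U": 4.27e-9}

def deredden_and_convert(band, mag):
    """Correct an observed magnitude for Galactic extinction,
    then convert it to a flux density."""
    a_lambda = ext_ratio[band] * E_BV   # extinction in this band
    return f_zero[band] * 10 ** (-0.4 * (mag - a_lambda))

print(deredden_and_convert("V", 14.5))  # hypothetical V = 14.5 source
\end{verbatim}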
\section{Results}
\subsection{X-ray Light curves}
\textbf{XRT:} We have produced the long-term X-ray light curve of OJ 287 using \textit{Swift}-XRT observations, which is shown in Figure \ref{fig:historical}. The XRT observed the source in two different modes: Photon Counting (PC) and Windowed Timing (WT).
Figure \ref{fig:historical} shows that the source exhibited a very bright soft X-ray flare in 2016--2017, which was studied in detail by \citet{kushwaha_2018b}. In 2020, the second-brightest soft X-ray flare was observed, marked by the colored patch in Figure \ref{fig:historical}. Our focus is to study the 2020 flare in detail and compare it with the earlier 2016--2017 flare.
Figure \ref{fig:soft-xray} shows the highlighted part of the light curve. The soft (0.3--1.5~keV) and hard (1.5--10.0~keV) light curves are shown in two colors, and the lower panel shows the hardness ratio, defined as the ratio of hard counts to soft counts. From the upper panel it is clear that the blazar OJ 287 is highly variable and flaring in soft X-rays, while the count rate is lower and the source is far less variable in the hard X-ray band. At the beginning of the light curve (around MJD 58800), the fluxes are similar in both bands, but with time the soft X-ray flux rises slowly with respect to the hard X-ray flux. During April 2020 ($\sim$MJD 58940), the soft X-ray flux is almost four times higher than the hard X-ray flux. The source remained in a very high state for an entire month, until May 2020, because of which the flare was eventually recognized as the second-brightest X-ray flare in the history of OJ 287; it was first identified as such by \citet{Komossa_2020}. To characterize the variability in the soft and hard X-ray bands, we estimated the fractional variability amplitude F$_{\rm var}$ (\citealt{Vaughan_2003}). The source is found to be more variable in the 0.3--1.5~keV band, with F$_{\rm var}$ = 0.38$\pm$0.01, and less variable in the 1.5--10~keV band, with F$_{\rm var}$ = 0.27$\pm$0.01.
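For reference, a minimal sketch of this F$_{\rm var}$ estimate (following eqs. 10 and B2 of \citealt{Vaughan_2003}; the light-curve arrays are assumed inputs) is:
\begin{verbatim}
import numpy as np

def fractional_variability(rate, err):
    """F_var of a binned light curve and its uncertainty
    (Vaughan et al. 2003, eqs. 10 and B2)."""
    rate, err = np.asarray(rate), np.asarray(err)
    n, mean = len(rate), rate.mean()
    s2 = rate.var(ddof=1)          # sample variance of the rates
    mse = np.mean(err ** 2)        # mean square measurement error
    fvar = np.sqrt(max(s2 - mse, 0.0)) / mean
    term1 = np.sqrt(0.5 / n) * mse / (mean ** 2 * fvar)
    term2 = np.sqrt(mse / n) / mean
    return fvar, np.sqrt(term1 ** 2 + term2 ** 2)
\end{verbatim}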
The lower panel shows that, as the source flux increases, the hardness ratio shifts towards lower values. To see the trend clearly, we have plotted the hardness ratio (HR) against the total (0.3--10.0~keV) flux in Figure \ref{fig:HR}. The HR is anti-correlated with the total flux, with a Pearson's correlation coefficient of $-0.61$, suggesting a \enquote{softer-when-brighter} trend. \citet{Komossa_2020} showed the \enquote{softer-when-brighter} trend based on the variation of the 0.3--10.0 keV flux with the X-ray power-law spectral index.
The \enquote{softer-when-brighter} trend in X-rays is a prevalent behavior in the history of OJ 287, except for the 2015 flare, where the spectrum instead hardened with increasing flux. Comparing the softer behavior detected in the current flaring state with the harder behavior observed in the 2015 flare, we conclude that the two flares may have different origins.
\textbf{SXT:} We carried out a Target-of-Opportunity observation with {\it AstroSat~\/} (ObsID: 9000003672) during the X-ray flare in May 2020, after an enhanced X-ray flux was reported in several ATels. The source was observed from 15--19 May 2020, and we analyzed all the observations separately. In some of the observations, {\it AstroSat~\/} detected a very low count rate and the data were heavily background dominated; due to the poor statistical quality of those Obs IDs, we did not carry out any timing or spectral study for them. The SXT spectral products corresponding to May 15, 2020 had a good signal-to-noise ratio (SNR) for a meaningful spectral study, which is discussed in Section 4.4.
\textbf{LAXPC:} Along with SXT, simultaneous LAXPC observations were carried out between May 15 and May 19, 2020 (ObsID: 9000003672). We carried out a similar data reduction process for all the observations taken during this period. As with SXT, only the data of the May 15 observation had a good SNR and are therefore presented in this paper. All the other observations were of poor statistical quality and background dominated, and we do not use them in this study. The simultaneous LAXPC and SXT observations provide an opportunity to model both spectra together; the result of this joint fit is discussed in Section 4.4.
\textbf{NuSTAR:} We also attempted a timing analysis with {\it NuSTAR~\/}, but were unable to extract a useful light curve. Therefore, the study focuses on the spectral behavior; a detailed discussion is provided in Section 4.4.
\begin{figure*}
\centering
\includegraphics[scale=0.6]{plot.eps}
\caption{Broadband light curve from November 2019 to the flaring state of May 2020. The step plot in the optical-UV bands highlights the step-wise increase in flux.}
\label{fig:mwl-lc}
\end{figure*}
\subsection{Optical-UV and $\gamma$-ray Light Curves}
In Figure \ref{fig:mwl-lc}, we show the simultaneous $\gamma$-ray, X-ray, optical, and UV light curves during the flaring period of the source, from MJD 58800 to 59012. The optical-UV emission follows the X-ray emission (\citealt{Komossa_2020}), showing an increase in brightness with time. Long-term optical and UV light curves are also discussed in \citet{Komossa_2020}; here, however, we focus on the detailed structure of the light curve during the short period between 2019 and 2020.
Based on the average magnitudes (shown by solid green lines) observed in the optical-UV bands, we divided the light curves into three parts, namely, a low state, an intermediate state, and a high state. The solid green lines, representing the average magnitudes in the three states, together form a step-function structure.
In X-rays, the flux follows a similar kind of step function, though no clear trend is seen.
These three states are used for the further study. No evident activity is seen in the $\gamma$-ray light curve shown in the uppermost panel: the flux observed in $\gamma$-rays is very low and mostly constant over time (fluctuating around the mean). The average flux during the flaring period is 2.78$\times$10$^{-8}$~ph cm$^{-2}$ s$^{-1}$, shown by a horizontal dashed line. The 4FGL catalog flux is also represented by a horizontal dashed line in Figure \ref{fig:mwl-lc}, and lies well below the average flux of the source during this period. Previous studies of the flares detected in 2015 and 2016--2017 (\citealt{kushwaha_2018a, kushwaha_2018b}) also show no strong activity in $\gamma$-rays, even though the source was flaring in optical-UV and X-rays.
\subsection{X-ray Spectral Analysis}
We have produced X-ray spectra for all three states identified in Figure \ref{fig:mwl-lc}. The spectra are fitted with the simple power-law and log-parabola models built into \texttt{Xspec}, with a Galactic absorption component \texttt{tbabs} added to the model.
The final models used are \texttt{tbabs*powerlaw} and \texttt{tbabs*logparabola}, and
the best-fit model parameters are presented in Table \ref{tab:table1}. We also carried out an \texttt{F-test} to determine the best model for the X-ray spectral fitting: if the null-hypothesis probability (p-value) is below 0.01, the log-parabola model is preferred over the power law. In our study, we always found a p-value $>$ 0.01, suggesting that the power law is preferred over the log-parabola model. To estimate the flux and its uncertainty, we added the convolution component \texttt{cflux} to the model in \texttt{Xspec}.
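As an illustration of this model comparison, the snippet below reproduces the F statistic and p-value for the low state from the $\chi^2$ values in Table \ref{tab:table1}, using the standard F-test for one extra parameter:
\begin{verbatim}
from scipy.stats import f as f_dist

def ftest(chi2_pl, dof_pl, chi2_lp, dof_lp):
    """F-test between nested models (power law vs. log parabola)."""
    F = ((chi2_pl - chi2_lp) / (dof_pl - dof_lp)) / (chi2_lp / dof_lp)
    return F, f_dist.sf(F, dof_pl - dof_lp, dof_lp)

# low-state values from Table 1: 85.30(86) vs. 84.46(85)
F, p = ftest(85.30, 86, 84.46, 85)
print(F, p)   # ~0.84 and ~0.36: p > 0.01, power law preferred
\end{verbatim}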
The power-law spectral index changes from the low state to the high state. During the low state, the power-law flux is estimated as (5.15$\pm$0.15)$\times$10$^{-12}$~erg cm$^{-2}$ s$^{-1}$, with photon spectral index ($\Gamma$) 1.89$\pm$0.07 and reduced $\chi^2$ $\sim$0.99. In the intermediate state, the flux is (7.14$\pm$0.08)$\times$10$^{-12}$~erg cm$^{-2}$ s$^{-1}$, with $\Gamma$ = 2.15$\pm$0.04 and reduced $\chi^2$ $\sim$1.15. Similarly, during the high state the average flux is (14.72$\pm$0.09)$\times$10$^{-12}$~erg cm$^{-2}$ s$^{-1}$, with $\Gamma$ = 2.56$\pm$0.02 and reduced $\chi^2$ $\sim$0.91. This shows a clear \enquote{softer-when-brighter} trend for OJ 287 on longer time scales.
The spectra for all three states are shown in Figure \ref{fig:xray-sed}, which clearly shows that the shape of the spectrum changes as the source transitions from the low state to the intermediate state and finally to the high state.
The spectral fitting of all the individual observations since 2015 was done in \citet{Komossa_2020} with a single power law, reporting a range of spectral indices between 1.6 and 3.0. Our results are based on the shorter 2019--2020 time span and are estimated for three different flux states by modeling with the PL and LP spectral models.
We have also extracted the spectrum in the optical-UV band and, as in the X-ray spectrum, a significant change is noticed as the source transitions from the low to the high flux state. The combined optical-UV and X-ray spectra for all three states are shown in Figure \ref{fig:xray-uv-sed}, revealing a significant change in the combined spectrum that corresponds to the emergence of a new HBL component. A significant spectral change in X-rays with Swift-XRT, and the emergence of a new component, was already reported by \citet{Komossa_2020, Komossa2021} during the 2020 flare.
The significant change in the optical-UV and X-ray spectra implies a shift of the synchrotron peak of the broadband SED towards higher energy. As a result, OJ 287, a well-known LBL blazar (\citealt{Nilsson_2018}), behaves like an HBL-type source. A recent study of the kinematic features of blazar radio jets classifies OJ 287 as an IBL-type blazar (\citealt{Hervet_2016}).
The details of the new HBL component are discussed in Section 5.
\begin{figure*}
\centering
\includegraphics[scale=0.33,angle=-90]{lowstate-PL.eps}
\includegraphics[scale=0.33,angle=-90]{middlestate-PL.eps}
\includegraphics[scale=0.33,angle=-90]{highstate_PL.eps}
\caption{X-ray spectra of the three states, each fitted with a power law.}
\label{fig:xray-sed}
\end{figure*}
\begin{figure}
\centering
\includegraphics[scale=0.5]{sed1.eps}
\caption{Combined broadband spectra of all three states. A clear spectral change is observed from the low state to the high state.}
\label{fig:xray-uv-sed}
\end{figure}
The X-ray spectra and the corresponding UVOT spectra for the three states are shown in Figure \ref{fig:xray-uv-sed}. The shape of the spectrum flips around a pivot energy at $\sim$10$^{18}$~Hz as the source changes state. The UVOT spectrum is quite steep during the low state, becomes flat in the intermediate state, and finally, in the high state, rises with energy. The change in the UVOT and X-ray spectra together can help determine the peak of the synchrotron emission. In Section 5, we model the UVOT and X-ray spectra for a better understanding of the synchrotron peak during the various states.
\subsection{\textit{Swift}-XRT, {\it NuSTAR~\/}, and {\it AstroSat~\/} Spectral Analysis}
In this section, we present the spectral analysis of the simultaneous X-ray observations shown in Figure \ref{fig:joint-lc}.
We have used the {\it NuSTAR~\/} observation performed on May 04, 2020 (MJD 58973.86) by \citet{Komossa_2020}, and one \textit{AstroSat}-SXT observation performed by us on May 15, 2020 (MJD 58984.0).
Simultaneous {\it Swift~\/} observations were also made under the program run by \citet{komossa2017}; we have taken those data from the archive.
For the simultaneous {\it NuSTAR~\/} and \textit{Swift}-XRT observations, we fitted the joint \textit{Swift}-XRT and {\it NuSTAR~\/} spectrum to obtain a clear picture of the broadband spectrum.
The \textit{Swift}-XRT observation with ObsID 35905053 is simultaneous with the {\it NuSTAR~\/} observation, and hence a joint XRT+{\it NuSTAR~\/} spectral fit was produced, shown in Figure \ref{fig:xrt-nustar}. The joint fitting is performed in \texttt{Xspec} with a power-law spectral model; a single power law can explain the total spectrum from 0.3 to 50.0~keV. The model used is \texttt{constant*tbabs*powerlaw}, where \texttt{constant} is added to match the normalizations of the two data groups and \texttt{tbabs} accounts for Galactic absorption. The best-fit power-law index is 2.37$\pm$0.09; the flux in 0.3--10.0 keV is (2.10$\pm$0.07)$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ and the flux in 3.0--50.0~keV is (6.34$\pm$0.20)$\times$10$^{-12}$~erg cm$^{-2}$ s$^{-1}$.
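A hedged PyXspec sketch of such a joint fit is shown below; the file names are placeholders for the actual grouped XRT and NuSTAR spectra, and only the essential steps are reproduced:
\begin{verbatim}
from xspec import AllData, AllModels, Fit, Model

AllData("1:1 xrt_grp.pha 2:2 nustar_fpma_grp.pha")  # two data groups
AllData.ignore("bad")
AllData(1).ignore("**-0.3 10.0-**")   # XRT band
AllData(2).ignore("**-3.0 50.0-**")   # NuSTAR band

m = Model("constant*tbabs*powerlaw")
m.TBabs.nH = 0.0304                # Galactic column, 10^22 cm^-2
m.TBabs.nH.frozen = True
m.constant.factor = 1.0
m.constant.factor.frozen = True    # reference data group

m2 = AllModels(2)                  # model as seen by data group 2
m2.constant.factor.link = ""       # free the cross-normalisation
m2.constant.factor.frozen = False

Fit.statMethod = "chi"
Fit.perform()
\end{verbatim}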
The {\it AstroSat~\/} SXT and LAXPC observations are simultaneous, as marked in Figure \ref{fig:joint-lc}. The joint SXT+LAXPC spectral fitting was done using the model \texttt{constant*tbabs*powerlaw}, i.e., a power law with Galactic absorption and a cross-normalization constant, as described above; the fit is shown in Figure \ref{fig:sxt-spec}. The simultaneous XRT observation corresponding to ObsID 00035905063 was also analyzed separately, and its spectral index was found to be 2.67$\pm$0.14. Compared with the $\Gamma$ = 2.43$\pm$0.09 obtained from the joint SXT+LAXPC fit, the XRT spectrum is somewhat steeper than the joint SXT+LAXPC spectrum.
The shapes of the joint XRT+{\it NuSTAR~\/} and SXT+LAXPC spectra are consistent with each other.
The best fit parameters of all the simultaneous and quasi-simultaneous spectra are presented in Table \ref{tab:table2}.
The {\it NuSTAR~\/} spectrum was also fitted in \citet{Komossa_2020} with a power-law model, yielding spectral indices of 2.36$\pm$0.06 below 10.0 keV and 2.20$\pm$0.20 above 10.0 keV. Our joint XRT+{\it NuSTAR~\/} fit with a single power law gives a spectral index of 2.37$\pm$0.09, consistent with their result below 10.0 keV. Our joint SXT+LAXPC fit with a single power law gives 2.43$\pm$0.09, consistent with our XRT+{\it NuSTAR~\/} joint fit. The SXT and LAXPC joint fit is important here because the two spectra are strictly simultaneous and the exposure time is $\sim$30 ks. Owing to the longer exposure, the \textit{AstroSat}-SXT spectrum is of much better quality than the \textit{Swift}-XRT spectrum used in the joint fitting (Figures \ref{fig:xrt-nustar} \& \ref{fig:sxt-spec}).
\begin{figure}
\centering
\includegraphics[scale=0.35]{update-joint-lc.eps}
\caption{Broadband light curve of 2020, together with the simultaneous low- and high-energy X-ray observations.}
\label{fig:joint-lc}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=-90]{Xrt5053-nustar-jointfit.eps}
\caption{Joint XRT+{\it NuSTAR~\/} fit of the simultaneous observations. Black data points show the {\it Swift~\/}-XRT data and red data points the simultaneous {\it NuSTAR~\/} data.}
\label{fig:xrt-nustar}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=-90]{joint-sxt-laxpc.eps}
\caption{Joint {\it AstroSat~\/} SXT+LAXPC fit of the simultaneous observations taken on May 15, 2020. In the upper panel, red data points represent the SXT data and blue data points the LAXPC data.}
\label{fig:sxt-spec}
\end{figure}
\section{Modeling the Synchrotron peak: HBL like component}
This section is dedicated to identifying the change in the synchrotron peak as the source moves from the low to the high flux state. Generally, the synchrotron peak in blazars can be constrained by the near-infrared and optical-UV emission together with the soft X-ray emission. Here we use the optical-UV and X-ray spectra to identify changes in the location of the synchrotron peak between the different flux states.
The approach typically adopted to determine the location of the synchrotron peak is log-parabolic modeling of the NIR, optical-UV, and X-ray spectra (\citealt{Massaro2004}, \citealt{Kapanadze_2018}). \citet{Raiteri2013} used a log-cubic model to fit the spectrum from the NIR to X-rays in order to constrain the exact location of the synchrotron peak in an LBL-type BL Lac blazar. Here we take a different approach and model the optical-UV and X-ray data points directly with the synchrotron process to estimate the location of the synchrotron peak.
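For reference, the standard log-parabolic approach mentioned above can be sketched compactly: the code below fits $\log\nu F_\nu$ with a parabola in $\log\nu$ and reads off the peak frequency. The SED points are purely illustrative, not our measured fluxes.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def log_parabola(log_nu, log_nu_p, log_f_p, b):
    # log(nu F_nu) as a parabola in log nu, peaking at log_nu_p
    return log_f_p - b * (log_nu - log_nu_p) ** 2

# illustrative points: log10(nu/Hz), log10(nu F_nu / erg cm^-2 s^-1)
log_nu  = np.array([14.7, 14.9, 15.1, 15.3, 17.0, 17.5, 18.0])
log_nfn = np.array([-10.6, -10.5, -10.5, -10.5, -10.9, -11.1, -11.4])

popt, _ = curve_fit(log_parabola, log_nu, log_nfn, p0=[15.5, -10.4, 0.1])
print("synchrotron peak at ~10^%.2f Hz" % popt[0])
\end{verbatim}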
The broadband spectral energy distributions of blazars exhibit a two-hump structure. The low-energy hump covers the range from the infrared to soft X-rays, and the high-energy hump lies between hard X-rays and $\gamma$-rays. The low-energy hump is well described by synchrotron emission produced by relativistic charged particles moving in the jet magnetic field.
To examine the position of the synchrotron peak, we modeled the optical-UV and X-ray emission together with the synchrotron model from the publicly available code GAMERA; the modeling details can be found in \citet{Prince_2021}. During the low state of the source, the synchrotron peak is at $\sim$10$^{14}$~Hz, as expected for an LBL-type source (\citealt{Padovani1995}, \citealt{abdo_2010}). During the intermediate state, the X-ray spectrum does not change much, but a marginal shift of the synchrotron peak towards higher energy is observed because of the change in the UVOT spectrum (Figure \ref{fig:sync-shift}).
During the high flux state, the shapes of both the UVOT and the X-ray spectra change significantly compared to the low flux state, suggesting the emergence of a new HBL component in this source. The GAMERA fitting shows that a higher magnetic field ($\sim$6.7~G) is required in the high flux state, whereas in the low and intermediate states the magnetic field is $\sim$4.1~G. A log-parabolic (LP) electron distribution is supplied to GAMERA to model the synchrotron peak; the spectral index $\alpha$ is 1.8 for the low and intermediate states, while a steeper index of 2.56 is required for the high state.
Our modeling also indicates the need for higher-energy electrons during the high state compared to the low and intermediate states. The parameter values obtained here are consistent with those estimated in Prince et al. 2021 (under review).
The synchrotron modeling shows that the synchrotron peak has shifted towards higher energy, with the current peak location around 10$^{16}$~Hz, typical of an HBL-type BL Lac source (\citealt{Padovani1995}, \citealt{abdo_2010}). The shift is two orders of magnitude in frequency, which is very rarely seen in a single blazar. These results suggest that, during the flaring period, the source exhibits very different spectral properties compared to its normal state; the result is shown in Figure \ref{fig:sync-shift}. In OJ 287, similar behavior was also seen during the 2016--2017 flare (\citealt{kushwaha_2018b}), when the synchrotron peak shifted towards higher energy. The HBL component first appeared in the 2017 flare; \citet{Kushwaha_2020} discussed its disappearance in 2018. The HBL component appeared again in the 2020 flare, as noted by \citet{Komossa_2020} and \citet{Kushwaha_2020}, and is also found in the present study.
These findings suggest that the source is highly variable, changes its behavior during high flux states, and fluctuates between LBL- and HBL-type behavior, leaving it with something of an identity crisis. The classification of blazars into FSRQs and BL Lacs, and further of BL Lac objects into LBL, IBL, and HBL, is based on a fixed location of the synchrotron peak (\citealt{abdo_2010}). A change in the synchrotron peak is therefore not very common and, when it occurs, it challenges our understanding of the underlying physical mechanisms in these sources. Blazar OJ 287 is a widely accepted binary black hole system, and the significant change in the optical-UV and X-ray spectral state could be associated with the disk-impact scenario. However, in the binary black hole model the BH-disk impact was expected in July--September 2019, and because of the Sun constraint it could not be observed with {\it Swift~\/}, though it was observed by the {\it Spitzer Space Telescope~\/} (\citealt{Laine_2020}). Even if the spectral change is associated with the disk-impact scenario, the appearance of the new HBL component in April--May 2020 remains puzzling and requires further study before reliable conclusions can be drawn.
One possible explanation for the new HBL component is an increase in the accretion rate (\citealt{Komossa_2020}) after the BH-disk impact, which could trigger a flare in the jet a few months later. The observed delay of around nine months between the impact and the flare is not easy to interpret, since various physical conditions could be responsible for it (disk/corona properties and geometry, magnetic field geometry, shock formation in the jet).
In our first paper, we modeled the broadband SED of the 2020 flare along with a low state and found that a single emission zone is sufficient to explain the total SED (\citealt{Prince-2021}).
A recent study by \citet{Komossa2021} discussed the long-term X-ray spectral behavior of the source. They analysed the {\it XMM-Newton~\/} and {\it Swift~\/} data from 2005--2020 and found that the X-ray spectrum is highly variable, covering the full range of behaviors seen in blazars, from significantly flat to very steep.
\begin{figure*}
\centering
\includegraphics[scale=0.35]{low-state_v2.eps}
\includegraphics[scale=0.35]{middle-state_v2.eps}
\includegraphics[scale=0.35]{high-state_V4.eps}
\caption{Broadband SED fits for the low, intermediate, and high states. The data points are from {\it Swift~\/}-XRT/UVOT, and the synchrotron peak is fitted with the code GAMERA.}
\label{fig:sync-shift}
\end{figure*}
\begin{table*}
\centering
\begin{tabular}{c|ccc|cccc|cc}
\hline
\noalign{\smallskip}
Observation ID & PL & &&LP& & &&F-test & p-value \\
&F$_{0.3-10 keV}$ & $\Gamma$& ${\chi}^2$ (dof)& F$_{0.3-10 keV}$&$\alpha$& $\beta$ &${\chi}^2$(dof) & & \\
\noalign{\smallskip}
\hline \noalign{\smallskip}
low state&5.15$\pm$0.15 & 1.89$\pm$0.07 & 85.30(86)& 4.98$\pm$0.20&1.86$\pm$0.09 & 0.12$\pm$0.21 & 84.46(85) & 0.84 & 0.36 \\
\noalign{\smallskip}
\hline \noalign{\smallskip}
intermediate state&7.14$\pm$0.08 & 2.15$\pm$0.04 & 166.40(145)& 7.22$\pm$0.10 &2.16$\pm$0.05 & 0.04$\pm$0.11 & 166.01(144) & 0.34 & 0.56 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
high state & 14.72$\pm$0.09 &2.56$\pm$0.02 & 206.14(227)& 14.60$\pm$0.12 &2.55$\pm$0.02 & 0.04$\pm$0.07 & 205.04(226)& 1.21 & 0.27 \\
\noalign{\smallskip}
\hline
\end{tabular}
\caption{Best-fit parameters of the X-ray spectra in the low, intermediate, and high states. The fluxes are in units of 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$.}
\label{tab:table1}
\end{table*}
\begin{table}
\centering
\begin{tabular}{c|ccc|}
\hline
\noalign{\smallskip}
Instruments & PL & & \\%&\\%LP& & &&F-test & p-value \\
and Dates &F$_{0.3-10 keV}$ & $\Gamma$& ${\chi}^2$ (dof)\\
& (F$_{3.0-50.0 keV}$) & & \\
\noalign{\smallskip}
\hline \noalign{\smallskip}
SXT + LAXPC &0.91$\pm$0.04 & 2.43$\pm$0.09 & 40.25(57) \\%& \\%-&- & - & - & - & - \\
& (0.96) & & \\
\noalign{\smallskip}
\hline \noalign{\smallskip}
\noalign{\smallskip} \hline \noalign{\smallskip}
XRT + {\it NuSTAR~\/} & 2.10$\pm$0.17 &2.37$\pm$0.09 & 119.22(134)\\%&
& (0.63$\pm$0.04) & & \\%\&\\% & & & & & \\
\noalign{\smallskip}
\hline
\end{tabular}
\caption{Best-fit parameters of the simultaneous X-ray spectra from XRT, SXT, {\it NuSTAR~\/}, and LAXPC. The fluxes are in units of 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$.}
\label{tab:table2}
\end{table}
\section{Color-magnitude diagram}
Simultaneous monitoring of OJ 287 in the various optical bands (U, B, V) allows us to study the color variations of the source. The color-magnitude diagram helps in understanding the origin of the different physical processes responsible for flux variations in blazars.
We fit the colour index (CI) versus magnitude (M) plots with straight lines (i.e., CI = m*M + c) and estimated the best-fit slope m, the correlation coefficient r, and the corresponding null-hypothesis probability p, which are summarized in Table \ref{tab:color-mag}. A positive slope (when the p-value is $<$ 0.05) indicates a significant positive correlation between CI and blazar magnitude, which in turn points towards a bluer-when-brighter (BWB), or equivalently redder-when-fainter, trend in the target (H.E.S.S. Collaboration et al. 2014), while a negative slope implies the opposite, i.e., a redder-when-brighter (RWB) trend \citep{2019MNRAS.488.4093A}.
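A minimal sketch of this fit, computing the slope together with both Pearson and Spearman statistics on assumed magnitude and colour arrays, is:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr, spearmanr

def color_magnitude_fit(mag, color):
    """Straight-line fit CI = m*M + c with correlation tests."""
    m, c = np.polyfit(mag, color, 1)
    r_p, p_p = pearsonr(mag, color)
    r_s, p_s = spearmanr(mag, color)
    return m, c, (r_p, p_p), (r_s, p_s)

# illustrative U magnitudes and U-B colours (not the measured data)
U  = np.array([14.9, 14.7, 14.5, 14.3, 14.1])
UB = np.array([0.42, 0.39, 0.35, 0.30, 0.27])
print(color_magnitude_fit(U, UB))
\end{verbatim}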
We have plotted four color-magnitude diagrams, B-V vs. B, U-V vs. U, U-B vs. U, and U-B vs. V, shown in Figure \ref{fig:col-mag}. The estimated values of $r$ and $p$ are given in each panel. In three cases we see a very strong correlation, with correlation coefficients of 0.69, 0.84, and 0.77, respectively, suggesting \enquote{bluer-when-brighter} chromatism. For B-V vs. B we do not find any strong correlation; the slope suggests RWB chromatism, but this could not be confirmed. A recent study by \citet{Siejkowski_2017} also did not find any correlation in the B-V vs. V color-magnitude diagram, nor in other combinations. Our study, by contrast, shows a strong correlation and strong BWB chromatism (Figure \ref{fig:col-mag}).
\par
Previous studies of the color-magnitude diagram reveal various types of behavior, including BWB and RWB trends. Observations collected in the B and V bands during 1973--1976 were studied by \citet{Carini_1992}, whose results show a hint of BWB chromatism.
Observations made in the flaring state from 1993 to 1997 in the R and V bands show clear BWB chromatism, as studied by \citet{Dai_2011}. A weak color-magnitude correlation in the V and J bands was reported by \citet{Ikejiri_2011} from observations made between May 2008 and January 2010. \citet{Wierzcholska_2015} found weak or no correlation in the color-magnitude diagram on longer timescales for observations made in the B and R bands with the ATOM telescope during 2007--2012; however, they claimed BWB chromatism in shorter isolated intervals of the long-term light curve. The color-magnitude diagram during the 2015 flare, studied by \citet{jermak_2016}, shows the BWB chromatism expected in a BL Lac-type blazar.
A recent study by \citet{Gupta_2019} collected data from various telescopes worldwide from September 2016 to November 2017 in the BVRI bands and found BWB chromatism in the V-R and R-I colors with respect to the R-band magnitude, though their Pearson's correlation coefficients are below 50\%.
In our study, the data collected over roughly one year between 2019 and 2020 show a strong correlation in the color-magnitude diagram for the U-V and U-B colors (Figure \ref{fig:col-mag}).
The mix of clear BWB, RWB, and occasional absence of correlation in the color-magnitude diagram of OJ 287 indicates the complex nature of the physical processes responsible for the optical emission in this blazar.
Further, we have also estimated the average optical spectral index following the expression from \citet{Wierzcholska_2015}; the average spectral index $\langle \alpha_{BR} \rangle$ is derived as
\begin{equation}
\langle \alpha_{BR} \rangle = \frac{0.4 \langle B-R \rangle} {\log(\nu_B/\nu_R)}
\end{equation}
where $\langle B-R \rangle$ is the average color of the two bands, and $\nu_B$ and $\nu_R$ are the effective frequencies of the corresponding B and R bands.
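As a worked example of this expression (with a hypothetical mean colour; only the UVOT U and B effective wavelengths, $\sim$3465 and $\sim$4392~\AA, are taken from \citet{Poole_2008}):
\begin{verbatim}
import numpy as np

c = 2.998e18                          # speed of light in Angstrom/s
nu_U, nu_B = c / 3465.0, c / 4392.0   # UVOT U and B effective frequencies

mean_UB = 0.35                        # hypothetical <U-B> colour
alpha_UB = 0.4 * mean_UB / np.log10(nu_U / nu_B)
print(alpha_UB)                       # ~1.36 for this illustrative colour
\end{verbatim}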
The optical spectral indices estimated for the \textit{Swift} U, B, and V bands vary between 0.89 and 1.15, indicating a significantly harder spectrum in the optical bands; the values are listed in Table \ref{tab:color-mag}. Significant spectral hardening of the optical emission was also seen by \citet{Kushwaha_2020}. During the same period, significant spectral softening is observed in X-rays (Figure \ref{fig:HR}), suggesting an anti-correlation with the optical spectral index.
\citet{Wierzcholska_2015} studied a large number of blazars and found that the optical spectral index varies from extremely steep ($>$3) to significantly flat ($<$0.5); in their study, OJ 287 has a rather steep optical spectrum, with $\alpha_{BR}$ $\sim$2--3. In our study, the anti-correlation between the optical and X-ray spectral indices may suggest that they are produced by different processes. More optical and X-ray studies are required before firm conclusions can be drawn.
\begin{table}
\centering
\begin{tabular}{c|cccc}
Color & m & r & p-value & $\langle \alpha_{BR} \rangle$ \\
\hline
B-V & -0.03 & -0.19 & 0.109 & 1.15$\pm$0.22\\
U-V & 0.16 & 0.69 & 1.39e-11& 1.01$\pm$0.10 \\
U-B & 0.20 & 0.84 & 3.95e-21 & 0.89$\pm$0.17 \\
U-B * & 0.22 & 0.77 & 7.24e-16& - \\
\hline
\end{tabular}
\caption{Color-magnitude variations and their parameters. m = slope, r = Pearson's correlation coefficient, p-value = null-hypothesis probability, U-B * = color variation with respect to the V magnitude. The last column shows the average optical spectral index.}
\label{tab:color-mag}
\end{table}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{color-mag.eps}
\caption{Colour-magnitude diagram during the 2020 flare.}
\label{fig:col-mag}
\end{figure*}
\section{Summary and Conclusions}
Here we present a detailed X-ray and optical-UV study of the blazar OJ 287 during its flaring activity in 2019--2020. The source was widely observed across the wavebands, from high-energy $\gamma$-rays down to radio. It was found to be less variable in $\gamma$-rays, while its flaring in various other wavebands was reported through various ATels. With \textit{Swift}-XRT, the source was continuously monitored throughout the flaring period (\citealt{komossa2017, Komossa_2020}).
A ToO observation of $\sim$30~ks was carried out during the peak of the X-ray flare using {\it NuSTAR~\/} (\citealt{Komossa_2020}).
These datasets are publicly available on the HEASARC webpage and are therefore used in this study.
We proposed a ToO observation of OJ 287 with {\it AstroSat~\/}, a broadband telescope operated by India, during the same period. The study includes data from the \textit{AstroSat}-SXT and LAXPC instruments.
The long-term light curve from November 2019 to May 2020, built from \textit{Swift}-XRT and UVOT observations, is divided into three states, categorized as low, intermediate, and high (Figure \ref{fig:mwl-lc}) based on the flux level in the optical-UV bands. The 2020 flare is categorized as a soft X-ray flare: the source flares in the 0.3--1.5~keV band, while the flux is almost constant in the 1.5--10.0~keV band (Figure \ref{fig:soft-xray}). A \enquote{softer-when-brighter} behavior is seen in the hardness-ratio plot in Figure \ref{fig:HR}; \citet{Komossa_2020} also showed the \enquote{softer-when-brighter} trend based on a flux versus spectral index plot.
Joint spectral fits of the simultaneous \textit{Swift}-XRT and {\it NuSTAR~\/} observations, and of the \textit{AstroSat}-SXT and LAXPC observations, are presented in this work. In both cases the spectra are fitted with a power law, and the spectral indices are consistent with each other within the error bars. The {\it NuSTAR~\/} spectrum was also modeled by \citet{Komossa_2020}, with spectral indices of 2.36$\pm$0.06 and 2.20$\pm$0.20 below and above 10.0 keV, respectively. Our joint XRT+{\it NuSTAR~\/} fit gives a spectral index of 2.37$\pm$0.09 for the 0.3--50.0 keV range. The \textit{AstroSat}-SXT and LAXPC observations are simultaneous, and the joint fit over 0.3--20.0 keV gives a spectral index of 2.43$\pm$0.09. Our LAXPC spectrum is strongly background dominated above 20.0 keV and cannot be modeled properly with a power law there; we therefore cannot compare the LAXPC spectrum with the {\it NuSTAR~\/} spectrum above 10.0 keV as estimated in \citet{Komossa_2020}. In this study, the \textit{AstroSat}-SXT spectrum is important because it has a longer exposure time than the \textit{Swift}-XRT spectra used in the joint fitting, and consequently a better-quality spectrum (Figures \ref{fig:xrt-nustar} \& \ref{fig:sxt-spec}).
The \textit{Swift}-XRT spectral analysis, together with the UVOT spectra for the low, intermediate, and high states, suggests a spectral evolution; the result is shown in Figure \ref{fig:xray-uv-sed}. The spectral evolution implies a shift of the synchrotron peak towards higher energy, as shown in Figure \ref{fig:sync-shift}, and the shift suggests the emergence of an HBL component during the high flux state. We have modeled the synchrotron peak with the physical model implemented in the publicly available code GAMERA. In a leptonic scenario, our modeling requires a higher magnetic field to explain the high state than the low state.
We have also studied the color-magnitude behavior of the source during this bright state; the results are shown in Figure \ref{fig:col-mag}. In B-V vs. B, we do not see any correlation, but in the other combinations a significant correlation between color and magnitude is observed, indicating strong \enquote{bluer-when-brighter} chromatism. A significantly harder spectrum is observed in the optical, which is anti-correlated with the X-ray spectral behavior.
The significant spectral differences between the current flaring state and the older flares of OJ 287 studied by many authors clearly point to complex behavior of the source. Much more detailed optical and X-ray study will be required to better understand this proposed binary black hole system.
\section*{Acknowledgements}
The project was partially supported by the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/00756 (MAESTRO 9), and MNiSW grant DIR/WK/2018/12.
\section*{Data Availability}
The data and software used in this research are available on NASA's HEASARC webpages, with the links given in the manuscript.
\vspace{5mm}
\bibliographystyle{mnras}
\section{Introduction}
The scale of the Internet is ever increasing: today more than 1 billion hosts are connected. Since IPv4 addresses were exhausted in 2011~\cite{exhausted}, the move to IPv6 seems inevitable. However, the transition from IPv4 to IPv6 requires updating not only the Internet infrastructure but also a large number of applications, which faces many obstacles in practice. As a result, IPv4 networks and IPv4 users still dominate today's Internet. In IPv4 networks, the key technology for dealing with address insufficiency is NAT (Network Address Translation).
In the past, most NAT systems have been deployed on the Linux platform leveraging the Netfilter framework \cite{netfilter}. Although this may work in small-scale networks, its performance faces significant challenges under high traffic volume. Specifically, for small packets, the throughput of a NAT system on commodity servers can hardly exceed 1~Gbps, which leaves a big gap between the system performance and the hardware capability of 10G/100G NICs and multiple CPU cores. In this work we improve the performance of the NAT system with the following approaches. First, we leverage the capabilities of DPDK (Data Plane Development Kit) to build the NAT system in user space instead of in the Linux kernel, polling the NIC to read packets directly into user space and thus eliminating the high overhead of packet copies and interrupts; we also manipulate packets through pointers to achieve zero-copy during NAT processing. Second, to exploit the multi-core capability of modern commodity servers, we enable RSS (Receive-Side Scaling) to let multiple cores process packets in parallel, while minimizing the sharing cost between CPU cores. Third, we find that the algorithms used in today's NAT systems can also be improved: in particular, we use hash-based search instead of sequential search when looking up the NAT rule table, which considerably improves performance.
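As a rough illustration of the RSS idea (in Python for brevity, not the NIC's actual hash), a symmetric hash of the 5-tuple modulo the number of cores pins each flow, in both directions, to one core:
\begin{verbatim}
import zlib

NUM_CORES = 6

def flow_to_core(src_ip, dst_ip, src_port, dst_port, proto):
    # sort the endpoints so both directions of a flow hash alike
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = "%s:%d-%s:%d-%d" % (a[0], a[1], b[0], b[1], proto)
    return zlib.crc32(key.encode()) % NUM_CORES

print(flow_to_core("192.168.88.32", "8.8.8.8", 5001, 53, 17))
\end{verbatim}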
Based on the improvements above, we implemented a NAT system called Quick NAT. Our experiments show that Quick NAT significantly reduces the time needed to search for NAT rules.
\section{System Design}
In order to improve the performance of NAT on commodity platforms, we design the Quick NAT system built on DPDK \cite{DPDK}.
\subsection{System Overview}
The architecture of the Quick NAT system is shown in Figure 1. Quick NAT utilizes DPDK's capabilities to bypass the kernel and run in user space. It is composed of four components: Connection Tracer, Rule Finder, Tuple Rewriter, and IP/Port Pool.
\begin{figure}[ht!]
\centering
\includegraphics[width = .4\textwidth]{qnat.pdf}
\caption{Quick NAT system architecture}
\label{fig:qnat}
\end{figure}
In the Quick NAT system, we make three major contributions to improving NAT performance.
\subsection{QNS Algorithm}
We design the QNS (Quick NAT Search) algorithm to reduce NAT rule lookup time. It uses hash search instead of sequential search to look up NAT rule tables, with O(1) complexity.
Quick NAT uses small rule tables (i.e., 32 DNAT rule tables and 32 SNAT rule tables) instead of one big NAT rule table. NAT rules are stored in different small rule tables according to their subnet mask and NAT type. Moreover, each small NAT rule table has a one-bit flag indicating whether it contains any rules. QNS searches the rule tables one by one in order of decreasing subnet mask length, because a rule with a longer subnet mask is more specific than one with a shorter mask.
We use an example to illustrate how QNS works. As Figure 2 shows, QNS starts with the SNAT rule table with the 32-bit subnet mask. QNS computes a hash value from the source IP and port of the packet; in this scenario the hash value is 1288, and no SNAT rule with the exact port and a 32-bit subnet mask is found. QNS then computes a hash value from the source IP and a zero port to search for SNAT rules with a wildcard port, and again finds no matching rule. In all, QNS finds no matching rule in this sub-table.
\begin{figure}[ht!]
\centering
\includegraphics[width = .48\textwidth]{srule1.pdf}
\caption{Search for SNAT rule (step 1)}
\label{fig:srule1}
\end{figure}
If a rule table's flag bit is zero, the table contains no rules and QNS skips it.
Figure 3 shows that QNS has now reached the rule table with the 24-bit subnet mask. Because of this table's subnet mask, QNS applies the mask 255.255.255.0 to the source IP, turning 192.168.88.32 into 192.168.88.0, and then calculates the hash value. This time it finds a SNAT rule based on the masked IP and the wildcard port. Once a NAT rule is found, QNS stops searching.
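The lookup logic can be sketched as follows (in Python for brevity; the rule storage details and the wildcard-port encoding are our assumptions):
\begin{verbatim}
import ipaddress

# one hash table per subnet-mask length; port 0 encodes a wildcard port
snat_tables = {mask: {} for mask in range(1, 33)}

def add_snat_rule(cidr, port, rule):
    net = ipaddress.ip_network(cidr)
    snat_tables[net.prefixlen][(int(net.network_address), port)] = rule

def qns_lookup(src_ip, src_port):
    ip = int(ipaddress.ip_address(src_ip))
    for mask in range(32, 0, -1):      # longer masks are more precise
        table = snat_tables[mask]
        if not table:                  # flag bit: skip empty sub-tables
            continue
        masked = ip & ((0xFFFFFFFF << (32 - mask)) & 0xFFFFFFFF)
        rule = table.get((masked, src_port)) or table.get((masked, 0))
        if rule is not None:
            return rule                # stop at the first match
    return None

add_snat_rule("192.168.88.0/24", 0, "SNAT to public IP/port")
print(qns_lookup("192.168.88.32", 5001))
\end{verbatim}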
\begin{figure}[ht!]
\centering
\includegraphics[width = .48\textwidth]{srule3.pdf}
\caption{Search for SNAT rule (step 3)}
\label{fig:srule3}
\end{figure}
\subsection{Lock-free Sharing Among CPU cores}
Multicore processors are widely used to improve overall processing performance. In Quick NAT, the connection record table is shared among CPU cores and is built as a lock-free hash table, eliminating lock overhead and making the sharing of connection records on a multicore server more efficient and scalable.
\subsection{Polling and Zero-copy Packet Delivery}
We take advantage of Intel's DPDK to poll for data from the NIC, eliminating the overheads of interrupt-driven packet processing. In addition, Quick NAT manipulates packets through pointers, without copy operations, during NAT processing, increasing the throughput of packet processing.
\section{Evaluation}
In our experiments, we use three servers with Intel Core i7-5930k CPUs @ 3.5GHz (6 cores) -- one for the Quick NAT system under test and the other two acting as traffic generator and receiver -- each with an Intel 82599ES 10G dual-port NIC and 16GB of memory. Ubuntu 16.04 (kernel 4.4.0) and DPDK-16.11 are used. We use Wind River's DPDK-pktgen \cite{pktgen} to generate traffic at line rate.
We add a number of rules and then look up different NAT rules 100 times to calculate the mean search time of the QNS algorithm in Quick NAT and of the linear search algorithm that Netfilter uses. We vary the number of rules up to 10k and repeat the experiment many times.
\begin{table}[!htbp]
\centering
\caption{The time of rule lookup (ns)}
\begin{tabular}{c|ccccc}
\hline
Rule Number & 100 & 1000 & 3000 & 5000 &10000\\
\hline
QNS Algorithm & 43 & 45& 42& 45 & 45 \\
Linear Search & 420 & 3256 & 9373 & 15078 & 30120\\
\hline
\end{tabular}
\end{table}
From the results in Table I we see that the QNS algorithm takes about 43~ns to search for a rule and that the number of rules has no effect on its performance, since QNS is based on hash search with O(1) complexity. By contrast, linear search over the rules in Netfilter is time consuming, especially when the number of rules is large.
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec: intro}
\input{tex/intro.tex}
\section{Preliminaries}
\label{sec: preli}
\input{tex/preliminary.tex}
\section{Main results with generative model and $(s,a)$-rectangular assumption}
\label{sec: res-gen}
\input{tex/res-gen.tex}
\section{Main results with generative model and $s$-rectangular assumption}
\label{sec: res-s}
\input{tex/res-s.tex}
\section{Main results with offline dataset}
\label{sec: res-off}
\input{tex/res-off.tex}
\section{Conclusion}
\label{sec: conclusion}
\input{tex/conclusion.tex}
\section*{Acknowledgements}
\input{tex/acknow.tex}
\bibliographystyle{plainnat}
\section{Proofs of Section~\ref{sec: preli}}
\begin{proof}[Proof of Lemma~\ref{lem: uni-dev-pi}]
The left-hand inequality is trivial. Next we prove the right-hand inequality.
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)&=\max_\pi V_r^\pi(\mu)-\max_\pi \widehat{V}_r^\pi(\mu)+\max_\pi \widehat{V}_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\\
&\le\max_\pi\left|V_r^\pi(\mu)-\widehat{V}_r^\pi(\mu)\right|+\left(\widehat{V}_r^{\widehat{\pi}}(\mu)-V_r^{\widehat{\pi}}(\mu)\right)\\
&\le2\sup_{\pi\in\Pi}\left|V_r^\pi(\mu)-\widehat{V}_r^\pi(\mu)\right|
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: uni-dev-v}]
Noting that $V_r^\pi$ and $\widehat{V}_r^\pi$ are fixed points of ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$, we have:
\begin{align*}
\left\|V_r^\pi-\widehat{V}_r^\pi\right\|_\infty&=\left\|{\mathcal T}_r^\pi V_r^\pi-\widehat{{\mathcal T}}_r^\pi\widehat{V}_r^\pi\right\|_\infty\\
&\le\left\|{\mathcal T}_r^\pi V_r^\pi-\widehat{{\mathcal T}}_r^\pi V_r^\pi\right\|_\infty+\left\|\widehat{{\mathcal T}}_r^\pi V_r^\pi-\widehat{{\mathcal T}}_r^\pi \widehat{V}_r^\pi\right\|_\infty\\
&\le\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty+\gamma\left\|V_r^\pi-\widehat{V}_r^\pi\right\|_\infty
\end{align*}
Rearranging, we obtain:
\begin{align*}
\left\|V_r^\pi-\widehat{V}_r^\pi\right\|_\infty\le\frac{1}{1-\gamma}\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: eps-pi}]
To simplify, we denote $G^\pi_r V={\mathcal T}^\pi_r V-\widehat{{\mathcal T}}^\pi_r V$ for any $V\in{\mathcal V}$. For any $\pi\in\Pi$, there exists $\pi_0\in{\mathcal N}(\Pi,\|\cdot\|_1,\varepsilon)$, such that $\|\pi(\cdot|s)-\pi_0(\cdot|s)\|_1\le\varepsilon$ for all $s\in{\mathcal S}$. Thus, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|G^\pi_r V\right\|_\infty\le\sup_{V\in{\mathcal V}}\left\|G^\pi_r V-G^{\pi_0}_r V\right\|_\infty+\sup_{V\in{\mathcal V}}\left\|G^{\pi_0}_r V\right\|_\infty
\end{align*}
For any fixed $V$, we have:
\begin{align*}
\left|G_r^\pi V (s) -G_r^{\pi_0} V(s)\right|&\le\gamma\sup_{P\in{\mathcal P}(s)}\left|\sum_{s'}(P^\pi(s'|s)-P^{\pi_0}(s'|s))V(s')\right|\\
&+\gamma\sup_{P\in\widehat{{\mathcal P}}(s)}\left|\sum_{s'}(P^\pi(s'|s)-P^{\pi_0}(s'|s))V(s')\right|\\
&\le \frac{2\gamma}{1-\gamma}\left\|\pi(\cdot|s)-\pi_0(\cdot|s)\right\|_1
\end{align*}
Thus, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|G^\pi_r V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\sup_{V\in{\mathcal V}}\left\|G^{\pi_0}_r V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\sup_{\pi\in\Pi_\varepsilon}\sup_{V\in{\mathcal V}}\left\|G^{\pi}_r V\right\|_\infty
\end{align*}
Finally, taking the supremum over $\pi$ on the left-hand side, we have:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|G^\pi_r V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\sup_{\pi\in\Pi_\varepsilon, V\in{\mathcal V}}\left\|G^{\pi}_r V\right\|_\infty
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: num-eps-pi}]
To simplify, we ignore the dependence on $s$ and denote $A=\{p\in{\mathbb R}_+^n|\sum_{i}p_i=1\}$ and $B=\{q\in{\mathbb R}_+^{n-1}|\sum_{i}q_i\le1\}$. We aim to prove that $|{\mathcal N}(A,\|\cdot\|_1,2\varepsilon)|\le|{\mathcal N}(B,\|\cdot\|_1,\varepsilon)|$.
For any $p\in A$, we have $(p_1,...,p_{n-1})\in B$, so there exists $(p_{0,1},...,p_{0,n-1})\in{\mathcal N}(B,\|\cdot\|_1,\varepsilon)$ such that:
\begin{align*}
\sum_{i=1}^{n-1}\left|p_i-p_{0,i}\right|\le\varepsilon
\end{align*}
We let $p_{0,n}=1-\sum_{i=1}^{n-1}p_{0,i}$, which guarantees that $(p_{0,1},...,p_{0,n})\in A$ and:
\begin{align*}
\sum_{i=1}^n \left|p_i-p_{0,i}\right|\le\varepsilon+\left|p_n-p_{0,n}\right|
\le\varepsilon+\sum_{i=1}^{n-1}\left|p_i-p_{0,i}\right|\le2\varepsilon
\end{align*}
Thus, the set $\{p\in{\mathbb R}_+^n|p_{1:n-1}\in {\mathcal N}(B, \|\cdot\|_1,\varepsilon), p_{n}=1-\sum_{i=1}^{n-1}p_i\}$ is a $2\varepsilon$-net of $A$, which gives the desired result. Moreover, from Lemma 5.7 of \citet{wainwright2019high}, we obtain that:
\begin{align*}
\left|{\mathcal N}(B, \|\cdot\|_1,\varepsilon)\right|\le\left(1+\frac{2}{\varepsilon}\right)^n
\end{align*}
Thus, since a policy specifies a distribution over ${\mathcal A}$ for each of the $|{\mathcal S}|$ states, taking the product of the per-state $\varepsilon$-nets bounds the covering number of $\Pi$ by:
\begin{align*}
\left|{\mathcal N}(\Pi,\|\cdot\|_1, \varepsilon)\right|\le\left(1+\frac{4}{\varepsilon}\right)^{|{\mathcal S}||{\mathcal A}|}
\end{align*}
\end{proof}
\section{Proofs of Section~\ref{sec: res-gen}}
\begin{lem}
\label{lem: f-eq}
For any $f$-divergence uncertainty set as Example~\ref{eg: f-set} states, the convex optimization problem
\begin{align*}
\inf_{P}&\sum_{s\in{\mathcal S}}P(s)V(s)\\
\text{s.t.}&\hspace{2pt}D_f(P\|P^*)\le\rho, \hspace{4pt}P\in\Delta({\mathcal S}),\hspace{4pt}P\ll P^*
\end{align*}
can be reformulated as:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R}}-\lambda\sum_{s\in{\mathcal S}}P^*(s)f^*\left(\frac{\eta-V(s)}{\lambda}\right)-\lambda\rho+\eta
\end{align*}
where $f^*(t)=-\inf_{s\ge0}(f(s)-st)$.
\end{lem}
\begin{proof}
Since we assume $P\ll P^*$, we may take $P^*(s)>0$ for all $s$ without loss of generality. Replacing the variable $P$ with $r(s)=P(s)/P^*(s)$, the original optimization problem can be reformulated as:
\begin{align*}
\inf_{r}&\sum_{s\in{\mathcal S}}r(s)V(s)P^*(s)\\
\text{s.t.}\hspace{2pt}&\sum_{s\in{\mathcal S}}f(r(s))P^*(s)\le\rho\\
&\sum_{s\in{\mathcal S}}r(s)P^*(s)=1\\
&r(s)\ge0\hspace{2pt}\text{for all $s\in{\mathcal S}$}
\end{align*}
Thus, we can obtain the Lagrangian function of the problem with domain $r\ge0$, $\lambda\ge0$ and $\eta\in{\mathbb R}$:
\begin{align*}
L(r,\lambda, \eta)=\sum_{s\in{\mathcal S}}r(s)V(s)P^*(s)+\lambda\left(\sum_{s\in{\mathcal S}}f(r(s))P^*(s)-\rho\right)-\eta\left(\sum_{s\in{\mathcal S}}r(s)P^*(s)-1\right)
\end{align*}
Denoting $f^*(t)=-\inf_{s\ge0}(f(s)-st)$, we have:
\begin{align*}
\inf_{r\ge0}L(r,\lambda,\eta)=-\lambda\sum_{s\in{\mathcal S}}P^*(s)f^*\left(\frac{\eta-V(s)}{\lambda}\right)-\lambda\rho+\eta
\end{align*}
By Slater's condition, the primal value equals the dual value $\sup_{\lambda\ge0,\eta\in{\mathbb R}} \inf_{r\ge0}L(r,\lambda,\eta)$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: l1}]
By the $(s,a)$-rectangularity assumption, we have:
\begin{align*}
{\mathcal T}_r^\pi V(s) &= \sum_{a}\pi(a|s)R(s,a)+\gamma\inf_{P\in{\mathcal P}}\sum_{s'\in{\mathcal S}}P^\pi(s'|s)V(s')\\
&= \sum_{a}\pi(a|s)R(s,a)+\gamma\inf_{P\in{\mathcal P}}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}P(s'|s,a)\pi(a|s)V(s')\\
&= \sum_{a}\pi(a|s)\left(R(s,a)+\gamma\inf_{P_{s,a}\in{\mathcal P}_{s,a}(\rho)}\sum_{s'\in{\mathcal S}}P(s'|s,a)V(s')\right)
\end{align*}
To simplify, we solve the following convex optimization problem:
\begin{align*}
\inf_{P}&\sum_{s\in{\mathcal S}}P(s)V(s)\\
\text{s.t.}\hspace{2pt} &\sum_{s\in{\mathcal S}}|P(s)-P^*(s)|\le\rho\\
&P\in\Delta({\mathcal S}),\hspace{4pt}P\ll P^*
\end{align*}
By Lemma~\ref{lem: f-eq} with $f(t)=|t-1|$, we have:
\begin{align*}
f^*(s)=
\begin{cases}
-1, & s\le-1\\
s, & s\in[-1,1]\\
+\infty, & s>1
\end{cases}
\end{align*}
Thus, the value of the convex optimization problem equals:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R},\frac{\eta-V(s)}{\lambda}\le1}-\lambda\sum_{s\in{\mathcal S}}P^*(s)\max\left\{\frac{\eta-V(s)}{\lambda},-1\right\}-\lambda\rho+\eta
\end{align*}
Replacing $\eta$ with $\tilde{\eta}-\lambda$, the problem becomes:
\begin{align*}
\sup_{\lambda\ge0,\tilde{\eta}\in{\mathbb R},\frac{\tilde{\eta}-V(s)}{\lambda}\le2}-\sum_{s\in{\mathcal S}}P^*(s)\max\left\{\tilde{\eta}-\lambda-V(s),-\lambda\right\}-\lambda\rho+\tilde{\eta}-\lambda
\end{align*}
Noting that $\max\{a,b\}=(a-b)_++b$, the problem becomes:
\begin{align*}
\sup_{\lambda\ge0,\tilde{\eta}\in{\mathbb R},\frac{\tilde{\eta}-V(s)}{\lambda}\le2}-\sum_{s\in{\mathcal S}}P^*(s)\left(\tilde{\eta}-V(s)\right)_+ -\lambda\rho+\tilde{\eta}
\end{align*}
Since the objective is decreasing in $\lambda$ and the constraint forces $\lambda\ge\frac{(\tilde{\eta}-\min_s V(s))_+}{2}$, optimizing over $\lambda$ yields the final result:
\begin{align*}
\sup_{\tilde{\eta}\in{\mathbb R}}-\sum_{s\in{\mathcal S}}P^*(s)\left(\tilde{\eta}-V(s)\right)_+-\frac{\left(\tilde{\eta}-\min_s V(s)\right)_+}{2}\rho+\tilde{\eta}
\end{align*}
\end{proof}
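Since Lemma~\ref{lem: l1} turns the inner problem into a one-dimensional concave maximization while the primal is a linear program over the simplex, the equivalence is easy to verify numerically. The following Python sketch (purely illustrative, assuming \texttt{numpy} and \texttt{scipy}; the instance and all variable names are ours) compares the two values on a random example:
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S, rho = 6, 0.4
p_star = rng.dirichlet(np.ones(S))   # nominal distribution P*
V = rng.uniform(0.0, 10.0, size=S)   # value vector

# Primal: min_p <p, V>  s.t.  ||p - p*||_1 <= rho, p in the simplex.
# Auxiliary variables t >= |p - p*|; decision vector x = (p, t).
c = np.concatenate([V, np.zeros(S)])
A_ub = np.block([[np.eye(S), -np.eye(S)],                  #  p - t <= p*
                 [-np.eye(S), -np.eye(S)],                 # -p - t <= -p*
                 [np.zeros((1, S)), np.ones((1, S))]])     # sum(t) <= rho
b_ub = np.concatenate([p_star, -p_star, [rho]])
A_eq = np.concatenate([np.ones(S), np.zeros(S)])[None, :]  # sum(p) = 1
primal = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0]).fun

# Dual: sup_eta  eta - E_{p*}(eta - V)_+ - rho/2 * (eta - min V)_+
etas = np.linspace(0.0, V.max(), 100001)
dual = (etas
        - np.maximum(etas[:, None] - V[None, :], 0.0) @ p_star
        - rho / 2 * np.maximum(etas - V.min(), 0.0)).max()
print(primal, dual)  # the two values agree up to grid resolution
\end{verbatim}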
\begin{proof}[Proof of Theorem~\ref{thm: l1-fix}]
To simplify, we also ignore dependence on $(s,a)$. Denoting $g(\eta,P)=\sum_{s\in{\mathcal S}}P(s)(\eta-V(s))_++\frac{(\eta-\min_s V(s))_+}{2}\rho-\eta$, we have:
\begin{align*}
g(\eta,\widehat{P})&=\sum_{s\in{\mathcal S}}\widehat{P}(s)(\eta-V(s))_++\frac{(\eta-\min_s V(s))_+}{2}\rho-\eta\\
&=\frac{1}{n}\sum_{k=1}^n\sum_{s\in{\mathcal S}}1(X_k=s)(\eta-V(s))_++\frac{(\eta-\min_s V(s))_+}{2}\rho-\eta
\end{align*}
where $\{X_k\}_{k=1}^n$ are i.i.d.\ samples drawn from $P$. Thus, ${\mathbb E} g(\eta,\widehat{P})=g(\eta,P)$. We denote $Z_k=\sum_{s\in{\mathcal S}}1(X_k=s)(\eta-V(s))_+$. Since $\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]$, we know $Z_k\in[0,\frac{2+\rho}{\rho(1-\gamma)}]$. By Hoeffding's inequality, we have the following inequality:
\begin{align*}
{\mathbb P}\left(\left|g(\eta,\widehat{P})-g(\eta,P)\right|\ge\frac{2+\rho}{\rho(1-\gamma)}\sqrt{\frac{\log\frac{2}{\delta}}{2n}}\right)\le\delta
\end{align*}
Noting that $g(\eta,P)$ is $(2+\frac{\rho}{2})$-Lipschitz w.r.t.\ $\eta$, we can take the $\varepsilon$-net of $[0,\frac{2+\rho}{\rho(1-\gamma)}]$ as ${\mathcal N}_\varepsilon$ w.r.t.\ the metric $|\cdot|$. The size of ${\mathcal N}_\varepsilon$ is bounded by:
\begin{align*}
|{\mathcal N}_\varepsilon|\le 1+\frac{2+\rho}{\varepsilon\rho(1-\gamma)}
\end{align*}
Thus, we have:
\begin{align*}
\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le(4+\rho)\varepsilon+\sup_{\eta\in{\mathcal N}_\varepsilon}\left|g(\eta,\widehat{P})-g(\eta,P)\right|
\end{align*}
By taking $\varepsilon=\frac{2+\rho}{\sqrt{2n}\rho(4+\rho)(1-\gamma)}$, we have the following inequality holds with probability $1-\delta$:
\begin{align*}
\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P})-g(\eta,P)\right|&\le\frac{2+\rho}{\sqrt{2n}\rho(1-\gamma)}+\frac{2+\rho}{\rho(1-\gamma)}\sqrt{\frac{\log\frac{2(1+(4+\rho)\sqrt{2n})}{\delta}}{2n}}\\
&=\frac{2+\rho}{\sqrt{2n}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{2(1+(4+\rho)\sqrt{2n})}{\delta}}\right)
\end{align*}
Thus, for any fixed $\pi\in\Pi$ and $V\in{\mathcal V}$, we have:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\gamma\max_{s\in{\mathcal S},a\in{\mathcal A}}\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P}_{s,a})-g(\eta,P^*_{s,a})\right|
\end{align*}
Thus, we can take a union bound over $(s,a)\in{\mathcal S}\times{\mathcal A}$ and obtain the following inequality with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{(2+\rho)\gamma}{\sqrt{2n}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}}\right)
\end{align*}
\end{proof}
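The $O(1/\sqrt{n})$ rate in Theorem~\ref{thm: l1-fix} can also be observed directly by simulation. Below is a minimal sketch (ours, assuming \texttt{numpy}; a fine grid over $\eta$ plays the role of the $\varepsilon$-net, and the instance is synthetic):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
S, rho, gamma = 8, 0.5, 0.9
P = rng.dirichlet(np.ones(S))
V = rng.uniform(0.0, 1.0 / (1 - gamma), S)
etas = np.linspace(0.0, (2 + rho) / (rho * (1 - gamma)), 2000)

def g(p):
    # g(eta, p) evaluated at all grid points eta simultaneously
    return ((np.maximum(etas[:, None] - V[None, :], 0.0) * p[None, :]).sum(1)
            + rho / 2 * np.maximum(etas - V.min(), 0.0) - etas)

for n in (100, 1000, 10000):
    devs = [np.abs(g(rng.multinomial(n, P) / n) - g(P)).max()
            for _ in range(200)]
    print(n, np.mean(devs))  # shrinks roughly like 1/sqrt(n)
\end{verbatim}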
\begin{proof}[Proof of Theorem~\ref{thm: l1-union}]
Let us first consider a union bound over ${\mathcal V}$. We take an $\varepsilon$-net of ${\mathcal V}$ w.r.t.\ the norm $\|\cdot\|_\infty$ and denote it as ${\mathcal N}_\varepsilon$. For any given $V\in{\mathcal V}$, there exists $V_\varepsilon\in{\mathcal N}_\varepsilon$ s.t. $\|V-V_\varepsilon\|_\infty\le \varepsilon$. Thus we have:
\begin{align*}
\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty&\le\left\|{\mathcal T}_r^\pi V -{\mathcal T}_r^\pi V_\varepsilon\right\|_\infty+\left\|\widehat{{\mathcal T}}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V_\varepsilon\right\|_\infty+\left\|{\mathcal T}_r^\pi V_\varepsilon - \widehat{{\mathcal T}}_r^\pi V_\varepsilon\right\|_\infty\\
&\le 2\gamma\varepsilon + \sup_{V\in{\mathcal N}_\varepsilon}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty
\end{align*}
Thus, we have $\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le 2\gamma\varepsilon+\sup_{V\in{\mathcal N}_\varepsilon}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty$. Noting that $|{\mathcal N}_\varepsilon|\le(1+\frac{1}{\varepsilon(1-\gamma)})^{|{\mathcal S}|}$, we have the following result with probability $1-\delta$:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le 2\gamma\varepsilon+\frac{(2+\rho)\gamma}{\sqrt{2n}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}+|{\mathcal S}|\log\left(1+\frac{1}{\varepsilon(1-\gamma)}\right)}\right)
\end{align*}
By taking $\varepsilon=\frac{2+\rho}{2\sqrt{2n}\rho(1-\gamma)}$, we have the following inequality with probability $1-\delta$:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{(2+\rho)\gamma}{\sqrt{2n}\rho(1-\gamma)}\left(2+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}+|{\mathcal S}|\log\left(1+\frac{2\sqrt{2n}\rho}{2+\rho}\right)}\right)
\end{align*}
Next, we consider a union bound over $\Pi$. By the $(s,a)$-rectangular assumption, we can restrict the policy class $\Pi$ to the deterministic class with finite size $|{\mathcal A}|^{|{\mathcal S}|}$. Thus, combining Lemma~\ref{lem: uni-dev-pi} and Lemma~\ref{lem: uni-dev-v}, the following inequality holds with probability $1-\delta$:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2(2+\rho)\gamma}{\sqrt{2n}\rho(1-\gamma)^2}\left(2+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}+|{\mathcal S}|\log|{\mathcal A}|\left(1+\frac{2\sqrt{2n}\rho}{2+\rho}\right)}\right)
\end{align*}
Noting that $|{\mathcal S}|\ge1$, we can simplify the above inequality:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)&\le\frac{2(2+\rho)\gamma\sqrt{|{\mathcal S}|}}{\sqrt{2n}\rho(1-\gamma)^2}\left(2+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|^2(1+(4+\rho)\sqrt{2n})}{\delta}+\log\left(1+\frac{2\sqrt{2n}\rho}{2+\rho}\right)}\right)\\
&\le\frac{2(2+\rho)\gamma\sqrt{|{\mathcal S}|}}{\sqrt{2n}\rho(1-\gamma)^2}\left(2+\sqrt{\log\frac{4|{\mathcal S}||{\mathcal A}|^2(1+2(2+\rho)\sqrt{2n})^2}{\delta(2+\rho)}}\right)
\end{align*}
The final inequality holds by the following observation:
\begin{align*}
2(1+(4+\rho)\sqrt{2n})(2+\rho+2\sqrt{2n}\rho)\le4(1+2(2+\rho)\sqrt{2n})^2
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: chi2}]
As $f(t)=(t-1)^2$, we have:
\begin{align*}
f^*(s)=
\begin{cases}
\frac{s^2}{4}+s, & s\ge-2\\
-1, & s<-2
\end{cases}
\end{align*}
By Lemma~\ref{lem: f-eq}, the value of the convex optimization problem equals:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R}}-\lambda\sum_{s\in{\mathcal S}}P^*(s)\left(\frac{(\eta-V(s)+2\lambda)^2_+}{4\lambda^2}-1\right)-\lambda\rho +\eta
\end{align*}
which is equivalent to:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R}}-\sum_{s\in{\mathcal S}}P^*(s)\left(\frac{(\eta-V(s)+2\lambda)^2_+}{4\lambda}\right)-\lambda(\rho-1) +\eta
\end{align*}
Replacing $\eta$ with $\tilde{\eta}=\eta+2\lambda$, the problem turns into:
\begin{align*}
\sup_{\lambda\ge0,\tilde{\eta}\in{\mathbb R}}-\frac{1}{4\lambda}\sum_{s\in{\mathcal S}}P^*(s)\left(\tilde{\eta}-V(s)\right)^2_+-\lambda(\rho+1) +\tilde{\eta}
\end{align*}
Optimizing over $\lambda\ge0$ (the minimum of $\frac{A}{4\lambda}+\lambda(\rho+1)$ over $\lambda\ge0$ is $\sqrt{A(\rho+1)}$), the problem turns into:
\begin{align*}
\sup_{\tilde{\eta}\in{\mathbb R}}-\sqrt{\rho+1}\sqrt{\sum_{s\in{\mathcal S}}P^*(s)\left(\tilde{\eta}-V(s)\right)_+^2}+\tilde{\eta}
\end{align*}
\end{proof}
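The reformulation is again a one-dimensional concave maximization, which can be checked against a direct constrained solver. A sketch (illustrative only, assuming \texttt{numpy} and \texttt{scipy}; the solver choice and the grid range, based on the crude bound $\eta\le\frac{C(\rho)}{C(\rho)-1}\max_s V(s)$ with $C(\rho)=\sqrt{1+\rho}$, are ours):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
S, rho = 5, 0.3
p_star = rng.dirichlet(np.ones(S))
V = rng.uniform(0.0, 10.0, S)

# Primal: min_p <p, V>  s.t.  chi^2(p || p*) <= rho, p in the simplex
cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "ineq",
         "fun": lambda p: rho - ((p - p_star) ** 2 / p_star).sum()}]
primal = minimize(lambda p: p @ V, x0=p_star, bounds=[(0.0, 1.0)] * S,
                  constraints=cons, method="SLSQP").fun

# Dual: sup_eta  eta - sqrt(1+rho) * sqrt(E_{p*}(eta - V)_+^2)
C = np.sqrt(1.0 + rho)
etas = np.linspace(0.0, C / (C - 1.0) * V.max(), 400001)
dual = (etas - C * np.sqrt((p_star[None, :]
        * np.maximum(etas[:, None] - V[None, :], 0.0) ** 2).sum(1))).max()
print(primal, dual)  # agree up to solver/grid tolerance
\end{verbatim}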
\begin{proof}[Proof of Theorem~\ref{thm: chi2-fix}]
To simplify, we ignore dependence on $(s,a)$ and denote $g(\eta,P)=C(\rho)\sqrt{\sum_{s\in{\mathcal S}}P(s)(\eta-V(s))_+^2}-\eta$. Besides, we also denote $Y_k=\sum_{s\in{\mathcal S}}1(X_k=s)(\eta-V(s))_+$, which leads to $Y_k^2=\sum_{s\in{\mathcal S}}1(X_k=s)(\eta-V(s))_+^2$. Thus, we can rewrite $g(\eta,\widehat{P})=\frac{C(\rho)}{\sqrt{n}}\|Y\|_2-\eta$. Since $g(\eta, Y)$ is $\frac{C(\rho)}{\sqrt{n}}$-Lipschitz w.r.t.\ $Y$ under the norm $\|\cdot\|_2$, by Lemma 6 of \cite{duchi2018learning} we have the following inequality holding with probability $1-\delta$:
\begin{align*}
\left|g(\eta,\widehat{P})-{\mathbb E} g(\eta,\widehat{P})\right|\le\frac{\sqrt{2} C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\sqrt{\log\frac{2}{\delta}}
\end{align*}
Note that ${\mathbb E} g(\eta,\widehat{P})\not=g(\eta,P)$ in general; however, the two are close, as Lemma 8 of \cite{duchi2018learning} shows:
\begin{align*}
\frac{1}{\sqrt{n}}{\mathbb E}\|Y\|_2\ge \sqrt{\frac{1}{n}\sum_{k=1}^n {\mathbb E} Y_k^2}-\sqrt{\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}}\frac{1}{\sqrt{n}}
\end{align*}
By ${\mathbb E}\|Y\|_2\le\sqrt{\sum_{k=1}^n{\mathbb E} Y_k^2}$, we have:
\begin{align*}
\left|{\mathbb E} g(\eta,\widehat{P})- g(\eta,P)\right|\le C(\rho)\sqrt{\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}}\frac{1}{\sqrt{n}}
\end{align*}
Putting all these together, we have the following inequality with probability $1-\delta$:
\begin{align*}
\left|g(\eta, \widehat{P})- g(\eta, P)\right|\le\frac{C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(1+\sqrt{2\log\frac{2}{\delta}}\right)
\end{align*}
Noting that $g(\eta,P)$ is $(1+C(\rho))$-Lipschitz w.r.t.\ $\eta$, we take the $\varepsilon$-net of $[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]$ as ${\mathcal N}_\varepsilon$ w.r.t.\ the metric $|\cdot|$. The size of ${\mathcal N}_\varepsilon$ is bounded by:
\begin{align*}
|{\mathcal N}_\varepsilon|\le1+\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)\varepsilon}
\end{align*}
Thus, we have:
\begin{align*}
\sup_{\eta\in[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]}\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le2(1+C(\rho))\varepsilon+\sup_{\eta\in{\mathcal N}_\varepsilon}\left|g(\eta,\widehat{P})-g(\eta,P)\right|
\end{align*}
By taking $\varepsilon=\frac{C^2(\rho)}{2\sqrt{n}(1-\gamma)(C^2(\rho)-1)}$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\sup_{\eta\in[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]}\left|g(\eta,\widehat{P})-g(\eta,P)\right|&\le\frac{C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal N}_\varepsilon|}{\delta}}\right)\\
&\le\frac{C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2(1+2(1+1/C(\rho))\sqrt{n})}{\delta}}\right)
\end{align*}
As $C(\rho)=\sqrt{1+\rho}\ge1$, we then have
\begin{align*}
\sup_{\eta\in[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]}\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le\frac{C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2(1+4\sqrt{n})}{\delta}}\right)
\end{align*}
holding with probability $1-\delta$. Thus, the final result is obtained by a union bound over $(s,a)\in{\mathcal S}\times{\mathcal A}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: chi2-uni}]
Similar to the proof of Theorem~\ref{thm: l1-union}, we take an $\varepsilon$-net of ${\mathcal V}$ w.r.t.\ the norm $\|\cdot\|_\infty$ and denote it as ${\mathcal V}_\varepsilon$. By Theorem~\ref{thm: chi2-fix}, we have the following inequality with probability $1-\delta$:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le 2\gamma\varepsilon+\frac{C^2(\rho)\gamma}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal S}||{\mathcal A}|(1+4\sqrt{n})}{\delta}+|{\mathcal S}|\log\left(1+\frac{1}{(1-\gamma)\varepsilon}\right)}\right)
\end{align*}
By taking $\varepsilon=\frac{C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}$, we have the following inequality with probability $1-\delta$:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{C^2(\rho)\gamma}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(4+\sqrt{2\log\frac{2|{\mathcal S}||{\mathcal A}|(1+4\sqrt{n})}{\delta}+|{\mathcal S}|\log\left(1+\frac{(C(\rho)-1)\sqrt{n}}{C^2(\rho)}\right)}\right)
\end{align*}
By the $(s,a)$-rectangular assumption, it suffices to consider the deterministic policy class $\Pi$. Thus, we have the final result with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)&\le\frac{2C^2(\rho)\gamma}{(C(\rho)-1)(1-\gamma)^2\sqrt{n}}\left(4+\sqrt{2|{\mathcal S}|\log\frac{2|{\mathcal S}||{\mathcal A}|^2(1+4\sqrt{n})(1+(C(\rho)-1)\sqrt{n})}{\delta C^2(\rho)}}\right)\\
&\le\frac{2C^2(\rho)\gamma}{(C(\rho)-1)(1-\gamma)^2\sqrt{n}}\left(4+\sqrt{2|{\mathcal S}|\log\frac{2|{\mathcal S}||{\mathcal A}|^2(1+(C(\rho)+3)\sqrt{n})^2}{\delta C^2(\rho)}}\right)
\end{align*}
where the final inequality holds by:
\begin{align*}
(1+4\sqrt{n})(1+(C(\rho)-1)\sqrt{n})\le(1+(C(\rho)+3)\sqrt{n})^2
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: kl}]
As $f(t)=t\log t$, we have $f^*(s)=\exp(s-1)$. By Lemma~\ref{lem: f-eq}, the value of the convex optimization problem equals:
\begin{align*}
\sup_{\lambda\ge0, \eta\in{\mathbb R}}-\lambda\sum_{s\in{\mathcal S}}P^*(s)\exp(\frac{\eta-V(s)}{\lambda}-1)-\lambda\rho+\eta
\end{align*}
Optimizing over $\eta$, we obtain the equivalent form:
\begin{align*}
\sup_{\lambda\ge0}-\lambda\log\sum_{s\in{\mathcal S}}P^*(s)\exp(-\frac{V(s)}{\lambda})-\lambda\rho
\end{align*}
\end{proof}
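The remaining problem over $\lambda$ is the classical one-dimensional dual of the KL-constrained linear program, and the worst-case distribution is an exponential tilting of $P^*$. A minimal numerical sketch (ours, assuming \texttt{numpy} and \texttt{scipy}; the bounded search interval is a heuristic, not part of the lemma):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
S, rho = 6, 0.2
p_star = rng.dirichlet(np.ones(S))
V = rng.uniform(0.0, 10.0, S)

def neg_dual(lam):
    # negative of  -lam*log E_{p*} exp(-V/lam) - lam*rho,  via log-sum-exp
    z = -V / lam
    m = z.max()
    return lam * (m + np.log(np.sum(p_star * np.exp(z - m)))) + lam * rho

res = minimize_scalar(neg_dual, bounds=(1e-8, 1e3), method="bounded")
robust_value, lam = -res.fun, res.x

# Sanity check: the primal optimizer is the exponential tilt of p*
w = p_star * np.exp(-V / lam)
w /= w.sum()
kl = np.sum(w * np.log(w / p_star))
print(robust_value, w @ V, kl)  # w @ V matches the dual value; kl ~ rho
\end{verbatim}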
\begin{proof}[Proof of Theorem~\ref{thm: kl-fix}]
Let us denote $g(P)=\inf_{\lambda\in[0,\frac{1}{\rho(1-\gamma)}]}\lambda\rho+\lambda\log\sum_{s}P(s)\exp(-V(s)/\lambda)$. Thus we have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|&\le\sup_{\lambda\in[0,\frac{1}{\rho(1-\gamma)}]}\left|\lambda\log\sum_{s}\widehat{P}(s)\exp(-\frac{V(s)}{\lambda})-\lambda\log\sum_{s}P(s)\exp(-\frac{V(s)}{\lambda})\right|\\
&\le\frac{1}{\rho(1-\gamma)}\sup_{\lambda\in[0,\frac{1}{\rho(1-\gamma)}]}\left|\log\left(1+\frac{\sum_{s}(\widehat{P}(s)-P(s))\exp(-\frac{V(s)}{\lambda})}{\sum_{s}P(s)\exp(-\frac{V(s)}{\lambda})}\right)\right|\\
&\le \frac{2}{\rho(1-\gamma)}\frac{\sum_{s}\left|\widehat{P}(s)-P(s)\right|\exp(-\frac{V(s)}{\lambda})}{\sum_{s}P(s)\exp(-\frac{V(s)}{\lambda})}
\end{align*}
where the last inequality holds by $|\log(1+x)|\le 2|x|$ for $|x|\le1/2$. Noting that $\widehat{P}\ll P$ by generative model assumption, we then have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|\le\frac{2}{\rho(1-\gamma)}\max_{s}\left|\frac{\widehat{P}(s)}{P(s)}-1\right|
\end{align*}
Denote $\underline{p}=\min_{P(s'|s,a)>0}P(s'|s,a)$. Since $\widehat{P}(s)=\frac{1}{n}\sum_{k}1(X_k=s)$, Hoeffding's inequality tells us:
\begin{align*}
{\mathbb P}\left(\max_s \left|\frac{\widehat{P}(s)}{P(s)}-1\right|\ge\sqrt{\frac{1}{n\underline{p}^2}\log\frac{2|{\mathcal S}|}{\delta}}\right)\le\delta
\end{align*}
Thus, with probability $1-\delta$, we have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|\le\frac{2}{\rho\underline{p}(1-\gamma)\sqrt{n}}\sqrt{\log\frac{2|{\mathcal S}|}{\delta}}
\end{align*}
The final result is obtained by a union bound over $(s,a)\in{\mathcal S}\times{\mathcal A}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: kl-uni}]
Similar to the proof of Theorem~\ref{thm: l1-union}, we take an $\varepsilon$-net of ${\mathcal V}$ w.r.t.\ the norm $\|\cdot\|_\infty$ and denote it as ${\mathcal V}_\varepsilon$. By Theorem~\ref{thm: kl-fix}, we have the following inequality with probability $1-\delta$:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le 2\gamma\varepsilon+\frac{2\gamma}{\rho(1-\gamma)\underline{p}\sqrt{n}}\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\frac{1}{(1-\gamma)\varepsilon}\right)}
\end{align*}
By taking $\varepsilon=\frac{1}{\rho(1-\gamma)\underline{p}\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma}{\rho(1-\gamma)\underline{p}\sqrt{n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\rho\underline{p}\sqrt{n}\right)}\right)
\end{align*}
By taking a union bound over the deterministic policy class $\Pi$, we have:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)&\le\frac{2\gamma}{\rho(1-\gamma)\underline{p}\sqrt{n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log|{\mathcal A}|\left(1+\rho\underline{p}\sqrt{n}\right)}\right)\\
&\le\frac{2\gamma}{\rho(1-\gamma)\underline{p}\sqrt{n}}\left(1+\sqrt{|{\mathcal S}|\log\frac{2|{\mathcal S}|^2|{\mathcal A}|^2(1+\rho\underline{p}\sqrt{n})}{\delta}}\right)
\end{align*}
\end{proof}
\section{Proofs of Section~\ref{sec: res-s}}
\label{apd: s-gen}
\begin{lem}
\label{lem: f-eq-s}
For any $f$-divergence uncertainty set as Example~\ref{eg: f-set-s} states, the convex optimization problem
\begin{align*}
\inf_{P}&\sum_{s\in{\mathcal S}, a\in{\mathcal A}}P_a(s)\pi(a)V(s)\\
\text{s.t.}&\hspace{2pt}\sum_{a\in{\mathcal A}}D_f(P_a\|P_a^*)\le|{\mathcal A}|\rho, \hspace{4pt}P_a\in\Delta({\mathcal S}),\hspace{4pt}P_a\ll P_a^* \hspace{4pt}\text{for all $a\in{\mathcal A}$}
\end{align*}
can be reformulated as:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R}^{|{\mathcal A}|}}-\lambda\sum_{s\in{\mathcal S}, a\in{\mathcal A}}P_a^*(s)f^*\left(\frac{\eta_a-\pi(a)V(s)}{\lambda}\right)-\lambda|{\mathcal A}|\rho+\sum_{a\in{\mathcal A}}\eta_a
\end{align*}
where $f^*(t)=-\inf_{s\ge0}(f(s)-st)$.
\end{lem}
\begin{proof}
The proof is similar to that of Lemma~\ref{lem: f-eq}.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: l1-s}]
By the definition of the $s$-rectangular set assumption, for any given $V\in{\mathcal V}$ and $\pi\in\Pi$, we have:
\begin{align*}
{\mathcal T}_r^\pi V(s)=\sum_{a}\pi(a|s)R(s,a)+\gamma\inf_{P_s\in{\mathcal P}_s(\rho)}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}P(s'|s,a)\pi(a|s)V(s')
\end{align*}
To simplify, we solve the following convex optimization problem:
\begin{align*}
\inf_{P}&\sum_{s\in{\mathcal S}, a\in{\mathcal A}}P_a(s)\pi(a)V(s)\\
\text{s.t.}\hspace{2pt} &\sum_{s\in{\mathcal S}, a\in{\mathcal A}}|P_a(s)-P_a^*(s)|\le|{\mathcal A}|\rho\\
&P_a\in\Delta({\mathcal S}),\hspace{4pt}P_a\ll P_a^*\hspace{4pt} \text{for all $a\in{\mathcal A}$}
\end{align*}
By taking $f(t)=|t-1|$, we have:
\begin{align*}
f^*(s)=
\begin{cases}
-1, & s\le-1\\
s, & s\in[-1,1]\\
+\infty, & s>1
\end{cases}
\end{align*}
Thus, by Lemma~\ref{lem: f-eq-s}, we turn the optimization problem into:
\begin{align*}
\sup_{\lambda\ge0,\eta\in{\mathbb R}^{|{\mathcal A}|}, \frac{\eta_a-\pi(a)V(s)}{\lambda}\le1}-\lambda\sum_{s\in{\mathcal S}, a\in{\mathcal A}}P^*_a(s)\max\left\{\frac{\eta_a-\pi(a)V(s)}{\lambda}, -1\right\}-\lambda|{\mathcal A}|\rho+\sum_{a\in{\mathcal A}}\eta_a
\end{align*}
Following the same steps as in the proof of Lemma~\ref{lem: l1}, the optimization problem can be formulated as:
\begin{align*}
\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}-\sum_{s\in{\mathcal S},a\in{\mathcal A}}P^*_a(s)\left(\eta_a-\pi(a)V(s)\right)_+-\left(\max_{a,s}\frac{\eta_a-\pi(a)V(s)}{2}\right)_+|{\mathcal A}|\rho+\sum_{a\in{\mathcal A}}\eta_a
\end{align*}
\end{proof}
\begin{lem}
\label{lem: l1-value}
For fixed $\pi$ and $V$, we denote $g(\eta,P)=\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_++(\max_{a,s}\frac{\eta_a-\pi(a)V(s)}{2})_+|{\mathcal A}|\rho-\sum_{a}\eta_a$. The minimizer $\eta$ of $g(\eta,P)$ lies in the set:
\begin{align*}
I_{L_1}=\left\{\eta\in{\mathbb R}^{|{\mathcal A}|}\Bigg{|}\eta_a\ge0, \sum_{a}\eta_a\le\frac{2+\rho}{\rho(1-\gamma)}\right\}
\end{align*}
\end{lem}
\begin{proof}
If there exists $\widehat{a}\in{\mathcal A}$ such that $\eta_{\widehat{a}}\le0$, we have:
\begin{align*}
g(\eta,P)=-\eta_{\widehat{a}}+\sum_{s,a\not=\widehat{a}}P_a(s)(\eta_a-\pi(a)V(s))_++\left(\max_{s,a\not=\widehat{a}}\frac{\eta_a-\pi(a)V(s)}{2}\right)_+|{\mathcal A}|\rho-\sum_{a\not=\widehat{a}}\eta_a
\end{align*}
It's easy to observe that the infimum of $g(\eta,P)$ w.r.t.\ the variable $\eta_{\widehat{a}}$ is obtained when $\eta_{\widehat{a}}=0$. Thus, we can safely say that the minimizer of $g(\eta,P)$ lies in ${\mathbb R}^{|{\mathcal A}|}_+$, and $g(0,P)=0$.
Besides, we also have:
\begin{align*}
g(\eta,P)&\ge\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))+\left(\max_a\frac{\eta_a-\pi(a)V_{\min}}{2}\right)_+|{\mathcal A}|\rho-\sum_a\eta_a\\
&\ge-\frac{1}{1-\gamma}+\left(\max_a\frac{\eta_a-\pi(a)V_{\min}}{2}\right)_+|{\mathcal A}|\rho
\end{align*}
Noting that we have:
\begin{align*}
\left(\max_a\frac{\eta_a-\pi(a)V_{\min}}{2}\right)_+&=\max_a\left(\frac{\eta_a-\pi(a)V_{\min}}{2}\right)_+\\
&\ge\frac{1}{|{\mathcal A}|}\sum_{a}\left(\frac{\eta_a-\pi(a)V_{\min}}{2}\right)_+\\
&\ge\frac{1}{2|{\mathcal A}|}\left(\sum_a\eta_a-\frac{1}{1-\gamma}\right)
\end{align*}
Thus, when $\sum_a\eta_a\ge\frac{2+\rho}{\rho(1-\gamma)}$, we have $g(\eta,P)\ge0$. By the convexity of $g(\eta,P)$, the minimizer of $g(\eta,P)$ lies in the set:
\begin{align*}
\left\{\eta\in{\mathbb R}_+^{|{\mathcal A}|}\Bigg{|}\sum_{a}\eta_a\le\frac{2+\rho}{\rho(1-\gamma)}\right\}
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: l1-fix-s}]
To simplify, we denote $g(\eta,P)=\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_++(\max_{a,s}\frac{\eta_a-\pi(a)V(s)}{2})_+|{\mathcal A}|\rho-\sum_{a}\eta_a$. Besides, we also denote $Y_k^a=\sum_s 1(X_k^a=s)(\eta_a-\pi(a)V(s))_+$ and $Z_k=\sum_a Y_k^a$, where the $X_k^a$ are generated by $P_a(\cdot)$ independently. Thus, we have $g(\eta,\widehat{P})=\frac{1}{n}\sum_{k}Z_k+(\max_{a,s}\frac{\eta_a-\pi(a)V(s)}{2})_+|{\mathcal A}|\rho-\sum_a\eta_a$. By restricting $\eta$ to the set of Lemma~\ref{lem: l1-value}, we have $0\le Z_k\le\sum_a\eta_a\le\frac{2+\rho}{\rho(1-\gamma)}$, and $\{Z_k\}$ are i.i.d.\ random variables. By Hoeffding's inequality, with probability $1-\delta$, we have:
\begin{align*}
\left|g(\eta,\widehat{P})- g(\eta,P)\right|=\left|g(\eta,\widehat{P})-{\mathbb E} g(\eta,\widehat{P})\right|\le\frac{2+\rho}{\rho(1-\gamma)}\sqrt{\frac{\log\frac{2}{\delta}}{2n}}
\end{align*}
Next, we turn to bounding the deviation over $I_{L_1}$ uniformly. Notice that, for any two $\widetilde{\eta}$ and $\widehat{\eta}$, we have:
\begin{align*}
\left|g(\widetilde{\eta},P)-g(\widehat{\eta},P)\right|\le(2+\frac{\rho}{2})\left\|\widetilde{\eta}-\widehat{\eta}\right\|_1
\end{align*}
Besides, take the smallest $\varepsilon$-net of $I_{L_1}$, denoted ${\mathcal N}_\varepsilon$, w.r.t.\ the metric $\|\cdot\|_1$. The size of ${\mathcal N}_\varepsilon$ is bounded by:
\begin{align*}
|{\mathcal N}_\varepsilon|\le\left(1+\frac{2(2+\rho)}{\rho(1-\gamma)\varepsilon}\right)^{|{\mathcal A}|}
\end{align*}
Thus, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\eta\in I_{L_1}}\left|g(\eta,\widehat{P})- g(\eta,P)\right|&\le(4+\rho)\varepsilon+\sup_{\eta\in{\mathcal N}_\varepsilon}\left|g(\eta,\widehat{P})- g(\eta,P)\right|\\
&\le(4+\rho)\varepsilon+\frac{2+\rho}{\rho(1-\gamma)\sqrt{2n}}\sqrt{\log\frac{2}{\delta}+\log|{\mathcal N}_\varepsilon|}
\end{align*}
By taking $\varepsilon=\frac{2+\rho}{\rho(1-\gamma)\sqrt{2n}(4+\rho)}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\eta\in I_{L_1}}\left|g(\eta,\widehat{P})- g(\eta,P)\right|&\le\frac{2+\rho}{\rho(1-\gamma)\sqrt{2n}}\left(1+\sqrt{\log\frac{2}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)}\right)
\end{align*}
The final result is obtained by union bound over $s\in{\mathcal S}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: l1-union-s}]
Firstly, we bound the deviation uniformly over $V\in{\mathcal V}$. Similar to the proof of Theorem~\ref{thm: l1-union}, we take the smallest $\varepsilon$-net of ${\mathcal V}$ w.r.t.\ the norm $\|\cdot\|_\infty$ and denote it as ${\mathcal V}_\varepsilon$. By Theorem~\ref{thm: l1-fix-s}, with probability $1-\delta$, we have:
\begin{align*}
&\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le2\gamma\varepsilon+\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)+\log|{\mathcal V}_\varepsilon|}\right)\\
&\le 2\gamma\varepsilon+\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\Bigg{(}1+\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)+|{\mathcal S}|\log\left(1+\frac{1}{(1-\gamma)\varepsilon}\right)}\Bigg{)}
\end{align*}
By taking $\varepsilon=\frac{(2+\rho)}{2\rho(1-\gamma)\sqrt{2n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le&\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\Bigg{(}2+\\
&\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)+|{\mathcal S}|\log\left(1+\frac{2\rho\sqrt{2n}}{2+\rho}\right)}\Bigg{)}
\end{align*}
Next, we bound the deviation uniformly over $\pi\in\Pi$. By Lemma~\ref{lem: eps-pi}, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}&\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\Bigg{(}2+\\
&\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)+|{\mathcal S}|\log\left(1+\frac{2\rho\sqrt{2n}}{2+\rho}\right)+\log|\Pi_\varepsilon|}\Bigg{)}
\end{align*}
Taking $\varepsilon=\frac{2+\rho}{\rho\sqrt{2n}}$, by Lemma~\ref{lem: num-eps-pi}, with probability $1-\delta$, we have:
\begin{align*}
&\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\Bigg{(}4+\\
&\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)+|{\mathcal S}|\log\left(1+\frac{2\rho\sqrt{2n}}{2+\rho}\right)+|{\mathcal S}||{\mathcal A}|\log\left(1+\frac{4\rho\sqrt{2n}}{2+\rho}\right)}\Bigg{)}
\end{align*}
After some calculation, we can simplify the inequality to:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma(2+\rho)\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho(1-\gamma)\sqrt{2n}}\Bigg{(}4+\sqrt{\log\frac{2|{\mathcal S}|(1+2\sqrt{2n}(\rho+4))^3}{\delta}}\Bigg{)}
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: chi2-s}]
Similar to the proof of Lemma~\ref{lem: l1-s}, when $f(t)=(t-1)^2$, we have:
\begin{align*}
f^*(s)=
\begin{cases}
\frac{s^2}{4}+s, & s\ge-2\\
-1, & s<-2
\end{cases}
\end{align*}
By Lemma~\ref{lem: f-eq-s}, the optimization problem turns into:
\begin{align*}
\sup_{\lambda\ge0, \eta\in{\mathbb R}^{|{\mathcal A}|}}-\frac{1}{4\lambda}\sum_{s,a}P_a(s)\left(\eta_a-\pi(a)V(s)\right)_+^2-\lambda|{\mathcal A}|(\rho+1)+\sum_{a}\eta_a
\end{align*}
Optimizing over $\lambda$, the problem becomes:
\begin{align*}
\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}-\sqrt{(\rho+1)|{\mathcal A}|}\sqrt{\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_+^2}+\sum_{a}\eta_a
\end{align*}
\end{proof}
\begin{lem}
\label{lem: chi2-value}
Denote $g(\eta)=\sqrt{(\rho+1)|{\mathcal A}|}\sqrt{\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_+^2}-\sum_{a}\eta_a$. The minimizer of $g(\eta)$ lies in the set
\begin{align*}
I_{\chi^2}=\left\{\eta\in{\mathbb R}^{|{\mathcal A}|}\Bigg{|}\eta_a\ge 0, \sum_{a}\eta_a\le\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}\right\}
\end{align*}
where $C(\rho)=\sqrt{1+\rho}$.
\end{lem}
\begin{proof}
If there exists $\widehat{a}\in{\mathcal A}$ such that $\eta_{\widehat{a}}\le0$, we have:
\begin{align*}
g(\eta)=-\eta_{\widehat{a}}+\sqrt{(\rho+1)|{\mathcal A}|}\sqrt{\sum_{s,a\not=\widehat{a}}P_a(s)(\eta_a-\pi(a)V(s))_+^2}-\sum_{a\not=\widehat{a}}\eta_a
\end{align*}
Thus, the infimum of $g(\eta)$ could be obtained when $\eta_{\widehat{a}}=0$. Besides, by the Cauchy--Schwarz inequality, we have:
\begin{align*}
g(\eta)&\ge\sqrt{(\rho+1)|{\mathcal A}|}\frac{\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_+}{\sqrt{\sum_{s,a}P_a(s)}}-\sum_a \eta_a\\
&=\sqrt{1+\rho}\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_+-\sum_a\eta_a\\
&\ge\sqrt{1+\rho}\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))-\sum_a\eta_a\\
&=\left(\sqrt{1+\rho}-1\right)\sum_a\eta_a - \sqrt{1+\rho}\sum_{s,a}P_a(s)\pi(a)V(s)
\end{align*}
Thus, when $\sum_a\eta_a\ge\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}$, we have:
\begin{align*}
g(\eta)\ge\sqrt{1+\rho}\left(\frac{1}{1-\gamma}-\sum_{s,a}P_a(s)\pi(a)V(s)\right)\ge0
\end{align*}
By the convexity of $g(\eta)$, the minimizer of $g(\eta)$ lies in the set:
\begin{align*}
\left\{\eta\in{\mathbb R}^{|{\mathcal A}|}\Bigg{|}\eta_a\ge 0, \sum_{a}\eta_a\le\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}\right\}
\end{align*}
where $C(\rho)=\sqrt{1+\rho}$.
\end{proof}
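Lemma~\ref{lem: chi2-value} confines the convex inner minimization to the compact set $I_{\chi^2}$, so in practice it can be handed to an off-the-shelf solver. A sketch (illustrative, assuming \texttt{numpy} and \texttt{scipy}; SLSQP and the starting point are our choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
nS, nA, rho, gamma = 5, 3, 0.3, 0.9
C = np.sqrt(1.0 + rho)
P = rng.dirichlet(np.ones(nS), size=nA)   # P_a(s), one row per action
pi = rng.dirichlet(np.ones(nA))           # policy at the current state
V = rng.uniform(0.0, 1.0 / (1.0 - gamma), nS)

def g(eta):
    # sqrt((rho+1)|A|) sqrt(sum_{s,a} P_a(s)(eta_a - pi(a)V(s))_+^2) - sum eta_a
    gap = np.maximum(eta[:, None] - pi[:, None] * V[None, :], 0.0)  # (A, S)
    return C * np.sqrt(nA) * np.sqrt((P * gap ** 2).sum()) - eta.sum()

R = C / ((C - 1.0) * (1.0 - gamma))       # radius of I_{chi^2} from the lemma
res = minimize(g, x0=np.full(nA, R / (2 * nA)), bounds=[(0.0, R)] * nA,
               constraints=[{"type": "ineq", "fun": lambda e: R - e.sum()}],
               method="SLSQP")
print(-res.fun)  # worst-case value of sum_{s,a} P_a(s) pi(a) V(s)
\end{verbatim}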
\begin{proof}[Proof of Theorem~\ref{thm: chi2-s-fix}]
Similar to the proof of Theorem~\ref{thm: chi2-fix}, we first ignore dependence on $s$ and denote $g(\eta,P)=C(\rho)\sqrt{|{\mathcal A}|}\sqrt{\sum_{s,a}P_a(s)(\eta_a-\pi(a)V(s))_+^2}-\sum_a\eta_a$, where $C(\rho)=\sqrt{\rho+1}$. Besides, we also denote $Y_{k}^a=\sum_s 1(X_k^a=s)(\eta_a-\pi(a)V(s))_+$ and $Z_k = \sqrt{\sum_a (Y_k^a)^2}$, where the $X_k^a$ are generated by $P_a(\cdot)$ independently. Thus, we have $g(\eta,\widehat{P})=C(\rho)\sqrt{\frac{|{\mathcal A}|}{n}\sum_{k=1}^n Z_k^2}-\sum_a\eta_a$. By restricting $\eta$ to the set of Lemma~\ref{lem: chi2-value}, we have $Z_k\le\sqrt{\sum_a \eta_a^2}\le\sum_a \eta_a$, and $\{Z_k\}$ are i.i.d.\ random variables. By Lemma 6 of \cite{duchi2018learning}, with probability $1-\delta$, we have:
\begin{align*}
\left|g(\eta,\widehat{P})-{\mathbb E} g(\eta,\widehat{P})\right|\le\frac{\sqrt{2|{\mathcal A}|}C^2(\rho)}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\sqrt{\log\frac{2}{\delta}}
\end{align*}
Besides, by Lemma 8 of \cite{duchi2018learning}, we have:
\begin{align*}
\frac{1}{\sqrt{n}}{\mathbb E}\|Z\|_2\ge\sqrt{\frac{1}{n}\sum_{k=1}^n {\mathbb E} Z_k^2}-\sqrt{\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}}\frac{1}{\sqrt{n}}
\end{align*}
Combining these together, with probability $1-\delta$, we have:
\begin{align*}
\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(1+\sqrt{2\log\frac{2}{\delta}}\right)
\end{align*}
Next, we turn to bounding the deviation over $I_{\chi^2}$ uniformly. Notice that, for any two $\widetilde{\eta}$ and $\widehat{\eta}$, we have:
\begin{align*}
\left|g(\widetilde{\eta},P)-g(\widehat{\eta}, P)\right|\le (C(\rho)\sqrt{|{\mathcal A}|}+1)\left\|\widetilde{\eta}-\widehat{\eta}\right\|_1
\end{align*}
Take the smallest $\varepsilon$-net of $I_{\chi^2}$, denoted ${\mathcal N}_\varepsilon$, w.r.t.\ the metric $\|\cdot\|_1$. The size of ${\mathcal N}_\varepsilon$ is bounded by:
\begin{align*}
|{\mathcal N}_\varepsilon|\le\left(1+\frac{2C(\rho)}{(C(\rho)-1)(1-\gamma)\varepsilon}\right)^{|{\mathcal A}|}
\end{align*}
Thus, we have:
\begin{align*}
\sup_{\eta\in I_{\chi^2}}\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le 2(1+C(\rho)\sqrt{|{\mathcal A}|})\varepsilon+\sup_{\eta\in{\mathcal N}_\varepsilon}\left|g(\eta,\widehat{P})-g(\eta,P)\right|
\end{align*}
By taking $\varepsilon=\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{2(1+\sqrt{|{\mathcal A}|}C(\rho))(C(\rho)-1)(1-\gamma)\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\eta\in I_{\chi^2}}\left|g(\eta,\widehat{P})-g(\eta,P)\right|&\le\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal N}_\varepsilon|}{\delta}}\right)\\
&\le\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2}{\delta}+2|{\mathcal A}|\log\left(1+\frac{4\sqrt{n}(1+C(\rho)\sqrt{|{\mathcal A}|})}{C(\rho)\sqrt{|{\mathcal A}|}}\right)}\right)
\end{align*}
Noting that $C(\rho)=\sqrt{\rho+1}\ge1$ and $|{\mathcal A}|\ge1$, we can simplify the inequality to:
\begin{align*}
\sup_{\eta\in I_{\chi^2}}\left|g(\eta,\widehat{P})-g(\eta,P)\right|\le\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2}{\delta}+2|{\mathcal A}|\log(1+8\sqrt{n})}\right)
\end{align*}
Thus, the final result is obtained by union bound over $s\in{\mathcal S}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: chi2-s-union}]
Firstly, we bound the deviation uniformly over $V\in{\mathcal V}$. Similar to the proof of Theorem~\ref{thm: l1-union}, we take the smallest $\varepsilon$-net of ${\mathcal V}$ w.r.t.\ the norm $\|\cdot\|_\infty$ and denote it as ${\mathcal V}_\varepsilon$. By Theorem~\ref{thm: chi2-s-fix}, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le&2\gamma\varepsilon+\frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\Bigg{(}2+\\
&\sqrt{2\log\frac{2|{\mathcal S}|}{\delta}+2|{\mathcal A}|\log\left(1+8\sqrt{n}\right)+2|{\mathcal S}|\log\left(1+\frac{1}{(1-\gamma)\varepsilon}\right)}\Bigg{)}
\end{align*}
By taking $\varepsilon=\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}&\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le \frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\Bigg{(}4+\\
&\sqrt{2\log\frac{2|{\mathcal S}|}{\delta}+2|{\mathcal A}|\log\left(1+8\sqrt{n}\right)+2|{\mathcal S}|\log\left(1+\frac{\sqrt{n}(C(\rho)-1)}{C^2(\rho)\sqrt{|{\mathcal A}|}}\right)}\Bigg{)}
\end{align*}
Next, we bound the deviation uniformly over $\pi\in\Pi$. By Lemma~\ref{lem: eps-pi}, with probability $1-\delta$, we have:
\begin{align*}
&\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V- \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\Bigg{(}4+\\
&\sqrt{2\log\frac{2|{\mathcal S}|}{\delta}+2|{\mathcal A}|\log\left(1+8\sqrt{n}\right)+2|{\mathcal S}|\log\left(1+\frac{\sqrt{n}(C(\rho)-1)}{C^2(\rho)\sqrt{|{\mathcal A}|}}\right)+2\log|\Pi_\varepsilon|}\Bigg{)}
\end{align*}
By taking $\varepsilon=\frac{C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
&\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V- \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\Bigg{(}6+\\
&\sqrt{2\log\frac{2|{\mathcal S}|}{\delta}+2|{\mathcal A}|\log\left(1+8\sqrt{n}\right)+2|{\mathcal S}|\log\left(1+\frac{\sqrt{n}(C(\rho)-1)}{C^2(\rho)\sqrt{|{\mathcal A}|}}\right)+2\log|\Pi_\varepsilon|}\Bigg{)}
\end{align*}
By Lemma~\ref{lem: num-eps-pi} and some calculation, with probability $1-\delta$, we have:
\begin{align*}
&\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V- \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\Bigg{(}6+\\
&\sqrt{2\log\frac{2|{\mathcal S}|}{\delta}+2|{\mathcal A}|\log\left(1+8\sqrt{n}\right)+4|{\mathcal S}||{\mathcal A}|\log\left(1+\frac{4\sqrt{n}(C(\rho)-1)}{C^2(\rho)\sqrt{|{\mathcal A}|}}\right)}\Bigg{)}
\end{align*}
By $C(\rho)\ge1$ and $|{\mathcal A}|\ge1$, we can simplify it as:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V- \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma C^2(\rho)\sqrt{|{\mathcal S}||{\mathcal A}|^2}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(6+\sqrt{2\log\frac{2|{\mathcal S}|(1+8\sqrt{n})(1+4\sqrt{n}(C(\rho)-1))^2}{\delta}}\right)
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma~\ref{lem: kl-s}]
Similar to the proof of Lemma~\ref{lem: l1-s}, when $f(t)=t\log t$, we have $f^*(s)=\exp(s-1)$. By Lemma~\ref{lem: f-eq-s}, optimizing over $\eta$ turns the problem into:
\begin{align*}
\sup_{\lambda\ge0}-\lambda\sum_{a\in{\mathcal A}}\log\sum_{s\in{\mathcal S}}P^*_a(s)\exp\left(-\frac{\pi(a)V(s)}{\lambda}\right)-\lambda|{\mathcal A}|\rho
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: kl-s-fix}]
Similar to the proof of Theorem~\ref{thm: kl-fix}, we denote $g(P)=\inf_{\lambda\in[0,\frac{1}{|{\mathcal A}|\rho(1-\gamma)}]}\lambda|{\mathcal A}|\rho+\lambda\sum_a\log\sum_{s}P_a(s)\exp(-\pi(a)V(s)/\lambda)$. We have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|&\le\sup_{\lambda\in[0,\frac{1}{|{\mathcal A}|\rho(1-\gamma)}]}\left|\lambda\sum_a\left(\log\sum_s \widehat{P}_a(s)\exp(-\frac{\pi(a)V(s)}{\lambda})-\log\sum_s P_a(s)\exp(-\frac{\pi(a)V(s)}{\lambda})\right)\right|\\
&\le\frac{1}{|{\mathcal A}|\rho(1-\gamma)}\sup_{\lambda\in[0,\frac{1}{|{\mathcal A}|\rho(1-\gamma)}]}\left|\sum_a\log\left(1+\frac{\sum_s \left(\widehat{P}_a(s)-P_a(s)\right)\exp\left(-\frac{\pi(a)V(s)}{\lambda}\right)}{\sum_s P_a(s)\exp\left(-\frac{\pi(a)V(s)}{\lambda}\right)}\right)\right|
\end{align*}
Noting that $|\log(1+x)|\le2|x|$ when $|x|\le\frac{1}{2}$, we then have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|&\le\frac{2}{|{\mathcal A}|\rho(1-\gamma)}\sum_a\max_{s}\left|\frac{\widehat{P}_a(s)}{P_a(s)}-1\right|\le\frac{2}{\rho(1-\gamma)}\max_{s,a}\left|\frac{\widehat{P}_a(s)}{P_a(s)}-1\right|
\end{align*}
Denote $\underline{p}=\min_{P(s'|s,a)>0}P(s'|s,a)$; Hoeffding's inequality tells us:
\begin{align*}
{\mathbb P}\left(\max_{s,a}\left|\frac{\widehat{P}_a(s)}{P_a(s)}-1\right|\ge\sqrt{\frac{1}{n\underline{p}^2}\log\frac{2|{\mathcal S}||{\mathcal A}|}{\delta}}\right)\le\delta
\end{align*}
Thus, with probability $1-\delta$, we have:
\begin{align*}
\left|g(\widehat{P})-g(P)\right|\le\frac{2}{\rho(1-\gamma)}\sqrt{\frac{1}{n\underline{p}^2}\log\frac{2|{\mathcal S}||{\mathcal A}|}{\delta}}
\end{align*}
The final result is obtained by a union bound over $s\in{\mathcal S}$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: kl-s-union}]
Firstly, we take the smallest $\varepsilon$-net of ${\mathcal V}$ w.r.t norm $\|\cdot\|_\infty$ and denote it as ${\mathcal V}_\varepsilon$. By Theorem~\ref{thm: kl-s-fix}, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le 2\gamma\varepsilon+\frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\frac{1}{(1-\gamma)\varepsilon}\right)}
\end{align*}
By taking $\varepsilon=\frac{1}{\rho\underline{p}(1-\gamma)\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le \frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\rho\underline{p}\sqrt{n}\right)}\right)
\end{align*}
Next, we bound the deviation uniformly over $\pi\in\Pi$. By Lemma~\ref{lem: eps-pi}, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma\varepsilon}{1-\gamma}+\frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\rho\underline{p}\sqrt{n}\right)+\log|\Pi_\varepsilon|}\right)
\end{align*}
By taking $\varepsilon=\frac{1}{\rho\underline{p}\sqrt{n}}$, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\left(2+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+|{\mathcal S}|\log\left(1+\rho\underline{p}\sqrt{n}\right)+\log|\Pi_\varepsilon|}\right)
\end{align*}
By Lemma~\ref{lem: num-eps-pi}, with probability $1-\delta$, we have:
\begin{align*}
\sup_{\pi\in\Pi, V\in{\mathcal V}}\left\|{\mathcal T}_r^\pi V -\widehat{{\mathcal T}}_r^\pi V\right\|_\infty&\le\frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\left(2+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}+2|{\mathcal S}||{\mathcal A}|\log\left(1+4\rho\underline{p}\sqrt{n}\right)}\right)\\
&\le\frac{2\gamma\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho\underline{p}(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal S}|^2|{\mathcal A}|(1+4\rho\underline{p}\sqrt{n})}{\delta}}\right)
\end{align*}
\end{proof}
\section{Proofs of Section~\ref{sec: res-off}}
\begin{proof}[Proof of Lemma~\ref{lem: off-dis}]
By Chernoff bound, for fixed $(s,a)\in{\mathcal S}\times{\mathcal A}$, we have:
\begin{align*}
{\mathbb P}\left(n(s,a)<\frac{n}{2}\nu(s,a)\right)\le\left(\frac{2}{e}\right)^{\frac{n\nu(s,a)}{2}}\le e^{-\frac{n\nu(s,a)}{8}}\le e^{-\frac{n\nu_{\min}}{8}}
\end{align*}
Thus, by union bound over ${\mathcal S}\times{\mathcal A}$, we have:
\begin{align*}
{\mathbb P}\left(\exists(s,a)\in{\mathcal S}\times{\mathcal A}, n(s,a)<\frac{n}{2}\nu(s,a)\right)\le|{\mathcal S}||{\mathcal A}|e^{-\frac{n\nu_{\min}}{8}}
\end{align*}
Thus, when $n\ge\frac{8}{\nu_{\min}}\log\frac{|{\mathcal S}||{\mathcal A}|}{\delta}$, with probability $1-\delta$, for all $(s,a)\in{\mathcal S}\times{\mathcal A}$, we have:
\begin{align*}
n(s,a)\ge\frac{n}{2}\nu(s,a)
\end{align*}
\end{proof}
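The guarantee of Lemma~\ref{lem: off-dis} is easy to illustrate by simulation. A minimal sketch (ours, assuming \texttt{numpy}; the behavior distribution is synthetic):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
n_sa, n, trials = 20, 5000, 2000
nu = rng.dirichlet(np.full(n_sa, 10.0))          # behavior distribution
counts = rng.multinomial(n, nu, size=trials)     # n(s,a) in each trial
fail = (counts < n * nu / 2).any(axis=1).mean()  # empirical failure rate
bound = n_sa * np.exp(-n * nu.min() / 8)         # the union-bound guarantee
print(fail, "<=", bound)
\end{verbatim}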
\begin{proof}[Proof of Theorem~\ref{thm: l1-union-off}]
Denote $g(\eta,P)=\sum_{s\in{\mathcal S}}P(s)(\eta-V(s))_++\frac{(\eta-\min_s V(s))_+}{2}\rho-\eta$; by the proof of Theorem~\ref{thm: l1-fix}, we have:
\begin{align*}
{\mathbb P}\left(\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P}_{s,a})-g(\eta,P_{s,a})\right|>\varepsilon(s,a)\Bigg{|}\sigma(X)\right)\le\frac{\delta}{2|{\mathcal S}||{\mathcal A}|}
\end{align*}
where $\varepsilon(s,a)=\frac{2+\rho}{\sqrt{2n(s,a)}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{4|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n(s,a)})}{\delta}}\right)$ and $X\sim\nu$ are the random variables generating the $(s,a)$ pairs in ${\mathcal D}$; conditioning on $\sigma(X)$ fixes all randomness coming from $\nu$. Denote $\varepsilon=\frac{2+\rho}{\sqrt{n\nu_{\min}}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{4|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}}\right)$ and $E=\{\forall (s,a),\ n(s,a)\ge n\nu_{\min}/2\}$. Thus, by a union bound over $(s,a)$, we have:
\begin{align*}
&{\mathbb P}\left(\exists(s,a),\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P}_{s,a})-g(\eta,P_{s,a})\right|>\varepsilon, E\right)\\
\le&\sum_{s,a}{\mathbb P}\left(\sup_{\eta\in[0,\frac{2+\rho}{\rho(1-\gamma)}]}\left|g(\eta,\widehat{P}_{s,a})-g(\eta,P_{s,a})\right|>\varepsilon(s,a)\right)\\
\le&\frac{\delta}{2}
\end{align*}
Thus, for any fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, when $n\ge\frac{8}{\nu_{\min}}\log\frac{2|{\mathcal S}||{\mathcal A}|}{\delta}$, with probability at most $\delta$, the event $E$ holds and:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\ge\frac{(2+\rho)\gamma}{\sqrt{n\nu_{\min}}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{4|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}}\right)
\end{align*}
At last, we can take a union bound over ${\mathcal V}$ and $\Pi$ by an approach similar to Theorem~\ref{thm: l1-union}, and then combine ${\mathbb P}(A)\le {\mathbb P}(A\cap E)+{\mathbb P}(E^c)$ for any event $A$. Letting $n\ge\frac{8}{\nu_{\min}}\log\frac{2|{\mathcal S}||{\mathcal A}|}{\delta}$, with probability $1-\delta$, we have:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2(2+\rho)\gamma\sqrt{|{\mathcal S}|}}{\sqrt{n\nu_{\min}}\rho(1-\gamma)^2}\left(2+\sqrt{\log\frac{8|{\mathcal S}||{\mathcal A}|^2(1+2(2+\rho)\sqrt{2n})^2}{\delta(2+\rho)}}\right)
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: chi2-union-off},~\ref{thm: kl-union-off},~\ref{thm: l1-union-off-s},~\ref{thm: chi2-union-off-s}, and~\ref{thm: kl-union-off-s}]
Similar to the proof of Theorem~\ref{thm: l1-union-off}.
\end{proof}
\end{appendix}
\subsection{Contributions}
Instead of providing a new efficient algorithm to solve robust MDPs, we investigate the performance of the optimal robust policy $\widehat{\pi}\in\mathop{\rm argmax}_\pi \widehat{V}^\pi_r(\mu)$ on the robust value function whose uncertainty set is centered at the true transition probability, rather than the estimated one, as Equation~\eqref{eq: gap} states.
\begin{align}
\label{eq: gap}
\max_\pi V_r^\pi(\mu)- V_r^{\widehat{\pi}}(\mu)
\end{align}
However, deriving an upper bound of Equation~\eqref{eq: gap} directly could be quite hard. Thus, we consider a uniform deviation upper bound:
\begin{align}
\max_\pi V_r^\pi(\mu)- V_r^{\widehat{\pi}}(\mu)\le 2\sup_{\pi\in\Pi}\left|V_r^\pi(\mu)-\widehat{V}_r^\pi(\mu)\right|
\end{align}
In this paper, we mainly consider three different uncertainty sets: $L_1$, $\chi^2$, and KL balls, each of which is a certain form of $f$-divergence ball, and two kinds of data-generating processes: the generative model and the offline dataset. When the dataset is obtained from a generative model, we provide the sample complexity of achieving an $\varepsilon$ deviation bound for Equation~\eqref{eq: gap} in different settings in Table~\ref{tab: result}. Under the $(s,a)$-rectangular assumption, the overall performance among different uncertainty sets is nearly the same up to logarithmic factors, namely about $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$ when $\rho$ is small.
Notably, the sample complexity enlarges when we instead assume the uncertainty sets satisfy the $s$-rectangular assumption in Table~\ref{tab: result}. The main difference is caused by the fact that the optimal robust policy is deterministic \citep{nilim2005robust} in the $(s,a)$-rectangular setting but randomized \citep{wiesemann2013robust} in the $s$-rectangular setting. Thus, the uniform bound over the whole policy class can be worse than that over the deterministic policy class.
Besides, we also extend our analysis from the generative model to the offline dataset, which is generated by a given behavior policy. As long as the concentrability assumption \citep{chen2019information} holds, the sample complexity changes only through the concentrability coefficient, as reported in Table~\ref{tab: result-off}.
\citet{zhou2021finite} also provides a sample complexity bound $\widetilde{O}\left(\frac{C|{\mathcal S}|^2 }{\nu_{\min}\varepsilon^2\rho^2(1-\gamma)^2}\right)$ when the KL divergence is applied in the uncertainty set and the $(s,a)$-rectangular set is assumed. However, their result depends exponentially on $\frac{1}{1-\gamma}$, which is hidden in $C$. Besides, another unknown parameter hidden in $C$ is the optimal solution of a convex problem and has no explicit expression. In this paper, we improve their results to polynomial and explicit sample complexity bounds in Table~\ref{tab: result} and Table~\ref{tab: result-off}.
\newcommand{\topsepremove}{\aboverulesep = 0mm \belowrulesep = 0mm} \topsepremove
\begin{table}[htbp!]
\centering
\scalebox{1.0}{
\begin{tabular}{|c|c|c|}
\toprule
Uncertainty Set & $(s,a)$-rectangular & $s$-rectangular \\
\toprule
$L_1$ & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|(2+\rho)^2}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^2(2+\rho)^2}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$\\
\hline
$\chi^2$ & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|(1+\rho)^2}{\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^3(1+\rho)^2}{\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$\\
\hline
KL & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|}{\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^2}{\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$\\
\hline
\end{tabular}
}
\caption{The sample complexity of achieving an $\varepsilon$ deviation bound~\eqref{eq: gap} in the generative model setting. $|{\mathcal S}|$ and $|{\mathcal A}|$ are the sizes of the state and action spaces. $\rho$ represents the width of the uncertainty set in Example~\ref{eg: f-set} and Example~\ref{eg: f-set-s}. $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.}
\label{tab: result}
\end{table}
\begin{table}[htbp!]
\centering
\scalebox{1.0}{
\begin{tabular}{|c|c|c|}
\toprule
Uncertainty Set & $(s,a)$-rectangular & $s$-rectangular \\
\toprule
$L_1$ & $\widetilde{O}\left(\frac{|{\mathcal S}|(2+\rho)^2}{\nu_{\min}\varepsilon^2\rho^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}||{\mathcal A}|(2+\rho)^2}{\nu_{\min}\varepsilon^2\rho^2(1-\gamma)^4}\right)$\\
\hline
$\chi^2$ & $\widetilde{O}\left(\frac{|{\mathcal S}|(1+\rho)^2}{\nu_{\min}\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}||{\mathcal A}|^2(1+\rho)^2}{\nu_{\min}\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$\\
\hline
KL & $\widetilde{O}\left(\frac{|{\mathcal S}|}{\nu_{\min}\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$ & $\widetilde{O}\left(\frac{|{\mathcal S}||{\mathcal A}|}{\nu_{\min}\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$\\
\hline
\end{tabular}
}
\caption{The sample complexity of achieving an $\varepsilon$ deviation bound~\eqref{eq: gap} in the offline dataset setting. $|{\mathcal S}|$ and $|{\mathcal A}|$ are the sizes of the state and action spaces. $\rho$ represents the width of the uncertainty set in Example~\ref{eg: f-set} and Example~\ref{eg: f-set-s}. $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$. $\nu_{\min}=\min_{s,a,\nu(s,a)>0}\nu(s,a)$.}
\label{tab: result-off}
\end{table}
\subsection{Related Work}
In this part, we survey prior work on offline RL and robust MDPs. Besides, we also discuss Distributionally Robust Optimization (DRO) in supervised learning, as solving robust MDPs is related to robust optimization approaches.
\paragraph{Offline RL.} Offline RL is mainly divided into two parts: Off-Policy Evaluation (OPE) and Off-Policy Learning (OPL). Both problems assume the agent is unable to interact with the environment and only has access to a given explorable dataset. In terms of OPE, there are mainly three different methods: the Direct Method (DM), Importance Sampling (IS), and Doubly Robust (DR). DM directly estimates the value function from the dataset and evaluates it under a given target policy \citep{mannor2004bias, jong2007model, grunewalder2012modelling, bertsekas1995neuro, dann2014policy, duan2020minimax}.
The second approach is IS, or its self-normalized variant, where the density ratio of the behavior and target policies is estimated from the dataset \citep{hirano2003efficient, li2015toward, liu2018breaking, swaminathan2015self, kallus2020double}.
Besides, a line of work \citep{liu2018breaking, xie2019towards, kallus2020double, yin2020asymptotically} considers estimating the marginal density ratio of the behavior and target policies, which is called Marginalized Importance Sampling (MIS) and has been shown to help overcome the curse-of-horizon phenomenon. The third approach, DR, combines the DM and IS methods by adding an estimated value function as an error-controlling term, which brings benefits such as lower bias and variance compared with the DM and IS methods alone \citep{dudik2014doubly, jiang2016doubly, thomas2016data, farajtabar2018more, kallus2020double}. OPL (or Batch RL) is more difficult than OPE, as the goal of OPL is to learn the optimal policy from the given dataset, especially when combined with function approximation. When certain assumptions are satisfied, many works have discussed the sufficient and necessary conditions for efficient OPL and provided sample-efficient algorithms under different function hypothesis classes \citep{munos2008finite, lazaric2012finite, le2019batch,chen2019information, xie2020batch, wang2020statistical, yin2020near, duan2021risk}.
\paragraph{Robust MDPs.} Robust MDPs are related to the Direct Method in offline RL, as the agent only has access to an offline dataset. The usual approach to solving robust MDPs is to first estimate the reward and transition probabilities and then run dynamic programming algorithms to obtain near-optimal solutions \citep{iyengar2005robust,nilim2005robust}. Different from basic MDPs \citep{puterman2014markov}, robust MDPs allow the transition probability to take values in an uncertainty set \citep{xu2006robustness, mannor2012lightning} and aim to obtain an optimal robust policy that maximizes the worst-case value function. Some works \citep{xu2009parametric, petrik2012approximate, ghavamzadeh2016safe} have shown that solutions of robust MDPs are less sensitive to estimation errors. However, the choice of uncertainty set still affects the solutions of robust MDPs. \citet{wiesemann2013robust} concluded that under $(s,a)/s$-rectangular and convex set assumptions, the computational complexity of obtaining near-optimal solutions is polynomial; if the uncertainty set is non-rectangular, the problem becomes NP-hard. Thus, under $(s,a)/s$-rectangular set assumptions, many works have provided efficient learning algorithms to obtain near-optimal solutions for different uncertainty sets \citep{iyengar2005robust, nilim2005robust,wiesemann2013robust, kaufman2013robust, ho2018fast, smirnova2019distributionally, ho2020partial}. Besides, \citet{goyal2018robust} considers a more general assumption called $r$-rectangularity when MDPs have a low-dimensional linear representation, and \citet{derman2020distributional} proposes an extension of robust MDPs called distributionally robust MDPs under the Wasserstein distance. Instead of designing efficient learning algorithms, few works have considered the performance of the optimal robust policy as Equation~\eqref{eq: gap} shows. \citet{si2020distributionally} considered the asymptotic and non-asymptotic behavior of the optimal robust solutions in the bandit case when the KL divergence is applied to the uncertainty set, and \citet{zhou2021finite} later extended these non-asymptotic results to the infinite-horizon case.
\paragraph{Distributionally Robust Optimization (DRO).} In fact, handling the uncertainty set in robust MDPs is related to Distributionally Robust Optimization (DRO), where the objective function is minimized against a worst-case distribution. The core motivation of DRO is to deal with the distribution shift of data under different uncertainty sets. \citet{bertsimas2018data, delage2010distributionally} consider uncertainty sets formulated by moment conditions, while \citet{ben2013robust, duchi2021statistics, duchi2016variance, lam2016robust,duchi2018learning} consider uncertainty sets formulated by $f$-divergence balls. Besides, \citet{wozabal2012framework, blanchet2019quantifying, gao2016distributionally, lee2017minimax} consider Wasserstein balls, which are more computationally challenging. The work most closely related to our results is \citet{duchi2018learning}, which considers the asymptotic and non-asymptotic performance of the empirical minimizer at the population level. However, the results of \citet{duchi2018learning} are mainly built in the field of supervised learning, while ours are built on robust MDPs. Recently, a line of work \citep{jin2020pessimism, dai2020coindice, xiao2021optimality} has also considered the connections between pessimistic RL and DRO.
\subsection{Case 1: $L_1$ balls}
In this part, we set $f(t)=|t-1|$ in Example~\ref{eg: f-set}. The uncertainty set is formulated as:
\begin{example}[$L_1$ balls]
For each $(s,a)\in{\mathcal S}\times{\mathcal A}$ and assuming $\rho<2$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}\left|P(s'|s,a)-P^*(s'|s,a)\right|\le\rho\right\}\\
&\widehat{{\mathcal P}}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}\left|P(s'|s,a)-\widehat{P}(s'|s,a)\right|\le\rho\right\}
\end{align*}
By $(s,a)$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_{s,a}{\mathcal P}_{s,a}(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_{s,a}\widehat{{\mathcal P}}_{s,a}(\rho)$.
\end{example}
Thus, for any given $V\in{\mathcal V}$, the explicit forms of ${\mathcal T}^\pi_r V$ and $\widehat{{\mathcal T}}^\pi_r V$ are:
\begin{lem}
\label{lem: l1}
Under the $(s,a)$-rectangular assumption and $L_1$ balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\eta\in{\mathbb R}}\left( -\sum_{s'}P^*(s'|s,a)(\eta-V(s'))_+ - \frac{(\eta-\min_{s'}V(s'))_{+}}{2}\rho +\eta \right)\right)\\
\widehat{{\mathcal T}}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\eta\in{\mathbb R}}\left( -\sum_{s'}\widehat{P}(s'|s,a)(\eta-V(s'))_+ - \frac{(\eta-\min_{s'}V(s'))_{+}}{2}\rho +\eta \right)\right)
\end{align*}
\end{lem}
To simplify, we denote $g(\eta,P)=\sum_{s\in{\mathcal S}}P(s)(\eta-V(s))_++\frac{(\eta-\min_s V(s))_+}{2}\rho-\eta$, which is convex in $\eta$. It is easy to observe that $g(\eta,P)=-\eta\ge0$ when $\eta\le0$, and:
\begin{align*}
g\left(\frac{2+\rho}{\rho(1-\gamma)},P\right)=-\sum_{s\in{\mathcal S}}P(s)V(s)+\frac{\frac{2+\rho}{\rho(1-\gamma)}-\min_s V(s)}{2}\rho\ge0
\end{align*}
by $V(s)\in[0,\frac{1}{1-\gamma}]$ for all $s\in{\mathcal S}$. Thus, we can restrict the value of $\eta$ in Lemma~\ref{lem: l1} to $[0,\frac{2+\rho}{\rho(1-\gamma)}]$. Then, we can easily bound the error between ${\mathcal T}_r^\pi V$ and $\widehat{{\mathcal T}}_r^\pi V$ for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, as Theorem~\ref{thm: l1-fix} states:
\begin{thm}
\label{thm: l1-fix}
In the setting of $L_1$ balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{(2+\rho)\gamma}{\sqrt{2n}\rho(1-\gamma)}\left(1+\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|(1+(4+\rho)\sqrt{2n})}{\delta}}\right)
\end{align*}
\end{thm}
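As a concrete illustration of how the dual form in Lemma~\ref{lem: l1} can be evaluated numerically, the following sketch (our own illustration, not part of the analysis; the grid resolution is an arbitrary choice) computes the inner supremum over $\eta$ by searching the interval $[0,\frac{2+\rho}{\rho(1-\gamma)}]$ established above:
\begin{verbatim}
import numpy as np

def l1_dual_inner(P_sa, V, rho, gamma, num_grid=1000):
    # Inner sup over eta in Lemma `lem: l1`, restricted to
    # [0, (2+rho)/(rho*(1-gamma))] as argued above.
    eta_max = (2.0 + rho) / (rho * (1.0 - gamma))
    etas = np.linspace(0.0, eta_max, num_grid)
    vals = (-np.clip(etas[:, None] - V[None, :], 0.0, None) @ P_sa
            - 0.5 * rho * np.clip(etas - V.min(), 0.0, None)
            + etas)
    return vals.max()
\end{verbatim}
The robust Bellman operator is then assembled as ${\mathcal T}^\pi_r V(s)=\sum_a\pi(a|s)\left(R(s,a)+\gamma\cdot\texttt{l1\_dual\_inner}(P^*(\cdot|s,a),V,\rho,\gamma)\right)$, and the empirical operator $\widehat{{\mathcal T}}^\pi_r$ is obtained by replacing $P^*$ with $\widehat{P}$.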
What remains is to extend Theorem~\ref{thm: l1-fix} to a uniform bound over ${\mathcal V}$ and $\Pi$. By the $(s,a)$-rectangular set assumption, the optimal robust policy is deterministic. Thus, we can restrict the policy class $\Pi$ to all deterministic policies, which is finite with size $|{\mathcal A}|^{|{\mathcal S}|}$. Even though ${\mathcal V}=[0,\frac{1}{1-\gamma}]^{|{\mathcal S}|}$ is infinite, we can take an $\varepsilon$-net of ${\mathcal V}$ with respect to the norm $\|\cdot\|_\infty$. Thus, we have the final result as Theorem~\ref{thm: l1-union} states:
\begin{thm}
\label{thm: l1-union}
Under the $(s,a)$-rectangular assumption and $L_1$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_{\pi}\widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_{\pi} V^\pi_r(\mu)- V_r^{\widehat{\pi}}(\mu)\le\frac{2(2+\rho)\gamma\sqrt{|{\mathcal S}|}}{\sqrt{2n}\rho(1-\gamma)^2}\left(2+\sqrt{\log\frac{4|{\mathcal S}||{\mathcal A}|^2(1+2(2+\rho)\sqrt{2n})^2}{\delta(2+\rho)}}\right)
\end{align*}
\end{thm}
By Theorem~\ref{thm: l1-union}, to achieve an $\widetilde{O}(\varepsilon)$ error bound, a sample complexity of $\widetilde{O}\left(\frac{|{\mathcal S}|(2+\rho)^2}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$ suffices for each $(s,a)$ pair. Thus, the total sample complexity is $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|(2+\rho)^2}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$.
\subsection{Case 2: $\chi^2$ balls}
In this part, we set $f(t)=(t-1)^2$ in Example~\ref{eg: f-set}. Thus, the uncertainty set is formulated as:
\begin{example}[$\chi^2$ balls]
For each $(s,a)\in{\mathcal S}\times{\mathcal A}$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}\frac{\left(P(s'|s,a)-P^*(s'|s,a)\right)^2}{P^*(s'|s,a)}\le\rho\right\}\\
&\widehat{{\mathcal P}}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}\frac{\left(P(s'|s,a)-\widehat{P}(s'|s,a)\right)^2}{\widehat{P}(s'|s,a)}\le\rho\right\}
\end{align*}
By $(s,a)$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_{s,a}{\mathcal P}_{s,a}(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_{s,a}\widehat{{\mathcal P}}_{s,a}(\rho)$.
\end{example}
By \cite{duchi2018learning}, the results for $f(t)=(t-1)^2$ can be generalized to $f(t)\propto t^k$ for $k>1$. However, the excess risk in DRO is controlled by $\widetilde{O}_p(\frac{1}{n^{(1-1/k)}}\vee\frac{1}{\sqrt{n}})$, where the fastest rate is obtained when $k=2$. Thus, we consider the special case $k=2$. Similar to the $L_1$ ball case, for any given $V\in{\mathcal V}$ and $\pi\in\Pi$, the explicit forms of ${\mathcal T}_r^\pi V$ and $\widehat{{\mathcal T}}_r^\pi V$ are:
\begin{lem}
\label{lem: chi2}
Under the $(s,a)$-rectangular assumption and $\chi^2$ balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\eta}\left( -C(\rho)\sqrt{\sum_{s'}P^*(s'|s,a)(\eta-V(s'))_+^2}+\eta\right)\right)\\
\widehat{{\mathcal T}}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\eta}\left(-C(\rho)\sqrt{\sum_{s'}\widehat{P}(s'|s,a)(\eta-V(s'))_+^2}+\eta\right)\right)
\end{align*}
where $C(\rho)=\sqrt{1+\rho}$.
\end{lem}
We also denote $g(\eta,P)=C(\rho)\sqrt{\sum_{s\in{\mathcal S}}P(s)(\eta-V(s))_+^2}-\eta$, which is convex in $\eta$. It is easy to note that $g(\eta,P)=-\eta\ge0$ when $\eta\le0$ and that $g\left(\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}, P\right)\ge0$. Thus, we can restrict the optimal value of $\eta$ in Lemma~\ref{lem: chi2} to the interval $[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]$. The error bound between ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$ for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$ is then:
\begin{thm}
\label{thm: chi2-fix}
In the setting of $\chi^2$ balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{C^2(\rho)\gamma}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal S}||{\mathcal A}|(1+4\sqrt{n})}{\delta}}\right)
\end{align*}
\end{thm}
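The inner supremum in Lemma~\ref{lem: chi2} admits the same grid-search sketch as in the $L_1$ case, now over the interval $[0,\frac{C(\rho)}{(C(\rho)-1)(1-\gamma)}]$ derived above (again a numerical illustration of ours, not part of the analysis):
\begin{verbatim}
def chi2_dual_inner(P_sa, V, rho, gamma, num_grid=1000):
    # Inner sup over eta in Lemma `lem: chi2`, restricted to
    # [0, C/((C-1)*(1-gamma))] with C = sqrt(1+rho).
    C = np.sqrt(1.0 + rho)
    etas = np.linspace(0.0, C / ((C - 1.0) * (1.0 - gamma)), num_grid)
    sq = np.clip(etas[:, None] - V[None, :], 0.0, None) ** 2 @ P_sa
    return (-C * np.sqrt(sq) + etas).max()
\end{verbatim}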
Similar to the case of $L_1$ balls, we can extend Theorem~\ref{thm: chi2-fix} to a uniform bound over ${\mathcal V}$ and the deterministic policy class $\Pi$ as Theorem~\ref{thm: chi2-uni} states, from which we can derive an overall sample complexity bound $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|(1+\rho)^2}{\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$.
\begin{thm}
\label{thm: chi2-uni}
Under the $(s,a)$-rectangular assumption and $\chi^2$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_{\pi}\widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2C^2(\rho)\gamma\sqrt{|{\mathcal S}|}}{(C(\rho)-1)(1-\gamma)^2\sqrt{n}}\left(4+\sqrt{2\log\frac{2|{\mathcal S}||{\mathcal A}|^2(1+(C(\rho)+3)\sqrt{n})^2}{\delta C^2(\rho)}}\right)
\end{align*}
\end{thm}
\subsection{Case 3: KL balls}
In this part, we set $f(t)=t\log t$ in Example~\ref{eg: f-set}. The uncertainty set is formulated as:
\begin{example}[KL balls]
For each $(s,a)\in{\mathcal S}\times{\mathcal A}$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}P(s'|s,a)\log\frac{P(s'|s,a)}{P^*(s'|s,a)}\le\rho\right\}\\
&\widehat{{\mathcal P}}_{s,a}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}}P(s'|s,a)\log\frac{P(s'|s,a)}{\widehat{P}(s'|s,a)}\le\rho\right\}
\end{align*}
By $(s,a)$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_{s,a}{\mathcal P}_{s,a}(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_{s,a}\widehat{{\mathcal P}}_{s,a}(\rho)$.
\end{example}
Similar to the $L_1$ ball case, for any given $V\in{\mathcal V}$ and $\pi\in\Pi$, the explicit forms of ${\mathcal T}_r^\pi V$ and $\widehat{{\mathcal T}}_r^\pi V$ are:
\begin{lem}
\label{lem: kl}
Under the $(s,a)$-rectangular assumption and KL balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\lambda\ge0}\left(-\lambda\rho-\lambda\log\sum_{s'}P^*(s'|s,a)\exp(-\frac{V(s')}{\lambda})\right)\right)\\
\widehat{{\mathcal T}}^\pi_r V (s)&=\sum_{a}\pi(a|s)\left(R(s,a) + \gamma\sup_{\lambda\ge0}\left(-\lambda\rho-\lambda\log\sum_{s'}\widehat{P}(s'|s,a)\exp(-\frac{V(s')}{\lambda})\right)\right)
\end{align*}
\end{lem}
Here we denote $g(\lambda,P)=\lambda\rho+\lambda\log\sum_{s}P(s)\exp(-V(s)/\lambda)$, which is convex in $\lambda$. Even though the domain of $g(\lambda,P)$ does not include $\lambda=0$, we observe that $g(\lambda,P)\ge0$ for all $\lambda\ge\frac{1}{\rho(1-\gamma)}$ and that $g(\lambda,P)$ is monotonically increasing w.r.t.\ $\lambda$ when $\lambda\ge\frac{1}{\rho(1-\gamma)}$. Thus, the optimal $\lambda^*$ in Lemma~\ref{lem: kl} takes values in the interval $[0, \frac{1}{\rho(1-\gamma)}]$. Then, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, we have:
\begin{thm}
\label{thm: kl-fix}
In the setting of KL balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma}{\rho(1-\gamma)\underline{p}\sqrt{n}}\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|}{\delta}}
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
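Since $g(\lambda,P)$ is convex and the optimal $\lambda^*$ lies in $[0,\frac{1}{\rho(1-\gamma)}]$, the inner supremum in Lemma~\ref{lem: kl} can be found by a bounded scalar minimization. The following sketch (ours, using SciPy) also handles the $\lambda\to0$ limit, which equals $\min_{s':P(s')>0}V(s')$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def kl_dual_inner(P_sa, V, rho, gamma):
    lam_max = 1.0 / (rho * (1.0 - gamma))
    def neg_obj(lam):
        z = -V / lam                # stable log-sum-exp under P_sa
        m = z.max()
        return lam * rho + lam * (m + np.log(P_sa @ np.exp(z - m)))
    res = minimize_scalar(neg_obj, bounds=(1e-8, lam_max),
                          method="bounded")
    # the lambda -> 0 limit of the objective is min of V on the support
    return max(-res.fun, V[P_sa > 0].min())
\end{verbatim}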
Next, we can extend Theorem~\ref{thm: kl-fix} to a uniform bound over ${\mathcal V}$ and the deterministic policy class $\Pi$ as Theorem~\ref{thm: kl-uni} states, from which we can derive an overall sample complexity bound $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|}{\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$.
\begin{thm}
\label{thm: kl-uni}
Under the $(s,a)$-rectangular assumption and KL balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}^\pi_r(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{4\gamma\sqrt{|{\mathcal S}|}}{\rho(1-\gamma)^2\underline{p}\sqrt{n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|^2|{\mathcal A}|^2(1+\rho\underline{p}\sqrt{n})}{\delta}}\right)
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
\subsection{Offline dataset with $(s,a)$-rectangular assumption}
\begin{thm}[$L_1$ balls]
\label{thm: l1-union-off}
Under offline dataset, $(s,a)$-rectangular assumption, Assumption~\ref{asmp: conc}, and $L_1$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2(2+\rho)\gamma\sqrt{|{\mathcal S}|}}{\rho(1-\gamma)^2\sqrt{n\nu_{\min}}}\left(2+\sqrt{\log\frac{8|{\mathcal S}||{\mathcal A}|^2(1+2(2+\rho)\sqrt{2n})^2}{\delta(2+\rho)}}\right)
\end{align*}
\end{thm}
\begin{thm}[$\chi^2$ balls]
\label{thm: chi2-union-off}
Under offline dataset, $(s,a)$-rectangular assumption, Assumption~\ref{asmp: conc}, and $\chi^2$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2C^2(\rho)\gamma\sqrt{|{\mathcal S}|}}{(C(\rho)-1)(1-\gamma)^2\sqrt{n\nu_{\min}}}\left(4+\sqrt{2\log\frac{4|{\mathcal S}||{\mathcal A}|^2(1+(C(\rho)+3)\sqrt{n})^2}{\delta C^2(\rho)}}\right)
\end{align*}
\end{thm}
\begin{thm}[KL balls]
\label{thm: kl-union-off}
Under offline dataset, $(s,a)$-rectangular assumption, Assumption~\ref{asmp: conc}, and KL balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_{\pi}V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{4\gamma\sqrt{|{\mathcal S}|}}{\rho(1-\gamma)^2\underline{p}\sqrt{n\nu_{\min}}}\left(1+\sqrt{2\log\frac{2|{\mathcal S}|^2|{\mathcal A}|^2(1+\rho\underline{p}\sqrt{n})}{\delta}}\right)
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
\subsection{Offline dataset with $s$-rectangular assumption}
However, the situation becomes complicated when $s$-rectangularity is assumed. In Section~\ref{sec: res-s}, we assume the dataset is generated by a generative model, where the counts $n(s,a)$ are all the same. In the offline dataset setting, the $n(s,a)$ are not all the same. Instead, inspired by \citet{yin2020near}, we construct a new dataset ${\mathcal D}'\subset{\mathcal D}$ (the modified offline dataset), where we only keep the first $n'$ samples for each $(s,a)$ pair in ${\mathcal D}$, with $n':=\min_{s,a}n(s,a)$. From Lemma~\ref{lem: off-dis}, we know that $n'$ is large with high probability. Thus, we can extend our results in Section~\ref{sec: res-s} to this modified offline dataset setting as follows.
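A minimal sketch of this truncation step (our own illustration, assuming the dataset is stored as a list of $(s,a,s')$ transitions):
\begin{verbatim}
from collections import defaultdict

def truncate_offline_dataset(transitions):
    # Keep only the first n' = min_{s,a} n(s,a) samples per (s,a),
    # preserving their order of arrival in D.
    by_sa = defaultdict(list)
    for s, a, s_next in transitions:
        by_sa[(s, a)].append((s, a, s_next))
    n_prime = min(len(v) for v in by_sa.values())
    return [t for v in by_sa.values() for t in v[:n_prime]], n_prime
\end{verbatim}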
\begin{rem}
Here we give a high-level explanation of why we construct a new offline dataset ${\mathcal D}'$. In the generative model setting, where the $n(s,a)$ are all the same, assuming the $Y_k^{s,a}$ are i.i.d.\ bounded random variables for each $(s,a)$ pair, we can always handle the concentration bound of $f(\sum_a \frac{1}{n(s,a)}\sum_{k=1}^{n(s,a)}Y_k^{s,a})$ by constructing a new i.i.d.\ sum $\frac{1}{n}\sum_{k=1}^n Z_k$, where $f$ is a Lipschitz function, $n=n(s,a)$ and $Z_k=\sum_a Y_k^{s,a}$. However, for an offline dataset, and with the randomness caused by $\nu$ held fixed, the $n(s,a)$ can vary a lot, and it is difficult to handle the concentration bound of $f(\sum_a \frac{1}{n(s,a)}\sum_{k=1}^{n(s,a)}Y_k^{s,a})$. If we bound each $\frac{1}{n(s,a)}\sum_{k=1}^{n(s,a)}Y_k^{s,a}$ for $a\in{\mathcal A}$ separately, the statistical error would be amplified by a factor of $|{\mathcal A}|$ when $f$ is a linear function. When $f$ is a non-linear function, the situation becomes much more complicated, and we leave it as future work.
\end{rem}
\begin{thm}[$L_1$ balls]
\label{thm: l1-union-off-s}
Under modified offline dataset, $s$-rectangular assumption, Assumption~\ref{asmp: conc}, and $L_1$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2\gamma(2+\rho)\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho(1-\gamma)^2\sqrt{n\nu_{\min}}}\Bigg{(}4+\sqrt{\log\frac{2|{\mathcal S}|(1+2\sqrt{2n}(\rho+4))^3}{\delta}}\Bigg{)}
\end{align*}
\end{thm}
\begin{thm}[$\chi^2$ balls]
\label{thm: chi2-union-off-s}
Under modified offline dataset, $s$-rectangular assumption, Assumption~\ref{asmp: conc}, and $\chi^2$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V^{\widehat{\pi}}_r(\mu)\le\frac{2\gamma C^2(\rho)\sqrt{|{\mathcal S}||{\mathcal A}|^2}}{(C(\rho)-1)(1-\gamma)^2\sqrt{n\nu_{\min}}}\left(6+\sqrt{2\log\frac{2|{\mathcal S}|(1+8\sqrt{n})(1+4\sqrt{n}(C(\rho)-1))^2}{\delta}}\right)
\end{align*}
\end{thm}
\begin{thm}[KL balls]
\label{thm: kl-union-off-s}
Under modified offline dataset, $s$-rectangular assumption, Assumption~\ref{asmp: conc}, and KL balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V^{\widehat{\pi}}_r(\mu)\le\frac{4\gamma\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho(1-\gamma)^2\underline{p}\sqrt{n\nu_{\min}}}\left(2+\sqrt{2\log\frac{2|{\mathcal S}|^2|{\mathcal A}|\left(1+4\rho\underline{p}\sqrt{n}\right)}{\delta}}\right)
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
\subsection{Case 1: $L_1$ balls}
In this part, we set $f(t)=|t-1|$ in Example~\ref{eg: f-set-s}. The uncertainty set is formulated as:
\begin{example}[$L_1$ balls]
For each $s\in{\mathcal S}$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}\left|P(s'|s,a)-P^*(s'|s,a)\right|\le|{\mathcal A}|\rho\right\}\\
&\widehat{{\mathcal P}}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}\left|P(s'|s,a)-\widehat{P}(s'|s,a)\right|\le|{\mathcal A}|\rho\right\}
\end{align*}
By $s$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_s {\mathcal P}_s(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_s \widehat{{\mathcal P}}_s(\rho)$.
\end{example}
For any given $V\in{\mathcal V}$ and $\pi\in\Pi$, the explicit forms of ${\mathcal T}_r^\pi V$ and $\widehat{{\mathcal T}}_r^\pi V$ are:
\begin{lem}
\label{lem: l1-s}
Under the s-rectangular assumption and $L_1$ balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}_r^\pi V(s)&=R^\pi(s)+\gamma\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}\left(-\sum_{s',a}P^*(s'|s,a)\left(\eta_a-\pi(a|s)V(s')\right)_+-|{\mathcal A}|h(\eta,\pi(\cdot|s), V)\rho+\sum_{a}\eta_a\right)\\
\widehat{{\mathcal T}}_r^\pi V(s)&=R^\pi(s)+\gamma\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}\left(-\sum_{s',a}\widehat{P}(s'|s,a)\left(\eta_a-\pi(a|s)V(s')\right)_+-|{\mathcal A}|h(\eta,\pi(\cdot|s), V)\rho+\sum_{a}\eta_a\right)
\end{align*}
where $R^{\pi}(s)=\sum_a \pi(a|s)R(s,a)$ and $h(\eta,\pi, V)=(\max_{a,s'}\frac{\eta_a-\pi(a)V(s')}{2})_+$.
\end{lem}
Different from the case in which the $(s,a)$-rectangular assumption holds, the explicit form of ${\mathcal T}_r^\pi V$ in Lemma~\ref{lem: l1-s} is determined by an $|{\mathcal A}|$-dimensional vector $\eta$. However, we can still find a way to deal with this situation, which is detailed in Appendix~\ref{apd: s-gen}. Thus, for any fixed $\pi\in\Pi$ and $V\in{\mathcal V}$, we have the following theorem.
\begin{thm}
\label{thm: l1-fix-s}
In the setting of $L_1$ balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma(2+\rho)}{\rho(1-\gamma)\sqrt{2n}}\left(1+\sqrt{\log\frac{2|{\mathcal S}|}{\delta}+|{\mathcal A}|\log\left(1+2\sqrt{2n}(4+\rho)\right)}\right)
\end{align*}
\end{thm}
By a similar union bound over ${\mathcal V}$ and Lemma~\ref{lem: eps-pi}, we obtain the bound on the performance gap as Theorem~\ref{thm: l1-union-s} states, from which we can derive an overall sample complexity bound $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^2(2+\rho)^2}{\varepsilon^2\rho^2(1-\gamma)^4}\right)$, which is $|{\mathcal A}|$ times larger than in the case of the $(s,a)$-rectangular assumption.
\begin{thm}
\label{thm: l1-union-s}
Under the s-rectangular assumption and $L_1$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V_r^{\widehat{\pi}}(\mu)\le\frac{2\gamma(2+\rho)\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho(1-\gamma)^2\sqrt{2n}}\Bigg{(}4+\sqrt{\log\frac{2|{\mathcal S}|(1+2\sqrt{2n}(\rho+4))^3}{\delta}}\Bigg{)}
\end{align*}
\end{thm}
\subsection{Case 2: $\chi^2$ balls}
In this part, we set $f(t)=(t-1)^2$ in Example~\ref{eg: f-set-s}. The uncertainty set is formulated as:
\begin{example}[$\chi^2$ balls]
For each $s\in{\mathcal S}$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}\frac{\left(P(s'|s,a)-P^*(s'|s,a)\right)^2}{P^*(s'|s,a)}\le|{\mathcal A}|\rho\right\}\\
&\widehat{{\mathcal P}}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}\frac{\left(P(s'|s,a)-\widehat{P}(s'|s,a)\right)^2}{\widehat{P}(s'|s,a)}\le|{\mathcal A}|\rho\right\}
\end{align*}
By $s$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_s {\mathcal P}_s(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_s \widehat{{\mathcal P}}_s(\rho)$.
\end{example}
Thus, for any fixed $\pi\in\Pi$ and $V\in{\mathcal V}$, the explicit forms of ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$ are:
\begin{lem}
\label{lem: chi2-s}
Under the s-rectangular assumption and $\chi^2$ balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}_r^\pi V(s)&=R^\pi(s)+\gamma\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}\left(-\sqrt{(\rho+1)|{\mathcal A}|}\sqrt{\sum_{s',a}P^*(s'|s,a)(\eta_a-\pi(a|s)V(s'))_+^2}+\sum_{a}\eta_a\right)\\
\widehat{{\mathcal T}}_r^\pi V(s)&=R^\pi(s)+\gamma\sup_{\eta\in{\mathbb R}^{|{\mathcal A}|}}\left(-\sqrt{(\rho+1)|{\mathcal A}|}\sqrt{\sum_{s',a}\widehat{P}(s'|s,a)(\eta_a-\pi(a|s)V(s'))_+^2}+\sum_{a}\eta_a\right)
\end{align*}
\end{lem}
Similar to the case of $L_1$ balls, ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$ depend on an $|{\mathcal A}|$-dimensional vector. By an approach similar to that for $L_1$ balls under the $s$-rectangular assumption, described in Appendix~\ref{apd: s-gen}, we can also obtain the following result.
\begin{thm}
\label{thm: chi2-s-fix}
In the setting of $\chi^2$ balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V - \widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{\gamma C^2(\rho)\sqrt{|{\mathcal A}|}}{(C(\rho)-1)(1-\gamma)\sqrt{n}}\left(2+\sqrt{2\log\frac{2}{\delta}+2|{\mathcal A}|\log(1+8\sqrt{n})}\right)
\end{align*}
where $C(\rho)=\sqrt{\rho+1}$.
\end{thm}
Finally, by a union bound over ${\mathcal V}$ and Lemma~\ref{lem: eps-pi}, we obtain the bound on the performance gap as Theorem~\ref{thm: chi2-s-union} states, from which we can derive an overall sample complexity bound $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^3(1+\rho)^2}{\varepsilon^2(\sqrt{1+\rho}-1)^2(1-\gamma)^4}\right)$, which is $|{\mathcal A}|^2$ times larger than in the case of the $(s,a)$-rectangular assumption.
\begin{thm}
\label{thm: chi2-s-union}
Under the $s$-rectangular assumption and $\chi^2$ balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V^{\widehat{\pi}}_r(\mu)\le\frac{2\gamma C^2(\rho)\sqrt{|{\mathcal S}||{\mathcal A}|^2}}{(C(\rho)-1)(1-\gamma)^2\sqrt{n}}\left(6+\sqrt{2\log\frac{2|{\mathcal S}|(1+8\sqrt{n})(1+4\sqrt{n}(C(\rho)-1))^2}{\delta}}\right)
\end{align*}
where $C(\rho)=\sqrt{\rho+1}$.
\end{thm}
\subsection{Case 3: KL balls}
In this part, we set $f(t)=t\log t$ in Example~\ref{eg: f-set-s}. The uncertainty set is formulated as:
\begin{example}[KL balls]
For each $s\in{\mathcal S}$, the uncertainty set is defined as:
\begin{align*}
&{\mathcal P}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll P^*(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}P(s'|s,a)\log\frac{P(s'|s,a)}{P^*(s'|s,a)}\le|{\mathcal A}|\rho\right\}\\
&\widehat{{\mathcal P}}_{s}(\rho)=\left\{P(\cdot|s,a)\in\Delta({\mathcal S}),\hspace{4pt}P(\cdot|s,a)\ll \widehat{P}(\cdot|s,a)\Bigg{|}\sum_{s'\in{\mathcal S}, a\in{\mathcal A}}P(s'|s,a)\log\frac{P(s'|s,a)}{\widehat{P}(s'|s,a)}\le|{\mathcal A}|\rho\right\}
\end{align*}
By $s$-rectangular set assumption, we define ${\mathcal P}=\bigotimes_s {\mathcal P}_s(\rho)$ and $\widehat{{\mathcal P}}=\bigotimes_s \widehat{{\mathcal P}}_s(\rho)$.
\end{example}
Thus, for any fixed $\pi\in\Pi$ and $V\in{\mathcal V}$, the explicit forms of ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$ are:
\begin{lem}
\label{lem: kl-s}
Under the s-rectangular assumption and KL balls uncertainty set, for each $s\in{\mathcal S}$, we have:
\begin{align*}
{\mathcal T}^\pi_r V (s)&=R^\pi(s) + \gamma\sup_{\lambda\ge0}\left(-\lambda|{\mathcal A}|\rho-\lambda\sum_{a}\log\sum_{s'}P^*(s'|s,a)\exp\left(-\frac{\pi(a|s)V(s')}{\lambda}\right)\right)\\
\widehat{{\mathcal T}}^\pi_r V (s)&=R^\pi(s) + \gamma\sup_{\lambda\ge0}\left(-\lambda|{\mathcal A}|\rho-\lambda\sum_{a}\log\sum_{s'}\widehat{P}(s'|s,a)\exp\left(-\frac{\pi(a|s)V(s')}{\lambda}\right)\right)
\end{align*}
\end{lem}
In this case, ${\mathcal T}_r^\pi$ and $\widehat{{\mathcal T}}_r^\pi$ only depend on a scalar $\lambda$, similar to the $(s,a)$-rectangular case. To simplify, we also denote $g(\lambda,P)=\lambda|{\mathcal A}|\rho+\lambda\sum_a\log\sum_{s}P_a(s)\exp(-\pi(a)V(s)/\lambda)$. As in the $(s,a)$-rectangular case, when $\lambda\ge\frac{1}{|{\mathcal A}|\rho(1-\gamma)}$, $g(\lambda,P)\ge0$ and is monotonically increasing w.r.t.\ $\lambda$. Thus, the optimal value $\lambda^*$ lies in $[0,\frac{1}{|{\mathcal A}|\rho(1-\gamma)}]$. Then, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, we have:
\begin{thm}
\label{thm: kl-s-fix}
In the setting of KL balls, for fixed $V\in{\mathcal V}$ and $\pi\in\Pi$, the following inequality holds with probability $1-\delta$:
\begin{align*}
\left\|{\mathcal T}_r^\pi V-\widehat{{\mathcal T}}_r^\pi V\right\|_\infty\le\frac{2\gamma}{\rho\underline{p}(1-\gamma)\sqrt{n}}\sqrt{\log\frac{2|{\mathcal S}||{\mathcal A}|}{\delta}}
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
By a union bound over ${\mathcal V}$ and Lemma~\ref{lem: eps-pi}, to achieve the $\varepsilon$-optimal bound in Theorem~\ref{thm: kl-s-union}, the sample complexity is about $\widetilde{O}\left(\frac{|{\mathcal S}|^2|{\mathcal A}|^2}{\varepsilon^2\rho^2\underline{p}^2(1-\gamma)^4}\right)$, which is $|{\mathcal A}|$ times larger than in the case of the $(s,a)$-rectangular set.
\begin{thm}
\label{thm: kl-s-union}
Under the $s$-rectangular assumption and KL balls uncertainty set, we denote $\widehat{\pi}=\mathop{\rm argmax}_\pi \widehat{V}_r^\pi(\mu)$. The following inequality holds with probability $1-\delta$:
\begin{align*}
\max_\pi V_r^\pi(\mu)-V^{\widehat{\pi}}_r(\mu)\le\frac{4\gamma\sqrt{|{\mathcal S}||{\mathcal A}|}}{\rho\underline{p}(1-\gamma)^2\sqrt{n}}\left(2+\sqrt{2\log\frac{2|{\mathcal S}|^2|{\mathcal A}|\left(1+4\rho\underline{p}\sqrt{n}\right)}{\delta}}\right)
\end{align*}
where $\underline{p}=\min_{P^*(s'|s,a)>0}P^*(s'|s,a)$.
\end{thm}
\begin{rem}
It may seem confusing that ${\mathcal T}_r^\pi V$ depends on different kinds of parameters in the case of KL balls versus the other cases. That is because the dual form of the evaluation problem $V_r^\pi$ involves dual variables $\eta\in{\mathbb R}^{|{\mathcal A}|}$ and $\lambda\in{\mathbb R}$. It turns out that the dual problem in the case of KL balls is easier to solve, as the form in Lemma~\ref{lem: kl-s} states.
\end{rem}
\section{Introduction}
In a hotter and more drought-prone world, wildfire risk will continue to increase. The situation is particularly dire for the Northern Californian investor-owned utility: "Cal Fire [determined] in May that PG\&E lines were the cause of several fires that killed at least 15 people and razed over 5,000 homes in the fall of 2017, including 12 instances in which it found the utility in violation of safety or maintenance procedures"~\cite{John-2019-Cal}. In 2018, the Camp Fire killed 85 people, destroyed the town of Paradise, and degraded air quality throughout Northern California for nearly a week. PG\&E is liable for up to \$17B from 2018's wildfires. Finding the exact locations where vegetation and power lines intersect can help avoid these fires, leading to significant savings in money, carbon emissions, and lives.
Electric utilities hoping to strategically reduce wildfire risk in their service territories immediately face three hurdles: First, they often have tens of thousands of linear transmission and distribution (T\&D) miles. PG\&E is responsible for roughly 106,000 miles of T\&D, much of which requires annual inspection~\cite{John-2019-Cal}. Clearly, any vegetation management solution needs to scale. Second, utilities lack comprehensive maps of their assets. They installed many of their electricity distribution systems over 60 years ago, and there are no records or employees left from that era. Third, there is no standardized and rigorous way to prioritize maintenance. "Utilities typically rely on vegetation management contractors to decide how vegetation should be maintained"~\cite{malashenko2018powergrid}. Inspection is currently done in two stages: a trained arborist identifies infringing vegetation and then subcontracts a tree trimming service to actually remove the vegetation. The only form of prioritization comes from customers reporting outages and fires.
The scale of the problem makes a software solution highly appealing. Governor Newsom’s Strike Force released a report specifically asking for "Artificial Intelligence-based visual recognition technology to analyze satellite imagery to determine fuel conditions and vegetation risks in proximity to utility lines," and "machine learning and automation inspections for increased safety assurance and regulatory compliance"~\cite{newsom2018wildfires}. The public, private, and education sectors agree that machine learning solutions to vegetation management appear promising.
Google Street View images provide an even more accessible dataset than satellite imagery, and Street View images are available for nearly the entirety of PG\&E's service territory. Feature transformations such as the Histogram of Oriented Gradients (HOG) and the Hough transformation have proven successful in identifying power lines and similar structures~\cite{li2008towards, fernandes2008real, Gubbi-2018-New}. Thus, in this work, a convolutional neural network (CNN) is trained using Google Street View images and these feature transformations as inputs, and the output is a classification decision determining whether or not the image contains utility assets and whether they are considered at risk. An extremely useful byproduct of this approach is a geotagged list of utility assets. Last, by choosing classes with care, the CNN can output an effective prioritization system.
This paper is structured as follows. Section~\ref{sec:2} describes recent computer vision approaches that have been applied to line detection. Section~\ref{sec:3} describes the methods used for training the CNN in this work. The dataset and feature transformations are described in Section~\ref{sec:4}, and Section~\ref{sec:5} shows the results of various networks trained for the classification task.
\section{Related Work} \label{sec:2}
Recently, computer vision schemes have been employed to classify images into different land cover classes such as building, grassland, dense vegetation, water body, barren land, road, and shadow~\cite{sameen2007class}. However, the task of wire detection remains an open challenge; wires are generally only a few pixels wide, can appear at any orientation and location, may appear non-linear due to low tension or poor picture stitching, and are hard to distinguish from other similar looking lines and edges.
Utility companies currently spend an enormous amount of time and money performing visual inspections on their transmission and distribution networks, using helicopters, unmanned aerial vehicles, and climbing robots. In addition to the expense and slow implementation, some of these solutions can be dangerous. Luckily, CNNs are making great strides in power line detection, using drone footage, Google Street View, and satellite images as data sources~\cite{Madaan-2018-Wire, liu2016insulator, Zhang-2018-Using, martinez2017PoLIS, siddiqui2018robust}. Detection of transmission lines relies on precisely locating objects of interest and their locations with respect to one another, making it an ideal problem for computer vision. Madaan et al. and Siddiqui et al. focus on insulator detection using bounding boxes, with the latter attempting to detect defects~\cite{Madaan-2018-Wire, siddiqui2018robust}. Zhang et al. show Google Street View can accurately perform object detection and geotag utility poles within a 10 meter radius~\cite{Zhang-2018-Using}.
To improve CNN performance in power line detection, many researchers are employing feature engineering \cite{Nguyen-2018-Automatic}. Previous approaches have included these features without CNNs, but they typically perform less successfully and robustly than CNNs. For example, Song et al. used a matched derivative variant called the first-order derivative of Gaussian; it works well when there are no other edges in the image, but cannot distinguish a street edge from a power line~\cite{song2014power}. Two feature transforms that have been applied for power line detection are the Histogram of Oriented Gradients (HOG) and the Hough transform~\cite{li2008towards, fernandes2008real, Gubbi-2018-New}. The Hough transform extracts Canny edges, performs a polar transform, and then finds line segments in the polar space. In the HOG transform, the distribution of directional gradients over local patches of the image is used collectively as a feature. Gubbi et al. employed this approach to detect power lines from satellite images~\cite{Gubbi-2018-New}.
\section{Methods} \label{sec:3}
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{Figure/network.jpg}
\caption{Chart detailing the process used for training a utility line classifier from Google Street View images. The HOG and Hough transforms are first applied and then stacked together with the original image as the input into a pretrained CNN.}
\label{fig:net}
\end{figure*}
This work uses a feature-enhanced CNN to classify Google Street View images into one of three classes:
\begin{enumerate}
\item No utility systems
\item Utility systems without overgrown vegetation
\item Utility systems with overgrown vegetation
\end{enumerate}
To collect and label our data, we extract images from Google Street View and manually create labels for the ground truth samples. More detail is given in Section~\ref{sec:4}. We take advantage of pretrained models available in PyTorch, using transfer learning by freezing the early layers and resizing and tuning the fully-connected layers on our dataset. The three pretrained PyTorch models examined in this work are \textit{AlexNet}, \textit{ResNet18}, and \textit{VGG11}.
While the HOG and Hough transforms, which are described in detail in Section~\ref{sec:4}, are powerful feature extractors, they do not output any information based on pixel color, which is important when identifying power lines against a blue sky or green vegetation, or when distinguishing them from different colored lines on the street. Therefore, this work leverages both the image and the transformed features by stacking them together as the CNN input, making the input size $5\times 224\times 224$ because each transform has one output channel. The first convolutional layer thus also has to be retrained to learn this input size, but the weights for the first three of the five channels can be initialized from pretrained models. Figure~\ref{fig:net} details this process.
The images collected from Google have a default size of $3\times 640 \times 640$, so they are first resized to $3\times 224\times 224$ to be compatible with the pretrained networks. The $1,320$ images are divided into an $80\%$ training set, $10\%$ development set, and $10\%$ test set. The HOG and Hough transforms are then applied to and stacked on each input image to create the $5\times 224\times 224$ inputs.
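A minimal PyTorch sketch of this channel-stacking step is shown below (our own illustration; initializing the two extra channels with the mean of the pretrained RGB filters is an assumption, not a detail from the original training code):
\begin{verbatim}
import torch
import torch.nn as nn
from torchvision import models

def make_five_channel_resnet18(num_classes=3):
    model = models.resnet18(pretrained=True)
    old = model.conv1              # Conv2d(3, 64, 7, stride=2, padding=3)
    new = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
    with torch.no_grad():
        new.weight[:, :3] = old.weight               # reuse RGB filters
        new.weight[:, 3:] = old.weight.mean(1, keepdim=True)  # HOG/Hough
    model.conv1 = new
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
\end{verbatim}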
For each network, the loss, $L$, is calculated during training by using the cross entropy loss function:
\begin{equation}\label{eqn:loss}
L(y,c) = w_c \left[ -y_c + \log{\sum_{j=1}^3 \left(\exp{y_j} \right)} \right]
\end{equation}
where $y$ is the score vector of size three (for the three classes) generated from the forward pass of the model, $c$ is the correct class identifier (0, 1, or 2), and $w_c$ is the weight for class $c$. The weights $w_c$ can help account for class imbalance in the dataset, and can also prioritize accuracy on classes of interest by penalizing the network more for misclassifying them. In this study, the weights were set according to the number of samples in each class:
\begin{figure*}[htp]
\centering
\includegraphics[width=0.8\linewidth]{Figure/class_examples.png}
\caption{Labeled images from each class.}
\label{fig:class_examples.png}
\end{figure*}
\begin{equation}
w_c = N_{c,max}/N_c
\end{equation}
where $N_c$ is the number of samples in class $c$, and $N_{c,max}$ is the number of samples in the largest class. The loss function is minimized using ADAM optimization~\cite{Kingma-2015-Adam}, where the training data is divided into batches, and the derivative of the loss function with respect to each trainable weight, $W_k$, and bias, $b_k$, is computed with backpropagation:
\begin{equation}\label{eqn:adam1}
\nabla W_k = \frac{\partial L}{\partial W_{k}}, \quad
\nabla b_k = \frac{\partial L}{\partial b_{k}}.
\end{equation}
Next, a decaying average of the first and second gradient moments, $m_t$ and $v_t$, is computed for each weight and bias:
\begin{subequations}
\begin{gather}
m_{t,W_k} = \beta_1 m_{t-1,W_k} + (1-\beta_1)\nabla W_k, \\
m_{t,b_k} = \beta_1 m_{t-1,b_k} + (1-\beta_1)\nabla b_k, \\
v_{t,W_k} = \beta_2 v_{t-1,W_k} + (1-\beta_2)(\nabla W_k)^2, \\
v_{t,b_k} = \beta_2 v_{t-1,b_k} + (1-\beta_2)(\nabla b_k)^2,
\end{gather}
\end{subequations}
where $t$ is the batch number, and standard values of $\beta_1=0.9$ and $\beta_2=0.999$ are used. To overcome the fact that $m_t$ and $v_t$ are biased toward zero for early time steps, the moments are corrected by:
\begin{subequations}
\begin{gather}
\hat{m}_{t,W_k} = \frac{m_{t,W_k}}{1-\beta_1^t}, \quad \hat{m}_{t,b_k} = \frac{m_{t,b_k}}{1-\beta_1^t}, \\
\hat{v}_{t,W_k} = \frac{v_{t,W_k}}{1-\beta_2^t}, \quad \hat{v}_{t,b_k} = \frac{v_{t,b_k}}{1-\beta_2^t}.
\end{gather}
\end{subequations}
Finally, the optimizer updates the weights and biases:
\begin{subequations}\label{eqn:adam2}
\begin{gather}
W_{k} = W_k -\alpha_\tau\frac{\hat{m}_{t,W_k}}{\sqrt{\hat{v}_{t,W_k}}+\epsilon}, \\
b_{k} = b_k -\alpha_\tau\frac{\hat{m}_{t,b_k}}{\sqrt{\hat{v}_{t,b_k}}+\epsilon},
\end{gather}
\end{subequations}
where $\alpha_\tau$ is a scheduled learning rate for epoch $\tau$ and $\epsilon=10^{-8}$ is used for numerical stability. In this work, a delayed cosine learning rate schedule was used:
\begin{subequations}\label{eqn:lr}
\begin{align}
\alpha_\tau &= \alpha_0, \qquad \tau \leq \tau_{start} \\
\alpha_\tau &= \frac{1}{2} \alpha_0 \left[1 + \cos{\left(\frac{(\tau-\tau_{start})\pi}{\tau_{max}}\right)} \right], \quad \tau > \tau_{start}
\end{align}
\end{subequations}
where $\alpha_0$ is the initial learning rate, $\tau_{max}$ is the number of training epochs, and $\tau_{start}$ is the number of epochs with constant learning rate.
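The schedule of Equation~\ref{eqn:lr} translates directly into a few lines of Python (a sketch; it can be attached to a PyTorch optimizer via \texttt{torch.optim.lr\_scheduler.LambdaLR} by passing $\alpha_0=1$ so that it acts as a multiplier on the base learning rate):
\begin{verbatim}
import math

def delayed_cosine_lr(epoch, alpha0, tau_start, tau_max):
    # Constant learning rate for tau_start epochs, then cosine decay.
    if epoch <= tau_start:
        return alpha0
    return 0.5 * alpha0 * (1.0 + math.cos((epoch - tau_start) * math.pi
                                          / tau_max))
\end{verbatim}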
The batch size, initial learning rate, regularization, max epochs, and start epochs are all hyperparameters that are tuned on a separate development set. For each set of hyperparameters, the epoch that performed best on the development set, not necessarily the model at the end of training, was evaluated on the test set.
\section{Dataset and Features} \label{sec:4}
\subsection{Data acquisition}
A labeled dataset of utility assets is not readily available online. Instead, the dataset is scraped from Google Street View. Almost every utility asset is built near a road for accessibility, and Google Street View coverage in the United States, especially northern California, is very high. To gather a significant number of images with limited labeling effort, streets that fall mostly into one of the three classification categories listed in Section~\ref{sec:3} are found. Only streets in northern California are chosen in the study, where the use case for this type of utility asset identification and maintenance prioritization is currently strongest. This also ensures relatively similar image features, which is useful for a relatively small dataset.
Each street is manually geotagged with a starting and ending latitude and longitude. A script is then developed to calculate the street trajectory from the starting to the ending coordinates, travel along the street, and download images at regular intervals. The majority of images are oriented looking forward (heading $=0$, pitch $=0$), at maximum resolution ($3\times 640\times 640$), and with a $90^\circ$ field of view. As one can imagine, not all images are perfectly classified based on the broader, ``average'' street-level classification. Not all images on ``utility'' streets had utilities, and even more images on the ``utility'' and ``vegetation overgrowth'' streets were misclassified. Several were completely anomalous, like a Street View image that happened to be inside a restaurant, or a selfie of a family. To improve accuracy, after downloading all images, each is examined and reclassified by hand if necessary.
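The download step can be sketched with the Street View Static API as follows (parameter names follow the public API; the interpolation of sampling points along each street and the API key handling are omitted):
\begin{verbatim}
import requests

URL = "https://maps.googleapis.com/maps/api/streetview"

def download_frame(lat, lng, api_key, out_path,
                   heading=0, pitch=0, fov=90):
    # One forward-facing frame at the maximum free resolution.
    params = {"size": "640x640", "location": f"{lat},{lng}",
              "heading": heading, "pitch": pitch, "fov": fov,
              "key": api_key}
    r = requests.get(URL, params=params, timeout=30)
    r.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(r.content)
\end{verbatim}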
Examples of images in each of the three classes are shown in Figure~\ref{fig:class_examples.png}. In the following subsections, the two transformations used to detect the power lines are described. The resulting images from these two feature descriptors are stacked with the three RGB channels of the original images and then fed to our training network as the input tensor.
It was very difficult to find accurate images with overgrown vegetation--as it should be if the utility is performing quality maintenance. Given this limitation, this class of data was augmented by directly searching for pictures of utility equipment involved in fires, utility equipment undergoing vegetation management, and utility equipment that had fallen over and failed. Some of these images present an extremely challenging classification task for the CNN, because the perspective and zoom level are completely different.
Ultimately, $1,320$ images were scraped and classified. Of these, $655$ had no utility equipment, $572$ had utility equipment with no overgrown vegetation, and $93$ had utility equipment with overgrown vegetation. The data was augmented by horizontally flipping each image to double the dataset size. Vertically flipping images would not be appropriate, as a key identifying feature of power lines is that they are above the ground. Random cropping to augment data would also be problematic for this dataset, as the crop may miss the power lines, which are often only a small subset of the image, so no additional augmentation beyond horizontal flipping was performed.
\subsection{Hough Transformation}
\begin{figure}[htp]
\centering
\subfloat{%
\includegraphics[clip,width=0.54\columnwidth]{Figure/test_image.png}%
}
\subfloat{%
\includegraphics[clip,width=0.54\columnwidth]{Figure/test_hog.png}%
}
\subfloat{%
\includegraphics[clip,width=0.51\columnwidth]{Figure/test_hough.png}%
}
\caption{(Top to bottom) Input image, and the HOG and Hough feature descriptors of the image}
\label{fig:transforms}
\end{figure}
The Hough transformation (HT), shown at the bottom of Figure~\ref{fig:transforms}, is a method to explore a parameter space and detect straight lines. The classical Hough transform is widely used for the detection of regular curves such as lines, circles, and ellipses~\cite{fernandes2008real}. This technique is particularly useful where a global description of features is needed and the number of solution classes does not need to be known a priori. The motivating idea behind this transformation technique for line detection is that each input measurement contributes to a globally consistent solution (e.g., the physical line which gave rise to that image point). Lines can be parameterized as $y= mx + c$, where $m$ is the slope and $c$ is the y-intercept; however, when the line is vertical, the slope goes to infinity, so a polar coordinate system is preferred. To that end, a segment perpendicular to the line and leading to the origin is constructed. The line is represented by the length of this segment, $r$, and its angle with the x-axis, $\theta$. Therefore, it can be described by the parametric equation,
\begin{align}
r = x \cos \theta + y \sin \theta,
\end{align}
where $r$ and $\theta$ are the unknown variables that need to be determined. In other words, the coordinates of points of edge segments $(x_i,y_i)$ in the cartesian image space map to curves in the polar Hough transform space.
The Hough transformation constructs a histogram array representing the parameter space (\textit{i.e.}, a $M\times N$ matrix, for $M$ different values of the radius and $N$ different values of $\theta$). For each parameter combination, $r$ and $\theta$, the number of non-zero pixels in the input image that would fall close to the corresponding line is found. Each non-zero pixel can be interpreted as voting for the potential line candidates. The local maxima in the resulting histogram indicate the parameters of the most probable lines.
Another approach for line detection is the progressive probabilistic Hough transform. The randomized HT overcomes shortcomings of the standard HT method such as its high computational burden, low detection accuracy, and the possibility of missing objects~\cite{xu1990new}. The randomized HT is based on random sampling in place of pixel scanning, converging mapping instead of diverging mapping, and dynamic storage in place of an accumulation array. Strictly speaking, it is based on the assumption that using a random subset of voting points gives a good approximation of the full voting result, and that lines can be extracted during the voting process by walking along connected components. This returns the beginning and end of each line segment, which is useful.
The probabilistic Hough transform function has three parameters: a general threshold that is applied to the Hough accumulator, a minimum line length, and a maximum line gap that influences line merging.
Before applying the transformation to the street view images (using OpenCV in Python), we employ Canny edge detection to find the edges of the images. The Canny edge detector produces a set of boundary descriptors for further image processing. This function requires grayscale images as input; therefore, we convert the input images to grayscale before feeding them to the Canny edge detector.
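The full edge-detection and line-extraction pipeline then reduces to a few OpenCV calls (a sketch with illustrative thresholds, not necessarily the tuned values used here):
\begin{verbatim}
import cv2
import numpy as np

def hough_channel(bgr_image):
    # Grayscale -> Canny edges -> probabilistic Hough transform,
    # rendered back into one channel to stack with the RGB input.
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=5)
    channel = np.zeros_like(gray)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(channel, (x1, y1), (x2, y2), 255, 1)
    return channel
\end{verbatim}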
\subsection{Histogram of Oriented Gradients (HOG)}
In this work, the HOG is used as a feature descriptor along with the Hough transformation. The HOG transform, shown in the middle of Figure~\ref{fig:transforms}, is widely used to detect objects in computer vision and image processing. This scheme was introduced by Dalal and Triggs~\cite{Dalal-2005-Histograms}. The idea behind the HOG descriptor technique is to extract the most important features by counting occurrences of gradient orientations in portions of an image, referred to as the detection window or region of interest (ROI). Roughly speaking, it is a representation of an image or an image patch that simplifies the image by extracting useful information, \textit{i.e.}, the gradient orientations, and throwing away extraneous information. The HOG descriptor computes oriented gradient histograms by globally normalizing the image, calculating the gradient of pixel values in the $x$- and $y$-directions, and finally computing gradient histograms and normalizing across the ROI.
The following steps describe the HOG descriptor algorithm shown schematically in Figure~\ref{fig:hog_alg}. First, the image is divided into small sub-images called cells or ROIs. For each cell, a histogram of gradient directions or edge orientations is computed. Cells can be rectangular (R-HOG) or circular (C-HOG). Cells are then grouped into blocks; usually blocks overlap each other, so that the same cell may appear in several blocks. For each pixel within a cell, vertical and horizontal gradients are obtained by employing the 1-D Sobel horizontal and vertical operators as follows:
\begin{align}
G_x = Y(y, x+1) - Y(y, x-1)\nonumber\\
G_y = Y(y+1, x) - Y(y-1, x)
\end{align}
where $Y(y,x)$ is the pixel intensity at coordinates $x$ and $y$. The magnitude and phase of the gradient are determined as:
\begin{align}
G(y,x) = \sqrt{G_x(y,x)^2 + G_y(y,x)^2}\nonumber\\
\theta(y,x) = \arctan\left(\dfrac{G_y(y,x)}{G_x(y,x)}\right)
\end{align}
The next step is discretizing each cell into angular bins according to the gradient orientation, with a certain number of bins chosen for the angle. Each pixel in a cell contributes a weighted gradient to its corresponding angular bin. Subsequently, groups of adjacent cells are considered as spatial regions called blocks. The grouping of cells into a block is the basis for the grouping and normalization of histograms. The normalized group of histograms represents the block histogram, and the set of these block histograms represents the descriptor. The detector window descriptor is used as information for object recognition. Since different images may have different contrast, contrast normalization is very useful. Normalization is done on the histogram vector $v$ within a block, using either the $L_1$ or $L_2$ norm.
\begin{figure}[htp]
\centering
\includegraphics[clip,width=.65\columnwidth]{Figure/hog_algorithmV2.png}
\caption{The algorithm for Histogram of Oriented Gradients}
\label{fig:hog_alg}
\end{figure}
Oriented image gradients are useful because the magnitude of gradients is large around lines, edges, and corners (regions of abrupt intensity changes), which helps improve classification and can be very useful for power line detection. The HOG transform effectively increases the salience of edge orientation while maintaining illumination invariance. This feature simplifies the image by extracting useful gradient distribution information and discarding other texture and color information. Clearly, as seen in Figure~\ref{fig:transforms}, the outputs of both the HOG and HT are not useful for human image interpretation, but they are very useful for computer object detection. Support Vector Machines (SVMs) and CNNs trained on HOG outputs in addition to the typical RGB channels of a colored image can outperform models trained on vanilla image inputs.
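In practice, the HOG channel can be produced with a single library call (a sketch; the cell and block sizes shown are illustrative, not necessarily the values used for Figure~\ref{fig:transforms}):
\begin{verbatim}
from skimage.feature import hog

def hog_channel(gray_image):
    # visualize=True returns the descriptor and a rendered HOG image;
    # only the image is kept, to be stacked as an input channel.
    _, hog_image = hog(gray_image, orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2),
                       visualize=True)
    return hog_image
\end{verbatim}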
\section{Results and Discussion} \label{sec:5}
The three pretrained models examined in this work are \textit{AlexNet}, \textit{ResNet18}, and \textit{VGG11}. The loss function used for each model is the cross entropy loss given by Equation~\ref{eqn:loss}, and the ADAM optimizer described in Equations~\ref{eqn:adam1}~-~\ref{eqn:adam2} is used. The learning rate, regularization, batch size, and the max epochs and start epochs from Equation~\ref{eqn:lr} are hyperparameters tuned using the development set. The final hyperparameters chosen and the best test set accuracy for each model are shown in Table~\ref{tab:res1}. For all networks, only the fully-connected layers of the final classifier and the very first convolutional layer (to learn weights for a 5-channel input) are trained. All models are trained using the exact same train, development, and test sets.
\begin{table*}[h]
\centering
\begin{tabular}{|P{1.3cm}|P{1.6cm}|P{1.5cm}|P{1.5cm}|P{1.5cm}|P{2.1cm}|P{1.5cm}|P{1.5cm}|}
\hline
\textbf{Model} & \textbf{Batch size} & \textbf{Initial learning rate}, $\alpha_0$ & \textbf{Max epochs}, $\tau_{max}$ & \textbf{Start epochs}, $\tau_{start}$ & \textbf{Regularization} & \textbf{Test set accuracy} & \textbf{Time per epoch} (s)\\
\hline
AlexNet & 64 & 2e-5 & 70 & 30 & 1e-4 & 70.99\% & 5.1 \\
ResNet18 & 96 & 8e-4 & 150 & 50 & 5e-4 & 78.63\% & 12.0 \\
VGG11 & 32 & 1e-4 & 11 & 5 & 1e-3 & 80.15\% & 42.1 \\
\hline
\end{tabular}
\caption{Hyperparameters and best performance for each of the three pretrained models chosen}
\label{tab:res1}
\end{table*}
Interestingly, the development set accuracy is $79.00\%$, $84.33\%$, and $85.07\%$ for AlexNet, ResNet18, and VGG11, respectively, which is quite a bit better than the test set accuracy for each network. This is likely due to the relatively small size of each set even after data augmentation: 268 and 262 images in the development and test sets, respectively. This suggests that more data could further improve accuracy. Nevertheless, the early results are promising.
As shown in Table~\ref{tab:res1}, \textit{AlexNet} is the fastest model to run, but it has the poorest accuracy. On the other hand, \textit{VGG11} achieves the highest accuracy, but runs the slowest. Meanwhile, \textit{ResNet18} exhibits a good balance of both speed and accuracy, so this model is chosen to explore further.
In the best \textit{ResNet18} model above, the best training and development set accuracies are $97.63\%$ and $84.33\%$, respectively, suggesting that the model is overfitting to the training set. To counteract this, a new model is trained after inserting a dropout layer with $50\%$ dropout probability before the final fully-connected layer. Additionally, to further improve model flexibility, all layers are unfrozen and allowed to train, but with a learning rate $1,000$ times lower than the learning rate of the first and last layers. This allows fine tuning of the pretrained intermediate layers to fit the new utility line dataset.
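These two changes amount to a few lines in PyTorch (a sketch, assuming the 5-channel \textit{ResNet18} from Section~\ref{sec:3} and that ``first and last layers'' refers to \texttt{conv1} and the classifier):
\begin{verbatim}
import torch
import torch.nn as nn

def build_finetune_optimizer(model, lr=8e-4):
    # 50% dropout inserted before the final classifier.
    model.fc = nn.Sequential(nn.Dropout(p=0.5),
                             nn.Linear(model.fc.in_features, 3))
    fast = list(model.conv1.parameters()) + list(model.fc.parameters())
    fast_ids = {id(p) for p in fast}
    slow = [p for p in model.parameters() if id(p) not in fast_ids]
    # Intermediate layers train 1,000x slower than the first/last layers.
    return torch.optim.Adam([{"params": fast, "lr": lr},
                             {"params": slow, "lr": lr / 1000}])
\end{verbatim}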
Furthermore, it is noted that while the overall test accuracy was $78.63\%$ for the best \textit{ResNet18} model, the accuracy on the images with vegetation encroaching on the power lines is only $72.22\%$, which is significant because that is the most important class to categorize accurately so that utilities know where they need to perform maintenance. To combat this, the weight for this class, $w_c$ in Equation~\ref{eqn:loss}, is doubled to penalize the network more for misclassifying these images. Note that this is likely to make the overall accuracy worse, as more images will be flagged as potentially dangerous. This is acceptable, though, because flagged images can always be manually examined to determine whether maintenance needs to be performed. This is obviously far better than missing these images and not providing needed maintenance, which could increase fire risk.
Model ensembling is additionally employed at this step to achieve the best possible model. The model is trained for $200$ epochs and saved after each one. The $10$ best models, determined by development set performance, are then applied to the test set. To determine the final classification, each of the $10$ models votes on the class of each image, and each image is classified according to the class with the most votes. The loss and accuracy for the training and development sets during this process are shown in Figure~\ref{fig:training}.
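The voting step itself is straightforward (a sketch; tie-breaking is left to \texttt{torch.mode}):
\begin{verbatim}
import torch

@torch.no_grad()
def ensemble_predict(models, inputs):
    # Each of the 10 checkpoints votes; the most common class wins.
    votes = torch.stack([m(inputs).argmax(dim=1) for m in models])
    return votes.mode(dim=0).values
\end{verbatim}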
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{Figure/training.png}
\caption{Loss for each epoch during training (top) and classification accuracy for the training and development set (bottom)}
\label{fig:training}
\end{figure}
Figure~\ref{fig:training} shows that adding the dropout layer successfully overcame the overfitting observed when training the earlier models in Table~\ref{tab:res1}, as the development and training accuracies are very close together throughout training. The top row shows that the loss behaves as expected for each epoch -- the initial loss is roughly $\ln(3)\approx1.09$, it starts to flatten out before the decay schedule kicks in at epoch $100$, and it then continues to decrease slightly.
The $10$ best epochs from this model, chosen to be at least $5$ epochs apart to ensure substantial differences, are extracted and evaluated on the test set. The maximum test set accuracy among the $10$ models is $73.97\%$. Note that this is worse than that achieved in Table~\ref{tab:res1} because the weights were changed to predict more vegetation. The best of the $10$ models correctly classified $17$ of $18$ ($94.44\%$) of the vegetation images, while outputting $10$ ``false positives''. Taking the ensemble of the $10$ models and implementing a voting procedure yielded an overall accuracy of $75.95\%$. The confusion matrix for the ensemble model is shown below:
\begin{figure}[!h]
\centering
\includegraphics[width=0.9\columnwidth]{Figure/conf.png}
\end{figure}
This confusion matrix demonstrates that the ensemble model correctly predicts $16$ out of $18$ ($88.89\%$) of the images labeled as vegetation overgrowth. The two images below show the two that were misclassified, both as ``no utility.''
\begin{figure}[!h]
\centering
\includegraphics[width=0.95\columnwidth]{Figure/misclassify.png}
\caption{The two images labeled as ``vegetation overgrowth'' that the network misclassified as ``no utility''}
\label{fig:misclassify}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.90\columnwidth]{Figure/misclassify2.PNG}
\caption{The eight unique false positive images from the ensemble network. The horizontally flipped images of Images 1--5 were also false positives.}
\label{fig:misclassify2}
\end{figure}
The left image has power lines blended into the tree in the background; they are very difficult to detect, so it is understandable that the network labeled this image as ``no utility.'' The image on the right actually does not contain any power lines, so it was mislabeled during the manual labeling process, and the network classified it correctly, pointing out our human error. Interestingly, the network classified the horizontally flipped version of each of these images as ``vegetation.''
The confusion matrix also shows that there are $13$ false positives, where the network predicted vegetation overgrowth for images labeled without it. This number is substantial, but it is expected given how the weights in the loss function were defined, because false negatives are much more dangerous than false positives. Because the test set was augmented by horizontally flipping images, only $8$ of these $13$ are unique images, as $5$ sets of mirrored images appeared in this group. The images in this group, as seen in Figure~\ref{fig:misclassify2}, are also understandable, as many of them contain substantial vegetation, some very close to power lines.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Figure/salience.PNG}
\caption{Salience maps for two images in the vegetation class. Brighter red indicates a stronger influence on the classification.}
\label{fig:sal}
\end{figure}
Salience maps were created to visualize which pixels in various training images were causing major differences in the network's scoring. Brighter red corresponds to greater activation, and all images shown in Figure~\ref{fig:sal} are chosen from the correct category of ``vegetation overgrowth.'' The images on the left show three separate sets of overlapping power lines all activating \textit{VGG11}, demonstrating that the network is in fact learning to recognize power utility assets. It appears the network is also responding to the edge of the street and the dark-to-light transition in the street. The images on the right display strong activation over the entire area containing trees and power lines for \textit{ResNet18}.
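A minimal sketch of one standard way to compute such maps (vanilla gradients; we assume this variant, as the exact computation is not specified above):
\begin{verbatim}
import torch

def saliency_map(model, image, target_class):
    # Back-propagate the target class score to the input pixels and
    # take the channel-wise maximum of the absolute gradients.
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().max(dim=1).values.squeeze(0)
\end{verbatim}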
\section{Conclusion}
This work developed a framework for identifying utility systems that pose a fire risk using computer vision approaches. Images were scraped from Google Street View. HOG and Hough features were computed and concatenated to the images. A variety of CNN architectures were trained through transfer learning. \textit{VGG11} proved the most accurate on the test set (80.15\%), while \textit{ResNet18} maintained nearly the same level of accuracy with much lower computational cost.
Future work should include more detailed classes to refine the prioritization, and methods for estimating vegetation distance to utility assets quantitatively. Future work should also focus on obtaining accurate and consistent data--particularly for the class in which vegetation intersects with utility assets.
{\small
\bibliographystyle{ieeetr}
\section{Introduction}
\noindent Computing shortest paths in a continuous 2D environment has been of interest to researchers in various domains such as robotics, game development and computational geometry. \cite{vgraph} solved the problem of finding the shortest path among polygonal obstacles using visibility graphs. In visibility graphs, search is performed over a graph with vertices at convex obstacle corners, and it is guaranteed to return Euclidean optimal paths in two-dimensional spaces with polygonal obstacles. However, constructing and planning on visibility graphs can become slow as the number of obstacles increases. In the computational geometry literature, \cite{polygoncutting,besp2004} solved the Euclidean shortest path problem with the funnel algorithm, but the algorithm requires a triangulation of the environment, which is not an efficient way to represent environments that can vary dynamically. Consequently, the grid representation of the environment came to dominate the search-based planning literature due to its ease of use and its flexibility in representing varying costs and dynamically changing environments. The limitation of using grid-based graphs with search-based planning algorithms such as A* and its derivatives is that the angle of traversal is limited to increments of 45$^\circ$ (assuming an eight-connected grid). As a result, these planners can produce unrealistic-looking paths with unnecessary turns and high path cost. The need to alleviate the suboptimal costs and unnecessary turns of the generated paths has led to two research directions: post-processing of computed paths and any-angle planning algorithms.
In this work, we present a novel algorithm for post-processing grid-based paths which returns a path that is at least as short as the Euclidean-distance optimal path in the homotopy class of the original path. We utilize the fact that search over visibility graphs gives Euclidean optimal paths by building a `local' Homotopic Visibility Graph around the grid-search path and performing a search over it to obtain the post-processed path. Considering that visibility graph search can become time consuming due to its high branching factor, it is desirable to prune as many vertices and edges as possible. To build the visibility graph, our algorithm selects only those convex obstacle corners in the homotopy class of the grid path that could potentially contribute to a taut path. The process of finding the relevant obstacle corners and building the visibility graph is highly parallelizable and hence achieves a much better runtime than planning on full visibility graphs.
In the following sections of the paper, the related work is reviewed, the Homotopic Visibility Graph Planning (HVG) algorithm is detailed, and proofs are provided showing that the post-processed path is at least as short as the provably optimal path lying within the same homotopy class as the initial grid-based path. The results of experiments on grid-based maps are then presented, along with comparisons to other post-processing algorithms and any-angle planning algorithms. HVG scales significantly better than any-angle algorithms to large and dense maps and achieves better runtimes while providing a homotopic optimality guarantee.
\section{Related Work}
A two-step approach of path computation followed by post-processing of grid-based paths has been commonly used to reduce the cost of the generated paths. The most commonly used technique is greedy post-processing \cite{highqualitypaths}, where three consecutive nodes are taken and, if there is a line of sight between the first and the third node, the second node is removed from the path. This is repeated until no path shortcuts remain. The main pitfall of such greedy post-processing is that it only removes nodes from the path and does not allow for the addition of nodes, which is necessary for obtaining a provably shortest path. \cite{stringpull} introduced a post-processing algorithm which allows for the addition and removal of nodes in the path and always generates post-processed paths with no heading changes in freespace. However, to the best of our knowledge, there still exists no post-processing algorithm that provides guarantees on the optimality or bounded sub-optimality of the post-processed path in grid-based representations.
Any-angle algorithms \cite{uras2015empirical} such as Theta* \cite{thetastar}, ANYA \cite{anya} and Field D* \cite{fielddstar} resolve the problem of angle limitations in search over grid-based graphs by interleaving shortcutting with search and propagating information along grid edges without constraining the path to go along the grid edges. Among any-angle algorithms, the approaches followed to alleviate unnecessary turns differ from one algorithm to another. The Theta* algorithm interleaves the path shortcutting step with the expansion of nodes by checking for a line of sight between the node and its parent's predecessor. ANYA, in contrast, performs the search over row-wise intervals on the grid-based graph to restrict the intermediate nodes of the path to obstacle corners. Although any-angle planning algorithms give shorter paths than A*, they are typically slower and do not provide optimality guarantees, with the exception of ANYA \cite{anya}.
The problem of post-processing a coarse path to obtain Euclidean optimal paths in the same homotopy class is well studied in the computational geometry literature. \cite{polygoncutting, besp2004} introduced the funnel algorithm, which gives the shortest path in the homotopy class with respect to the Euclidean or link metric. The pitfall of this approach is the requirement of a triangulation of the map, which is a non-trivial endeavor.
The motivation for this work comes from the need to have a post-processing algorithm with provable guarantees in the grid-based representation of 2D environments as it is widely used in multiple domains. Existing post-processing algorithms do not provide theoretical bounds and any-angle algorithms do not scale well to large maps and maps with high obstacle density. The proposed Homotopic Visibility Graph (HVG) algorithm addresses the mentioned gaps in research by providing a Euclidean optimality guarantee in the homotopy class and scales well to large and dense maps due to its ability to be parallelized.
\section{Terminology}
\noindent A grid representation of a 2D environment is a discretization of the space into equally sized cells, each of which is either an obstacle or free space. A graph $G = [S,E]$ is constructed with the corners of the cells as the graph vertices $S$ and the edges of the cells as the edges of the graph $E$ (either 4-connected or 8-connected). Let $S$ be the set of all vertices in the graph and $s,g \in S$, where $s=[s_x,s_y]$ and $g=[g_x,g_y]$ denote the start and goal vertex of the path. The set of convex obstacle corners in the grid representation of a planar map is denoted by $V \subseteq S$. From Lozano-P\'{e}rez and Wesley (1979), if a vertex $v \in P_{opt}$, where $P_{opt}$ is the Euclidean optimal path, then $v \in \{s, g\} \cup V$.
\noindent \textbf{Grid-based Path: }A grid-based path on graph $G$ is defined as $P=\{s=p_1, p_2, ... p_n=g\}$ if $s \in S$ is the start vertex and $g \in S$ is the goal vertex and $p_i \in S$ with consecutive vertices in the path connected by edge $e_i \in E$.
\noindent \textbf{Line of Sight (LOS): } The graph vertices $p,q \in S$ are said to have line of sight if the line joining the vertices in the Cartesian space does not intersect any obstacle.
\noindent \textbf{Visibility Graph: }A graph $VG = [V, E_{vg}]$ is defined as a visibility graph if $V \subseteq S$ is the set of all convex obstacle corners in $S$ and $E_{vg}$ is the set of the lines joining all pairs of vertices $v_i, v_j \in V$ which have line of sight.
\noindent \textbf{Homotopic Trajectories: }Informally, two trajectories $\tau_1$ and $\tau_2$ with the same start and end coordinates $s$ and $g$ are homotopic iff one can be continuously deformed into the other without intersecting an obstacle. Formally, let
\[ \tau_1, \tau_2 : [0,1] \rightarrow \mathbb{R}^2 \]
\[\tau_1(0) = \tau_2(0) = s, \tau_1(1) = \tau_2(1) = g\]
then, $\tau_1$ and $\tau_2$ are homotopic iff there exists a continuous map $\eta$ such that
\[\eta : [0,1]\times[0,1] \rightarrow \mathbb{R}^2\]
\[\eta(\alpha,0) = \tau_1(\alpha) \forall \alpha \in [0,1]\]
\[\eta(\beta,1) = \tau_2(\beta) \forall \beta \in [0,1]\]
\[\eta(0,\gamma) = s, \eta(1,\gamma) = g \forall \gamma \in [0,1]\]
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/homotopy.png}
\caption{The paths P$_1$ and P$_2$ are homotopic trajectories with canonical sequence AB'C, whereas P$_3$ lies in a different homotopy class with canonical sequence AB'C'}
\label{fig:homotopy}
\end{figure}
\noindent \textbf{A test for homotopic paths: }As shown in \cite{testhomotopy}, two paths in a plane belong to the same homotopy class if and only if they have the same reduced canonical sequence. To determine the canonical sequence of a path, we begin by choosing one representative point for each obstacle in the map and drawing an infinite vertical line through each of those points. Each of these lines is treated as two separate rays originating from the obstacle's representative point, and each ray is given a unique ID. The canonical sequence is then constructed by appending, in order, the IDs of the rays intersected by the path. The canonical sequence is further reduced to eliminate redundant crossings. Two paths are then said to be homotopic iff they have the same reduced canonical sequence (see Figure \ref{fig:homotopy}).
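For illustration, a minimal sketch of this test, assuming the path is a polyline of $(x,y)$ points, each obstacle is represented by a single point, and no crossing falls exactly on a representative point (all helper names are ours):
\begin{verbatim}
def canonical_sequence(path, reps):
    # path: list of (x, y) waypoints; reps: one representative point
    # per obstacle.  Ray "iU"/"iD" is the part of the vertical line
    # through reps[i] above/below the representative point.
    seq = []
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        if x1 == x2:
            continue  # segment parallel to the rays: no crossing
        hits = []
        for i, (ox, oy) in enumerate(reps):
            if min(x1, x2) < ox <= max(x1, x2):
                y = y1 + (ox - x1) / (x2 - x1) * (y2 - y1)
                hits.append((ox, f"{i}U" if y > oy else f"{i}D"))
        hits.sort(reverse=(x2 < x1))  # order along the segment
        seq += [ray for _, ray in hits]
    out = []                 # reduction: cancel a crossing that is
    for ray in seq:          # immediately undone
        if out and out[-1] == ray:
            out.pop()
        else:
            out.append(ray)
    return out

# Two paths are homotopic iff their reduced sequences are equal.
\end{verbatim}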
\noindent \textbf{Euclidean Optimal Path: }Performing an optimal search on the graph $G$ will lead to a path that is optimal with respect to the grid resolution. A path from $s \in S$ to $g \in S$ is said to be a Euclidean optimal path if it minimizes the Euclidean distance metric $d(p,q) = ||p-q||_2$.
\noindent \textbf{Taut Path: }A 2D path on a discrete grid is a taut path if it has no turns in freespace and all of its intermediate vertices are convex obstacle corners. Additionally, every pair of path segments meeting at an obstacle corner must subtend an angle of less than $180^{\circ}$ at that corner. Euclidean optimal paths are always taut paths with vertices at obstacle corners, as shown in the work on visibility graphs \cite{vgraph}. A taut exit (see Figure \ref{fig:taut}) is one which subtends an angle of less than 180$^\circ$ at an obstacle corner; a taut path is thus one in which each intermediate vertex is a taut exit, as illustrated in Figure \ref{fig:taut}.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/taut.png}
\caption{The figure illustrates taut and non-taut exits at an obstacle corner. The taut exit subtends an angle less than 180$^\circ$ whereas the non-taut exit subtends an angle greater than 180$^\circ$. The taut path cannot be shortened without intersecting an obstacle. The non-taut path can be shortened by removing the intermediate vertex $v$.}
\label{fig:taut}
\end{figure}
\section{HVG Algorithm}
\begin{figure*}[h!]
\centering
\subfigure[The initially computed grid-based path]{\includegraphics[width=0.48\textwidth]{images/0.jpg}}\quad
\subfigure[The blue circles indicate the vertices that were encountered when performing scans in the four directions]{\includegraphics[width=0.48\textwidth]{images/1.jpg}}
\subfigure[The orange circles indicate the vertices of the Homotopic Visibility Graph $HVG$ computed after completing the scanning]{\includegraphics[width=0.48\textwidth]{images/2.jpg}}\quad
\subfigure[The green path is the post-processed path $\hat{P}$ and the blue lines illustrate the constructed Homotopic Visibility Graph $HVG$]{\includegraphics[width=0.48\textwidth]{images/3.jpg}}
\caption{Illustration of the HVG post-processing algorithm where a) shows the initial grid-based path, which has unnecessary turns in freespace. b) shows the vertices encountered when performing scans originating from each vertex of the grid-based path in the up, down, left and right directions. c) illustrates the vertices that are common to the $V_v$ and $V_h$ sets, as described in line 24 of Algorithm 1. d) shows the constructed visibility graph and the post-processed path that is Euclidean-distance optimal in its homotopy class.}
\label{fig:vis}
\end{figure*}
The Homotopic Visibility Graph Planning (HVG) algorithm proposed in this paper is a post-processing algorithm that takes as input the grid-based path $P = (s=p_1,p_2,p_3, \ldots, p_n=g)$ and returns the post-processed path $\hat{P}$. The core idea behind the approach utilizes two properties of Euclidean distance-optimal paths in 2D environments: visibility graphs provide Euclidean shortest paths, and the shortest paths are always taut. If a complete visibility graph $VG$ can be constructed within the homotopy class of $P$, then the shortest path can be obtained by performing an optimal search in $VG$. However, $VG$ can be pruned to reduce the number of edges and vertices by exploiting the knowledge of the grid-based path $P$. In our algorithm, we construct $HVG \subseteq VG$ from $P$ as described in Algorithm 1. A visualization of the algorithm is shown in Figure \ref{fig:vis}.
\begin{algorithm}[h]
\SetAlgoLined
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Grid path $P = (s=p_1,p_2, ... p_n=g)$}
\Output{Post-processed path $\hat{P}$}
$V_h = \emptyset$, $V_v = \emptyset$, $V_{HVG} = \{s,g\}$\\
\For{node in P}{
\If{iscorner(node)}{
$V_{HVG}$.append(node)\;
}
\For{dir in \{up,down,left,right\}}{
obstacle\_hit = false\;
curr = node\;
\While{obstacle\_hit == false}
{
new = curr + dir\;
obstacle\_hit = CollisionDetect(curr, new)\;
curr = new\;
\If{iscorner(curr)}{
\uIf{dir == left or dir == right}{
$V_{h}$.append(curr)\;
}
\Else
{$V_{v}$.append(curr)\;}
break\;
}
}
}
}
$V_{HVG} = V_{HVG} \cup (V_h \cap V_v)$\;
$HVG$ = VisibilityGraph($V_{HVG}$)\;
$\hat{P}$ = Search($HVG$, $p_1$, $p_n$)\;
\textbf{return} $\hat{P}$
\caption{Homotopic Visibility Graphs (HVG)}
\end{algorithm}
\noindent \textbf{Finding vertices of $HVG$: }A key observation that aids in finding the vertices of $HVG$ is that $v \in V$ is a candidate vertex for the optimal path $\hat{P}$ in the homotopy class of $P$ \textit{if and only if} it provides a taut exit (see Figure \ref{fig:taut}). For every node $p \in P$, a line-of-sight scan originating from $p$ is performed in the four directions parallel to the axes of the grid. Two lists $V_h, V_v$ are maintained for storing the obstacle corner vertices encountered during horizontal and vertical scans, respectively. Each scan proceeds until an obstacle or an obstacle corner is encountered: if an obstacle is encountered, the scan in that direction is terminated; if an obstacle corner is encountered, it is added to the corresponding list and the scan is terminated. After performing scans for all the nodes $p \in P$, the vertex set is completed as $V_{HVG} = V_{HVG} \cup (V_h \cap V_v)$ (line 24 of Algorithm 1). By keeping only obstacle corner vertices that have both a horizontal and a vertical line-of-sight scan from the grid-search path, we ensure that only vertices that can potentially be part of a taut exit are included. Since the scans originating at a node $p_i \in P$ do not depend on the scans from any other node $p_j \in P$, the scans can be parallelized. The proof of optimality is provided in the section below.
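A minimal Python sketch of this scanning step (lines 2--23 of Algorithm 1) follows, assuming helper predicates \texttt{occupied} (true for obstacle cells and out-of-bounds positions) and \texttt{is\_corner} (true at convex obstacle corners); both helpers are ours:
\begin{verbatim}
def hvg_vertices(path, occupied, is_corner):
    DIRS = {(1, 0): 'h', (-1, 0): 'h', (0, 1): 'v', (0, -1): 'v'}
    V_h, V_v = set(), set()
    V_hvg = {path[0], path[-1]}
    for node in path:            # scans per node are independent,
        if is_corner(node):      # so this loop parallelizes
            V_hvg.add(node)
        for (dx, dy), axis in DIRS.items():
            x, y = node
            while True:
                x, y = x + dx, y + dy
                if occupied((x, y)):      # scan blocked
                    break
                if is_corner((x, y)):     # record corner, stop scan
                    (V_h if axis == 'h' else V_v).add((x, y))
                    break
    return V_hvg | (V_h & V_v)   # line 24 of Algorithm 1
\end{verbatim}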
\noindent \textbf{Post-processed path $\hat{P}$: }The post-processed path $\hat{P}$ is obtained by performing an optimal search over the constructed visibility graph $HVG$. The resulting path may lie in a neighboring homotopy class, since there is no restriction forcing the path to lie in the same homotopy class as $P$. However, the post-processed path $\hat{P}$ is guaranteed to be at least as short as the Euclidean optimal path in the homotopy class of $P$. The construction of the visibility graph $HVG$ requires pairwise line-of-sight checks, which can be parallelized for efficient computation.
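A sketch of these two remaining steps, assuming a \texttt{line\_of\_sight} predicate over vertex pairs (the pairwise checks are the part that parallelizes):
\begin{verbatim}
import heapq, math

def shortest_path_on_hvg(vertices, line_of_sight, s, g):
    verts = list(vertices)
    adj = {v: [] for v in verts}
    for i, u in enumerate(verts):        # pairwise LOS checks are
        for v in verts[i + 1:]:          # independent: parallelize
            if line_of_sight(u, v):
                w = math.dist(u, v)
                adj[u].append((v, w))
                adj[v].append((u, w))
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:                            # Dijkstra over the HVG
        d, u = heapq.heappop(pq)
        if u == g:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [g]
    while path[-1] != s:
        path.append(prev[path[-1]])
    return path[::-1]
\end{verbatim}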
\begin{table}[h!]
\centering
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{3}{|c|}{Grid Path: E1,D2,D3,C4,B5,B6,B7,B8} \\ \hline
\multicolumn{1}{|c|}{\textbf{Grid Path Node}} & \multicolumn{1}{c|}{\boldmath{$V_h$}} & \multicolumn{1}{c|}{\boldmath{$V_v$}} \\ \hline
E1 & E5 & C1 \\ \hline
D2 & E5 & C1 \\ \hline
D3 & E5 & C1,C3 \\ \hline
C4 & E5,C3,C5 & C1,C3 \\ \hline
B5 & E5,C3,C5 & C1,C3,C5 \\ \hline
B6 & E5,C3,C5 & C1,C3,C5 \\ \hline
B7 & E5,C3,C5 & C1,C3,C5 \\ \hline
B8 & E5,C3,C5 & C1,C3,C5 \\ \hline
\multicolumn{3}{|c|}{$V_{HVG}$ = {[}E1,C3,C5,B8{]}, HVG Path = E1,C5,B8} \\ \hline
\end{tabular}
\caption{An execution of the HVG algorithm. For each node in the grid-based path, the state of the sets $V_h$ and $V_v$ is shown.}
\label{table:exhvg}
\end{table}
\noindent \textbf{Execution of HVG: }An example execution of HVG is illustrated in Figure \ref{fig:example} and Table \ref{table:exhvg}. The green circles denote the start and goal vertices. The red path is the initially computed grid-based path $P$ where the red circles represent the vertices of the path. The blue path is the post-processed path $\hat{P}$ with the blue circles denoting the vertices $V_{HVG}$ found by HVG. For each node in $P$, Table \ref{table:exhvg} shows the state of the lists $V_v$ and $V_h$. The horizontal and vertical scans are performed from the vertices colored in red or green and the result is agnostic to the order in which the scans are performed.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{images/hvg_example.png}
\caption{Illustration of the execution of HVG algorithm. The green circles represent the start and goal vertices. Red circles denote the vertices of the grid-based path $P$ which is computed using A*. Blue path is the post-processed path $\hat{P}$ with the blue circles denoting the HVG vertices $V_{HVG}$.}
\label{fig:example}
\end{figure}
\section{Proof}
\noindent In this section, the proof of homotopic optimality for the post-processing algorithm is presented. The outline of the proof is as follows. For the given grid-based path $P$ in homotopy class $H$, let $P_{opt} \in H$ be the optimal path in $H$ and $V_{HVG}$ be the set of vertices found by the HVG algorithm. If we can show that every vertex of $P_{opt}$ lies in $V_{HVG}$, then the visibility graph constructed using $V_{HVG}$ is guaranteed to contain the path $P_{opt}$. The aim is to show that an obstacle corner vertex $v^i \in V$ belongs to the optimal path $P_{opt}$ in the homotopy class if and only if there exist grid search nodes $p_h^i, p_v^i \in P$ such that $p_h^i$ has a horizontal scan to $v^i$ and $p_v^i$ has a vertical scan to $v^i$. Here, a scan is defined as a line that passes through freespace and does not go along obstacle edges or intersect an obstacle.
\noindent \textbf{Lemma 1: }For the grid-based path $P \in H$, consider the scenario in which the Euclidean-distance optimal path $P_{opt} \in H$ has only one intermediate vertex $v$. For $v \in V$ to be a taut exit belonging to the optimal path, the angle subtended by the path segments at $v$ must be less than $180^{\circ}$ and the vertex $v = [v_x,v_y]$ must satisfy
\[s_{x} \leq v_x \leq g_{x}\]
\[s_{y} \leq v_y \leq g_{y}\]
\noindent \textbf{Proof: }Assume that $v_y < s_{y} < g_{y}$ and that $v$ is part of the optimal path. This would cause the angle subtended at $v$ by the path segments to be greater than $180^{\circ}$, making $v$ a non-taut exit, which contradicts the assumption that $v$ is part of the optimal path. Similarly, violation of the other inequalities can be shown to imply that $v$ is not part of the optimal path. Hence, it is proven by contradiction that for a Euclidean optimal path with two segments containing an obstacle corner $v$ as its intermediate vertex, the inequalities $s_{x} \leq v_x \leq g_{x}$ and $s_{y} \leq v_y \leq g_{y}$ are satisfied.
\noindent \textbf{Lemma 2: }For any grid-based path $P$ that is homotopic to $P_{opt}$ with one intermediate vertex $v \in P_{opt}$, there exists $p_h, p_v \in P$ such that
\[p_{hy} = v_y\]
\[p_{vx} = v_x\]
\noindent and the line joining the pair of points $\{v,p_h\}$, $\{v,p_v\}$ are collision free.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/homotopyproof.png}
\caption{Illustration to show that the homotopy class changes if an obstacle is present between either pair of points $\{v,p_h\}$ and $\{v,p_v\}$}
\label{fig:homotopyproof}
\end{figure}
\noindent \textbf{Proof: }The grid search path $P$ is homotopic to the path $P_{opt}$. In Lemma 1, we stated and proved that if $v \in P_{opt}$, then $s_{x} \leq v_x \leq g_{x}$ and $s_{y} \leq v_y \leq g_{y}$. Since the grid search path $P$ is continuous between $s$ and $g$, there exist $p_h, p_v \in P$ such that $p_{hy} = v_y$ and $p_{vx} = v_x$. Assume that either of the pairs of points $\{v,p_h\}$ and $\{v,p_v\}$ is obstructed by an obstacle. As shown in Figure \ref{fig:homotopyproof}, the canonical sequence \cite{testhomotopy} for testing homotopy is then $AB$ for $P$ and $A'B$ for $P_{opt}$, which indicates that the grid search path $P$ and $P_{opt}$ belong to different homotopy classes. Thus, by contradiction, we prove that the lines joining the pairs of points $\{v,p_h\}$ and $\{v,p_v\}$ must be collision free.
\noindent \textbf{Lemma 3: }When $P_{opt}$ is the optimal path in $H$, there exists $p_h^i,p_v^i \in P$ for every $v^i \in P_{opt}/\{s,g\}$ such that $p_{hy}^i = v_y^i$ and $p_{vx}^i = v_x^i$ and the line joining the pair of points $\{v^i,p_h^i\}$ and $\{v^i,p_v^i\}$ are collision free.
\noindent \textbf{Proof: }We generalize Lemma 1 and Lemma 2 for $P_{opt}$ with multiple intermediate vertices. The two step proof for the lemma is as follows:
\begin{enumerate}
\item First, we show that when the paths $P$ and $P_{opt}$ intersect at only two points, namely the start $s$ and goal $g$ coordinates, there exist horizontal and vertical line-of-sight scans from $v_i \in P_{opt}$ to $P$. We utilize the property that any subset of the optimal path is also optimal. Consider the sub-path $P' = \{v_{i-1}, v_i, v_{i+1}\}$ from $P_{opt} = (s=v_1,v_2, \ldots, v_k=g)$, where $P'$ is an optimal path with start $v_{i-1}$ and goal $v_{i+1}$ with only one taut exit, as in Lemma 2. Construct another path $P'' \in H$ such that $P'' = \{v_{i-1}, v_{i-2}, \ldots, v_1, P, v_k, v_{k-1}, \ldots, v_{i+1}\}$. From Lemma 2, $P'$ and $P''$ are homotopic trajectories, and the existence of $p_h^i,p_v^i \in P''$ can be shown. Further, it holds that $p_h^i,p_v^i \in P$, because if $p_h^i,p_v^i \in P''/P$, then there would be a valid path shortcut in $P_{opt}$, which contradicts the assumption that $P_{opt}$ is the optimal path in $H$. Hence, it is proven that for every $v_i \in P_{opt}$ there exist $p_h^i,p_v^i \in P$.
\item In the case that the paths $P$ and $P_{opt}$ intersect at more than two points, the paths are split at the intersection points and, for each of the segments, part (1) of the proof holds.
\end{enumerate}
\noindent \textbf{Theorem 1: }For a given grid-based path $P$ in homotopy class $H$ with $P_{opt}$ being the Euclidean-distance optimal path in $H$, the post-processed path $\hat{P}$ returned by the Homotopic Visibility Graph (HVG) algorithm is such that $length(\hat{P}) \leq length(P_{opt})$.
\noindent \textbf{Proof: }The set of vertices found in lines 2--24 of Algorithm 1 is a superset of the set of vertices $v \in P_{opt}$, as stated and proved in Lemma 3. Hence, the path found by performing an optimal search on the visibility graph constructed from these vertices is guaranteed to be at least as short as $P_{opt}$.
\section{Experiments}
For benchmarking the algorithms, we use city and random maps from \cite{movingai}, ranging in size from 512x512 to 6000x6000. To test the algorithms on maps with high obstacle density, custom maps were generated by randomly sampling obstacles. We test the path length and runtime of A$^*$ with HVG, A$^*$ with greedy post-processing (G-PP), A$^*$ with String Pulling (SP), Theta$^*$, and ANYA. All the algorithms were implemented in C++ and run on an Intel i7-6700 CPU (3.40GHz) with 32GB RAM. The implementations of Theta$^*$ and ANYA are from \cite{uras2015empirical}.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/mapsize_vs_time.png}
\caption{Runtime vs Map Size statistics for maps generated with 40\% obstacle density}
\label{fig:mapsize}
\end{figure}
\noindent \textbf{Runtime vs Map Size: }The scalability of HVG and the other algorithms is tested on maps of size 512x512 to 6000x6000. The practical merits of using A*+HVG in place of ANYA or Theta* are seen in Figure \ref{fig:mapsize}, which shows the scalability of the proposed algorithm (A*+HVG) to large maps. On average, A*+HVG is four times faster than Theta* and three times faster than ANYA on maps of size 6000x6000. While the time taken by HVG is comparable to that of the search itself on maps of size 512x512, the ratio of the time taken by HVG to that of A* grows significantly smaller as the map size increases.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/obsdensity_vs_time.png}
\caption{Runtime vs Obstacle Density statistics for maps of size 512x512 generated with randomly sampled obstacles}
\label{fig:obsdensity}
\end{figure}
\noindent \textbf{Runtime vs Obstacle Density: }The performance of A*+HVG is benchmarked in maps of varying obstacle densities and the runtime is plotted in Figure \ref{fig:obsdensity}. It can be seen that A*+HVG performs better than the other algorithms in maps of high obstacle densities. ANYA exploits freespace regions efficiently by searching over the space of intervals instead of neighboring vertices in the grid-based graph and hence performs better in maps which are less dense.
\noindent \textbf{Path Cost: }The path quality generated by HVG when coupled with A* is comparable to that of Theta* and ANYA. The length of paths generated by A*+HVG is shorter than that of A*, A*+G-PP and A*+SP. Theta* and ANYA generate marginally shorter paths than A*+HVG which is a consequence of the A* path being in a homotopy class that does not contain the globally Euclidean-distance optimal path.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/pathcost.png}
\caption{Path cost statistics with the grid-based path generated by A*. The random maps have names that contain the map size and the obstacle density in order. For example, random2000-40-0 is a randomly generated map of size 2000x2000 with 40\% obstacle density. The plot depicts the path cost as a percentage of the path cost of A* on the y-axis.}
\label{fig:pathcost}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/runtime.png}
\caption{Runtime statistics with the grid-based path generated by A*. The random maps have names that contain the map size and the obstacle density in order. For example, random2000-40-0 is a randomly generated map of size 2000x2000 with 40\% obstacle density. The plot depicts the runtime (ms) in log-scale on the y-axis.}
\label{fig:runtime}
\end{figure}
\noindent \textbf{Bounded Suboptimal Search + HVG: }Bounded suboptimal search methods like Weighted A* (wA*) are typically significantly faster than A*, at the cost of path quality. HVG can be used to post-process paths generated by wA*, providing the homotopic Euclidean optimality guarantee at a much smaller runtime than with A*. Coupling HVG with weighted A* thus offers a tradeoff: the path quality worsens, but the runtime improves significantly, as the search component sees at least a five-fold reduction in runtime.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/wt_pathcost.png}
\caption{Path cost statistics with the grid-based path generated by wA* with w = 3. The plot depicts the path cost as a percentage of the path cost of wA* on the y-axis.}
\label{fig:pathcostwt}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/wt_runtime.png}
\caption{Runtime statistics with the grid-based path generated by wA* with w = 3. The plot depicts the runtime (ms) in log-scale on the y-axis.}
\label{fig:runtimewt}
\end{figure}
In the case of the map `random4000-40-0', the cost of the path generated by A*+HVG is around 0.98\% worse than the path cost of ANYA, with a 3x improvement in runtime over ANYA, whereas the cost of the path generated by wA*+HVG is 9.1\% worse than that of ANYA, with the runtime of wA*+HVG being 26x faster than ANYA. There is a substantial improvement in runtime when HVG is used along with weighted A*. The homotopic optimality guarantee still holds for wA*+HVG, in the homotopy class of the grid-based path generated by wA*.
\section{Conclusion \& Future Work}
In this work, we introduced a novel post-processing algorithm (HVG), which is, to the best of our knowledge, the first post-processing technique that returns a provably optimal path within the same homotopy class as the path returned by A* search on a 2D grid. The algorithm is highly parallelizable and hence achieves competitive runtimes; other post-processing algorithms and any-angle algorithms do not lend themselves to trivial parallelization. A$^*$ with HVG produces shorter paths than A$^*$, A$^*$ with greedy post-processing, and A$^*$ with String Pulling. On large maps and maps with high obstacle density, we demonstrated that A$^*$ with HVG consistently shows better runtimes than Theta$^*$ and ANYA. The runtime of A$^*$ with HVG is dominated by the time taken by A$^*$ for the initial grid search; however, HVG can also be used with weighted A$^*$. The runtime of the algorithm can be further improved by incorporating the pruning techniques used in \cite{svg}. The parallelization of the vertex scanning and visibility graph construction was done using a naive allocation of one thread per node of the grid-based path; there is potential to improve the multi-threading for a sizeable further reduction in runtime.
\section{Introduction}\label{sec:introduction}
The Universe is homogeneous and isotropic on large scales, which is called the cosmological principle. Based on it, the standard Lambda cold dark matter ($\Lambda$CDM) cosmological model has been established. In the past few decades, many experiments have tested its validity and verified that it is consistent with most cosmological observations. The observations of Cosmic Microwave Background (CMB) temperature anisotropies and polarizations from the Wilkinson Microwave Anisotropy Probe (WMAP) \cite{Hinshaw:2012aka,Bennett:2012zja} and Planck satellites \cite{Akrami:2018vks,Aghanim:2019ame,Aghanim:2018eyx,Akrami:2019bkn} provide high-precision constraints on the six base cosmological parameters. However, several anomalies have been reported, such as the alignment of low-$\ell$ multipoles in the CMB temperature anisotropies \cite{Tegmark:2003ve,Bielewicz:2004en,Copi:2013jna,Chang:2013lxa}, the parity asymmetry \cite{Akrami:2019bkn,Kim:2010gd,Kim:2010gf,Gruppuso:2010nd,Liu:2013wfa,Zhao:2013jya} and the hemispherical power asymmetry \cite{Akrami:2019bkn,Eriksen:2003db,Hansen:2004vq,Eriksen:2007pc} in the CMB, the spatial variation of the fine structure constant \cite{Webb:2010hc,King:2012id}, the anisotropic accelerating expansion of the Universe \cite{Bonvin:2006en,Cai:2013lja,Koivisto:2008ig,Chang:2014wpa,Chang:2014nca,Lin:2016jqp}, the alignment of quasar polarization vectors on large scales \cite{Hutsemekers:2005iz}, and the MOND acceleration scale \cite{Zhou:2017lwy,Chang:2018vxs,Chang:2018lab}. These phenomena may imply that our Universe has a preferred direction.
As the most luminous persistent energy sources, quasars have extraordinary potential for the exploration of our Universe. In recent years, quasars have been tentatively used to investigate cosmological parameters. An incomplete list includes the relation between the UV emission lines and the continuum luminosity \cite{1977ApJ...214..679B}, the relation between the radius of quasars and their luminosity \cite{Watson:2011um,Melia:2013sxa,Kilerci_Eser_2015}, the relation between luminosity and mass of super-Eddington accreting quasars \cite{Wang:2013ha}, the correlation between X-ray variability and luminosity of quasars \cite{LaFranca:2014eba}, and the non-linear relation between UV and X-ray luminosity \cite{Risaliti:2015zla,Khadka:2019njj,Khadka:2020tlm,Lusso:2020pdb}. The non-linear relation between UV and X-ray luminosity was first discovered by X-ray surveys \cite{1979ApJ...234L...9T,1981ApJ...245..357Z,1986ApJ...305...83A}. Over the decades, this relation has been confirmed by observations of a few hundred quasars in the redshift range from 0 to 6.5. Since 2015, E. Lusso et al. \cite{Risaliti:2015zla,Lusso:2020pdb} have been attempting to estimate cosmological parameters using the non-linear relation between UV and X-ray luminosity, with quasars as standardizable candles.
In this paper, we use the X-ray and UV fluxes of 808 quasars \cite{Risaliti:2015zla} to explore the anisotropy of the Universe. These 808 quasars are thought to be standardizable candles through the relation between UV and X-ray luminosity. The quasar sample spans the redshift range $0.061\leq z \leq 6.28$. The Pantheon sample \cite{Scolnic:2017caz} and the JLA compilation \cite{Betoule:2014frx} are combined with the 808 quasars in the analysis of the anisotropic cosmological model, i.e., the Finslerian cosmological model. We also investigate the Hubble constant $H_0$ in the Finslerian cosmological model by considering six gravitationally lensed quasars with measured time delays \cite{Wong:2019kwg}. Finally, we forecast the future constraints on the Finslerian cosmological model from the X-ray and UV fluxes of quasars.
The rest of this paper is organized as follows. In Section \ref{sec:Methodology}, we briefly introduce the UV and X-ray luminosity relationship, the Time-Delay Strong Lensing measurement, and the Finslerian cosmological model. We show our results in Section \ref{sec:PJq}. Finally, discussions and conclusions are given in Section \ref{sec:DC}.
\section{Methodology}\label{sec:Methodology}
\subsection{The UV and X-ray luminosity relationship}
The relation of the UV and X-ray luminosity is parameterized as
\begin{equation}\label{lux_lx_a}
\alpha_{\mathrm{OX}}=0.384 \times \log \left(L_{\mathrm{X}} / L_{\mathrm{UV}}\right),
\end{equation}
where $L_{\mathrm{UV}}$ denotes the logarithm of the monochromatic luminosity at 2500 {\r A} and $L_{\mathrm{X}}$ denotes the logarithm of the monochromatic luminosity at 2 keV. $\alpha_{\mathrm{OX}}$ is the slope of power law, which connects $L_{\mathrm{X}}$ and $L_{\mathrm{UV}}$. Eq. (\ref{lux_lx_a}) can be written as
\begin{equation}\label{lux_lx}
\log L_{\mathrm{X}}=\beta+\gamma \log L_{\mathrm{UV}},
\end{equation}
where $\beta$ and $\gamma$ are two free parameters. The luminosities and fluxes of quasars are connected by the luminosity distance. Rewriting Eq. (\ref{lux_lx}) in terms of fluxes, we obtain
\begin{equation}\label{lux_lx_F}
\log \left(F_{X}\right)=\beta+(\gamma-1) \log (4 \pi)+\gamma \log \left(F_{U V}\right)+2(\gamma-1) \log \left(D_{L}\right),
\end{equation}
where $\log$ denotes $\log_{10}$, and $F_{X}$ and $F_{UV}$ represent the X-ray and UV fluxes of the quasars, respectively. The luminosity distance $D_{L}$ takes the form
\begin{equation}\label{DL}
D_{L}=\frac{c\left(1+z\right)}{H_0} \int_{0}^{z} \frac{d z'}{E(z')},
\end{equation}
where $z$ denotes the redshift, $c$ is the speed of light, and $H_0$ is the Hubble constant. The expression for $E(z)$ depends on the cosmological model.
In our work, the dataset of X-ray and UV fluxes of quasars is from G. Risaliti and E. Lusso \cite{Risaliti:2015zla}. The dataset contains 808 quasars in the redshift range $0.061\leq z \leq 6.28$. The redshift distribution of the 808 quasars is shown in Fig. \ref{fig:red_dis}. The distribution of the 808 quasars in the galactic coordinate system is shown in Fig. \ref{fig:red_dis_g}, where the pseudo-colors indicate the redshifts of the quasars.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{redshift.pdf}
\caption{The redshift distribution of 808 quasars.}
\label{fig:red_dis}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=8.5cm]{l_b_distribution.pdf}
\caption{The distribution of 808 quasars in the galactic coordinate system. The pseudo-colors indicate redshift of these quasars.}
\label{fig:red_dis_g}
\end{center}
\end{figure}
\subsection{The Time-Delay Strong Lensing measurement}
Strong gravitational lensing is a powerful probe of cosmological models. The time-delay strong lensing (TDSL) measurement is a fully independent method of measuring the Hubble constant. Since the approach was first proposed by Refsdal \cite{Refsdal:1964nw}, lensed quasars have generally been used to constrain $H_0$ by measuring the difference in the arrival time of photons. TDSL provides a measurement of $H_0$ that is completely independent of the CMB and the local distance ladder.
The travel time of light rays from a source to the observer depends on their path length and the gravitational potential they traverse. For a system of lenses with an image at an angular position $\boldsymbol{\theta}$ and corresponding source position $\boldsymbol{\beta}$, the excess time delay is
\begin{equation}\label{e_time_delay}
t(\boldsymbol{\theta}, \boldsymbol{\beta})=\frac{D_{\Delta t}}{c}\left[\frac{(\boldsymbol{\theta}-\boldsymbol{\beta})^{2}}{2}-\psi(\boldsymbol{\theta})\right],
\end{equation}
where $c$ is the speed of light and $\psi(\boldsymbol{\theta})$ is the lens potential. The time-delay distance $D_{\Delta t}$ is defined as \cite{Refsdal:1964nw}
\begin{equation}
D_{\Delta t} = \left(1+z_{\mathrm{d}}\right) \frac{D_{\mathrm{d}} D_{\mathrm{s}}}{D_{\mathrm{ds}}},
\end{equation}
where $z_{\mathrm{d}}$ denotes the lens redshift. $D_{\mathrm{d}}$ and $D_{\mathrm{s}}$ are the angular diameter distance from the observer to the lens and the angular diameter distance from the observer to the source, respectively. $D_{\mathrm{ds}}$ is the angular diameter distance from the lens to the source. The angular diameter distance is defined as
\begin{equation}\label{DA}
D_{A}=\frac{c}{H_0 \left( 1+z \right)} \int_{0}^{z} \frac{d z'}{E(z')},
\end{equation}
where $z$ is the redshift and $H_0$ is the Hubble constant. The expression for $E(z)$ depends on the cosmological model. The difference between the excess time delays of two images A and B is
\begin{equation}
\Delta t_{AB}=\frac{D_{\Delta t}}{c}\left[\frac{\left(\boldsymbol{\theta}_{A}-\boldsymbol{\beta}\right)^{2}}{2}-\psi\left(\boldsymbol{\theta}_{A}\right)-\frac{\left(\boldsymbol{\theta}_{B}-\boldsymbol{\beta}\right)^{2}}{2}+\psi\left(\boldsymbol{\theta}_{B}\right)\right],
\end{equation}
where $\boldsymbol{\theta}_{A}$ and $\boldsymbol{\theta}_{B}$ are the positions of image A and B, respectively.
We use six gravitationally lensed quasars with measured time delays from the H0LiCOW collaboration \cite{Wong:2019kwg} to constrain the Hubble constant and other cosmological parameters. Our work is based on the $H_0$ inference code\footnote{https://github.com/shsuyu/H0LiCOW-public/tree/master/H0\_inference\_code.} provided by Kenneth C. Wong et al. \cite{Wong:2019kwg}.
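For illustration, a minimal numerical sketch of Eq. (\ref{DA}) and the time-delay distance follows; we assume spatial flatness so that $D_{\mathrm{ds}}$ follows from the comoving-distance difference, and $E(z)$ is passed in as a model-dependent callable:
\begin{verbatim}
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def D_A(z, H0, E):
    # Angular diameter distance, Eq. (DA)
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return C_KMS / (H0 * (1.0 + z)) * integral

def D_dt(z_d, z_s, H0, E):
    # D_dt = (1 + z_d) D_d D_s / D_ds; for a flat universe, the
    # lens-source distance uses the comoving-distance difference.
    D_d, D_s = D_A(z_d, H0, E), D_A(z_s, H0, E)
    chi_d, chi_s = (1 + z_d) * D_d, (1 + z_s) * D_s
    D_ds = (chi_s - chi_d) / (1 + z_s)
    return (1 + z_d) * D_d * D_s / D_ds
\end{verbatim}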
\subsection{The anisotropic cosmological model}
In this paper, we choose the Finslerian cosmological model as the anisotropic cosmological model. Unlike the standard cosmological model, the Finslerian cosmological model has an intrinsically preferred direction that breaks the isotropy of the Universe. Many works investigating the anisotropy of the Universe are based on Finsler spacetime; for instance, investigating the cosmic anisotropy with type Ia supernovae (SNe Ia) samples by the hemisphere comparison (HC) method \cite{Chang:2014nca,Zhao:2019azy} and by dipole fitting \cite{Chang:2017bbi,Lin:2016jqp,Lin:2015rza,Chang:2014nca,Chang:2019utc}, explaining the parity asymmetry and power deficit in Finsler spacetime \cite{Chang:2018bjg}, and the unified description of the dipoles of the fine-structure constant and the SNe Ia Hubble diagram \cite{Li:2015uda}.
In the Finsler spacetime, the scale factor $a$ takes the form \cite{Li:2015uda},
\begin{equation}\label{a_F}
a=\left(1+A_{D} \cos \theta\right) /(1+z).
\end{equation}
$A_{D}$ is a parameter of the Finsler spacetime, which can be regarded as the dipole amplitude. When $A_{D}=0$, the Finslerian cosmological model reduces to the $\Lambda$CDM model. $\theta$ is the angle between the position of the quasar and the preferred direction in the Finsler spacetime. From Eq. (\ref{a_F}), the luminosity distance in the Finsler spacetime can be written as
\begin{equation}
D_{L}=\frac{c\left(1+z\right)}{H_0} \int_{0}^{z} \frac{d z'}{E(z')},
\end{equation}
where $E(z)$ in the Finsler spacetime takes the form of
\begin{equation}\label{Ez}
E(z)=\sqrt{\Omega_{m 0}(1+z)^{3}(1 + A_{D} \cos \theta)^{-3}+1-\Omega_{m 0}}.
\end{equation}
Plugging Eq. (\ref{Ez}) into Eq. (\ref{DL}) and Eq. (\ref{DA}), we obtain the luminosity distance and the angular diameter distance in the Finslerian cosmological model, respectively.
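A minimal numerical sketch of Eqs. (\ref{Ez}), (\ref{DL}), and (\ref{lux_lx_F}) follows; the cgs unit handling ($D_L$ converted from Mpc to cm so the fluxes are in cgs units) is our assumption:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458   # speed of light in km/s
MPC_CM = 3.0857e24   # cm per Mpc

def E_finsler(z, Om, AD, cos_t):
    # Eq. (Ez): the dipole enters through (1 + A_D cos(theta))
    return np.sqrt(Om * (1 + z)**3 * (1 + AD * cos_t)**-3 + 1 - Om)

def D_L(z, H0, Om, AD, cos_t):
    # Luminosity distance, Eq. (DL), in Mpc
    I, _ = quad(lambda zp: 1 / E_finsler(zp, Om, AD, cos_t), 0, z)
    return C_KMS * (1 + z) / H0 * I

def log_FX(log_FUV, z, beta, gamma, H0, Om, AD, cos_t):
    # Predicted X-ray flux, Eq. (lux_lx_F)
    dl = D_L(z, H0, Om, AD, cos_t) * MPC_CM
    return (beta + (gamma - 1) * np.log10(4 * np.pi)
            + gamma * log_FUV + 2 * (gamma - 1) * np.log10(dl))
\end{verbatim}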
\section{Results}\label{sec:PJq}
To constrain the dipole amplitude and the preferred direction in the Finslerian cosmological model with the X-ray and UV fluxes of quasars, we employ the likelihood function \cite{Khadka:2019njj},
\begin{equation}
\ln (\mathrm{LF})=-\frac{1}{2} \sum_{i=1}^{N}\left[\frac{\left[\log \left(F_{X, i}^{\mathrm{obs}}\right)-\log \left(F_{X, i}^{\mathrm{th}}\right)\right]^{2}}{s_{i}^{2}}+\ln \left(2 \pi s_{i}^{2}\right)\right],
\end{equation}
where $s_{i}^{2}=\sigma_{i}^{2}+\delta^{2}$. $\sigma_{i}^{2}$ is the error of the observed flux $F_{X, i}^{\mathrm{obs}}$ and $\delta$ is the global intrinsic dispersion. $F_{X, i}^{\mathrm{th}}$ is the theoretical flux at the redshift $z_i$.
The Markov chain Monte Carlo (MCMC) method is used to explore the whole parameter space in our work. Emcee\footnote{https://emcee.readthedocs.io/en/stable/} \cite{ForemanMackey:2012ig}, an affine-invariant MCMC ensemble sampler, is widely used to investigate parameters in astrophysics and cosmology. During the fitting process, we find that the parameters $\beta$, $\gamma$, and $\delta$ are insensitive to the choice between the $\Lambda$CDM model and the Finslerian cosmological model; the results for these three parameters in the two models are almost the same. For the sake of brevity and clarity, we only show the parameters related to the Finslerian cosmological model.
The flat prior of each parameter in the Finslerian cosmological model is
\begin{equation}
\begin{aligned}
&\Omega_{\mathrm{m}} \sim [0,1], A_D \sim [0,1], l \sim [-180,180], b \sim [-90,90].
\end{aligned}
\end{equation}
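A minimal emcee sketch of this setup, using the \texttt{log\_FX} helper from the previous sketch; $H_0$ is fixed at a fiducial value here (for flux-only data it is degenerate with $\beta$), the data arrays are assumed to be loaded beforehand, and the starting point is a rough hypothetical guess:
\begin{verbatim}
import numpy as np
import emcee

def ln_prob(params, z, logFUV, logFX_obs, sigma, cos_theta_of):
    Om, AD, l, b, beta, gamma, delta = params
    if not (0 < Om < 1 and 0 <= AD < 1 and -180 < l < 180
            and -90 < b < 90 and delta > 0):
        return -np.inf                       # flat priors
    cos_t = cos_theta_of(l, b)   # cosine of quasar-dipole angles
    model = np.array([log_FX(fu, zi, beta, gamma, 70.0, Om, AD, ct)
                      for fu, zi, ct in zip(logFUV, z, cos_t)])
    s2 = sigma**2 + delta**2
    return -0.5 * np.sum((logFX_obs - model)**2 / s2
                         + np.log(2 * np.pi * s2))

ndim, nwalkers = 7, 32
guess = np.array([0.3, 0.1, 0.0, 0.0, 7.0, 0.6, 0.2])  # rough guess
p0 = guess + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(
    nwalkers, ndim, ln_prob,
    args=(z, logFUV, logFX_obs, sigma, cos_theta_of))
sampler.run_mcmc(p0, 5000, progress=True)
\end{verbatim}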
The results are shown in Fig. \ref{fig:xuv_result} and summarized in Table \ref{table:C_data}. In Fig. \ref{fig:xuv_result}, we show the marginalized posterior distribution of the parameters. The horizontal and vertical solid black lines denote the maxima of the 1-dimensional marginalized posteriors. In Table \ref{table:C_data}, we show the 68\% confidence level constraints on the parameters. As can be seen, the dipole anisotropy is well constrained by the X-ray and UV fluxes of quasars. The dipole amplitude is $A_D=0.302_{ -0.124}^{ +0.185}$, which is nonzero at the 1$\sigma$ confidence level. The dipole direction points towards $(l, b) = ( 288.92_{~ -28.80^{\circ}}^{^{\circ}+23.74^{\circ}}, 6.10_{~ -16.40^{\circ}}^{^{\circ} +16.55^{\circ}} )$. Compared with results from SNe Ia \cite{Lin:2015rza,Zhao:2019azy,Chang:2019utc}, the precision of the dipole direction is significantly improved. Interestingly, we find that the dipole direction from the X-ray and UV fluxes of quasars is very close to the dipole direction $(l, b) = ( 291.60_{~ -92.96^{\circ}}^{^{\circ} +248.10^{\circ}}, 16.20_{~ -78.73^{\circ}}^{^{\circ} +73.80^{\circ}} )$ given by the JLA compilation in the Finslerian cosmological model. The angular difference between the two dipole directions is only $10.44^{\circ}$. The dipole direction given by the Pantheon sample in the Finslerian cosmological model is $(l, b) = ( 298.80_{~ -118.69^{\circ}}^{^{\circ} +75.31^{\circ}}, -23.41_{~ -57.41^{\circ}}^{^{\circ} +19.26^{\circ}} )$ \cite{Chang:2019utc}, which is about $31.05^{\circ}$ away from the dipole direction given by the X-ray and UV fluxes of quasars.
\begin{table*}
\begin{center}
\caption{The 68\% confidence level constraints on the parameters in the Finslerian cosmological model with different datasets.}
\begin{threeparttable}
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{cccc}
\hline Data & Quasars & Quasars + Pantheon + JLA & Quasars + TDSL \\
\hline$\Omega_{\mathrm{m}}$ & $0.509_{ -0.275}^{ +0.453}$ & $ 0.298_{ -0.042}^{ +0.039}$ & $0.204_{ -0.137}^{ +0.190}$ \\
$A_D$ & $0.302_{ -0.124}^{ +0.185}$ & $-$ & $0.142_{ -0.142}^{ +0.330}$ \\
$l$ & $288.92_{~ -28.80^{\circ}}^{^{\circ}+23.74^{\circ}}$ & $284.41_{~ -104.37^{\circ}}^{^{\circ} +220.14^{\circ}}$ & $ 296.24_{~ -94.22^{\circ}}^{^{\circ} +46.62^{\circ}}$ \\
$b$ & $6.10_{~ -16.40^{\circ}}^{^{\circ} +16.55^{\circ}}$ & $-9.00_{~ -80.99^{\circ}}^{^{\circ} +76.29^{\circ}}$ & $21.23_{~ -45.86^{\circ}}^{^{\circ} +51.33^{\circ}}$ \\
$H_{0}$\tnote{1} & $-$ & $-$ & $72.2_{-4.1}^{+3.6}$ \\
\hline
\end{tabular}}
\begin{tablenotes}
\footnotesize
\item[1] $\mathrm{~km} \mathrm{~s}^{-1} \mathrm{Mpc}^{-1}$
\end{tablenotes}
\end{threeparttable}
\label{table:C_data}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{xuv_result.pdf}
\caption{The marginalized posterior distribution of the parameters in the Finslerian cosmological model with the X-ray and UV fluxes of quasars. The horizontal and vertical solid black lines denote the maximum of 1-dimensional marginalized posteriors.}
\label{fig:xuv_result}
\end{center}
\end{figure}
The dipole directions from SNe Ia in the dipole-modulated $\Lambda$CDM model are also considered for comparison. In Table \ref{table:dipole_direction}, we summarize the dipole directions in the dipole-modulated $\Lambda$CDM model. In Fig. \ref{fig:DF-Compare}, we show all the dipole directions mentioned above in the galactic coordinate system. As can be seen in Fig. \ref{fig:DF-Compare}, the dipole direction given by the X-ray and UV fluxes of quasars is not far from the dipole directions given by SNe Ia in the dipole-modulated $\Lambda$CDM model. We show the angular differences between the dipole direction given by the X-ray and UV fluxes of quasars and the other dipole directions in Table \ref{table:difference}. For the three dipole directions in the dipole-modulated $\Lambda$CDM model, the angular difference is around $30^{\circ}$. The closeness of the dipole direction from the X-ray and UV fluxes of quasars to the ones from the SNe Ia samples, especially the JLA compilation, may hint that there exists an underlying relation.
\begin{table}
\caption{The 68\% confidence level constraints on the dipole directions in the dipole-modulated $\Lambda$CDM model with Pantheon \cite{Zhao:2019azy}, JLA \cite{Lin:2015rza}, and Union2.1 \cite{Yang:2013gea}.}
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{cccc}
\hline Data & Pantheon & JLA & Union2.1 \\
\hline
$l$ & $306.00_{~ -125.01^{\circ}}^{^{\circ} +82.95^{\circ}}$ & $316_{~ -110^{\circ}}^{^{\circ} +107^{\circ}}$ & $ 307.1^{\circ} \pm 16.2^{\circ}$ \\
$b$ & $-34.20_{~ -54.93^{\circ}}^{^{\circ} +16.82^{\circ}}$ & $-5_{~ -60^{\circ}}^{^{\circ} +41^{\circ}}$ & $ -14.3^{\circ} \pm 10.1^{\circ}$ \\
\hline
\end{tabular}}
\label{table:dipole_direction}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{DF-Compare.pdf}
\caption{The dipole directions from the Finslerian cosmological model and the dipole-modulated $\Lambda$CDM model with different datasets in the galactic coordinate system.}
\label{fig:DF-Compare}
\end{center}
\end{figure}
\begin{table*}
\begin{center}
\caption{The angular difference between the dipole direction given by the X-ray and UV fluxes of quasars and other dipole directions. ``F" denotes the Finslerian cosmological model and ``$\Lambda$" denotes the $\Lambda$CDM model.}
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{cccccc}
\hline Dipole direction & Pantheon-F & JLA-F & Pantheon-$\Lambda$ & JLA-$\Lambda$ & Union2.1-$\Lambda$ \\
\hline
Difference & $31.05^{\circ}$ & $10.44^{\circ}$ & $33.90^{\circ}$ & $29.23^{\circ}$ & $27.23^{\circ}$\\
\hline
\end{tabular}}
\label{table:difference}
\end{center}
\end{table*}
We combine the Pantheon sample and the JLA compilation with the quasars to constrain the Finslerian cosmological model. The results are shown in Fig. \ref{fig:PJX_result} and summarized in Table \ref{table:C_data}. In Fig. \ref{fig:PJX_result}, we show the marginalized posterior distribution of the parameters. The horizontal and vertical solid black lines denote the maxima of the 1-dimensional marginalized posteriors. In Table \ref{table:C_data}, we show the 68\% confidence level constraints on the parameters. For the combined datasets, the 95\% confidence level upper limit on the dipole amplitude $A_D$ is $1.14\times 10^{-2}$ and the dipole direction points towards $(l, b) = ( 284.41_{~ -104.37^{\circ}}^{^{\circ} +220.14^{\circ}}, -9.00_{~ -80.99^{\circ}}^{^{\circ} +76.29^{\circ}} )$. The result is similar to the one from the Pantheon sample in the Finslerian cosmological model \cite{Chang:2019utc}.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{PJX_result.pdf}
\caption{The marginalized posterior distribution of the parameters in the Finslerian cosmological model from quasars, Pantheon sample, and JLA compilation. The horizontal and vertical solid black lines denote the maximum of 1-dimensional marginalized posteriors.}
\label{fig:PJX_result}
\end{center}
\end{figure}
To investigate the Hubble constant $H_0$ in the Finslerian cosmological model, we use the six gravitationally lensed quasars with measured time delays from the H0LiCOW collaboration \cite{Wong:2019kwg} in our analysis. The six gravitationally lensed quasars are combined with the X-ray and UV fluxes of quasars in the parameter fitting. The flat prior on each parameter is as follows: $H_0 \sim [60,100], \Omega_{\mathrm{m}} \sim [0,1], A_D \sim [0,1], l \sim [-180,180], b \sim [-90,90]$. We show the results in Fig. \ref{fig:SLXUV_result} and Table \ref{table:C_data}. We find that the dipole amplitude is $A_D=0.142_{ -0.142}^{ +0.330}$ and the dipole direction is $(l, b) = ( 296.24_{~ -94.22^{\circ}}^{^{\circ} +46.62^{\circ}}, 21.23_{~ -45.86^{\circ}}^{^{\circ} +51.33^{\circ}})$. Even though the errors on the parameters are larger than those given by the quasars alone, the results for the parameters related to anisotropy are consistent with the results from the quasars. The Hubble constant is $H_0=72.2_{-4.1}^{+3.6} \mathrm{~km} \mathrm{~s}^{-1} \mathrm{Mpc}^{-1}$. Compared with the result from the six gravitationally lensed quasars with measured time delays, $H_0=73.3_{-1.8}^{+1.7} \mathrm{~km} \mathrm{~s}^{-1} \mathrm{Mpc}^{-1}$ \cite{Wong:2019kwg}, the central value of $H_0$ decreases slightly.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{SLXUV_result.pdf}
\caption{The marginalized posterior distribution of the parameters related to the Finslerian cosmological model with six gravitationally lensed quasars and the X-ray and UV fluxes of quasars. The horizontal and vertical solid black lines denote the maximum of 1-dimensional marginalized posteriors.}
\label{fig:SLXUV_result}
\end{center}
\end{figure}
Finally, we forecast the future constraints on the dipole parameters in the Finslerian cosmological model with the X-ray and UV fluxes of quasars. We assume a Finslerian cosmological model with $\Omega_{\mathrm{m}}=0.509$, $A_D=0.302$ and $(l,b)=(288.92^{\circ},6.10^{\circ})$, the values given by the X-ray and UV fluxes of the 808 quasars. In the simulation, the positions of the 808 quasars are unchanged and the redshifts of the simulated quasars are generated from the redshift distribution of the 808 quasars. We replace the X-ray flux of the \textit{i}th simulated quasar with a random number generated from the Gaussian distribution $G(F_{X}^{\mathrm{obs}},\sigma_{F_{X}^{\mathrm{obs}}})$, where $F_{X}^{\mathrm{obs}}$ is the X-ray flux of the observed quasar at the same position as the \textit{i}th simulated quasar and $\sigma_{F_{X}^{\mathrm{obs}}}$ is the error of that observed flux. We construct 2000, 5000, and 10000 simulations for comparison; the results for the dipole parameters from the simulated datasets are shown in Fig. \ref{fig:Sim_result} and summarized in Table \ref{table:S_data}. In Fig. \ref{fig:Sim_result}, the blue, red, and dark lines denote the 2000, 5000, and 10000 simulations, respectively. As can be seen, the inferred errors on the dipole parameters get smaller as the number of simulations increases. For the dipole amplitude $A_D$, the relative errors on the lower and upper limits reduce to $12.76\%$ and $15.86\%$ of the central value, respectively, in the 10000 case. For the direction parameters $l$ and $b$, the precision improves by about 50\% when the 2000 simulations are compared with the 10000 simulations. Our results show that the X-ray and UV fluxes of quasars have a promising future as a probe of the Finslerian cosmological model.
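A sketch of this simulation step as we describe it above (resampling indices keeps position, redshift, and flux paired; the helper names are ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def simulate(n_sim, z_obs, logFX_obs, sigma_obs):
    # Draw simulated quasars from the observed sample: redshifts
    # follow the observed distribution, and each X-ray flux is a
    # Gaussian draw centred on the observed flux at that position.
    idx = rng.integers(0, len(z_obs), size=n_sim)
    return z_obs[idx], rng.normal(logFX_obs[idx], sigma_obs[idx])
\end{verbatim}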
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{Sim.pdf}
\caption{The marginalized posterior distribution of the dipole parameters from the simulated X-ray and UV fluxes of quasars. The horizontal and vertical solid black lines denote the maximum of 1-dimensional marginalized posteriors. The blue, red, and dark lines denote 2000, 5000, and 10000 simulations, respectively.}
\label{fig:Sim_result}
\end{center}
\end{figure}
\begin{table}
\caption{The 68\% confidence level constraints on the parameters in the Finslerian cosmological model with the simulated dataset.}
\setlength{\tabcolsep}{1.8mm}{
\begin{tabular}{cccc}
\hline Simulations & 2000 & 5000 & 10000 \\
\hline
$A_D$ & $0.280_{ -0.078}^{ +0.105}$ & $0.289_{ -0.054}^{ +0.065}$ & $ 0.290_{ -0.037}^{ +0.046}$ \\
$l$ & $288.35_{~ -19.92^{\circ}}^{^{\circ} +16.15^{\circ}}$ & $289.70_{~ -11.78^{\circ}}^{^{\circ} +9.81^{\circ}}$ & $289.00_{~ -7.57^{\circ}}^{^{\circ} +7.73^{\circ}}$ \\
$b$ & $6.66_{~ -12.17^{\circ}}^{^{\circ} +11.45^{\circ}}$ & $5.81_{~ -6.47^{\circ}}^{^{\circ} +7.36^{\circ}}$ & $6.40_{~ -4.87^{\circ}}^{^{\circ} +4.64^{\circ}}$ \\
\hline
\end{tabular}}
\label{table:S_data}
\end{table}
\section{Discussions and conclusions}\label{sec:DC}
In this paper, we tested the anisotropy in the Finslerian cosmological model with the X-ray and UV fluxes of quasars. The dipole anisotropy is well-constrained by the X-ray and UV fluxes of quasars. The dipole amplitude is $A_D=0.302_{ -0.124}^{ +0.185}$ and the dipole direction points towards $(l, b) = ( 288.92_{~ -28.80^{\circ}}^{^{\circ}+23.74^{\circ}}, 6.10_{~ -16.40^{\circ}}^{^{\circ} +16.55^{\circ}} )$. Interestingly, we found that the dipole direction from the X-ray and UV fluxes of quasars is very close to the dipole direction $(l, b) = ( 291.60_{~ -92.96^{\circ}}^{^{\circ} +248.10^{\circ}}, 16.20_{~ -78.73^{\circ}}^{^{\circ} +73.80^{\circ}} )$ given by the JLA compilation in the Finslerian cosmological model. The angular difference between the two dipole directions is only $10.44^{\circ}$. We also found that the dipole direction given by the X-ray and UV fluxes of quasars is not far from the dipole directions given by SNe Ia in the dipole-modulated $\Lambda$CDM model, with an angular difference of around $30^{\circ}$. The closeness of the dipole direction from the X-ray and UV fluxes of quasars to those from the SNe Ia samples, especially the JLA compilation, may hint at an underlying relation. We combined the Pantheon sample and the JLA compilation with quasars to constrain the Finslerian cosmological model, and the results are similar to those given by the Pantheon sample alone in the Finslerian cosmological model \cite{Chang:2019utc}. We also investigated the Hubble constant $H_0$ in the Finslerian cosmological model by combining the X-ray and UV fluxes of quasars with six gravitationally lensed quasars. We found a slightly smaller value of the Hubble constant, $H_0=72.2_{-4.1}^{+3.6} \mathrm{~km} \mathrm{~s}^{-1} \mathrm{Mpc}^{-1}$, than the value $H_0=73.3_{-1.8}^{+1.7} \mathrm{~km} \mathrm{~s}^{-1} \mathrm{Mpc}^{-1}$ given by the six gravitationally lensed quasars alone. Finally, we forecasted the future constraints on the dipole parameters with the X-ray and UV fluxes of quasars. We constructed 2000, 5000, and 10000 simulations and found that the precision of the parameters related to anisotropy improves significantly as the number of simulations increases. Our results show that the X-ray and UV fluxes of quasars have a promising future as a probe of anisotropy in the Finsler spacetime.
\section*{Acknowledgments}\noindent
We thank Yong Zhou for helpful discussions. J.-Q. Xia is supported by the National Science Foundation of China under grants No. U1931202, 11633001, and 11690023; the National Key R\&D Program of China No. 2017YFA0402600.
\bibliographystyle{prsty}
|
{
"timestamp": "2021-05-11T02:21:14",
"yymm": "2105",
"arxiv_id": "2105.03965",
"language": "en",
"url": "https://arxiv.org/abs/2105.03965"
}
|
\section{Introduction}
\section{Environment Description}
\label{appendix:environment}
Recently, datasets embodied in action and perception have been used to train models for various tasks \cite{Vries2018TalkTW,mao2018the}. One such dataset is the grounded SCAN (gSCAN) dataset \cite{ruis2020benchmark}, which is used for systematic generalization. We base our environment gComm on the gSCAN dataset, which is a grounded version of the SCAN benchmark \cite{scan_benchmark}. While both these tasks focus on generalization with the meaning grounded in states of a grid-world, there are, however, certain key differences between gComm and gSCAN: \textbf{(i)} Firstly, gSCAN focuses on rule-based generalization for navigation tasks, wherein an agent learns to map a natural language instruction and its corresponding grid-view to a sequence of action primitives. Contrary to that, we present emergent communication as our main theme, using a pair of bots, a stationary speaker and a mobile listener, that process the language instruction and the grid-view respectively; \textbf{(ii)} Secondly, unlike the supervised framework adopted for learning gSCAN tasks, we use a more realistic RL framework, wherein the listener learns by exploring its environment and interacting with it. Our environment is conceptually similar to the BabyAI platform \cite{chevalier-boisvert2018babyai}. However, contrary to BabyAI, which focuses on language \textit{learning}, we intend to project gComm as a general-purpose platform for investigating generalization from the perspective of grounded language \textit{acquisition} through emergent communication. A companion paper~\cite{DBLP:journals/corr/abs-2012-05011} introduced an intrinsic reward framework to induce compositionality in the emergent language using the gComm environment. In this paper, we intend to expound on the environment details.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{env_plots/env_desc.png}
\caption{\small gComm Environment}
\label{figure:comm_gscan}
\vspace{-0.4cm}
\end{figure}
\paragraph{Object Attributes:}
The gComm grid-world is populated with objects of different characteristics like shape, color, size and weight.
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{Shapes:} \textit{circle, square, cylinder, diamond}
\item \textbf{Colors:} \textit{red, blue, yellow, green}
\item \textbf{Sizes:} $1,2,3,4$
\item \textbf{Weights:} \textit{light, heavy}
\end{itemize}
The weight attribute can be tied to the object size at the beginning of training; for instance, smaller objects are lighter and vice versa. Alternatively, the weight can be set as an independent attribute. In the latter case, the weight is randomly fixed at the start of each episode so that the listener cannot deduce it from the grid information (object size), and must rely on the speaker.
\subsection{Reinforcement Learning framework}
\label{appendix: rl framework}
\paragraph{Setup:}
In each round, a task is assigned to a stationary Speaker-Bot, the details of which (task and target information) it must share with a mobile Listener-Bot by transmitting a set of messages $\{m_i\}_{i=1}^{n_m}$ via a communication channel. At each time-step $t$, the listener agent selects an action from its action space $\mathcal{A}$, with the help of the received messages $\{m_i\}_{i=1}^{n_m}$ and its local observation (grid-view) $o_t \in \mathcal{O}$. The environment state is updated using the transition function $\mathcal{T}$: $\mathcal{S} \times \mathcal{A} \rightarrow \mathcal{S}$. The environment provides a reward to the agent at each time-step using a reward function $r$: $\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. The goal of the agent is to find a policy $\bm{\pi}_{\theta}$ : $(\mathcal{O},\{m_i\}_{i=1}^{n_m}) \rightarrow \Delta(\mathcal{A})$ that chooses optimal actions so as to maximize the expected reward, $\mathcal{R} = \mathrm{E}_{\bm{\pi}} [\sum_{t} \gamma^t r^{(t)}]$, where $r^{(t)}$ is the reward received by the agent at time-step $t$ and $\gamma \in (0, 1]$ is the discount factor. At the beginning of training, the semantic repertoires of the two bots are empty, and the speaker and listener must converge on a systematic usage of symbols to complete the assigned tasks, thus giving rise to an original linguistic system.
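As a minimal numerical illustration of the objective $\mathcal{R}$ above (a sketch in Python, not gComm's training code), the discounted return of a single episode can be computed as:
\begin{verbatim}
def discounted_return(rewards, gamma=0.99):
    """Compute R = sum_t gamma^t * r_t for one episode."""
    ret, discount = 0.0, 1.0
    for r in rewards:
        ret += discount * r
        discount *= gamma
    return ret

# With gComm's sparse 0-1 reward, a success at the last step of a
# 10-step episode yields 0.99**9, roughly 0.914:
# discounted_return([0]*9 + [1], gamma=0.99)
\end{verbatim}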
\paragraph{Observation Space:}
To encourage communication, gComm provides a partially observable setting in which neither the speaker nor the listener has access to the complete state information. The speaker knows the task and target specifics through the natural language instruction whereas, the listener has access to the grid representation. However, the listener is unaware of either the target object or the task, and therefore must rely on the speaker to accomplish the given task. The observation space of the listener comprises (i) the grid representation; (ii) the messages transmitted by the speaker.
The natural language instruction is parsed to $\langle\mathrm{VERB}, \{\mathrm{ADJ}_i\}_{i=1}^{3}, \mathrm{NOUN}\rangle$ with the help of an ad hoc semantic parser\footnote{$\mathrm{VERB}$: task; $\mathrm{ADJ}$: object attributes like color, size and weight; $\mathrm{NOUN}$: object shape}. It is then converted to the following 18-d vector representation before being fed to the speaker: \{\textit{1, 2, 3, 4, square, cylinder, circle, diamond, r, b, y, g, light, heavy, walk, push, pull, pickup}\}. Each position represents a bit and is set or unset according to the attributes of the target object and the task. The breakdown of the vector representation is as follows: bits [$0-3$]: target size; bits [$4-7$]: target shape; bits [$8-11$]: target color; bits [$12-13$]: target weight; bits [$14-17$]: task specification.
The grid information can either be an image input of the whole grid or a predefined cell-wise vector representation of the grid. In the latter case, each grid cell is specified by a 17-d vector representation given by: \{\textit{$1$, $2$, $3$, $4$, square, cylinder, circle, diamond, r, b, y, g, agent, E, S, W, N}\}. The breakdown is as follows: bits [$0-3$]: object size; bits [$4-7$]: object shape; bits [$8-11$]: object color; bit $12$: agent location (set $=1$ if the agent is present in that particular cell, otherwise $0$); bits [$13-16$]: agent direction. For an $obstacle$ or a $wall$, all the bits are set to $1$.
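A minimal sketch of the 18-d instruction encoding is given below. The attribute lists follow the bit layout described above; the function name and data layout are ours, and gComm's own implementation may differ.
\begin{verbatim}
# Illustrative sketch of the 18-d instruction vector (bit layout as
# described above); not gComm's actual code.
SIZES   = ["1", "2", "3", "4"]
SHAPES  = ["square", "cylinder", "circle", "diamond"]
COLORS  = ["r", "b", "y", "g"]
WEIGHTS = ["light", "heavy"]
TASKS   = ["walk", "push", "pull", "pickup"]

def encode_instruction(size, shape, color, weight, task):
    vec = [0] * 18
    vec[SIZES.index(size)] = 1           # bits 0-3: target size
    vec[4 + SHAPES.index(shape)] = 1     # bits 4-7: target shape
    vec[8 + COLORS.index(color)] = 1     # bits 8-11: target color
    vec[12 + WEIGHTS.index(weight)] = 1  # bits 12-13: target weight
    vec[14 + TASKS.index(task)] = 1      # bits 14-17: task
    return vec

# encode_instruction("2", "circle", "r", "heavy", "push")
# sets bits 1, 6, 8, 13, and 15.
\end{verbatim}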
\paragraph{Action Space:}
The action space comprises eight different actions that the listener agent can perform: \{\textit{left, right, forward, backward, push, pull, pickup, drop}\}. In order to execute the `push', `pull', and `pickup' actions, the agent must navigate to the same cell as that of the object. Upon executing a \textit{pickup} action, the object disappears from the grid. Conversely, an object that has been picked up can reappear in the grid only if a `drop' action is executed in the same episode. Also refer to Section~\ref{section: task description} for further details about task descriptions.
\paragraph{Rewards:}
gComm generates a 0-1 (sparse) reward, i.e., the listener gets a reward of $r = 1$ if it achieves the specified task, otherwise $r = 0$.
\paragraph{Communication:}
Recall that the listener has incomplete information of its state space and is thus unaware of the task and the target object. To address the information asymmetry, the speaker must learn to use the communication channel for sharing information. What makes it more challenging is the fact that the semantics of the transmitted information must be learned in a sparse reward setting, i.e. to solve the tasks, the speaker and the listener must converge upon a common protocol and use it systematically with minimal feedback at the end of each round.
\subsection{Task Description}
\label{section: task description}
\textbf{(i) Walk} to a target object
\textbf{(ii) Push} a target object in the forward direction.
\textbf{(iii) Pull} a target object in the backward direction.
\textbf{(iv) Pickup} a target object.
\textbf{(v) Drop} the picked up object.
Additionally, there are modifiers associated with verbs, for instance: \textit{pull the red circle twice}. Here, \textit{twice} is a numeral adverb and must be interpreted to mean two consecutive `pull' actions. When an object is picked up, it disappears from the grid and appears only if a `drop' action is executed in the subsequent time-steps. However, no two objects can overlap. It should be noted that while defining tasks, it is ensured that the target object is unique.
\paragraph{Target and Distractor objects:}
Cells in the grid-world are populated with objects divided into two classes: the \textit{target} object and the \textit{distractor} objects. The distractors either have the same color or the same shape (or both) as that of the target. Apart from these, some random objects distinct from the target can also be sampled using a parameter \textit{other\_objects\_sample\_percentage}. The listener and the objects may spawn at any random location on the grid.
\paragraph{Levels:} In addition to the simple grid-world environment comprising target and distractor objects, the task difficulty can be increased by generating obstacles and mazes. The agent is expected to negotiate the complex environment in a sparse reward setting. The number of obstacles and the maze density can be adjusted.
\paragraph{Instruction generation:}
Natural language instructions are programmatically generated based on predefined lexical rules and the specified vocabulary. At the beginning of training, the user specifies the kind of verb (transitive or intransitive), noun (object shape), and adjectives (object weight, size, color). Note that the instruction templates are fixed and, as such, cannot handle ambiguities in natural language.
\subsection{Communication}
\label{appendix: communication details}
gComm endows the agents with the ability to communicate. This forms a crucial step in addressing the partial observability problem and encouraging language acquisition. Above all, gComm provides several tools for an in-depth analysis of grounded communication protocols and their relation to the generalization performance.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{env_plots/maze_grid.png}
\caption{\small Maze-grid. The maze complexity and density are user-defined parameters. The agent is required to negotiate the obstacles while performing the given task.}
\label{figure:mazegrid}
\end{figure}
\paragraph{Communication Channel:}
\label{appendix:communication_channel}
The communication can be divided into two broad categories.
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{Discrete}:
Discrete messages can either be binary (processed using Gumbel-Softmax \cite{JangEtAl:2017:CategoricalReparameterizationWithGumbelSoftmax}) or one-hot (processed using a Categorical distribution)\footnote{The use of discrete latent variables renders the neural network non-differentiable. The Gumbel-Softmax gives a differentiable sample from a discrete distribution by approximating the hard one-hot vector with a soft version. For one-hot vectors, we use Relaxed one-hot Categorical sampling. Since we want the communication to be discrete, we employ the \textit{Straight-Through} trick for both binary and one-hot vectors.}. Discrete messages are associated with a temperature parameter $\tau$ (a minimal sketch of this sampling follows the list).
\item \textbf{Continuous}: As opposed to discrete messages, continuous signals are real-valued. Theoretically, each dimension in the message can carry 32 bits of information (32-bit floating point). These messages do not pose the same kind of information bottleneck as their discrete counterparts; however, they are not as interpretable.
\end{itemize}
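Below is a minimal sketch of sampling one-hot messages with the Straight-Through Gumbel-Softmax, using PyTorch's \texttt{gumbel\_softmax}; the tensor shapes are illustrative and the snippet is not gComm's implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

logits = torch.randn(3, 4)  # n_m = 3 messages, each of length d_m = 4
tau = 1.0                   # temperature parameter

# hard=True returns one-hot vectors in the forward pass while letting
# gradients flow through the soft sample (the Straight-Through trick).
messages = F.gumbel_softmax(logits, tau=tau, hard=True)
\end{verbatim}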
Apart from these, the communication channel can be utilized to compare against the following baseline implementations readily available in the gComm environment. These baselines not only enable us to investigate the efficacy of the emergent communication protocols, but also provide quantitative insights into the learned communication abilities.
\label{baselines}
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{Random Speaker}: In this baseline, the speaker transmits a set of random symbols to the listener which it must learn to ignore (and focus only on its local observation).
\item \textbf{Fixed Speaker}: Herein, the speaker's transmissions are masked with a set of \textit{ones}. Intuitively, this baseline provides an idea of whether communication is being used in the context of the given task (whether the speaker actually influences the listener or just appears to do so).
\item \textbf{Perfect Speaker}: This baseline provides an illusion of a perfect speaker by directly transmitting the input concept encoding, hence, acting as an upper bound for comparing the learned protocols.
\item \textbf{Oracle Listener}: For each cell, we zero-pad the grid encoding with an extra bit, and set it ($=1$) for the cell containing the target object. Thus, the listener has complete information about the target in context of the distractors. This baseline can be used as the upper limit of performance.
\end{itemize}
\paragraph{Channel parameters:}
The communication channel is defined using the following parameters:
\begin{itemize}[leftmargin=*,noitemsep]
\item Message Length: The length of the message vector $d_m$ sets a limit on the vocabulary size, i.e., the longer the message, the larger the vocabulary. For instance, for discrete (binary) messages, the vocabulary size is given by $|\mathcal{V}| = 2^{d_m}$. Note that a continuous message can transmit more information than a discrete message of the same length.
\item Information Rate or the number of messages $n_m$ transmitted per round of communication.
\end{itemize}
These constitute the channel capacity, $|\mathrm{C}| = \mathrm{c}_{d_m}^{n_m}$.
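As a worked example of these parameters, assuming discrete binary messages and reading the capacity as the number of distinct transmissions per round (our interpretation of the expression above):
\begin{verbatim}
d_m, n_m = 4, 3                      # message length, messages per round
vocab_size = 2 ** d_m                # |V| = 16 distinct messages
distinct_rounds = vocab_size ** n_m  # 16**3 = 4096 possible transmissions
\end{verbatim}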
\paragraph{Setting:}
Communication can either be modelled in the form of \textit{cheap talk} or \textit{costly signalling}. In the latter case, each message passing bears a small penalty to encourage more economic and efficient communication protocols. Alternatively, the communication can either be unidirectional (message passing from speaker to listener only) or bidirectional (an interactive setting wherein message passing happens in either direction). gComm uses a unidirectional cheap-talk setting.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{env_plots/lights_off_feature.pdf}
\caption{Lights Out}
\label{figure:lights_off}
\end{figure}
\subsection{Metrics}
\label{appendix:communication_metrics}
In order to induce meaningful communication protocols, the speaker must transmit useful information, correlated with its input (\textit{positive signalling}). At the same time, the listener must utilize the received information to alter its behavior and hence its actions (\textit{positive listening}). In alignment with the work of \cite{Lowe2019OnTP}, we incorporate the following metrics in our environment to assess the evolved communication protocols.
\begin{itemize}[leftmargin=*,noitemsep]
\item \textbf{Positive signalling}:
Context independence (CI) is used as an indicator of positive signalling. It captures the statistical alignment between the input concepts and the messages transmitted by the speaker and is given by:
\begin{align*}
\forall c \in \mathcal{C}: m_c = \argmax_m p_{cm}(c|m) \\
CI(p_{mc}, p_{cm}) = \frac{1}{|\mathcal{C}|} \sum_c p_{cm}(c|m_c)p_{mc}(m_c|c)
\end{align*}
Both $p_{cm}(c|m)$ and $p_{mc}(m|c)$ are calculated using a translation model by saving ($m,c$) pairs and running it in both directions. Since each concept element $c$ should be mapped to exactly one message $m$, CI will be high when $p_{cm}(c|m)$ and $p_{mc}(m|c)$ are high.\\
\item \textbf{Positive listening}: We use the Causal Influence of Communication (CIC) of the speaker on the listener as a measure of positive listening. It is defined as the mutual information between the speaker's message and the listener's action, $I(m,a_t)$. The higher the CIC, the greater the speaker's influence on the listener's actions, indicating that the listener is utilizing the messages.\\
\item \textbf{Compositionality}: Compositionality is measured using the topographic similarity (topsim) metric \cite{10.1162/106454606776073323}. Given two pairwise distance measures, i.e., one in the concept (input) space $\Delta_{\mathcal{C}}^{ij}$ and another in the message space $\Delta_{\mathcal{M}}^{ij}$, topsim is defined as the correlation coefficient between $\Delta_{\mathcal{C}}^{ij}$ and $\Delta_{\mathcal{M}}^{ij}$. A higher topsim indicates greater compositionality (see the sketch after this list).
\end{itemize}
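A common way to compute topsim, sketched below, is the Spearman correlation between the two vectors of pairwise distances; the Hamming metric is one typical choice for discrete concepts and messages, not necessarily the one used here.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def topographic_similarity(concepts, messages):
    """Correlate pairwise distances in concept and message space.

    concepts: array of shape (n, d_c); messages: array of shape (n, d_m).
    """
    dist_c = pdist(concepts, metric="hamming")
    dist_m = pdist(messages, metric="hamming")
    return spearmanr(dist_c, dist_m).correlation
\end{verbatim}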
\subsection{Additional features}
\label{section: additional features}
We introduce a \textit{lights out} feature in the gComm environment through which the grid (including all its objects) is subjected to varying illuminations (Figure~\ref{figure:lights_off}). The feature can be activated randomly in each episode and presents a challenging situation for the agent where it is required to navigate the grid using its memory of the past observation. Note that this feature is useful only when used with an image input as the grid representation.
\begin{table}[t]
\small
\begin{center}
\begin{tabular}{ccc}
\toprule
\textbf{Task} & \textbf{Baseline} & \textbf{Convergence Rewards}\\[0.4ex]
\midrule
\textbf{Walk} & Simple Speaker & $0.70$\\
& Random Speaker & $0.40$\\
& Fixed Speaker & $0.43$\\
& Perfect Speaker & $0.95$\\
& Oracle Listener & $0.99$\\
\midrule
\textbf{Push} \& \textbf{Pull} & Simple Speaker & $0.55$\\
& Random Speaker & $0.19$\\
& Fixed Speaker & $0.15$\\
& Perfect Speaker & $0.85$\\
& Oracle Listener & $0.90$\\
\bottomrule
\end{tabular}
\end{center}
\caption{\small Comparison of baseline convergence rewards [\textbf{Task: Walk}, params: \{comm\_type: categorical, num\_episodes: 200000, episode\_len: 10, num\_msgs: 3, msg\_len: 4, num\_actions: 4 (left, right, forward, backward), type\_grammar: simple\_intrans, weights: light, enable\_maze: False, grid\_size: $4\times4$, distractors: 4, grid\_input\_type: vector\}] [\textbf{Task: Push/Pull}, params: \{comm\_type: categorical, num\_episodes: 400000, episode\_len: 10, num\_msgs: 3, msg\_len: 4, num\_actions: 6 (left, right, forward, backward, push, pull), type\_grammar: simple\_trans, weights: light, enable\_maze: False, grid\_size: $4\times4$, distractors: 2, grid\_input\_type: vector\}]. Note that these rewards were recorded over a set of $100$ validation episodes.}
\label{tab_results1}
\end{table}
\section{Related Work}
\paragraph{Emergent Communication:} With regard to emergent communication, most existing works so far are limited to analyzing simple referential games \cite{Lewis1969-LEWCAP-4} in simulated environments, where a speaker communicates the input (an object's shape and color) to a stationary listener, which then tries to classify the reconstructed messages from a list of classes \cite{kottur-etal-2017-natural,HavrylovEtAl:2017:EmergenceOfLanguageWithMultiAgentGamesLearningToCommunicateWithSequencesOfSymbols,CaoEtAl:2018:EmergentCommunicationThroughNegotiation,NEURIPS2019_b0cf188d}. These games do not involve world-state manipulation and generally comprise elementary inputs with limited attributes, thus restricting the scope of language usage. gComm introduces the additional challenge for the listener of navigating and manipulating objects to achieve the transmitted goal.
\paragraph{Visual Navigation:} The problem of navigating in an environment based on visual perception, by mapping the visual input to actions, has
long been studied in vision and robotics. The tasks are either specified implicitly via rewards~\cite{8100252}, or are explicitly conditioned on the goal state (goal-conditioned reinforcement learning)~\cite{zhu2017icra,10.5555/3327546.3327593,NEURIPS2019_c8cc6e90}. In contrast, gComm tasks are specified using natural language and involve unidirectional messages from a \textit{task-aware} speaker to a \textit{state-aware} listener.
\paragraph{Embodied Learning:} Recent works on embodied learning include (but are not limited to) using embodied agents to complete tasks specified by natural language in a simple mazeworld \cite{10.5555/3305890.3305956}, Embodied Question Answering~\cite{8575449}, and Embodied Language Grounding by mapping 2D scenes to 3D~\cite{Prabhudesai_2020_CVPR}. We also intend to project gComm as an embodied communication environment where the listener agent is required to ground the messages in its corresponding visual input and associate them with actions (\textit{push a red circle twice} suggests that the red circle is heavy and the listener needs to perform two consecutive ``push'' actions to move it).
\paragraph{Instruction Execution:} These approaches focus on natural language understanding to map instructions to actions \cite{branavan-etal-2009-reinforcement,10.5555/2900423.2900560,10.5555/2900423.2900661}. However, in gComm, the listener agent does not have direct access to the natural language instruction; hence, it focuses on mapping the transmitted messages from the speaker to actions. The challenge is to address the information bottleneck, i.e., given a limited channel capacity, the speaker must learn to convey the required task and target specifics to the listener based on the input instruction.
\paragraph{Visual Question Answering:} In VQA, agents are required to answer natural language questions based on a fixed view of the environment (image or video) \cite{7410636,7780870,fukui-etal-2016-multimodal,8954214}. However, unlike gComm, the agents cannot actively perceive or manipulate objects.
\section{Discussion}
\label{section:discussion}
We compared a Simple Speaker (a speaker transmitting one-hot messages) with the baselines given in Section~\ref{baselines} for \textbf{(i)} the \textbf{Walk} task, wherein the listener is required to walk to a target object; \textbf{(ii)} the \textbf{Push} $+$ \textbf{Pull} task, wherein the listener is required to push or pull a target object. The grid we used was of size $4 \times 4$ with no obstacles. Moreover, we used 5 objects (4 distractors $+$ 1 target) for (i) and 3 objects (2 distractors $+$ 1 target) for (ii). The number of messages was set to 3 (i.e., one message each for task, shape, and color).
We present our analysis based on the results from Table~\ref{tab_results1}.
\begin{itemize}[leftmargin=*,noitemsep]
\item Simple Speaker outperforms the Fixed and Random baselines.
\item Perfect Speaker performs as well as Oracle Listener.
\item Oracle Listener had the fastest convergence ($\approx \frac{1}{5}$ of the episodes taken by Simple Speaker), followed by Perfect Speaker ($\approx \frac{1}{2}$ of the episodes taken by Simple Speaker).
\item Fixed Speaker baseline converges faster than the Random Speaker baseline which suggests that the Listener learns to ignore messages if they remain fixed over time.
\end{itemize}
|
{
"timestamp": "2021-05-18T02:14:50",
"yymm": "2105",
"arxiv_id": "2105.03943",
"language": "en",
"url": "https://arxiv.org/abs/2105.03943"
}
|
\section{Introduction}
There are a number of mechanisms which can be abstractly defined and are found
to be implemented in biology. The best-known is the selectionist mechanism of
biological evolution. But that mechanism can be abstractly described and then
implemented in non-biological settings as in `evolutionary' or selectionist
programs on computers. In that mechanism, a wide variety of different options
are randomly generated and then whittled down to a `select few' using some
fitness criterion.
Our purpose in this paper is to abstractly describe another type of mechanism,
a generative mechanism, that is also found to be implemented in biology. A
generative mechanism can be abstractly described using the graph-theoretic
notion of a tree usually pictured upside down with the root at the top and
then the branches going downward to eventually terminate in the leaves.
\textbf{Definition}: A \textit{generative mechanism} implements a
\textquotedblleft code\textquotedblright\ that determines which branch of a
tree is taken as one descends from the root to a specific leaf that was
encoded in the code.
\begin{center}
\includegraphics[height=1.5851in,width=2.9769in]{figure1.eps}
\end{center}
\begin{center}
Figure 1: Generative mechanisms can be illustrated by tree diagrams.
\end{center}
For instance, in Figure 1, reading the three-letter binary code words from
left to right, the code word $011$ is implemented by taking the $0$-branch at
the first branching point and then the $1$-branch at the next two branching
points.\footnote{The use of a binary code for an illustration does not imply
that generative mechanisms are limited to binary codes, e.g., the genetic code
has a code alphabet of four letters.}
One can think of a code as a hierarchical set of switches. The number of
settings on each switch (aside from neutral) is the number of letters in the
code alphabet. Each switch determines a branching point in the tree as in
Figure 2.
\begin{center}
\includegraphics[height=2.134in,width=3.6725in]{figure2.eps}
\end{center}
\begin{center}
Figure 2: Generative mechanism specified by a hierarchy of switches.
\end{center}
An everyday example of a generative mechanism is the game of 20-questions
where a player tries to traverse an implicit tree of binary branching points
(yes-or-no questions) to determine the hidden answer at a leaf of the tree. By
implementing a code, a generative mechanism navigates down through a diverse
set of possible outcomes, represented by the leaves on the tree, to reach a
specific outcome or message.
\section{Partitions and codes}
Mathematically, a partition on a set represents one way to differentiate the
elements of the set into different blocks. The join with another partition
generates a partition with more refined (smaller) blocks that makes all the
distinctions of the partitions in the join. Starting from a single block
consisting of the set of all possibilities (like the unbranched root of the
tree), a sequence of partitions joined together differentiates all the
elements of the set ultimately into singleton blocks that are the leaves of
the tree. All the (prefix-free) codes of coding theory can be generated in
this way and then the codes are implemented in practice to generate the coded
outcomes.
A \textit{partition} $\pi=\left\{B_{1},\ldots,B_{m}\right\}$ is a set of non-empty subsets $B_{i}$, called \textit{blocks}, of a universe set $U$ that are disjoint and whose union is all of $U$---as in Figure 3.
\begin{center}
\includegraphics[height=1.641in,width=1.9595in]{figure3.eps}
\end{center}
\begin{center}
Figure 3: A partition with six blocks on a universe set $U$.
\end{center}
The general idea of a partition is that the elements in the same block are
considered indistinct from one another while the elements in different blocks
are considered as distinct. Given another partition $\sigma=\left\{C_{1},\ldots,C_{n}\right\}$, the join $\pi\vee\sigma$ of the two partitions is
the partition whose blocks are the non-empty intersections $B_{i}\cap C_{j}$
of the blocks of the two partitions.
\begin{center}
\includegraphics[height=1.5817in,width=6.3512in]{figure4.eps}
\end{center}
\begin{center}
Figure 4: Join of $\pi$ and $\sigma$ = partition of non-empty intersections
such as $B_{6}\cap C_{4}$.
\end{center}
With consecutive joins of partitions (always on the same universe set), the blocks get smaller and smaller until they reach the discrete partition $\mathbf{1}_{U}$, whose blocks are the singletons of elements of $U$, the smallest non-empty blocks. The least refined partition is the indiscrete partition $\mathbf{0}_{U}=\left\{U\right\}$, whose only block is all of $U$; it represents the root of the tree that would illustrate the consecutive joins in Table 1. The indiscrete partition is the identity for the join operation: $\mathbf{0}_{U}\vee\pi=\pi$ for any partition $\pi$.
\begin{center}
\begin{tabular}[c]{|c|c|c|}\hline
Partitions & Consecutive Joins (tree) & Codes\\\hline\hline
$\{\{u_{1},u_{2},u_{3},u_{4},u_{5}\}\}=\mathbf{0}_{U}$ & $\{\{u_{1},u_{2},u_{3},u_{4},u_{5}\}\}=\mathbf{0}_{U}$ & \\\hline
$\{\{u_{1}\},\{u_{2},u_{3},u_{4},u_{5}\}\}$ & $\{\{u_{1}\},\{u_{2},u_{3},u_{4},u_{5}\}\}$ & $0=$ (code for) $u_{1}$\\\hline
$\{\{u_{1},u_{2},u_{3}\},\{u_{4},u_{5}\}\}$ & $\{\{u_{1}\},\{u_{2},u_{3}\},\{u_{4},u_{5}\}\}$ & \\\hline
$\{\{u_{1},u_{2}\},\{u_{3},u_{4},u_{5}\}\}$ & $\{\{u_{1}\},\{u_{2}\},\{u_{3}\},\{u_{4},u_{5}\}\}$ & $100=u_{2},101=u_{3}$\\\hline
$\{\{u_{1},u_{2},u_{3},u_{4}\},\{u_{5}\}\}$ & $\{\{u_{1}\},\{u_{2}\},\{u_{3}\},\{u_{4}\},\{u_{5}\}\}=\mathbf{1}_{U}$ & $1110=u_{4},1111=u_{5}$\\\hline
\end{tabular}

Table 1: Prefix-free codes generated by consecutive partition joins.
\end{center}
A code is called \textit{prefix-free} or \textit{instantaneous} if no code
word is the beginning of another code word. All prefix-free code words can be
obtained by a sequence of consecutive partition joins in the following manner.
The number of letters in the code alphabet is the number of blocks in each
partition. All the partitions in the Partitions column of Table 1 (except the indiscrete partition representing the root) have two blocks, with the left block associated with $0$ and the right block associated with $1$. When
taking consecutive joins, once a singleton block appears in the Consecutive
Joins column, it stays a singleton since it cannot be split any further.
The code for each $u_{i}$ in $U$ is generated in the Partitions column by the
sequence of blocks in the binary partitions (ignoring $\mathbf{0}_{U}$)
containing the element $u_{i}$, with each block contributing a $0$ or $1$ to
the code word for the element until it appears in a singleton block in the
Consecutive Joins column of Table 1. The code word then stops, so that it cannot be the prefix of the code word for any other element of $U$. For
instance, consider the element $u_{2}$ which appears in the $1$-block in the
first partition $\{\{u_{1}\},\{u_{2},u_{3},u_{4},u_{5}\}\}$ and then in the
$0$-block in the next two partitions (in the Partitions column of the table).
The element $u_{2}$ first becomes a singleton in the third row join so tracing
its history through the blocks generates the code $100=u_{2}$.
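The tracing procedure just described is easy to mechanize. The following sketch (our own notation, with each partition given as a pair of blocks) reproduces the codes of Table 1:
\begin{verbatim}
def prefix_codes(universe, partitions):
    """partitions: list of (block0, block1) pairs of sets covering universe.

    Each element's code is the sequence of bits of the blocks containing
    it, stopped as soon as the running intersection of those blocks (its
    block in the consecutive join) becomes a singleton.
    """
    codes = {}
    for u in universe:
        code, cell = "", set(universe)  # cell = u's block in the join
        for block0, block1 in partitions:
            bit, half = ("0", block0) if u in block0 else ("1", block1)
            code += bit
            cell &= half
            if len(cell) == 1:          # u fully distinguished: stop
                break
        codes[u] = code
    return codes

U = ["u1", "u2", "u3", "u4", "u5"]
P = [({"u1"}, {"u2", "u3", "u4", "u5"}),
     ({"u1", "u2", "u3"}, {"u4", "u5"}),
     ({"u1", "u2"}, {"u3", "u4", "u5"}),
     ({"u1", "u2", "u3", "u4"}, {"u5"})]
# prefix_codes(U, P) -> {'u1': '0', 'u2': '100', 'u3': '101',
#                        'u4': '1110', 'u5': '1111'}
\end{verbatim}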
Corresponding to the generation of a code by consecutive partition joins as in
Table 1, one can construct a tree diagram where each branching is accomplished
by a partition join and each leaf corresponds to when an element first appears
in a singleton. The tree corresponding to Table 1 is the tree in Figure 2.
\section{The genetic code}
The most famous code is, of course, the genetic code which is prefix-free so
it can be generated by a sequence of partitions. In this case, each partition
has four blocks corresponding to the four code letters U, C, A, and G in the
code alphabet. For the partitions in Figure 5, which correspond to the
(non-indiscrete) partitions in the Partitions column in Table 1, the
consecutive joins give all 64 singletons after three branchings or joins so
the amino acids have 3-letter code words. The code is redundant since there
can be several code words for the same amino acid.
The circles in Figure 5 trace out the code for Thr4 (one of the code words for
Thr, Threonine) which is ACG = Thr4. Note that the order of the partitions
matters in the consecutive-joins determination of the genetic code. A different ordering gives a different code, which may not describe the operation of the DNA-RNA machinery producing a certain amino acid from a given code word.
\begin{center}
\includegraphics[height=2.2475in,width=3.1362in]{figure5.eps}
\end{center}
\begin{center}
Figure 5: The three partitions that generate the genetic code.
\end{center}
In terms of a tree diagram, the tree would branch four ways at each branching
point and there are three levels, so there are $4^{3}=64$ leaves in the tree.
The generative mechanism associated with the genetic code is the whole DNA-RNA
machinery that generates the amino acid as the output from the code word as
the input. If we abstractly represent the DNA-RNA machinery as that tree with
$64$ leaves, then the given code word tells the machinery how to traverse the
tree to arrive at the desired leaf.
\section{The Principles \& Parameters mechanism for language acquisition}
Noam Chomsky's Principles \& Parameters (P\&P) mechanism
(\cite{chomp-lasnik:pandp}; \cite{chomsky:minimalist}) for language learning
can be modeled as a generative mechanism. Again, we can consider a tree
diagram where each branching point has a two-way switch to determine one
grammatical rule or another in the language being acquired.
\begin{quotation}
\noindent A simple image may help to convey how such a theory might work.
Imagine that a grammar is selected (apart from the meanings of individual
words) by setting a small number of switches - 20, say - either "On" or "Off."
Linguistic information available to the child determines how these switches
are to be set. In that case, a huge number of different grammars (here, 2 to
the twentieth power) will be prelinguistically available, although a small
amount of experience may suffice to fix one. \cite[p. 154]{higginbotham}
\end{quotation}
And the reference to \textquotedblleft20\textquotedblright\ recalls the game
of \textquotedblleft20 questions\textquotedblright\ where the answers to the
questions guide one closer and closer to the desired hidden answer. Chomsky
uses the Higginbotham model to describe a Universal Grammar (UG) as a
generative mechanism.
\begin{quotation}
\noindent Many of these principles are associated with parameters that must be
fixed by experience. The parameters must have the property that they can be
fixed by quite simple evidence, because this is what is available to the
child; the value of the head parameter, for example, can be determined from
such sentences as John saw Bill (versus John Bill saw). Once the values of the
parameters are set, the whole system is operative. Borrowing an image
suggested by James Higginbotham, we may think of UG as an intricately
structured system, but one that is only partially \textquotedblleft wired up.\textquotedblright\ The system is associated with a finite set of switches,
each of which has a finite number of positions (perhaps two). Experience is
required to set the switches. When they are set, the system functions. The
transition from the initial state S0 to the steady state Ss is a matter of
setting the switches. \cite[p. 146]{chomsky:knowoflang}
\end{quotation}
In the tree modeling of the P\&P approach, the relative poverty of linguistic
experience that sets the switches plays the role of the code that guides the
mechanism from the undifferentiated root state (all switches at neutral) to
the final specific grammar represented as a leaf.
The question about the acquisition of a grammar is a good topic to compare and
contrast a selectionist mechanism with a generative mechanism. What would a
selectionist approach to learning a grammar look like? A child would (perhaps
randomly) generate a diverse range of babblings, some of which would be
differentially reinforced by the linguistic environment (e.g.,
\cite{skinner:behave}).
\begin{quotation}
\noindent Skinner, for example, was very explicit about it. He pointed out,
and he was right, that the logic of radical behaviorism was about the same as
the logic of a pure form of selectionism that no serious biologist could pay
attention to, but which is [a form of] popular biology -- selection takes any
path. And parts of it get put in behaviorist terms: the right paths get
reinforced and extended, and so on. It's like a sixth grade version of the
theory of evolution. It can't possibly be right. But he was correct in
pointing out that the logic of behaviorism is like that [of na\"{\i}ve
adaptationism], as did Quine. \cite[Section 10]{chomp-mcgil:interview}
\end{quotation}
As noted, Willard Quine adopts essentially the behaviorist/selectionist account.
\begin{quotation}
\noindent An oddity of our garrulous species is the babbling period of late
infancy. This random vocal behavior affords parents continual opportunities
for reinforcing such chance utterances as they see fit; and so the rudiments
of speech are handed down. \cite[p. 73]{quine:wando}
It remains clear in any event that the child's early learning of a verbal
response depends on society's reinforcement of the response in association
with the stimulations that merit the response, from society's point of view,
and society's discouragement of it otherwise. \cite[p. 75]{quine:wando}
\end{quotation}
A more sophisticated version of a selectionist model for the
language-acquisition faculty or universal grammar (UG) could be called the
format-selection (FS) approach (Chomsky, private communication). The diverse
variants that are actualized in the mental mechanism are different sets of
rules or grammars. Then given some linguistic input from the linguistic
environment, the grammars are evaluated according to some evaluation metric,
and the best rules are selected.
\begin{quotation}
\noindent Universal grammar, in turn, contains a rule system that generates a
set (or a search space) of grammars, \{G1, G2,\ldots, Gn\}. These grammars can
be constructed by the language learner as potential candidates for the grammar
that needs to be learned. The learner cannot end up with a grammar that is not
part of this search space. In this sense, UG contains the possibility to learn
all human languages (and many more). ... The learner has a mechanism to
evaluate input sentences and to choose one of the candidate grammars that are
contained in his search space. \cite[p. 292]{nowak-komarove}
\end{quotation}
After a sufficient stream of linguistic inputs, the mechanism should converge
to the best grammar that matches the linguistic environment. Since it is
optimizing over sets of rules, this model at least takes seriously the need to
account for the choice of rules (rather than just assuming the child can infer
the rules from raw linguistic data). Early work (through the 1970s) on
accounting for the language-acquisition faculty or universal grammar (UG)
seems to have assumed such an approach. The problems that eventually arose
with the FS approach could be seen as the conflict between descriptive and
explanatory adequacy.
\begin{quotation}
\noindent It was an intuitively obvious way to conceive of acquisition at the
time for---among other things---it did appear to yield answers and was at
least more computationally tractable than what was offered in structural
linguistics, where the alternatives found in structural linguistics could not
even explain how that child managed to get anything like a morpheme out of
data. But the space of choices remained far too large; the approach was
theoretically implementable, but completely unfeasible. \cite[Appendix
IX]{chomp-mcgil:interview}
\end{quotation}
In order to describe the enormous range of human language grammars, the space of candidate grammars would have to be so large that evaluating it against linguistic experience becomes computationally unfeasible. If the range were restricted to make computation more feasible, then it would not explain the variety of human languages. Hence the claim is that the P\&P generative mechanism gives a more plausible account of human language acquisition than a behavioral/selectionist approach.\footnote{For more of the mathematical background, see \cite{ell:4ways} and the references therein.}
\section{Embryonic development}
The role of stem cells in the development of an embryo into a full organism
can again be modeled as a generative mechanism. The original fertilized egg
becomes the stem cell that is the root of the tree. As illustrated in Figure
6, stem cells come in three general varieties: A) the stem cells that can
reproduce undifferentiated copies of themselves, B) the stem cells that can
reproduce but can also produce a somewhat differentiated cell, and C) a
specialized differentiated cell. Each branching point in a tree has a certain
number of possible leaves or terminal types of cells beneath it in the tree.
In a division (\#1) of an A-type cell, each of the resulting A-type cells could
have a full set of leaves beneath it. But when it splits (\#2) into another
A-type cell and a B-type cell, then the B-cell has a restricted number of
leaves beneath it. The B-type cells can split (\#3) in two, and finally when a
B-type cell gives rise (\#4) to a specific C-type of cell, that is a terminal
branch, i.e., a leaf, in the tree.
\begin{center}
\includegraphics[height=3.0388in,width=1.7443in]{figure6.eps}
\end{center}
\begin{center}
[Attribution: Peter Znamenskiy, CC BY-SA 3.0, http://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons]

Figure 6: Stem cell division and differentiation. A: stem cell; B: progenitor cell; C: differentiated cell; 1: symmetric stem cell division; 2: asymmetric stem cell division; 3: progenitor division; 4: terminal differentiation.
\end{center}
The codes that inform the progress through the tree are not fully understood,
but apparently the positional epigenetic information in the developing embryo
provides the information about the next development steps. In general terms,
\begin{quotation}
\noindent\lbrack t]hat model harks back to the \textquotedblleft developmental
landscape\textquotedblright\ proposed by Conrad Waddington in 1956. He likened
the process of a cell homing in on its fate to a ball rolling down a series of
ever-steepening valleys and forked paths. Cells had to acquire more and more
information to refine their positional knowledge over time --- as if zeroing
in on where and what they were through \textquotedblleft the 20 questions
game,\textquotedblright\ according to Jan\'{e} Kondev, a physicist at Brandeis
University. \cite{cepel:mathcells}
\end{quotation}
Again, the reference to the game of 20-questions reveals the common generative
mechanism of traversing a tree from the root to a specific leaf.
\section{Selectionist and Generative Mechanisms}
There is a long tradition in biological thought of juxtaposing selectionism,
associated with Darwin, with instructionism, associated with Lamarck
(\cite{medawar:reith}; \cite{jerna:antibodies}). In an instructionist or
Lamarckian mechanism, the environment would transmit detailed instructions
about a certain adaptation to an organism, while in a selectionist mechanism,
a diverse variety of (perhaps random) variations would be generated, and then
some adaptations would be selected by the environment but without detailed
instructions from the environment. The discovery that the immune system was a
selectionist mechanism \cite{jerne:nat-sel} generated a wave of enthusiasm, a
\textquotedblleft Second Darwinian Revolution\textquotedblright\ \cite{cziko:womiracles}, for selectionist theories \cite{dennett:darwin}.
In his Nobel Lecture \cite{jerne:nobel}, Niels Jerne even tried to draw
parallels between Chomsky's generative grammar and selectionism. One of the
distinctive features of a selectionist mechanism is that the possibilities
must be in some sense actualized or realized in order for selection to operate
on and differentially amplify or select some of the actual variants while the
others languish, atrophy, or die off. In the case of the human immune system,
"It is estimated that even in the absence of antigen stimulation a human makes
at least $10^{15}$ different antibody molecules---its preimmune antibody
repertoire." \cite[p. 1221]{alberts-et-al:mbio}. In Chomsky's critique of a
selectionist theory of universal grammar, he noted the computational
infeasibility of having representations of all possible human grammars in
order for linguistic experience and an evaluation criterion to perform a
selective function on them. The analysis of Chomsky's P\&P theory as a
generative mechanism instead suggests that the old juxtaposition of
\textquotedblleft selectionism versus instructionism\textquotedblright\ is not
the most useful framing for the study of biological mechanisms.
The discovery of the genetic code and DNA-RNA machinery for the production of
amino acids powerfully showed the existence of another biological mechanism, a
generative mechanism, that is quite distinct from a selectionist mechanism.
The examples of Chomsky's P\&P theory of grammar acquisition and the role of
stem cells in embryonic development provide more evidence of the importance of
generative mechanisms.
To better illustrate these two main candidates for biological mechanisms, it might be useful to show a selectionist and a generative mechanism solving the same problem of determining one among the $8=2^{3}$ options considered in Figure 1. The eight possible outcomes might be represented as: $|000\rangle,|100\rangle,|010\rangle,|110\rangle,|001\rangle,|101\rangle,|011\rangle,|111\rangle$.
In the selectionist scheme, all eight variants are in some sense actualized or
realized in the initial state $S_{0}$ so that a fitness criterion or
evaluation metric (as in the FS scheme) can operate on them. Some variants do
better and some worse as indicated by the type size in Figure 7.
\begin{center}
\includegraphics[height=2.0747in,width=3.9597in]{figure7.eps}
\end{center}
\begin{center}
Figure 7: A selectionist determination of the outcome $|010\rangle$.
\end{center}
\noindent The "unfit" options dwindle, atrophy, or die off leaving the most
fit option $|010\rangle$ as the final outcome.
With the generative mechanism, the initial state $S_{0}$ (the root of the
tree) is where all the switches are in neutral, so all the eight potential
outcomes are in a "superposition" (between left and right) state indicated by
the plus signs in the following Figure 8.
\begin{center}
\includegraphics[height=1.7486in,width=3.9597in]{figure8.eps}
\end{center}
\begin{center}
Figure 8: A generative determination of the outcome $|010\rangle$.
\end{center}
The initial experience or first letter in the code sets the first switch to
the $0$ option which reduces the state to $|000\rangle+|001\rangle
+|010\rangle+|011\rangle$ (where the plus signs in the superposition of these
options indicate that the second and third switches are still in neutral).
Then subsequent experience sets the second switch to the $1$ option and the
third switch to the $0$ option. Thus, we reach the same outcome $|010\rangle$
as the final outcome in the two models but by quite different mechanisms. Note
that the generative mechanism `selects' a specific outcome but that does not
make it a `selectionist' mechanism.
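The switch-setting process can be rendered as a toy computation (ours, for illustration only): each code letter filters the remaining candidates by one position, halving the set at every step.
\begin{verbatim}
options = [f"{i:03b}" for i in range(8)]   # '000' ... '111'

# Generative narrowing: each letter of the code '010' sets one switch.
candidates = options
for position, letter in enumerate("010"):
    candidates = [o for o in candidates if o[position] == letter]
# candidates == ['010']
\end{verbatim}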
There is another way to contrast a selectionist and a generative mechanism. In
logic \cite{ell:intropartitions}, there are two principal lattices, the
lattice of subsets of a universe U and the lattice of partitions on $U$. For
$U=\{a,b,c\}$, the two lattices are pictured in Figure 9.
\begin{center}
\includegraphics[height=1.8189in,width=3.9597in]{figure9.eps}
\end{center}
\begin{center}
Figure 9: The two basic logical lattices.
\end{center}
A selectionist mechanism to determine $a$, $b$, or $c$ from $U$ would start with
all the actual elements (top of the subset lattice) and then use a fitness or
evaluation criterion to narrow the set of actualities down to the selected
one.
A generative mechanism starts with the indiscrete partition $\mathbf{0}_{U}=\{\{a,b,c\}\}=\{U\}$ (at the bottom of the partition lattice) where all
the elements are only potential outcomes not distinguished from each other.
Then distinctions are made, as represented by consecutive partition joins,
until an element is fully distinguished (i.e., appears as a singleton). A
coding scheme to determine each element is given in Table 2.
\begin{center}
\begin{tabular}[c]{|c|c|c|}\hline
Partitions & Consecutive Joins & Codes\\\hline\hline
$\{\{a,b,c\}\}=\mathbf{0}_{U}$ & $\{\{a,b,c\}\}=\mathbf{0}_{U}$ & \\\hline
$\{\{a\},\{b,c\}\}$ & $\{\{a\},\{b,c\}\}$ & $0=a$\\\hline
$\{\{a,b\},\{c\}\}$ & $\{\{a\},\{b\},\{c\}\}=\mathbf{1}_{U}$ & $10=b,11=c$\\\hline
\end{tabular}

Table 2: Coding scheme for $a$, $b$, or $c$.
\end{center}
The tree diagram for the Table 2 code is given in Figure 10.
\begin{center}
\includegraphics[height=1.8782in,width=2.5974in]{figure10.eps}
\end{center}
\begin{center}
Figure 10: Tree diagram for the code scheme of Table 2.
\end{center}
\section{Concluding remarks: The connection to information theory}
Finally, it should be mentioned that there is an intimate connection between
the tree diagrams representing generative mechanisms and information theory
(\cite{shannonweaver:comm}; \cite{ell:lit-igpl}). One can imagine a marble
rolling down from the root to one of the leaves where its path was
probabilistically determined at each branching point by a set of
probabilities. The simplest assumption on a binary tree is a half-half
probability of the marble taking each branch. For each leaf, there is a unique path from the root to that leaf, and the product of the probabilities
along that path gives the probability of reaching that leaf. With those
assumptions for the tree of Figure 1, each leaf has probability $1/2^{3}=1/8$.
Then the Shannon entropy of that probability distribution $p=(p_{1},\ldots,p_{m})$ is:
\begin{center}
$H(p)=\sum_{i}p_{i}\log_{2}(1/p_{i})=8\times1/8\times\log_{2}(1/(1/8))=\log_{2}(2^{3})=3$
\end{center}
\noindent and the logical entropy is:
\begin{center}
$h(p)=1-\sum_{i}p_{i}^{2}=1-8\times(1/8)^{2}=1-1/8=7/8.$
\end{center}
In this simple example, the Shannon entropy is the average number of binary
partitions (bits) needed to distinguish all the leaves on the tree
("messages"). The logical entropy always has the interpretation that on two
independent trials, there will be different outcomes. In this case, that is
the probability that on two independent rolls of a marble, it will end up at
different leaves. Since all the leaves are equiprobable, it is simply the
probability that the second marble took a different path (than on the first
trial), i.e., $1-1/8=7/8$.
With the same half-half branching probabilities for Figure 10, the leaf
probabilities are $\Pr(a)=1/2$ and $\Pr(b)=\Pr(c)=1/2\times1/2=1/4$. Then the
Shannon entropy is:
\begin{center}
$H(p)=1/2\times\log_{2}(1/(1/2))+2\times1/4\times\log_{2}(1/(1/4))=1/2+(1/2)\times2=3/2$
\end{center}
\noindent which is, in this simple case, the average length of the code words
for the leaves. The logical entropy is:
\begin{center}
$h(p)=1-(1/2)^{2}-2\times(1/4)^{2}=1-1/4-1/8=5/8$
\end{center}
\noindent which is always the probability that on two rolls, the marble will
end up at different leaves.
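Both entropies are straightforward to check numerically; the following sketch reproduces the values computed above for the trees of Figures 1 and 10.
\begin{verbatim}
import math

def shannon_entropy(p):
    return sum(pi * math.log2(1 / pi) for pi in p)

def logical_entropy(p):
    return 1 - sum(pi ** 2 for pi in p)

# Figure 1 tree (eight equiprobable leaves):
#   shannon_entropy([1/8] * 8) == 3.0
#   logical_entropy([1/8] * 8) == 0.875  (= 7/8)
# Figure 10 tree:
#   shannon_entropy([1/2, 1/4, 1/4]) == 1.5
#   logical_entropy([1/2, 1/4, 1/4]) == 0.625  (= 5/8)
\end{verbatim}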
Tracing down a code tree from the root to the desired message generates the
code for that message on the sending side of a communications channel, and
then implementing the received code on the receiving side of the channel will
generate the received message. In the case of the biological examples (genetic
code, generative grammar, and embryonic development), the creation of the
codes is a matter of deep evolutionary history, so the focus of study is
usually on how those generative mechanisms implement codes to give specific outcomes.
|
{
"timestamp": "2021-05-11T02:19:31",
"yymm": "2105",
"arxiv_id": "2105.03907",
"language": "en",
"url": "https://arxiv.org/abs/2105.03907"
}
|
\section*{Data}
We share the \ac{USS} dataset at \url{https://github.com/sunnweiwei/user-satisfaction-simulation}.
\input{sections/acknowledgement}
\newpage
\bibliographystyle{ACM-Reference-Format}
\section{Conclusion}
We have proposed the task of simulating user satisfaction for evaluating task-oriented dialogue systems, so as to enhance their evaluation.
We have collected and released a new benchmark dataset, namely \ac{USS}, for the proposed task.
Our dataset contains a total of 6,800 dialogues spanning multiple domains. We have introduced three baselines for our task: feature-based, RNN-based, and BERT-based methods.
Experiments conducted on the newly collected dataset suggest that distributed representations do outperform feature-based methods. Besides, HiGRU achieves the best performance in in-domain user satisfaction prediction, while a BERT-based method has better cross-domain generalization ability.
As to our future work, we would like to continue to investigate the combination of the user satisfaction prediction and action prediction task, and response generation based on user satisfaction.
\section{Constructing A Test Collection}
We propose a user satisfaction annotation dataset, \acfi{USS}.
Below, we detail the creation of the dataset in three phases: data preparation, user satisfaction assessment, and measures and disclaimers.
\subsection{Data preparation}
The \ac{USS} dataset is based on five benchmark task-oriented dialogue datasets: JDDC \cite{Chen2020TheJC}, Schema Guided Dialogue (SGD) \cite{Rastogi2020TowardsSM}, MultiWOZ 2.1 \cite{Eric2020MultiWOZ2A}, Recommendation Dialogues (ReDial) \cite{Li2018TowardsDC}, and
Coached Conversational Preference Elicitation (CCPE) \cite{Radlinski2019CoachedCP}.
We first detect the user's emotions in each conversation with a classifier trained on annotated Reddit data (Weibo data for Chinese), and then filter out all conversations that do not show negative emotions (i.e., anger, disgust, fear, sadness).
\begin{enumerate*}[label=(\arabic*)]
\item JDDC is a large-scale, real-world Chinese e-commerce conversation corpus with over 1 million multi-turn dialogues. We first classify the conversations into 11 types according to the type of transaction, e.g., delivery, return, invoice, etc. Then, we sample 300 dialogue sessions from each type, for a total of 3,300 conversations. The JDDC dataset provides the action of each user utterance, with 234 categories; we compress these into 12 categories based on a manually defined classification scheme.
\item SGD is a dataset consisting of over 20K annotated task-oriented conversations between a human and a virtual assistant spanning 16 domains. MultiWOZ 2.1 is a multi-domain dialogue dataset spanning 7 distinct domains and containing over 10K dialogues. We sample 1,000 conversations from each of these two datasets and directly use the action annotations from the original datasets. SGD has 12 actions, and MultiWOZ has 21 actions.
\item ReDial is an annotated dataset consisting of over 10K conversations, where users recommend movies to each other. We sample 1,000 dialogues. Since the original dataset does not provide actions, we use the action annotation provided by IARD \cite{Cai2020PredictingUI}.
\item CCPE is a dataset consisting of 502 dialogues with 12K annotated utterances between a user and an assistant discussing movie preferences. We sample 300 dialogues from the CCPE dataset and use the actions provided by the original dataset.
\end{enumerate*}
\subsection{User satisfaction assessment}
We hired 40 annotators to label exchange-level and dialogue-level user satisfaction for each conversation on a five-level scale (1--5).
We first show a dialogue between user and system in text form to the annotators and ask the annotators to label the user satisfaction of each user sentence at the \emph{exchange-level}.
We require annotators to rate user satisfaction based on the preceding conversation, so satisfaction is assessed before the user's sentence is written, not after.
In this regard, we differ from previous annotation work \cite{Walker1997PARADISEAF,Schmitt2015InteractionQA,Bodigutla2019MultidomainCQ}.
The scale we asked annotators to follow was:
\begin{enumerate*}[label=(\arabic*)]
\item Very dissatisfied (the system fails to understand and fulfill user’s request);
\item Dissatisfied (the system understands the request but fails to satisfy it in any way);
\item Normal (the system understands the user's request and either partially satisfies the request or provides information on how the request can be fulfilled);
\item Satisfied (the system understands and satisfies the user request, but provides more information than what the user requested or takes extra turns before meeting the request); and
\item Very satisfied (the system understands and satisfies the user request completely and efficiently).
\end{enumerate*}
Using a five-point scale rather than a binary scale gives annotators room to factor in their subjective interpretation of the extent to which a system's response succeeds or fails to satisfy a user's request.
In addition, we ask the annotators to rate the \emph{dialogue-level} satisfaction to capture the overall satisfaction of a user’s interaction with the system.
We divide the data into two groups based on language, JDDC (Chinese) and Others (English). In each group, we randomly assign data to annotators to ensure that the different types of conversations in the group are evaluated according to a consistent standard. For the JDDC group, we also ask annotators to give a textual explanation for the rating.
\subsection{Measures and disclaimers}
To guarantee annotation quality, each item is labeled by at least three annotators. If there is a discrepancy among the three annotators (i.e., they give three different ratings), we ask a fourth annotator to recheck it. We removed the results of annotators who were inconsistent with the others. Finally, the ratings are highly correlated, with a Fleiss Kappa score of 0.574. See Table~\ref{table:statistic} for descriptive statistics of the \ac{USS} dataset.
In all the provided instruction materials, we described the purpose of this data construction effort and pointed out that the data will only be used for research.
We did not record any information about the annotators and warned the annotators not to divulge any of their private information.
\begin{table}[t]
\small
\centering
\setlength\tabcolsep{2pt}
\caption{Statistics of the \ac{USS} dataset.}
\label{table:statistic}
\begin{tabular}{l rrrrr}
\toprule
\textbf{Domain} & \textbf{JDDC} & \textbf{SGD} & \textbf{MultiWOZ} & \textbf{ReDial} & \textbf{CCPE} \\
\hline
Language & Chinese & English & English & English & English \\
\#Dialogues & 3,300 & 1,000 & 1,000 & 1,000 & 500 \\
Avg\# Turns & 32.3 & 26.7 & 23.1 & 22.5 & 24.9 \\
\hline
\#Utterances & 54,517 & 13,833 & 12,553 & 11,806 & 6,860 \\
~~Rating 1 & 120 & 5 & 12 & 20 & 10 \\
~~Rating 2 & 4,820 & 769 & 725 & 720 & 1,472\\
~~Rating 3 & 45,005 & 11,515 & 11,141 & 9,623 & 5,315\\
~~Rating 4 & 4,151 & 1,494 & 669 & 1,490 & 59\\
~~Rating 5 & 421 & 50 & 6 & 34 & 4\\
\bottomrule
\end{tabular}
\end{table}
\section{Utilization of this Resource}
We have developed resources that are meant to help answer the question of what is a good dialogue.
Our annotations and prediction task offer a better characterization of what is a good dialogue than existing datasets.
Exchange-level user satisfaction and action prediction can reflect what kind of system behavior will bring positive user satisfaction and what behavior will harm the user experience, which makes our method applicable to many related fields.
\subsection{Building human-like user simulation}
In most prior work, user simulations mechanically give the slots, and thus measure very limited aspects of a dialogue.
Building a human-like user simulation remains an open challenge. In this study, we propose the task of user satisfaction simulation and release a dataset for the task. Inspired by previous work on similar tasks \citep{Jiao2019HiGRUHG,Yang2016HierarchicalAN,Barahona2021IsTU}, we provide a series of baselines.
However, due to the challenging nature of the task, there is plenty of room to improve user satisfaction prediction, and to explore how user satisfaction prediction can be combined with action prediction.
Response generation based on user satisfaction (i.e., reflect user satisfaction in a generated utterance) is still an open problem. Previous work on open-domain dialogue may serve as a reference \citep{Zhou2018EmotionalCM}.
In addition to user satisfaction, how to ground a user simulator by introducing external knowledge~\cite{sun2021conversations,Meng2020DukeNetAD,xu2020conversational,ma2020compare} and persona~\cite{Li2016APN} to establish a more human-like user simulator has not yet been studied.
\subsection{Future applications}
The \ac{USS} dataset can be used not only for user simulation but also for other conversational information access tasks. As a user satisfaction annotation dataset that exceeds existing ones in scale, our data can facilitate research on user satisfaction modeling~\cite{Pragst2016RecurrentNN} and POMDP-based dialogue systems~\cite{Lemon2012DataDrivenMF,Young2013POMDP}.
Moreover, the \ac{USS} dataset can also facilitate research into dialogue breakdown detection, and human-machine hand-off prediction~\cite{Liu2020TimeTT}.
In the JDDC domain, we provide annotators' explanations on user satisfaction annotations, which includes a total of 9,900 explanation texts.
This information can be applied to user studies of user satisfaction, and interpretability studies of evaluations.
\section{Evaluation}
\begin{table*}[!t]
\small
\centering
\setlength\tabcolsep{2pt}
\caption{Performance for user satisfaction prediction. Bold face indicates the best result in terms of the corresponding metric. Underline indicates comparable results to the best one.}
\label{table:satisfaction}
\begin{tabular}{l cccc cccc cccc cccc cccc}
\toprule
\multirow{2}{*}{\textbf{Domain}}
& \multicolumn{4}{c}{\textbf{JDDC}}
& \multicolumn{4}{c}{\textbf{SGD}}
& \multicolumn{4}{c}{\textbf{MultiWOZ}}
& \multicolumn{4}{c}{\textbf{ReDial}}
& \multicolumn{4}{c}{\textbf{CCPE}}
\\
\cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-13} \cmidrule(lr){14-17} \cmidrule(lr){18-21}
& UAR & Kappa & Rho & F1
& UAR & Kappa & Rho & F1
& UAR & Kappa & Rho & F1
& UAR & Kappa & Rho & F1
& UAR & Kappa & Rho & F1
\\
\midrule
LR
& 0.221 & 0.054 & 0.400 & 0.011
& 0.211 & 0.049 & 0.251 & 0.005
& 0.214 & 0.042 & 0.599 & 0.009
& 0.211 & 0.040 & 0.240 & 0.008
& 0.214 & 0.060 & 0.669 & 0.025
\\
SVM
& 0.235 & 0.061 & 0.347 & 0.026
& 0.230 & 0.074 & 0.169 & 0.020
& 0.215 & 0.030 & 0.425 & 0.021
& 0.209 & 0.038 & 0.205 & 0.015
& 0.212 & 0.027 & 0.534 & 0.040
\\
XGBoost
& 0.205 & 0.007 & \textbf{0.584} & 0.003
& 0.202 & 0.011 & 0.442 & 0.001
& 0.200 & 0.002 & 0.690 & 0.001
& 0.207 & 0.030 & 0.391 & 0.002
& 0.200 & 0.001 & 0.707 & 0.004
\\
\midrule
HiGRU+ATTN
& 0.330 & 0.115 & 0.502 & \underline{0.180}
& 0.262 & 0.082 & \underline{0.475} & 0.058
& 0.224 & \underline{0.142} & 0.842 & 0.197
& \textbf{0.261} & 0.097 & \textbf{0.441} & 0.118
& 0.223 & 0.109 & 0.869 & 0.214
\\
HiGRU
& \textbf{0.339} & 0.126 & 0.524 & 0.171
& \textbf{0.293} & \textbf{0.118} & 0.451 & \textbf{0.086}
& 0.225 & \textbf{0.143} & \textbf{0.886} & \textbf{0.238}
& \underline{0.257} & 0.084 & 0.324 & 0.083
& \textbf{0.237} & \textbf{0.167} & 0.881 & \textbf{0.274}
\\
GRU
& 0.302 & 0.092 & 0.497 & 0.132
& 0.245 & 0.072 & 0.248 & 0.027
& 0.231 & 0.105 & 0.813 & 0.167
& 0.254 & 0.104 & 0.421 & 0.121
& 0.226 & 0.124 & 0.880 & 0.207
\\
\midrule
BERT
& 0.329 & \textbf{0.131} & 0.554 & \textbf{0.185}
& 0.261 & 0.094 & \textbf{0.477} & 0.048
& \textbf{0.256} & 0.133 & 0.823 & 0.224
& \underline{0.257} & \textbf{0.122} & 0.390 & \textbf{0.125}
& \underline{0.232} & 0.147 & \textbf{0.891} & 0.245
\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[!t]
\small
\centering
\setlength\tabcolsep{2pt}
\caption{Performance for user action prediction. Bold face indicates the best result in terms of the corresponding metric. Underline indicates comparable results to the best one.}
\label{table:action}
\begin{tabular}{l cccc cccc cccc cccc cccc}
\toprule
\multirow{2}{*}{\textbf{Domain}}
& \multicolumn{4}{c}{\textbf{JDDC}}
& \multicolumn{4}{c}{\textbf{SGD}}
& \multicolumn{4}{c}{\textbf{MultiWOZ}}
& \multicolumn{4}{c}{\textbf{ReDial}}
& \multicolumn{4}{c}{\textbf{CCPE}}
\\
\cmidrule(lr){2-5} \cmidrule(lr){6-9} \cmidrule(lr){10-13} \cmidrule(lr){14-17} \cmidrule(lr){18-21}
& Acc & Prec & Recall & F1
& Acc & Prec & Recall & F1
& Acc & Prec & Recall & F1
& Acc & Prec & Recall & F1
& Acc & Prec & Recall & F1
\\
\midrule
LR
& 0.565 & 0.208 & 0.123 & 0.133
& 0.460 & 0.321 & 0.308 & 0.309
& 0.414 & 0.150 & 0.130 & 0.134
& 0.495 & 0.467 & 0.472 & 0.464
& 0.509 & 0.325 & 0.314 & 0.316
\\
SVM
& 0.493 & 0.214 & 0.139 & 0.147
& 0.451 & 0.344 & 0.351 & 0.345
& 0.374 & 0.141 & 0.138 & 0.135
& 0.459 & 0.423 & 0.444 & 0.427
& 0.462 & 0.327 & 0.327 & 0.322
\\
XGBoost
& 0.621 & 0.270 & 0.138 & 0.165
& 0.516 & 0.395 & 0.370 & 0.370
& 0.479 & 0.226 & 0.126 & 0.139
& 0.593 & 0.540 & 0.509 & 0.506
& 0.553 & 0.380 & 0.349 & 0.356
\\
\midrule
HiGRU+ATTN
& \textbf{0.623} & 0.363 & 0.176 & 0.194
& 0.617 & 0.498 & 0.481 & 0.481
& 0.487 & 0.221 & 0.152 & 0.155
& 0.590 & 0.548 & 0.512 & 0.488
& 0.611 & 0.421 & 0.408 & 0.411
\\
HiGRU
& 0.618 & 0.370 & \underline{0.196} & \textbf{0.229}
& 0.643 & 0.534 & 0.505 & 0.507
& \underline{0.518} & 0.216 & 0.162 & 0.167
& \textbf{0.622} & \textbf{0.584} & \textbf{0.532} & \textbf{0.534}
& \underline{0.672} & 0.503 & 0.472 & 0.482
\\
GRU
& 0.598 & 0.337 & 0.166 & 0.187
& 0.444 & 0.322 & 0.304 & 0.298
& 0.460 & 0.211 & 0.124 & 0.129
& 0.599 & 0.536 & 0.494 & 0.457
& 0.545 & 0.550 & 0.354 & 0.354
\\
\midrule
BERT
& 0.614 & \textbf{0.391} & \textbf{0.199} & \underline{0.224}
& \textbf{0.661} & \textbf{0.570} & \textbf{0.572} & \textbf{0.560}
& \textbf{0.519} & \textbf{0.255} & \textbf{0.183} & \textbf{0.191}
& 0.614 & 0.573 & \underline{0.531} & \underline{0.530}
& \textbf{0.674} & \textbf{0.696} & \textbf{0.495} & \textbf{0.496}
\\
\bottomrule
\end{tabular}
\end{table*}
\begin{table}[]
\small
\centering
\setlength\tabcolsep{2pt}
\caption{Cross-domain performance for user satisfaction prediction, reported in UAR.}
\label{table:cross-domain}
\begin{tabular}{ll cccc}
\toprule
\textbf{From} & \textbf{To}
& \textbf{SGD} & \textbf{MWOZ} & \textbf{ReDial} & \textbf{CCPE}
\\
\midrule
\multirow{3}{*}{\textbf{SGD}}
& SVM
& \textcolor{gray}{0.230} & 0.209 & 0.211 & 0.198
\\
& HiGRU
& \textcolor{gray}{0.293} & 0.240 & 0.230 & 0.212
\\
& BERT
& \textcolor{gray}{0.261} & \textbf{0.249} & \textbf{0.254} & 0.223
\\
\midrule
\multirow{3}{*}{\textbf{MWOZ}}
& SVM
& 0.208 & \textcolor{gray}{0.215} & 0.206 & 0.208
\\
& HiGRU
& 0.224 & \textcolor{gray}{0.225} & 0.221 & 0.219
\\
& BERT
& \textbf{0.233} & \textcolor{gray}{0.256} & 0.219 & 0.226
\\
\midrule
\multirow{3}{*}{\textbf{ReDial}}
& SVM
& 0.216 & 0.227 & \textcolor{gray}{0.221} & 0.199
\\
& HiGRU
& 0.211 & 0.221 & \textcolor{gray}{0.261} & 0.220
\\
& BERT
& 0.228 & 0.218 & \textcolor{gray}{0.257} & \textbf{0.239}
\\
\midrule
\multirow{3}{*}{\textbf{CCPE}}
& SVM
& 0.217 & 0.208 & 0.218 & \textcolor{gray}{0.214}
\\
& HiGRU
& 0.211 & 0.223 & 0.227 & \textcolor{gray}{0.237}
\\
& BERT
& 0.216 & 0.213 & 0.219 & \textcolor{gray}{0.232}
\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Evaluation metrics}
For the user satisfaction prediction task, following \citep{Schmitt2015InteractionQA}, we use the \emph{Unweighted Average Recall} (UAR), the arithmetic average of all class-wise recalls, a linearly weighted version of \emph{Cohen’s Kappa}, and \emph{Spearman’s Rho} as evaluation metrics.
We also use the \emph{F1-score} for the \emph{dissatisfactory} (rating $<3$) class as the binary classification metric, as most turns and dialogues belong to the \emph{satisfactory} (rating $\geq 3$) class.
For the user action prediction task, we use
\emph{Accuracy} (Acc, the proportion of predicted correct labels over the total number of predicted and actual labels for every utterance),
\emph{Precision} (Prec, the proportion of the predicted correct labels over the number of predicted labels),
\emph{Recall} (the proportion of the predicted correct labels over the number of actual labels), and
the \emph{F1-score} (the harmonic mean of precision and recall) as evaluation measures.
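The metrics for satisfaction prediction can all be computed with standard
scikit-learn/SciPy routines; the sketch below uses made-up ratings purely for
illustration, not data from \ac{USS}.
\begin{verbatim}
from sklearn.metrics import recall_score, cohen_kappa_score, f1_score
from scipy.stats import spearmanr

y_true = [3, 3, 2, 4, 3, 1, 3, 5]   # illustrative 5-level ratings
y_pred = [3, 2, 2, 4, 3, 2, 3, 4]

uar = recall_score(y_true, y_pred, average="macro")          # UAR
kappa = cohen_kappa_score(y_true, y_pred, weights="linear")  # weighted Kappa
rho, _ = spearmanr(y_true, y_pred)                           # Spearman's Rho

# Binary F1 for the dissatisfactory class (rating < 3).
f1_dis = f1_score([r < 3 for r in y_true], [r < 3 for r in y_pred])
\end{verbatim}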
\subsection{Experimental results}
Table~\ref{table:satisfaction} shows the results for the user satisfaction prediction task.
The best results in terms of the corresponding metric are shown in bold; results comparable to the best one are underlined.
In general, HiGRU achieves the best overall performance (e.g., absolute improvements of +0.03 UAR, +0.02 Kappa, and +0.04 F1 over BERT on the SGD data). BERT and HiGRU+ATTN achieve performance comparable to HiGRU, followed by GRU.
Among the three feature-based methods, SVM performs best, followed by LR. XGBoost is significantly weaker than the other methods on all metrics except Rho.
Table~\ref{table:satisfaction} further shows that all deep learning methods perform better than the feature-based methods.
Table~\ref{table:action} shows the results for the user action prediction task.
In general, the BERT-based model performs best among all methods, followed by HiGRU.
BERT outperforms HiGRU on all performance measures except for the ReDial data, possibly due to the lack of sufficient training data.
Among the three feature-based methods, XGBoost achieves the best performance, with absolute improvements of about +0.06 Acc, +0.07 Prec, +0.03 Recall, and +0.04 F1 over LR. XGBoost also outperforms GRU on many metrics.
\subsection{Analysis}
Since we have multiple domains in the dataset, we further analyze the cross-domain generalization capabilities of the user satisfaction prediction model.
Table~\ref{table:cross-domain} shows the results. The rows and columns in Table~\ref{table:cross-domain} indicate training data and test data, respectively (e.g., 0.233 in the first column of the sixth row indicates that a BERT model trained on MultiWOZ can get a UAR score of 0.233 on SGD data).
In terms of datasets, the models trained on SGD and MultiWOZ get the best performance on each other’s data respectively, and the models trained on ReDial get the best performance on CCPE, possibly due to the similarity between domains. The model trained on CCPE has relatively poor generalization ability, possibly due to limited training data size.
In terms of methods, BERT achieves better generalization performance than SVM and HiGRU, possibly due to pre-training on a large-scale corpus.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.93\columnwidth]{figures/figure_task.pdf}
\vspace*{0.5\baselineskip}
\caption{(a) Previous work on user simulation; (b) previous work on user satisfaction prediction; (c) our proposed task: simulating user satisfaction for evaluating task-oriented dialogue systems. We leave utterance generation (dotted line) as future work.}
\label{figure:task}
\end{figure}
Task-oriented systems are developed to help users solve a specific task as efficiently as possible~\cite{Young2013POMDP}.
Evaluation is a crucial part of the development process of task-oriented dialogue systems.
For evaluating the performance of each module of a dialogue system, human evaluation, user satisfaction modeling, corpus-based approaches, and user simulation have all been leveraged~\citep{Deriu2020SurveyOE}.
Human evaluation through in-field experiments \citep{Lamel2000TheLA,Black2011SpokenDC} or crowd-sourcing \citep{Jurccek2011RealUE} is considered to reflect the overall performance of the system in a real-world scenario, but it is intrusive, time-intensive, and does not scale \citep{Deriu2020SurveyOE}.
User satisfaction modeling can be an alternative; it aims to automatically estimate user satisfaction based on human-machine interaction log data, but still requires human involvement.
To evaluate a dialogue system fully automatically, offline evaluation based on test sets is commonly used. However, this method is limited to a single turn and does not inform us about the overall usefulness of the system or about users' satisfaction with the flow of the dialogue~\citep{Zhang2020EvaluatingCR}.
Therefore, evaluation results of offline methods have limited consistency with the results of human evaluation.
Simulation-based evaluation methods address the issues listed above; they are a viable choice for large-scale automatic evaluation~\citep{Deriu2020SurveyOE}.
User simulations can be used to evaluate functionalities of dialogue systems and they can serve as an environment to train reinforcement learning-based systems~\citep{Deriu2020SurveyOE}, leveraging agenda-based~\citep{Schatzmann2007AgendaBasedUS} or model-based simulation~\citep{Asri2016ASM}.
Building human-like user simulation is still an open challenge~\citep{Jannach2020ASO}.
To bridge the gap between human evaluation and user simulation, we attempt to combine user simulation with user satisfaction (cf.~Figure~\ref{figure:task}).
To this end, we first look into existing task-oriented dialogues and carry out a user study to investigate the characteristics of user satisfaction.
We arrive at two main observations:
\begin{enumerate*}[label=(\arabic*)]
\item \emph{User dissatisfaction is mainly caused by the system's failure in meeting the user's needs}.
Specifically, 36\% of the conversations are labeled as \emph{very dissatisfied} because the system does not understand the user's needs, and 43\% are because the system understands the user's problems but cannot provide proper solutions.
Figure~\ref{figure:example} illustrates the scenario.
\item \emph{Different degrees of satisfaction result in different sequences of user actions}.
For example, the right-side user in Figure~\ref{figure:example} may switch to customer service or explain further when encountering the same failed system reply in the context of different emotions.
We convert this intuition to a hypothesis that we verify by checking the records in the corpus.
When faced with a dialogue system's failure in understanding user needs, about 17.1\% of all users will switch to manual customer service, and about 64.3\% and 9.7\% will continue by providing additional information, or quit the conversation, respectively. This observation suggests that user simulation should work differently in different user satisfaction scenarios.
\end{enumerate*}
Informed by the observations just listed, we propose a novel task: \emph{to simulate user satisfaction for the evaluation of task-oriented dialogue systems}.
Figure~\ref{figure:task} illustrates the main difference between our task and previous work.
We extend the evaluation capability of user simulations and make the simulation more human-like by incorporating user satisfaction prediction and user action prediction.
To facilitate research on user satisfaction simulation,
we develop a user satisfaction annotation dataset, \acfi{USS}. We invite 40 annotators to label both the dialogue level and exchange level user satisfaction of 5 commonly used task-oriented dialogue datasets in different domains.
This results in a dataset of 6,800 dialogues, where each individual user utterance, as well as each complete dialogue, is labeled on a 5-point satisfaction scale.
Each dialogue is labeled by 3 annotators; the expert ratings are highly correlated, with a Fleiss Kappa score of 0.574.
The \ac{USS} dataset shares some characteristics with existing datasets for user satisfaction, but also differs in important ways (see Table~\ref{table:dataset-comparision}):
\begin{enumerate*}
\item Our user satisfaction labeling occurs before the user utterance, and is based on the dialogue context between user and system instead of the satisfaction expressed in the user's utterance.
\item The \ac{USS} dataset includes multiple domains, such as e-commerce, reservations, recommendations, etc.
\item The \ac{USS} dataset exceeds existing user satisfaction data in scale.
\end{enumerate*}
We share three baseline approaches to perform satisfaction prediction and user action prediction based on the newly collected data in \ac{USS}: a feature-based method, a hierarchical GRU-based method, and a BERT-based method.
Experimental results suggest that distributed representations outperform feature-based methods.
The hierarchical GRU-based method achieves the best performance in in-domain user satisfaction prediction, while the BERT-based method has a better cross-domain generalization ability thanks to the pre-training.
We also show that the BERT-based method achieves state-of-the-art performance on the action prediction task.
In summary, this paper makes the following contributions:
\begin{enumerate*}[label=(\arabic*)]
\item We propose the novel task of simulating user satisfaction for the evaluation of task-oriented dialogue systems.
\item We collect and share a dataset, \ac{USS}, that includes 6,800 annotated dialogues in multiple domains.
\item We introduce three baseline methods for the tasks of satisfaction prediction and action prediction using the \ac{USS} dataset.
\end{enumerate*}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{figures/figure_1_4.pdf}
\vspace*{0.75\baselineskip}
\caption{Two examples of dialogues in the JDDC dataset~\citep{Chen2020TheJC},
with different degrees of user satisfaction.
%
The right-side system fails to understand the user's needs, leaving the user dissatisfied and with a poor user experience. The left-side dialogue demonstrates the opposite case.}
\label{figure:example}
\vspace*{-0.25\baselineskip}
\end{figure}
\section{Task formulation}
\label{sec:tr}
To formulate the task of simulating user satisfaction, we first carry out a user study to explore the characteristics of user satisfaction in task-oriented dialogues.
Specifically, we invite 12 experts and let each expert annotate 20 dialogues sampled from the JDDC dataset; we use JDDC since it is more realistic than data constructed by the Wizard-of-Oz approach.
We ask each expert to score the user satisfaction for each dialogue turn and the entire conversation. In addition, a rational explanation is requested.
We ask the experts to judge the user action changes after a change in satisfaction.
Based on this study, we answer the following questions:
\begin{enumerate*}[label=(\arabic*)]
\item \emph{What causes the user's dissatisfaction?}
We collect the results and find that, although annotators are satisfied with the system overall, about 12\% of the dialogue turns are labeled as unsatisfying.
This indicates that there are fluctuations in user satisfaction when interacting with the system.
We analyze the annotators' explanations and find that the main reason for dissatisfaction relates to the system's failure to understand the user's needs or to handle the user's requests.
Specifically, 36\% of all conversations labeled as \emph{very dissatisfied} are because \emph{the system does not understand the user's needs}, whereas 43\% are because \emph{the user does not approve the system's response}. In 64\% of the data, users had a bad user experience because \emph{the system was not professional enough or did not respond in time}.
Figure~\ref{figure:example} illustrates the scenario where the system does not understand the user's needs and causes low user satisfaction.
\item \emph{How does user satisfaction influence the user's behavior?}
Different degrees of satisfaction result in different sequences of user actions.
Specifically, when encountering a failure of the dialogue system to understand their needs, about 17.1\% of all users \emph{switch to manual customer service}, while about 64.3\% and 9.7\% continue by \emph{providing additional information} or \emph{quit the conversation}, respectively.
Figure~\ref{figure:example} shows an example, where the right-side user switches to customer service or explains further when encountering the same failed system reply in light of different degrees of satisfaction.
Apart from user actions, we also observe changes in, e.g., the user's attitude and information-seeking goal.
\end{enumerate*}
The above observations indicate that predicting the fluctuations of user satisfaction during interaction is non-trivial.
Thus, we formulate our research task, i.e., to \emph{simulate user satisfaction for the evaluation of task-oriented dialogue systems}.
This simulation task focuses on the prediction of the next user action as well as user satisfaction.
Suppose that we have a dataset $\mathcal{D} = \{(U_{i}, a_{i}, s_{i})\}_{i=1}^{N}$, where for all $i \in [1,N]$, $U_{i}$ is the dialogue context, $a_{i}$ is the next-turn user action, and $s_{i}$ denotes user satisfaction.
The task objective is to learn a classification model $P(a, s \mid U)$ from $\mathcal{D}$, and thus given a dialogue context $U$, it predicts the next-turn user action $a$ and user satisfaction $s$ based on $P(a, s \mid U)$.
The purpose of the task is to increase the evaluation power of user simulations and to make the simulation more human-like by incorporating the user's potential changes in satisfaction in a simulator.
\section{Experiments}
\subsection{Models used for comparison}
Inspired by previous work \cite{Jiao2019HiGRUHG,Yang2016HierarchicalAN,Barahona2021IsTU}, we consider three types of approaches: feature-based, RNN-based, and BERT-based.
\subsubsection{Feature-based models}
We use (1) TF-IDF, (2) the length of the last utterance (i.e., the number of words), and (3) position of the current utterance as the features in feature-based models.
We compare several machine learning models that have popularly been used for text classification \cite{Aggarwal2012MiningTD}:
\begin{enumerate*}
\item logistic regression (LR),
\item support vector machines (SVM), and
\item XGBoost.
\end{enumerate*}
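As a concrete illustration of this pipeline, the sketch below combines the three features with a logistic regression classifier in scikit-learn; the toy contexts, helper names, and preprocessing details are illustrative, not the exact setup used for the reported results.
\begin{verbatim}
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def featurize(contexts, vectorizer, fit=False):
    texts = [" ".join(c) for c in contexts]          # flatten the context
    tfidf = (vectorizer.fit_transform(texts) if fit
             else vectorizer.transform(texts))
    length = csr_matrix([[len(c[-1].split())] for c in contexts])
    position = csr_matrix([[len(c)] for c in contexts])
    return hstack([tfidf, length, position])

contexts = [["hi", "how can I help", "my order never arrived"],
            ["track my parcel", "it ships tomorrow", "great, thanks"]]
labels = [2, 4]                                      # toy satisfaction ratings
vec = TfidfVectorizer()
clf = LogisticRegression(max_iter=1000)
clf.fit(featurize(contexts, vec, fit=True), labels)
\end{verbatim}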
\subsubsection{RNN-based models}
Given the dialogue context $U=\{u_{j}\}_{j=1}^{t}$, we first encode it to get the context representation $\mathbf{h}^{U}$, and then predict the user satisfaction by $P(s\mid U)=\operatorname{softmax}(\operatorname{MLP}(\mathbf{h}^{U}))$.
Inspired by previous work, we compare three methods for context representation encoding:
\begin{enumerate*}
\item GRU, which first concatenates the dialogue history into a long sentence, and then feeds the sentence into a Bidirectional GRU (BiGRU) model. Then the context representation is defined as the average pooled outputs of the BiGRU model.
\item HiGRU, which explores the hierarchical structure. First, it encodes each utterance in the dialogue using a word-level BiGRU to get the utterance representations $\mathbf{h}^{u_{j}}$.
Then it feeds the utterance representations into a sentence-level GRU and defines the context representation as the last hidden state of the sentence-level GRU~\cite{Jiao2019HiGRUHG}.
\item HiGRU+ATTN, which applies a two-level attention mechanism in HiGRU~\cite{Yang2016HierarchicalAN}.
\end{enumerate*}
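A minimal PyTorch sketch of the hierarchical (HiGRU-style) encoder described above follows; the dimensions, the average pooling at the word level, and the classifier head are illustrative choices, not the authors' exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class HiGRUEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=128, n_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Word-level BiGRU: one vector per utterance.
        self.word_gru = nn.GRU(emb_dim, hid_dim,
                               bidirectional=True, batch_first=True)
        # Sentence-level GRU over the utterance vectors.
        self.sent_gru = nn.GRU(2 * hid_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, n_classes)

    def forward(self, utterances):
        # utterances: list of LongTensors, one per utterance in the context.
        utt_vecs = []
        for u in utterances:
            out, _ = self.word_gru(self.emb(u).unsqueeze(0))
            utt_vecs.append(out.mean(dim=1))       # average-pool the words
        seq = torch.stack(utt_vecs, dim=1)         # (1, n_utts, 2*hid_dim)
        _, h_last = self.sent_gru(seq)             # last hidden state
        return torch.softmax(self.head(h_last[-1]), dim=-1)  # P(s | U)

enc = HiGRUEncoder(vocab_size=1000)
context = [torch.randint(0, 1000, (7,)), torch.randint(0, 1000, (5,))]
probs = enc(context)                               # shape (1, 5)
\end{verbatim}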
\subsubsection{BERT-based model}
Given the dialogue context $U=\{u_{j}\}_{j=1}^{t}$, we first concatenate it into a long sequence with \texttt{[SEP]} tokens. Then we encode it into a latent representation via BERT~\citep{Devlin2019BERTPO}, and convert it into the condensed representation $\mathbf{h}^{U}$ through an average pooling operation.
User satisfaction is predicted as $P(s\mid U)=\operatorname{softmax}(\operatorname{MLP}(\mathbf{h}^{U}))$.
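A minimal sketch of this predictor with the Hugging Face \texttt{transformers} library follows; the checkpoint name, the toy context, and the linear head are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
head = nn.Linear(bert.config.hidden_size, 5)     # 5 satisfaction levels

context = ["my order never arrived", "let me check that for you"]
# Concatenate the dialogue context into one sequence with [SEP].
inputs = tokenizer(" [SEP] ".join(context), return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    h = bert(**inputs).last_hidden_state         # (1, seq_len, hidden)
h_U = h.mean(dim=1)                              # average pooling
p_s = torch.softmax(head(h_U), dim=-1)           # P(s | U)
\end{verbatim}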
\subsection{Implementation details}
To integrate user satisfaction prediction and action prediction, we train two independent models, one per task, where the action prediction model takes the predicted output of the satisfaction prediction model as input. We use the ground-truth satisfaction during training and the model-predicted satisfaction during testing.
The feature-based models are implemented using the scikit-learn toolkit.
For the BERT-based model, we use BERT-Base~(110M) pretrained weights\footnote{\url{https://github.com/huggingface/transformers}}~(hidden size is 768).
We use the BERT vocabulary~(size: 30,522) for all models (the Chinese BERT vocabulary for the JDDC domain). We set the batch size to 64 and the learning rate to 2e-5 for BERT and 1e-4 for the other models. We use the AdamW optimizer~($\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$) to optimize parameters, apply gradient clipping with a maximum gradient norm of 0.2, train for up to 50 epochs on one NVIDIA TITAN RTX GPU, and select the best checkpoints based on performance on the validation set. Due to the serious imbalance of the satisfaction labels, we up-sample the non-3-rating data during training. We treat dialogue-level satisfaction prediction as prediction for a final pseudo user utterance marked ``overall''. As in previous work~\cite{Cai2020PredictingUI}, we use 10-fold cross-validation to evaluate the outcome.
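For concreteness, the optimization settings above map onto standard PyTorch calls as in the following sketch, where \texttt{model} stands in for any of the neural baselines; the placeholder module and dummy batch are illustrative only.
\begin{verbatim}
import torch
import torch.nn.functional as F

model = torch.nn.Linear(768, 5)      # placeholder for a real baseline
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5,
                              betas=(0.9, 0.999), eps=1e-8)

logits = model(torch.randn(64, 768))              # batch size 64
loss = F.cross_entropy(logits, torch.randint(0, 5, (64,)))
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.2)
optimizer.step()
optimizer.zero_grad()
\end{verbatim}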
\section{Related work}
Unlike chitchat systems, which focus on conversing with humans on open-domain topics, task-oriented dialogue systems aim to complete specific tasks for users~\cite{Wen2017ANE,lei2020interactive}.
Task-oriented dialogue systems can be divided into module-based and end-to-end-based methods~\cite{Jannach2020ASO}.
The former decomposes the dialogue system into four stages: language understanding, dialogue state tracking, dialogue policy learning, and response generation.
Recently, each stage in the module-based task-oriented dialogue systems has received increased attention~\cite{Hashemi2016QueryID,Yao2013RecurrentNN,Wen2017ANE,Mrksic2015MultidomainDS,Mrksic2017NeuralBT,Yan2017BuildingTD}.
End-to-end task-oriented dialogue systems rely on neural dialogue generation, which has received a lot of attention in recent years~\cite{Young2013POMDP,Banchs2013IRIS,Ameixa2014Luke}.
Among all these approaches, sequence-to-sequence neural generation models~\cite{vinyals2015neural,li2016a,serban2016building,chen18www,lei2018sequicity,jin2018explicit} have proved capable across multiple dialogue systems, with promising performance.
Evaluation is a crucial part of the development process of task-oriented dialogue systems.
Corpus-based approaches, user simulation, and user satisfaction modeling have all been leveraged~\citep{Zhang2020EvaluatingCR} for evaluating the performance of a task-oriented dialogue system.
Offline evaluation based on test sets is commonly used, but it is limited in scope to a single turn and does not inform us about the overall usefulness of the system or about users' satisfaction with the flow of the dialogue~\citep{Zhang2020EvaluatingCR}.
Employing simulation-based evaluation can tackle the above issues and become one viable choice for large-scale automatic evaluation~\citep{Deriu2020SurveyOE}.
User simulators are tools that are designed to simulate the user’s behavior, which can be used to train the dialogue manager in an offline environment \cite{Deriu2020SurveyOE} or to evaluate the dialogue policy \cite{Schatzmann2007AgendaBasedUS}.
\citet{Eckert1997UserMF} propose the first statistical user simulator.
\citet{Scheffler2000ProbabilisticSO} propose a graph-based model.
\citet{Georgila2005LearningUS} use a Markov Model, and a hidden Markov model has been proposed by \citet{Cuayhuitl2005HumancomputerDS}.
\citet{Schatzmann2007AgendaBasedUS} propose an agenda-based user simulator, which represents the user state elegantly as a stack of necessary user actions, called the agenda.
\citet{Zhang2020EvaluatingCR} evaluate conversational recommender systems via an agenda-based user simulator.
Recent work employs neural approaches, esp.\ sequence-to-sequence models \cite{Asri2016ASM,Kreyssig2018NeuralUS}.
As far as we know, no previous study explicitly models user satisfaction in user simulations; we are the first to incorporate user satisfaction into user simulation so as to make the simulation more human-like.
Next to user simulations, user satisfaction modeling is the other evaluation method that is based on the idea that the usability of a system can be approximated by the satisfaction of its users \cite{Deriu2020SurveyOE}.
\citet{Ultes2013OnQR} note the impracticability of having a user rate a live dialogue. Thus, automatic prediction can be an alternative.
\citet{Walker1997PARADISEAF} propose the PARADISE framework, which estimates user ratings on the dialogue level.
Evaluation methods that estimate user satisfaction at the exchange level have also been proposed \cite{Engelbrecht2009ModelingUS,Higashinaka2010IssuesIP,Hara2010EstimationMO}.
They yield more fine-grained predictions and are especially useful for online dialogue breakdown detection.
\citet{Schmitt2015InteractionQA} propose Interaction Quality (IQ) to assign user ratings by experts instead of real users.
\citet{Bodigutla2019MultidomainCQ} introduce the Response Quality (RQ) scheme to improve generalizability to multiple-domain conversations.
Unlike previous work on user satisfaction modeling, we simulate the user satisfaction changes without human involvement.
\begin{table}[]
\small
\centering
\setlength\tabcolsep{2pt}
\caption{Available datasets related to our task. AU/BU is short for After Utterance/Before Utterance.}
\label{table:dataset-comparision}
\begin{tabular}{l c l r r c c}
\toprule
\textbf{Dataset} & \textbf{Year} & \textbf{Domain} & \textbf{\#Dialog} & \textbf{\#Turns} & \textbf{Type} & \textbf{Level}
\\
\midrule
LEGO~\cite{Schmitt2012APA} & 2012 & Bus & 347 & 9,083 & AU & 5
\\
IARD~\cite{Cai2020PredictingUI} & 2020 & Movie & 336 & 2,203 & AU & 2
\\
Alexa~\cite{Bodigutla2020JointTA} & 2020 & Booking & 3,129 & 20,167 & AU & 5
\\
MHCH~\cite{Liu2020TimeTT} & 2020 & E-commerce & 7,500 & 75,548 & BU & 2
\\
\midrule
USS (Ours) & 2021 & Multiple & 6,800 & 99,569 & BU & 5
\\
\bottomrule
\end{tabular}
\end{table}
\section{Introduction}
When studying a quantum system, it is generally desirable for it to be prepared in a low-entropy configuration.
Such systems can populate a relatively small number of quantum states, which makes their dynamics more controllable, predictable, and comprehensible~\cite{Bloch,Zhang}.
If the system of interest is a gas consisting of atoms or molecules, its entropy is typically reduced by utilizing laser cooling and optical pumping techniques.
In these processes, the particles absorb light from an applied laser field and incoherently scatter the light into free space~\cite{MetcalfText}.
While these methods can irreversibly remove entropy from the gas, the second law of thermodynamics requires that the total entropy of the universe does not decrease, and therefore that the entropy of the gas must have been absorbed by some other system.
In this context, the most common explanation is that the entropy of the gas is absorbed by the vacuum modes of the quantized electromagnetic field~\cite{weisskopf}.
This process is often cast in the framework of open quantum systems whereby the quantized electromagnetic field is treated as an external reservoir, allowing for an irreversible reduction of the gas's entropy through the process of spontaneous emission~\cite{Esposito,Ptaszynski,Braunstein}.
It is typically explained that the reservoir can absorb a substantial amount of entropy due to the large number of possible emission configurations~\cite{MetcalfText,ketterleNobel}.
Moreover, it is often stated that the coherent light field is not perturbed, and hence does not absorb entropy, because the quantum counterpart to the laser field, the coherent state, is unaffected by the absorption of photons by the gas.
However, there are some studies that predict entropy removal from the gas via interaction with the laser field~\cite{Korsunsky,Metcalf_2008,Metcalf_2015,Lignier,Miao}.
In an effort to further understand the underlying physics of laser cooling and optical pumping, we propose a simple Gedanken experiment that probes the change of a light field that coherently interacts with a particle containing nonzero initial entropy.
Our results show that the entropy initially possessed by the particle can be imprinted on the applied light field, but the same information is also encoded in the spontaneous photons emitted by the particle.
\begin{figure}
\includegraphics[width=\linewidth]{system.pdf}
\caption{(a) A schematic of the experimental setup. A particle (circle) is placed in a lossless ($\kappa=0$) optical cavity that contains a coherent light field. The particle can undergo spontaneous emission into free space at rate $\gamma$. (b) Energy diagram of the particle's internal state structure. It is coupled to the cavity with strength $g$ on the bright transition ($\ket{e} \leftrightarrow \ket{b}$) and has a linewidth $\gamma$ on the dark transition ($\ket{e} \rightarrow \ket{d}$).}
\label{fig:model}
\end{figure}
\section{Motivation}
We first consider the level of complexity necessary to demonstrate the state alteration and entropy transfer processes
with the goal of removing complications or details of any particular cooling or pumping scheme that might obscure the core physics of interest.
While other candidates may exist, we model the initial state of the laser field as a coherent state, as this is the quantum state that shares the most similarities with a classical coherent laser field~\cite{Agarwal}.
The coherent state is prepared in a lossless optical cavity, after which any external driving fields are turned off.
The cavity is an efficient way to incorporate quantization, and we anticipate that many of our findings also apply to free-space coherent light fields.
Because the essential physics of interest is the same, we focus only on the effects of optical pumping on the evolution of the particles' internal states.
To this end, we model the gas as a single, motionless particle that exists at an anti-node of the optical cavity.
With these considerations in mind, we construct the minimal experimental setup depicted in Fig.~\ref{fig:model}(a).
The particle possesses two ground states and a single excited state, as shown in Fig.~\ref{fig:model}(b).
The $\ket{b}\leftrightarrow\ket{e}$ (bright) transition is resonant with the optical cavity, which encapsulates the coherent interaction between the cavity field and the particle, while the $\ket{e}\rightarrow\ket{d}$ (dark) transition is mediated only by the spontaneous emission process, which models the incoherent interaction between the background radiation field and the particle.
The particle always transitions to the dark state $\ket{d}$ upon emission of a spontaneous photon so that the cavity is not trivially depleted of photons.
By studying this model, we aim to quantify the alteration of the cavity field due to its interaction with the particle and to determine if the particle's entropy is transferred to the cavity's degrees of freedom through their resulting correlations.
To achieve these goals, we employ both statistical and information theoretic techniques.
We first calculate the final cavity field state and determine its distinguishability from the initial cavity state qualitatively by comparing their Husimi $Q$-functions and quantitatively through their quantum fidelity.
Then, we demonstrate that information about the particle becomes encoded in the cavity field by using Bayesian inference.
Lastly, we quantify the amount of entropy transferred from the particle to the cavity field by calculating the quantum mutual information shared between the initial particle state and final cavity state.
\section{Model}
The system of interest [Fig.~\ref{fig:model}(a)] consists of a three-level particle [Fig.~\ref{fig:model}(b)] coupled to a light field in a lossless ($\kappa=0$) optical single-mode cavity.
Let us denote the Hilbert spaces of the particle and light field as $A$ and $L$, respectively.
In the time-independent interaction picture, the system is evolved according to the quantum master equation
\begin{equation}
\label{ME}
\frac{d \hat \rho_{AL}}{dt} =
\frac{1}{i \hbar} \left[ \hat H_{AL} , \hat \rho_{AL} \right]
+ \gamma \mathcal{L}(\hat J) \hat \rho_{AL}.
\end{equation}
Here, the coherent particle-cavity interaction is described by the Jaynes-Cummings Hamiltonian
\begin{equation}
\label{ham}
\hat H_{AL} = \frac{\hbar g}{2} \Big(\ket{b}\bra{e} \hat a^\dag + \ket{e}\bra{b} \hat a\Big),
\end{equation}
where $g$ is the coupling strength, and $\hat{a}$ is the annihilation operator for the cavity field.
As is achieved in the moving frame of a particle in Doppler cooling, we have set the cavity to be resonant with the particle's bright state transition.
The environment surrounding the system, which we model as an infinite bandwidth bosonic bath that is coupled to the particle's dark transition, has been traced out under the Born-Markov approximation~\cite{Lindblad,Meystre}.
Its effects are incorporated through the Lindblad superoperator
\begin{equation}
\mathcal{L}(\hat J) \hat \rho =
\hat J \hat \rho \hat J^\dag
- \frac{1}{2} \left(
\hat J^\dag \hat J \hat \rho + \hat \rho \hat J^\dag \hat J
\right)
\end{equation}
with jump operator $\hat J = \ket{d} \bra{e}$. This term describes the spontaneous emission of photons into free space by the particle that occurs at a rate $\gamma$. We consider both analytical and numerical solutions to Eq.~\eqref{ME} in the sections that follow. We use both MATLAB and the QuantumOptics package in the Julia programming language for numerical calculations~\cite{MATLAB:2020b,JuliaQuantumOptics}.
To model the entropy initially possessed by the particle, it is prepared in the mixed state
\begin{equation}
\label{initParticle}
\hat \rho_A(0) = x \ket{b}\bra{b} + (1-x)\ket{d}\bra{d},
\end{equation}
where $0\leq x\leq1$ is the probability that the particle begins in the bright state $\ket{b}$.
As already mentioned, the cavity field is initialized in a coherent state
\begin{equation}
\label{coherentState}
\ket{\alpha} = e^{-|\alpha|^2/2} \sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} \ket{n},
\end{equation}
where $\ket{n}$ is the $n$-photon Fock state, so that its initial density matrix reads
\begin{equation}
\label{initCavity}
\hat \rho_L(0) = \ket{\alpha} \bra{\alpha}.
\end{equation}
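The authors solve Eq.~\eqref{ME} numerically with MATLAB and Julia's QuantumOptics package; as an illustrative alternative, the model and the initial states of Eqs.~\eqref{initParticle} and~\eqref{initCavity} can be set up in QuTiP (Python) as sketched below, with example parameter values that are not those used for the figures.
\begin{verbatim}
import numpy as np
import qutip as qt

N = 40                  # Fock-space truncation
g, gamma = 1.0, 0.1     # coupling and spontaneous-emission rates
alpha, x = 3.0, 1.0     # coherent amplitude and bright-state fraction

b, e, d = (qt.basis(3, i) for i in range(3))   # |b>, |e>, |d>
a = qt.tensor(qt.qeye(3), qt.destroy(N))       # cavity annihilation
# Jaynes-Cummings interaction on the bright transition (hbar = 1).
H = 0.5 * g * (qt.tensor(b * e.dag(), qt.qeye(N)) * a.dag()
               + qt.tensor(e * b.dag(), qt.qeye(N)) * a)
J = qt.tensor(d * e.dag(), qt.qeye(N))         # jump operator |d><e|

rho0 = qt.tensor(x * b * b.dag() + (1 - x) * d * d.dag(),
                 qt.coherent_dm(N, alpha))     # initial product state
tlist = np.linspace(0, 200 / gamma, 2)         # evolve to late times
result = qt.mesolve(H, rho0, tlist, c_ops=[np.sqrt(gamma) * J])
rho_L = result.states[-1].ptrace(1)            # reduced final cavity state
\end{verbatim}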
\begin{figure*}
\includegraphics[width=\linewidth]{HusimiQn0100.png}
\caption{The $Q$-function of the final cavity state $Q_\infty(\beta)$ [see Eq.~\eqref{finalQ}] for various $m$. We have set $x=1$ and $\alpha = \sqrt{\bar n_0} = 10$. Although the amplitude of the state remains localized near $|\beta| = \sqrt{\bar n_0} = 10$, the spread in phase $\Delta \phi$ increases as $m$ decreases, indicating a change in the cavity state due to the particle-cavity interaction.}
\label{fig:QFunction}
\end{figure*}
\section{Alteration of cavity state}
Here we demonstrate that the cavity state is altered due to its interaction with the particle.
But first, we address a common counterargument, which postulates that there should be no change to the cavity field because a coherent state is an eigenstate of the annihilation operator $\hat a$ by definition: $\hat a \ket{\alpha} = \alpha \ket{\alpha}$, and therefore is unperturbed by photon absorption followed by free-space spontaneous emission.
What this argument fails to consider is the stimulated emission of a photon back into the cavity, which can occur if the particle-cavity coupling rate is sufficiently large.
With this additional process in mind, we emphasize that a change of the cavity state is expected since it implies the action of the creation operator $\hat a^\dag$ [see Eq.~\eqref{ham}] on the coherent state, which yields nontrivial dynamics~\cite{AgarwalArticle}.
More specifically, we expect a change in the cavity state when the phase coherence between different Fock states in $\ket{\alpha}$ is scrambled.
This occurs when the relative phases of the relevant Fock state amplitudes become substantially altered, which we now characterize.
After an interaction time $t$, the accumulated phase for an $n$-photon Fock state $\ket{n}$ is $\phi_n = \sqrt{n} g t/2$.
If we approximate the interaction time by the excited state lifetime, $t \approx 1/\gamma$, then the relative accumulated phase $\Delta \phi$ between two Fock states $\ket{n}$ and $\ket{n+\Delta n}$ is
\begin{equation}
\label{relativePhase}
\Delta \phi (n, \Delta n )
= \phi_{n + \Delta n} - \phi_n
\approx \frac{g }{2 \gamma }
\left(
\sqrt{n + \Delta n}-\sqrt{n}
\right).
\end{equation}
For a coherent state $\ket{\alpha}$, which has initial average intracavity photon number $\bar n_0 = \langle \hat a^\dag \hat a(0) \rangle = |\alpha|^2$, the most relevant Fock states lie within the range $(\bar n_0 - \sqrt{\bar n_0},\bar n_0 + \sqrt{\bar n_0})$, in which $\sqrt{\bar n_0}$ is the standard deviation of the photon distribution. Using Eq.~\eqref{relativePhase}, the relative accumulated phase between the central Fock state $\ket{\bar n_0}$ and the Fock states near the edge of the coherent state $\ket {\bar n_0 \pm \sqrt{ \bar n_0}}$ is then
\begin{equation}
\label{relCoherent}
|\Delta \phi (\bar n_0, \sqrt{ \bar n_0} )|
\approx \frac{1}{4} \frac{g}{\gamma},
\end{equation}
in which we have assumed $\bar n_0 \gg 1$.
If the phase coherence is to be destroyed, then this relative phase must be much larger than unity.
Defining the particle-cavity critical photon number $m \equiv \tfrac12 (\tfrac{\gamma}{g})^2$~\cite{Kimble} and using Eq.~\eqref{relCoherent}, phase scrambling of the coherent state is equivalent to the condition $m \ll 1$.
Therefore, $m$ is the important parameter for determining an alteration of the cavity state.
In Appendix~\ref{finalCavApp}, we derive an analytic expression for the final cavity state $\hat \rho_L(\infty)$ when the particle is initialized according to Eq.~\eqref{initParticle}.
It is parameterized by the initial bright state fraction $x$, initial intracavity photon number $\bar n_0$, and particle-cavity critical photon number $m$ [see Eq.~\eqref{finalCavity}].
To understand how the final cavity state differs from the initial coherent state, we pictorially compare their Husimi $Q$-functions, and then calculate their fidelity $F$.
\subsection{Cavity state $Q$-functions}
\label{QFunctionSection}
To gain intuition for the differences between the initial and final cavity states, we calculate their Husimi $Q$-functions, which are defined as
\begin{equation}
\label{Qfunc}
Q(\beta) \equiv \frac{\braket{\beta|\hat \rho |\beta}}{\pi}.
\end{equation}
Here, the coherent states $\ket{\beta}$ form a basis for the 2-dimensional optical phase space $(\text{Re}[\beta],\text{Im}[\beta])$.
The initial cavity state has $Q$-function
\begin{equation}
\label{initialQ}
Q_0(\beta) = \frac{\braket{\beta|\hat \rho_L(0) |\beta}}{\pi}
=\frac{|\braket{\alpha|\beta}|^2}{\pi}
= \frac{1}{\pi} e^{-|\alpha - \beta|^2},
\end{equation}
while the $Q$-function for the final cavity state is
\begin{equation}
\label{finalQ}
Q_\infty(\beta) = \frac{\braket{\beta|\hat \rho_L(\infty) |\beta}}{\pi}.
\end{equation}
The latter has a more complicated form since it is generally no longer a coherent state.
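As a self-contained numerical check of the definition in Eq.~\eqref{Qfunc}, the sketch below evaluates the $Q$-function of the initial coherent state at a few phase-space points and compares it with the Gaussian of Eq.~\eqref{initialQ}; the same construction applies to the reduced final cavity state obtained above.
\begin{verbatim}
import numpy as np
import qutip as qt

N, alpha = 60, 3.0
rho0 = qt.coherent_dm(N, alpha)

def qfunc_point(rho, beta):
    # Q(beta) = <beta|rho|beta> / pi, evaluated directly.
    return qt.expect(rho, qt.coherent(N, beta)) / np.pi

# Matches Q0(beta) = exp(-|alpha - beta|^2) / pi.
for beta in (3.0, 3.0 + 1.0j, 1.5):
    assert np.isclose(qfunc_point(rho0, beta),
                      np.exp(-abs(alpha - beta) ** 2) / np.pi)
\end{verbatim}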
Figure~\ref{fig:QFunction} presents numerical plots of $Q_\infty(\beta)$ for various $m$.
To focus on the effects of the interaction, we choose $x=1$.
We also choose $\alpha = \sqrt{\bar n_0} = 10$ so that $Q_0(\beta)$ is localized and centered on the coordinate $(\text{Re}[\beta],\text{Im}[\beta])=(10,0)$ with a spread on the order of unity.
Figure~\ref{fig:QFunction}(a) displays $Q_\infty(\beta)$ for the intermediate-coupling case $m=1$.
We find that $Q_\infty(\beta)$ does not differ significantly from $Q_0(\beta)$, which indicates that the cavity field nearly remains in a coherent state.
In the case of stronger coupling [$m= 10^{-2}$, Fig.~\ref{fig:QFunction}(b)], however, $Q_\infty(\beta)$ remains localized near the circle $|\beta|=|\alpha|$, but is spread over a larger phase range $\Delta \phi$.
In the infinite-coupling limit $m\rightarrow0$ [Fig.~\ref{fig:QFunction}(c)], we find that $Q_\infty(\beta)$ has a uniform phase distribution, and therefore differs substantially from $Q_0(\beta)$.
Let us now interpret these results.
The function $Q_\infty(\beta)$ always remains localized near the circle $|\beta|=|\alpha| = 10$ because the average number of photons in the cavity $\bar n = \braket{\hat a^\dag \hat a}$ does not significantly change.
This is a consequence of the particle's internal state structure [see Fig.~\ref{fig:model}(b)], which prevents a reduction of $\bar n$ by more than one, and our choice of a large initial intracavity photon number ($\bar n_0 = 100$).
On the other hand, we observe diffusion-like behavior of the phase $\phi$ as $m$ decreases because the particle undergoes more Rabi oscillations with each Fock state before emitting a spontaneous photon, thereby scrambling the cavity field's phase coherence.
The coherences vanish completely in the limit $m \rightarrow 0$, resulting in the uniform phase distribution shown in Fig.~\ref{fig:QFunction}(c).
This potentially extreme change in the phase distribution is largely responsible for the distinguishability of the initial and final cavity states, which we now quantify in terms of their fidelity.
\subsection{Fidelity of initial and final cavity states}
\label{fidelitySection}
We now quantify the alteration of the cavity field by calculating the Uhlmann-Jozsa fidelity
\begin{equation}
\label{uhlmannjozsa}
F(\hat \rho, \hat \sigma)=
\left(\Tr \sqrt{\sqrt{\hat \rho} \hat \sigma \sqrt{\hat \rho}}\right)^2
\end{equation}
between its initial and final states, which is a generalization of the transition probability for pure states~\cite{uhlmann,jozsa}.
The fidelity satisfies $0 \leq F(\hat \rho, \hat \sigma) \leq 1$ for any two density matrices $\hat \rho$ and $\hat \sigma$, with $ F(\hat \rho, \hat \sigma)=1$ if and only if $\hat \rho = \hat \sigma$, and $F(\hat \rho, \hat \sigma)=0$ if $\hat \rho$ and $\hat \sigma$ have support on orthogonal subspaces.
In particular, if we find that $ F(\hat \rho, \hat \sigma) \neq 1$, then the cavity field is no longer in the coherent state $\ket{\alpha}$.
Because the cavity field is initially in a pure state, its fidelity with the final cavity state $\hat \rho_L(\infty)$ is simply
\begin{equation}
\label{simplifyFidelity}
F \equiv F[\hat \rho_L(0), \hat \rho_L(\infty)] = \braket{\alpha | \hat \rho_L(\infty) | \alpha}.
\end{equation}
We point out that $F = \pi Q_\infty(\alpha)$, i.e., the fidelity is (up to a factor of $\pi$) the $Q$-function of the final cavity state evaluated at $\beta = \alpha$.
We can therefore qualitatively predict features of the fidelity $F$ through the plots in Fig.~\ref{fig:QFunction}.
For example, we expect $F$ to decrease as $m$ decreases due to the associated diffusion-like behavior of the $Q$-function.
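As a practical aside on conventions, \texttt{qutip.fidelity} returns the square root of the Uhlmann-Jozsa fidelity of Eq.~\eqref{uhlmannjozsa} and must therefore be squared to match $F$ as used here; the self-contained sketch below verifies this with two coherent states, for which $F = |\braket{\alpha|\beta}|^2 = e^{-|\alpha-\beta|^2}$.
\begin{verbatim}
import numpy as np
import qutip as qt

N, alpha, beta = 60, 3.0, 3.5
rho = qt.coherent_dm(N, alpha)
sigma = qt.coherent_dm(N, beta)
F = qt.fidelity(rho, sigma) ** 2   # square for the Uhlmann-Jozsa form
assert np.isclose(F, np.exp(-abs(alpha - beta) ** 2))
\end{verbatim}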
The fidelity $F$ between the initial and final cavity state is calculated to be [see Eq.~\eqref{fidEarly}]
\begin{equation}
\label{fidGeneral}
F =
1-x\left[
1 - f(\bar n_0,m)
\right],
\end{equation}
in which $0<f(\bar n_0,m)<1$ is their fidelity conditioned on the particle being prepared in the bright state ($x=1$).
We present this conditional fidelity $f(\bar n_0,m)$ in Fig.~\ref{fig:fidelity}(a).
We find that $f \ll 1$, and therefore that the cavity state is significantly altered, when $\bar n_0 \geq 1$ and $m < 1$, which agrees with our qualitative $Q$-function analysis.
This region of parameter space corresponds to a cavity containing at least one photon (on average) and a strong particle-cavity coupling, respectively.
\begin{figure}
\includegraphics[width=\linewidth]{fidelity.pdf}
\caption{Fidelity $F$ between the initial and final cavity states $\hat \rho_L(0)$ and $\hat \rho_L(\infty)$ [Eq.~\eqref{uhlmannjozsa}] when the particle begins in the bright state [$x=1$, see Eq.~\eqref{fidGeneral}] as a function of the initial intracavity photon number $\bar n_0$ and the critical photon number $m$. (a) Contour plot of $F$, calculated by numerically evaluating Eq.~\eqref{simplifyFidelity}. When $m < 1$ and $\bar n_0 > 1$, we find that $F \ll 1$, signaling a significant change in the cavity state. (b) Numerical (solid, $\bar n_0 = 10$) and analytical [dashed, Eq.~\eqref{fidelityMDependence}] results for $F$ as a function of $m$. For $m<m_{min}$ [dot-dashed, Eq.~\eqref{smallM}], the results diverge.}
\label{fig:fidelity}
\end{figure}
Because a typical laser field contains many photons, we focus on the thermodynamic limit $\bar n_0 \rightarrow \infty$ for the remainder of our investigation.
In this limit, the conditional fidelity is calculated to be
\begin{equation}
\label{fidelityMDependence}
\lim_{\bar n_0 \rightarrow \infty} f(\bar n_0,m)
= \sqrt{2\pi m} \, e^{2m} \erfc(\sqrt{2m}),
\end{equation}
as shown in Appendix~\ref{fidelitySectionSM}.
Figure~\ref{fig:fidelity}(b) displays Eq.~\eqref{fidelityMDependence} and a numerical result for $\bar n_0=10$, which agree very well when $m$ is sufficiently large:
\begin{equation}
\label{smallM}
m > m_{min} \equiv \frac{1}{8 \pi^2 \bar n_0}.
\end{equation}
We now consider the strong ($m \ll 1$) and weak ($m \gg 1$) coupling limits.
From Eqs.~\eqref{fidGeneral} and~\eqref{fidelityMDependence},
\begin{equation}
\label{fidelityLimiting}
F\approx
\begin{cases}
1-x(1-\sqrt{2 \pi m}), & m \ll 1\\
\displaystyle 1 - \frac{x}{4m}, & m \gg 1
\end{cases}.
\end{equation}
Therefore, in the infinite-coupling limit $m \rightarrow 0$, the fidelity becomes $F=1-x$.
This shows that the cavity field can be altered by the interaction provided that the particle has an appreciable chance of starting in the bright state ($0 \ll x \leq 1$).
However, in the zero-coupling limit $m\rightarrow\infty$, we find that $F=1$, and the field is not altered.
It is also interesting to consider how small the critical photon number must be for the initial and final cavity states to be substantially distinguishable, e.g., for their fidelity to be $F=\tfrac12$.
For simplicity and to achieve the greatest effect, we define such a critical photon number $m_{1/2}$ in the case when the particle is prepared in the bright state.
From Eq.~\eqref{fidelityMDependence}, we find that this occurs when
\begin{equation}
\label{m12}
m = m_{1/2} \approx 0.09.
\end{equation}
One can therefore observe a substantial alteration of the cavity state in an experimental setting by studying a system satisfying $m \leq m_{1/2}$, or more generally, $g \geq 2.3 \gamma \gg \kappa$.
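The threshold in Eq.~\eqref{m12} follows directly from Eq.~\eqref{fidelityMDependence} by root finding; a minimal sketch (the bracketing interval and test values are our choices) that also checks the limiting forms entering Eq.~\eqref{fidelityLimiting}:
\begin{verbatim}
import numpy as np
from scipy.special import erfc
from scipy.optimize import brentq

def f_cond(m):
    """Conditional fidelity for n0 -> infinity, Eq. (fidelityMDependence)."""
    return np.sqrt(2 * np.pi * m) * np.exp(2 * m) * erfc(np.sqrt(2 * m))

# Solve f(m) = 1/2 for the half-fidelity coupling of Eq. (m12).
m_half = brentq(lambda m: f_cond(m) - 0.5, 1e-6, 10.0)
print(m_half)                                     # ~0.09
# Limiting behavior used in Eq. (fidelityLimiting):
print(f_cond(1e-4), np.sqrt(2 * np.pi * 1e-4))    # m << 1: f ~ sqrt(2 pi m)
print(f_cond(50.0), 1 - 1 / (4 * 50.0))           # m >> 1: f ~ 1 - 1/(4m)
\end{verbatim}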
These results are direct evidence that the cavity state can be altered by interaction with a particle, even in the limit of infinitely many photons, if the particle-cavity coupling is sufficiently large. We interpret the deviation from $F=1$ as a result of the development of correlations between the particle and cavity states through the coherent interaction $\hat H_{AL}$ [see Eq.~\eqref{ham}].
\section{Bayesian analysis}
\label{bayesSection}
Now that we have demonstrated an alteration of the cavity state after interaction with the particle, we consider if any information about the particle is imprinted on the cavity field.
As shown in Section~\ref{QFunctionSection}, the change in the cavity state manifests primarily in its phase.
This suggests that simply measuring the field intensity
\begin{equation}
\langle \hat a^\dag \hat a(\infty)\rangle =
\bar n_0 - x(1-e^{-\bar n_0})
\end{equation}
would not provide an effective way to extract this information, as it does not access the phase of the state.
To create a measurement that can distinguish differences in phase, we propose the following scheme. First, we displace the final cavity state according to the operation
\begin{equation}
\label{displacedCavityState}
\hat \eta_L \equiv
\hat D(-\alpha) \hat \rho_L(\infty) \hat D^\dag (-\alpha),
\end{equation}
which can be done, e.g., by removing a cavity mirror at $t\rightarrow\infty$ and feeding the final cavity state and a coherent state $\ket{\alpha}$ into separate ports of a beam splitter~\cite{paris}.
(We emphasize that any information about the particle contained in the cavity state is unaltered because this operation is unitary.)
After the displacement operation, we then perform photon number measurements on the resulting state $\hat \eta_L$.
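For completeness, the displacement in Eq.~\eqref{displacedCavityState} can be realized numerically by exponentiating the truncated ladder operators; a minimal sketch (the truncation dimension is our choice) verifying that $\hat D(-\alpha)\ket{\alpha}$ is the vacuum state:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

dim = 60
a_op = np.diag(np.sqrt(np.arange(1, dim)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>

def displacement(beta):
    """D(beta) = exp(beta a^dag - beta* a) in the truncated Fock basis."""
    return expm(beta * a_op.conj().T - np.conj(beta) * a_op)

alpha = 2.0
vac = np.zeros(dim); vac[0] = 1.0
ket_alpha = displacement(alpha) @ vac    # |alpha> = D(alpha)|0>
psi = displacement(-alpha) @ ket_alpha   # displace back
print(abs(psi[0])**2)   # ~1.0: D(-alpha)|alpha> is again the vacuum
\end{verbatim}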
To understand why this scheme provides us with cavity phase information, consider again the Husimi $Q$-function perspective of Section~\ref{QFunctionSection}.
Notice that the displacement operation [Eq.~\eqref{displacedCavityState}] simply shifts each phase space coordinate $\beta$ to the new coordinate $\beta - \alpha$.
Therefore, a nearly unperturbed cavity state [as in Fig.~\ref{fig:QFunction}(a)] would be shifted near to the phase space origin, so photon number measurements of $\hat \eta_L$ would yield low values, regardless of the high-noise photon statistics of the initial distribution.
By contrast, only one point on the circle defining a significantly perturbed cavity state [as in Fig.~\ref{fig:QFunction}(c)] would be mapped near to the phase space origin, so photon number measurements of $\hat \eta_L$ would typically yield much higher values.
To demonstrate the utility of our proposed measurement scheme, consider a situation wherein the particle is prepared in some diagonal mixed state [Eq.~\eqref{initParticle}], and we are tasked with determining the initial particle state (``start in $\ket{b}$" or ``start in $\ket{d}$") for a given experimental run by performing measurements exclusively on the displaced cavity state $\hat \eta_L$.
For simplicity, we calculate the results for the maximally mixed state ($x=\tfrac{1}{2}$), but our results can be generalized to any $x$.
By symmetry, the most successful approach without performing any measurements would be to sample from a flat probability distribution (the ``prior"): $P(\text{start in $\ket{b}$})=P(\text{start in $\ket{d}$})=\tfrac{1}{2}$, for which the probability that we would be correct is $P_\text{prior}(\text{correct})=\tfrac{1}{2}$.
What we show here is that a more accurate probability distribution can be constructed by incorporating the results from a single measurement of the displaced cavity photon number distribution $\braket{n | \hat \eta_L | n}$, thereby proving that information about the particle is present in the cavity field.
For our purposes, it is sufficient to reduce the outcome space to a binary scenario: either we detect (i) zero photons ($n=0$, ``no click") or (ii) one or more photons ($n \geq 1$, ``click(s)").
This simplification is appropriate because we gain no additional information about the particle's initial state by distinguishing between nonzero numbers of clicks.
With this approach, we only need to calculate the vacuum state population $\braket{0|\hat \eta_L|0}$, which is equivalent to the fidelity $F$ [see Eq.~\eqref{fidGeneral}].
(Notice that one can experimentally probe $F$ by measuring this population.)
The probability distribution in the event of ``no clicks" is then
\begin{align}
\label{likelihood}
P(\text{no click}| \text{start in} \ket{i}) =
\begin{cases}
1, \;& i=d \\
f(\bar n_0, m), \;& i=b
\end{cases},
\end{align}
where $f(\bar n_0,m)$ is the conditional fidelity. The ``click(s)" conditional probability distribution is complementary to Eq.~\eqref{likelihood}.
With these results, we can use Bayesian inference to construct a posterior probability distribution for the initial particle state conditioned on the cavity measurement:
\begin{equation}
\label{bayes}
\begin{aligned}
P(\text{start in} \ket{i}|C) \propto
P(&C|\text{start in} \ket{i}) P(\text{start in} \ket{i}); \\
i \in \{b,d\}; \quad &C \in \{\text{click(s)}, \text{no click}\}.
\end{aligned}
\end{equation}
Here, $P(\text{start in} \ket{i})$ is the (initially flat) prior distribution for the initial particle state~\cite{holland}.
The posterior probability distribution is
\begin{equation}
\label{posterior}
\begin{aligned}
P(\text{start in} \ket{d} |\, \text{click(s)}) & = 0; \\
P(\text{start in} \ket{b} |\, \text{click(s)}) & = 1; \\
P(\text{start in} \ket{d} |\, \text{no click}) & = \frac{1}{1+f(\bar n_0, m)}; \\
P(\text{start in} \ket{b} |\, \text{no click}) & = \frac{f(\bar n_0, m)}{1+f(\bar n_0, m)}.
\end{aligned}
\end{equation}
It only remains to demonstrate that the posterior probability distribution predicts the initial particle state more accurately than the prior probability distribution, i.e., that we would correctly predict the initial particle state with a probability satisfying $P_\text{post}(\text{correct})>P_\text{prior}(\text{correct})=\tfrac{1}{2}$, by sampling from Eqs.~\eqref{posterior}.
Using the posterior probability distribution, the probability that we correctly predict the initial particle state is
\begin{equation}
\label{correct}
P_\text{post}(\text{correct}) = \frac{1}{1 + f(\bar n_0, m)}.
\end{equation}
Since $0<f(\bar n_0, m)<1$, we find that $P_\text{post}(\text{correct}) > \frac{1}{2}$, and therefore conclude that information about the particle is present in the cavity field.
We emphasize that we have increased our chance of predicting the initial particle state with a single measurement of the cavity state.
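Equation~\eqref{correct} can be reproduced by a direct Monte Carlo simulation of the click/no-click experiment for $x=\tfrac12$, sampling the prediction from the posterior of Eq.~\eqref{posterior}; a minimal sketch (the sample size and the test value of $f$ are our choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def p_correct(f, n_trials=500_000):
    """Monte Carlo estimate of P_post(correct) for x = 1/2, cf. Eq. (correct)."""
    bright = rng.random(n_trials) < 0.5                   # initial particle state
    # "No click" occurs with probability f if bright, with certainty if dark.
    no_click = np.where(bright, rng.random(n_trials) < f, True)
    # Sample the prediction from Eq. (posterior): a click certifies |b>;
    # on "no click" we guess |b> only with probability f/(1+f).
    guess_bright = np.where(no_click, rng.random(n_trials) < f / (1 + f), True)
    return np.mean(guess_bright == bright)

f = 0.3
print(p_correct(f), 1 / (1 + f))   # both ~0.769
\end{verbatim}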
We now focus on the thermodynamic limit $\bar n_0 \rightarrow \infty$, as often occurs in laser cooling and optical pumping.
In this limit, the conditional fidelity $f$ is given by Eq.~\eqref{fidelityMDependence}.
Using this form of $f$ in Eq.~\eqref{correct}, we find that
\begin{equation}
\label{correctLimiting}
P_\text{post}(\text{correct}) \approx
\begin{cases}
1- \sqrt{2 \pi m}, & m \ll 1\\
\displaystyle \frac{1}{2}\left(1+ \frac{1}{8m}\right), & m \gg 1
\end{cases}.
\end{equation}
In the infinite-coupling limit $m \rightarrow 0$, $P_\text{post}(\text{correct}) = 1$. Therefore, a single measurement of the cavity state can in principle predict the initial particle state with 100$\%$ accuracy, regardless of the high-noise photon statistics.
This is possible because in this limit the final cavity states resulting from the particle starting in either $\ket{b}$ or $\ket{d}$ have orthogonal support, as evidenced by their vanishing fidelity \{see Eq.~\eqref{fidGeneral} and Ref.~\cite{Liang}\}.
In the zero-coupling limit $m \rightarrow \infty$, however, $P_\text{post}(\text{correct}) = \tfrac{1}{2} = P_\text{prior}(\text{correct})$, so the cavity measurement does not increase our chance of predicting the initial particle state.
\section{Mutual information}
In this section, we introduce an entropic perspective which we use to quantify the correlations between the particle, cavity field, and external reservoir due to the light-particle interactions.
In particular, we use this perspective to define the amount of entropy transferred from the particle to the cavity field.
Because the entropy of interest begins in the particle and ends in the cavity field, we posit that the amount of transferred entropy is characterized by the mutual information~\cite{Nielsen} shared between the initial particle state and the final cavity state.
In general, mutual information is defined as
\begin{equation}
\label{MIGeneral}
\begin{aligned}
I(Y:Z) & = S(Y) + S(Z) - S(YZ)\\
& = S(Y) - S(Y|Z),
\end{aligned}
\end{equation}
in which $Y$ and $Z$ are probability distributions, $\{S(Y),S(Z)\}$ are their information entropies, $S(YZ)$ is their joint entropy, and $S(Y|Z)$ is the conditional entropy of $Y$ given $Z$.
The entropy $S$ is given by the Shannon entropy
\begin{equation}
\label{shannon}
S(Y_\text{cl}) = - \sum_y P(y) \ln P(y)
\end{equation}
for a classical probability distribution $Y_\text{cl}$ with events $y$ and probabilities $P(y)$, whereas it is given by the von Neumann entropy
\begin{equation}
\label{vonNeumann}
S(\hat \rho_Y) \equiv - \Tr \left[\hat \rho_Y\ln \hat \rho_Y\right]
\end{equation}
for a quantum probability distribution described by a density matrix $\hat \rho_Y$.
We have used the natural logarithm in Eq.~\eqref{shannon} for convenience.
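Both entropies are simple to evaluate once the spectrum of the state is available; a minimal numerical sketch (the small-eigenvalue cutoff is our choice):
\begin{verbatim}
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (natural log) of a probability vector, Eq. (shannon)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-15]                 # 0 ln 0 = 0 by convention
    return -np.sum(p * np.log(p))

def von_neumann_entropy(rho):
    """Von Neumann entropy, Eq. (vonNeumann): Shannon entropy of the spectrum."""
    return shannon_entropy(np.linalg.eigvalsh(rho))

print(von_neumann_entropy(np.eye(2) / 2))   # ~ln 2: maximally mixed qubit
\end{verbatim}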
Intuitively, $I(Y:Z)$ is equal to zero in the absence of any correlations between the two distributions, and is maximized if the higher-entropy distribution contains all of the information of the lower-entropy distribution:
\begin{equation}
\label{MIIneq}
\begin{aligned}
0 \leq I(Y_\text{cl}:Z_\text{cl}) &\leq \text{min} [S(Y_\text{cl}),S(Z_\text{cl})], \\
0 \leq I(\hat \rho_Y: \hat \rho_Z) &\leq 2 \, \text{min} [S(\hat \rho_Y),S(\hat \rho_Z)].
\end{aligned}
\end{equation}
Notice that the upper bound of quantum mutual information is twice as large as its classical counterpart~\cite{luo}.
To connect the formalism of this section with the Bayesian inference approach of Section~\ref{bayesSection}, we first calculate the amount of mutual information shared between the initial particle state and final cavity state as determined by the classical conditional probability distribution we derived through the photon number measurements [see Eqs.~\eqref{posterior}].
Then, we calculate their quantum mutual information from a density matrix approach, which incorporates all particle-cavity correlations.
To understand the role of the cavity field in the entropy removal process, we also compare its final entropy to that of the reservoir, which contains the spontaneous photons emitted by the particle.
\subsection{Mutual information from cavity measurements}
Here we calculate the mutual information [Eq.~\eqref{MIGeneral}] between the initial particle state and final cavity state as determined by the photon number measurements of $\hat \eta_L$.
Using the click probabilities from Eq.~\eqref{likelihood}, the notation of Eq.~\eqref{bayes}, and generalizing the conditional probability distribution in Eq.~\eqref{posterior} to any $x$, the conditional entropy of the initial particle state given the cavity measurements is
\begin{widetext}
\begin{equation}
\label{conditionalEntropy}
\begin{aligned}
S[A(0)|L(\infty)] & =
-\sum_C P(C) \sum_i P(\text{start in} \ket{i} | C) \ln [P(\text{start in} \ket{i} | C)] \\
&\displaystyle =
-xf \ln \left[ \frac{fx}{1-x(1-f) }\right]
- (1-x) \, \ln \left[\frac{1-x}{1-x(1-f)}\right].
\end{aligned}
\end{equation}
\end{widetext}
From the classical mixture in Eq.~\eqref{initParticle}, the Shannon entropy of the initial state is
\begin{equation}
\label{particleEntropy}
S[A(0)] \equiv S_0 = - x \ln x - (1-x) \ln (1-x).
\end{equation}
Together, Eqs.~\eqref{conditionalEntropy} and~\eqref{particleEntropy} can be used to calculate the (classical) particle-cavity mutual information
\begin{equation}
\label{MICavity}
I[A(0):L(\infty)] = S[A(0)] - S[A(0)|L(\infty)].
\end{equation}
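Equations~\eqref{conditionalEntropy}--\eqref{MICavity} combine into a closed-form expression that is trivial to evaluate; a minimal sketch (the test values of $x$ and $f$ are ours) that also confirms the limiting behavior $I \rightarrow S_0$ as $f \rightarrow 0$ and $I \rightarrow 0$ as $f \rightarrow 1$:
\begin{verbatim}
import numpy as np

def classical_mi(x, f):
    """Particle-cavity mutual information from cavity measurements, Eq. (MICavity)."""
    s0 = -x * np.log(x) - (1 - x) * np.log(1 - x)   # Eq. (particleEntropy)
    p_nc = 1 - x * (1 - f)                           # P(no click)
    s_cond = (-x * f * np.log(f * x / p_nc)
              - (1 - x) * np.log((1 - x) / p_nc))    # Eq. (conditionalEntropy)
    return s0 - s_cond

print(classical_mi(0.5, 1e-6))   # ~ln 2: nearly all of S_0 is imprinted (f -> 0)
print(classical_mi(0.5, 0.999))  # ~0: almost no information in the cavity (f -> 1)
\end{verbatim}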
Because the cavity state is inherently quantum, the photon number measurements may not access all of its contained information.
Consequently, Eq.~\eqref{MICavity} underestimates the amount of mutual information shared between the particle and cavity.
However, as discussed in the next subsection, Eq.~\eqref{MICavity} will be useful for calculating the quantum mutual information in the thermodynamic limit $\bar n_0 \rightarrow \infty$, for which numerical calculations are intractable due to the infinite dimension of the cavity Hilbert space.
\subsection{Quantum mutual information}
We now include any additional correlations between the initial particle state $\hat \rho_A(0)$ and final cavity state $\hat \rho_L(\infty)$ by determining their quantum mutual information.
As seen from Eq.~\eqref{MIGeneral}, this calculation requires knowing the joint distribution of these two states.
Because these states are defined at different times, we calculate their joint distribution by incorporating a non-interacting auxiliary Hilbert space $R$ which purifies $\hat \rho_A(0)$ and hence contains the entropy of $\hat \rho_A(0)$ at all times.
The quantum mutual information will then be given by
\begin{equation}
\label{QMI}
I(\hat \rho_R :\hat \rho_L) = S(\hat \rho_R) + S(\hat \rho_L) - S(\hat \rho_{RL}),
\end{equation}
in which all entropies $S$ are von Neumann entropies [see Eq.~\eqref{vonNeumann}].
In Appendix~\ref{entropyMISMSection}, we show that the density matrix $\hat \rho_{RL}$ is separable, which means that the quantum mutual information satisfies the classical inequality [the first line of Eq.~\eqref{MIIneq}]
\begin{equation}
0 \leq I(\hat \rho_R :\hat \rho_L) \leq S_0,
\label{MIinequality}
\end{equation}
in which $S_0 \equiv S[\hat \rho_A(0)]$. In other words, somewhere between none [$I(\hat \rho_R :\hat \rho_L)=0$] and all [$I(\hat \rho_R :\hat \rho_L)=S_0$] of the entropy initially contained in the particle becomes encoded in the cavity field.
We elaborate on the repercussions of Eq.~\eqref{MIinequality} in Section~\ref{reservoirEntanglementSection}.
We now explain how to calculate $I(\hat \rho_R :\hat \rho_L)$. First, the initial particle state is purified through its Schmidt decomposition.
That is, we view the particle ensemble as the reduced density matrix of a pure state,
$\hat{\rho}_A(0) = \Tr_R\op{u}{u}_{AR}$, where
\begin{equation}
\label{purification}
\ket{u}_{AR} = \sqrt{x} \ket{b,b} + \sqrt{1-x} \ket{d,d}.
\end{equation}
It can be shown that the reduced density matrices $\hat \rho_A(0)$ and $\hat \rho_R$ have the same eigenvalues \cite{Nielsen}, which motivates the interpretation of $\hat{\rho}_R = \Tr_A\op{u}{u}_{AR}$ as an identically prepared ensemble to $\hat{\rho}_A (0)$, but with particles that are not interacting with the field.
(This explains why the auxiliary particle describes the initial particle entropy: $S[\hat \rho_R(t)]=S_0$.)
The total system's density matrix becomes $\hat{\rho}_{ARL}$, and the master equation [Eq.~\eqref{ME}] is modified by incorporating the identity operator of $R$:
\begin{equation}
\label{substitutions}
\hat{H}_{AL} \rightarrow \hat{H}_{AL} \otimes \hat{\mathbb{I}}_R; \qquad \hat J \rightarrow \hat J \otimes \hat{\mathbb{I}}_R.
\end{equation}
This updated master equation is then used to evolve the initial pure state
\begin{equation}
\hat \rho_{ARL}(0) = \ket{u}\bra{u}_{AR} \otimes \ket{\alpha} \bra{\alpha}_L.
\end{equation}
We can then calculate the entropy of any subset of the $ARL$ composite Hilbert space by performing the appropriate trace operations.
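For the two-level particle, the purification in Eq.~\eqref{purification} and the equality of the spectra of $\hat \rho_A(0)$ and $\hat \rho_R$ can be made concrete in a few lines; a minimal sketch (the basis ordering is our convention):
\begin{verbatim}
import numpy as np

x = 0.5
b, d = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u = np.sqrt(x) * np.kron(b, b) + np.sqrt(1 - x) * np.kron(d, d)  # Eq. (purification)
rho = np.outer(u, u.conj()).reshape(2, 2, 2, 2)  # axes: (A row, R row, A col, R col)
rho_A = np.trace(rho, axis1=1, axis2=3)          # trace out R
rho_R = np.trace(rho, axis1=0, axis2=2)          # trace out A
print(np.linalg.eigvalsh(rho_A), np.linalg.eigvalsh(rho_R))  # same spectra {x, 1-x}
\end{verbatim}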
\begin{figure}
\centerline{\includegraphics[width=\linewidth]{MI.pdf}}
\caption{Entropic quantities scaled by the initial particle entropy $S_0 \equiv S[\hat \rho_A(0)]$. (a) von Neumann entropies $S[\hat \rho_M(t)]$ [see Eq.~\eqref{vonNeumann}] of various subspaces $M$ and quantum mutual information between the field and auxiliary particle $I(\hat \rho_R:\hat \rho_L)$ [see Eq.~\eqref{QMI}] as a function of time $t$. Parameters are: $x=m=0.5$ and $\bar n_0 = 5$. (b) Equilibrium $I(\hat \rho_R:\hat \rho_L)$ as a function of $m$ for various $\bar n_0$ with $x=0.5$. The $\bar n_0 \rightarrow \infty$ curve is given by Eq.~\eqref{MICavity}. The quantum mutual information reaches its maximum value $I(\hat \rho_R:\hat \rho_L)=S_0$ [see Eq.~\eqref{MIinequality}] in the thermodynamic ($\bar n_0 \rightarrow \infty$), infinite-coupling ($m \rightarrow 0)$ limit.}
\label{fig:I_S}
\end{figure}
Figure~\ref{fig:I_S}(a) presents various entropic quantities as a function of time $t$ for the choices $x=m=\tfrac{1}{2}$ and $\alpha = \sqrt{\bar n_0} = \sqrt{5}$.
For convenience, we have scaled all quantities by the initial particle entropy $S_0$.
As expected, the particle entropy $S[\hat \rho_A(t)]$ decreases (as occurs in laser cooling and optical pumping), the cavity state entropy $S[\hat \rho_L(t)]$ increases, and the entire $ARL$ space entropy increases as it evolves under the non-unitary dynamics.
Importantly, the quantum mutual information $I(\hat \rho_ R:\hat \rho_L)$ increases and equilibrates to a nonzero value, which indicates the imprinting of the particle's initial entropy onto the cavity field.
We present numerical results for the equilibrium value of $I(\hat \rho_R:\hat \rho_L)$, scaled by $S_0$, as a function of $m$ for various $\bar n_0$ in Fig.~\ref{fig:I_S}(b) (non-solid curves).
We have dropped the time label $t \rightarrow \infty$ for notational simplicity.
We find for $\bar n_0 \geq 1$ that $I(\hat \rho_R:\hat \rho_L)$ can exceed $S_0/2$, and hence that a significant amount of information about the initial particle state can be imprinted on the cavity field, provided that $m$ is sufficiently small.
As $\bar n_0$ increases, we find numerically that the equilibrium quantum mutual information [Eq.~\eqref{QMI}] and mutual information as determined by the cavity measurements [Eq.~\eqref{MICavity}] converge.
Therefore, we use Eq.~\eqref{MICavity} to analytically calculate the equilibrium value of $I(\hat \rho_R : \hat \rho_L)$ in the thermodynamic limit $\bar n_0 \rightarrow \infty$ (solid curve).
In the strong and weak-coupling limits, this simplifies to
\begin{flalign*}
I(\hat \rho_R :\hat \rho_L) \approx&&
\end{flalign*}
\vspace{-2.1em}
\begin{equation}
\begin{cases}
\displaystyle S_0 - \epsilon(x,m), & \displaystyle m \ll \text{min} \left[1, \, \frac{1}{2 \pi} \left( \frac{1-x}{x}\right)^2\right]\\
\displaystyle -\frac{x\ln x}{4m}, & m \gg 1
\end{cases} \;,
\end{equation}
in which
\begin{equation}
\epsilon(x,m) = - \sqrt{2 \pi m}
\left[
\ln \sqrt{2 \pi m} + \ln \left( \frac{x}{1-x} \right)
-1
\right]
\end{equation}
and $0 \leq \epsilon \ll S_0$.
The additional constraint on $m$ in the strong-coupling limit is a consequence of the highly nonlinear behavior of $I(\hat \rho_R:\hat \rho_L)$ for small $m$.
In the zero-coupling limit $m \rightarrow \infty$, we find that $I(\hat \rho_R:\hat \rho_L) = 0$, so the cavity field does not contain the information entropy of the initial particle state, as expected.
However, in the infinite-coupling limit $m \rightarrow 0$, we find that $I(\hat \rho_R:\hat \rho_L) = S_0$, which means that the cavity field contains complete information about the initial particle state.
This agrees with our Bayesian inference result in the previous section.
We rigorously prove that $I(\hat \rho_R:\hat \rho_L)=S_0$ only when $\bar n_0 \rightarrow \infty$ and $m\rightarrow0$ in Appendix~\ref{entropyMISMSection}.
\subsection{Entanglement with the reservoir}
\label{reservoirEntanglementSection}
In the previous sections, we demonstrated in several ways that the final cavity state contains information about the initial particle state.
It is therefore tempting to conclude that the cavity field has removed entropy from the particle.
However, we have yet to consider the entropy contained in the only remaining subspace: the external reservoir, which contains the spontaneous photons emitted by the particle.
If we denote the reservoir Hilbert space by $P$, it can be shown that [see Eq.~\eqref{spontEntropyBigger}]
\begin{equation}
\label{photonEntropy}
S[\hat \rho_P(\infty)] \geq S[\hat \rho_L(\infty)].
\end{equation}
Physically, Eq.~\eqref{photonEntropy} demonstrates that the spontaneous photons always contain at least as much information about the initial particle state as the cavity field.
In this sense, spontaneous emission is a sufficient mechanism for removing entropy from the particle.
A specific instance of this inequality can be seen in Fig.~\ref{fig:I_S}(a) by noticing that $S[\hat \rho_{ARL}(t)]=S[\hat \rho_P(t)]$.
Although the initial particle state and final cavity state are correlated, the separability of $\hat \rho_{RL}$ indicates that they are not entangled.
This is why their quantum mutual information satisfies the stricter, classical bound [see Eqs.~\eqref{MIIneq} and~\eqref{MIinequality}].
However, Eq.~\eqref{photonEntropy} can be used to show that both the particle and the cavity field become entangled with the external reservoir.
As shown in Appendix~\ref{photonEntropySM},
\begin{equation}
\label{reservoirEntanglement}
\begin{aligned}
I[\hat \rho_P(\infty) : \hat \rho_R(\infty)] \geq S[\hat \rho_R(\infty)], \\
I[\hat \rho_P(\infty) : \hat \rho_L(\infty)] \geq S[\hat \rho_L(\infty)].
\end{aligned}
\end{equation}
Equations~\eqref{reservoirEntanglement} imply entanglement when the inequalities are strict, which occurs if $m>0$. Consequently, the particle and cavity can develop richer quantum correlations with the reservoir than with each other.
\section{Conclusion}
We have demonstrated through a simple Gedankenexperiment under what conditions the entropy of a quantum system can be imprinted on a classical coherent light field, which we modeled as a coherent state in a lossless cavity.
We have quantified the alteration of the cavity state due to the particle-cavity interaction through the fidelity, and we have shown, both by Bayesian inference and by quantum information theoretic techniques, that the cavity field contains information about the initial state of the particle.
Our results demand reconsideration of the underlying physics of laser cooling and optical pumping in the strong particle-light coupling regime~\cite{swap_exp,swap_theory,SWAP_MOT_theory,corder2015laser,Metcalf_2008}, as they suggest that the assumption of an unperturbed light field is not necessarily accurate.
The entropy transfer from the particle to the light field could be realized in an experimental setting by studying a system that satisfies $\kappa \ll \gamma, g$. Although we have mainly focused on the high-$\bar n_0$ limit, we believe that there is also interesting physics in the low-$\bar n_0$ regime.
Our model could also be generalized to further understand some cavity-based quantum memories~\cite{impedanceMatched,spinWave,cavityEnhancedMemory,quantumMem1,quantumMem2,Giannelli_2018} by, e.g., incorporating cavity pumping and loss ($\kappa \neq0$) or initializing the cavity field in a different quantum state.
We also anticipate that this system can be used as a platform to generate and study novel photon-subtracted states~\cite{Agarwal}.
There are many ways to extend this study for the purpose of modeling a laser cooling process more accurately.
Most notably, the incorporation of cavity pumping and loss could permit a nontrivial equilibrium solution even when the particle can relax back to the bright state, which would bring the particle's internal state structure closer to that of typical two-level models.
In this case, one could altogether remove the effects of the background radiation field on the particle to further investigate the entropy dynamics of the laser-particle system~\cite{cavityCooling}.
Of course, particle motion could also be incorporated, potentially allowing for another Hilbert space to exchange entropy.
One could also study this system in the context of phase space compression.
However, the rich correlations generated by quantum mechanical processes preclude a clear phase space approach for quantum systems~\cite{Oliva,Bernardini,Bernardini2}.
This would require, for example, a deeper understanding of the connection between Wigner trajectories and Liouville's theorem~\cite{Sala}.
On a related note, one could consider the use of quantum discord as a measurement of quantum correlation~\cite{Ferraro} as opposed to von Neumann entropy.
\section{Acknowledgments}
The authors thank John Cooper, Konrad Lehnert, and Vera Sch\"afer for helpful discussions and comments.
This work was supported by NIST and by NSF Grant No. PHY 1806827, NSF PFC Grant No. PHY 1734006, and by NSF Grant No. OMA 2016244.
\section{Introduction}
Machine learning (ML) models learn to perform tasks based on real-world data, rather than being explicitly programmed. These models learn the correlation between input features and output labels through a set of examples from the training set. Thus, information about the training samples is implicitly stored in the trained model. This raises severe privacy concerns in cases where the training data contains sensitive information, for instance, when using real patients' data in medical applications \cite{LUNDERVOLD2019102}.
A malicious adversary could potentially exploit access to a trained model to retrieve sensitive information about the training data. In the case of medical applications, most health care providers are bound by the Health Insurance Portability and Accountability Act (HIPAA) or similar laws, that protect patient information. If a health care center wants to use ML systems, such as \cite{Katzman_2018}, to aid diagnosis, it must guarantee that no information about the medical records in the training set is leaked by the model.
In our work, we focus on membership inference attacks as the main method to assess privacy. Membership inference attacks aim at determining if a sample belongs to the training set of a model. For such an attack, the adversary has access to a target sample and the model parameters, and tries to determine if the target sample was part of the training set. If an attacker cannot even determine membership, it is considered infeasible to obtain more detailed information. Therefore, robustness against membership inference attacks prevents other, more severe privacy violations.
On the other hand, a ML model needs to provide a certain utility. It is important that the model performs its task correctly, while maintaining privacy. The quality of a model is measured by its performance on test samples that were not seen during training. The difference between this testing performance and the performance observed during training is the generalization error. Previous work indicates that bad generalization is the common enemy of performance and privacy. Moreover, recent work by \citet{DBLP:journals/corr/abs-1709-01604} implies that there is a strong connection between generalization error and the probability of success of membership inference attacks. In our analysis we take a different approach by considering the success probability of an optimal (worst-case) attacker and show that, while a large generalization error implies a high success probability of the attacker, the converse does not necessarily hold.
Furthermore, we study the amount of information about the training set stored in the model parameters, and how it affects the success of an attacker that tries to infer sensitive information by observing the trained model. Our formalism allows us to answer these questions, draw privacy guarantees in a worst-case scenario and find connections between generalization and privacy.
\subsection{Summary of contributions}
Our work investigates fundamental bounds on information leakage and advances the state-of-the-art in multiple ways.
\textbf{Performance of the optimal attacker:}
We propose a formalism that allows any kind of inference attack, by modeling the sensitive attribute as a finite random variable. Then, by considering the success probability of the optimal attacker, we are able to draw strong conclusions about the privacy of a model.
The optimal attacker has perfect knowledge of the underlying probability distributions, from which it evaluates the conditional probability mass functions (p.m.f.s) of the sensitive attribute given the observed data. As such, it provides an upper bound on the probability of success of any attack strategy (Theorem~\ref{thm:neyman_pearson_optimality}).
This bound represents a privacy guarantee for the machine learning model and can be useful for guiding the design of privacy defense mechanisms.
\textbf{Generalization error and success of the attacker:} A model that does not generalize well is susceptible to privacy attacks. We provide lower bounds for the success probability of the optimal membership inference attack under various conditions on the loss function. Theorem~\ref{thm:bounded_loss}, which generalizes Theorem~1 from \cite{DBLP:journals/corr/abs-1709-01604},
provides a lower bound under the assumption of a bounded loss. Theorem~\ref{thm:MSE_gaussian} extends this result to the case of sub-Gaussian loss functions, and Theorem~\ref{thm:MSE_Exp} covers the case of exponentially tail-bounded loss functions. These results provide formal
evidence that bad generalization leads to privacy leakage.
However, as explained in the following paragraph, the converse
does not hold in general.
\textbf{Good generalization does not imply privacy:} Good generalization does not automatically prevent privacy leakage. This is surprising because, intuitively, one would think that a model that generalizes well would make it hard to distinguish whether a particular sample was part of the training set or not.
Nevertheless we show, by providing a suitable counterexample, that this intuition is wrong.
\textbf{Missing information in inference attacks and its connection to generalization:} Using mutual information, we study the amount of information stored by a trained model about its training set, and the role this information plays when the model is susceptible to privacy attacks.
\textbf{Numerical experiments:} Our first example consists of linear regression with Gaussian data. The simplicity of this setup allows us to analytically compute the generalization error and posterior distributions of the model parameters. In turn, this allows us to estimate the success rate of the optimal white-box attacker. Since the loss function in this setup is exponentially tail bounded, we can also apply Theorem~\ref{thm:MSE_Exp} to monitor the interplay between success rate and generalization. Our second example concerns Deep Neural Networks (DNNs) trained for classification. Considering MSE as the loss function for training these classifiers, we have a bounded loss function, which allows us to apply Theorem~\ref{thm:bounded_loss} to lower bound the success probability of the optimal attacker. We perform likelihood attacks to assess the quality of this bound.
\subsection{Related Work}
The most related work is \citet{DBLP:journals/corr/abs-1709-01604}, which studies the interplay between generalization, differential privacy, attribute and membership inference attacks. Our work investigates related questions, but offers a different and complementary perspective:
\begin{enumerate}
\item While \citet{DBLP:journals/corr/abs-1709-01604} define membership inference and attribute attacks as two separate experiments, our formalism generalizes both cases, making the connection between them explicit.
\item Their analysis considers only bounded loss functions. We extend the results to the more general case of tail-bounded loss functions.
\item Even though \citet{DBLP:journals/corr/abs-1709-01604} consider attackers with white-box access to the target model, their analysis does not consider the worst-case scenario for privacy. Indeed, they take the value of the loss function as the observation used to perform the attack. In this setting, they derive the equivalence between generalization error and membership advantage. In contrast, we consider the optimal Bayesian attacker with white-box access to the target model, yielding an upper bound on the probability of success of all possible adversaries and thus also on the generalization error.
\item We provide a simple example to showcase that overfitting is not a necessary condition for privacy leakage. In contrast to results presented in Section~3.4 of~\cite{DBLP:journals/corr/abs-1709-01604}, our example does not rely on assumptions on efficiency of computation and does not require collusion of learning algorithm and adversary.
\end{enumerate}
\cite{DBLP:journals/corr/ShokriSS16} utilize membership inference attacks to measure privacy leakage in deep neural networks. Their attacks consist of training a classifier that distinguishes members from non-members. While their first work covers the case of black-box attacks, subsequent work by \cite{Nasr_2019} considers white-box attacks, where the adversary has access to the model parameters. Moreover, \cite{8634878} study the influence of model choice on the privacy leakage of ML models via membership inference. Typically, when studying the privacy leakage of ML models, classifiers are considered as the target of privacy attacks. In contrast, \cite{LOGANHayes} consider membership inference attacks against generative models.
A more severe violation of privacy is represented by attribute inference attacks. Mainly two forms of these attacks have been considered in the literature. The first consists of inferring a sensitive attribute from a partially known record plus knowledge of a model that was trained using this record, e.g. \cite{184489,8476925}. The second consists of generating a representative sample of one of the members of the training set, or one of the classes in a classification problem, by exploiting knowledge of the target model, e.g. \cite{10.1145/2810103.2813677,DBLP:journals/corr/abs-1910-04257,Baumhauer2020MachineUL,10.1145/3319535.3354261}. Our framework can be applied to analyse the success of both membership and attribute inference attacks in machine learning.
A third class of privacy violation consists of stealing the functionality of a model, when the model and its parameters are considered sensitive information \cite{10.5555/3241094.3241142}, but this setup is outside the scope of our work.
Finally, \cite{10.1145/2976749.2978318} propose a Differentially Private (DP) Stochastic Gradient Descent method for training neural networks. Their analysis allows one to estimate the privacy budget when successively applying noise to the model parameters during training. In later work, \cite{Zhao_2020} present a comprehensive analysis of DP in ML by considering the different stages in which noise can be added to make an ML model differentially private. \cite{236254} evaluate the effectiveness and cost of DP methods for ML in the light of inference attacks. We do not consider the connection between DP and membership inference attacks, as this is thoroughly analysed by \cite{DBLP:journals/corr/abs-1709-01604}.
\textbf{Outline of this paper.} Section~\ref{sec:Prelim} establishes the notation and definitions used throughout the rest of the paper. In Section~\ref{sec:Oracle} we assess the performance of the optimal attacker and utilize these results to obtain upper bounds on the success probability of inference attacks. Section~\ref{sec:GenErr} studies the connection between generalization error and the success probability of membership inference attacks, deriving lower bounds on the success probability of the optimal attacker in different scenarios. We then address the question of how much information about the training set is stored in the model parameters from the perspective of information theory in Section~\ref{sec:MutualInfo}. In Section~\ref{sec:Exp} we provide two examples that illustrate the usage of our main theoretical results. The first example consists of a linear regression model trained with Gaussian data, and the second of a DNN for classification trained on CIFAR10. Finally, conclusions and a summary are presented in Section~\ref{sec:Conclusion}.
\section{Preliminaries}
\label{sec:Prelim}
We assume a fully Bayesian framework, where $Z = (X,Y) \sim p_{XY} \equiv p_Z$ denotes data $X$ and corresponding labels $Y$, drawn from sets $\mathcal X$ and $\mathcal Y$, respectively. The training set consists of $n$ i.i.d.\ copies $\mathcal{D}_n \triangleq \{z_1,\dots , z_n\}$ drawn according to $\vt Z \sim p_Z^n$.
Let $\mathcal F \triangleq \big \{ f_\theta\, | \, \theta \in \Theta \subseteq \mathbb{R}^d\big\}$ be a \emph{hypothesis class} of (possibly randomized) decision functions parameterized with $\theta$, i.e., for every $\theta \in \Theta$, $ f_\theta({}\cdot{};x) $ is a probability distribution on $\mathcal Y$. We will abuse notation and let $f_\theta(y;x)$ be a probability mass function or a probability density function in $y$ for every $x \in \mathcal X$, depending on the context.
The symbol $\hat Y_\theta(x)$ will be used to denote the random variable on $\mathcal Y$ distributed according to $f_\theta( {}\cdot{};x)$.
In case the decision functions are deterministic, i.e., $f_\theta(y;x) \in \{0,1\}$ is a one-hot pmf for every $\theta \in \Theta$, $x \in \mathcal X$, then we write $\hat y_\theta(x) \in \mathcal Y$ to denote this deterministic decision, i.e., $\hat y_\theta(x)=\arg\max_{y\in\mathcal{Y}} f_\theta(y;x)$.
A \emph{learning algorithm} is a (possibly randomized) algorithm $\mathcal A$ that assigns to every training set $\mathcal{D}_n \in \mathcal X^n \times \mathcal Y^n$ a probability distribution on the parameter space $\Theta$ (and, thus, also on the hypothesis space $\mathcal F$). We have $\mathcal A\colon \mathcal{D}_n \mapsto \mathcal A({}\cdot{};\mathcal{D}_n)$, where $\mathcal A({}\cdot{};\mathcal{D}_n)$ is a probability distribution on $\Theta$. The symbol $\widehat\theta(\mathcal{D}_n)$ is used to denote a random variable on $\Theta$, distributed according to $\mathcal A({}\cdot{};\mathcal{D}_n)$.
In case of a deterministic learning algorithm, we have a p.m.f.\ $\mathcal A(\theta ;\mathcal{D}_n) \in \{0,1\}$ for every training set $\mathcal{D}_n$ and can thus define the function $\widehat\theta(\mathcal{D}_n) = \arg\max_{\theta \in \Theta} \mathcal A(\theta ;\mathcal{D}_n)$, yielding the (possibly random) decision function $f_{\widehat\theta(\mathcal{D}_n)}$.
To judge the quality of a decision function $f \in \mathcal F$ we require a loss function $\ell\colon \mathcal Y \times \mathcal Y \to \mathbb{R}$. We naturally extend this definition to vectors by an average over component-wise application, i.e., $\ell(\vt y, \vt y') = \frac{1}{n} \sum_{i=1}^n \ell(y_i, y'_i)$.
\begin{definition}[Expected risks]
\label{def:expected_risks}
The \emph{expected risk} of a learning algorithm $\mathcal A$ at training set $\mathcal{D}_n$ is\footnote{Note that the expectation is taken over all random quantities, i.e. $\widehat\theta \sim \mathcal A({}\cdot{};\mathcal{D}_n)$, $\hat Y_{\widehat\theta(\mathcal{D}_n)}(X) \sim f_{\widehat\theta}({}\cdot{};X)$ and, $(X,Y)\sim p_Z$.}
\begin{equation}
\mathcal E(\mathcal A,\mathcal{D}_n ) \triangleq \mathbb{E}\big[ \ell\big(\hat Y_{\widehat\theta(\mathcal{D}_n)}(X), Y\big) \big]\;.
\end{equation}
The \emph{empirical risk}\footnote{Note that the empirical risk is computed using the training data of the algorithm.} is defined as
\begin{equation}
\mathcal E_{\mathrm{emp}}(\mathcal A, \mathcal{D}_n) \triangleq \mathbb{E}\bigg[ \frac1n \sum\limits_{i=1}^n\ell\left(\hat Y_{\widehat\theta(\mathcal{D}_n)}(x_i), y_i\right) \bigg]\;.
\end{equation}
The difference between expected and empirical risk is the \emph{generalization error}
\begin{equation}
\mathcal E_{\mathrm{G}}(\mathcal A, \mathcal{D}_n) = \mathcal E(\mathcal A, \mathcal{D}_n) - \mathcal E_{\mathrm{emp}}(\mathcal A, \mathcal{D}_n)\;. \label{eq-generalization-error}
\end{equation}
\end{definition}
In order to make privacy guarantees for an algorithm $\mathcal A$, we need to specify an attacker model and the capabilities of an attacker. We will adopt a point of view of information-theoretic privacy and will not make assumptions about the computation power afforded to an attacker. We will also assume that the attacker has perfect knowledge of the underlying data distribution $p_Z$, as well as the algorithm $\mathcal A$.
In general, the goal of the attacker is to infer some property of $\mathcal{D}_n$ from $\widehat\theta(\mathcal{D}_n)$. However, the attacker may additionally have access to certain side information. This may include the specific potential member of the training set that is queried (in case of a membership inference attack) or any additional knowledge gained by the attacker. This side information is modeled by a random variable $S\in\mathcal S$, dependent on $\vt Z$, the value of which is known to the attacker. The attacker is interested in a target (or concept) property denoted by a random variable $T\in \mathcal T$, which is also dependent on $(\vt Z, S)$. A (white box) \emph{attack strategy} is a (measurable) function $\varphi\colon \Theta \times \mathcal S \to \mathcal T$.
\textbf{Assumption.} We shall assume that $S$ and $T$ are independent, but not necessarily conditionally independent given $\vt Z$. This natural assumption ensures that knowledge of the side-information $S$ does not change the prior $p_T = p_{T|S}$ of the attacker.
\begin{definition}
The Bayes \emph{success probability} of a (randomized) attack strategy $\varphi$ is
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi) = \mathrm P\{\varphi(\widehat\theta(\vt Z), S) = T \}.
\end{align}
We may additionally define the \emph{success probability conditioned on side information $S = s$}
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi|s) = \mathrm P\{\varphi(\widehat\theta(\vt Z), s) = T | S = s\}.
\end{align}
\end{definition}
\begin{definition}[Membership inference attack]
\label{def:memb_inf}
In a membership inference attack, let $T$ be a Bernoulli variable on $\mathcal{T} = \{0,1\}$ and let $J$ be independently and uniformly distributed on $\{1,2,\dots,n\}$. Then set $S = TZ_J + (1-T)Z$, where $Z_J$ is a random element of the training set and $Z \sim p_Z$ is independently drawn. Thus, an attacker needs to determine if $T=1$, i.e., whether $S$ is part of the training set or not.
We also define the expected loss function $\varrho(\theta, (x,y)) \triangleq \mathbb{E}[\ell(\hat Y_{\theta}(x), y)]$ and the corresponding random variable $R \triangleq \varrho(\widehat\theta(\vt Z), S)$ for later use.
\end{definition}
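A minimal sketch of this sampling procedure (the function names and the uniform prior on $T$ are our choices) may help fix ideas:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def membership_sample(train_set, sample_fresh, p_member=0.5):
    """Draw (S, T) as in the membership setup: S = T Z_J + (1 - T) Z."""
    t = int(rng.random() < p_member)                    # T ~ Bernoulli(p_member)
    if t == 1:
        s = train_set[rng.integers(len(train_set))]     # S = Z_J, a training record
    else:
        s = sample_fresh()                              # S = Z ~ p_Z, freshly drawn
    return s, t

train = list(range(10))                      # toy "training set" of records
print(membership_sample(train, lambda: -1))  # e.g. (7, 1) or (-1, 0)
\end{verbatim}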
Although, in practice, the prior distribution of the target attribute $T$ is usually unknown, we define the optimal rejection region of an idealized attacker, having access to all other involved distributions.
\begin{definition}[Most powerful test according to the Neyman-Pearson lemma]\label{def-decision-region}
In a membership inference setup (Definition~\ref{def:memb_inf}), define, for a threshold $0 < \gamma < \infty$, the decision region
\begin{align}
\widehat{\mathcal{T}}(\gamma)
\triangleq \left\{(\theta,s) \in\Theta\times\mathcal{S}:\frac{p_{\widehat{\theta}(\mathbf{Z})S|T}\big(\theta,s |1\big)}{p_{\widehat{\theta}(\mathbf{Z})S|T}\big(\theta,s | 0\big)}> \gamma\right\}.\label{eq:opt}
\end{align}
By the Neyman-Pearson lemma, the most powerful test at threshold $\gamma$ is then given by detecting $T = 1$ for all pairs $(\theta,s) \in \widehat{\mathcal{T}}(\gamma)$, i.e., $\varphi(\theta, s) = 1$ if and only if $(\theta, s) \in \widehat{\mathcal{T}}(\gamma)$.
\end{definition}
In Proposition~\ref{prop:1} we will provide lower bounds on the error achieved by this decision region and make the connection to the fully Bayesian case.
\section{Main Results}
The proofs of all theorems and propositions in this section can be found in the supplementary material.
\subsection{Performance of the Optimal (Worst-case) Attacker}
\label{sec:Oracle}
In this section, we establish two theorems that provide upper bounds on the success probability of an arbitrary attacker. First, consider the general case in which the target attribute $T$ is not necessarily binary, but finite. This case includes both membership and attribute inference attacks. In this case the Bayes classifier is the best possible attacker:
\begin{theorem}[Success of the optimal attacker]
\label{thm:neyman_pearson_optimality}
Assume that $\mathcal T$ is a finite set and $\varphi$ is an arbitrary attack strategy. The Bayes success probability is upper bounded by,
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi) \le \mathbb{E}\bigg[\max_{t\in\mathcal T}p_{T|\widehat\theta(\vt Z) S}(t|\widehat \theta(\vt Z), S)\bigg] \;,
\label{eq:thm1:1}
\end{align}
where the upper bound is achieved by the attack strategy,
\begin{align}
\varphi^{\star}(\theta, s) &= \arg\max_{t \in \mathcal T} p_{T|\widehat\theta(\vt Z) S}(t|\theta, s)\;.
\label{eq:thm1:2}
\end{align}
If the $\arg\max$ in \eqref{eq:thm1:2} is not unique, any $t \in \mathcal{T}$ achieving the maximum can be chosen.
\end{theorem}
Given white-box access to the model and its parameters, as well as side information, the attacker \eqref{eq:thm1:2} has the highest probability of successfully identifying a record in the training set. Thus, resilience against strategy \eqref{eq:thm1:2} provides a strong privacy guarantee.
The following proposition provides similar results for the membership inference problem.
\begin{proposition}[Decision tradeoff]
\label{prop:1}
In a membership inference setup (Definition~\ref{def:memb_inf}), let $\widehat{\mathcal{T}}\subseteq \Theta\times\mathcal{S}$ be any decision set, and define
\begin{align}
\epsilon_{1}(\widehat{\mathcal{T}}) &\triangleq \int_{\widehat{\mathcal{T}}}p_{\widehat{\theta}(\mathbf{Z})S|T}(\theta,s | 0)d\theta ds\;,
\label{eq:type1}\\
{\epsilon}_{0}(\widehat{\mathcal{T}}^c) & \triangleq \int_{\widehat{\mathcal{T}}^c}p_{\widehat{\theta}(\mathbf{Z})S|T}(\theta,s | 1)d\theta ds\;,
\label{eq:type0}
\end{align}
the average Type-I (false positive) and Type-II (false negative) error probabilities, respectively. Then,
\begin{align}
{\epsilon}_{0}(\widehat{\mathcal{T}}) + {\epsilon}_{1}(\widehat{\mathcal{T}}^c) &\geq 1-\Delta ,
\label{eq:prop1}
\end{align}
where $ \Delta \triangleq \big\Vert p_{\widehat{\theta}(\mathbf{Z})S|T=1} - p_{\widehat{\theta}(\mathbf{Z})S|T=0} \big\Vert_\mathrm{TV}\;$ and
$\Vert\cdot\Vert_\mathrm{TV}$ is the total variation distance~\cite{Gibbs2002choosing}. Equality is achieved by choosing $\widehat{\mathcal{T}}^\star \equiv \widehat{\mathcal{T}}(1)$ according to Definition~\ref{def-decision-region}. If the hypotheses are equally likely, then the minimum average Bayesian error satisfies
\begin{equation}
\inf_\varphi\, \mathrm P\left\{\varphi(\widehat{\theta}(\mathbf{Z}),S)\neq T \right\} = \frac12 \left( 1 - \Delta\right)\;.
\label{eq:prop1:2}
\end{equation}
\end{proposition}
Equation~\eqref{eq:prop1}, similar to \eqref{eq:thm1:1}, provides a lower bound for the total error of an arbitrary attacker. Equation~\eqref{eq:prop1:2} provides the error of the optimal attacker from Theorem~\ref{thm:neyman_pearson_optimality} in the case where the hypotheses are equally likely.
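As a toy illustration of Proposition~\ref{prop:1}, suppose the attacker's observation is Gaussian with unit variance under both hypotheses, with means $0$ (for $T=0$) and $1$ (for $T=1$); these distributions are our arbitrary choice. The minimum Bayesian error of Eq.~\eqref{eq:prop1:2} then matches the error of the likelihood-ratio test of Definition~\ref{def-decision-region} at $\gamma = 1$:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

x = np.linspace(-10.0, 11.0, 200_001)
p0, p1 = norm.pdf(x, loc=0.0), norm.pdf(x, loc=1.0)  # p(obs|T=0), p(obs|T=1)
tv = 0.5 * trapezoid(np.abs(p1 - p0), x)             # total variation distance
print(0.5 * (1 - tv))   # minimum Bayes error, Eq. (prop1:2)
print(norm.cdf(-0.5))   # LR test at gamma=1 thresholds at 1/2; both ~0.3085
\end{verbatim}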
\subsection{Generalization Error and Success of the Attacker}
\label{sec:GenErr}
In this section, we explore the connection between generalization error and the success probability of membership inference attacks. Large generalization error implies poor privacy guarantees against membership inference attacks. Moreover, depending on characteristics of the loss function, the probability of success of the attacker is lower bounded by the generalization error:
\begin{theorem}[Bounded loss function]
\label{thm:bounded_loss}
If the loss is bounded by $|\ell| \le \ell_{\mathrm{max}}$, then
there is an attack strategy $\varphi$ for a membership inference attack (Definition~\ref{def:memb_inf}) such that,
\begin{equation*}
\mathcal P_{\mathrm{Suc}}(\varphi) \ge \max\left\{P_{\mathrm{m}}, P_{\mathrm{m}}\left( \frac{|\mathbb{E}[\mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)]|}{2 \ell_{\mathrm{max}}} - 1\right) + 1\right\}\; ,
\end{equation*}
where $P_{\mathrm{m}}\triangleq \max_{t\in\{0,1\}} \mathrm P\{T=t\}$.
\end{theorem}
The above maximum results from the fact that an attacker, knowing the prior probabilities of $T$, will have a success probability of at least $P_{\mathrm{m}}$. Theorem~\ref{thm:bounded_loss} indicates that strong privacy guarantees (i.e., a small success probability for any attacker) imply that the generalization error is also small.
We remark that, on the other hand, ensuring that the generalization error is small does not make a model robust against membership inference attacks. We shall return to this important point in Section~\ref{sec:good-gener-not}.
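The bound of Theorem~\ref{thm:bounded_loss} is immediate to evaluate once an estimate of the expected generalization error is available; a minimal sketch (the numerical values are an arbitrary illustration):
\begin{verbatim}
def membership_lower_bound(gen_err, loss_max, p_m=0.5):
    """Lower bound of Theorem (bounded loss) on the optimal attacker's success."""
    return max(p_m, p_m * (abs(gen_err) / (2 * loss_max) - 1) + 1)

# A model with loss bounded by 1, expected generalization error 0.4, uniform prior:
print(membership_lower_bound(0.4, 1.0))   # -> 0.6 > 0.5: better than guessing
\end{verbatim}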
In the following, we extend the result of Theorem~\ref{thm:bounded_loss} to sub-Gaussian and exponentially tail-bounded loss functions.
\begin{theorem}[Sub-Gaussian loss]
\label{thm:MSE_gaussian}
In a membership inference problem (Definition~\ref{def:memb_inf}), assume that $R = \varrho(\widehat\theta(\vt Z), S)$ is a sub-Gaussian random variable with variance proxy $\sigma^2_R$. For all $R_{\mathrm{max}} \ge r_0 \triangleq \sqrt{2 \sigma_R^2 \log 2}$, there exists an attack strategy $\varphi$, such that,
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi) &\ge \max\Big\{P_{\mathrm{m}},
P_{\mathrm{m}} \Big( \frac{|\mathbb{E}[\mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)]|}{2 R_{\mathrm{max}}} - \frac{C(R_{\mathrm{max}}, \sigma_R)}{1-P_{\mathrm{m}}} - 1\Big) + 1\Big\}\;. \label{eq:th:psuc}
\end{align}
where $C(R_{\mathrm{max}}, \sigma_R) \triangleq \exp\left(-\frac{R_{\mathrm{max}}^2}{2\sigma_R^2}\right) \left(1 + \frac{\sigma_R^2}{R_{\mathrm{max}}^2} \right)$.
\end{theorem}
\begin{theorem}[Tail-bounded loss]
\label{thm:MSE_Exp}
In a membership inference problem (Definition~\ref{def:memb_inf}), assume that $R \triangleq \varrho(\widehat\theta(\vt Z), S)$ is such that $\mathrm P\{|R| \ge r\}\le 2\exp(-r/2\sigma_R^2)$ for all $r \ge 0$ with some variance proxy $\sigma_R^2 > 0$. Then, for all $R_{\mathrm{max}} \ge r_0 \triangleq 2 \sigma_R^2 \log 2$, there is an attack strategy $\varphi$ such that,
\eqref{eq:th:psuc} holds with
\begin{equation}
C(R_{\mathrm{max}},\sigma_R) \triangleq \exp\left(-\frac{R_{\mathrm{max}}}{2\sigma_R^2}\right) \left(1 + \frac{2\sigma_R^2}{R_{\mathrm{max}}} \right).\label{eq-missing-eq}
\end{equation}
\end{theorem}
Note that in principle both Theorems~\ref{thm:MSE_gaussian} and~\ref{thm:MSE_Exp} are applicable when the loss is bounded, since all bounded random variables are sub-Gaussian and exponentially tail-bounded; nonetheless, we expect Theorem \ref{thm:bounded_loss} to provide a tighter bound in this case, as it certainly does for $\ell_{\mathrm{max}}=R_{\mathrm{max}}$.
In practice the distribution of the loss for a particular model is often unknown; however, it can be estimated and fitted to one of the cases presented in this section. Then, these results can be applied to measure the potential impact of generalization on the privacy leakage of the model.
\subsection{Good Generalization Is Not Enough to Prevent Successful Attacks}
\label{sec:good-gener-not}
\emph{Generalization does not imply privacy}. We show this by constructing a synthetic example of a membership inference problem, where the generalization error can be made arbitrarily small, while $T$ can be determined with certainty by an attacker. The details of the construction are given in \citesupp{appendix-CounterExample}. This yields $\mathrm P\{R=0|T=1\} = 1$ and the conditional p.d.f.
\begin{equation}
\label{eq:pRT}
p_{R|T}(r|0) = \frac{1}{\epsilon}\Lambda((r-D)/\epsilon) ,
\end{equation}
where $\Lambda(r) \triangleq \max(1-|r|,0)$ is the triangle distribution. The parameters $0 < \epsilon < D$ can be chosen arbitrarily. Clearly, an attacker can then simply check whether $R=0$ to determine $T$ with probability one.
On the other hand, from~\eqref{eq:pRT}, it is easily verified that,
\begin{align*}
|\mathbb{E}[\mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)]| &= \left| \mathbb{E}[R|T=0] - \mathbb{E}[R|T=1] \right| = D .
\end{align*}
Thus, by varying the parameter $D$, we can make the generalization error arbitrarily small, while the attacker maintains perfect success. Therefore, good generalization does not prevent the attacker from easily determining which samples were part of the training set.
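The counterexample can also be simulated directly; a minimal sketch (the specific values of $D$, $\epsilon$, and the sample size are our choices) in which the attacker achieves perfect accuracy while $|\mathbb{E}[\mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)]| = D = 10^{-2}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
D, eps, n = 1e-2, 5e-3, 100_000          # requires 0 < eps < D
T = rng.integers(0, 2, size=n)           # membership labels, uniform prior
# R = 0 for members (T=1); triangular p.d.f. centered at D for non-members (T=0).
triangle = eps * (rng.random(n) + rng.random(n) - 1.0)
R = np.where(T == 1, 0.0, D + triangle)
prediction = (R == 0.0).astype(int)      # the attacker simply tests R == 0
print(np.mean(prediction == T))          # -> 1.0, perfect membership inference
\end{verbatim}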
\subsection{On the Amount of Missing Information in Attribute Inference and Generalization} \label{sec:MutualInfo}
We aim at investigating the following simple but fundamental questions, from the perspective of information theory:
\begin{itemize}
\itemsep1mm
\item \emph{How much information do the model parameters $\widehat\theta(\mathcal{D}_n)$ store about the training set $\mathcal{D}_n$? And how is this information related to the generalization error?}
\item \emph{How much information about the unknown (sensitive) attribute $T$ is contained in the model parameters $\widehat\theta(\mathcal{D}_n)$ and the side information $S$? And how much information is needed for the inference of $T$? }
\item \emph{How do the above information quantities relate to, or bound, each other?}
\end{itemize}
From the point of view of information theory, these questions make sense only if we consider $\widehat\theta(\mathcal{D}_n)$ and $T$ as random variables, that is, if we attribute probabilities to the target attribute and model parameters, which is perfectly consistent with the framework investigated in this paper.
\begin{theorem}[Mutual information]
\label{the4}
Let $\widehat T \triangleq \varphi(\widehat\theta(\vt Z), S)$ be the (random) prediction of any attacker $\varphi$ with $\mathcal P_{\mathrm{Suc}}(\varphi)= \mathrm P\{\widehat T = T\}$. Then,
\begin{align}
I \big(T; \widehat{\theta}(\mathbf{Z}) \big| S \big) &\geq
d_{\text{KL}}\Big( \mathcal P_{\mathrm{Suc}}(\varphi) \, \Big\| \, \max_{t\in\mathcal{T}}\, p_{T} (t) \Big),\label{eq-inequality-missing1}
\end{align}
where $d_{\text{KL}}( p \|q )$ denotes the KL divergence between Bernoulli random variables with probabilities $(p,q)$. Moreover, for $\epsilon \ge 0$, the generalization error $\mathcal E_{\mathrm{G}}$ satisfies
\begin{align}
\mathrm P \big( {\mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)} \ge \epsilon \big) &\leq \frac{I(\mathbf{Z}; \widehat{\theta}(\mathbf{Z}) )+1}{{n K(\epsilon) }}
\label{eq-inequality-missing2} , \end{align}
where
\begin{equation}
K(\epsilon)\triangleq \essinf_{\theta \sim P_{\widehat\theta(\mathbf{Z})}} \psi_{ { \mathbb{E}[\varrho(\widehat\theta,(X,Y))] - \varrho(\theta,(X,Y))}}^{\ast}(\epsilon)
\end{equation}
is an essential infimum w.r.t.\ $\theta \sim P_{\widehat\theta(\mathbf{Z})}$ of the Fenchel-Legendre dual function $\psi^\star$ of the log moment generating function of $\mathbb{E}[\varrho(\theta,(X,Y))] - \varrho(\theta,(X,Y))$. Furthermore,
\begin{align}
I(T; \widehat{\theta}(\mathbf{Z}) | S) & \leq I(\mathbf{Z}; \widehat{\theta}(\mathbf{Z}) ) - I( S;\widehat{\theta}(\mathbf{Z}) ). \label{eq-inequality-missing3}
\end{align}
\end{theorem}
The proof of Theorem~\ref{the4} is relegated to \citesupp{appendix-Theorem-info}.
The mutual information expressions in \eqref{eq-inequality-missing1} and \eqref{eq-inequality-missing2} are related by the inequality \eqref{eq-inequality-missing3},
where $I(\mathbf{Z}; \widehat{\theta}(\mathbf{Z}) )$ represents the average amount of information about the random training set $\mathbf{Z}$ retained in the model parameters $\widehat{\theta}(\mathbf{Z})$; and $I(S;\widehat{\theta}(\mathbf{Z}) )$ indicates the amount of information already contained in the side information $S$ before observing the parameters $\widehat{\theta}(\mathbf{Z})$.
From \eqref{eq-inequality-missing3} it is clear that by controlling the average number of bits of information about the training set $\mathbf{Z}$ that the model parameters $\widehat{\theta}(\mathcal{D}_n)$ store, i.e., $I(\mathbf{Z}; \widehat{\theta}(\mathbf{Z}) ) \leq r$, it is possible to control both the generalization error in \eqref{eq-inequality-missing2} and the accuracy of any possible attacker in \eqref{eq-inequality-missing1}. Nevertheless, a more effective defense strategy may aim directly at reducing the mutual information $I(T; \widehat{\theta}(\mathbf{Z}) | S)$, which is expected to have less severe impact on the performance of the trained model, i.e., the expected risk $\mathbb{E}\big[ \ell(\hat Y_{\widehat\theta(\mathbf{Z})}(X), Y)\big]$.
As~\eqref{eq-inequality-missing1} indicates, the performance of any attacker must be close to a random guess if the mutual information $I(T; \widehat{\theta}(\mathbf{Z}) | S)$ is suitably small. This inequality can be evaluated numerically to obtain an upper bound on $\mathcal P_{\mathrm{Suc}}(\varphi )$.
Moreover, an explicit bound can be given:
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi ) &\leq \min \Big \{
\sqrt{ \frac12 I(T; \widehat{\theta}(\mathbf{Z}) | S) } + \max_{t\in\mathcal{T}}\, p_{T} (t) , \frac{I(T; \widehat{\theta}(\mathbf{Z}) | S) +1 }{ -\log_2 \max_{t\in\mathcal{T}} \, p_{T} (t) }\Big\}. \nonumber
\end{align}
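To make this bound concrete, it can be evaluated numerically. Below is a minimal Python sketch (the unit convention, bits in the Fano-type term and nats in the Pinsker-type term, is our assumption):
\begin{verbatim}
import math

def success_upper_bound(mi_bits, p_max):
    """Upper bound on the success probability of any attacker.

    mi_bits: conditional mutual information I(T; theta | S) in bits.
    p_max:   largest prior probability max_t p_T(t).
    """
    mi_nats = mi_bits * math.log(2)               # Pinsker-type term in nats
    pinsker_term = math.sqrt(0.5 * mi_nats) + p_max
    fano_term = (mi_bits + 1.0) / (-math.log2(p_max))
    return min(pinsker_term, fano_term, 1.0)

# With I = 0.05 bits and a uniform binary prior, no attacker can
# succeed much more often than a random guess:
print(success_upper_bound(0.05, 0.5))
\end{verbatim}
As the mutual information tends to zero, the Pinsker-type term drives the bound down to the prior probability $\max_t p_T(t)$.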
The generalization error bound in \eqref{eq-inequality-missing2} is subtly different from most PAC-Bayes learning scenarios. In the present case, we bound the joint probability over both the training data $\mathbf{Z}$ and the randomness involved in the learning algorithm, which is within the spirit of \cite{pmlr-v83-bassily18a}, but thanks to the term $K(\epsilon)$ the present bound is tighter.
Assuming that the loss is sub-Gaussian or bounded, it is not difficult to provide a lower bound for $K(\epsilon)$ that is independent of the underlying data distribution.
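For instance, under the additional assumption (made here only for illustration) that the centered loss is $\sigma$-sub-Gaussian, its log moment generating function satisfies $\psi(\lambda)\le\lambda^{2}\sigma^{2}/2$ for all $\lambda$, and hence
\begin{equation*}
K(\epsilon) \,\ge\, \sup_{\lambda}\Big(\lambda\epsilon-\frac{\lambda^{2}\sigma^{2}}{2}\Big) \,=\, \frac{\epsilon^{2}}{2\sigma^{2}},
\end{equation*}
so that \eqref{eq-inequality-missing2} reduces to the familiar form $\mathrm P\big(\mathcal E_{\mathrm{G}}(\mathcal A,\vt Z)\ge\epsilon\big)\le 2\sigma^{2}\big(I(\mathbf{Z};\widehat{\theta}(\mathbf{Z}))+1\big)/(n\epsilon^{2})$.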
\section{Examples and Numerical Experiments }
\label{sec:Exp}
\subsection{Linear Regression on (Synthetic) Gaussian Data}
\label{sec:gaussians}
The following example allows us to illustrate how the theoretical results from the previous section might be used to assess the privacy guarantees of a specific model. We implement the optimal attacker from Theorem~\ref{thm:neyman_pearson_optimality} and estimate its success probability to monitor the privacy leakage of the model as a function of the number of training samples. Additionally, since the loss is tail-bounded exponentially, we use Theorem~\ref{thm:MSE_Exp} to derive lower bounds on the success probability of the attacker.
For $i\in[n]$, let $x_i$ be a fixed vector in $\mathbb{R}^d$ and, for a fixed vector $\beta \in \mathbb{R}^d$, let $Y_i = \beta^T x_i + W_i$ with $\mathbb{E}[W_i] = 0$ and $\mathbb{E}[W_i^2] = \sigma^2 < \infty$ for $i\in[n]$. The training set is $\mathcal{D}_n = \{y_1,\ldots,y_n\}$, a realization of $Y_i$ for each $i\in[n]$. The function space $\mathcal F$ consists of linear regression functions $f_\theta(x_i) = \theta^T x_i$ for $\theta \in \mathbb{R}^d$, and the deterministic algorithm $\mathcal A$ minimizes the squared error on the training set, thus yielding\footnote{Let $\vt x$ be the $[d \times n]$ matrix $\vt x = (x_1, x_2, \dots, x_n)$. Similarly, $\vt y = (y_1, y_2, \dots, y_n)$ and $\vt W = (W_1, W_2, \dots, W_n)$ are $[1 \times n]$ vectors.} $\widehat\theta(\vt y) = (\vt x \vt x^T)^{-1} \vt x \vt y^T$ and the associated decision function
$f_{\widehat\theta(\vt y)}(x_i) = \vt y \vt x^T (\vt x \vt x^T)^{-1}x_i$. Using squared error loss, $\ell(y, y^\prime) = (y-y^\prime)^2$, we obtain the generalization error,
\begin{equation}
\mathbb{E} [ \mathcal E_{\mathrm{G}}(\mathcal A, \vt Z)] = \frac{2d}{n}\, \sigma^2 \;.
\label{eq:genErrorGaus1}
\end{equation}
A derivation of this formula is presented in \citesupp{appendix-GaussianExample}. Assuming the noise $W$ to be Gaussian, the response $\vt Y =\beta^T \vt x + \vt W$, with $\vt W$ a row vector of i.i.d.\ components, also follows a Gaussian distribution. Similarly, the model parameters $\widehat\theta(\vt Y)$ are normally distributed. Now choose a test sample $S=T\,Y_J+(1-T)\,Y'_J$, where $J$ is a random index in $[n]$, $Y_J$ is the $J$-th component of the (random) training set and $Y'_J$ is drawn independently of the training set. Assuming a Bernoulli $1/2$ prior on the hypothesis $T$, the success probability of the optimal attack $\varphi^{\star}$ is given by
\begin{align}
\mathcal P_{\mathrm{Suc}}(\varphi^{\star}) = 1-\frac12\left[\epsilon_{0}\big(\widehat{\mathcal{T}}(1)\big) + \epsilon_{1}\big(\widehat{\mathcal{T}}(1)^c\big)\right]\;,
\label{eq:SuccProbGaussian1}
\end{align}
with the Type-I and Type-II errors defined by~\eqref{eq:type1} and~\eqref{eq:type0}, respectively, and the optimal decision region $\widehat{\mathcal{T}}(1)$ defined by~\eqref{eq:opt}. The posteriors are given by
\begin{align}
p_{S\widehat\theta(\vt Y)|T}(s,\theta|0) &= \frac{1}{n}\sum_{i=1}^n Q(\theta) p_{Y_i}(s)\;, &
p_{S\widehat\theta(\vt Y)|T}(s,\theta|1) &= \frac{1}{n}\sum_{i=1}^n Q_{i}(\theta|s) p_{Y_i}(s)\;.
\end{align}
The index $i$ indicates the feature vector $x_i$ from which the test sample $s$ is generated. $Q(\theta)$ is the distribution of the model parameters conditioned to $T=0$. It is independent of the test sample $s$ and of the index $i$. $Q_i(\theta|s)$ is the distribution of the model parameters conditioned to $T=1$. Since, under this hypothesis, the attacker assumes $s$ is one of the samples in the training set, this conditional distribution depends on the test sample $s$ and its corresponding index $i$. The distribution of the test sample $p_{Y_i}$ is defined by $p_{Y_i}({}\cdot{})\triangleq\mathcal N({}\cdot{};\beta^T x_i,\sigma^2)$ and the distributions of the parameters $Q$ and $Q_i$ are derived in \citesupp{appendix-GaussianExample}. These two distributions are also Gaussian and they differ only in their mean and variance.
The success probability of the optimal attack strategy in Theorem~\ref{thm:neyman_pearson_optimality} is given by~\eqref{eq:SuccProbGaussian1}. In our experiments we perform a Monte Carlo estimation of the integrals in~\eqref{eq:type1} and~\eqref{eq:type0}, by randomly drawing $T$, $s$ and $\theta$. The posterior distributions can be computed in closed form with the above definitions. Since the loss is exponentially tail-bounded, we can apply Theorem~\ref{thm:MSE_Exp} to obtain the lower bound
\begin{equation}
\mathcal P_{\mathrm{Suc}}(\varphi^{\star}) \ge \frac{1}{2} + \frac{d}{2n }\frac{\sigma^2}{R_{\mathrm{max}}} - C(R_{\mathrm{max}}, \sigma) ,
\label{eq:SuccProbGausLB1}
\end{equation}
where we used \eqref{eq:genErrorGaus1} and $C(R_{\mathrm{max}}, \sigma)$ is defined in expression \eqref{eq-missing-eq}. $R_{\mathrm{max}}$ can be chosen to maximize the right-hand side of \eqref{eq:SuccProbGausLB1}. In our experiments, we choose the optimal $R_{\mathrm{max}}$ using the golden section search algorithm.
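The closed form \eqref{eq:genErrorGaus1} is also easy to check by simulation. The following NumPy sketch (our own illustration; the population risk is averaged over fresh responses at the same fixed design points) reproduces the $2d\sigma^{2}/n$ gap:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, sigma = 20, 200, 1.0
x = rng.normal(size=(d, n))        # fixed design, columns are the x_i
beta = rng.normal(size=d)

gaps = []
for _ in range(2000):
    y = beta @ x + sigma * rng.normal(size=n)
    theta = np.linalg.solve(x @ x.T, x @ y)       # least-squares estimate
    emp_risk = np.mean((y - theta @ x) ** 2)      # empirical risk
    pop_risk = sigma**2 + np.mean(((beta - theta) @ x) ** 2)
    gaps.append(pop_risk - emp_risk)

print(np.mean(gaps), 2 * d * sigma**2 / n)        # both close to 0.2
\end{verbatim}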
Algorithm~\ref{alg:exp1} details our simulations to estimate the success rate of the optimal attacker. It returns `$1$' when the attacker successfully predicts whether the test sample $s$ was part of the training set or not, and `$0$' otherwise. In our experiments we vary $n$ to study how the generalization error and success rate of the attacker evolve as a function of the number of training samples. The dimension of the feature space is fixed to $d=20$. For each value of $n$, we fix $\vt x$ and we repeat ($10$k times) Algorithm~\ref{alg:exp1} to estimate the success rate of the attacker. The feature vectors $\vt x$ are generated i.i.d. and then fixed for each value of $n$. Additionally, for each $n$, we compute the lower bound \eqref{eq:SuccProbGausLB1} and the generalization error \eqref{eq:genErrorGaus1}.
\begin{algorithm}[tb]
\caption{Experiment 1}
\label{alg:exp1}
\begin{algorithmic}
\STATE {\bfseries Input:} feature vectors $\vt x$, training set size $n$
\STATE Draw $t$ uniform in $\{0,1\}$
\STATE Draw $j$ uniform in $[n]$
\STATE Draw $\vt W$ with i.i.d.\ $\mathcal{N}(0,\sigma^2)$ entries and $W \sim \mathcal{N}(0,\sigma^2)$
\STATE $\vt y \longleftarrow \beta^T \vt x + \vt W$
\IF{$t$}
\STATE $s\longleftarrow y_j$
\ELSE
\STATE $s\longleftarrow \beta^T x_j + W$
\ENDIF
\STATE $\theta \longleftarrow (\vt x \vt x^T)^{-1} \vt x \vt y^T$
\STATE {\bfseries return} $p_{S\widehat\theta(\vt Y)|T}(s,\theta|1)>p_{S\widehat\theta(\vt Y)|T}(s,\theta|0)$ {\bfseries XNOR } $t$
\end{algorithmic}
\end{algorithm}
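For completeness, one trial of Algorithm~\ref{alg:exp1} can be sketched in NumPy as follows. The means and covariances of $Q$ and $Q_i$ below follow from the Gaussian model above, but they are our own reconstruction; the exact expressions are derived in \citesupp{appendix-GaussianExample}:
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
d, n, sigma = 5, 50, 1.0
x = rng.normal(size=(d, n))                  # fixed design
beta = rng.normal(size=d)
Sxx_inv = np.linalg.inv(x @ x.T)
cov0 = sigma**2 * Sxx_inv                    # covariance of theta-hat, T = 0

def trial():
    t = rng.integers(2)
    j = rng.integers(n)
    y = beta @ x + sigma * rng.normal(size=n)
    s = y[j] if t else beta @ x[:, j] + sigma * rng.normal()
    theta = Sxx_inv @ (x @ y)
    # p(s, theta | T=0): theta independent of s, theta ~ Q = N(beta, cov0)
    p0 = multivariate_normal.pdf(theta, mean=beta, cov=cov0) \
         * np.mean(norm.pdf(s, loc=beta @ x, scale=sigma))
    # p(s, theta | T=1): mixture over the index i of the training sample
    p1 = 0.0
    for i in range(n):
        a_i = Sxx_inv @ x[:, i]
        mean_i = beta + a_i * (s - beta @ x[:, i])   # mean of Q_i(.|s)
        cov_i = cov0 - sigma**2 * np.outer(a_i, a_i)
        p1 += multivariate_normal.pdf(theta, mean=mean_i, cov=cov_i,
                                      allow_singular=True) \
              * norm.pdf(s, loc=beta @ x[:, i], scale=sigma)
    p1 /= n
    return int((p1 > p0) == bool(t))          # 1 iff the attacker is right

print(np.mean([trial() for _ in range(2000)]))  # estimated success rate
\end{verbatim}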
Figure~\ref{fig:Gaus:2} shows the success rate (SR) of the optimal attacker as a function of the number of samples in the training set $n$, along with the lower bound (LB) provided by Theorem~\ref{thm:MSE_Exp}. The bound predicts the behavior of the SR as a function of the generalization error. For large $n$ (small generalization error), the success rate and its lower bound approach $0.5$, the success rate of an attacker that only uses knowledge of the prior of $T$.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{axis}[axis1,ylabel={LB: \ref*{pgf:LB3}, SR: \ref*{pgf:SR3}}]
\addplot +[] table[basicTable,x=c,y=b]{./plots/Gaussian/SucRateVsGenD20.csv};
\label{pgf:SR3}
\addplot +[] table[basicTable,x=c,y=b]{./plots/Gaussian/LBVsGenD20.csv};
\label{pgf:LB3}
\end{axis}
\end{tikzpicture}
\caption{Success Rate (SR) and Lower Bound (LB) depend on number of training samples $n$.}
\label{fig:Gaus:2}\vspace{-5mm}
\end{figure}
\subsection{Examples on DNNs}
\label{sec:DNNs}
We train DNNs on various datasets to study the interplay between generalization error and the success rate of an attacker that uses the confidence of the target model as a criterion for the attack. We compare the success rate of the attacker to the lower bound provided by Theorem~\ref{thm:bounded_loss}, to assess the quality of the bound.
The three datasets considered for these experiments are CIFAR10, MNIST and Fashion-MNIST. For MNIST and Fashion-MNIST, the training set is drawn from a pool of $60$k samples; a separate pool of $10$k samples is fixed as the test set. For CIFAR10, the pool of training samples has size $50$k and the pool of test samples $10$k. For a given $n$, we pick uniformly at random that number of samples from the training pool to form the training set.
The target model in this setup is a deep neural network with $4$ convolutional layers and $3$ fully connected layers. For CIFAR10 the model has a total of $439722$ parameters, while for MNIST and Fashion-MNIST it has only $376714$. The model is trained for up to $150$ epochs using the Adam optimizer with learning rate $5\cdot 10^{-3}$. An early stopping criterion compares the current loss over the training set to the total loss after the previous epoch, and stops training if the difference is below $10^{-3}$. The number of training epochs can change drastically depending on the size of the training set.
The loss function used for training and for computing the generalization error is the Mean Squared Error (MSE) loss between the one-hot encoded labels and the soft probabilities output by the network. Since the output of the model is a vector of soft probabilities summing to one and the labels are one-hot encoded, the MSE between these two is bounded by $2$ (indeed, $\|p - e_y\|_2^2 = \|p\|_2^2 - 2p_y + 1 \le 2$, since $\|p\|_2^2 \le 1$ and $p_y \ge 0$). This allows us to apply Theorem~\ref{thm:bounded_loss} to lower bound the success probability of the optimal attacker. However, in this setup it is infeasible to estimate the success probability of the optimal attacker, due to the high number of model parameters. To circumvent this limitation and assess the quality of the bound provided by Theorem~\ref{thm:bounded_loss}, we implement a likelihood attack, a well-known method for membership inference, and compare its success rate to the provided bound.
\begin{algorithm}[tb]
\caption{Likelihood Attack}
\label{alg:attak1}
\begin{algorithmic}
\STATE {\bfseries Input:} Target model $\mathrm{NN}$, threshold $h$
\STATE Draw $t$ uniform in $\{0,1\}$
\IF{$t$}
\STATE Draw $s$ uniform from the training set.
\ELSE
\STATE Draw $s$ uniform from the test set.
\ENDIF
\STATE $\mathrm{likelihood}\longleftarrow\mathbf{max}(\mathrm{NN}(s))$
\STATE {\bfseries return} $\mathrm{likelihood}>h$ {\bfseries XNOR } $t$
\end{algorithmic}
\end{algorithm}
The likelihood attack, detailed in Algorithm~\ref{alg:attak1}, exploits the level of confidence of the trained model in its prediction. The assumption is that the model will make more confident predictions on samples that were part of its training set. Algorithm~\ref{alg:attak1} outputs $1$ if the attacker infers membership correctly and $0$ otherwise. Experimentally, we found that a threshold of $h=0.8$ works best across different values of $n$. In our experiments we perform the attack $1$k times per trained model and take the average as the success rate of the attacker.
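In code, the attack amounts to a single threshold test on the model's top output. Below is a minimal PyTorch-style sketch (\texttt{model} is a placeholder for the trained network; drop the softmax if the network already outputs probabilities):
\begin{verbatim}
import torch

@torch.no_grad()
def likelihood_attack(model, sample, h=0.8):
    # Guess 'member' (1) when the top soft probability exceeds h,
    # following Algorithm 2.
    probs = torch.softmax(model(sample.unsqueeze(0)), dim=-1)
    return int(probs.max().item() > h)
\end{verbatim}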
\begin{figure}
\centering
\begin{tikzpicture}
\begin{semilogxaxis}[leftAxis, ylabel={LB: \ref*{pgf:LB}, SR: \ref*{pgf:Like}}]
\addplot+[] table[basicTable, y = b, x = c ]{./plots/Cifar10/PsucLikihood.csv};
\label{pgf:Like}
\addplot+[] table[basicTable, y = b, x = c ]{./plots/Cifar10/PsucLB2.csv};
\label{pgf:LB}
\end{semilogxaxis}
\begin{semilogxaxis}[rightAxis, ylabel={Acc: \ref*{pgf:Acc}}]
\pgfplotsset{cycle list shift=+3}
\addplot+[] table[basicTable, y = a, x = b ]{./plots/Cifar10/AccTrain.csv};
\label{pgf:Acc}
\end{semilogxaxis}
\end{tikzpicture}\vspace{-3mm}
\caption{Success Rate (SR), Lower Bound (LB) and Accuracy (Acc) depend on the number of training samples $n$. The CIFAR10 dataset was used.}
\label{fig:Cifar10}\vspace{-6mm}
\end{figure}
For each $n$, we repeat the following process $10$ times: Draw uniformly the training set, train the model, compute the generalization error, compute the lower bound on the success rate of the optimal attacker and perform the likelihood attack. The results over different training sets are averaged to produce a single value for each $n$. Figure~\ref{fig:Cifar10} shows the success rate (SR) of the likelihood attack as a function of the number of samples in the training set $n$, along with the lower bound (LB) provided by Theorem~\ref{thm:bounded_loss}, for experiments performed with CIFAR10. The accuracy on the training set is also reported in Figure~\ref{fig:Cifar10}. The lower bound predicts the behaviour of the success rate of the likelihood attack as a function of the generalization error; both approach $0.5$ as the generalization error vanishes. Additional results on MNIST and Fashion-MNIST are provided in \citesupp{appendix-MNIST}.
\section{Summary and Concluding Remarks}
\label{sec:Conclusion}
We proposed a theoretical framework to analyse and bound information leakage from machine learning models. Our framework allows us to draw strong privacy guarantees, even when facing an optimal (worst-case) attacker, and to derive bounds that relate the success rate of this attacker to generalization error. Furthermore, we studied how much information is stored in a trained model about its training data and its implications in terms of the leakage about the sensitive attribute from the model parameters.
Our experiments in linear regression and DNN models illustrate how the bounds from Section~\ref{sec:GenErr} can be used to assess the information leakage of ML models. More specifically, the success rate of the attacker follows the same behaviour as the lower bounds as the generalization error vanishes. As a lower bound, it cannot guarantee that there is no attack that can perform better under the same conditions. Nevertheless, if the lower bound is above the performance of a random guess, the target model is guaranteed to leak sensitive information about its training data. When designing a model and determining the hyperparameters for training, this knowledge can be used to mitigate information leakage.
The success rate of the optimal attacker provides a strong guarantee on the privacy of a model. However, computing the associated decision region seems computationally infeasible. In this paper we provided a synthetic example, using linear regression and Gaussian data, in which it is possible to analytically compute the involved distributions. In future work, we will explore novel tools to extend our illustrative examples to a systematic analysis of complex models. In addition, the present framework and in particular the characterization of the optimal (worst-case) attacker can be used to devise novel privacy attacks and defense mechanisms.
\begin{abstract}
Many automated operations in agriculture, such as weeding and plant counting, require robust and accurate object detectors. Robotic fruit harvesting is one of these, and is an important technology to address the increasing labour shortages and uncertainty suffered by tree crop growers. An eye-in-hand sensing setup is commonly used in harvesting systems and provides benefits to sensing accuracy and flexibility. However, as the hand and camera move from viewing the entire trellis to picking a specific fruit, large changes in lighting, colour, obscuration and exposure occur. Object detection algorithms used in harvesting should be robust to these challenges, but few datasets for assessing this currently exist. In this work, two new datasets are gathered during day and night operation of an actual robotic plum harvesting system. A range of current generation deep learning object detectors are benchmarked against these. Additionally, two methods for fusing depth and image information are tested for their impact on detector performance. Significant differences between day and night accuracy of different detectors are found, transfer learning is identified as essential in all cases, and depth information fusion is assessed as only marginally effective. The dataset and benchmark models are made available online.
Keywords: robotics, tree crop, object detection, deep learning, harvesting
\end{abstract}
\section{Introduction}
Automated tree crop harvesting is a well known focus in agricultural robotics and has the potential to reduce the high labour costs associated with fruit picking. Accurate target detection is the first step for most harvesting systems and inaccuracies at this early stage can disproportionately affect system performance. Optimising detector accuracy should therefore be a priority in agricultural robotics and automation. As deep learning methods from computer vision have advanced rapidly, their application in agriculture has become commonplace. However, these algorithms require a large and representative corpus of training data to perform well, something not currently available for the full range of image conditions seen in harvesting.
Eye-in-hand sensing provides distinct advantages for fruit harvesting by allowing for continuous feedback control right up to the point of picking. This decreases positioning error upon gripper final approach. Off-hand sensors provide a much more regular view of target fruit, but are difficult to re-position for better viewing angles. Unfortunately, the added sensor movement of eye-in-hand systems results in much more variable imaging conditions. Easily accessing the tree canopy also dictates the use of a compact, and preferably low cost, sensor. All of these present additional challenges for object detectors deployed in harvesting systems.
To address these challenges, new datasets representative of eye-in-hand harvesting conditions are required. For this, images were gathered during a field evaluation of a prototype system that uses an eye-in-hand configuration for harvesting plums on a 2D trellis~\parencite{brown2020}. A small and cheap Intel Realsense D435 RGB-Depth (RGBD) camera is used. Many image artefacts caused by the eye-in-hand setup can be observed in the dataset, see Figure~\ref{fig:DataExample}. A total of 700 images are extracted and annotated; these are split evenly between day and night datasets. Two previous-generation deep learning object detection networks, Faster-RCNN and YoloV3, are benchmarked against this dataset, along with two current networks: RetinaNet and CenterNet.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{ArtefactExample.jpg}
\captionof{figure}{Clockwise from top left are examples of: a typical night time eye-off-hand viewpoint, a typical day time eye-in-hand viewpoint, exposure and colour changes due to the camera entering the sun after being obscured by leaves, and blurring due to low light levels at night.}
\label{fig:DataExample}
\end{figure}
Depth sensing is a required modality for harvesting, so the fusion of depth information for detection is also tested. Both early and late data fusion are trialled using the RetinaNet architecture. Early fusion treats the network input as a four-channel RGBD image, while in late fusion the image and depth features are concatenated after being extracted by parallel network backbones.
Key contributions of this work are the presentation of two realistic datasets from an eye-in-hand harvesting operation, the benchmarking of multiple current deep learning architectures against these and the assessment of RGBD data fusion for detection. All data and models are available publicly at the link in Section~\ref{sec:dataset}.
The primary limitation of this work is the relatively small dataset size, which makes it impossible to test which benchmark networks perform best with very large datasets. Numerous works have shown a clear correlation between training set size and network performance for fruit detection, though the exact performance-by-size function is not linear and varies between architectures.
\section{Related Work}
Fruit detection is a key problem that is common in the literature for goals such as yield estimation, crop health assessment and harvesting~\parencite{koirala2019a,koirala2019b,arad2019,fernandez2018,sa2017,bargoti2017a,bargoti2017b,stein2016}. The most basic form consists of placing bounding boxes around each of the fruit in an image. Traditional computer vision methods have been extensively employed, while many recent works make use of deep learning tools. These require large amounts of training data, though this requirement can be relaxed using data augmentation and simulation methods, or transfer learning techniques.
Specular reflections from round fruit, combined with local image gradients are used in~\cite{wang2013} for object detection under controlled lighting. A Support Vector Machine (SVM) is trained to detect apples using thermal imagery in~\cite{feng2019}. The accuracy performance of SVM object detectors has now been surpassed by deep learning approaches, though they remain competitive for classification when using low dimensional hand-engineered image features~\parencite{kamilaris2018, gongal2015}. Edge features are used with Hough voting and an SVM classifier by~\cite{sengupta2014} to identify green citrus fruit under varying illumination. Similarly, \cite{maldonado2016} make use of a bas-relief representation with edges, Hough voting and an SVM to also count green citrus fruit. \cite{nguyen2016} perform RGB and depth channel thresholding to identify point cloud blobs corresponding to apples. These are separated using a Euclidean distance metric which is tested in the field under semi-controlled lighting conditions.
Avocados, apples and lemons are counted using a hand held camera in~\cite{vasconez2020}. Both Faster-RCNN and the Single Shot multibox Detector (SSD) network are tested with a video based object tracker to prevent multiple-counting. Avocados were the most accurately detected fruit for both networks, followed by lemons and apples. This is counter-intuitive given the strong colour features present for the latter two fruit, but may be explained by a larger set of training images for avocados.
Harvesting-oriented apple detection data is gathered in~\cite{kang2020b}. The YoloV3, Mask-RCNN and Faster-RCNN architectures are thoroughly tested, along with their own deep learning model, described in~\cite{kang2019}. This implements the idea of focal loss, similar to RetinaNet. Unfortunately the dataset and trained models are not released for comparison and, unlike the present work, the harvesting system does not use an eye-in-hand camera. \cite{gao2020} train a multi class Faster-RCNN detector to distinguish between different obscuration cases for apples, including those behind branches or wires, so that they can be harvested appropriately.
Camera movement is exploited to build 3D maps using Structure From Motion (SFM) in~\cite{gene-mola2020a}. SFM is effective where camera motion is smooth and unobscured, but comes at a computational cost and struggles with blurring or poor exposure, both common during harvesting. Previous work within the same group examined apple detection using image, depth and radiometric data with Faster-RCNN~\parencite{gene-mola2019b}. This multi-modal data was gathered using a robotic platform at a fixed distance from the trellis and results indicated that early depth fusion alone was not effective, but combining image, intensity and range produced the best detector.
Bounding boxes can be trivially extracted from image instance semantic segmentation masks, so pixel wise classification is one approach to fruit detection. This technique is considered by~\cite{mccool2016} who trial various local pixel features to perform classification on multi-spectral images, and find local binary features to be most effective. Numerous pixel wise classifiers are applied to specifically detect plums in~\cite{pourdarbani2019} using a variety of manually specified colour space features.
Various sensing modalities for fruit detection, such as hyper-spectral, thermal and stereo vision are explored in~\cite{kapach2012}. Thermal imagery is found to only be useful at certain times of day, while geometry and colour are identified as the strongest features for distinguishing fruit. Similar observations regarding the limitations of thermal imagery use are made in~\cite{bulanon2009}, and by~\cite{gan2018} who also present a novel algorithm for fusing thermal and RGB imagery.
Depth data is frequently leveraged for object detection, as in~\cite{tu2020} where a multiscale implementation of Faster-RCNN is improved with the late fusion of RGBD imagery. A Microsoft Kinect V2 camera is used, which requires avoiding direct sunlight and is kept a fixed distance from the fruit trellis. This provides detailed and consistent depth data, intended to be used for fruit counting rather than harvesting. LiDAR sensors have large depth ranges and very high accuracy, but low resolution. \cite{gene-mola2020b} leverage the lighting invariance of LiDAR to detect fruit in point clouds, captured with and without a commercial air blower being applied to the crop. Combinations of data both with, and without, the blower active led to improved single frame detector accuracy, though it was not beneficial for yield prediction. Late fusion of near infra red and RGB imagery is used to detect a range of fruit using Faster-RCNN in~\cite{sa2016} and dataset size is found to have a critical impact on detector performance.
To the best of the authors' knowledge, this work presents the first example of an eye-in-hand RGBD dataset for object detection gathered during actual tree crop harvesting. Unlike much existing literature, multiple current generation object detector architectures are benchmarked and publicly released, along with day and night datasets.
\section{Method}
To test the performance of current generation object detector architectures on a fruit picking task, images are gathered during the trial of a robotic harvesting system, shown in Figure~\ref{fig:Platform}. This consists of an eye-in-hand RGBD camera mounted to a gripper on the end of a 6 Degree of Freedom (DoF) robotic arm. Several hundred images of red plums are gathered during both day and night operations, these are manually labelled to create two datasets. Four deep-learning detectors are trained and evaluated on this data. Each network architecture is trained and tested on the day and night datasets separately. Both pretrained weights for transfer learning from a non-agricultural task, and randomly initialised weights are used with each architecture. Separately to the benchmark tests, RGB and depth fusion is tested with RetinaNet on the day and night datasets.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\columnwidth]{Platform.jpg}
\captionof{figure}{The plum harvesting platform including; a 6 DoF robotic arm, RGBD camera, soft pneumatic gripper, mobile base with power and computation, pose tracking camera and development workstation.}
\label{fig:Platform}
\end{figure}
\subsection{Dataset}
\label{sec:dataset}
During an automated harvesting trial of red plums in Victoria, Australia, image feeds from a Realsense D435 RGBD camera were recorded. The crop is grown in a 2D trellis configuration, known as a fruiting wall, with fruit close to branches resulting in difficult harvesting conditions and high rates of target obscuration. The harvesting system travels parallel to this trellis, while the camera and gripper move with five degrees of freedom, three Cartesian plus roll and yaw, to harvest fruit. An embedded version of the YoloV3 model trained on previous data was used to detect harvesting targets with the D435; all detected targets were attempted, resulting in many failed picks in the current dataset. Performance of this embedded model was worse than expected, so it is not included in the benchmarking tests. Several hours of sensor data were recorded over the period of one day with direct sunlight, shadowed sunlight and overcast conditions. For night time operation a single diffused floodlight was used, mounted off the arm.
During harvesting, image frames are processed at a rate of 10.5 frames per second. The camera driver provides on-board depth map alignment to the RGB images and all data is at a resolution of 640x480px. Images for the day dataset were extracted from the camera feed at 0.5 second intervals, then 350 were manually selected to form a representative dataset. Frames that did not contain plums, were excessively blurry, or were similar to existing frames were not selected. Colour balance, distance to trellis and number of targets were not considered when selecting frames. This process was repeated for data gathered at night. These two datasets of 350 images each are then manually labelled with bounding boxes around all visible plums and split into train, test and validation subsets of 176, 87 and 87 images respectively. A total of 4449 plums are annotated in the day dataset, and 1402 in the night. Fewer plums were seen at night due to the light source not fully penetrating the canopy.
During harvesting, the camera is positioned approximately 70cm from the trellis for a global view of the trellis area being picked. Three dimensional positions of all detected fruit are filtered using an Extended Kalman Filter (EKF) and then passed to the arm control system which plans a harvest trajectory for each plum. Frames are continuously gathered during the harvesting process and used to improve fruit position estimates within the EKF. Thus the object detector must be robust to both near and far viewpoints of fruit, as seen in the dataset.
Many of the gathered images exhibit numerous artefacts directly related to the harvesting task. Some of these are caused by the camera motion as it moves from the global pose to harvesting a fruit. This results in a wide range of distances, bounding box sizes and illumination changes, as seen in Figures~\ref{fig:datasetDay} and~\ref{fig:datasetNight}. Most images also include part of the gripper, a design trade-off necessary to minimise the camera and gripper footprint. Exposure and white balance are handled automatically by the camera driver and must be variable to deal with changing light conditions throughout the day. Occasionally this results in extremely mis-exposed or mis-coloured images which the system should be tolerant to and are present in the datasets in small numbers.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\textwidth]{DayComposite.jpg}
\caption{Example RGB, RGBD overlay and depth images from the day dataset. Including effects specific to harvesting motion such as large illumination, object size and obscuration changes. Depth images shown after clipping is applied.}
\label{fig:datasetDay}
\end{figure}
Depth imagery is required for localising the fruit after they have been detected and is used by the harvesting system. Because this sensor modality is already present, and is increasingly common among automated agricultural platforms, the fusion of RGB and depth data is tested. To create the RGBD dataset, each annotated RGB image has a corresponding depth image included as a separate file. Depth data suffers from holes where the range is out of sensor limits, shadowing where a point is obscured for one of the IR stereo pair used to calculate depth, and also from smoothing effects in the depth calculation algorithms. Various methods have been proposed to overcome these limitations; however, for simplicity, the depth data is only normalised and clipped before being used in network training. Clipping sets values below 0.11m, where the camera can return incorrect readings, to zero, and caps values above 2.5m at 2.5m. All depth readings are then divided by 2.5m to produce a pixel range from zero to one. At short ranges many depth errors are still present, as seen in Figure~\ref{fig:datasetDay} column 3.
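This preprocessing step can be written in a few lines of NumPy (a sketch of the procedure described above):
\begin{verbatim}
import numpy as np

def preprocess_depth(depth_m, near=0.11, far=2.5):
    """Clip and normalise a metric depth map as described above."""
    d = depth_m.copy()
    d[d < near] = 0.0          # unreliable close-range readings -> zero
    d = np.clip(d, 0.0, far)   # cap at the far limit
    return d / far             # scale to [0, 1]
\end{verbatim}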
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\columnwidth]{NightComposite.jpg}
\caption{Example RGB, RGBD overlay and depth images from the night dataset. Depth performance is improved at night with better defined object edges and less smoothing effects. Depth images shown after clipping is applied.}
\label{fig:datasetNight}
\end{figure}
The datasets are formatted to match the Pascal Visual Object Classes (VOC) 2007 standard~\parencite{everingham2010}. The two RGBD datasets and trained models for these are made available online\footnote{http://data.acfr.usyd.edu.au/Agriculture/PlumDetection/}.
\subsection{RGB Network Architectures}
Four commonly used object detector networks were chosen for evaluation; Faster-RCNN, YoloV3, RetinaNet and CenterNet. The first three use Keras implementations, while CenterNet is in PyTorch. These span a range of target frame rates and all lie on, or close to, the outer edge of the speed-accuracy curve for the standard computer vision dataset Common Objects in COntext (COCO)~\parencite{lin2015}. This can be seen in Table~\ref{table:cocoPerformance}.
Faster-RCNN is the only two-stage detector tested, an approach which shows improved accuracy over single-stage detectors at the cost of slower inference times~\parencite{ren2017}. YoloV3 is a single-stage detector used in many existing works on object detection for agriculture, and remains a competitive detector for high frame rate applications. Updated versions of Yolo are also available~\parencite{redmon2018}.
RetinaNet implements the concept of focal loss which alters the loss function to down-weight the impact of easy negative examples, where there are clearly no objects within the bounding box~\parencite{lin2020b}. CenterNet is the newest and largest network tested~\parencite{duan2019}. A one-stage approach is used to predict heat maps of where bounding box corner and center points lie.
\begin{table*}
\caption{The originally published COCO Average Precision metric for each architecture. Additionally, the reported model inference time, although each model uses different GPU hardware so these are only roughly comparable. Faster-RCNN figures are from~\cite{huang2017} who note that model speed is highly dependent on the number of box proposals. Faster-RCNN and RetinaNet resize the input to make the short image edge match the stated value.}
\begin{center}
\resizebox{0.7\textwidth}{!}{%
\begin{tabular}{lcccc}
\textbf{Network} & \textbf{Backbone} & \textbf{\begin{tabular}[c]{@{}c@{}}Input Size\\ (pix)\end{tabular}} & \textbf{COCO AP} & \textbf{\begin{tabular}[c]{@{}c@{}}Inference Time\\ (ms)\end{tabular}} \\ \hline
Faster-RCNN & VGG-16 & 600xN & 34.7 & 250 \\
YoloV3 & DarkNet-53 & 416x416 & 33.0 & 29 \\
RetinaNet & ResNeXt-101-FPN & 800xN & 40.8 & 198 \\
CenterNet & Hourglass-104 & 511x511 & 47.0 & 340
\end{tabular}%
}
\label{table:cocoPerformance}
\end{center}
\end{table*}
All of the precise training configurations applied to these networks during benchmark testing can also be found at the dataset web page.
\subsection{RGBD Network Architecture}
Two forms of information fusion, early and late, are commonly presented in the literature. Both are tested here using the RetinaNet architecture against an RGB-only baseline.
Early fusion refers to concatenating the depth information as an additional input channel; in our case this makes the network input a $480 \times 640 \times 4$ tensor, prior to resizing. Early fusion is easily implemented and does not significantly increase computational requirements, but shows mixed results in the literature, often performing worse than RGB alone. Figure~\ref{fig:earlyNetwork} shows the early fusion network.
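Constructing the early fusion input is a one-line stacking operation (a NumPy sketch; the only architectural change is then the input layer shape, cf.\ Figure~\ref{fig:earlyNetwork}):
\begin{verbatim}
import numpy as np

def make_rgbd_input(rgb, depth01):
    """Stack a normalised depth map as a fourth channel.

    rgb:     (480, 640, 3) float array
    depth01: (480, 640) depth map already clipped to [0, 1]
    """
    return np.concatenate([rgb, depth01[..., None]], axis=-1)  # (480, 640, 4)
\end{verbatim}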
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{EarlyNetwork.png}
\caption{The early RGBD fusion network using the ResNet-101 backbone, adapted from~\cite{lin2020b} and identical to their implementation apart from the input layer shape. B is the batch size, A is the number of anchors, there are a total of 5 FPN levels used which are numbered 3 to 7 to match the above mentioned paper.}
\label{fig:earlyNetwork}
\end{figure}
Late fusion runs a pair of feature extractor backbones and FPNs on the RGB and depth data in parallel. At each of the 5 FPN scales, features from the RGB and depth FPN outputs are channel-wise stacked before being passed through a $1\times 1$ convolution which performs pooling over the RGB and depth feature maps. This reduces the FPN channels to 256, so the classification and regression subnetworks are identical to the RGB-only case. The overall network size is slightly less than doubled. The addition of an extra backbone creates more informative features which can be learned specifically for the depth modality, at the cost of additional complexity and execution time. Depth data has a meaningful absolute value and is pre-normalised to a fixed range, so batch normalisation layers are removed from the depth backbone.
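A Keras-style sketch of the per-level fusion step is given below (our own illustration, not the exact training code; \texttt{rgb\_feat} and \texttt{depth\_feat} are the FPN outputs of one scale):
\begin{verbatim}
import tensorflow as tf

def fuse_fpn_level(rgb_feat, depth_feat):
    # Channel-wise stack, then pool back to 256 channels with a 1x1
    # convolution, so the classification and regression subnets are
    # unchanged from the RGB-only case.
    stacked = tf.keras.layers.Concatenate(axis=-1)([rgb_feat, depth_feat])
    return tf.keras.layers.Conv2D(256, kernel_size=1)(stacked)
\end{verbatim}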
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\textwidth]{LateNetwork.png}
\caption{The late RGBD fusion network, adapted from~\cite{lin2020b}. The backbone and FPN is duplicated, output features from this are channel wise stacked and passed through a $1\times 1$ convolution to reduce the channels back to 256. The same two subnets as Figure~\ref{fig:earlyNetwork} are then applied.}
\label{fig:lateNetwork}
\end{figure}
\subsection{Training \& Testing Details}
All object detection networks can be improved through careful hyperparameter tuning. To provide a fair comparison, reduce evaluation time and because many application areas lack the resources for extensive network tuning, each architecture is trained using the default parameters provided by the authors. For YoloV3 and CenterNet these were found using the COCO dataset, while Faster-RCNN and RetinaNet were primarily developed using Pascal VOC.
Modifying the default anchor proposals to better suit plums was found to be counterproductive, so default anchor sizes are used for each network. All architectures perform some form of data augmentation by default, specifics of which can be found at the public link provided. Networks are trained until the validation loss plateaus. All inference time results were achieved using an Nvidia GTX 1080Ti, and training batch sizes are set to the maximum that fits on this GPU.
Transfer learning refers to using weights from an already trained model as the starting point for training on the day and night plums datasets. All networks are tested both with and without transfer learning.
All models are modified to produce a Pascal VOC format results file for the test set, which are then processed using the official VOC2007 Matlab development kit~\parencite{everingham2010}. Evaluation is done by plotting the Precision-Recall (PR) curve and reporting the official Average Precision (AP) metric with a bounding box Intersection Over Union (IOU) threshold of 0.5.
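For reference, the IOU criterion used throughout the evaluation can be computed as follows (a standard implementation; boxes are given as $(x_1, y_1, x_2, y_2)$ corners):
\begin{verbatim}
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
\end{verbatim}
A detection is counted as correct when its IOU with a ground-truth box is at least 0.5.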
Depth data is absolute in nature, and relating scene geometry to image data requires the camera focal length. Thus, to preserve correlated features between the RGB and depth inputs, a fixed focal length should be used. This prevents the use of image re-scaling and of the augmentations, such as cropping, translation and rotation, which rely on it.
For all six RGBD tests, image augmentation is disabled and transfer learning is applied using ImageNet weights. The dual backbones were found to make late fusion training unstable, so a two-step process is required to train this network effectively. First the depth backbone is frozen and the RGB ResNet, FPN and subnet modules are trained; then all layers are unfrozen and the depth backbone is also trained.
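In Keras-style pseudocode, this two-step schedule looks as follows (\texttt{depth\_backbone}, \texttt{retinanet\_losses} and \texttt{train\_data} are placeholders for the actual objects):
\begin{verbatim}
# Stage 1: train with the depth backbone frozen.
for layer in depth_backbone.layers:
    layer.trainable = False
model.compile(optimizer="adam", loss=retinanet_losses)
model.fit(train_data, epochs=stage1_epochs)

# Stage 2: unfreeze everything and continue training.
for layer in depth_backbone.layers:
    layer.trainable = True
model.compile(optimizer="adam", loss=retinanet_losses)  # recompile
model.fit(train_data, epochs=stage2_epochs)
\end{verbatim}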
\section{Results}
Each RGB network is tested against the day and night dataset separately, both with and without transfer learning. The impact of depth fusion is assessed using the RetinaNet architecture. All tested configurations are summarised in Table~\ref{table:results} with PR curves for each dataset shown in Figure~\ref{fig:PR_RGB}.
Some training runs were unstable, resulting in no increase in validation set AP during training. Each unstable training configuration was tested three times, and in all cases all three runs failed. No training runs failed where an AP value is reported.
\begin{table}
\begin{center}
\caption{Results for each network on the day and night datasets. AP is calculated using the VOC2007 development kit, mean inference time is per image, not including network loading time.}
\resizebox{0.8\textwidth}{!}{%
\begin{tabular}{lllccc}
\multicolumn{1}{c}{\textbf{Architecture}} & \multicolumn{1}{c}{\textbf{Backbone}} & \multicolumn{1}{c}{\textbf{Configuration}} & \textbf{\begin{tabular}[c]{@{}c@{}}Day AP \\ @0.5 IOU\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Night AP \\ @0.5 IOU\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Mean Inference\\ Time (ms)\end{tabular}} \\ \hline
\multirow{2}{*}{Faster-RCNN} & VGG-16 & Transfer Learned & 0.691 & \textbf{0.795} & \multirow{2}{*}{128} \\
& VGG-16 & Random Weights & 0.537 & 0.788 & \\
& & & & & \\
\multirow{2}{*}{YoloV3} & DarkNet-53 & Transfer Learned & 0.597 & 0.746 & \multirow{2}{*}{\textbf{56}} \\
& DarkNet-53 & Random Weights & Unstable & 0.608 & \\
& & & & & \\
\multirow{2}{*}{RetinaNet} & ResNet-50 & Transfer Learned & 0.781 & 0.778 & \multirow{2}{*}{72} \\
& ResNet-50 & Random Weights & 0.639 & 0.744 & \\
& & & & & \\
\multirow{2}{*}{RetinaNet} & ResNet-101 & Transfer Learned & \textbf{0.787} & 0.767 & \multirow{2}{*}{102} \\
& ResNet-101 & Random Weights & Unstable & Unstable & \\
& & & & & \\
\multirow{2}{*}{CenterNet} & Hourglass-104 & Transfer Learned & 0.709 & 0.746 & \multirow{2}{*}{276} \\
& Hourglass-104 & Random Weights & 0.456 & 0.632 & \\
& & & & & \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Retinanet \\ RGBD\end{tabular}} & ResNet-101 & Early Depth Fusion & 0.608 & 0.732 & 109 \\
& ResNet-101 & Late Depth Fusion & \textbf{0.745} & \textbf{0.781} & 143 \\
& ResNet-101 & RGB Baseline & 0.730 & \textbf{0.781} & \textbf{99}
\end{tabular}%
}
\label{table:results}
\end{center}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\columnwidth]{DayPR.png}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1\columnwidth]{NightPR.png}
\end{subfigure}
\caption{The PR curve for each architecture on the day time (left) and night time (right) datasets.}
\label{fig:PR_RGB}
\end{figure}
\subsection{RGB Only}
Over the four baseline networks tested, RetinaNet with ResNet-101 achieved the highest AP on the day dataset while Faster-RCNN performed best on the night data. Transfer learned networks were much more accurate than those trained from scratch, while also taking less time to train.
YoloV3 was the fastest network by a significant margin, although with lower than average accuracy. RetinaNet with ResNet-50 provides a good speed-accuracy trade off for most applications. Data augmentation using the RetinaNet default methods was effective, shown by the difference between the RGB baseline from the RGBD tests and the ResNet-101 transfer learned results.
Faster-RCNN, YoloV3 and CenterNet all performed better on the night dataset. Fixed lighting conditions and fewer visible but obscured fruit should make this an easier detection task, though fewer training instances are available.
\subsection{RGBD Fusion}
Precision-recall curves for the RGBD tests are plotted in Figure~\ref{fig:PR_RGBD}. Early fusion performed worse than RGB alone, even with all other factors, such as data augmentation, held equal. Late fusion slightly outperformed the baseline on both day and night data. Doubling the network backbone produced only a 31\% increase in inference time; many operations such as image pre-processing and bounding box non-maxima suppression are not dependent on network size.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\columnwidth]{RGBD_PR_v2.png}
\captionof{figure}{The PR curve for the three RGBD fusion approaches on both datasets.}
\label{fig:PR_RGBD}
\end{figure}
\section{Discussion}
Differences between the performance reported on the COCO dataset and on the two datasets tested here were surprising. CenterNet performed poorly on both plum tasks despite having the highest stated COCO accuracy, and it was also the slowest. RetinaNet was effective for day time detection, with augmentation, transfer learning and backbone size all playing a role in overall performance.
Applying transfer learning had an overall larger impact than architecture selection and is essential when using small datasets, as in many agricultural applications. Conversely, using the ResNet-101 backbone significantly increased RetinaNet processing time, for only a small benefit in precision. Design decisions such as these can play a more essential role than architecture choice, and should be carefully considered. Additional factors, such as dataset and batch size, are not investigated in this work but typically also have an impact on accuracy.
Faster-RCNN outperformed both more modern and slower networks on the night dataset. The reason for this is unknown, and is in contrast to the daytime performance. Fortuitous hyperparameter defaults may be a contributing factor, though properly exploring these for all 10 RGB configurations is not feasible. This result highlights the importance of testing a range of network types on application specific data, such as harvesting under controlled lighting.
Early data fusion was counterproductive for these datasets. Although late data fusion was effective, the gains from this method were smaller than those provided by data augmentation, and expanding the RGB dataset size would likely be significantly more useful than incorporating depth information. Additionally, longer ResNet backbones exist and typically show a small AP improvement over ResNet-101, so the additional network capacity introduced by the late fusion approach may be better used as a longer RGB-only backbone.
Predicting the full extent of partially obscured fruit is a requirement of the harvesting system and is well met by all of the tested networks. No accuracy metric is ideal for all use cases, and the 0.5 IOU threshold is an arbitrary assessment point commonly used in computer vision. Other metrics may be more suitable for tasks such as harvesting, where the IOU threshold required for a successful pick can often be estimated.
The overall AP values for both datasets may be sufficient for commercial fruit harvesting, but definitely show room for improvement. Lack of standardised datasets for testing has been an issue when trying to measure improvement in both robotic harvesting and fruit detection tasks. We hope that releasing this dataset and suite of trained models goes some way towards addressing this for plums.
\section{Conclusions}
In this work two datasets gathered during a robotic harvesting trial on 2D trellis plums are presented, and four deep learning object detection architectures are benchmarked on these. The fusion of depth information was trialled and found to be marginally effective for late fusion, though data augmentation provides a larger performance boost. On the day time dataset RetinaNet was the most accurate, while Faster-RCNN showed the best average precision for the night time data.
Relative network performance differed significantly from that published for the COCO dataset, which is commonly used when making design decisions for applications in agriculture. Thus the public availability of a wide variety of application-specific datasets, such as for tree fruit harvesting, is important to future progress.
During the next harvesting season the limited dataset size can be addressed by expanding the amount of plum picking data and testing on additional fruit types. Future investigations into accuracy metrics specific to tree crop harvesting, multi-view detection and multi-spectral imaging are also planned.
\newpage
\printbibliography
\newpage
\end{document}
\section{Introduction}\label{sec:01}
The classical log-symmetric (LS) distributions are a generalization of the log-normal distribution and are particularly flexible in providing models with positive asymmetry and with lighter/heavier tails than those of the log-normal distribution \citep{vanegasp:16a}. Regression models based on LS distributions were recently studied by \cite{vanegasp:15,vanegaspaula:17} and \cite{medeirosferrari:16}, in which problems such as semi-parametric estimation, censored data, and hypothesis testing are investigated.
Recently, \cite{ssls:20} proposed a reparametrization of the LS distributions, here denoted by \textit{quantile}-LS distributions. The authors' idea was to insert a quantile parameter in the LS distributions and thus obtain the \textit{quantile}-LS distributions. Then, based on these distributions, the authors proposed parametric \textit{quantile}-LS quantile regression models. Parametric quantile regression models \citep{gilchrist:00} are an alternative to semi-parametric models (distribution-free) \citep{koenker:78}, and to models based on pseudo-likelihood using the asymmetric Laplace distribution or a mixture distribution. Based on \textit{quantile}-LS distributions, \cite{cds:21} proposed a quantile tobit model useful for modeling left censored data.
A limitation of the LS \citep{vanegasp:15} and \textit{quantile}-LS \citep{ssls:20} distributions, and consequently of their respective regression models, is the impossibility of modeling (without the need for any type of transformation) data sets that contain zeros, since such distributions have positive support. In this sense, a strategy to circumvent such a problem is to consider a mixture of a continuous distribution with support $(0,\infty)$ with a degenerate distribution with mass at zero; see, for example, \cite{aitchisonbrown:57} for the case of the log-normal distribution, \cite{Heller2006} for the inverse Gaussian distribution, \cite{Leiva2016} and \cite{Tomazella2019} for Birnbaum-Saunders distributions, and most recently \cite{cc:21,cosavalente:21} for LS distributions. The mixture of two components, distribution with support $(0,\infty)$ and a degenerate distribution with zero value, can be called ``zero adjusted'' as in \cite{Heller2006}, and it is similar to the \cite{cragg:71} approach.
In this context, the primary objective of this work is to propose a class of zero-adjusted log-symmetric quantile regression models. To this end, zero-adjusted \textit{quantile}-LS distributions are first proposed, which are denoted by \textit{quantile}-ZALS distributions. Then, we propose the \textit{quantile}-ZALS regression models. The immediate advantage of the proposed methodology lies in the flexibility of the quantile approach, which allows considering the effects of explanatory variables over the spectrum of the dependent variable, in addition to the possibility of including zero, which is not possible in the quantile regression models studied by \cite{ssls:20}. The \textit{quantile}-ZALS regression models proposed in this work generalize, to a quantile context, the works of \cite{aitchisonbrown:57}, \cite{cc:21,cosavalente:21} and \cite{Leiva2016}. Secondary objectives include: (i) to obtain estimates of the model parameters using the maximum likelihood method; (ii) to carry out a Monte Carlo simulation to assess the performance of the maximum likelihood estimates; and (iii) to illustrate the proposed methodology using a real data set on extramarital affairs.
The rest of this work proceeds as follows. In Section \ref{sec:2}, the \textit{quantile}-LS distributions introduced by \cite{ssls:20} are briefly reviewed, and then the \textit{quantile}-ZALS distributions are introduced. In Section \ref{sec:3}, the proposed \textit{quantile}-ZALS regression model is presented. In Section \ref{sec:4}, a Monte Carlo simulation is carried out to assess the performance of the maximum likelihood estimates. In Section \ref{sec:5}, an application to real data is performed. Finally, Section \ref{sec:6} presents the conclusions.
\section{\textit{Quantile}-LS and \textit{quantile}-ZALS distributions}\label{sec:2}
In this section, the \textit{quantile}-LS distributions introduced by \cite{ssls:20} are initially described. Then, a zero-adjusted version of these distributions, denoted by \textit{quantile}-ZALS, are proposed.
\subsection{\textit{Quantile}-LS distributions}\label{sec:2.1}
A random variable $T$ follows a \textit{quantile}-LS distribution if its probability density function (PDF) and cumulative distribution function (CDF) are given, respectively, by
\begin{equation}\label{eq:quant:ft}
f_{T}(t;Q,\phi)
=
\dfrac{1}{\sqrt{\phi}\,t}\,
g\!\left(\frac{1}{\phi} \left[ \log(t)-\log(Q)+\sqrt{\phi}\,z_{p} \right]^2 \right),
\quad 0<t<\infty,
\end{equation}
and
\begin{equation}\label{eq:quant:cd}
F_{T}(t;Q,\phi)
=
G\!\left(\frac{\log(t)-\log(\lambda)}{\sqrt{\phi}} \right)=
G\!\left(\frac{\log(t)-\log(Q)+\sqrt{\phi}\,z_{p}}{\sqrt{\phi}} \right),\quad 0<t<\infty,
\end{equation}
where $Q>0$ is a scale parameter and also the $p$-quantile of the distribution, $\phi>0$ is a power parameter, $z_{p}=G^{-1}(p)$, $\lambda=Q\exp(-\sqrt{\phi}\,z_{p})$, $g$ is a density generator, which may involve an extra parameter $\xi$, and $G(\omega)=\eta{\int^{\omega}_{-\infty} g(z^2) \,\textrm{d}z }$ with $\omega\in\mathbb{R}$, with $\eta$ being a normalizing constant.
\cite{ssls:20} have shown that if $T\sim\textrm{\textit{quantile}-LS} (Q,\phi,g)$, then the following properties hold: (P1) $cT\sim\textrm{\textit{quantile}-LS} (cQ,\phi,g)$, with $c>0$; (P2) $T^c\sim\textrm{\textit{quantile}-LS} (Q^c,c^2\phi,g)$, with $c>0$; and (P3) the quantile function is given by $Q_{T}(q;Q,\phi)=\lambda\exp\big(\sqrt{\phi}\,G^{-1}(q)\big)$, $q\in(0,1)$, so that, in particular, $Q_{T}(p;Q,\phi)=Q$. The properties (P1) and (P2) imply that $T=Q\,\epsilon^{\sqrt{\phi}}$, where $\epsilon \sim \textrm{\textit{quantile}-LS} (1, 1, g)$.
The log-normal, log-Student-$t$, log-power-exponential and extended Birnbaum-Saunders distributions are obtained as particular cases for the following choices of $g$ (a sampling sketch based on (P3) follows the list):
\begin{itemize}
\item Log-normal($Q,\phi$): $ g(u)\propto\exp\left( -\frac{1}{2}u\right)$;
\item Log-Student-$t$($Q,\phi,\xi$): $g(u) \propto \left(1+\frac{u}{\xi} \right)^{-\frac{\xi+1}{2}}$, $\xi>0$;
\item Log-power-exponential($Q,\phi,\xi$): $g(u) \propto \exp\left( -\frac{1}{2}u^{\frac{1}{1+\xi}}\right)$, $-1<{\xi}\leq{1}$;
\item Extended Birnbaum-Saunders($Q,\phi,\xi$): $g(u)\propto \cosh(u^{1/2})\exp\left(-\frac{2}{\xi^2}\sinh^2(u^{1/2}) \right) $, $\xi>0$.
\end{itemize}
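For the log-normal case, $G$ is the standard normal CDF, so property (P3) gives a direct sampler (a brief NumPy/SciPy sketch of our own; the empirical check confirms that $Q$ is the $p$-quantile):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def rquantile_ls_lognormal(size, Q, phi, p):
    # Property (P3) with G = Phi: T = lambda * exp(sqrt(phi) * Z),
    # where lambda = Q * exp(-sqrt(phi) * z_p) and Z ~ N(0, 1).
    z_p = norm.ppf(p)
    z = rng.normal(size=size)
    return Q * np.exp(np.sqrt(phi) * (z - z_p))

t = rquantile_ls_lognormal(100_000, Q=2.0, phi=0.5, p=0.5)
print(np.quantile(t, 0.5))  # approximately 2.0: Q is the p-quantile
\end{verbatim}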
\subsection{\textit{Quantile}-ZALS distributions}\label{sec:2.2}
Consider a random variable $T\sim\textrm{\textit{quantile}-LS} (Q,\phi,g)$ with PDF and CDF given by \eqref{eq:quant:ft} and \eqref{eq:quant:cd}, respectively. We propose a \textit{quantile}-LS distribution that accommodates zeros, denoted by \textit{quantile}-ZALS, by using the mixture approach
\begin{eqnarray}\label{eq:cragg}
\nonumber
f_{Z}(z)&=&\pi\,\mathcal{I}_{\{0\}}(z)+(1-\pi)f_{T}(z)\big(1-\mathcal{I}_{\{0\}}(z)\big),\\\nonumber
&& \text{or, equivalently,}\\
f_{Z}(z)&=&\pi^{\mathcal{I}_{\{0\}}(z)}\times\left\{(1-\pi)f_{T}(z)\right\}^{1-\mathcal{I}_{\{0\}}(z)},
\end{eqnarray}
where $0<\pi<1$ is a weight that determines the contribution of zeros, $f_{T}$ is the PDF of the random variable $T$, and $\mathcal{I}_{A}(\cdot)$ is an indicator function, that is,
\begin{equation*}\label{eq:indic}
\mathcal{I}_{A}(x) =
\begin{cases}
1, & \quad \text{if}\; x\in A;\\
0, & \quad \text{if}\; x \notin A.
\end{cases}
\end{equation*}
The PDF given in Equation \eqref{eq:cragg} can be rewritten by replacing $f_{T}$ with \eqref{eq:quant:ft}, namely,
\begin{eqnarray}\label{eq:cragg:pdf:zalogsym}
\nonumber
f_{Z}(z;Q,\phi,\pi)
&=&\pi\, \mathcal{I}_{\{0\}}(z) +
\left\{ (1-\pi)\dfrac{1}{\sqrt{\phi}\,z}\,
g\!\left(\frac{1}{\phi} \left[ \log(z)-\log(Q)+\sqrt{\phi}\,z_{p} \right]^2 \right)\right\}
(1-\mathcal{I}_{\{0\}}(z)),\\ \nonumber
&&\quad\text{or}\\
f_{Z}(z;Q,\phi,\pi)
&=&\pi^{\mathcal{I}_{\{0\}}(z)} \times
\left\{ (1-\pi)\dfrac{1}{\sqrt{\phi}\,z}\,
g\!\left(\frac{1}{\phi} \left[ \log(z)-\log(Q)+\sqrt{\phi}\,z_{p} \right]^2 \right)\right\}^{1-\mathcal{I}_{\{0\}}(z)}.
\end{eqnarray}
We use the notation $Z\sim\textrm{\textit{quantile}-ZALS}(Q,\phi,\pi,g)$. The CDF of $Z$ can be written as
\begin{equation*}\label{eq:cragg:cdf:zalogsym}
F_{Z}(z;Q,\phi,\pi) =
\begin{cases}
\pi, & \quad \text{if}\; z=0,\\
\pi+(1-\pi)F_{T}(z;Q,\phi), & \quad \text{if}\; z>0,
\end{cases}
\end{equation*}
where $F_{T}(\cdot)$ is the CDF of $T\sim\textrm{\textit{quantile}-LS} (Q,\phi,g)$ given in \eqref{eq:quant:cd}.
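For concreteness, a minimal \texttt{R} sketch of the \textit{quantile}-ZALS density and of random generation is given below for the log-normal density generator, for which $G^{-1}$ is \texttt{qnorm}; the function names \texttt{dzals} and \texttt{rzals} are illustrative, not part of any package.
\begin{verbatim}
## Mixed density/mass and simulation for the quantile-ZALS distribution,
## log-normal case; at z = 0 the value returned is the point mass pi0.
dzals <- function(z, Q, phi, pi0, p = 0.5) {
  zp <- qnorm(p)                          # z_p = G^{-1}(p)
  ifelse(z == 0, pi0,
         (1 - pi0) * dlnorm(z, meanlog = log(Q) - sqrt(phi) * zp,
                            sdlog = sqrt(phi)))
}
rzals <- function(n, Q, phi, pi0, p = 0.5) {
  zp <- qnorm(p)
  z <- rlnorm(n, meanlog = log(Q) - sqrt(phi) * zp, sdlog = sqrt(phi))
  z[runif(n) < pi0] <- 0                  # a zero with probability pi0
  z
}
\end{verbatim}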
\section{\textit{Quantile}-ZALS regression model}\label{sec:3}
Based on the zero-adjusted quantile log-symmetric distributions proposed in Subsection~\ref{sec:2.2}, we now propose the corresponding regression model. Let $Z_1,\ldots,Z_n$ be an independent random sample with $Z_i \sim \text{\textit{quantile}-ZALS}(Q_{i},\phi_{i},\pi_{i},g)$, for $i=1,\ldots,n$, such that
\begin{equation*}\label{eq:cdf_zaqlogsym1}
F_{Z}(z_{i};Q_{i},\phi_{i},\pi_{i}) =
\begin{cases}
\pi_{i}, & \quad \text{if } z_{i}=0,\\
\pi_{i}+(1-\pi_{i})F_{T}(z_i;Q_{i},\phi_{i}), & \quad \text{if } z_{i}>0,
\end{cases}
\end{equation*}
with
\begin{eqnarray*}\label{linkfunctions}\nonumber
Q_i &=& \exp(\bm{x}_i^\top\bm \beta), \\ \nonumber
\phi_i &=& \exp(\bm{w}^{\top}_{i}\bm{\kappa}) \,\, \mbox{and} \\
\pi_i &=& \Lambda(\bm{v}^{\top}_{i}\bm{\eta})=\frac{\exp(\bm{v}^{\top}_{i}\bm{\eta})}{1+\exp(\bm{v}^{\top}_{i}\bm{\eta})},
\end{eqnarray*}
where
$\bm{\beta} =(\beta_0,\beta_1,\ldots,\beta_{k})^\top$, $\bm{\kappa}=(\kappa_0,\kappa_1,\ldots,{\kappa_{l}})^\top$ and
$\bm{\eta}=(\eta_0,\eta_1,\ldots,{\eta_{m}})^\top$ are vectors of unknown parameters to be estimated, and
${\bm{x}}_{i}= (1,x_{i1},\ldots, x_{ik})^\top$,
${\bm{w}}_{i} = (1,w_{i1}, \ldots, w_{il})^\top$ and
${\bm{v}}_{i} = (1,v_{i1}, \ldots, v_{im})^\top$
contain the values of the $k$, $l$ and $m$ explanatory variables associated with the quantile $Q_i$, the relative dispersion $\phi_i$ and the probability of a zero $\pi_i$, respectively. Note that $\Lambda(x)=\frac{\exp(x)}{1+\exp(x)}$ is the logistic function.
The estimation of the parameters of the \textit{quantile}-ZALS regression model presented above is performed using the maximum likelihood method. Let $Z_1,\ldots,Z_n$ be an independent random sample such that $Z_i \sim \text{\textit{quantile}-ZALS}(Q_{i},\phi_{i},\pi_{i},g)$, and $z_1,\ldots,z_n$ be the corresponding observed values. Then, the likelihood function for $\boldsymbol{\theta}=(\bm{\beta}^{\top},\bm{\kappa}^{\top},\bm{\eta}^{\top})^{\top}$ can be written as
\begin{eqnarray}\label{eq:like}
\small
L(\boldsymbol{\theta})&=&\prod_{i=1}^{n}f_{Z}(z_i;Q_i,\phi_i,\pi_i)\\ \nonumber
&=& \underbrace{\prod_{i=1}^{n} \pi_i^{\mathcal{I}_{\{0\}}(z_i)}\, (1-\pi_i)^{1-\mathcal{I}_{\{0\}}(z_i)}}_
{L_1(\bm{\eta})}
\underbrace{\prod_{i=1}^{n}
\left\{\dfrac{1}{\sqrt{\phi_i}\,z_i}\,
g\!\left(\frac{1}{\phi_i} \left[ \log(z_i)-\log(Q_i)+\sqrt{\phi_i}\,z_{p} \right]^2 \right)\right\}^{1-\mathcal{I}_{\{0\}}(z_i)}}_{L_2(\bm{\beta},\bm{\kappa})}.
\end{eqnarray}
Taking the logarithm of \eqref{eq:like}, we obtain the log-likelihood function
\begin{eqnarray}\label{eq:loglike}
\small
\ell(\boldsymbol{\theta})&=&\sum_{i=1}^{n}\log(f_{Z}(z_i;Q_i,\phi_i,\pi_i))\\ \nonumber
&=& \underbrace{\sum_{i=1}^{n} {\mathcal{I}_{\{0\}}(z_i)}\log(\pi_i)\,+ (1-\mathcal{I}_{\{0\}}(z_i))\log(1-\pi_i)}_{\ell_1(\bm{\eta})} \\ \nonumber
&& +\, \underbrace{\sum_{i=1}^{n}(1-\mathcal{I}_{\{0\}}(z_i))\log
\left\{\dfrac{1}{\sqrt{\phi_i}\,z_i}\,
g\!\left(\frac{1}{\phi_i} \left[ \log(z_i)-\log(Q_i)+\sqrt{\phi_i}\,z_{p} \right]^2 \right)\right\}}_{\ell_2(\bm{\beta},\bm{\kappa})}.
\end{eqnarray}
Note that in \eqref{eq:loglike}, $\ell(\boldsymbol{\theta})$ decomposes into two terms \citep{pacesalvan:97}: one associated with the probability of occurrence of zero, $\ell_1(\bm{\eta})$, and another associated with the continuous and positive part, $\ell_2(\bm{\beta},\bm{\kappa})$. Therefore, the maximum likelihood estimates of $\bm{\eta}$ and of $(\bm{\beta}^{\top},\bm{\kappa}^{\top})^{\top}$ can be obtained independently, that is, the maximization is performed separately for $\ell_1(\bm{\eta})$ and $\ell_2(\bm{\beta},\bm{\kappa})$. Nevertheless, since no closed-form solution exists, an iterative non-linear optimization procedure is required; in particular, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used in this work.
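A minimal \texttt{R} sketch of this two-block maximization in the log-normal case is shown below. Since $\ell_1(\bm{\eta})$ is exactly the Bernoulli log-likelihood with logit link for the zero indicator, it can be maximized with \texttt{glm}; $\ell_2(\bm{\beta},\bm{\kappa})$ is maximized with \texttt{optim} using BFGS. The function name \texttt{fit\_zals} is illustrative, and the design matrices \texttt{X}, \texttt{W} and \texttt{V} are assumed to contain an intercept column:
\begin{verbatim}
## Separable ML fit of the quantile-ZALS model (log-normal case).
fit_zals <- function(z, X, W, V, p = 0.5) {
  d  <- as.numeric(z == 0)                # zero indicator
  zp <- qnorm(p)                          # z_p = G^{-1}(p)
  ## block 1: l1(eta) is a logistic regression of d on V
  eta <- coef(glm(d ~ V - 1, family = binomial))
  ## block 2: l2(beta, kappa) only involves the positive observations
  pos <- z > 0
  nl2 <- function(par) {
    beta  <- par[seq_len(ncol(X))]
    kappa <- par[-seq_len(ncol(X))]
    mu    <- X[pos, , drop = FALSE] %*% beta       # log Q_i
    phi   <- exp(W[pos, , drop = FALSE] %*% kappa)
    -sum(dlnorm(z[pos], meanlog = mu - sqrt(phi) * zp,
                sdlog = sqrt(phi), log = TRUE))
  }
  opt <- optim(rep(0, ncol(X) + ncol(W)), nl2, method = "BFGS")
  list(beta = opt$par[seq_len(ncol(X))],
       kappa = opt$par[-seq_len(ncol(X))], eta = eta)
}
\end{verbatim}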
The \textit{quantile}-ZALS regression model proposed above can be interpreted as consisting of two equations:
\begin{itemize}
\item \textbf{Participation equation}
\begin{equation}\label{eq:particip}
\begin{cases}
\mathbb{P}(Z_i=0) = \pi_i= \frac{\exp(\bm{v}^{\top}_{i}\bm{\eta})}{1+\exp(\bm{v}^{\top}_{i}\bm{\eta})}, & \quad d_i=0\;\,\text{if}\; z_i = 0,\\
\mathbb{P}(Z_i>0) = 1-\pi_i=\frac{1}{1+\exp(\bm{v}^{\top}_{i}\bm{\eta})} , & \quad d_i=1\;\,\text{if}\; z_i > 0,\\
\end{cases}
\end{equation}
where $d_i=1$ if the individual participates and $d_i=0$ otherwise;
\item \textbf{Intensity equation}
\begin{equation}\label{eq:intensity}
Q_i=Q(Z_i|d_i=1)=\exp(\bm{x}_i^\top\bm \beta),
\end{equation}
where $Q(Z_i|d_i=1)$ is the quantile of $Z_i$ given that $d_i=1$.
\end{itemize}
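Under this interpretation, fitted values for a new covariate profile are straightforward to compute. A short illustration in \texttt{R} is given below, where \texttt{eta\_hat}, \texttt{beta\_hat}, \texttt{v\_new} and \texttt{x\_new} are hypothetical fitted coefficient and covariate vectors (intercepts included):
\begin{verbatim}
## Participation: P(Z = 0) via the logistic function Lambda = plogis.
p_zero <- plogis(sum(v_new * eta_hat))
## Intensity: the fitted q-th conditional quantile of Z given Z > 0.
q_pos  <- exp(sum(x_new * beta_hat))
\end{verbatim}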
\section{Monte Carlo simulation}\label{sec:4}
A Monte Carlo simulation study is carried out to evaluate the performance of the maximum likelihood estimates of the \textit{quantile}-ZALS regression model. The performance is assessed using bias and mean square error (MSE) estimates, given by
$$\widehat{\textrm{Bias}}(\widehat{\theta}) = \frac{1}{\text{NREP}} \sum_{i = 1}^{\text{NREP}} \big(\widehat{\theta}^{(i)} - \theta\big) \quad
\text{and} \quad
\widehat{\textrm{MSE}}(\widehat{\theta}) = \frac{1}{\text{NREP}} \sum_{i = 1}^{\text{NREP}} \big(\widehat{\theta}^{(i)} - \theta\big)^2,$$
where $\theta$ and $\widehat{\theta}^{(i)}$ denote the true parameter value and its $i$-th maximum likelihood estimate, respectively, and $\text{NREP}$ is the number of Monte Carlo replications. The \texttt{R} software has been used for all numerical computations; see \cite{r2020vienna}.
The model used to generate the samples is given by
\[
Q_i = \exp \left(\beta_0 + \beta_{1}x_{i1} + \beta_{2}x_{i2} \right), \quad \phi_i = \exp \left(\kappa_0 + \kappa_{1}w_{i1} + \kappa_{2}w_{i2} \right) \quad \text{and} \quad \pi_i = \frac{\exp \left(\eta_0 + \eta_{1}v_{i1} + \eta_{2}v_{i2}\right)}{1+\exp \left(\eta_0 + \eta_{1}v_{i1} + \eta_{2}v_{i2}\right)},
\]
where the reference distribution is log-normal (the results of the log-Student-$t$, log-power-exponential and extended Birnbaum-Saunders are similar and are therefore omitted here). The simulation has the following setting: $(\beta_0,\beta_1,\beta_2)= (0.5,0.7,1.0)$, $(\kappa_0,\kappa_1,\kappa_2)= (0.5,0.8,1.0)$, $(\eta_0,\eta_1,\eta_2)= (0.5,0.3,0.5)$,
with $\text{NREP}=5{,}000$ Monte Carlo replications. The explanatory variables $x_i$, $w_i$ and $v_i$ are generated from the Uniform(0,1) distribution. Figures \ref{fig:sim1}-\ref{fig:sim3} show the Monte Carlo simulation results for $q=\{0.10,0.50,0.90\}$. An analysis of the results allows us to conclude that, in general, the bias and MSE decrease as the sample size increases, as expected.
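For reference, one replicate of this design can be generated in \texttt{R} as sketched below (log-normal case; \texttt{sim\_one} is an illustrative name), after which the bias and MSE estimates are computed exactly as in the displayed formulas:
\begin{verbatim}
## One Monte Carlo replicate of the simulation design (log-normal case).
sim_one <- function(n, p = 0.5) {
  x <- matrix(runif(2 * n), n)            # covariates for Q_i
  w <- matrix(runif(2 * n), n)            # covariates for phi_i
  v <- matrix(runif(2 * n), n)            # covariates for pi_i
  Q   <- exp(0.5 + 0.7 * x[, 1] + 1.0 * x[, 2])
  phi <- exp(0.5 + 0.8 * w[, 1] + 1.0 * w[, 2])
  pii <- plogis(0.5 + 0.3 * v[, 1] + 0.5 * v[, 2])
  z <- rlnorm(n, meanlog = log(Q) - sqrt(phi) * qnorm(p),
              sdlog = sqrt(phi))
  z[runif(n) < pii] <- 0                  # zero-adjusted part
  list(z = z, x = x, w = w, v = v)
}
\end{verbatim}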
\begin{figure}[!ht]
\centering
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_bias_beta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_mse_beta.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_bias_kappa.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_mse_kappa.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_bias_eta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q010_mse_eta.pdf}}
\caption{\small {Bias and MSE estimates for $q=0.10$ ($i=\{0,1,2\}$).}}
\label{fig:sim1}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_bias_beta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_mse_beta.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_bias_kappa.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_mse_kappa.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_bias_eta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q050_mse_eta.pdf}}
\caption{\small {Bias and MSE estimates for $q=0.50$ ($i=\{0,1,2\}$).}}
\label{fig:sim2}
\end{figure}
\begin{figure}[!ht]
\centering
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_bias_beta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\beta}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_mse_beta.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_bias_kappa.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\kappa}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_mse_kappa.pdf}}
\subfigure[$\widehat{\textrm{Bias}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_bias_eta.pdf}}
\subfigure[$\widehat{\textrm{MSE}}$($\widehat{\eta}_i$)]{\includegraphics[height=5cm,width=5cm]{q090_mse_eta.pdf}}
\caption{\small {Bias and MSE estimates for $q=0.90$ ($i=\{0,1,2\}$).}}
\label{fig:sim3}
\end{figure}
\section{Application to extramarital affairs data}\label{sec:5}
In this section, the \textit{quantile}-ZALS regression models are illustrated using data on extramarital affairs. The data set is available in \cite{fair:78} and the objective here is to study the allocation of time to extramarital affairs by men and women married for the first time. There are 6,366 observations in the data set, where the dependent variable ($Z$) is the time spent in extramarital affairs, and the explanatory variables are
\begin{itemize}
\item \texttt{ratemarr}: rating of the marriage, coded 1 to 4;
\item \texttt{age}: age, in years;
\item \texttt{yrsmarr}: number of years married;
\item \texttt{numkids}: number of children;
\item \texttt{relig}: religiosity, coded 1 to 4, 1 = not, 4 = very;
\item \texttt{educ}: education, coded 9, 12, 14, 16, 17, 20, that is, 9 = elementary school, 12 = high school, \ldots, 20 = doctorate or other;
\item \texttt{wifeocc}: wife's occupation - Hollingshead scale;
\item \texttt{husocc}: husband's occupation - Hollingshead scale.
\end{itemize}
Descriptive statistics for the time spent in extramarital affairs ($Z$) indicate that the mean, median and standard deviation are 0.705, 0 and 2.203, respectively. The coefficient of variation is 312.37\%, indicating a high dispersion of the data around the mean. The coefficients of skewness and kurtosis are equal to 8.761 and 131.912, respectively, which shows a high positive skewness and the presence of heavy tails. Thus, the use of log-symmetric distributions is plausible. The asymmetric nature of the data is confirmed by the histogram shown in Figure \ref{fig:histextra}(a). Note also the high concentration of zero values in the sample: 4,313 individuals report no time spent in extramarital affairs.
\begin{figure}[!ht]
\centering
\subfigure[]{\includegraphics[height=5.5cm,width=5.5cm]{histextra1.pdf}}
\subfigure[]{\includegraphics[height=5.5cm,width=5.5cm]{qqplot1.pdf}}
\caption{\small {Histogram (a) for the time spent in extramarital affairs and QQ plot (b) and its envelope for the randomized quantile residuals based on the \textit{quantile}-ZAEBS regression model ($q=0.50$).}}
\label{fig:histextra}
\end{figure}
The proposed \textit{quantile}-ZALS regression models can accommodate heteroscedasticity, so specifications with and without explanatory variables in the relative dispersion $\phi$ can be fitted. We consider here \textit{quantile}-ZALS regression models based on the log-normal (\textit{quantile}-ZALNO), log-Student-$t$ (\textit{quantile}-ZAL$t$), log-power-exponential (\textit{quantile}-ZALPE) and extended Birnbaum-Saunders (\textit{quantile}-ZAEBS) distributions. An analysis of the significance of the coefficients suggested the following specification:
\clearpage
\begin{eqnarray*}\label{fsdfds}\nonumber
&Q_i = \exp\left(
\beta_0 + \beta_1 \texttt{ratemarr} + \beta_2\texttt{yrsmarr} + \beta_3\texttt{numkids} + \beta_4\texttt{relig} \right) \\ \nonumber
&\phi_i = \exp\left( \kappa_0 + \kappa_1 \texttt{age} \right) \,\, \mbox{and} \\
&\pi_i = \Lambda\left(
\eta_0 + \eta_1 \texttt{ratemarr} + \eta_2 \texttt{age} + \eta_3\texttt{yrsmarr} + \eta_4\texttt{relig} + \eta_5\texttt{educ} \right. \\
& \left. + \eta_6\texttt{wifeocc} \right).
\end{eqnarray*}
Table \ref{tab:resultssaicbicall} reports the results of the averages of the AIC and BIC values based on $q=\{0.01,0.02,\ldots,0.99\}$ for the proposed \textit{quantile}-ZALS regression models. The results indicate that the lowest values of AIC and BIC are those based on the \textit{quantile}-ZALPE and \textit{quantile}-ZAEBS models, with a slight advantage of the latter.
\begin{table}[ht]
\footnotesize
\centering
\caption{\small {Averages of the AIC and BIC values with $q=\{0.01,0.02,\ldots,0.99\}$ for different models.}}
\begin{tabular}{lrrrrrrrrrrrrrr}
\hline
& \multicolumn{4}{c}{Model} \\ \cline{2-5}
Criterion & \textit{quantile}-ZALNO & \textit{quantile}-ZAL$t$ & \textit{quantile}-ZALPE & \textit{quantile}-ZAEBS \\
\hline
AIC & 13165.14 & 13173.48 & 13163.49 & 13163.41 \\
BIC & 13259.77 & 13268.10 & 13258.12 & 13258.03 \\
\hline
\end{tabular}
\label{tab:resultssaicbicall}
\end{table}
Since the \textit{quantile}-ZAEBS model provided the best results, we compare it with the zero-adjusted gamma (ZAGA) and zero-adjusted inverse Gaussian (ZAIG) regression models studied by \cite{Heller2006} and \cite{stasinopoulosetal:17}. Table \ref{tab:aicompar} reports the AIC and BIC results for these models, and we observe that the \textit{quantile}-ZAEBS model has the best fit. The QQ plot for the randomized quantile residuals \citep{dunnsmyth:96} of this model for $q = 0.50$ is shown in Figure \ref{fig:histextra}(b); similar plots are obtained for other values of $q$. The results therefore indicate that the proposed model provides a fit superior to that of existing models in the literature.
\begin{table}[ht]
\footnotesize
\centering
\caption{\small {AIC and BIC values for the \textit{quantile}-ZAEBS, ZAGA and ZAIG regression models.}}
\begin{tabular}{lrrrrrrrrrrrrrr}
\hline
& & \multicolumn{3}{c}{Model} \\ \cline{3-5}
& & \textit{quantile}-ZAEBS & ZAGA & ZAIG \\
& & ($q=0.50$) & & \\
\hline
AIC & & 13163.18 & 13466.62 & 13953.84 \\
BIC & & 13257.80 & 13580.48 & 14067.71 \\
\hline
\end{tabular}
\label{tab:aicompar}
\end{table}
The estimates of the parameters of the \textit{quantile}-ZAEBS model for $\pi_i$ (discrete component) are shown in Table \ref{tab:partdiscreta}, and those for $Q_i$ and $\phi_i$ in Figure \ref{fig:estimates}. The figure shows asymmetric dynamics; for example, the estimates of $\beta_0$ ($\beta_2$) tend to increase (decrease) as $q$ increases. In general, such results show the importance of considering a quantile approach.
\begin{table}[!ht]
\footnotesize
\centering
\caption{\small {Estimated parameters (standard errors in parentheses) of the discrete part of the \textit{quantile}-ZAEBS regression model.}}
\begin{tabular}{lrrrrrrrrrrrrrr}
\hline
&\texttt{Interc.} & \texttt{ratemarr}&\texttt{age}&\texttt{yrsmarr}&\texttt{relig}&\texttt{educ}&\texttt{wifeocc} \\
\hline
Estimate & 3.7371* &-0.7153* &-0.0602* & 0.1095* &-0.3760* &-0.0380* & 0.1628* \\
Standard error& (0.2961) &(0.0314) &(0.0103) &(0.0097) &(0.0346) &(0.0153) &(0.0337) \\
\hline
\multicolumn{8}{l}{\scriptsize{* significant at 5\% level. ** significant at 10\% level.}}
\end{tabular}
\label{tab:partdiscreta}
\end{table}
\begin{figure}[!ht]
\centering
\subfigure[$\beta_0$]{\includegraphics[height=5cm,width=5cm]{beta0.pdf}}
\subfigure[$\beta_1$]{\includegraphics[height=5cm,width=5cm]{beta1.pdf}}
\subfigure[$\beta_2$]{\includegraphics[height=5cm,width=5cm]{beta2.pdf}}\\
\subfigure[$\beta_3$]{\includegraphics[height=5cm,width=5cm]{beta3.pdf}}
\subfigure[$\beta_4$]{\includegraphics[height=5cm,width=5cm]{beta4.pdf}}
\subfigure[$\kappa_0$]{\includegraphics[height=5cm,width=5cm]{kappa0.pdf}}\\
\subfigure[$\kappa_1$]{\includegraphics[height=5cm,width=5cm]{kappa1.pdf}}
\caption{\small {Parameter estimates (confidence intervals in grey) for the positive continuous part of the \textit{quantile}-ZAEBS regression model.}}
\label{fig:estimates}
\end{figure}
\section{Concluding remarks}\label{sec:6}
In this work, a class of zero-adjusted log-symmetric quantile regression models was proposed. The proposed regression model is based on a zero-adjusted version of the log-symmetric distributions parameterized by the quantile, also proposed in this work. The quantile proposal provides wide flexibility in the analysis of the effects of the explanatory variables on the dependent variable, which makes the proposed model a more interesting alternative than the existing zero-adjusted log-symmetric regression models \citep{cc:21,cosavalente:21}. The estimation of the parameters was performed by the maximum likelihood method, and a Monte Carlo simulation study was carried out to evaluate the performance of the maximum likelihood estimates. The proposed models were applied to study the allocation of time to extramarital affairs by men and women married for the first time. The results showed that the proposed zero-adjusted log-symmetric quantile regression models perform better than the existing zero-adjusted gamma (ZAGA) and zero-adjusted inverse Gaussian (ZAIG) regression models studied by \cite{Heller2006} and \cite{stasinopoulosetal:17}.
\normalsize
\section{Introduction}\label{sect0}
\noindent [\emph{An earlier version of this paper has circulated as a preprint for some years. Due to a few recent requests of the paper made to the author, the current version is prepared with some minor notations changed.}]
Let $F$ be a number field, $X$ be an $n$-dimensional smooth projective $F$-variety and set $X_\bbq=X\otimes_F \bbq$. With respect to the $\ell$-adic cycle map
\begin{equation}\label{intro_cl}
\cl_{et}:\CH^i(X_\bbq)\lrar \mathrm{H}^{2i}(X_\bbq, \bz_\ell(i)),
\end{equation}
the Tate conjecture predicts that for any finite extension $E/F$, the space $\mathrm{H}^{2i}(X_\bbq, \bq_\ell(i))^{\Gal(\bbq/E)}$ is spanned by the images of codimension-$i$ $E$-cycles. An accompanying question is to construct enough concrete $E$-cycles whose images span $\mathrm{H}^{2i}(X_\bbq, \bq_\ell(i))^{\Gal(\bbq/E)}$.
When $X$ is a Shimura variety associated to a reductive $\bq$-group $G$ with the reflex field $F$, $X$ is defined over $F$. Each reductive $\bq$-subgroup of $G$ yields a Shimura subvariety, whose connected components are defined over finite abelian extensions of $F$. These connected components and their Hecke translates are called special cycles in $X$. Ramakrishnan raised a general question in \cite{Rama90}: are special cycles enough to generate the $\Gal(\bbq/F^{\mathrm{ab}})$-invariant subspace of the intersection cohomology $\mathrm{IH}^{2i}(X_\bbq,\bq_\ell(i))$?
The original example motivating Ramakrishnan's question is that of Hilbert modular surfaces $S$, for which the reflex field is $F=\bq$. Harder, Langlands and Rapoport \cite{hlr1986} proved the Tate conjecture for $\mathrm{H}^2(S_{\bbq},\bq_\ell(1))$. They also discovered that modular curves are enough to generate the $\Gal(\bbq/\bqab)$-invariant subspace of the interior cohomology $\mathrm{H}^2_!(S_\bbq,\bq_\ell(1))$.
This paper provides Siegel 3-folds as a second affirmative example in this direction. Let $\ba:=\ba_\bq$ be the ring of adeles over $\bq$ and $\ba_f$ be the subring of finite adeles. For a neat compact open subgroup $K_f$ of $\GSp_4(\ba_f)$, let $\mkf$ denote the Siegel $3$-fold of level $K_f$. $\mkf$ is defined over $\bq$ and we let $\tmkf$ be a smooth toroidal compactification which is defined over $\bq$ and whose boundary divisors have normal crossings. Weissauer proved the Tate conjecture for $\mathrm{H}^2(\tmkf\otimes_\bq \bbq,\bq_\ell(1))$ \cite[Thm. 9.4]{Weiss88} and showed that $\Gal(\bbq/\bq^{ab})$ acts trivially on $\CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq$ \cite[Thm. 2]{Weiss92}. Based on his work, we prove that Tate classes of degree $2$ and $4$ are spanned by the images of special cycles and cycles on the boundary.
Let $\SC^i(\mkf)$ denote the subgroup of $\CH^i(\mkf\otimes_\bq \bbq)$ consisting of special cycles of codimension $i$ (see Sect. ~\ref{cycle}). When $i=1$, special cycles are Hecke translates of Hilbert modular surfaces in $\mkf$; when $i=2$, special cycles are Hecke translates of Shimura curves in $\mkf$. For a cycle $Z$ in $\mkf\otimes_\bq \bbq$, let $\overline{Z}$ denote its Zariski closure in $\tmkf\otimes_\bq \bbq$.
With respect to the cycle map (\ref{intro_cl}), let $\CH^i_0(X_\bbq)$ denote the kernel and $\overline{\CH}^i(X_\bbq)$ be the quotient group $\CH^i(X_\bbq)/\CH^i_0(X_\bbq)$. Let $\Ta^i(X_\bbq)$ be the union of $\mathrm{H}^{2i}(X_\bbq, \bq_\ell(i))^{\Gal(\bbq/E)}$ for all number fields $E\supset F$; it is the space of all degree-$2i$ Tate classes on $X_\bbq$.
Our main theorem is below.
\begin{theorem}\label{main}
Let $\{B_i\}_{i=1}^m$ be the boundary divisors of $\tmkf\otimes_\bq \bbq$.
\begin{itemize}
\item[(1)] $\CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq=\SC^1(\mkf)\otimes_\bz \bq$.
\item[(2)] $\Ta^1(\tmkf)$ is spanned by the images of $[\overline{Z}]$ and $[B_i]$, with $[Z]\in \SC^1(\mkf)$ and $1\leq i\leq m$.
\item[(3)] $\Ta^2(\tmkf)$ is spanned by the images of $[\overline{Z}]$, $[B_i\cdot B_j]$ and $[\overline{Z}\cdot B_i]$, with $[Z]\in \SC^2(\mkf)$ and $1\leq i,j\leq m$.
\item[(4)] $\overline{\CH}^2(\mkf\otimes_\bq \bbq)\otimes_\bz \bq$ is spanned by the image of $\SC^2(\mkf)$.
\end{itemize}
\end{theorem}
\begin{remark}\label{thm_com}
When the variety $X$ in (\ref{intro_cl}) is smooth, let $\HH^\ast(X,R)$ denote the singular cohomology of the complex manifold $X(\bc)$ with coefficients in an abelian group $R$. There is a cycle map
\[
\cl: \CH^i(X_\bbq)\rar \HH^{2i}(X,\bz)
\]
The Artin comparison theorem of etale cohomology and singular cohomology gives a canonical isomorphism $\HH^\ast(X_\bbq, \bz_\ell)\cong \HH^\ast(X,\bz)\otimes \bz_\ell$ that is compatible with $\cl_{et}$ and $\cl$, whence $\CH^i_0(X_\bbq)$ is also the kernel of $\cl$.
\end{remark}
\begin{remark}\label{thm_com2}
$\CH^1_0(\mkf\otimes_\bq \bbq)\otimes \bq$ vanishes because of the well-known fact $\HH^1(\mkf,\bc)=0$ (See Lemma ~\ref{cyclelemma}). $\CH^2_0(\mkf\otimes_\bq \bbq)$ is more difficult to study and is related to the intermediate Jacobian. It is a very interesting problem to characterize $\CH^2_0(\mkf\otimes_\bq \bbq)\cap \SC^2(\mkf)$.
\end{remark}
In Theorem ~\ref{main}, the implications (1)$\Longrightarrow$(2)$\Longrightarrow$(3)$\Longrightarrow$(4) hold. By Remark ~\ref{thm_com2}, the following cycle map with coefficients in $\bq$ is injective,
\[
\cl_{\mkf}: \CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq\rar \HH^2(\mkf,\bq).
\]
To prove (1), it suffices that the images of $\CH^1(\mkf\otimes_\bq \bbq)$ and $\SC^1(\mkf)$ span the same subspace in $\HH^2(\mkf,\bq)$, or equivalently, in $\HH^2(\mkf,\bc)$.
We choose to handle all $K_f$ simultaneously. Write $M:=\ilim_{K_f} M_{K_f}$, $\SC^i(M):=\dlim_{K_f}\SC^i(\mkf)$, $\CH^i(M\otimes_\bq \bbq):=\dlim_{K_f} \CH^i(\mkf\otimes_\bq \bbq)$, and $\mathrm{H}^{2i}(M,\bc):=\dlim_{K_f} \HH^{2i}(\mkf,\bc)$. The corresponding cycle map
\begin{align}\label{intro_clc}
&\mathrm{cl}_M:\CH^1(M\otimes_\bq \bbq)\otimes_\bz \bc\rar \mathrm{H}^{2}(M,\bc)
\end{align}
is injective, $\GSp_4(\ba_f)$-equivariant and factors through $\HH^{1,1}(M,\bc)$.
\begin{theorem}\label{picardgroup}
There is an isomorphism
\[
\mathrm{cl}_M|_{\SC^1(M)}:\SC^1(M)\otimes_\bq \bc \isoto \mathrm{H}^{1,1}(M,\bc).
\]
\end{theorem}
Theorem ~\ref{picardgroup} implies $\SC^1(M)\otimes_\bz \bc=\CH^1(M\otimes_\bq \bbq)\otimes_\bz \bc$ and is sufficient to deduce Part (1) of Theorem ~\ref{main}. The main body of this paper is devoted to proving Theorem ~\ref{picardgroup} by using the period pairing between cycles and cohomological differential forms. We sketch the main steps of the proof.
\begin{itemize}
\item[Step 1.] $\HH^2(M,\bc)$ is isomorphic to the $(\fg,K_\infty)$-cohomology of the discrete spectrum\\
$L^2_{\disc}(\GSp_4(\bq)\br^\times_+\backslash \GSp_4(\ba))$ and hence is completely reducible as an admissible $\GSp_4(\ba_f)$-module. (See Sect. ~\ref{algebra} and ~\ref{3fold} for the definition of $\fg$ and $K_\infty$.) For an irreducible admissible unitary representation $\pi_f$ of $\mathrm{GSp}_4(\ba_f)$, let $\SC^1(\pi_f)$, $\HH^2(\pi_f)$ and $\mathrm{H}^{1,1}(\pi_f)$ be the $\pi_f$-isotypic components of $\SC^1(M)\otimes_\bz \bc$, $\HH^2(M,\bc)$ and $\mathrm{H}^{1,1}(M,\bc)$, respectively. It thus suffices to check that the cycle map $\cl(\pi_f): \SC^1(\pi_f)\hrar\mathrm{H}^{1,1}(\pi_f)$ on each isotypic component is an isomorphism. We show that when $\HH^{1,1}(\pi_f)$ is nonzero, $\HH^{1,1}(\pi_f)=\HH^2(\pi_f)$ is irreducible. So it suffices to show that $\SC^1(\pi_f)$ is nonzero whenever $\pi_f$ occurs in $\HH^{1,1}(M,\bc)$.
\item[Step 2.] The map $\cl$ is defined with respect to the Poincar\'{e} duality,
\begin{equation}\label{intro_pd}
\mathrm{H}^2(M,\bc)\times \mathrm{H}^4_c(M,\bc)\lrar \bc.
\end{equation}
Here the singular cohomology $\mathrm{H}^\ast(M,\bc)$ and $\mathrm{H}^\ast_c(M,\bc)$ can be identified with the de Rham cohomology and de Rham cohomology with compact support. Also, the pairing in (\ref{intro_pd}) respects the action of $\GSp_4(\ba_f)$ and hence is a direct sum of the perfect pairings on the isotypic components,
\[
\HH^2(\pi_f)\times \HH^4_c(\pi_f^\vee)\rar \bc.
\]
To show $\SC^1(\pi_f)\neq 0$, one needs a cycle class $[Z]\in \SC^1(\pi_f)$ and a closed form $\Omega\in \HH^4_c(\pi_f^\vee)$ satisfying $\int_Z \Omega\neq 0$. However, compactly supported closed forms are hard to construct. As a substitute, rapidly decreasing differential forms are easier to construct and they define cohomology groups isomorphic to the de Rham cohomology with compact support (see \cite{borel1980}). We formulate the following proposition (see Prop. ~\ref{formOmega} in Sect. ~\ref{pairng}) and show that it suffices for $\SC^1(\pi_f)\neq 0$.
\begin{proposition*}
When $\mathrm{H}^{1,1}(\pi_f)$ is nonzero, there exists a special divisor $Z$ and a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for all $\pi_f^\prime\neq \pi_f$, \emph{(ii)} $\int_Z \Omega\neq 0$.
\end{proposition*}
\item[Step 3.] To verify the above proposition, we utilize the automorphic description of $\HH^{1,1}(M,\bc)$ in \cite{Weiss92}. When $\mathrm{H}^{1,1}(\pi_f)\neq 0$, there exists a unique irreducible unitary $\GSp_4(\br)$-representation $\pi_\infty$ such that $\pi=\pi_\infty\times \pi_f$ occurs in the discrete spectrum. All such $\pi$ are character twists of three types of basic representations (see Thm. ~\ref{wei_coho}). It suffices to treat the basic representations and, briefly speaking, they are
\begin{itemize}
\item[(I)] a Siegel-type CAP representation of $\PGSp_4(\ba)\cong \SO(3,2)(\ba)$,
\item[(II)] a residue representation of Siegel-type Eisenstein series,
\item[(III)] the trivial representation $1$.
\end{itemize}
We prove the Proposition for these three types in Section ~\ref{np1}, ~\ref{np2} and ~\ref{np3}.
When $\pi$ is of type I, $\pi_\infty$ is the cohomological parameter $\pi^{2+}$ (see Sect. ~\ref{coho-rep}) and $\pi$ is the global theta lift of an irreducible cuspidal metaplectic $\SL_2(\ba)$-representation. These two facts single out certain non-split $\SO_4\subset \SO(3,2)$ so that the automorphic periods
\[
\mpp(\varphi):=\int_{\SO_4(\bq)\backslash \SO_4(\ba)}\varphi(h)dh,\quad \varphi\in \pi
\]
can be nonzero. We let $Z$ be the Hilbert modular surface associated to this $\SO_4$ and consider forms $\Omega\in\HH^{2,2}(\fg,K_\infty,\pi)$. Condition (i) is satisfied automatically. The cohomological periods $\int_Z \Omega$ are related to the values of $\mpp$ on a subspace $v_0\otimes \pi_f\subset \pi$, where $v_0$ is a special vector in $\pi_\infty$. We use the Vogan-Zuckerman description of $\pi^{2+}$ to deduce that when $\mpp$ is nonzero on $\pi$, it must be nonzero on $v_0\otimes \pi_f$, whence a certain cohomological period is nonzero. This subtle phenomenon happens partly because the $K_\infty$-type containing $v_0$ is minimal in $\pi^{2+}$.
When $\pi$ is of type II or III, the candidate for the cycle $Z$ is a product of modular curves, associated to a subgroup $\SO(2,2)\subset \SO(3,2)$. Let $\mathrm{P}$ be the Siegel parabolic subgroup in case II and the Borel subgroup in case III. We specify a certain type of closed degree-$4$ form $\eta$ on the space $\mathrm{P}(\bq)\backslash \GSp_4(\ba)/K_\infty$. Let $L$ denote the left translation on $\GSp_4(\br)$. We follow Harder's method of Eisenstein cohomology and construct
\[
E(\eta)=\sum_{\gamma\in \mathrm{P}(\bq)\backslash \mathrm{GSp}_4(\bq)} L_\gamma^*\eta.
\]
When $\eta$ is carefully chosen, the form $E(\eta)$ is closed and meets Conditions (i) and (ii).
\end{itemize}
The results of this paper were obtained in 2007 and the writing was completed in 2008 when the author visited the University of Wisconsin at Madison.
We later heard that He and Hoffman \cite{hh2012} independently proved a result similar to Theorem ~\ref{picardgroup} in the classical setting using a different method. As a comparison, \cite{hh2012} proves the result by applying the Kudla-Millson theory \cite{kudla_millson86, kudla_millson87, kudla_millson88}; we do not rely on the work of Kudla-Millson but take the direct approach of constructing rapidly decreasing cohomological forms and pairing them with concrete cycles. Such an explicit study of period pairing is sufficient to show that Tate classes on Siegel 3-folds are from special cycles; additionally, the explicit construction of representatives of the Eisenstein cohomology could be useful for other purposes.
\section{Notations}\label{sect_notation}
Let $\ba$ be the ring of adeles over $\bq$ and $\ba_f$ be the subring of finite adeles. Let $\ba^\times$ be the multiplicative group of invertible elements in $\ba$ and $\ba_1^\times$ denote the subgroup consisting of norm $1$ elements. For $a\in \bq^\times$ (resp. $\bq_p^\times$), let $\chi_a=<a,\cdot>$ be the associated quadratic character of $\bq^\times\backslash \ba^\times$ (resp. $\bq_p^\times$), where $<,>$ is the Hilbert symbol. For a character $\psi$ of $\bq\backslash \ba$ (resp. $\bq_p$), $\psi_a$ denotes the $a$-twist $\psi_a(x)=\psi(ax)$.
For a reductive group $H$ over $\ba$, let $H(\br)^+$ denote the identity component of $H(\br)$ with respect to the real topology. Set $\br_+:=\{t\in \br: t>0\}$ and embed it into $\ba^\times$ by sending $t$ to $(t,1_{\ba_f})$. Put $c=\mathrm{Vol}(\bq^\times \br_+\backslash \ba^\times)=\mathrm{Vol}(\bq^\times\backslash \ba^\times_1)$.
\subsection{The group $\mathrm{GSp}_4$}\label{group}
Set $J_n=\smalltwomatrix{}{I_n}{-I_n}{}$ and define
\begin{align*}
\mathrm{GSp}_{4}&=\{g\in \mathrm{GL}_{4}: \tran{g}J_2 g=\nu(g) J_2,\, \nu(g)\in \mathrm{GL}_1\}.
\end{align*}
One calls $\nu(g)$ the similitude character. Let $Z$ denote the center of $\GSp_4$. Set $\Sp_4=\Ker \nu$ and $\PGSp_4=\GSp_4/Z$.
There are three types of parabolic subgroups in $\GSp_4$: the Borel subgroup $B$, the Siegel parabolic subgroup $P$, and the Klingen parabolic subgroup $Q$. We fix a choice of them as below,
\[
B=\left\{\left(\begin{smallmatrix}
* &* &* &*\\
&* &* &*\\
& &* & \\
& &* &*\\
\end{smallmatrix}\right)\right\},\quad
P=\left\{\left(\begin{smallmatrix}
* &* &* &*\\
* &* &* &*\\
& &* &*\\
& &* &*\\
\end{smallmatrix}\right)\right\},\quad
Q=\left\{\left(\begin{smallmatrix}
* &* &* &*\\
* &* &* &*\\
& &* &\\
&* &* &*\\
\end{smallmatrix}\right)\right\}.
\]
We also fix a maximal compact subgroup $\bk=\prod_v \bk_v$ of $\GSp_4(\ba)$, with $\bk_\infty=\bk_\br\cdot\{\smalltwomatrix{I_2}{}{}{\pm I_2}\}$ and $\bk_p=\GSp_4(\bz_p)$ when $p$ is finite. Here $\bk_\br$ is specified by its Lie algebra as in Section ~\ref{algebra}. Write $\bk_f=\prod_{p<\infty}\bk_p$.
There is an exceptional isomorphism $\GSp_4\cong \mathrm{GSpin}(V)$, where $V$ is a $5$-dimensional quadratic space over $\bq$ of Witt index $2$ and determinant $1$. To make the isomorphism explicit, set $w=\smalltwomatrix{}{1}{-1}{}$ and
\begin{align*}
&V=\{Y=\smalltwomatrix{X}{x^\prime w}{-x^{\prime\prime}w}{\leftup{t}{X}}|x^\prime, x^{\prime\prime}\in \bq, X\in M_{2\times 2}(\bq), \mathrm{Tr}(X)=0\},\\
&q(Y)=\frac{1}{4}\mathrm{Tr}(Y^2).
\end{align*}
$\mathrm{GSp}_4$ acts on $V$ by $g\circ Y=gYg^{-1}$ and this action induces an isomorphism $\mathrm{PGSp}_4\isoto \mathrm{SO}(V)$, which is then lifted to an isomorphism $\mathrm{GSp}_4\isoto \mathrm{GSpin}(V)$.
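As a quick check (not spelled out in the original text), the action preserves the quadratic form because the trace is invariant under conjugation:
\[
q(g\circ Y)=\tfrac{1}{4}\mathrm{Tr}\big((gYg^{-1})^{2}\big)=\tfrac{1}{4}\mathrm{Tr}\big(gY^{2}g^{-1}\big)=\tfrac{1}{4}\mathrm{Tr}(Y^{2})=q(Y),
\]
so the image of $\mathrm{GSp}_4$ lies in $\mathrm{O}(V)$, and in fact in $\mathrm{SO}(V)$ since $\mathrm{GSp}_4$ is connected.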
\subsection{The Lie algebra of $\mathrm{GSp}_4(\br)$}\label{algebra}
Put $\fg=\Lie(G(\br))\otimes \bc$, $\fg_0=\Lie(\mathrm{Sp}_4(\br))\otimes \bc$, and $\fz=\Lie(Z(\br))\otimes \bc$. There is $\fg=\fg_0\oplus \fz$. We describe the structure of $\fg_0$ as a complex semi-simple Lie algebra:
\begin{itemize}
\item[(i)] The Cartan involution $\theta: X\rar -\tran{X}$ leads to a Cartan decomposition $\fg_0=\fk+\fp$, with $\theta$ acting on $\fk$ and $\fp$ by $1$ and $-1$ respectively.
\begin{align*}
&\fk=\{X\mid \theta(X)=X\}=
\left\{\smalltwomatrix{s_0J_1}{S}{-S}{s_0J_1}\mid s_0\in\br,\ S\in \mathrm{Sym}_{2\times2}(\bc)\right\},\\
&\fp=\{X\mid \theta(X)=-X\}=
\left\{\smalltwomatrix{S_1}{S_2}{S_2}{-S_1}\mid S_1,S_2\in
\mathrm{Sym}_{2\times2}(\bc) \right\}.
\end{align*}
$\ft=\bc H\oplus \bc J$ is a Cartan subalgebra of $\fg_0$ contained in $\fk$, where
\begin{align*}
&H=\left(\begin{smallmatrix}
&1 & &\\
-1& & &\\
& & &1\\
& &-1 &
\end{smallmatrix}\right),\ \
J=\left(\begin{smallmatrix}
& &1 &\\
& & &1\\
-1& & &\\
&-1 & &
\end{smallmatrix}\right).
\end{align*}
\item[(ii)] Let $\alpha$ and $\beta$ be linear functionals on $\ft$ given by
\begin{align*}
&\alpha(n_1H+n_2J)=-2n_1i,\\
&\beta(n_1H+n_2J)=-2n_2i.
\end{align*}
Both $\fp$ and $\fk$ are the direct sum of root spaces (with respect to $\ft$),
\begin{align*}
&\fp=V_{-\alpha+\beta}\oplus V_{\alpha-\beta}\oplus
V_{\beta}\oplus V_{-\beta}\oplus
V_{\alpha+\beta}\oplus V_{-\alpha-\beta},\\
&\fk=\ft\oplus \bc V_\alpha \oplus \bc V_{-\alpha},
\end{align*}
where the subscripts denote the roots.
\item[(iii)] Put $\fk_\br=\fk\cap \Lie(\mathrm{Sp}_4(\br))$ and $\ft_\br=\ft\cap \Lie(\mathrm{Sp}_4(\br))=\br H\oplus \br J$, then $\ft_\br$ is a Cartan subalgebra of $\fk_\br$. There is
\[
\fk_\br= \br J\oplus (\bc H\oplus V_\alpha\oplus V_{-\alpha})\cap \fk_\br.
\]
$\br J$ is the center of $\fk_\br$ and $(\bc H\oplus V_\alpha\oplus V_{-\alpha})\cap \fk_\br\cong \fs\fu_2$.
\end{itemize}
Let $K_\br$ be the analytic subgroup of $\mathrm{Sp}_4(\br)$ with Lie algebra $\fk_\br$ and $T$ be the analytic subgroup with Lie algebra $\ft_\br$.
The finite-dimensional irreducible complex representations of $K_\br$ are exactly the finite-dimensional irreducible complex representations of $\fk_\br$. They are parameterized by highest weights and we denote by $\delta_\gamma$ the one with highest weight $\gamma$. The following lemma is easy to verify.
\begin{lemma}\label{hw}
A linear functional $\gamma:\ft\rar \bc$ is the highest weight of a finite-dimensional irreducible complex representation of $\fk_\br$ if and only if it is of the form $\gamma=n_1\alpha+n_2\beta, n_1\in \frac{1}{2}\bz_{\geq 0}, n_2\in \frac{1}{2}\bz$.
\end{lemma}
\subsection{Siegel $3$-folds}\label{3fold}
We follow Deligne \cite{Deligne71} to define the Siegel $3$-folds associated to the $\bq$-group $G:=\GSp_4$. Fix a $\br$-group homomorphism
\begin{align*}
h:\Res_\br^\bc \bc^\times &\rar G(\br)\\
x+iy &\rar \smalltwomatrix{xI_2}{yI_2}{-yI_2}{xI_2}.
\end{align*}
The centralizer of $h$ in $G(\br)$ is $K_\infty=Z(\br)K_\br$. For a compact open subgroup $K_f$ of $G(\ba_f)$, the Siegel $3$-fold $\mkf$ of level $K_f$ is a $3$-dimensional quasi-projective variety over $\bq$ whose set of complex points is
\[
\mkf(\bc)=G(\bq)\backslash G(\ba)/K_\infty K_f.
\]
The family $\{\mkf\}$ forms an inverse system of $\bq$-varieties and its inverse limit $M:={\ilim}_{K_f} \mkf$ is a $\bq$-scheme (not of finite type).
\subsection{Special cycles}\label{cycle}
We follow \cite{kudla1997} to define special cycles on Siegel $3$-folds. Recall the isomorphism $G\cong \mathrm{GSpin}(V)$. For a positive-definite subspace $V_0$ of $V$, regard $G_{V_0}:=\mathrm{GSpin}(V_0^\perp)$ as a $\bq$-subgroup of $G$.
\begin{itemize}
\item[(i)] Choose an element $g_\infty\in G(\br)$ such that
\[
L_\infty:=g_\infty K_\infty g_\infty^{-1} \cap G_{V_0}(\br)
\]
contains a maximal connected compact subgroup of $G_{V_0}(\br)$. All such choices then form a double coset $G_{V_0}(\br) g_\infty K_\infty$.
\item[(ii)] For a compact open subgroup $K_f\subset G(\ba_f)$ and an element $g_f\in G_{V_0}(\ba_f)$, put $L_f=g_fK_fg_f^{-1}\cap G_{V_0}(\ba_f)$.
\item[(iii)] Let $\mz_{V_0,g_f,K_f}$ be the Shimura variety associated to $G_{V_0}$ of level $L_f$. It is quasi-projective and defined over $\bq$, with $\mz_{V_0,g_f,K_f}(\bc)=G_{V_0}(\bq)\backslash G_{V_0}(\ba)/L_\infty L_f$ and $\dim \mz_{V_0,g_f,K_f}=3-\dim V_0$.
\end{itemize}
There is a natural morphism $i_{g_f,K_f}:\mz_{V_0,g_f,K_f}\rar \mkf$; over complex points, it is given by $\mz_{V_0,g_f,K_f}(\bc) \rar \mkf(\bc)$, $(x_\infty, x_f) \rar (x_\infty g_\infty, x_f g_f)$. We write $Z_{V_0,g_f,K_f}$ for the $\bq$-cycle $i_{g_f,K_f}(\mz_{V_0,g_f,K_f})$.
\begin{definition}
For $i=1,2,3$, let $\SC^i(\mkf)$ be the subgroup of the Chow group $\CH^i(\mkf\otimes_\bq \bbq)$ generated by connected components of $Z_{V_0,g_f,K_f}\otimes_\bq \bbq$, with $g_f$ running over $G(\ba_f)$ and $V_0$ running over positive-definite subspaces of dimension $i$.
\end{definition}
The family $\{\SC^i(\mkf)\}$ forms a direct system: for two compact open subgroups $K_f \subset K_f^\prime$, the projection $M_{K_f} \rar M_{K_f^\prime}$ is flat and induces a pull-back homomorphism $\SC^i(M_{K_f^\prime})\rar \SC^i(\mkf)$. The direct limit
\[
\SC^i(M):=\dlim\ _{K_f} \SC^i(\mkf)
\]
is a $G(\ba_f)$-module. For $g_f\in \mathrm{\mathrm{GSp}in}(V)_{\ba_f}$, the translation map $\rho(g_f)_{K_f}: \mkf \rar M_{g_f^{-1}K_fg_f}$, $(x_\infty, x_f)\rar (x_\infty, x_fg_f)$ induces an isomorphism $\rho(g_f)_{K_f}^\ast: \SC^i(M_{g_f^{-1}K_fg_f})\rar \SC^i(\mkf)$, then $g_f$ acts on $\SC^i(M)$ by $\rho(g_f)^\ast=\dlim_{K_f}\rho(g_f)_{K_f}^\ast$. When $K_f$ is neat, $\SC^i(M)^{K_f}=\SC^i(\mkf)$.
We comment that the group $G_{V_0}$ has a realization other than $\GSpin$.
(i) If $V_0=\bq v$, then $G_{V_0}\cong \GL_2^\prime(F)$, with $F=\bq(\sqrt{q(v)})$ and
\[
\mathrm{GL}_2^\prime(F):=\{g\in \mathrm{GL}_2(F)|\det g\in \bq^\times\};
\]
Specifically, when $q(v)\in {\bq^\times}^2$, we have $F=\bq\oplus \bq$ and $\GL_2^\prime(F)=\{(g_1,g_2)\in \GL_2\times \GL_2: \det g_1=\det g_2\}$; the connected components of $Z_{\bq v,g_f,K_f}$ are essentially products of two modular curves associated to $\GL_2$.
(ii) If $\dim V_0=2$, there exists an indefinite quaternion algebra $D$ over $\bq$ such that $G_{V_0}\cong D^\times$. Here indefinite means $D(\br)\cong M_{2\times 2}(\br)$.
\begin{remark}
Let $\SC^1_{s}(M)$ (resp. $\SC^1_{ns}(M)$) denote the subspace of $\SC^1(M)$ spanned by connected components of $Z_{\bq v,g_f, K_f}$ with $q(v)\in {\bq^\times}^2$ (resp. $q(v)\in \bq_+\backslash {\bq^\times}^2$). We call cycles in $\SC^1_s(M)$ split divisors and cycles in $\SC^1_{ns}(M)$ non-split divisors.
\end{remark}
\subsection{CAP representations}\label{parabolic}
Let $\tau$ be an irreducible cuspidal automorphic representation of $\mathrm{GL}_2(\ba)$ and $V_\tau\subset \ma_{cusp}(\PGL_2)$ be the underlying space of $\tau$. Let $\chi$ be a character of $\ba^\times/\bq^\times$ and $z\in \bc$. Let $\Pi(\tau\boxtimes \chi,z)$ denote the induced representation of $G(\ba)$ consisting of smooth $\bk$-finite functions $f:G(\ba)\rar V_\tau$ that satisfy
\[
f\left(\smalltwomatrix{A}{u}{}{x\leftup{t}{A^{-1}}}g\right)=|\frac{\det A}{x}|^{\frac{3}{2}+z}\chi(x)\tau(A)f(g)
\]
for $u\in \mathrm{Sym}_{2\times 2}(\ba)$, $A\in \GL_2(\ba)$, and $x\in \ba^\times$.
\begin{definition}
An irreducible cuspidal automorphic representation $\pi$ of $\mathrm{GSp}_4(\ba)$ is called CAP of Siegel type $(\tau\boxtimes \chi,z)$ if there exists an irreducible constituent $\Pi$ of $\Pi(\tau\boxtimes \chi,z)$ such that $\pi_p\cong \Pi_p$ for almost all $p$.
\end{definition}
To describe CAP representations of Siegel type, it is necessary to use the theta correspondence between $\PGSp_4, \PGL_2$ and the metaplectic group $\wtilde{\SL}_2$. We refer to Section ~\ref{theta} for notions of theta correspondence.
Choose a non-trivial character $\psi$ of $\bq\backslash \ba$. Let $\ma_{00}(\wtilde{\SL}_2)$ be the space of cuspidal automorphic forms on $\wtilde{\SL}_2(\ba)$ that are orthogonal to elementary theta series. Waldspurger \cite{Wald80} \cite{Wald91} proved a packet decomposition
\[
\ma_{00}(\wtilde{\SL}_2)=\sqcup_{\tau\subset \ma_{cusp}(\PGL_2)} \Wd_\psi(\tau),
\]
where $\tau$ runs over irreducible cuspidal automorphic representations of $\PGL_2(\ba)$ and the Waldspurger packet $\Wd_\psi(\tau)$ is the collection of all non-zero global theta lifts $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\tau\otimes \chi_a,\psi_a)$, $a\in \bq^\times$. Define the $\ell_\psi$-Whittaker functional on $\ma(\wtilde{\SL}_2)$ by
\[
\ell_{\psi}(\varphi):=\int_{\bq\backslash \ba} \varphi\big(\smalltwomatrix{1}{n}{}{1}\big)\psi(-n)dn.
\]
For $\sigma\in \Wd_\psi(\tau)$ and $a\in \bq^\times$, there is
\begin{equation*}
\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi_a)=
\begin{cases}
\tau\otimes \chi_a, &\ell_{\psi_a}|_{\sigma}\neq 0,\\
0, &\ell_{\psi_a}|_{\sigma}=0.
\end{cases}
\end{equation*}
\begin{theorem}\cite{PS83}\label{ps_cap}
$\pi$ is CAP of Siegel type $(\tau\boxtimes \chi, z)$ if and only if
\begin{itemize}
\item[(i)] $z=\pm\frac{1}{2}$ and $\tau$ has a trivial central character,
\item[(ii)] $\pi=\Theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma,\psi)\otimes \chi$, where $\sigma$ is an irreducible cuspidal automorphic representation of $\wtilde{\SL}_2(\ba)$ belonging to the Waldspurger packet $\Wd_\psi(\tau)$ and satisfying $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi)=0$.
\end{itemize}
\end{theorem}
CAP representations of Siegel type all occur with multiplicity one in the discrete spectrum $L^2_{\disc}(G(\bq)Z(\br)^+\backslash G(\ba))$.
\subsection{Cohomological parameters}\label{coho-rep}
A cohomological parameter of $G$ is an irreducible unitary $(\fg,K_\infty)$-module that has non-trivial $(\fg,K_\infty)$-cohomology. By the Vogan-Zuckerman theory \cite{VZ84}, one can determine all seven cohomological parameters of $G$. Five of them contribute to $\mathrm{H}^2$ and, among these five, four contribute to $\mathrm{H}^{1,1}$. The four parameters are $\{1, \sgn\circ \nu, \pi^{2+},\pi^{2-}\}$ and their nonzero $(\fg,K_\infty)$-cohomology groups are
\begin{align*}
&H^{1,1}(\fg,K_\infty,\pi^{2\pm})=H^{2,2}(\fg,K_\infty,\pi^{2\pm})=\bc,\\
&H^{i,i}(\fg,K_\infty,1)=\bc\ (i=0,1,2,3),\\
&H^{i,i}(\fg,K_\infty,\sgn)=\bc\ (i=0,1,2,3).
\end{align*}
\subsubsection{The Langlands parameter of $\pi^{2\pm}$}\label{sss_lppi2}
Let $\tau_\infty$ be an irreducible tempered unitary representation of $\GL_2(\br)$ on the space $V_{\tau_\infty}$. Let $\chi_\infty$ be a character of $\br^\times$ and $z\in \bc$. Let $I^\infty_{P,\tau_\infty\boxtimes \chi_\infty, z}$ be the smooth induced representation of $G(\br)$ consisting of smooth functions $f:G(\br)\rar V_{\tau_\infty}$ that satisfy
\[
f\left(\smalltwomatrix{I_2}{u}{}{I_2}\smalltwomatrix{A}{}{}{x\leftup{t}{A^{-1}}}g\right)=\big|\frac{\det A}{x}\big|^{z+\frac{3}{2}}\chi_\infty(x)\tau_\infty(A)f(g)
\]
for $A\in \GL_2(\br), u\in \mathrm{Sym}_{2\times 2}(\br)$ and $x\in \br^\times$. Let $I_{P,\tau_\infty\boxtimes \chi_\infty, z}$ be the underlying $(\fg,K_\infty)$-module of $I^\infty_{P,\tau_\infty\boxtimes \chi_\infty, z}$ consisting of $K_\infty$-finite functions. When $\re(z)>0$, $I_{P,\tau_\infty\boxtimes \chi_\infty, z}$ is irreducible for almost all $z$ and, when it is reducible, has a unique irreducible quotient. We set $J_{P,\tau_\infty\boxtimes \chi_\infty, z}$ to be $I_{P,\tau_\infty\boxtimes \chi_\infty, z}$ in the former case and to be the unique irreducible quotient in the latter case.
The calculation in \cite{VZ84} allows one to identify $\pi^{2\pm}$ as a Langlands quotient of the above type. Let $\fD_{2n}$ ($n\geq 1$) be the discrete series representation of $\GL_2(\br)$ with trivial central character and of weight $2n$. In other words, $\fD_{2n}$ is the unique subrepresentation of the $\GL_2(\br)$-representation induced from the quasi-character $\smalltwomatrix{a_1}{*}{}{a_2}\rar |\frac{a_1}{a_2}|^{n-\frac{1}{2}}$. There is $\{\pi^{2+},\pi^{2-}\}=\{J_{P,\fD_4\otimes 1, \frac{1}{2}},
J_{P,\fD_4\otimes \sgn, \frac{1}{2}}\}$ and we set
\[
\pi^{2+}=J_{P,\fD_4\otimes 1, \frac{1}{2}},\quad \pi^{2-}=J_{P,\fD_4\otimes \sgn, \frac{1}{2}}.
\]
\subsubsection{The structure of $H^2(\fg,K_\infty,\pi^{2+})$}\label{coho_liealg}
We keep the notations as in Section ~\ref{algebra}. For a vector space $\fa$ over $\bc$, let $\fa^\ast$ denote its dual $\mathrm{Hom}(\fa,\bc)$. For $j\geq 0$, there is a canonical isomorphism $(\wedge^j \fa)^\ast \cong \wedge^j \fa^\ast$ and we identify $(\wedge^j \fa)^\ast$ with $\wedge^j \fa^\ast$.
Now we compute the $(\fg,K_\infty)$-cohomology of $\pi^{2+}$. Set $B_0=B\cap \mathrm{Sp}_4$, $\fb_0=\Lie(B_0(\br))\otimes \bc$ and $\tilde{\fk}:=\mathrm{Lie}(K_\infty)=\fk\oplus \fz$. There is $\fg=\fb_0\oplus \tilde{\fk}$, whence $\fb_0\cong \fg/\tilde{\fk}\cong \fp$ and $\fb_0^\ast \cong (\fg/\tilde{\fk})^\ast \cong \fp^\ast$. $K_\infty$ and $\tilde{\fk}$ act on $\fb_0$ and $\fb_0^*$ accordingly. There is
\begin{align*}
H^2(\fg,K_\infty,\pi^{2+})
=&\Hom_{K_\infty}(\wedge^2(\fg/\tilde{\fk}),\pi^{2+})\\
=&\Hom_{\fk}(\wedge^2\fb_0,\pi^{2+})\\
=&\big((\wedge^2\fb_0)^*\otimes \pi^{2+}\big)^{\fk}\\
=&\big(\wedge^2\fb_0^*\otimes \pi^{2+}\big)^{\fk}.
\end{align*}
The $\fk$-module structure of $\wedge^2\fb_0^*$ can be determined as below.
\begin{itemize}
\item[(i)] Choose the following explicit basis of $\fb_0$:
\begin{align*}
&a=\left(\begin{smallmatrix}
1& & &\\
&1 & &\\
& &-1 &\\
& & &-1
\end{smallmatrix}\right),\ \
h=\left(\begin{smallmatrix}
1& & &\\
&-1 & &\\
& &-1 &\\
& & &1
\end{smallmatrix}\right),\ \
n_0=\left(\begin{smallmatrix}
0&1 & &\\
&0 & &\\
& &0 &\\
& &-1 &0
\end{smallmatrix}\right),\ \ \\
&n_1=\left(\begin{smallmatrix}
0& &1 &\\
&0 & &1\\
& &0 &\\
& & &0
\end{smallmatrix}\right),\ \
n_2=\left(\begin{smallmatrix}
0& &1 &\\
&0 & &-1\\
& &0 &\\
& & &0
\end{smallmatrix}\right),\ \
n_3=\left(\begin{smallmatrix}
0& & &1\\
&0 &1 &\\
& &0 &\\
& & &0
\end{smallmatrix}\right).
\end{align*}
The weight space decomposition of $\fb_0$ with respect to the action of $\ft$ is $\fb_0=\bc e_{-\alpha-\beta}\oplus \bc e_{-\beta}\oplus \bc e_{\alpha-\beta}\oplus \bc e_{-\alpha+\beta}\oplus \bc e_{\beta}\oplus \bc e_{\alpha+\beta}$, where the subscripts denote the weights and
\begin{align*}
&e_{-\alpha-\beta}=\frac{1}{2}h+n_0i+n_2i-n_3,
&e_{\alpha+\beta}=\frac{1}{2}h-n_0i-n_2i-n_3,\\
&e_{\alpha-\beta}=\frac{1}{2}h-n_0i+n_2i+n_3,
&e_{-\alpha+\beta}=\frac{1}{2}h+n_0i-n_2i-n_3,\\
&e_{\beta}=\frac{1}{2}a-n_1i, & e_{-\beta}=\frac{1}{2}a+n_1i.
\end{align*}
\item[(ii)]
$\wedge^2\fb_0, \wedge^4\fb_0, \wedge^2\fb_0^*, \wedge^4\fb_0^*$ are isomorphic as $\fk$-modules and each is the direct sum of five irreducible $\fk$-submodules with highest weights $0,\alpha-2\beta, \alpha, \alpha+2\beta, 2\alpha$ respectively.
\item[(iii)] Let $\{e^*_{-\alpha-\beta}, e^*_{-\beta}, e^*_{\alpha-\beta}, e^*_{-\alpha+\beta}, e^*_{\beta}, e^*_{\alpha+\beta}\}$ be the basis of $\fb_0^*$ dual to the basis $\{$ $e_{-\alpha-\beta}$, $e_{-\beta}$, $e_{\alpha-\beta}$, $e_{-\alpha+\beta}$, $e_{\beta}$, $e_{\alpha+\beta}$$\}$ of $\fb_0$. The irreducible $\fk$-submodule of $\wedge^2 \fb^*_0$ with highest weight $2\alpha$ is of dimension $5$ and spanned by
\begin{align*}
&\eta_2=e^*_{-\alpha-\beta}\wedge e^*_{-\alpha+\beta},\\
&\eta_1=e^*_{-\alpha-\beta}\wedge e^*_{\beta}+e^*_{-\beta}\wedge
e^*_{-\alpha+\beta},\\
&\eta_0=e^*_{-\alpha-\beta}\wedge
e^*_{\alpha+\beta}+2e^*_{-\beta}\wedge
e^*_{\beta}+e^*_{\alpha-\beta}\wedge e^*_{-\alpha+\beta},\\
&\eta_{-1}=e^*_{\alpha-\beta}\wedge e^*_{\beta}+e^*_{-\beta}\wedge
e^*_{\alpha+\beta},\\
&\eta_{-2}=e^*_{\alpha-\beta}\wedge e^*_{\alpha+\beta}.
\end{align*}
The vector $\eta_j (-2\leq j\leq 2)$ is of weight $j\alpha$.
\end{itemize}
\begin{lemma}\label{cohopi2}
\begin{itemize}
\item[(i)]The irreducible $\fk$-submodules of $\pi^{2+}$ are of highest weights $m_1\alpha+m_2\beta$, with $m_1\in \bz_{\geq 2},m_2\in\bz$ and $m_1\equiv m_2\mod 2$. Specifically, the irreducible $\fk$-submodule of $\pi^{2+}$ with highest weight $2\alpha$ occurs with multiplicity one in $\pi^{2+}$.
\item[(ii)]Let $\pi^{2+}(\delta_{2\alpha})$ be the irreducible $\fk$-submodule of $\pi^{2+}$ with highest weight $2\alpha$. There is a basis $\{v_j|-2\leq j\leq 2\}$ of $\pi^{2+}(\delta_{2\alpha})$ such that $v_j$ is of weight $j\alpha$ and $\HH^2(\fg,K_\infty,\pi^{2+})=\bc\sum_{j=-2}^2 v_{-j}\otimes\eta_j$.
\end{itemize}
\end{lemma}
\begin{proof}
Part (i) is a direct application of the Vogan-Zuckerman theory to $G$. For (ii), recall that $H^2(\fg,K_\infty,\pi^{2+})=\Hom_{\fk}(\wedge^2\fb_0,\pi^{2+})$; by (i) and the structure of $\wedge^2 \fb_0$, this space is $1$-dimensional and a $\fk$-invariant homomorphism from $\wedge^2\fb_0$ to $\pi^{2+}$ can only
happen between their $\fk$-submodules with highest weight $2\alpha$. Choose a basis $\{v_j|-2\leq j\leq 2\}$ of $\pi^{2+}(\delta_{2\alpha})$ such that $v_j$ is of weight $j\alpha$, then such a homomorphism is of the form $\sum_{j=-2}^2 c_j v_{-j}\otimes \eta_j$, where the $c_j$ are nonzero numbers. Replacing $v_j$ by $c_{-j}v_j$, one gets the desired basis of $\pi^{2+}(\delta_{2\alpha})$.
\end{proof}
\subsection{Theta correspondence}\label{theta}
We prepare here the relevant notions of theta lifting for the discussion of CAP representations in Section ~\ref{parabolic}.
\subsubsection{Local theta correspondence}
Let $k$ be a local field of characteristic zero and $\psi$ a non-trivial character of $k$. For a reductive group $\mathrm{G}$ over $k$, let $\mathrm{Irr}(\mathrm{G})$ denote the set of irreducible admissible representations of $\mathrm{G}$ when $k$ is non-archimedean and the set of irreducible admissible $(\mathrm{Lie}(\mathrm{G})\otimes_\br \bc,\mathrm{K})$-modules when $k$ is archimedean. ($\mathrm{K}$ denotes a maximal compact subgroup of $\mathrm{G}$ in case of $k$ being archimedean.)
Let $\wtilde{\SL}_2(k)$ be the $2$-fold metaplectic cover of $\SL_2(k)$; it is $\SL_2(k)\times \bz_2$ as a set and the upper triangular unipotent group in $\SL_2(k)$ has a lift to $\wtilde{\SL}_2(k)$.
Let $(U,q)$ be a quadratic space over $k$ of odd dimension and $\ms(U)$ be the space of Bruhat-Schwartz functions on $U$. The Weil representation $\omega:=\omega_{\psi,U}$ of $\wtilde{\SL}_2(k)\times \OO(U)$ on $\ms(U)$ is given by:
\begin{align*}
&\omega(h)\phi(X)=\phi(h^{-1}X),\quad h\in \OO(U),\\
&\omega\big(\smalltwomatrix{1}{n}{}{1},\epsilon\big)\phi(X)=\epsilon \psi\big(nq(X)\big)\phi(X),\quad n\in k,\, \epsilon\in \bz_2,\\
&\omega\big(\smalltwomatrix{a}{}{}{a^{-1}},\epsilon\big)\phi(X)=\epsilon\chi_{\psi,U}(a)|a|^{\frac{\dim U}{2}}\phi(aX),\quad a\in k^\times,\\
&\omega(\smalltwomatrix{}{1}{-1}{},\epsilon)\phi(X)=\epsilon \gamma(\psi, U)\int_{U}\phi(Y)\psi(q(X,Y))dY.
\end{align*}
Here $q(X,Y)=q(X+Y)-q(X)-q(Y)$ is the associated quadratic pairing, $\gamma(\psi,U)$ is a constant of norm $1$, and $\chi_{\psi,U}:k^\times \rar \mathrm{S}^1$ is a function satisfying $\chi_{\psi,U}(a_1a_2)=\chi_{\psi,U}(a_1)\chi_{\psi,U}(a_2)<a_1,a_2>$.
Let $\pi\in \mathrm{Irr}(\OO(U))$ and let $\sigma\in \mathrm{Irr}(\wtilde{\SL}_2)$ satisfy $\sigma((I_2,\epsilon))=\epsilon$. The maximal $\pi$-isotypic quotient of $\omega$ is of the form $\pi\boxtimes \theta_0(\pi)$ and the maximal $\sigma$-isotypic quotient of $\omega$ is of the form $\theta_0(\sigma)\boxtimes \sigma$, where $\theta_0(\pi)$ and $\theta_0(\sigma)$ are finite-length admissible representations of $\wtilde{\SL}_2(k)$ and $\OO(U)$ respectively. Let $\theta(\pi)$ and $\theta(\sigma)$ be the maximal semi-simple quotients of $\theta_0(\pi)$ and $\theta_0(\sigma)$ respectively. The Howe duality conjecture asserts that $\theta(\pi)$ and $\theta(\sigma)$ are irreducible. It has been proved for general reductive dual pairs when $k$ is not a dyadic field; when $k$ is dyadic, it is also known for the current pair $(\wtilde{\SL}_2, \OO(U))$ when $\dim U=1, 3$ and $5$.
When $\theta(\pi)=\sigma$ and $\theta(\sigma)=\pi$, we say that $\pi$ and $\sigma$ are in local theta correspondence with respect to $\psi$ and call $\sigma$ (resp. $\pi$) the local theta lift of $\pi$ (resp. $\sigma$). We use the notations $\theta(\pi,\psi)$ and $\theta(\sigma,\psi)$ when the role of $\psi$ needs to be emphasized.
Because $\dim U$ is assumed to be odd, each $\pi\in \mathrm{Irr}(\SO(U))$ has two extensions to $\OO(U)$. When $\dim U\geq 3$, at most one of the extensions occurs in the local theta correspondence with $\wtilde{\SL}_2(k)$ and one actually considers local theta correspondence between $\mathrm{Irr}(\SO(U))$ and $\mathrm{Irr}(\wtilde{\SL}_2)$.
\subsubsection{Global theta lifting}\label{sss_gtheta}
Let $F$ be a number field, $\ba_F$ be the ring of adels over $F$, and $\psi$ be a non-trivial character of $\ba_F/F$. For a semisimple group $\mathrm{G}$ over $F$, $[\mathrm{G}]$ denotes the quotient space $\mathrm{G}(F)\backslash \mathrm{G}(\ba_F)$.
Let $\wtilde{\SL}_2(\ba_F)$ be the two-fold metaplectic cover of $\SL_2(\ba_F)$. Let $(U,q)$ be a quadratic space over $F$ of odd dimension and $\ms(U(\ba_F))$ be the space of Bruhat-Schwartz functions on $U(\ba_F)$. Let $\omega:=\omega_{\psi, U}=\otimes_v \omega_{\psi_v, U(F_v)}$ be the global Weil representation of $\wtilde{\SL}_2(\ba_F)\times \OO(U)_{\ba_F}$ on $\ms(U(\ba_F))$ with respect to $\psi$. For $\phi\in \ms(U(\ba_F))$, the associated theta kernel function
\[
\theta_\phi(h,g)=\sum_{\xi\in U(F)} \omega(h,g)\phi(\xi)
\]
is a slowly increasing function on $(\OO(U)_F\backslash \OO(U)_{\ba_F})\times (\SL_2(F)\backslash \wtilde{\SL}_2(\ba_F))$. Specifically, when $\dim U=1$, the functions $\theta_\phi(g)$ are called elementary theta series on $\wtilde{\SL}_2(\ba_F)$.
Suppose that $\dim U\geq 3$. Let $\sigma$ be an irreducible cuspidal representation of $\wtilde{\SL}_2(\ba_F)$ and $\pi$ be an irreducible cuspidal representation of $\SO(U)_{\ba_F}$. We define the global theta lift of $\sigma$ to $\SO(U)_{\ba_F}$ and the global theta lift of $\pi$ to $\wtilde{\SL}_2(\ba_F)$ as
\begin{align*}
&\Theta(\sigma,\psi):=\{\Theta(\phi,\varphi):\phi\in \ms(U(\ba_F)), \varphi\in \sigma\},\\
&\Theta(\pi,\psi):=\{\Theta(\phi,f):\phi\in \ms(U(\ba_F)), f\in \pi\},
\end{align*}
where
\begin{align*}
&\Theta(\phi,\varphi):=\int_{[\SL_2]}\overline{\varphi(g)}\theta_\phi(h,g)dg,\quad \Theta(\phi,f):=\int_{[\SO(U)]}\overline{f(h)}\theta_\phi(h,g)dh.
\end{align*}
\subsubsection{Theta correspondence between $\wtilde{\SL}_2$, $\PGL_2$ and $\PGSp_4$}\label{sss_pi2+}
There is an isomorphism $\PGL_2\cong \SO(V^\prime)$, where
\[
V^\prime=\{X\in M_{2\times 2}(\bq):\Tr(X)=0\},\quad q^\prime(X)=-\det X.
\]
Recall that $\PGSp_4\cong \SO(V)$. By the theta correspondence between $\wtilde{\SL}_2$ and $\PGL_2$ (resp. $\PGSp_4$) we mean the theta correspondence between $\wtilde{\SL}_2$ and $\SO(V^\prime)$ (resp. $\SO(V)$).
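Concretely, the identification $\PGL_2\cong\SO(V^\prime)$ is the standard one induced by the conjugation action of $\GL_2$ on trace-zero matrices, which preserves $q^\prime$:
\[
g\cdot X:=gXg^{-1},\qquad q^\prime(gXg^{-1})=-\det(gXg^{-1})=-\det X=q^\prime(X),
\]
and the kernel of the resulting homomorphism $\GL_2\rar \SO(V^\prime)$ is the center.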
Choose a non-trivial character $\psi_\infty$ of $\br$. The following lemma gives a description of $\pi^{2+}$ in terms of local theta correspondence.
\begin{lemma}\label{localtheta}
Set $\sigma_\infty=\theta_{\wtilde{\SL}_2\times \PGL_2}(\fD_{2n},\psi_\infty)$; then $\theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma_\infty,\psi_\infty)=J_{P,\fD_{2n}\otimes 1,\frac{1}{2}}$ and $\theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma_\infty,\psi^{-1}_\infty)$ is a discrete series.
\end{lemma}
\begin{proof}
This lemma is a special case of Proposition 5.5 in \cite{wgan2008}.
\end{proof}
\section{Cycle maps and Proof of Theorem ~\ref{main}}\label{cyclemap}
We review the cycle maps and prove Theorem~\ref{main} assuming Theorem~\ref{picardgroup}.
\subsection{The maps $\cl_{et}$ and $\cl$}
Let $X$ be an algebraic variety over a number field $F$. For $0\leq i\leq 2\dim X$, let $\mz^i(X_\bbq)$ be the free abelian group generated by codimension-$i$ irreducible closed subvarieties of $X_\bbq$ and $\CH^i(X_\bbq)$ be the quotient group of $\mz^i(X_\bbq)$ modulo the relation of rational equivalence. For a cycle $Z\in \mz^i(X_\bbq)$, $[Z]$ refers to its class in $\CH^i(X_\bbq)$.
Let $\HH^\ast(X_\bbq,\bz_\ell)$ and $\HH^\ast_c(X_\bbq,\bz_\ell)$ be the $\ell$-adic cohomology and $\ell$-adic cohomology with compact support. For a $\bz_\ell$-module $R$, set $\HH^\ast_?(X_\bbq,R)=\HH^\ast_?(X_\bbq,\bz_\ell)\otimes R$. The image of $\HH^\ast_c$ in $\HH^\ast$ is denoted by $\HH^\ast_!$ and called the interior cohomology. Let $\mu_n$ be the multiplicative group of the $n$-th roots of unity in $\bbq$ and set $\bz_\ell(1):=\ilim_n \mu_{\ell^n}$; $\bz_\ell(1)$ is isomorphic to $\bz_\ell$ as a $\bz_\ell$-module but $\Gal(\bbq/\bq)$ acts on it by the cyclotomic character. Let $\bz_\ell(i)$ be the $i$-fold tensor of $\bz_\ell(1)$ and set $\bq_\ell(i)=\bz_\ell(i)\otimes \bq_\ell$. There is a $\Gal(\bbq/F)$-equivariant $\ell$-adic cycle map
\[
\cl_{et}: \CH^i(X_\bbq)\rar \HH^{2i}(X_\bbq,\bz_\ell(i)).
\]
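As a sanity check in the simplest case: for $X=\bp^1_F$, the group $\CH^1(X_\bbq)$ is free of rank one, generated by the class of a point, and $\cl_{et}$ carries this class to a generator, giving
\[
\cl_{et}\colon \CH^1(\bp^1_{\bbq})\cong \bz\lrar \HH^2(\bp^1_{\bbq},\bz_\ell(1))\cong \bz_\ell,
\]
an isomorphism after tensoring with $\bz_\ell$.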
For a finite field extension $E\supset F$ contained in $\bbq$, let $\CH^i(X_E)$ be the subgroup of $\CH^i(X_\bbq)$ generated by irreducible closed subvarieties defined over $E$. We have $\CH^i(X_E)=\CH^i(X_\bbq)^{\Gal(\bbq/E)}$. Elements in $\Ta_E^{2i}(X_\bbq):=\HH^{2i}(X_\bbq,\bq_\ell(i))^{\Gal(\bbq/E)}$ are called degree-$2i$ Tate classes over $E$. The union $\Ta^{2i}(X_\bbq)=\cup_E \Ta_E^{2i}(X_\bbq)$ over all finite extensions $E\supset F$ is the space of all degree-$2i$ Tate classes. Tate's conjecture asserts that $\Ta_E^{2i}(X_\bbq)$ is generated by the image of $\CH^i(X_E)$.
When $X$ is smooth, for an abelian group $R$, let $\mathrm{H}^*(X,R)$ (resp. $\mathrm{H}^*_c(X,R)$) denote the singular cohomology (resp. the singular cohomology with compact support) of $X(\bc)$ with respect to its complex topology. When $R$ is $\br$ or $\bc$, $\mathrm{H}^*(X,R)$ (resp. $\mathrm{H}^*_c(X,R)$) is isomorphic to the de Rham cohomology (resp. de Rham cohomology with compact support) defined with the complex of differential forms (resp. differential forms with compact support). There is a cycle map
\[
\cl: \CH^i(X_\bbq)\rar \HH^{2i}(X,\bz),
\]
defined by means of the Poincar\'{e} duality between $\HH^\ast_c$ and $\HH^\ast$.
\begin{remark}\label{rm_comparison}
When $X$ is smooth, by \cite[Expos\'{e} XVI]{SGA4.3}, there are canonical isomorphisms between the etale cohomology with constant sheaf $\bz/n\bz$ and the singular cohomology with coefficients in $\bz/n\bz$:
\[
\HH^j(X_\bbq,\bz/n\bz)\cong \HH^j(X,\bz/n\bz),\quad \HH^j_c(X_\bbq,\bz/n\bz)\cong \HH^j_c(X,\bz/n\bz).
\]
The comparison map is functorial and respects Poincar\'{e} duality; hence it is compatible with the cycle map $\cl_{et}$ into $\HH^{2i}(X_\bbq,\bz/n\bz(i))$ and the cycle map $\cl$ into $\HH^{2i}(X,\bz/n\bz)$. By passing to the inverse limit, one gets that $\cl_{et}:\CH^i(X_\bbq)\rar \HH^{2i}(X_\bbq,\bz_\ell(i))$ and $\cl:\CH^i(X_\bbq)\rar \HH^{2i}(X,\bz)$ are compatible with the canonical isomorphism
\[
\HH^{2i}(X_\bbq,\bz_\ell(i))\cong \HH^{2i}(X,\bz_\ell)= \HH^{2i}(X,\bz)\otimes_\bz \bz_\ell.
\]
\end{remark}
\subsection{The cycle map on $\mkf$ and $M$}
Let $K_f$ be a neat compact open subgroup of $G(\ba_f)$. Let $\tmkf$ be a smooth projective toroidal compactification of $\mkf$ which is defined over $\bq$ and whose boundary is the union of divisors $B_i$ ($1\leq i\leq m$) with normal crossings. Consider the cycle maps on $\mkf$ and $\tmkf$ respectively,
\begin{align*}
&\cl_{\mkf}:\CH^i(\mkf\otimes_\bq \bbq)\rar \HH^{2i}(\mkf,\bz),\\
&\cl_{\tmkf}:\CH^i(\tmkf\otimes_\bq \bbq)\rar \HH^{2i}(\tmkf,\bz).
\end{align*}
For a codimension-$i$ irreducible closed subvariety $Z$ of $\mkf\otimes_\bq \bbq$, its Zariski closure $\overline{Z}$ in $\tmkf\otimes_\bq \bbq$ is also irreducible. Extend the map $Z\rar \overline{Z}$ to a homomorphism $\mz^i(\mkf\otimes_\bq \bbq)\rar \mz^i(\tmkf\otimes_\bq \bbq)$ and let $j:\HH^\ast(\tmkf,\bz)\rar \HH^\ast(\mkf,\bz)$ denote the restriction map; then
\begin{equation}\label{comp}
\cl_\mkf([Z])=j\circ \cl_{\tmkf}([\overline{Z}]).
\end{equation}
There is a short exact sequence (see \cite{Weiss88}),
\begin{equation}\label{ses}
0\lrar \oplus_{i=1}^m \bc[B_i] \overset{\cl_{\tmkf}}{\lrar} \HH^2(\tmkf,\bc)\overset{j}{\lrar} \HH^2(\mkf,\bc)\lrar 0.
\end{equation}
Also, it is well-known that $\HH^1(\mkf,\bc)=0$.
\begin{lemma}\label{cyclelemma}
\begin{itemize}
\item[(i)] $\CH^1_0(\mkf\otimes_\bq \bbq)\otimes_\bz \bq=0$.
\item[(ii)] Suppose $Z_1, Z_2\in \mz^1(\mkf\otimes_\bq \bbq)$. If $[Z_1]=[Z_2]$ in $\CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq$, then $[\overline{Z}_1]\in [\overline{Z}_2]+(\oplus_{i=1}^m \bq[B_i])$ in $\CH^1(\tmkf\otimes_\bq \bbq)\otimes_\bz \bq$.
\end{itemize}
\end{lemma}
\begin{proof}
Because $\HH^1(\mkf,\bc)=0$, we have $\HH^1(\tmkf,\bc)=0$. So the Picard variety of $\tmkf$ is trivial and therefore $\CH^1_0(\tmkf\otimes_\bq \bbq)=0$.
We first verify (i). Suppose $[Z]\in \CH^1_0(\mkf\otimes_\bq \bbq)$, then $j\circ \cl_{\tmkf}([\overline{Z}])=0$ by (\ref{comp}). By (\ref{ses}), $\cl_{\tmkf}([\overline{Z}])$, considered as an element in $\HH^2(\tmkf,\bq)$, belongs to $\oplus_i \bq\cdot \cl_{\tmkf}([B_i])$. Hence there exist a nonzero integer $n$ and integers $n_i$ such that $n\cdot \cl_{\tmkf}([\overline{Z}])-\sum_i n_i \cdot \cl_{\tmkf}([B_i])$ is equal to zero in $\HH^2(\tmkf,\bz)$. Because $\CH^1_0(\tmkf\otimes_\bq \bbq)$ vanishes, the cycle $n \overline{Z}-\sum_i n_i B_i$ is rationally equivalent to zero and of the form $\mathrm{Div}(F)$ for some rational function $F$ on $\tmkf\otimes_\bq \bbq$. Therefore $\mathrm{Div}(F|_{\mkf\otimes_\bq \bbq})=nZ$ and $[Z]$ is equal to $0$ in $\CH^1_0(\mkf\otimes_\bq \bbq)\otimes \bq$.
Now consider (ii). By the hypothesis, $[Z_1-Z_2]$ is a torsion element in $\CH^1(\mkf\otimes_\bq \bbq)$, whence there exists $n\in \bn$ such that $nZ_1-nZ_2$ is rationally equivalent to zero. Write $nZ_1-nZ_2=\mathrm{Div}(f)$ for some rational function $f$ on $\mkf\otimes_\bq \bbq$. The divisor of $f$ on $\tmkf\otimes_\bq \bbq$ is of the form $n\overline{Z}_1-n\overline{Z}_2+\sum_i n_i B_i$ with $n_i\in \bz$. So (ii) holds.
\end{proof}
We now pass to the direct limits $\CH^\ast(M\otimes_\bq \bbq)=\dlim \CH^\ast(\mkf\otimes_\bq \bbq)$, $\HH^\ast(M,\bc)=\dlim \HH^\ast(\mkf,\bc)$ and consider the corresponding cycle map
\[
\cl_M:\CH^1(M\otimes_\bq \bbq)\otimes_\bz \bc\rar \HH^2(M,\bc).
\]
\begin{proposition}
$\cl_M$ is $G(\ba_f)$-equivariant and injective. $\HH^2(M,\bc)$ is a completely reducible admissible $G(\ba_f)$-module.
\end{proposition}
\begin{proof}
The $G(\ba_f)$-equivariance is obvious. Because of Lemma~\ref{cyclelemma} (i), the map $\cl_{\mkf}:\CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq\rar \HH^2(\mkf,\bq)$ is injective for neat $K_f$, whence $\cl_M$ is injective. By Section 13 and specifically Lemma 12 in \cite{Weiss92}, $\HH^2(M,\bc)$ is isomorphic to the $(\fg,K_\infty)$-cohomology of the discrete spectrum $L^2_{\disc}(G(\bq)Z(\br)^+\backslash G(\ba))$, whence $\HH^2(M,\bc)$ is a completely reducible admissible $G(\ba_f)$-module.
\end{proof}
We now prove Theorem ~\ref{main} by assuming Theorem ~\ref{picardgroup}.
\begin{proof}[Proof of Theorem ~\ref{main}]
(1) Theorem ~\ref{picardgroup} implies
\[
\CH^1(M\otimes_\bq \bbq)\otimes_\bz \bc=\SC^1(M)\otimes_\bz \bc.
\]
Hence $\CH^1(M\otimes_\bq \bbq)\otimes_\bz \bq=\SC^1(M)\otimes_\bz \bq$. Taking the $K_f$-invariant subspaces on both sides, we get $\CH^1(\mkf\otimes_\bq \bbq)\otimes_\bz \bq=\SC^1(\mkf)\otimes_\bz \bq$.
(1)$\Longrightarrow$(2) By Theorem 9.4 in \cite{Weiss88}, the space of Tate classes $\Ta^1(\tmkf)$ is spanned by the image of $\CH^1(\tmkf\otimes_\bq \bbq)$. By (1) and Lemma~\ref{cyclelemma} (ii), $\CH^1(\tmkf\otimes_\bq \bbq)\otimes_\bz \bq$ is spanned by $[B_i]$ ($1\leq i\leq m$) and $[\overline{Z}]$ with $[Z]\in \SC^1(\mkf)$. The assertion in (2) then follows.
(2)$\Longrightarrow$(3) Choose a $\bq$-embedding $\tmkf\rar \bp^N$ for some $N\in \bn$. Let $L_0$ be a $\bq$-hyperplane in $\bp^N$ that has a non-trivial intersection with $\tmkf$. Put $L=L_0\cap \tmkf$; then the class $\cl_{et}([L])\in \HH^2(\tmkf\otimes_\bq \bbq,\bq_\ell(1))$ is invariant by $\Gal(\bbq/\bq)$. By the Hard Lefschetz Theorem, the map
\begin{align*}
\ml: \HH^2(\tmkf\otimes_\bq \bbq,\bq_\ell(1)) &\lrar \HH^4(\tmkf\otimes_\bq \bbq,\bq_\ell(2))\\
t&\longmapsto t\cup \cl_{et}([L])
\end{align*}
is an isomorphism. It respects the action of $\Gal(\bbq/\bq)$ and hence, for every number field $E$, induces an isomorphism
\[
\HH^4(\tmkf\otimes_\bq \bbq,\bq_\ell(2))^{\Gal(\bbq/E)}\cong \ml \left(\HH^2(\tmkf\otimes_\bq \bbq,\bq_\ell(1))^{\Gal(\bbq/E)}\right).
\]
Thus, $\Ta^2(\tmkf)=\ml(\Ta^1(\tmkf))$.
By (2), $\Ta^1(\tmkf)$ is spanned by the images of $[B_i]$ and $[\overline{Z}]$ for $1\leq i\leq m$ and $[Z]\in\SC^1(\mkf)$. In particular, $\cl_{et}([L])$ is a linear combination of the images of these $[B_i]$ and special divisors. Therefore, $\Ta^2(\tmkf)=\ml(\Ta^1(\tmkf))$ is spanned by the elements listed in (3). (The map $\ml$ may depend on the embedding into $\bp^N$.)
(3)$\Longrightarrow$(4) Suppose $Z\in \mz^2(\mkf\otimes_\bq \bbq)$. By (3), there is a cycle class $[Z^\prime]$ in $\CH^2(\tmkf\otimes_\bq \bbq)\otimes \bq$ that satisfies $\cl_{\tmkf,et}([\overline{Z}])=\cl_{\tmkf,et}([{Z^\prime}])$ and is of the form $\sum_i q_i [\overline{Z}_i]+\sum_{i^\prime,j}q_{i^\prime,j}[\overline{Z}_{i^\prime}\cdot B_j]+\sum_{j_1,j_2}q_{j_1,j_2}[B_{j_1}\cdot B_{j_2}]$ with $[Z_i]\in \SC^2(\mkf\otimes_\bq \bbq)$, $[Z_{i^\prime}]\in \SC^1(\mkf\otimes_\bq \bbq)$ and the coefficients in $\bq$. Because the cycle maps are compatible with the comparison maps between etale cohomology and singular cohomology, we have $\cl_{\tmkf}([\overline{Z}])=\cl_{\tmkf}([{Z^\prime}])$ in $\HH^4(\tmkf,\bq)$. Thus
\[
\cl_{\mkf}([Z])=j\circ \cl_{\tmkf}([\overline{Z}])=j\circ \cl_{\tmkf}([{Z^\prime}])=\sum_i q_i\cl_{\mkf}([Z_i]).
\]
Here $j\circ \cl_{\tmkf}([\overline{Z}_{i^\prime}\cdot B_j])$ and $j\circ \cl_{\tmkf}([B_{j_1}\cdot B_{j_2}])$ vanish because these cycles are supported on the boundary, away from $\mkf(\bc)$. So $[Z]=\sum_i q_i [Z_i]$ in $\overline{\CH}^2(\mkf\otimes_\bq \bbq)$ and the assertion in (4) follows.
\end{proof}
\section{The decomposition of $\mathrm{H}^{1,1}(M,\bc)$}\label{repofh2}
For an irreducible unitary automorphic representation $\pi$ of $G(\ba)$, let $m(\pi)$ denote its multiplicity in $L^2_{\mathrm{disc}}(G(\bq)Z(\br)^+\backslash G(\ba))$. For an irreducible admissible unitary representation $\pi_f$ of $G(\ba_f)$, let $\HH^{1,1}(\pi_f)$ be the $\pi_f$-isotypic component of $\HH^{1,1}(M,\bc)$.
There is a decomposition
\begin{equation}\label{h11_d}
\mathrm{H}^{1,1}(M,\bc)=\oplus_{\pi}m(\pi)\mathrm{H}^{1,1}(\fg,K_\infty,\pi)=\oplus_{\pi_f}\mathrm{H}^{1,1}(\pi_f).
\end{equation}
Here $\mathrm{H}^{1,1}(\fg,K_\infty,\pi)=\mathrm{H}^{1,1}(\fg,K_\infty,\pi_\infty)\otimes \pi_f$. Recall that $\mathrm{H}^{1,1}(\fg,K_\infty,\pi_\infty)$ is $\bc$ when $\pi_\infty\in\{\pi^{2+},\pi^{2-}, 1,\sgn\circ\nu\}$ and zero otherwise. Similarly, one writes $\HH^2(M,\bc)=\oplus_{\pi_f}\HH^2(\pi_f)$ as the direct sum of isotypic components.
Weissauer determined that all $\pi$ occurring in (\ref{h11_d}) are twists of three basic types of representations.
\begin{theorem}\cite[Thm. 4, Lemma 6]{Weiss92}\label{wei_coho}
If $\mathrm{H}^{1,1}(\fg,K_\infty,\pi)\neq 0$, then $\pi=\pi^\prime\otimes \chi$, where $\chi$ is a character with $\chi_\infty\in \{1,\sgn\circ \nu\}$ and $\pi^\prime$ is one of the following types:
\begin{itemize}
\item[(I)] a CAP representation of $\mathrm{PGSp}_4(\ba)$ of Siegel type $(\tau\boxtimes 1,\frac{1}{2})$ with $\tau\subset \ma_{cusp}(\PGL_2)$, $\pi_\infty=\pi^{2+}$ and $\tau_\infty=\fD_4$.
\item[(II)] $J(P,\tau,\frac{1}{2})$, with $\tau\subset \ma_{cusp}(\PGL_2)$, $\tau_\infty=\fD_4$, and $L(\frac{1}{2},\tau)\neq 0$. $J(P,\tau,\frac{1}{2})$ denotes the unique irreducible quotient of $\Pi(\tau\boxtimes 1,\frac{1}{2})$ and is realized as the residue representation of the Eisenstein series associated to $\Pi(\tau\boxtimes 1,z)$ at $z=\frac{1}{2}$.
\item[(III)] the trivial representation $1$.
\end{itemize}
\end{theorem}
Because representations of types I, II and III all have multiplicity one in the discrete spectrum, we have $m(\pi)=1$ whenever $\mathrm{H}^{1,1}(\fg,K_\infty,\pi)\neq 0$.
\begin{lemma}\label{multiplicity}
If $\HH^{1,1}(\pi_f)$ is nonzero, then there exists a unique $\pi_\infty$ such that $\pi=\pi_\infty\times \pi_f$ occurs in $L^2_{\mathrm{disc}}(G(\bq)Z(\br)^+\backslash G(\ba))$. As a consequence, $\HH^2(\pi_f)=\HH^{1,1}(\pi_f)$ is an irreducible admissible $G(\ba_f)$-module.
\end{lemma}
\begin{proof}
Since $\HH^{1,1}(\pi_f)$ is nonzero, there is $ \pi_\infty\in\{\pi^{2+},\pi^{2-}, 1,\sgn\circ\nu\}$ such that $\pi=\pi_\infty\times \pi_f$ occurs in the discrete spectrum. We show that if $\pi^\prime=\pi_\infty^\prime\times \pi_f$ occurs in the discrete spectrum, then $\pi^\prime=\pi$. By Theorem ~\ref{wei_coho}, one may assume that $\pi$ is one of the basic types.
(i) $\pi$ is of type I or II with respect to $\tau\subset \ma_{cusp}(\PGL_2)$. In this situation, there is an irreducible cuspidal $\wtilde{\SL}_2(\ba)$-representation $\sigma\in \Wd_\psi(\tau)$ such that $\pi=\Theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma,\psi)$. When $\pi$ is of type I, $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi)=0$ (see Thm.~\ref{ps_cap}); when $\pi$ is of type II, $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi)=\tau$ (see Lemma 10 in \cite{Weiss92}).
Note that for almost all finite $p$, $\pi^\prime_p=\pi_p$ is equal to $J(P,\tau_p,\frac{1}{2})$, the $p$-component of $J(P,\tau,\frac{1}{2})$. If $\pi^\prime$ is cuspidal, then it is CAP of Siegel type $(\tau\boxtimes 1,\frac{1}{2})$. If $\pi^\prime$ is in the discrete residue spectrum, then there must be $L(\frac{1}{2},\tau)\neq 0$ and $\pi^\prime=J(P,\tau,\frac{1}{2})$; this is because the residue spectrum of $\GSp_4(\ba)$ is known explicitly and the fact that $\pi_p^\prime=J(P,\tau_p,\frac{1}{2})$ for almost all $p$ singles out the only possible choice $J(P,\tau,\frac{1}{2})$. In either case, we have $\pi^\prime=\Theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma^\prime,\psi)$ for some $\sigma^\prime\in \Wd_\psi(\tau)$.
Because $\pi_f^\prime=\pi_f$, we have $\sigma_f^\prime=\sigma_f$. However, for two representations $\sigma$ and $\sigma^\prime$ in the same Waldspurger packet, the local components can differ only at an even number of places (see \cite[(1.8), (1.9)]{wgan2008} or \cite[Coro. 1, 2]{Wald91}). This forces $\sigma_\infty^\prime=\sigma_\infty$, whence $\sigma^\prime=\sigma$ and $\pi^\prime=\pi$.
(ii) $\pi=1$ is of type III. So $\pi_f^\prime=1$ and this forces $\pi^\prime=1$. Indeed, $G(\bq)Z(\br)^+\backslash G(\ba)/G(\ba_f)=G(\bq)Z(\br)^+\backslash G(\br)$ and any continuous function on this space must be constant.
Since $m(\pi)=1$, $\HH^2(\pi_f)=\HH^2(\fg,K_\infty,\pi_\infty)\otimes \pi_f=\HH^{1,1}(\fg,K_\infty,\pi_\infty)\otimes \pi_f=\HH^{1,1}(\pi_f)$ is irreducible.
\end{proof}
\begin{corollary}\label{criterion0}
The map $\mathrm{cl}_M:\SC^1(M)\otimes_\bz \bc\lrar \mathrm{H}^{1,1}(M,\bc)$ is an isomorphism if and only if $\SC^1(\pi_f)$ is nonzero whenever $\HH^{1,1}(\pi_f)$ is nonzero.
\end{corollary}
\begin{proof}
Because $\mathrm{cl}_M:\SC^1(M,\bc) \lrar \mathrm{H}^{1,1}(M,\bc)$ is injective and $G(\ba_f)$-equivariant, it is an isomorphism if and only if $\cl(\pi_f):\SC^1(\pi_f)\rar \HH^{1,1}(\pi_f)$ is an isomorphism for each $\pi_f$ occurring in $\mathrm{H}^{1,1}(M,\bc)$. Since $\HH^{1,1}(\pi_f)$ is irreducible by Lemma ~\ref{multiplicity}, the injective homomorphism $\cl(\pi_f)$ is an isomorphism if and only if $\SC^1(\pi_f)\neq 0$.
\end{proof}
\section{The period pairing}\label{pairng}
A cycle $[Z]\in \SC^1(M,\bc)$ can be written as
\[
[Z]=\sum_{\pi_f} [Z]_{\pi_f},
\]
where $[Z]_{\pi_f}\in \SC^1(\pi_f)$ is called the $\pi_f$-component of $[Z]$. (Given $[Z]$, the component $[Z]_{\pi_f}$ is nonzero for only finitely many $\pi_f$, because $[Z]$ is invariant under some compact open subgroup $K_f\subset G(\ba_f)$ and only finitely many $\pi_f$ occurring in $\HH^{1,1}(M,\bc)$ have a nontrivial $K_f$-invariant subspace.)
The natural way to test the image of a cycle in cohomology is to use the period pairing between cycles and cohomology with compact support. Since it is difficult to construct closed differential forms with compact support, we work instead with rapidly decreasing closed differential forms. There are two basic facts:
\begin{itemize}
\item[(i)] The pairing between $\HH^2(M,\bc)$ and $\HH^4_c(M,\bc)$ given by $<[\omega_1],[\omega_2]>:=\int_M \omega_1\wedge \omega_2$ is perfect and $G(\ba_f)$-equivariant. The restricted pairing on the isotypic components
\[
\HH^2(\pi_f)\times \HH^4_c(\pi_f^\prime)\rar \bc
\]
is zero when $\pi_f^\prime\not\cong \pi_f^\vee$ and perfect when $\pi_f^\prime\cong \pi_f^\vee$. Here $\pi_f^\vee$ denotes the dual representation.
\item[(ii)] $\mathrm{H}^*_c(M,\bc)$ is defined using the cochain $\Omega^*_c(M,\bc)$ consisting of compactly supported differential forms on $M$. Let $\mathrm{H}^*_{rd}(M,\bc)$ be the cohomology groups defined using the cochain $\Omega^*_{rd}(M,\bc)$ consisting of rapidly decreasing differential forms on $M$. Borel \cite{borel1980} proved that the inclusion map $\Omega^*_c(M,\bc)\hrar \Omega^*_{rd}(M,\bc)$ induces an isomorphism $\Lambda:\mathrm{H}^*_c(M,\bc) \isoto \mathrm{H}^*_{rd}(M,\bc)$.
\end{itemize}
We make a simple but very useful observation below.
\begin{lemma}\label{criterion1}
Suppose $\HH^{1,1}(\pi_f)\neq 0$. If there exist $[Z]\in \SC^1(M)$ and a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi^\prime_f),\Omega>=0$ for $\pi_f^\prime\neq \pi_f$ and \emph{(ii)} $\int_Z \Omega\neq 0$, then $[Z]_{\pi_f}$ is nonzero and as a consequence $\SC^1(\pi_f)$ is nonzero.
\end{lemma}
\begin{proof}
Let $[\Omega]$ be the cohomology class of $\Omega$ in $\HH^4_{rd}(M,\bc)$ and $\omega$ be a compactly supported closed form on $M$ that represents the class $\Lambda^{-1}([\Omega])$ in $\HH^4_c(M,\bc)$; then $\Omega-\omega$ is the boundary of a rapidly decreasing form. Hence
\[
\int_Z \Omega=\int_Z (\Omega-\omega)+ \int_Z \omega=0+<\cl_M([Z]),[\omega]>.
\]
Write $[Z]=\sum_{\pi_f^\prime} [Z]_{\pi_f^\prime}$ and $[\omega]=\sum_{\pi_f^\pprime} [\omega]_{\pi_f^\pprime}$, where $[Z]_{\pi_f^\prime}$ is the $\pi_f^\prime$-component of $[Z]$ and $[\omega]_{\pi_f^\pprime}$ is the $\pi_f^\pprime$-component of $[\omega]$. These are finite sums because $[Z]$ and $[\Omega]$ (and hence $[\omega]$) are $\bk_f$-finite. Note that $\cl_M([Z]_{\pi_f^\prime})\in \HH^{1,1}(\pi_f^\prime)$.
By Condition (i), one has $<\HH^{1,1}(\pi_f^\prime),[\omega]>=0$ for $\pi_f^\prime\neq \pi_f$, whence
\[
<\cl_M([Z]),[\omega]>=\sum_{\pi_f^\prime}<\cl_M([Z]_{\pi_f^\prime}),[\omega]>=<\cl_M([Z]_{\pi_f}),[\omega]>.
\]
Since the pairing on $\HH^2(\pi_f)\times \HH^4_c(\pi_f^\pprime)$ is zero for $\pi_f^\pprime\neq \pi_f^\vee$, we have
\[
<\cl_M([Z]_{\pi_f}),[\omega]>=\sum_{\pi_f^\pprime}<\cl_M([Z]_{\pi_f}),[\omega]_{\pi_f^\pprime}>=<\cl_M([Z]_{\pi_f}),[\omega]_{\pi_f^\vee}>.
\]
So $\int_Z \Omega=<\cl_M([Z]_{\pi_f}),[\omega]_{\pi_f^\vee}>$. With Condition (ii), one sees that both $\cl_M([Z]_{\pi_f})$ and $[\omega]_{\pi_f^\vee}$ are nonzero. In particular, $[Z]_{\pi_f}$ is a nonzero element in $\SC^1(\pi_f)$.
\end{proof}
We are thus reduced to the following proposition.
\begin{proposition}\label{formOmega}
If $\HH^{1,1}(\pi_f)$ is nonzero, then there exist $[Z]\in \SC^1(M)$ and a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for $\pi_f^\prime\neq \pi_f$, \emph{(ii)} $\int_Z \Omega \neq 0$.
\end{proposition}
\begin{lemma}\label{reduction}
Suppose that $\HH^{1,1}(\pi_f)$ is nonzero. Let $\pi_\infty$ be the unique member of $\{\pi^{2+},\pi^{2-}, 1, \sgn\circ \nu\}$ such that $\pi_\infty\times \pi_f$ occurs in the discrete spectrum. Let $\chi$ be a character of $\bq^\times\backslash \ba^\times$ with $\chi_\infty\in \{1,\sgn\}$. If Proposition~\ref{formOmega} holds for $\pi_f$, then it also holds for $\pi_f\otimes \chi_f$.
\end{lemma}
\begin{proof}
Suppose that Proposition ~\ref{formOmega} holds for $\pi_f$, then there exist a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ and $[Z]\in \SC^1(M)$ such that $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for $\pi_f^\prime\neq \pi_f$ and $\int_Z \Omega \neq 0$.
Choose a neat compact open subgroup $K_f$ sufficiently small so that (i) $\chi_f\circ \nu$ is invariant by $K_f$, (ii) $[Z]=\sum_i c_i[Z_i]$, where the $Z_i$ are irreducible special divisors on $\mkf\otimes_\bq \bar{\bq}$. For each $i$, select an element $g_i$ in $G(\ba)$ that represents a point on $Z_i\subset \mkf$. Define the $\chi$-twist of $Z$ by $Z_{\chi}:=\sum_i c_i \chi^{-1}(\nu(g_i)) Z_i$. Then $\Omega_\chi:=\chi(\nu(g))\Omega$ and $Z_{\chi}$ satisfy the requirements of Proposition~\ref{formOmega} for $\pi_f\otimes \chi_f$.
\end{proof}
By Corollary~\ref{criterion0} and Lemma~\ref{criterion1}, Theorem~\ref{picardgroup} is a consequence of Proposition~\ref{formOmega}. Furthermore, by Theorem~\ref{wei_coho} and Lemma~\ref{reduction}, it suffices to verify Proposition~\ref{formOmega} when $\pi=\pi_\infty\times \pi_f$ is of one of the basic types I, II and III in Theorem~\ref{wei_coho}. In the next three sections we do exactly this, which completes the proof of Theorem~\ref{picardgroup}. Note that $\pi$ is self-dual in these cases.
\section{Nonvanishing Periods I}\label{np1}
We verify Proposition ~\ref{formOmega} when $\pi$ is of type I. The candidates for $[Z]$ are non-split cycles in $\SC^1_{ns}(M)$ and the candidates for $\Omega$ are forms in $\mathrm{H}^{2,2}(\fg,K_\infty,\pi)$, which are cuspidal and hence rapidly decreasing.
\begin{proposition}\label{cuspidalcase}
Let $\pi$ be a CAP representation of $\mathrm{PGSp}_4(\ba)$ of Siegel type $(\tau,\frac{1}{2})$ with $\pi_\infty=\pi^{2+}$. There exist $[Z]\in \SC^1_{ns}(M)$ and $\Omega\in \HH^{2,2}(\fg,K_\infty,\pi)$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for all $\pi_f^\prime\neq \pi_f$, \emph{(ii)} $\int_Z \Omega \neq 0$.
\end{proposition}
Proposition ~\ref{cuspidalcase} has an immediate corollary below. We prove Proposition ~\ref{cuspidalcase} at the end of this section, after preparing relevant lemmas.
\begin{corollary}
Let $\mathrm{H}^{1,1}_{cusp}(M,\bc)$ be the subspace of $\mathrm{H}^{1,1}(M,\bc)$ spanned by cuspidal closed differential forms, then $\mathrm{H}^{1,1}_{cusp}(M,\bc)$ is spanned by the images of certain cycle classes in $\SC^1_{ns}(M)$.
\end{corollary}
\subsection{Nonvanishing of automorphic periods}
Recall the identification $\GSp_4\cong \GSpin(V)$ in Section~\ref{group}. Let $\pi$ be an irreducible cuspidal unitary representation in $L^2_{\disc}(G(\bq)Z(\br)^+\backslash G(\ba))$. For a nonisotropic vector $v\in V$, define a period functional $\mpp_{v}$ on the space of smooth vectors in $\pi$,
\[
\mpp_{v}(\varphi)=\int_{[\SO({v}^\perp)]} \varphi(h)dh.
\]
\begin{lemma}\label{nonvanishing}
Let $\pi$ be a CAP representation of $\mathrm{PGSp}_4(\ba)\cong\mathrm{SO}(V_\ba)$ of Siegel type $(\tau,\frac{1}{2})$ with $\pi_\infty=\pi^{2+}$. There exists a vector $v\in V$ with $q(v)\in \bq_+\backslash {\bq^\times}^2$ such that $\mpp_{v}$ is nonzero.
\end{lemma}
\begin{proof}
Choose a non-trivial character $\psi$ of $\ba/\bq$. By Theorem ~\ref{ps_cap}, $\pi=\Theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma, \psi)$ with $\sigma\in \Wd_\psi(\tau)$ satisfying $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi)= 0$. There exists $a\in \bq$ such that the $\psi_a$-Whittaker functional $\ell_{\psi_a}$ is nonzero on $\sigma$. This implies $\Theta_{\wtilde{\SL}_2\times \PGL_2}(\sigma,\psi_a)=\tau\otimes \chi_a$, whence $a\not\in {\bq^\times}^2$. Because $\pi_\infty=\pi^{2+}$, we must have $a\in \bq_+$. Otherwise, $\sigma_\infty=\theta_{\wtilde{\SL}_2\times \PGL_2}(\fD_4\otimes \chi_{a_\infty},\psi_{a_\infty})=\theta_{\wtilde{\SL}_2\times \PGL_2}(\fD_4,\psi^{-1}_\infty)$ and, by Lemma ~\ref{localtheta}, $\pi^{2+}=\theta_{\wtilde{\SL}_2\times \PGSp_4}(\sigma_\infty, \psi_\infty)$ is a discrete series, which is a contradiction.
We have $\sigma=\Theta_{\wtilde{\SL}_2\times \PGSp_4}(\pi,\psi)$. For a form $\varphi=\Theta(\phi,f)$ with $f\in \pi$ and $\phi\in \ms(V(\ba))$, there is
\begin{align*}
\ell_{\psi_a}(\varphi)=&\int_{\bq\backslash \ba}\big[\int_{[\SO(V)]}\overline{f(h)}\sum_{\xi\in V}\omega(\smalltwomatrix{1}{n}{}{1},h)\phi(\xi)dh\big] \psi(-an) dn\\
=&\int_{[\SO(V)]} \overline{f(h)} \sum_{\xi\in V} \omega(h)\phi(\xi) \int_{\bq\backslash \ba}\psi\big((q(\xi)-a)n\big)dn\, dh.
\end{align*}
The integral $\int_{\bq\backslash \ba}\psi\big((q(\xi)-a)n\big)dn$ is zero when $q(\xi)\neq a$ and $1$ when $q(\xi)=a$. Let $v$ be a vector in $V$ with $q(v)=a$; then the vectors $\xi$ with $q(\xi)=a$ can be expressed as $\gamma^{-1}\cdot v$ with $\gamma\in \SO(v^\perp)_\bq\backslash \SO(V)_\bq$. So,
\begin{align*}
\ell_{\psi_a}(\varphi)=&\int_{[\SO(V)]}\sum_{\gamma\in \SO(v^\perp)_\bq\backslash \SO(V)_\bq} \overline{f( h)}\phi(h^{-1}\gamma^{-1} v)dh\\
=&\int_{\SO(v^\perp)_\ba\backslash \SO(V)_\ba} \overline{f(h)}\phi(h^{-1}\circ v)dh\\
=&\int_{\SO(v^\perp)_\ba\backslash \SO(V)(\ba)}\big(\int_{[\SO(v^\perp)]}f(hh^\prime)dh\big)\phi({h^\prime}^{-1}v)dh^\prime.
\end{align*}
Because $\ell_{\psi_a}$ is nonzero on $\sigma$, the function $h^\prime\mapsto \int_{[\SO(v^\perp)]}\overline{f(hh^\prime)}dh=\overline{\mpp_{v}(\pi(h^\prime)f)}$ is not identically zero for suitable $f$. Therefore, $\mpp_{v}$ is nonzero.
\end{proof}
\subsection{Periods of cohomological forms}\label{pocf}
We describe the periods of cohomological forms in $\mathrm{H}^4(\fg,K_\infty,\pi)$ on $Z_{\bq v,g_f,K_f}$. Note the following:
\begin{itemize}
\item[(i)] The homomorphism $\GSp_4(\br) \rar \SO(V)(\br)$ induces an isomorphism $\Lie(\mathrm{Sp}_4(\br))\isoto \Lie(\SO(V)_\br)$. We take the Cartan decomposition $\fg_0=\fp\oplus \fk$ in Section~\ref{algebra} as the Cartan decomposition of $\Lie(\SO(V)_\br)\otimes \bc$.
\item[(ii)] Recall the compact torus $T$ whose Lie algebra is $\ft_\br$. There exists $g_\infty\in G(\br)$ such that $g_\infty T g_\infty^{-1}$ is a maximal connected compact subgroup of $\GSpin({v}^\perp)_\br$. With this choice of $g_\infty$, $\fp^\prime:=\fp\cap \Ad_{g_\infty^{-1}}\big[\Lie(\SO(v^\perp)_\br)\otimes \bc\big]$ is 4-dimensional and $\ft$-invariant. Thus $\Ad_{g_\infty} \fp^\prime\oplus \Ad_{g_\infty} \ft$ is a Cartan decomposition of $\Lie(\SO(v^\perp)_\br)\otimes \bc$.
\end{itemize}
Recall that the $\fk$-submodule $\pi^{2+}(\delta_{2\alpha})$ of $\pi^{2+}$ has a basis $\{v_j, -2\leq j\leq 2\}$ consisting of weight vectors (see Sect. ~\ref{coho_liealg}). The vector $v_0$ of weight $0$ is particularly important, as shown by the lemma below.
\begin{lemma}\label{formulation}\label{period-transfer}
Let $\pi$ be a CAP representation of $\mathrm{PGSp}_4(\ba)$ of Siegel type $(\tau,\frac{1}{2})$ with $\pi_\infty=\pi^{2+}$. Let $v$ be as in Lemma~\ref{nonvanishing} and choose $g_\infty\in G(\br)$ such that $g_\infty T g_\infty^{-1}$ is a maximal connected compact subgroup of $\GSpin({v}^\perp)_\br$. For the special divisor $Z_{\bq v,g_f,K_f}$, we have
\[
\{\int_{Z_{\bq v,g_f,K_f}}\omega: \omega\in \mathrm{H}^4(\fg,K_\infty,\pi)\}=\{\mpp_v(\pi(g_\infty)\varphi): \varphi \in v_0\otimes \pi_f\}.
\]
\end{lemma}
\begin{proof}
Choose a basis $\{X_i\}_{i=1}^4$ of $\fp^\prime$ consisting of weight vectors with respect to $\ft$ and add two other weight vectors $X_5,X_6$ to form a basis of $\fp$. Let $\{\omega_i\}_{i=1}^6$ be the dual basis in $\fp^*$. For a subset $I=\{i_1<\cdots<i_q\}$ of $\{1,\cdots,6\}$, set $\omega_I=\omega_{i_1}\wedge\cdots\wedge \omega_{i_q}$. By the discussion in Section ~\ref{coho_liealg} about the $\fk$-module structure of $\pi^{2+}$ and $\wedge^4 \fp\cong \wedge^4 \fb_0$, there is
\[
\mathrm{H}^4(\fg,K_\infty,\pi^{2+})= \Hom_{K_\infty}(\wedge^4\fp,\pi^{2+})=\bc\cdot \big(\sum_{|I|=4} v_I\cdot \omega_I\big),
\]
where $v_I\in \pi^{2+}(\delta_{2\alpha})$ and its weight is the negative of the weight of $\omega_I$.
Thus, a form in $\HH^4(\fg,K_\infty,\pi)$ is of the form $\omega=\sum_{|I|=4} \varphi_I\omega_I$, with $\varphi_I=v_I\otimes v_f\in \pi$ for some $v_f\in \pi_f$. Let $R$ denote the right translation action on $G(\ba)$. Then
\begin{align*}
\int_{Z_{\bq v,g_f,K_f}}\omega
=&\int_{\GSpin({v}^\perp)_\bq \backslash
\GSpin({v}^\perp)_\ba/L_\infty}
R_{g_\infty}^*\omega \\
=&c\int_{\mathrm{SO}(v^\perp)_\bq \backslash
\mathrm{SO}(v^\perp)_\ba/\bar{L}_\infty}
\sum_{|I|=4}
\varphi_I(gg_\infty)\mathrm{Ad}^*_{g_\infty^{-1}}(\omega_I).
\end{align*}
Here $L_\infty=g_\infty K_\infty g_\infty^{-1}$, $\overline{L}_\infty=L_\infty/Z(\br)^+$, and $c$ is as in Section ~\ref{sect_notation}.
One needs to determine the restriction of $\mathrm{Ad}^*_{g_\infty^{-1}}(\omega_I)$ to $\SO(v^\perp)_\br/\overline{L}_\infty$. Because $\Ad_{g_\infty}\fp^\prime\oplus \Ad_{g_\infty}\ft$ is a Cartan decomposition of $\Lie(\SO(v^\perp)_\br)\otimes \bc$ and $\Ad_{g_\infty}\ft=\Lie(\overline{L}_\infty)\otimes \bc$, left invariant vector fields on $\SO(v^\perp)_\br/\overline{L}_\infty$ are identified with elements in $\Ad_{g_\infty}\fp^\prime$. Hence the restriction of $\Ad^\ast_{g_\infty^{-1}}\omega_i$ to $\SO(v^\perp)_\br/\overline{L}_\infty$ is nonzero if and only if $i=1,2,3,4$. Therefore, among all $I$ with $|I|=4$, only $\mathrm{Ad}^*_{g_\infty^{-1}}(\omega_{1234})$ has nonzero restriction. Its restriction, when combined with the volume form of $\overline{L}_\infty$, gives the volume form of $\SO(v^\perp)_\br$. So
\[
\int_{Z_{\bq v,g_f,K_f}}\omega
=c \int_{\SO(v^\perp)_\bq \backslash \SO(v^\perp)_\ba}\varphi_{1234}(gg_\infty)dg
=c \mpp_v(\pi(g_\infty)\varphi_{1234}).
\]
Note that $\varphi_{1234}$ belongs to $v_0\otimes \pi_f$ because it is of weight $0$.
On the other hand, given $\varphi=v_0\otimes v_f \in v_0\otimes \pi_f$, set $\omega=\sum_{|I|=4} \varphi_I \omega_I$ with $\varphi_I=v_I\otimes v_f$; then $\mpp_v(\pi(g_\infty)\varphi)=c^{-1}\int_{Z_{\bq v,g_f,K_f}}\omega$. So we obtain the equality of sets in the lemma.
\end{proof}
\subsection{Period relation}
We show that $\mpp_v$ is nonzero on $\pi$ if and only if it is nonzero on the subspace $v_0\otimes \pi_f$. We prove this assertion by arguing that the value of $\mpp_v$ on a general vector is a scalar multiple of its value on a vector in $v_0\otimes \pi_f$. Such a relation exists partly because the $K_\infty$-type to which $v_0$ belongs is minimal in $\pi^{2+}$.
\begin{lemma}\label{HLie}
Let $\pi$ be an irreducible cuspidal automorphic representation of $\SO(V)(\ba)$ and $v$ be a nonisotropic vector of $V$. For a smooth vector $\varphi\in \pi$ and $X\in \Lie(\SO(v^\perp)_\br)$, we have $\mpp_v(X\circ \varphi)=0$.
\end{lemma}
\begin{proof}
Write $F(h,t)=\frac{\varphi(he^{tX})-\varphi(h)}{t}$, then $X\circ \varphi(h)=\underset{t\rar 0}{\lim}F(h,t)$ and
\[
\mpp_v(X\circ \varphi)=\int_{[\SO(v^\perp)]} \lim_{t\rar 0} F(h,t)dh.
\]
By the Mean Value Theorem, for each pair $(h, t)$ there exists some number $\xi_{h,t}$ between $0$ and $t$ such that $F(h,t)=X\circ \varphi(he^{\xi_{h,t}X})$. Because $\pi$ is cuspidal, both $\varphi$ and $X\circ \varphi$ are rapidly decreasing. So there exists an integrable function $\wtilde{F}(h)$ on $[\SO(v^\perp)]$ such that $|F(h,t)|\leq \wtilde{F}(h)$ when $|t|$ is small. By the dominated convergence theorem, we have
\[
\mpp_v(X\circ \varphi)=\lim_{t\rar 0} \int_{[\SO(v^\perp)]} F(h,t)dh=\lim_{t\rar 0} \frac{1}{t}\big[\mpp_v\big(\pi(e^{tX})\varphi\big)-\mpp_v(\varphi)\big].
\]
Because $X\in \mathrm{Lie}(\SO(v^\perp)_\br)$, there is $\mpp_v\big(\pi(e^{tX})\varphi\big)=\mpp_v(\varphi)$, whence $\mpp_v(X\circ \varphi)=0$.
\end{proof}
\begin{lemma}\label{specialform}
Let $\pi$ be a CAP representation of $\mathrm{PGSp}_4(\ba)$ of Siegel type $(\tau,\frac{1}{2})$ with $\pi_\infty=\pi^{2+}$. Let $v$ and $g_\infty$ be as in Lemmas~\ref{nonvanishing} and~\ref{period-transfer}. For a $K_\infty$-finite smooth vector $\varphi=v_\infty\otimes v_f\in \pi$, put $\varphi_0=v_0\otimes v_f$; then $\mpp_v(\pi(g_\infty)\varphi)=C\mpp_v(\pi(g_\infty)\varphi_0)$ for a number $C$ depending on $v_\infty$.
\end{lemma}
\begin{proof}
Let $\muu$ denote the universal enveloping algebra of $\fg_0$. Because $v_\infty$ is $K_\infty$-finite, $v_\infty=R\cdot v_0$ for some $R\in \muu$. Put $\ell:=\mpp_v\circ \pi(g_\infty)$ and observe the following facts:
\begin{itemize}
\item[(i)] By the choice of $g_\infty$, we have $\mathrm{Lie}(\SO(v^\perp)_\br)\otimes \bc=\Ad_{g_\infty}\fp^\prime\oplus \Ad_{g_\infty}\ft$ with $\fp^\prime\subset \fp$. Because $\fp^\prime\oplus \ft$ is closed under Lie brackets, we have
\[
\fp^\prime=V_{\alpha+\beta}\oplus V_{-\alpha-\beta}\oplus V_{\alpha-\beta}\oplus V_{-\alpha+\beta}.
\]
Set $\fh:=\ft\oplus V_{\alpha+\beta}\oplus V_{-\alpha-\beta}\oplus V_{\alpha-\beta}\oplus V_{-\alpha+\beta}$. By Lemma ~\ref{HLie}, $\ell\circ X=0$ for $X\in \fh$.
\item[(ii)] The Casimir element $\Omega$ in $\muu$ acts by a scalar, say $\lambda$, on the smooth vectors of $\pi^{2+}$. Choose a nonzero vector $E_\gamma\in V_\gamma$ for each positive root $\gamma$ of $\fg_0$. By choosing a suitable nonzero vector $E_{-\gamma}\in V_{-\gamma}$ and setting $H_\alpha=[E_\alpha,E_{-\alpha}]$, $H_\beta=[E_\beta,E_{-\beta}]$, one can write $\Omega$ as
\[
\Omega=F(H_\alpha,H_\beta)+E_\alpha E_{-\alpha}+E_\beta E_{-\beta}+ E_{\alpha+\beta} E_{-\alpha-\beta}+E_{\alpha-\beta}E_{-\alpha+\beta},
\]
where $F(\cdot,\cdot)$ is a certain degree-$2$ polynomial. By the observation made in (i), we get
\[
\ell\circ E_\beta E_{-\beta}=\lambda \ell-\ell\circ E_\alpha E_{-\alpha}.
\]
\item[(iii)] By the Poincar\'{e}--Birkhoff--Witt theorem, we may write $R$ as
\[
R=R^\prime+\underset{0\leq i_1,i_2,j_1,j_2\leq n}{\sum}c_{i_1,i_2,j_1,j_2}E_\beta^{i_1}E_{-\beta}^{i_2}E_\alpha^{j_1} E_{-\alpha}^{j_2},
\]
where $R^\prime\in \fh\cdot \muu$ and $n\in \bn$.
\item[(iv)] $\ell$ vanishes on vectors of nonzero weight. Indeed, suppose that $\varphi^\prime$ is of weight $\gamma\neq 0$ and choose $X\in \ft$ satisfying $\gamma(X)\neq 0$; then $0=\ell(X\varphi^\prime)=\ell(\gamma(X)\varphi^\prime)=\gamma(X)\ell(\varphi^\prime)$, whence $\ell(\varphi^\prime)=0$.
\end{itemize}
Now for $\varphi_0$, by (i) and (iii), we have
\[
\ell(R\varphi_0)=\underset{0\leq i_1,i_2,j_1,j_2\leq n}{\sum}c_{i_1,i_2,j_1,j_2}\ell[E_\beta^{i_1}E_{-\beta}^{i_2}E_\alpha^{j_1} E_{-\alpha}^{j_2}\varphi_0].
\]
Because $\varphi_0$ is of weight zero, the vector $E_\beta^{i_1}E_{-\beta}^{i_2}E_\alpha^{j_1} E_{-\alpha}^{j_2}\varphi_0$ is of weight $(i_1-i_2)\beta+(j_1-j_2)\alpha$. Applying (iv), we get that
\[
\ell(R\varphi_0)=\underset{0\leq i,j\leq n}{\sum}c_{i,i,j,j}\ell[E_\beta^{i}E_{-\beta}^{i}E_\alpha^{j} E_{-\alpha}^{j}\varphi_0].
\]
Because $E_\alpha$ and $E_{-\alpha}\in \fk$, the vector $E_\alpha^{j} E_{-\alpha}^{j}\varphi_0$ is a scalar multiple of $\varphi_0$. So there are numbers $c_i$ ($0\leq i\leq n$) such that
\[
\ell(R\varphi_0)=\underset{0\leq i\leq n}{\sum} c_i\ell(E_\beta^i E_{-\beta}^i \varphi_0).
\]
Furthermore, the commutation relations $E_\beta E_{-\beta}=E_{-\beta} E_\beta+H_\beta$ and $E_\beta H_\beta=\big(H_\beta-\beta(H_\beta)\big)E_\beta$ imply $E_\beta^j H_\beta =\big(H_\beta-j\beta(H_\beta)\big)E_\beta^j$ and
\begin{align*}
&E_\beta^i E_{-\beta}^i\\
=&E_\beta E_{-\beta}E^{i-1}_\beta E^{i-1}_{-\beta}+\sum_{1\leq j\leq i-1} E_\beta^j H_\beta E_\beta^{i-1-j} E_{-\beta}^{i-1}\\
=&E_\beta E_{-\beta}E^{i-1}_\beta E^{i-1}_{-\beta}+\sum_{1\leq j\leq i-1} \big(H_\beta-j\beta(H_\beta)\big)E_\beta^{i-1} E_{-\beta}^{i-1}\\
=&E_\beta E_{-\beta}E^{i-1}_\beta E^{i-1}_{-\beta}+(i-1)H_\beta E_\beta^{i-1} E_{-\beta}^{i-1}-\frac{i(i-1)\beta(H_\beta)}{2}E_\beta^{i-1} E_{-\beta}^{i-1}.
\end{align*}
Applying (i) and (ii), one gets
\begin{align*}
&\ell(E_\beta^{i} E_{-\beta}^{i}\varphi_0)\\
=&\ell(E_\beta E_{-\beta}E^{i-1}_\beta E^{i-1}_{-\beta}\varphi_0)-\frac{i(i-1)\beta(H_\beta)}{2}\ell(E_\beta^{i-1} E_{-\beta}^{i-1}\varphi_0)\\
=&\big(\lambda-\frac{i(i-1)\beta(H_\beta)}{2}\big)\ell(E_\beta^{i-1} E_{-\beta}^{i-1}\varphi_0)
-\ell(E_{\alpha}E_{-\alpha}E_\beta^{i-1} E_{-\beta}^{i-1}\varphi_0)\\
=&\big(\lambda-\frac{i(i-1)\beta(H_\beta)}{2}\big)\ell(E_\beta^{i-1} E_{-\beta}^{i-1}\varphi_0)
-\ell(E_\beta^{i-1} E_{-\beta}^{i-1}E_{\alpha}E_{-\alpha}\varphi_0)\\
&-\ell([E_{\alpha}E_{-\alpha},E_\beta^{i-1} E_{-\beta}^{i-1}]\varphi_0).
\end{align*}
Note that the second term is a multiple of $\ell[E_\beta^{i-1} E_{-\beta}^{i-1}\varphi_0]$ and that, by (i) and the PBW theorem, the third term is a linear combination of $\ell[E_\beta^{i^\prime} E_{-\beta}^{i^\prime}\varphi_0]$ ($0\leq i^\prime\leq i-1$). By induction, we have $\ell[E_\beta^{i} E_{-\beta}^{i}\varphi_0]=C_i\ell(\varphi_0)$. (It obviously holds when $i=0$.) Therefore, $\ell(R\varphi_0)=C\ell(\varphi_0)$ for $C=\sum_{0\leq i\leq n}c_i C_i$.
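To illustrate the recursion in the simplest nontrivial case $i=1$ (a sketch; the scalar $\mu$ below is the one by which $E_\alpha E_{-\alpha}$ acts on $\varphi_0$, which exists because $E_{\pm\alpha}\in \fk$ and $\varphi_0$ has weight zero):
\[
\ell(E_\beta E_{-\beta}\varphi_0)=\lambda\ell(\varphi_0)-\ell(E_\alpha E_{-\alpha}\varphi_0)=(\lambda-\mu)\ell(\varphi_0),
\]
so that $C_1=\lambda-\mu$.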
\end{proof}
\begin{proof}[Proof of Proposition ~\ref{cuspidalcase}]
We first apply Lemma~\ref{nonvanishing} to get a vector $v\in V$ with $q(v)\in \bq_+\backslash {\bq^\times}^2$ such that $\mpp_v$ is nonzero on the space of smooth vectors of $\pi$. Choose a $K_\infty$-finite smooth vector $\varphi=v_\infty\otimes v_f$ of $\pi$ such that $\mpp_v(\pi(g_\infty)\varphi)\neq 0$. (This is possible because $K_\infty$-finite smooth decomposable vectors span a dense subspace in the space of smooth vectors and $\mpp_v$ is continuous.) By Lemma~\ref{specialform}, $\mpp_v(\pi(g_\infty)\varphi_0)\neq 0$ for $\varphi_0=v_0\otimes v_f\in v_0\otimes \pi_f$. By Lemma~\ref{period-transfer}, there exists $\Omega\in \HH^{2,2}(\fg,K_\infty,\pi)=\HH^{2,2}(\pi_f)$ such that $\int_{Z_{\bq v,g_f,K_f}}\Omega\neq 0$. Condition (i) in Proposition~\ref{cuspidalcase} obviously holds: for $<\HH^{1,1}(\pi_f^\prime),\HH^{2,2}(\pi_f)>\neq 0$, it is necessary that $\pi_f^\prime=\pi_f^\vee=\pi_f$.
\end{proof}
\section{Nonvanishing Periods II}\label{np2}
We verify Proposition ~\ref{formOmega} when $\pi$ is of type II. The candidates for $[Z]$ are cycles in $\SC^1_{s}(M)$ (see Sect. ~\ref{cycle}) and the form $\Omega$ will be constructed using Harder's method of Eisenstein cohomology.
It is more convenient to describe $\SC^1_{s}(M)$ in terms of the group
\[
H:=\{(g_1,g_2):g_1,g_2\in \GL_2, \det g_1=\det g_2\}
\]
and the embedding $H\hrar \GSp_4$ given by
\[
\left(\smalltwomatrix{a_1}{b_1}{c_1}{d_1},
\smalltwomatrix{a_2}{b_2}{c_2}{d_2}\right)\lrar
\left(\begin{smallmatrix}
a_1 & &b_1 &\\
&a_2& &b_2\\
c_1 & &d_1&\\
&c_2 & &d_2
\end{smallmatrix}\right).
\]
Note that $\bk_{H}:=\bk\cap H(\ba)=\prod_p \bk_{H,p}$ is a maximal compact subgroup of $H(\ba)$, where $\bk_{H,p}=\bk_p\cap H(\bq_p)$. Set $K_{H,\infty}=H(\br)\cap K_\infty$ and $K_{H,f}=H(\ba_f)\cap K_f$ for a compact open subgroup $K_f$ of $G(\ba_f)$. Let $Z_{H,K_f}$ denote the image of the Shimura variety $H(\bq)\backslash H(\ba)/K_{H,\infty}K_{H,f}$ in $\mkf$. The connected components of $Z_{H,K_f}$ and their Hecke translates, with $K_f$ varying, span $\SC^1_s(M)$.
\begin{proposition}\label{eseriescase}
Let $\pi=J(P,\tau,\frac{1}{2})$ with $\tau\subset \ma_{cusp}(\PGL_2)$, $\tau_\infty=\fD_4$ and $L(\frac{1}{2},\tau)\neq 0$. There exist a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ and a sufficiently small $K_f$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for $\pi_f^\prime\neq \pi_f$, \emph{(ii)} $\int_{Z_{H,K_f}}\Omega \neq 0$.
\end{proposition}
\begin{proof}
We will construct a differential form $E(\eta^{\kappa,\omega})$ in Section~\ref{ss_ecoho}. Proposition~\ref{eseriescase} is a consequence of Lemmas~\ref{closedness}, \ref{vp_p} and \ref{nvp_p}.
\end{proof}
\begin{corollary}\label{coro_eisen2}
Let $\pi$ be as in Proposition~\ref{eseriescase} and $\chi$ be a character of $\bq^\times\backslash \ba^\times$ with $\chi_\infty\in \{1,\sgn\}$; then $\mathrm{H}^{1,1}(\pi_f\otimes \chi_f)$ is spanned by the images of certain split special divisors.
\end{corollary}
\subsection{Eisenstein cohomology}\label{ss_ecoho}
We construct $\Omega$ by the method of Eisenstein cohomology, which was first used by Harder in \cite{harder1975}. The key observation is that $\HH^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))$ is nonzero when $\tau_\infty=\fD_4$. We wedge a form in this space with a degree-1 form on $P(\bq)\backslash G(\ba)/K_\infty$. The wedge product is a closed form on $P(\bq)\backslash G(\ba)/K_\infty$ and we then sum its translates by $P(\bq)\backslash G(\bq)$ to obtain a form on $G(\bq)\backslash G(\ba)/K_\infty$.
Fix a Levi decomposition $P=UM$ as below,
\begin{align*}
&U:=\left\{u=\smalltwomatrix{I_2}{n}{}{I_2}: n\in \mathrm{Sym}_{2\times 2} \right\},\\
&M:=\left\{m=\smalltwomatrix{A}{}{}{x\leftup{t}{A^{-1}}}: A\in
\mathrm{GL}_2, x\in \mathrm{GL}_1 \right\}.
\end{align*}
Set $\fm=\Lie(M(\br))\otimes \bc$, $\fu=\Lie(U(\br))\otimes \bc$, $K_{M,\infty}=K_\infty\cap M(\br)$ and $\tilde{\fk}_M=\Lie(K_{M,\infty})\otimes_\br \bc$. Define a quasi-character $\lambda$ on $P(\bq)\backslash P(\ba)$,
\begin{equation*}
\lambda(p)=\big|\frac{\det A}{x}\big|,\quad p=\smalltwomatrix{A}{\ast}{}{x\leftup{t}{A^{-1}}}.
\end{equation*}
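For later use we record the elementary observation that $\lambda$ is trivial on $U(\ba)$ and on the center: an element $u\in U(\ba)$ has $A=I_2$ and $x=1$, while $zI_4\in P(\ba)$ corresponds to $A=zI_2$ and $x=z^2$, so
\[
\lambda(u)=1,\qquad \lambda(zI_4)=\Big|\frac{\det (zI_2)}{z^2}\Big|=1.
\]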
Set $P_H=H\cap P$, $M_H=H\cap M$, $U_H=H\cap U$.
\subsubsection{The $(\fg,K_\infty)$-cohomology}
By \cite[Sect. 3.3, 3.4]{bw2000}, there is
\begin{align}\label{cohop0}
\nonumber &\HH^3(\fg,K_\infty,I_{P,\fD_4\otimes 1,\frac{1}{2}})\\
\nonumber =&\HH^3(\fp, K_{M,\infty}, \fD_4\otimes \lambda^2)\\
\nonumber =&\HH^1(\fm,K_{M,\infty},\fD_4\otimes \HH^2(\fu,\bc)\otimes \lambda^2)\\
=&\Hom_{K_{M,\infty}}(\fm/\tilde{\fk}_M,\fD_4\otimes \HH^2(\fu,\bc)\otimes \lambda^2).
\end{align}
Note that $\ft\cap \wtilde{\fk}_M=\bc H$ and that $\fm/\wtilde{\fk}_M=\bc h\oplus \bc n_0$ with respect to the identification $\fg/\wtilde{\fk}=\fb_0$. Observe the following:
\begin{itemize}
\item[(i)] $(\fm/\tilde{\fk}_M)^*=\bc \eta^+\oplus\bc\eta^-$, where $\eta^\pm=h^*\pm\frac{i}{2}n_0^*$ are of weights $\pm\alpha$ with respect to $\ft\cap \wtilde{\fk}_M$.
\item[(ii)] $\mathrm{H}^2(\fu,\bc)=\bc\eta_+\oplus\bc \eta_-$, where $\eta_\pm=n_1^*\wedge n_3^*\pm in_1^*\wedge n_2^*$ are of weights $\pm\alpha$ with respect to $\ft\cap\wtilde{\fk}$.
\item[(iii)] $\fD_4=\oplus_{n\in \bz, |n|\geq 2} \fD_4(n\alpha)$, where $\fD_4(n\alpha)$ refers to the weight space of $\fD_4$ with weight $n\alpha$ and is $1$-dimensional.
\item[(iv)] $K_{M,\infty}\cong Z(\br)^+ \times \OO(2)_\br$. An element $(\alpha,A)$ on the RHS corresponds to $\smalltwomatrix{\alpha A}{}{}{\alpha\leftup{t}{A^{-1}}}$ on the LHS. The element $\smalltwomatrix{1}{}{}{-1}\in \OO(2)_\br$ sends $\eta^\pm$ to $\eta^\mp$, $\eta_\pm$ to $-\eta_\mp$, and $\fD_4(n\alpha)$ to $\fD_4(-n\alpha)$.
\end{itemize}
By choosing $v_+\in \fD_4(2\alpha)$ and setting $v_-=-\smalltwomatrix{1}{}{}{-1}\circ v_+$, we can write
\begin{align}\label{cohop}
\nonumber &\Hom_{K_{M,\infty}}(\fm/\tilde{\fk}_M,\fD_4\otimes \HH^2(\fu,\bc)\otimes \lambda^2)\\
=&\bc\cdot (v_+\eta^-\wedge\eta_-\otimes 1+v_-\eta^+\wedge\eta_+\otimes 1).
\end{align}
\subsubsection{The global induction}\label{gicoho}
By (\ref{cohop}), there is a global isomorphism
\begin{align}\label{cohop_global}
\nonumber &\HH^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))\\
\cong &\HH^1(\fm,K_{M,\infty},\fD_4\otimes \HH^2(\fu,\bc)\otimes \lambda^2)\otimes \Pi(\tau_f,\frac{1}{2}).
\end{align}
Here $\Pi(\tau_f,\frac{1}{2})$ refers to the abstract induced representation of $G(\ba_f)$ consisting of $\bk_f$-finite functions $\phi_f: G(\ba_f)\rar \tau_f$ that satisfy
\[
\phi_f\left(\smalltwomatrix{A}{\ast}{}{x\leftup{t}{A^{-1}}}g_f\right)=\big|\frac{\det A}{x}\big|^2\tau_f(A)(\phi_f(g_f)).
\]
We define a map from the RHS of (\ref{cohop_global}) to the LHS.
(i) Let $v_\pm$ be as in (\ref{cohop}). To $\phi_f\in \Pi(\tau_f,\frac{1}{2})$, we associate two functions $F_\pm(g)$ on $P(\br)\times G(\ba_f)$: at $g_f\in G(\ba_f)$ and $p=\smalltwomatrix{A}{\ast}{}{x\leftup{t}{A^{-1}}}\in P(\br)$, set
\[
F_\pm(pg_f)=\lambda^2(p)\cdot [v_\pm\otimes \phi_f(g_f)](A).
\]
Here $[v_\pm\otimes \phi_f(g_f)]$ refer to the cuspidal automorphic forms in the space of $\tau$ that correspond to $v_\pm\otimes \phi_f(g_f)$ under the isomorphism $\tau=\fD_4\otimes \tau_f$.
(ii) Accordingly, define a differential form $\omega$ on $P(\br)\times G(\ba_f)$:
\[
\omega:=F_+(g)\eta^-\wedge\eta_-+F_-(g)\eta^+\wedge\eta_+.
\]
Because of (\ref{cohop}), $\omega$ is right invariant by $K_{M,\infty}$ and descends to a form on $P(\br)/K_{M,\infty}\times G(\ba_f)$. By the identification $P(\br)/K_{M,\infty}=G(\br)/K_\infty$, it is regarded as a form on $G(\ba)/K_\infty$. Its pullback to $G(\ba)$ is
\[
\omega(p_\infty k_\infty, g_f)=R_{k_\infty^{-1}}^\ast \big(\omega(p_\infty,g_f)\big),\quad p_\infty\in P(\br), k_\infty\in K_\infty, g_f\in G(\ba_f).
\]
By (\ref{cohop_global}), the form $\omega$ is closed and belongs to $\mathrm{H}^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))$.
\subsubsection{Eisenstein series operation}
Regard $\lambda$ as a function on $G(\ba)$ by setting $\lambda(pk)=\lambda(p)$ for $p\in P(\ba), k\in \bk$. Define $\eta_o:=\lambda^*(\frac{dt}{t})$. It is a closed degree-$1$ form on $P(\bq)\backslash G(\ba)/K_\infty$ and $\eta_o(p)=2a^\ast$ for $p\in P(\ba)$.
To $\kappa\in C^\infty_c(\br_+)$ and $\omega\in \HH^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))$, we associate a differential form $\eta^{\kappa,\omega}$ on $P(\bq)\backslash G(\ba)/K_\infty$,
\[
\eta^{\kappa,\omega}:=\kappa(\lambda(g))\omega\wedge \eta_o.
\]
Because $\omega$ and $\eta_o$ are closed and $d\big(\kappa(\lambda(g))\big)=\lambda(g)\kappa^\prime(\lambda(g))\eta_o$,
we have $d(\eta^{\kappa,\omega})=d\big(\kappa(\lambda(g))\big)\wedge \omega\wedge \eta_o=\lambda(g)\kappa^\prime(\lambda(g))\,\eta_o\wedge\omega\wedge\eta_o=0$, since $\eta_o\wedge\omega\wedge\eta_o=\pm\,\omega\wedge\eta_o\wedge\eta_o=0$; whence $\eta^{\kappa,\omega}$ is a closed form.
Imitating the construction of Eisenstein series, we define
\begin{align*}\label{ecoho_p}
&E(\eta^{\kappa,\omega})=\sum_{\gamma\in P(\bq)\backslash G(\bq)}\ L_{\gamma}^*\big(\eta^{\kappa,\omega}\big).
\end{align*}
It is a $\bk_f$-finite differential form on $G(\bq)\backslash G(\ba)/K_\infty$.
\begin{remark}\label{rm_eclassp}
Choose a basis $\{\omega_i\}$ of $\fp^\ast$ and write $\eta_o=\sum_i c_i(g)\omega_i$, then $c_i(g)$ are left $P(\ba)$-invariant and right $\bk_f$-invariant because $\eta_o$ is so. Write $\omega=\sum_I F_I(g)\omega_I$ with $F_I(g)\in \Pi(\tau,\frac{1}{2})$ and $\omega_I\in \wedge^3\fp^\ast$, then
\begin{equation}\label{eclassp}
E(\eta^{\kappa,\omega})=\sum_{i,I}E(\kappa\circ \lambda\cdot c_iF_I)\omega_I\wedge \omega_i.
\end{equation}
Here $c_iF_I\in \Pi(\tau,\frac{1}{2})$ and $E(\kappa\circ \lambda\cdot c_iF_I)=\sum_{\gamma\in P(\bq)\backslash G(\bq)}[\kappa\circ\lambda\cdot c_iF_I](\gamma g)$ are pseudo-Eisenstein series of type $(P,\tau)$. $E(\eta^{\kappa,\omega})$ is closed by Lemma~\ref{closedness} and we may call it a pseudo-Eisenstein cohomological form.
\end{remark}
\begin{lemma}\label{closedness}
Suppose $\tau\subset \ma_{cusp}(\PGL_2)$ with $\tau_\infty=\fD_4$. Then $E(\eta^{\kappa,\omega})$ is a $\bk_f$-finite rapidly decreasing closed form on $G(\bq)\backslash G(\ba)/K_\infty$.
\end{lemma}
\begin{proof}
By Proposition II.1.10 in \cite{moegwald95}, the pseudo-Eisenstein series $E(\kappa\circ \lambda\cdot c_iF_I)$ in (\ref{eclassp}) are rapidly decreasing. So is $E(\eta^{\kappa,\omega})$.
For closedness, as observed by Harder in \cite{harder1975}, it suffices to show that for each $\bq$-parabolic subgroup $\mathrm{P}$ of $G$, the constant term of $E(\eta^{\kappa,\omega})$ along the unipotent radical of $\mathrm{P}$ is a closed form on $\mathrm{P}(\bq)\backslash G(\ba)/K_\infty$. One may suppose that $\mathrm{P}$ is one of $B, P$ and $Q$. The constant terms along $B$ and $Q$ are zero because of the cuspidality of $\tau$.
We now compute the constant term along $P$. Set
\[
w_1=1,\
w_2=\left(\begin{smallmatrix}
& &1 &\\
&1& &\\
-1& & &\\
& & &1
\end{smallmatrix}\right),\
w_3=\left(\begin{smallmatrix}
1& & &\\
&& &1\\
& &1 &\\
&-1 & &
\end{smallmatrix}\right),\
w_4=\left(\begin{smallmatrix}
& &1 &\\
&& &1\\
1& & &\\
&1 & &
\end{smallmatrix}\right).
\]
There is $G(\bq)=\sqcup_{j=1}^4 P(\bq)w_j P(\bq)$. Set $U_{j}=w_j^{-1}Pw_j\cap U$ and $M_{j}=w_j^{-1}Pw_j\cap M$; then $P(\bq)w_j P(\bq)$ is the disjoint union of $P(\bq)w_j \gamma_j \gamma_j^\prime$ with $\gamma_j\in U_j(\bq)\backslash U(\bq)$ and $\gamma_j^\prime\in M_j(\bq)\backslash M(\bq)$. So
\[
E_P(\eta^{\kappa,\omega})=\int_{[U]} L_u^*E(\eta^{\kappa,\omega})du=\sum_j \sum_{\gamma_j,\gamma_j^\prime} \int_{[U]} L_{w_j \gamma_j \gamma_j^\prime u}^*\eta^{\kappa,\omega}du.
\]
We make a change of variable $u\rar {\gamma_j^\prime}^{-1}u\gamma_j^\prime$ and then combine the summation over $\gamma_j$ with the integration over $[U]$. It yields
\begin{equation}\label{constanterm}
E_P(\eta^{\kappa,\omega})=\sum_j \sum_{\gamma_j^\prime}\int_{U_j(\bq)\backslash U(\ba)}L_{w_j u\gamma_j^\prime}^\ast \eta^{\kappa,\omega}\,du.
\end{equation}
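For orientation: when $j=1$ we have $U_1=U$ and $M_1=M$, and since $\eta^{\kappa,\omega}$ is left $U(\ba)$-invariant the $j=1$ term is $\eta^{\kappa,\omega}$ itself; at the other extreme, $w_4$ conjugates $P$ to the opposite parabolic, so
\[
U_4=w_4^{-1}Pw_4\cap U=\{1\},
\]
and the $j=4$ term is an integral over all of $U(\ba)$.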
When $j=2,3$, there exists a $1$-dimensional subgroup $U_j^\prime\subset U_j$ satisfying $w_jU_j^\prime w_j^{-1}\subset M$, whence $\int_{U_j(\bq)\backslash U(\ba)}L_{w_j u\gamma_j^\prime}^\ast \eta^{\kappa,\omega}\,du=0$ by the cuspidality of $\tau$. So only the terms for $j=1, 4$ remain in (\ref{constanterm}),
\[
E_P(\eta^{\kappa,\omega}) =\eta^{\kappa,\omega}+\int_{U(\ba)}
L_{w_4 u}^*\eta^{\kappa,\omega}du.
\]
We now show $\eta^\prime:=\int_{U(\ba)} L_{w_4 u}^*\eta^{\kappa,\omega}du$ is closed. Write $\omega=\sum_I F_I \omega_I$ with $F_I\in \Pi(\tau,\frac{1}{2})$ and $\omega_I\in \wedge^3\fp^\ast$. Put $F_{I,\kappa}=F_I(g)\kappa\big(\lambda(g)\big)$, then
\begin{align*}
&\eta^{\kappa,\omega}=\sum_I F_{I,\kappa}(g)\omega_I\wedge \eta_o,\\
&\eta^\prime=\sum_I \omega_I\wedge \big[\int_{U(\ba)}L^\ast_{w_4u}(F_{I,\kappa}\eta_o) du\big].
\end{align*}
Observe that
\begin{align*}
d\big[\int_{U(\ba)}L^\ast_{w_4u}(F_{I,\kappa}\eta_o) du\big]=&\int_{U(\ba)}d\big[L^\ast_{w_4u}(F_{I,\kappa}\eta_o)\big]du.
\end{align*}
Here one can change the order of differentiation and integration because $\kappa$ is compactly supported and $F_I$ is cuspidal on $M(\ba)$. So $d\eta^\prime$ equals
\begin{align*}
&\sum_I d\omega_I\wedge \big[\int_{U(\ba)}L^\ast_{w_4u}(F_{I,\kappa}\eta_o) du\big]-\sum_I\omega_I\wedge d\big[\int_{U(\ba)}L^\ast_{w_4u}(F_{I,\kappa}\eta_o) du\big]\\
=&\sum_I \int_{U(\ba)} d\omega_I\wedge L^\ast_{w_4u}(F_{I,\kappa}\eta_o)- \omega_I\wedge d[L^\ast_{w_4u}(F_{I,\kappa}\eta_o)]\\
=&\sum_I \int_{U(\ba)} d\big[L^\ast_{w_4u} (\eta^{\kappa,\omega})\big]du.
\end{align*}
Because $\eta^{\kappa,\omega}$ is closed, there is $d\big[L^\ast_{w_4u} (\eta^{\kappa,\omega})\big]=L^\ast_{w_4u} (d\eta^{\kappa,\omega})=0$, whence $d\eta^\prime=0$ and $\eta^\prime$ is closed. Thus, $E(\eta^{\kappa,\omega})$ is closed.
\end{proof}
\subsection{Properties of $E(\eta^{\kappa,\omega})$}
\begin{lemma}\label{vp_p}
$<\mathrm{H}^{1,1}(\pi_f^\prime),E(\eta^{\kappa,\omega})>=0$ for $\pi_f^\prime\neq \pi_f$.
\end{lemma}
\begin{proof}
By Section ~\ref{repofh2}, nonzero $\HH^{1,1}(\pi_f^\prime)$ are of the form
\[
\HH^{1,1}(\pi_f^\prime)=\chi^\prime(\nu(g))\cdot \HH^{1,1}(\fg,K_\infty,\pi^\pprime),
\]
where $\chi^\prime$ is a character with $\chi^\prime_\infty\in \{1,\sgn\}$ and $\pi^\pprime$ is of type I, II or III. Hence a form $\omega^\prime$ in $\HH^{1,1}(\pi_f^\prime)$ is of the form
\[
\omega^\prime=\chi^\prime(\nu(g))\sum_{J} \varphi_J \omega_J,\quad \varphi_J\in \pi^\pprime,\ \omega_J\in \wedge^2\fp^\ast.
\]
Write $E(\eta^{\kappa,\omega})=\sum_{i,I}E(\kappa\circ \lambda \cdot c_iF_I)\omega_I\wedge \omega_i$ as in (\ref{eclassp}) of Remark ~\ref{rm_eclassp}, with $\omega_I\in \wedge^3\fp^\ast$, $\omega_i\in \fp^\ast$, $F_I(g)\in \Pi(\tau,\frac{1}{2})$, and $c_i(g)$ left invariant by $P(\ba)$. It follows that
\[
<\omega^\prime,E(\eta^{\kappa,\omega})>=\sum_{i,I,J} \int_M \chi^\prime(\nu(g))\varphi_J(g)E(\kappa\circ \lambda \cdot c_iF_I)\omega_J\wedge \omega_I\wedge \omega_i.
\]
The form $\omega_J\wedge \omega_I\wedge \omega_i$ is either zero or a scalar multiple of the volume form on $M$. So $<\omega^\prime,E(\eta^{\kappa,\omega})>$ is a finite sum of integrals of the following type,
\begin{align}\label{pi_g}
\nonumber &\int_{G(\bq)Z(\br)^+\backslash G(\ba)} \chi^\prime(\nu(g))\varphi(g) E(\kappa\circ \lambda \cdot F) dg\\
=&\int_{P(\bq)U(\ba)Z(\br)^+\backslash G(\ba)}\lambda(g)^{-3}\chi^\prime(\nu(g))\varphi_P(g) \kappa\big(\lambda(g)\big) F(g)dg,
\end{align}
where $\varphi\in \pi^\pprime$, $F\in \Pi(\tau,\frac{1}{2})$ and $\varphi_P(g):=\int_{[U]}\varphi(ug)du$ refers to the constant term of $\varphi$ along $P$. (Note that $c_i(g)F_I(g)\in \Pi(\tau,\frac{1}{2})$.)
(i) When $\pi^\pprime$ is of type I, $\varphi_P=0$ and the above integral vanishes. When $\pi^\pprime$ is of type III, $\varphi$ is constant and the above integral vanishes because $F$ is cuspidal when restricted to $M(\ba)$. So $<\omega^\prime,E(\eta^{\kappa,\omega})>=0$.
(ii) $\pi^\pprime$ is of type II. So $\pi^\pprime=J(P,\tau^\pprime,\frac{1}{2})$ with $\tau^\pprime\subset \ma_{cusp}(\PGL_2)$ and $L(\frac{1}{2},\tau^\pprime)\neq 0$. For $\varphi\in \pi^\pprime$, the constant term $\varphi_P$ belongs to the space $\Pi(\tau^\pprime,-\frac{1}{2})$. Writing elements in $G(\ba)$ as $g=\smalltwomatrix{A}{}{}{x\leftup{t}{A^{-1}}}k$ with $k\in \bk$, we can rewrite the integral in (\ref{pi_g}) as
\begin{equation}\label{pp_pintegral}
\int_{\bk}\int_{M(\bq)Z(\br)^+\backslash M(\ba)}\chi^\prime(x)\kappa(\big|\frac{\det A}{x}\big|)f^\pprime_k(A)f_k(A)d^\times A\, d^\times x\, dk,
\end{equation}
with
\begin{align*}
&f_k^\pprime(A)=|\det A|^{-1}\varphi_P\big(\smalltwomatrix{A}{}{}{\leftup{t}{A^{-1}}}k\big)\in \tau^\pprime,\\
&f_k(A)=|\det A|^{-2}F\big(\smalltwomatrix{A}{}{}{\leftup{t}{A^{-1}}}k\big)\in \tau.
\end{align*}
Substituting $x=x^\prime\det A$, we can turn the inner integral in (\ref{pp_pintegral}) into
\begin{equation*}\label{sintegral}
\int_{\bq^\times\backslash \ba^\times}\chi^\prime(x^\prime)\kappa(|x^\prime|^{-1})d^\times x^\prime\cdot \int_{\bq^\times\br_+\backslash \ba^\times}\chi^\prime(z)d^\times z \int_{[\PGL_2]}f^\pprime_k(A)f_k(A)d^\times A.
\end{equation*}
For the above expression to be nonzero, it is necessary that $\chi^\prime=1$ and $\tau^\pprime={\tau}^\vee=\tau$. Thus, for (\ref{pp_pintegral}) and (\ref{pi_g}) to be nonzero, it is necessary that $\pi_f^\prime=\pi_f$. Therefore, $<\omega^\prime,E(\eta^{\kappa,\omega})>=0$ for $\pi^\prime_f\neq \pi_f$.
\end{proof}
\begin{lemma}\label{nvp_p}
Suppose $\tau\subset \ma_{cusp}(\PGL_2)$ with $\tau_\infty=\fD_4$ and $L(\frac{1}{2},\tau)\neq 0$. There exist $\kappa\in C^\infty_c(\br_+)$, $\omega\in \HH^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))$ and a sufficiently small $K_f$ such that $\int_{Z_{H,K_f}}E(\eta^{\kappa,\omega})\neq 0$.
\end{lemma}
\begin{proof}
The map $\Pi(\tau_f,\frac{1}{2})\rar \HH^3(\fg,K_\infty,\Pi(\tau,\frac{1}{2}))$, $\phi_f\mapsto \omega$, from Section~\ref{gicoho} guides the choice of $\omega$. To a decomposable vector $v_f=\otimes v_p\in\tau_f$, we associate a specific section $\phi_{v_f}=\otimes \phi_p\in \Pi(\tau_f,\frac{1}{2})$: let $S(v_f)$ be the set of finite places of $\bq$ at which $v_p$ is spherical; for $p\in S(v_f)$, choose the $\phi_p$ with $\phi_p|_{\bk_p}=v_p$; for $p\not\in S(v_f)$, set $\overline{U}=J_2^{-1}UJ_2$ and let $\phi_p$ be the section supported in $P(\bq_p)\overline{U}(\bq_p)$ with
\[
\phi_p\smalltwomatrix{I_2}{}{n}{I_2}=
\begin{cases}
v_p, &n\in \mathrm{Sym}_{2\times 2}(\bz_p),\\
0, &n\not\in \mathrm{Sym}_{2\times 2}(\bz_p).
\end{cases}
\]
(i) Choose a decomposable $v_f\in \tau_f$, set $\phi_f=\phi_{v_f}$ and let $\omega$ be the form associated to $\phi_f$. Choose $K_f$ sufficiently small so that $\phi_f$ is $K_f$-invariant.
Because $H(\bq)$ acts transitively on $P(\bq)\backslash G(\bq)$, we have
\begin{align}\label{pip_eq1}
\nonumber \int_{Z_{H,K_f}}E(\eta^{\kappa,\omega})=&\int_{H(\bq)\backslash H(\ba)/K_{H,\infty}}\sum_{\gamma\in P(\bq)\backslash G(\bq)}L_{\gamma}^\ast \eta^{\kappa,\omega} dg_f\\
=&\int_{P_H(\bq)\backslash H(\ba)/K_{H,\infty}} \eta^{\kappa,\omega} dh_f.
\end{align}
Note that $H(\br)/K_{H,\infty}=P_H(\br)/(K_{H,\infty}\cap P_H(\br))$ and that $\eta^{\kappa,\omega}$ is right-invariant by $K_{H,\infty}\cap P_H(\br)=Z(\br)^+\cdot \{I_4, \diag(1,-1,1,-1)\}$. So
\begin{align}\label{pip_eq2}
\int_{P_H(\bq)\backslash H(\ba)/K_{H,\infty}} \eta^{\kappa,\omega} dh_f
\nonumber =&\int_{P_H(\bq)\backslash P_H(\br)\times H(\ba_f)/ (K_{H,\infty}\cap P_H(\br))} \eta^{\kappa,\omega}dh_f\\
=&2\int_{P_H(\bq)Z(\br)^+ \backslash P_H(\br)\times H(\ba_f)} \eta^{\kappa,\omega}dh_f
\end{align}
(ii) Recall that $\eta^{\kappa,\omega}=\kappa\big(\lambda(g)\big)\omega\wedge \eta_o$. Also recall from Section ~\ref{gicoho} that $\omega=F_+(g)\eta^-\wedge\eta_-+F_-(g)\eta^+\wedge\eta_+$ with
\[
F_\pm(pg_f)=\lambda^2(p)\cdot [v_\pm\otimes \phi_f(g_f)](A),\quad p=\smalltwomatrix{A}{\ast}{}{x\leftup{t}{A^{-1}}},\ g_f\in G(\ba_f).
\]
The right translation by $\diag(1,-1,1,-1)$ sends $F_+(g)\eta^-\wedge\eta_-$ to $F_-(g)\eta^+\wedge\eta_+$, and vice versa. So the RHS of (\ref{pip_eq2}) is equal to
\begin{equation}\label{pip_eq3}
4\int_{P_H(\bq)Z(\br)^+ \backslash P_H(\br)\times H(\ba_f)} \kappa\big(\lambda(h)\big)F_+(h)\eta^-\wedge\eta_-\wedge \eta_o\cdot dh_f
\end{equation}
The form $(\eta^-\wedge\eta_-\wedge \eta_o)|_{P_H(\br)}=2ia^\ast\wedge h^\ast\wedge n_1^\ast \wedge n_2^\ast$ represents a left Haar measure on $P_H(\br)$. Write it as $c_{H,\infty}dp^\prime_\infty$, then (\ref{pip_eq3}) is equal to
\begin{equation}\label{pip_eq4}
4c_{H,\infty}\int_{P_H(\bq)Z(\br)^+ \backslash P_H(\br)\times H(\ba_f)} \kappa\big(\lambda(h)\big)F_+(p^\prime_\infty,h_f)dp^\prime_\infty dh_f.
\end{equation}
(iii) Set $\overline{U}_H:=\overline{U}\cap H$ and $H_{v_f}:=P_H(\ba_f)\cdot H^\prime_{v_f}$, with
\[
H^\prime_{v_f}=(\prod_{p\not\in S(v_f)} \overline{U}_H(\bz_p))\times (\prod_{p\in S(v_f)} \bk_{H,p}).
\]
By the choice of $\phi_f$, the function $F_+(p_\infty^\prime,h_f)$ is supported in $P_H(\br)\times H_{v_f}$ and is right invariant by $H_{v_f}^\prime$. So (\ref{pip_eq4}) is equal to
\begin{align}\label{pip_eq5}
\nonumber &4c_{H,\infty}\mathrm{Vol}(H^\prime_{v_f})\int_{P_H(\bq)Z(\br)^+ \backslash P_H(\ba)} \kappa\big(\lambda(p^\prime)\big)F_+(p^\prime)dp^\prime\\
=&4c_{H,\infty}\mathrm{Vol}(H^\prime_{v_f})\underset{M_H(\bq)Z(\br)^+ \backslash M_H(\ba)}{\int} \lambda(m^\prime)^{-2} \kappa\big(\lambda(m^\prime)\big)F_+(m^\prime)dm^\prime
\end{align}
Write $m^\prime=\diag(a_1,a_2,a_0a_1^{-1},a_0a_2^{-1})$ and $f_+=[v_+\otimes v_f]\in \tau$, then
\[
\kappa\big(\lambda(m^\prime)\big)=\kappa(\big|\frac{a_1a_2}{a_0}\big|),\quad F_+(m^\prime)=\lambda(m^\prime)^2 f_+\smalltwomatrix{a_1}{}{}{a_2}.
\]
By changing variables, one can simplify (\ref{pip_eq5}) and turn it into
\[
4c_{H,\infty}\mathrm{Vol}(H^\prime_{v_f})\mathrm{Vol}(\bq^\times \br_+\backslash \ba^\times)^2\int_{\br_+}\kappa(t)d^\times t\int_{\bq^\times\backslash \ba^\times}f_+\smalltwomatrix{a_1}{}{}{1}d^\times a_1.
\]
Thus, setting $C=4c_{H,\infty}\mathrm{Vol}(H^\prime_{v_f})c^2$ with $c:=\mathrm{Vol}(\bq^\times \br_+\backslash \ba^\times)$, we have
\[
\int_{Z_{H,K_f}}E(\eta^{\kappa,\omega})=C\int_{\br_+}\kappa(t)d^\times t\cdot \int_{\bq^\times\backslash \ba^\times} f_+\smalltwomatrix{a_1}{}{}{1}d^\times a_1.
\]
(iv) When $L(\frac{1}{2},\tau)\neq 0$, the integral $\int_{\bq^\times\backslash \ba^\times} f_+\smalltwomatrix{a_1}{}{}{1}d^\times a_1$ does not vanish identically on $v_+\otimes \tau_f$. (This is well-known and follows from the Jacquet-Langlands theory \cite{jl70} for $L$-functions of $\GL(2)$-representations.) Choose $v_f$ such that this integral is nonzero and choose $\kappa$ such that $\int_{\br_+}\kappa(t)d^\times t\neq 0$; then the period $\int_{Z_{H,K_f}}E(\eta^{\kappa,\omega})$ is nonzero.
\end{proof}
\section{Nonvanishing Periods III}\label{np3}
We verify Proposition ~\ref{formOmega} for $\pi=1$. As in Section ~\ref{np2}, we consider the split divisor $Z_{H,K_f}$ associated to the group $H\subset G$ and construct a form $\Omega$ by using Eisenstein cohomology. Note that $H^2(\fg,K_\infty,1)=\bc \omega_0$ with
\[
\omega_0=h^*\wedge n_2^*+\frac{1}{2}n_0^*\wedge n_3^*+a^*\wedge n_1^*.
\]
Note that $\omega_0\in \wedge^2 \fb_0^\ast\cong \wedge^2 \fp^\ast$ is $K_\infty$-invariant.
\begin{proposition}\label{charactercase}
There exists a $\bk_f$-finite rapidly decreasing closed form $\Omega$ on $M$ such that \emph{(i)} $<\mathrm{H}^{1,1}(\pi_f^\prime),\Omega>=0$ for $\pi_f^\prime\neq 1$, \emph{(ii)} $\int_{Z_{H,K_f}} \Omega \neq 0$.
\end{proposition}
We prove Proposition ~\ref{charactercase} at the end of this section. Here is an immediate corollary.
\begin{corollary}
Let $\chi$ be a character of $\bq^\times\br_+\backslash \ba^\times$ and $\pi=\chi\circ \nu$; then $\mathrm{H}^{1,1}(\pi_f)$ is spanned by the image of a certain split special divisor.
\end{corollary}
\subsection{Eisenstein cohomology}
Let $N$ be the unipotent radical of $B$ and $A$ be the diagonal subgroup of $B$. Set $K_{A,\infty}=K_\infty\cap A(\br)$.
To $\tau\in C^\infty_c(\br_+\times \br_+)$, we associate a function $f^\tau$ on $B(\bq)\backslash G(\ba)/K_\infty$:
\[
f^\tau(g)=\tau(|\frac{a_1}{a_2}|,|\frac{a_2^2}{a_0}|),
\]
for $g=nak$ with $n\in N(\ba)$, $a=\diag(a_1,a_2,\frac{a_0}{a_1},\frac{a_0}{a_2})\in A(\ba)$, and $k\in \bk$.
To $\tau_1, \tau_2 \in C^\infty_c(\br_+\times \br_+)$, we associate a differential form $\eta^{\tau_1,\tau_2}$ on $B(\bq)\backslash B(\br)\times G(\ba_f)/K_{A,\infty}$:
\[
\eta^{\tau_1,\tau_2}(g)=f^{\tau_1}(g) a^* \wedge h^*\wedge n_1^* \wedge n_2^*+
f^{\tau_2} (g) a^* \wedge n_0^* \wedge (n_1^*-n_2^*) \wedge n_3^*.
\]
Because $B(\br)/K_{A,\infty}=G(\br)/K_\infty$, the form $\eta^{\tau_1,\tau_2}$ can be regarded as a differential form on $B(\bq)\backslash B(\br)\times G(\ba_f)/K_\infty$. Its pullback to $G(\ba)$ is
\[
\eta^{\tau_1,\tau_2}(b_\infty k_\infty,g_f)=R_{k_\infty^{-1}}\big(\eta^{\tau_1,\tau_2}(b_\infty,g_f)\big),
\]
for $b_\infty\in B(\br)$, $k_\infty\in K_\infty$, and $g_f\in G(\ba_f)$.
Imitating the construction of Eisenstein series, we define
\begin{align*}
E(\eta^{\tau_1,\tau_2})=\sum_{\gamma\in B(\bq)\backslash G(\bq)} L_\gamma^*\big(\eta^{\tau_1,\tau_2}\big).
\end{align*}
\begin{remark}\label{exp_eb}
$\eta^1:=a^* \wedge h^*\wedge n_1^* \wedge n_2^*$ and $\eta^2:=a^* \wedge n_0^* \wedge (n_1^*-n_2^*) \wedge n_3^*$ (the two summands of $\eta^{\tau_1,\tau_2}$) are left-invariant differential forms on $B(\bq)\backslash B(\br)\times G(\ba_f)/K_{A,\infty}$. We use the same notation for their pullbacks to $G(\ba)$. Choose a basis $\{\omega_I\}$ of $\wedge^4\fp^\ast$ and write $\eta^i=\sum_I C_{i,I}(g)\omega_I$; then the $C_{i,I}(g)$ are left $B(\ba)$-invariant and right $\bk_f$-invariant because the $\eta^i$ are. We have
\begin{equation}\label{eclass4}
E(\eta^{\tau_1,\tau_2})=\sum_{i,I}E(C_{i,I}f^{\tau_i})\omega_I.
\end{equation}
Here $E(C_{i,I}f^{\tau_i}):=\sum_{\gamma\in B(\bq)\backslash G(\bq)}C_{i,I}(\gamma g)f^{\tau_i}(\gamma g)$ are Borel-type pseudo-Eisenstein series and are rapidly decreasing. So $E(\eta^{\tau_1,\tau_2})$ is rapidly decreasing.
\end{remark}
\begin{lemma}\label{closedness1}
$\eta^{\tau_1,\tau_2}$ is closed if and only if $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$.
\end{lemma}
\begin{proof}
One needs to calculate $d(\eta^{\tau_1,\tau_2})$. Recall that for a differential form $\Omega$ of degree $m$, $d\Omega$ is defined by the expression
\begin{align*}
d\Omega(X_0,\cdots,&X_m)=\sum_{i=0}^m (-1)^i X_i
\left(\Omega(X_0,\cdots,\hat{X_i},\cdots,X_m)\right)\\
&+\sum_{0\leq i< j\leq m}
(-1)^{i+j}\Omega([X_i,X_j],X_0,\cdots,\hat{X_i},\cdots,\hat{X_j},\cdots,X_m),
\end{align*}
where $X_0,\cdots,X_m$ are smooth vector fields.
$\{a^*,h^*,n_0^*,n_1^*,n_2^*,n_3^*\}$ is a frame of the cotangent bundle on $G(\br)/K_\infty=B(\br)/K_{A,\infty}$. Direct calculation shows that
\begin{align*}
&d(a^*)=d(h^*)=0,\quad d(n_0^\ast)=-2h^\ast\wedge n_0^\ast,\\
&d(n_1^\ast)=-2a^\ast \wedge n_1^\ast-2h^\ast\wedge n_2^\ast-n_0^\ast\wedge n_3^\ast,\\
&d(n_2^\ast)=-2a^\ast \wedge n_2^\ast-2h^\ast\wedge n_1^\ast-n_0^\ast\wedge n_3^\ast,\\
&d(n_3^\ast)=-2a^\ast\wedge n_3^\ast-n_0^\ast\wedge (n_1^\ast+n_2^\ast),\\
&d(f^\tau)=f^{2t_2\ppder{\tau}{t_2}}a^* +f^{2t_1\ppder{\tau}{t_1}-2t_2\ppder{\tau}{t_2}}h^*.
\end{align*}
It follows that
\begin{align*}
&d(n_1^*\wedge n_2^*)=-4a^*\wedge n_1^*\wedge n_2^*
-n_0^*\wedge
(n_1^*-n_2^*)\wedge n_3^*;\\
&d(n_0^*\wedge (n_1^*-n_2^*)\wedge n_3^*)=-4a^*\wedge n_0^*\wedge
(n_1^*-n_2^*)\wedge n_3^*.
\end{align*}
Hence
\begin{align*}
d(\eta^{\tau_1,\tau_2})
&=(-f^{\tau_1}-f^{2t_1\ppder{\tau_2}{t_1}-2t_2\ppder{\tau_2}{t_2}}
+2f^{\tau_2})a^*\wedge h^*\wedge n_0^*\wedge (n_1^*-n_2^*)\wedge n_3^*\\
&=f^{-\tau_1-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}+2\tau_2}
a^*\wedge h^*\wedge n_0^*\wedge (n_1^*-n_2^*)\wedge n_3^*.
\end{align*}
Therefore, $d(\eta^{\tau_1,\tau_2})=0$ if and only if $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$.
\end{proof}
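The two wedge identities in the proof follow from the listed structure equations by the graded Leibniz rule alone. As an illustrative cross-check (ours, not part of the argument), the following short Python script implements the wedge product on the generators $(a^\ast,h^\ast,n_0^\ast,n_1^\ast,n_2^\ast,n_3^\ast)$ and re-derives both identities mechanically:
\begin{verbatim}
# Minimal exterior-algebra sketch: a k-form is a dict mapping a sorted
# tuple of generator indices 0..5 = (a*,h*,n0*,n1*,n2*,n3*) to an integer.

def wedge_mono(m1, m2):
    """Wedge two monomials: (sign, sorted tuple), or None if an index repeats."""
    idx = list(m1 + m2)
    if len(set(idx)) < len(idx):
        return None
    sign = 1
    for i in range(len(idx)):            # bubble sort, counting transpositions
        for j in range(len(idx) - 1):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

def wedge(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            r = wedge_mono(m1, m2)
            if r:
                out[r[1]] = out.get(r[1], 0) + r[0] * c1 * c2
    return {m: c for m, c in out.items() if c}

A, H, N0, N1, N2, N3 = range(6)
# d on the generators, read off from the displayed structure equations
D = {A: {}, H: {},
     N0: {(H, N0): -2},
     N1: {(A, N1): -2, (H, N2): -2, (N0, N3): -1},
     N2: {(A, N2): -2, (H, N1): -2, (N0, N3): -1},
     N3: {(A, N3): -2, (N0, N1): -1, (N0, N2): -1}}

def d(f):
    """Extend d to arbitrary forms by the graded Leibniz rule."""
    out = {}
    for mono, c in f.items():
        for j, gen in enumerate(mono):
            term = wedge(wedge({mono[:j]: 1}, D[gen]), {mono[j + 1:]: 1})
            for m, cc in term.items():
                out[m] = out.get(m, 0) + (-1) ** j * c * cc
    return {m: c for m, c in out.items() if c}

print(d({(N1, N2): 1}))
# {(A,N1,N2): -4, (N0,N1,N3): -1, (N0,N2,N3): 1}
#   = -4 a*^n1*^n2* - n0*^(n1*-n2*)^n3*
print(d({(N0, N1, N3): 1, (N0, N2, N3): -1}))
# {(A,N0,N1,N3): -4, (A,N0,N2,N3): 4} = -4 a*^n0*^(n1*-n2*)^n3*
\end{verbatim}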
\begin{lemma}
$E(\eta^{\tau_1,\tau_2})$ is closed when $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$.
\end{lemma}
\begin{proof}
Express $E(\eta^{\tau_1,\tau_2})$ using equation (\ref{eclass4}). We first use reduction theory to show that the summation defining $E(C_{i,I}f^{\tau_i})$ is locally finite. Write $G(\ba)^1=\{g\in G(\ba):|\nu(g)|=1\}$, so that $G(\ba)=Z(\br)^+G(\ba)^1$. For $c>0$, let $\fS(c)$ be the set of $g\in G(\ba)^1$ of the form $nak$ with $n\in N(\ba)$, $a\in A(\ba)$, $k\in \bk$ and $|\frac{a_1}{a_2}|$, $|\frac{a_2^2}{a_0}|\geq c$. Then
\begin{itemize}
\item[(i)] $G(\ba)^1=G(\bq)\fS(c)$ when $c$ is small enough.
\item[(ii)] the set $\{\gamma\in G(\bq):\gamma\fS(c)\cap \fS(c^\prime)\neq \emptyset\}$ for given $c,c^\prime$ is a finite union of cosets in $B(\bq)\backslash G(\bq)$.
\end{itemize}
Given $\tau_1, \tau_2$, there exists $c^\prime$ such that $f^{\tau_i}(g)=0$ ($i=1,2$) when $g\not\in Z(\br)^+\fS(c^\prime)$. Choose a small $c$ so that $G(\ba)^1=G(\bq)\fS(c)$. By (ii), there are only finitely many $[\gamma_j]\in B(\bq)\backslash G(\bq)$ such that $\gamma_j\fS(c)\cap \fS(c^\prime)\neq \emptyset$, whence $E(C_{i,I}f^{\tau_i})(g)=\sum_{\gamma_j}C_{i,I}(\gamma_j g)f^{\tau_i}(\gamma_j g)$ is a finite sum for $g\in \fS(c)$. It follows that $E(\eta^{\tau_1,\tau_2})=\sum_j L_{\gamma_j}^*(\eta^{\tau_1,\tau_2})$ is a finite sum for $g\in \fS(c)$.
When $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$, Lemma ~\ref{closedness1} shows that $\eta^{\tau_1,\tau_2}$ is a closed form on $G(\ba)/K_\infty$, whence each $L_{\gamma_j}^*(\eta^{\tau_1,\tau_2})$ is closed. So $E(\eta^{\tau_1,\tau_2})$ is a closed form on $\fS(c)^\circ$, the interior of $\fS(c)$. Since $G(\ba)^1=G(\bq)\fS(c)^\circ$ when $c$ is small enough, $E(\eta^{\tau_1,\tau_2})$ is closed on $G(\ba)/K_\infty$.
\end{proof}
\subsection{Properties of $E(\eta^{\tau_1,\tau_2})$}
We suppose $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$.
\begin{lemma}
$<\mathrm{H}^{1,1}(\pi_f^\prime),E(\eta^{\tau_1,\tau_2})>=0$ when $\pi_f^\prime\neq 1$.
\end{lemma}
\begin{proof}
By Section ~\ref{repofh2}, a nonzero $\HH^{1,1}(\pi_f^\prime)$ is of the form
\[
\HH^{1,1}(\pi_f^\prime)=\chi^\prime(\nu(g))\cdot \HH^{1,1}(\fg,K_\infty,\pi^\pprime),
\]
where $\chi^\prime$ is a character with $\chi^\prime_\infty\in \{1,\sgn\}$ and $\pi^\pprime$ is of type I, II or III.
(i) If $\pi^\pprime=1$ is of type III, then $\HH^{1,1}(\pi_f^\prime)=\bc\cdot \chi^\prime(\nu(g))\omega_0$. Since $\pi_f^\prime\neq 1$, we have $\chi^\prime_f\neq 1$. Unfolding the integral, we obtain
\begin{align*}
<\chi^\prime(\nu(g))\omega_0, E(\eta^{\tau_1,\tau_2})>=\int_{B(\bq)\backslash G(\ba)/K_{A,\infty}} \chi^\prime(\nu(g))\big(\frac{1}{2}f^{\tau_1}(g)+f^{\tau_2}(g)\big)\omega_\fp,
\end{align*}
where $\omega_\fp:=a^\ast\wedge h^\ast\wedge n_0^\ast \wedge n_1^\ast \wedge n_2^\ast \wedge n_3^\ast$ is the volume form on $B(\br)/K_{A,\infty}$. Noticing $|K_{A,\infty}/Z(\br)^+|=4$, we rewrite the RHS as
\begin{equation}\label{pi_beq1}
4 \int_{B(\bq)Z(\br)^+\backslash G(\ba)} \chi^\prime(\nu(g))\big(\frac{1}{2}f^{\tau_1}(g)+f^{\tau_2}(g)\big)dg
\end{equation}
Because $\chi_f^\prime\neq 1$ and $\chi_\infty^\prime\in \{1,\sgn\}$, we have $\chi^\prime|_{\ba^\times_1}\neq 1$. However, $f^{\tau_i}$ are left-invariant under $\diag(a_1,a_2,\frac{a_0}{a_1},\frac{a_0}{a_2})$ when $a_i\in \ba^\times_1$. Hence, (\ref{pi_beq1}) vanishes because $\int_{\bq^\times\backslash \ba_1^\times} \chi^\prime(z)d^\times z=0$. So $<\HH^{1,1}(\pi_f^\prime),E(\eta^{\tau_1,\tau_2})>=0$.
(ii) If $\pi^\pprime$ is of type I or II, then $\pi^\pprime_\infty=\pi^{2+}$. By the description of $\HH^{1,1}(\fg,K_\infty,\pi^{2+})$ in Lemma ~\ref{cohopi2},
forms in $\HH^{1,1}(\pi^\prime_f)$ are of the shape
\begin{equation}\label{omega2}
\omega^\prime=\chi^\prime(\nu(g))\sum_{j=-2}^2 \varphi_j\eta_{-j},\quad \varphi_j\in \pi^\pprime,\, \eta_j\in \wedge^2\fp^\ast.
\end{equation}
Recalling the expression for $E(\eta^{\tau_1,\tau_2})$ in (\ref{eclass4}), we have
\[
\omega^\prime\wedge E(\eta^{\tau_1,\tau_2})=\sum_{i,j,I} \chi^\prime(\nu(g))\varphi_j(g)E(C_{i,I}f^{\tau_i})\eta_{-j}\wedge \omega_I.
\]
The form $\eta_{-j}\wedge \omega_I$, being left-invariant and of degree $6$, is either zero or a scalar multiple of the volume form $\omega_\fp$ on $M$. Thus, $<\omega^\prime,E(\eta^{\tau_1,\tau_2})>=\int_M \omega^\prime\wedge E(\eta^{\tau_1,\tau_2})$ is a finite sum of integrals of the following type, with $\varphi\in \pi^\pprime$:
\begin{align*}
&\int_{G(\bq)Z(\br)^+\backslash G(\ba)} \chi^\prime(\nu(g))\varphi(g)E(C_{i,I}f^{\tau_i})(g)dg\\
=&\int_{B(\bq)N(\ba)Z(\br)^+\backslash G(\ba)}\chi^\prime(\nu(g))\varphi_B(g)C_{i,I}(g)f^{\tau_i}(g)dg.
\end{align*}
Here $\varphi_B(g):=\int_{N(\bq)\backslash N(\ba)}\varphi(ng)dn$ is the constant term of $\varphi$ along $B$. When $\pi^\pprime$ is of type I or II, $\varphi_B$ is zero and the above integral vanishes. Therefore, $<\mathrm{H}^{1,1}(\pi_f^\prime),E(\eta^{\tau_1,\tau_2})>=0$.
\end{proof}
\begin{lemma}\label{pi_last}
Suppose $\tau_i(t_1,t_2)=0$ when $t_1\in (0,1)$ ($i=1,2$), then
\[ \int_{Z_{H,K_f}} E(\eta^{\tau_1,\tau_2})=8c^3\underset{\br_+\times \br_+}{\int}\frac{\tau_1(t_1,t_2)}{|t_1t_2|^3}dt_1dt_2.
\]
\end{lemma}
\begin{proof}
Because $H(\bq)$ acts transitively on $P(\bq)\backslash G(\bq)$, we have
\begin{align}\label{pi_ebeq}
\nonumber \int_{Z_{H,K_f}} E(\eta^{\tau_1,\tau_2})=&\int_{P_H(\bq)\backslash H(\ba)/K_{H,\infty}} \sum_{\gamma\in B(\bq)\backslash P(\bq)} L_{\gamma}^*[\eta^{\tau_1,\tau_2}]dh_f\\
=&4\int_{P_H(\bq)Z(\br)^+\backslash P_H(\ba)} \sum_{\gamma\in B(\bq)\backslash P(\bq)} L_{\gamma}^*[\eta^{\tau_1,\tau_2}]dp_f^\prime.
\end{align}
(i) The coset space $B(\bq)\backslash P(\bq)$ is parameterized by $1$ and $\gamma_\delta$ ($\delta\in \bq$), with $\gamma_\delta=\smalltwomatrix{\beta_\delta}{}{}{\leftup{t}{\beta_\delta}^{-1}}$ and $\beta_\delta=\smalltwomatrix{}{1}{-1}{}\smalltwomatrix{1}{\delta}{}{1}$. One needs to restrict
\[
L^\ast_{\gamma}[\eta^{\tau_1,\tau_2}]=f^{\tau_1}(\gamma g)L_{\gamma}^\ast \eta^1+f^{\tau_2}(\gamma g)L_{\gamma}^\ast \eta^2
\]
to $P_H(\ba)$, that is, to restrict $L^\ast_{\gamma} \eta^i$ to $P_H(\br)$. By definition,
\begin{equation}\label{transexp}
(L^\ast_{\gamma}\eta^i)|_{p_\infty^\prime}=L^\ast_{\gamma}\big(\eta^i(\gamma p^\prime_\infty)\big),\quad p^\prime_\infty\in P_H(\br).
\end{equation}
Write $\gamma p^\prime_\infty=p_\infty k_\infty$ with $p_\infty\in B(\br)$ and $k_\infty\in K_\infty$, then
\begin{equation}\label{etaexp}
\eta^i(p_\infty k_\infty)=R_{k_\infty^{-1}}^\ast \eta^i(p_\infty).
\end{equation}
For $\eta=\sum_{I}\omega_I$ with $\omega_I\in \wedge^4 \fp^\ast$ on $B(\br)$, we have $R_{k_\infty^{-1}}^\ast \eta=\sum_I \Ad_{k_\infty}^\ast \omega_I$.
\begin{itemize}
\item[(ia)] When $\gamma=1$, we have $\eta^1|_{P_H(\br)}=\eta^1$ and $\eta^2|_{P_H(\br)}=0$ because $n_0^\ast$ and $n_3^\ast$ restrict to zero on $P_H(\br)$.
\item[(ib)] Consider $\gamma=\gamma_\delta$. Set $k_\theta:=\smalltwomatrix{\cos \theta}{\sin \theta}{-\sin \theta}{\cos \theta}$ and $k(\theta):=\smalltwomatrix{k_\theta}{}{}{k_\theta}\in K_\br$ for $\theta\in \br$. For $p^\prime_\infty=[\smalltwomatrix{r_1}{\ast}{}{r_0r_1^{-1}},\smalltwomatrix{r_2}{\ast}{}{r_0r_2^{-1}}]\in P_H(\br)\subset H(\br)$,
we have $\gamma_\delta p^\prime_\infty\in B(\br)k(\theta)$ with $\theta=-\tan^{-1}\frac{r_1}{r_2\delta}$, whence
\[
(L^\ast_{\gamma_\delta}\eta^i)|_{p_\infty^\prime}=\Ad_{k(\theta)}^\ast\eta^i.
\]
\item[(ic)] The action of $\Ad_{k(\theta)}$ on $\fb_0\cong \fp$ is given by
\begin{align*}
&a\rar a, &n_0\rar n_0,\\
&h\rar (\cos 2\theta) h-(\sin 2\theta) n_0, &n_1\rar n_1,\\
&n_2\rar (\cos 2\theta)n_2-(\sin 2\theta)n_3, &n_3\rar -(\sin 2\theta)n_2+(\cos 2\theta)n_3.
\end{align*}
Hence $\Ad_{k(\theta)}^\ast$ acts on $\fb_0^\ast$ by
\begin{align*}
&a^\ast\rar a^\ast, &n_0^\ast\rar n_0^\ast-(\sin 2\theta)h^\ast,\\
&h^\ast\rar (\cos 2\theta) h^\ast, &n_1^\ast\rar n_1^\ast,\\
&n_2^\ast\rar (\cos 2\theta)n_2^\ast-(\sin 2\theta)n_3^\ast, &n_3^\ast\rar -(\sin 2\theta)n_2^\ast+(\cos 2\theta)n_3^\ast.
\end{align*}
Therefore $\Ad_{k(\theta)}^\ast\eta^1=(\cos 2\theta)a^\ast\wedge h^\ast\wedge n_1^\ast\wedge (\cos 2\theta\cdot n_2^\ast-\sin 2\theta\cdot n_3^\ast)$. Note that $n_3^\ast$ restricts to zero on $P_H(\br)\subset B(\br)$, so
\[
L^\ast_{\gamma_\delta}\eta^1|_{p_\infty^\prime}=(\cos 2\theta)^2 a^\ast\wedge h^\ast\wedge n_1^\ast\wedge n_2^\ast=\frac{(r_1^2-r_2^2\delta^2)^2}{(r_1^2+r_2^2\delta^2)^2}\cdot \eta^1
.
\]
Similarly, $L^\ast_{\gamma_\delta}\eta^2|_{p_\infty^\prime}=(\sin 2\theta)^2 a^\ast\wedge h^\ast\wedge n_1^\ast\wedge n_2^\ast=\frac{(2r_1r_2\delta)^2}{(r_1^2+r_2^2\delta^2)^2}\cdot \eta^1$.
To summarize, we have $L^\ast_{\gamma_\delta}\eta^i|_{p^\prime}=f_{i,\delta}(p^\prime)\eta^1$, where $f_{i,\delta}$ are functions on $P_H(\ba)$ given by
\[
f_{1,\delta}(p^\prime)=\frac{(r_1^2-r_2^2\delta^2)^2}{(r_1^2+r_2^2\delta^2)^2},\quad f_{2,\delta}(p^\prime)=\frac{(2r_1r_2\delta)^2}{(r_1^2+r_2^2\delta^2)^2},
\]
where $p^\prime\in P_H(\ba)$ with $p^\prime_\infty=[\smalltwomatrix{r_1}{\ast}{}{r_0r_1^{-1}},\smalltwomatrix{r_2}{\ast}{}{r_0r_2^{-1}}]$.
\end{itemize}
Noticing that $f_{1,0}(p^\prime)=1$ and $f_{2,0}(p^\prime)=0$, we have
\[
\sum_{\gamma\in B(\bq)\backslash P(\bq)} L_{\gamma}^*[\eta^{\tau_1,\tau_2}](p^\prime)=\big[2f^{\tau_1}+\sum_{\delta\in \bq^\times}f_{1,\delta}f^{\tau_1}(\gamma_\delta \cdot)+f_{2,\delta}f^{\tau_2}(\gamma_\delta \cdot)\big](p^\prime)\eta^1.
\]
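As a quick sanity check on the trigonometry in (ib)--(ic), the following sympy sketch (ours, purely illustrative; we take $r_1,r_2,\delta>0$, which suffices since only squares appear) confirms the two formulas for $f_{i,\delta}$, together with the identity $f_{1,\delta}+f_{2,\delta}=1$ reflecting $\cos^2+\sin^2=1$:
\begin{verbatim}
import sympy as sp

r1, r2, delta = sp.symbols('r1 r2 delta', positive=True)
theta = -sp.atan(r1/(r2*delta))     # gamma_delta p' lies in B(R)k(theta)

f1 = sp.simplify(sp.expand_trig(sp.cos(2*theta))**2)
f2 = sp.simplify(sp.expand_trig(sp.sin(2*theta))**2)

t1 = (r1**2 - r2**2*delta**2)**2 / (r1**2 + r2**2*delta**2)**2
t2 = (2*r1*r2*delta)**2 / (r1**2 + r2**2*delta**2)**2

print(sp.simplify(f1 - t1), sp.simplify(f2 - t2))   # 0 0
print(sp.simplify(f1 + f2))                         # 1
\end{verbatim}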
(ii) Now we compute the integral in (\ref{pi_ebeq}). Note that $\eta^1$ is the volume form on $Z(\br)^+\backslash P_H(\br)$. By doing integration on $U_H(\bq)\backslash U_H(\ba)$ first, we turn the integral in (\ref{pi_ebeq}) into
\begin{equation}\label{pi_ebeq3}
\int_{M_H(\bq)Z(\br)^+\backslash M_H(\ba)} \lambda(m^\prime)^{-2}\big[2f^{\tau_1}+\sum_{\delta\in \bq^\times}f_{1,\delta}f^{\tau_1}(\gamma_\delta \cdot)+f_{2,\delta}f^{\tau_2}(\gamma_\delta \cdot)\big](m^\prime)dm^\prime.
\end{equation}
We analyze the integrands $f^{\tau_1}$ and $f_{1,\delta}f^{\tau_1}(\gamma_\delta \cdot)+f_{2,\delta}f^{\tau_2}(\gamma_\delta \cdot)$ separately.
(iia) By changing variables, one computes quickly that
\[
\underset{M_H(\bq)Z(\br)^+\backslash M_H(\ba)}{\int} \frac{f^{\tau_1}(m^\prime)}{\lambda(m^\prime)^2}dm^\prime=c^3\underset{\br_+\times \br_+}{\int}\frac{\tau_1(t_1,t_2)}{|t_1t_2|^2}d^\times t_1d^\times t_2.
\]
(iib) Write $m^\prime=\diag(a_1,a_2,a_0a_1^{-1},a_0a_2^{-1})$ and then make a change of variables $a_1\rar a_1a_2$, $a_0\rar a_0a_2^2$. It yields
\begin{align}\label{2summand}
&\int_{M_H(\bq)Z(\br)^+\backslash M_H(\ba)}\lambda(m^\prime)^{-2}\big[\sum_{\delta\in \bq^\times}f_{1,\delta}f^{\tau_1}(\gamma_\delta \cdot)+f_{2,\delta}f^{\tau_2}(\gamma_\delta \cdot)\big](m^\prime)dm^\prime\\
\nonumber =&c\underset{(\bq^\times\backslash \ba^\times)^2}{\iint}\sum_{\delta\in\bq^\times, i}\big[f_{i,\delta}(\cdot)f^{\tau_i}(\gamma_\delta\cdot)\big](\diag(a_1,1,a_0a_1^{-1},a_0))|\frac{a_0}{a_1}|^2d^\times a_0 d^\times a_1.
\end{align}
The integrand of the double integral on the RHS can be written as
\begin{equation}\label{summation_last}
\sum_{\delta\in\bq^\times, i}\phi_i(a_1^{-1}\delta)f^{\tau_i}\smalltwomatrix{\beta(a_1,a_1^{-1}\delta)}{}{}{a_0\leftup{t}{\beta(a_1,a_1^{-1}\delta)^{-1}}}|\frac{a_0}{a_1}|^2,
\end{equation}
where $\beta(a_1,b_1):=\smalltwomatrix{1}{}{}{a_1}\smalltwomatrix{}{1}{-1}{}\smalltwomatrix{1}{b_1}{}{1}$ and $\phi_1, \phi_2$ are functions on $\ba^\times$ given by
\[
\phi_1(a_1)=\frac{(1-a_{1,\infty}^2)^2}{(1+a_{1,\infty}^2)^2},\quad \phi_2(a_1)=\frac{(2a_{1,\infty})^2}{(1+a_{1,\infty}^2)^2}.
\]
One actually has $\phi_i(a_1^{-1}\delta)=f_{i,\delta}(\diag(a_1,1,a_0a_1^{-1},a_0))$.
Combining the $\delta$-summation with the $a_1$-integration in (\ref{summation_last}), one rewrites the RHS of (\ref{2summand}) as
\begin{align*}
&c\iint_{a_1\in \ba^\times,\, a_0\in \bq^\times\backslash \ba^\times}\sum_{i=1,2} \phi_i(a_1^{-1})f^{\tau_i}\smalltwomatrix{\beta(a_1,a_1^{-1})}{}{}{a_0\leftup{t}{\beta(a_1,a_1^{-1})^{-1}}}d^\times a_0 d^\times a_1.
\end{align*}
Now we write $a_1=p^{-1}q\cdot (r,a_f)$ with $p,q\in \bz_{>0}$ being coprime and $r\in \br^\times, a_f\in \prod_{p<\infty}\bz_p^\times$. Accordingly,
\begin{equation}\label{integrand}
f^{\tau_i}\smalltwomatrix{\beta(a_1,a_1^{-1})}{}{}{a_0\leftup{t}{\beta(a_1,a_1^{-1})^{-1}}}=\tau_i\big(\frac{1}{p^2+r^2q^2},\frac{1}{|a_0|(p^2+r^2q^2)}\big).
\end{equation}
Because $\tau_i$ are compactly supported on $\br_+\times \br_+$, there are only finitely many $(p,q)$ such that the above expression is nonzero.
Specifically, if $\tau_i(t_1,t_2)=0$ when $t_1\in (0,1)$, then (\ref{integrand}) vanishes for all $(p,q)$: since $p\geq 1$ and $rq\neq 0$, we have $p^2+r^2q^2>1$, so that $\frac{1}{p^2+r^2q^2}\in (0,1)$. Hence (\ref{integrand}) vanishes for all $a_1\in \ba^\times$. As a consequence, the LHS of (\ref{2summand}) is zero and the period $\int_{Z_{H,K_f}}E(\eta^{\tau_1,\tau_2})$ is just $8$ times the expression in (iia). The lemma is proved.
\end{proof}
\begin{remark}
In the above computation, we only care about the nonvanishing of the result and hence take $\mathrm{Vol}(K_f)=1$ for simplicity. In general, one needs to choose the measures carefully in order to get a precise equality.
\end{remark}
\begin{proof}[Proof of Proposition ~\ref{charactercase}]
Set $\tau_i(t_1,t_2)=t_1^2t_2^2\wtilde{\tau}_i(t_1,t_2)$, then the condition $\tau_1=2\tau_2-2t_1\ppder{\tau_2}{t_1}+2t_2\ppder{\tau_2}{t_2}$ becomes $\wtilde{\tau}_1=2\wtilde{\tau}_2-2t_1\ppder{\wtilde{\tau}_2}{t_1}+2t_2\ppder{\wtilde{\tau}_2}{t_2}$. When $\wtilde{\tau}_i$ vanishes for $t_1\in (0,1)$, Lemma ~\ref{pi_last} tells that
\[
\int_{Z_{H,K_f}} E(\eta^{\tau_1,\tau_2})=8c^3\underset{\br_+\times \br_+}{\int}\frac{\tau_1(t_1,t_2)}{|t_1t_2|^3}dt_1d t_2=8c^3\underset{\br_+\times \br_+}{\int}\frac{\wtilde{\tau}_2(t_1,t_2)}{t_1t_2}dt_1d t_2.
\]
So we choose $\wtilde{\tau}\in C^\infty_c\big((1,\infty)^2\big)$ with ${\int}_{\br_+\times \br_+}\wtilde{\tau}(t_1,t_2)d^\times t_1d^\times t_2\neq 0$ and set $\tau_1=t_1^2t_2^2(2\wtilde{\tau}-2t_1\ppder{\wtilde{\tau}}{t_1}+2t_2\ppder{\wtilde{\tau}}{t_2})$, $\tau_2=t_1^2t_2^2\wtilde{\tau}$. The form $E(\eta^{\tau_1,\tau_2})$ then meets the requirement of Proposition ~\ref{charactercase}.
\end{proof}
\section{Introduction}
\label{Sec:introduction}
In the super-Higgs mechanism, a spin $1/2$ fermion, the goldstino, combines with the gravitino and provides it with the appropriate number of degrees of freedom for a massive spin 3/2 \cite{Fayet:1974jb,Volkov:1973jd,Fayet:1977vd,Deser:1977uq,Cremmer:1978hn}. When the spin 1/2 propagates in a generic background with a non-relativistic dispersion relation, for instance at the sound speed $c_s <1$ in fluids, the result of the super-Higgs mechanism has been denoted ``slow gravitinos'' \cite{Benakli:2014bpa,Benakli:2013ava,Benakli:2015mbb,Kahn:2015mla,Ferrara:2015tyn}. Such situations might occur in cosmological backgrounds as cosmological solutions treat time and space differently \cite{Kallosh:1999jj,Giudice:1999yt,Giudice:1999am,Schenkel:2011nv,Kallosh:2000ve}.
At the end of inflation, during the period of reheating, the inflaton dissipates its energy while oscillating around the minimum of its potential. This energy is in part converted into a non-thermal production of gravitinos. This process was studied in \cite{Kallosh:1999jj,Giudice:1999yt,Giudice:1999am,Kallosh:2000ve} where the equations of propagation of the different gravitino modes were established. In particular, there is a copious production of the helicity-$\pm 1/2$ components of the gravitino (described at that stage by the goldstino fermion), which was also numerically evaluated in those papers and in \cite{Nilles:2001ry}.
Recently, \cite{Kolb:2021xfn,Kolb:2021nob} have reconsidered this process. The equation of motion of the fermion $\theta$ describing the longitudinal mode in the vacuum after reheating can be written as
\begin{equation}
\left[\bar{\gamma}^{0} \partial_{0}+\mathrm{i}\bar{\gamma}^{i} k_{i} c_s \right]
\theta + \cdots =0,
\label{vs of theta}
\end{equation}
where the $\cdots$ stand for mass and mixing terms with other fermions. The sound speed is then identified as:
\begin{equation}
c_{s}^{2}=\frac{\left(p-3 m_{3/2}^{2} M_{P}^{2}\right)^{2}}{\left(\rho+3 m_{3/2}^{2} M_{P}^{2}\right)^{2}}+\frac{4M_{P}^{4}\left( \partial m_{3/2}/\partial t\right)^{2}}{\left(\rho+3 m_{3/2}^{2} M_{P}^{2}\right)^{2}}\,,\label{Sound-speed-1}
\end{equation}
where $M_P$ is the reduced Planck mass, $m_{3/2}$ the gravitino mass, while $\rho,p$ denote the energy density and pressure of the matter system.
It was then noticed in \cite{Kolb:2021xfn,Kolb:2021nob} that a catastrophic gravitino production occurs whenever the above sound speed vanishes.
In the supergravity cosmological models we discuss here, where supersymmetry is linearly realised (but spontaneously broken), the slow goldstino is not an eigenstate of the Hamiltonian. It mixes with the fermion whose scalar partner has a non-vanishing kinetic energy, for instance the inflatino in the works mentioned above. These give rise at each moment, after diagonalisation, to combinations of the two fermions that are eigenstates of the Hamiltonian. We will show that they have dispersion relations where the coefficient of $k_i$ in (\ref{vs of theta}) is 1. Therefore, one does not expect any catastrophic production process.
In Section 2, we consider the general case with a goldstino combination of two fermions. We show that, through diagonalisation of the kinetic terms, we can always rewrite the system as two fermions with relativistic dispersion relations but time-dependent mass terms. One of the novelties of our work is to consider also the case where one of the fermions arises from a vector multiplet. Section 3 presents some examples where the whole diagonalisation can be carried out explicitly.
\section{The General case with two fermions}
\label{sec:diago}
We consider an $N=1$ supergravity model where in addition to the graviton and gravitino $\psi_{\nu}$, one has two possible sources of supersymmetry breaking {\it in the vacuum}. The first is a potentially non-vanishing $D$-term $\mathcal{P}$ for a $U(1)$ vector multiplet. The second possibility uses the non-vanishing $F$-term of a chiral multiplet. In addition, during the cosmological evolution, there is an extra source of supersymmetry breaking given by the non-vanishing kinetic energy of a rolling scalar, the inflaton. To describe this system we consider a vector supermultiplet with field strength and gaugino denoted as $F_{\mu \nu}$ and $\lambda$, respectively, as well as one or two chiral multiplets consisting of scalars $\phi_i$ and fermions $\chi_i$, $i=1,2$. The corresponding Lagrangian is given by: \footnote{The Lagrangian with a generic number of chiral and vector multiplets can be found in \cite{Kallosh:2000ve}.}
\begin{equation}
\begin{aligned}
e^{-1} \mathcal{L} &=-\frac{1}{2} M_{P}^{2} R-g_{i}^{j}\left(\hat{\partial}_{\mu} \phi^{i}\right)\left(\hat{\partial}^{\mu} \phi_{j}\right)-V \\
&-\frac{1}{2} M_{P}^{2} \bar{\psi}_{\mu} R^{\mu}+\frac{1}{2} m \bar{\psi}_{\mu R} \gamma^{\mu \nu} \psi_{\nu R}+\frac{1}{2} m^{*} \bar{\psi}_{\mu L} \gamma^{\mu \nu} \psi_{\nu L}\\
&+\left(\operatorname{Re} f \right)\left[-\frac{1}{4} F_{\mu \nu} F^{\mu \nu }-\frac{1}{2} \bar{\lambda} \mathcal{D} \lambda\right]
+\frac{1}{4} \mathrm{i}\left(\operatorname{Im} f \right)\left[F_{\mu \nu} \tilde{F}^{\mu \nu }-\hat{\partial}_{\mu}\left(\bar{\lambda} \gamma_{5} \gamma^{\mu} \lambda\right)\right]
\\
&+\frac{1}{4}\left\{\left(\operatorname{Re} f\right) \bar{\psi}_{\mu} \gamma^{\nu \rho} F_{\nu \rho} \gamma^{\mu} \lambda-\left[f^{i} \bar{\chi}_{i} \gamma^{\mu \nu} F_{\mu \nu}^{-} \lambda_{L}+\mathrm{h.c.}\right]\right \} \\
&-g_{i}{}^{j}\left[\bar{\chi}_{j} \mathcal{D} \chi^{i}+\bar{\chi}^{i} \mathcal{D} \chi_{j}\right]-m^{i j} \bar{\chi}_{i} \chi_{j}-m_{i j} \bar{\chi}^{i} \chi^{j}\\
&-2 m_{i\alpha} \bar{\chi}^{i} \lambda-2 m^{i\alpha} \bar{\chi}_{i} \lambda-m_{R, \alpha \beta} \bar{\lambda}_{R} \lambda_{R}-m_{L, \alpha \beta} \bar{\lambda}_{L} \lambda_{L} \\&+\left(2 g_{j}{}^{i} \bar{\psi}_{\mu R} \gamma^{\nu \mu} \chi^{j} \hat{\partial}_{\nu} \phi_{i}+\bar{\psi}_{R} \cdot \gamma \upsilon_{L}+\mathrm{h.c.}\right)
\end{aligned}\label{lagrangian}
\end{equation}
where $L,R$ subscripts refer to the left and right chiralities, respectively. Moreover, $\chi_i$ is a left-handed field while $\chi^i$ is right-handed, and $\phi^i$ denotes the complex conjugate of $\phi_i$. The kinetic term of the gravitino is defined as $R^{\mu}=\gamma^{\mu \rho \sigma} \mathcal{D}_{\rho} \psi_{\sigma}$. The covariant derivatives as well as the mass terms in this Lagrangian can be found in Appendix \ref{appendix}. The Greek index $\alpha$ in the gaugino mass terms $m_{i\alpha}$ is set to 1, since there is at most one vector multiplet, in which case there is also only one chiral multiplet and thus the latin index $i$ is also set to 1. The field strengths $F_{\mu\nu},\tilde{F}^{\mu\nu}, F_{\mu\nu}^-$ are irrelevant for our discussion, and are defined in \cite{Kallosh:2000ve}.
The K\"ahler metric $g_i{}^j$ is given by the K\"ahler potential $K$ with
\begin{equation}
g_i{}^j=\frac{\partial}{\partial \phi^{i}} \frac{\partial}{\partial \phi_{j}} K
\end{equation}
and the gravitino mass $m_{3/2}$ is determined by the superpotential $W$ as well as the K\"ahler potential
\begin{equation}
m_{3/2}=|m| M_{\mathrm{P}}^{-2} ,\quad m \equiv \mathrm{e}^{\frac{K}{2 M_{\mathrm{P}}^{2}}} W
\end{equation}
The scalar potential is a sum of the $F$-term and $D$-term contributions, with $m_i$ the K\"ahler covariant derivative of $m$ and $f \equiv 1/g^2$ denoting the gauge kinetic function, assumed to be constant:
\begin{equation}
V=V_F+V_D,\quad V_F=-3 M_{\mathrm{P}}^{-2}|m|^{2}+m_{i} \left(g_{j}{}^i\right)^{-1} m^{j},\quad V_D=\frac{1}{2}g^2\mathcal{P}^2
\label{potential}
\end{equation}
where $\mathcal{P}$ is the Killing potential.
In the following, we will consider a flat universe described by the Friedmann-Lema\^itre-Robertson-Walker (FLRW) metric $\mathrm{d} s^{2}=a^{2}(\eta)\left(-\mathrm{d} \eta^{2}+ \mathrm{d} \mathbf{x}^{2}\right)$ where $a$ is the scale factor and $\eta\equiv x^0$ is the conformal time. The determinant of the vierbein is then $e=a^4$. We further introduce the dot derivative with respect to the physical time $t$, with $\dot{\mathrm{f}}\equiv a^{-1}\partial_0 \mathrm{f}$. The Hubble rate is defined as $H\equiv \dot{a}/a$.
Throughout this work, we will assume real backgrounds, and use the plane wave expansion for the fermions $\Psi(\eta,\vec{k})=\exp (\mathrm{i}\vec{k}\cdot\vec{x})\Psi(\eta)$. Useful notations are:
\begin{equation}
\begin{aligned}&\alpha\equiv \rho+3 M_{P}^{-2}m^{2},\quad
\alpha_{1} \equiv p-3 M_{P}^{-2}m^{2},\quad \alpha_{2} \equiv 2 \dot{{m}},\\&\hat{A}=\hat{A}_1+\bar{\gamma}^0 \hat{A}_2\equiv \frac{1}{\alpha}\left(\alpha_{1}+\bar{\gamma}^{0} \alpha_{2}\right),\\& \hat{B}=\hat{B}_1+\bar{\gamma}^0 \hat{B}_2\equiv -\frac{3}{2} \dot{a} \hat{A}-\frac{1}{2} M_{P}^{-2} {m} a \bar{\gamma}^{0}(1+3 \hat{A})\\&n_{i}\equiv g_{i}{}^{j} \dot{\phi}_{j}, \quad n^{i}\equiv g_{j}{}^{i} \dot{\phi}^{j} ,\quad |\dot{\phi}|^{2} \equiv g_{i}{}^{j} \dot{\phi}_{j} \dot{\phi}^{i}\\&\xi^{i} \equiv m^{i}+\bar{\gamma}^{0} n^{i},\quad \xi_{i} \equiv m_{i}+\bar{\gamma}^{0} n_{i}\\&\Delta^2 \equiv 1-\frac{\alpha_1^2}{\alpha^2}-\frac{\alpha_2^2}{\alpha^2}
=\frac{4}{\alpha^2} \left[\dot{\phi}^{i} \dot{\phi}_{j} m_{k} m^{\ell}\left({g_\ell{}^{ k}}^{-1} g_{i}{}^{j}-\delta_{i}^{k} \delta_{\ell}^{j}\right)+\frac{1}{2}|\dot{\phi}|^{2}g^2 \mathcal{P}^2 \right] \end{aligned}
\label{notations}
\end{equation}
In the FLRW background, the energy density and pressure are given in terms of the Hubble parameter as
\begin{equation}
\rho=3 M_{P}^{2} H^{2}, \quad p=-M_{P}^{2}\left(3 H^{2}+2 \dot{H}\right)
\end{equation}
Before choosing a gauge, the goldstino $\upsilon$ in the last line of \eqref{lagrangian} takes the form
\begin{equation}
\upsilon=\xi^{\dagger i} \chi_{i}+\xi_{i}^{\dagger} \chi^{i}+\frac{\mathrm{i}}{2} \gamma_{5} \mathcal{P} \lambda\label{goldstino}
\end{equation}
To describe the theory in the supersymmetry broken phase, we follow \cite{Kallosh:2000ve} and introduce the combination of spin-$1/2$ fermions
\begin{equation}
\begin{aligned}
& \theta \equiv \bar{\gamma}^i {\psi}_i\quad ,\quad \Upsilon \equiv a\left(n_{i} \chi^{i}+n^{i} \chi_{i}\right)
\\&\Xi_{R}=-m^{k} g_{k}^{-1 j} m_{j i} \chi^{i}+\bar{\gamma}^{0} \dot{\phi}_{j}\left(m^{j i} \chi_{i}+m^{j} \lambda_{L} \right)+\mathrm{i} M_{P}^{-2} m \mathcal{P} \lambda_{R}-\mathrm{i}g^2 \mathcal{P} m_{i} \chi^{i}
\end{aligned}\label{3fermions}
\end{equation}
In the unitary gauge $\upsilon=0$, \eqref{goldstino} then relates the gaugino to the chiral fermions. The spinors in \eqref{3fermions} are \textit{a priori} independent. However, as we consider the case of two fermions, $\theta$ will be associated with the longitudinal component of the gravitino in the vacuum, in the unitary gauge. The fermion $\Upsilon$ describes the correction to this mode from supersymmetry breaking by the rolling scalar kinetic energy. In this case, $\Upsilon$ and $\Xi$ are proportional to each other, with
\begin{equation}
\Xi=-a^{-1} \hat{F} \Upsilon\label{def:F}
\end{equation}
For two chiral multiplets, the matrix $\hat{F}$ is provided in \cite{Kallosh:2000ve}. In the presence of a $D$-term, the remaining chiral multiplet is written as $(\chi_1, \phi_1)$, and the kinetic energy becomes $|\dot{\phi}|^2=g_1{}^1\dot{\phi}_1^2$. We find (for non-vanishing $|\dot{\phi}|^2$ and $\mathcal{P}$):
\begin{equation}
\begin{aligned}
\hat{F}=& \frac{\dot{V}}{2|\dot{\phi}|^2}-\frac{\dot{\mathcal{P}}}{\mathcal{P}} + \bar{\gamma}^0\left((g_1{}^1)^{-1} m_{11} - \frac{\dot{\mathcal{P}}}{\mathcal{P}}\frac{m^1}{n^1}+\frac{2m}{M_P^2} \right)
\end{aligned}\label{F-Dterm}
\end{equation}
As emphasised in \cite{Kallosh:2000ve,Nilles:2001ry}, the equations of motion for $\theta$ and $\Upsilon$ are coupled together, so the spin-$1/2$ particles produced are not necessarily the longitudinal component of the gravitino, but rather the fermions that diagonalise the Hamiltonian. We call these the \textit{physical fermions} hereafter.
Moreover, the spin-${1}/{2}$ fermions ($\theta$, $\Upsilon$) have non-canonical kinetic terms and thus will be rescaled before diagonalising their equations of motion. It was noticed in \cite{Nilles:2001ry}, in the two chiral multiplets case, that the kinetic terms can be
written as
\begin{equation}
\mathcal{L}\supset -\frac{4 a}{\alpha \Delta^{2}} \bar{\Upsilon} \bar{\gamma}^{0} \partial_{0} \Upsilon -\frac{\alpha}{4 k^{2}} a^{3} \bar{\theta} \bar{\gamma}^{0} \partial_{0} \theta\label{kinetic}
\end{equation}
Here we are going to generalise the above expression in the presence of a vector multiplet with a non-vanishing $D$-term. $\Upsilon$ can be projected onto the left-handed gaugino by
\begin{equation}
\begin{aligned}
P_{L} \xi^{\dagger 1} \Upsilon &=-\frac{1}{2} a P_{L} \xi^{\dagger 1} \bar{\gamma}^{0}\left(\xi_{1} \chi^{1}+\xi^{1} \chi_{1}+\frac{\mathrm{i}}{2} \gamma_{5} \mathcal{P} \lambda\right) \end{aligned}
\end{equation}
where we used the unitary gauge condition $\upsilon=0$ from \eqref{goldstino}. The right-handed gaugino is obtained by charge conjugation:
\begin{equation}
P_{R} \xi_{1}^{\dagger} \Upsilon=\frac{\mathrm{i}}{2} a n_{1} \mathcal{P} P_{R} \lambda
\end{equation}
On the other hand, it is easier to project $\Upsilon$ onto the chiral fermions given its definition:
\begin{equation}
\chi_{1}=\frac{P_{L} \Upsilon}{a n^{1}}\quad , \quad \chi^{1}=\frac{P_{R} \Upsilon}{a n_{1}}
\label{Up-kin}\end{equation}
Expressing the gaugino and chiral fermion kinetic terms in terms of $\Upsilon$, we find
\begin{equation}
\mathcal{L}\supset -\frac{4a V_D}{\alpha^2 \Delta^2}\left(\bar{\Upsilon} \bar{\gamma}^0\partial_0\Upsilon \right) -\frac{4a(\alpha-V_D)}{\alpha^2\Delta^2}\bar {\Upsilon}\bar{\gamma}^0\partial_0 \Upsilon= -\frac{4 a}{\alpha \Delta^{2}} \bar{\Upsilon} \bar{\gamma}^{0} \partial_{0} \Upsilon
\label{Up-kin2}\end{equation}
As for $\theta$, one uses the fact that the spatial component of the gravitino can be decomposed into
\begin{equation}
\vec{\psi}=\vec{\psi}^{T}+\frac{1}{{k}^{2}}\left[\vec{k}(\bar{\gamma}^ik_i)+\frac{1}{2} \mathrm{i}(3 \vec{k}-\vec{\gamma}(\bar{\gamma}^ik_i))\left(\dot{a} \bar{\gamma}^{0}+M_{P}^{-2} a {m}\right)\right] \theta
\end{equation}
where $\psi^T$ corresponds to the transverse mode. Inserting the above equation into the gravitino kinetic term in the Lagrangian, we recover the same form as in \eqref{kinetic}. Consequently, in the presence of one chiral multiplet and one vector multiplet, the kinetic terms of $\theta$ and $\Upsilon$ are the same as for two chiral multiplets, up to a redefinition of $\alpha$ and $\Delta$. We can thus use the same rescaling in the two cases, allowing us to obtain canonical fields $\{\Psi_1, \Psi_2\}$:
\begin{equation}
\theta=\frac{2 \mathrm{i} \bar{\gamma}^{i} k_{i}}{\left(\alpha a^{3}\right)^{1 / 2}} \Psi_1 ,\quad
\Upsilon=\frac{\Delta}{2}\left(\frac{\alpha}{a}\right)^{1 / 2} \Psi_2
\end{equation}
\subsection{The mixing matrices}
In the basis $\{\Psi_1,\Psi_2\}$, the spin-$\frac{1}{2}$ part of the Lagrangian takes the form:
\begin{equation}
\begin{aligned}
\mathcal{L}_{\Psi_1 \Psi_2}=&-\bar{\Psi}_1 \left[ \bar{\gamma}^0\partial_0 \Psi_1 -\frac{1}{2}\frac{\partial_0(\alpha a^3)}{\alpha a^3}\bar{\gamma}^0\Psi_1 + \bar{\gamma}^0 \hat{B}\Psi_1 +\mathrm{i}\bar{\gamma}^i k_i\hat{A}^\dagger \Psi_1-i\bar{\gamma}^i k_i\Delta\bar{\gamma}^0 \Psi_2\right]\\
&-\bar{\Psi}_2\left[ \bar{\gamma}^0\partial_0 \Psi_2 +\frac{\partial_0\left( \Delta \sqrt{\frac{\alpha}{a}}\right)}{\Delta \sqrt{\frac{\alpha}{a}}}\bar{\gamma}^0\Psi_2 + \bar{\gamma}^0 \hat{B}^\dagger \Psi_2 +\mathrm{i}\bar{\gamma}^i k_i\hat{A}\Psi_2+2\dot{a}\bar{\gamma}^0\Psi_2\right. \\&\quad \left.+\frac{am}{M_P^2}\Psi_2 + a\bar{\gamma}^0\hat{F}\Psi_2+\mathrm{i}\bar{\gamma}^0\bar{\gamma}^i k_i\Delta \Psi_1\right]
\end{aligned}
\label{canoL}
\end{equation}
where the different parameters are given in \eqref{notations} and $\hat{F}=\hat{F}_1+\bar{\gamma}^0\hat{F}_2$ is defined in \eqref{def:F}. The explicit form of \eqref{canoL} depends on the specific model. We will illustrate some examples in Section \ref{sec:examples}.
With $\Psi$ designating the vector $(\Psi_1,\Psi_2)^T$, the above Lagrangian can be put in a simple form
\begin{equation}
\mathcal{L}_{\Psi_1 \Psi_2}=-\bar{\Psi}\left[\bar{\gamma}^{0}
\partial_{0}+\mathrm{i}\bar{\gamma}^{i} k_{i}N +M\right] \Psi
\end{equation}
with equations of motion
\begin{equation}
\left[\bar{\gamma}^{0} \partial_{0}+\mathrm{i}\bar{\gamma}^{i} k_{i}N +M\right]_{mn} \Psi_n=0,\quad m,n\in\{1,2\}.
\end{equation}
The $N$ and $M$ mixing matrices are given by\footnote{The difference of some signs compared to \cite{Nilles:2001ry,Dudas:2021njv} is due to the $\bar{\gamma}^0$ convention.}
\begin{equation}\begin{aligned}M&=\mathbb{1}_4\left(
\begin{array}{cc}
-\hat{B}_2 & 0 \\
0&\hat{B}_2+\frac{am}{M_P^2}-a\hat{F}_2\! \!
\end{array}\right)+\bar{\gamma}^0\left( \begin{array}{cc}
-\frac{1}{2}\frac{\partial_0(\alpha a^3)}{\alpha a^3}+\hat{B}_1 & 0 \\
0& \frac{\partial_0\left( \Delta \sqrt{\frac{\alpha}{a}}\right)}{\Delta \sqrt{\frac{\alpha}{a}}} + \hat{B}_1 +2\dot{a} +a \hat{F}_1\! \!
\end{array}\right)\\&=\mathbb{1}_4\left(
\begin{array}{cc}
-\hat{B}_2 & 0 \\
0&\hat{B}_2+\frac{am}{M_P^2}-a\hat{F}_2
\end{array}\right)
\end{aligned}
\label{M-mat}
\end{equation}
and
\begin{equation}
N=N_1+\bar{\gamma}^0 N_2=\mathbb{1}_4\left(\begin{array}{cc}
\hat{A}_1 & 0 \\
0 & \hat{A}_1
\end{array} \right)+\bar{\gamma}^0 \left(\begin{array}{cc}
- \hat{A}_2 & -\Delta \\
-\Delta & \hat{A}_2
\end{array} \right)\label{N-mat}
\end{equation}
where $\mathbb{1}_4$ is a $4\times 4$ unit matrix. We should stress that in the decompositions of $M$ and $N$, the $2\times2$ matrices act on the basis $\{\Psi_1,\Psi_2\}$, while $\bar{\gamma}^0$ is a $4\times4$ matrix acting on the spinor indices.
In the first line of \eqref{M-mat}, the $\bar{\gamma}^0$-dependent part vanishes both for two chiral multiplets and for one chiral multiplet with a $D$-term due to the property
\begin{equation}
\hat{B}_1=\frac{1}{2}\left( \hat{B}+\hat{B}^\dagger\right)=\frac{a\dot{\alpha}}{2\alpha}+\frac{3\dot{a}}{2}
\end{equation}
Note also that, since $\Delta$ appears in the denominator of the $\Upsilon$ kinetic term \eqref{Up-kin2}, there seems to be a singularity as $\Delta\rightarrow0$; however, this singularity cancels out in the mixing matrices $M,N$, because another $\Delta$ in the denominator of $\hat{F}_1$ compensates the one in front of the $\Upsilon$ kinetic term.
In the $\{\Psi_1,\Psi_2\}$ basis, only $N$ contributes to the mixing. One might be tempted to diagonalise $N$ so as to decouple the two fermions, but in general the mixing matrices depend on time; a unitary transformation to the basis diagonalising $N$ would therefore also be time-dependent, and its time derivative gives a contribution to the mass matrix, rendering $M$ non-diagonal.
On general grounds, we will not provide an analytic expression for the physical fermions in terms of $\{\Psi_1, \Psi_2\}$. However, as far as the catastrophic production of \cite{Kolb:2021nob,Kolb:2021xfn} is concerned, it is not necessary to carry out the entire diagonalisation of \eqref{canoL}.
\subsection{Dispersion relations for the physical fermions}\label{sec:dispersion}
Expressing \eqref{Sound-speed-1} in terms of the parameters of \eqref{notations}, we observe that the sound speed amounts to the norm of $N_{11}$, namely
\begin{equation}
c_s^2=\hat{A}_1^2+\hat{A}_2^2=1-\Delta^2 \label{kolb-speed}
\end{equation}
In the case of a single chiral multiplet, the fields $\Upsilon$ and $\Xi$ vanish in the unitary gauge, and we are left with $\theta$. The norm of $N\equiv N_{11}$ enters into the dispersion relation as the gravitino velocity. It is a well-known result \cite{Kallosh:1999jj,Giudice:1999yt,Giudice:1999am} that $\hat{A}_1^2+\hat{A}_2^2=1$ for one chiral multiplet, thus the gravitino sound speed is the speed of light.
However, when two fermions are present, $\Upsilon$ cannot be omitted and the physical fermions are combinations of $\theta$ and $\Upsilon$. The question is then raised whether \eqref{kolb-speed} is the sound speed of a \textit{physical} propagating state.
One can check that the mixing matrix $N$ in \eqref{N-mat} is unitary, thus it can be written as an exponential of a phase
\begin{equation}
N=\exp \left(2 \Phi \bar{\gamma}^{0}\right)=\cos(2\Phi)+\bar{\gamma}^0\sin(2\Phi), \quad \Phi^{\dagger}=\Phi
\label{N-exponential}
\end{equation}
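The unitarity of $N$ can be checked directly. In the sympy sketch below (ours, for illustration) we model $\bar{\gamma}^0$ by the imaginary unit, since it squares to $-\mathbb{1}$ in the decomposition $N=N_1+\bar{\gamma}^0 N_2$, and impose the definition of $\Delta^2$ from \eqref{notations}:
\begin{verbatim}
import sympy as sp

A1, A2, Delta = sp.symbols('A1 A2 Delta', real=True)

# model bar-gamma^0 by i: N = N1 + i*N2 with N1, N2 real symmetric
N = A1*sp.eye(2) + sp.I*sp.Matrix([[-A2, -Delta], [-Delta, A2]])

prod = sp.expand(N*N.H)                        # N N^dagger
prod = prod.subs(Delta**2, 1 - A1**2 - A2**2)  # Delta^2 = 1 - A1^2 - A2^2
print(sp.simplify(prod))                       # Matrix([[1, 0], [0, 1]])
\end{verbatim}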
Furthermore, since $N_1$ and $N_2$ are real, $\Phi$ is a real, thus symmetric, matrix. By the unitary transformation $\hat{\Psi}=\operatorname{exp}(\bar{\gamma}^0\Phi)\Psi$, the exponent in (\ref{N-exponential}) is removed, making $N$ equal to the identity, and the Lagrangian \eqref{canoL} in the new basis $\hat{\Psi}=(\hat{\Psi}_1,\hat{\Psi}_2)^T$ becomes
\begin{equation}
\mathcal{L}_{\hat{\Psi}_1\hat{\Psi}_2}=-\bar{\hat{\Psi}}\left[ \bar{\gamma}^0 \partial_0+ \mathrm{i}\bar{\gamma}^i k_i + \hat{M}\right]\hat{\Psi}\,.
\label{lagrangian-hat}
\end{equation}
The new mass matrix
\begin{equation}
\begin{aligned}
&\hat{M}=\hat{M}_1+\bar{\gamma}^0\hat{M}_2=\operatorname{exp}(\bar{\gamma}^0\Phi)M \operatorname{exp}(-\bar{\gamma}^0\Phi)+\partial_0 \Phi \\&\hat{M}_1=\operatorname{cos}(\Phi)M\operatorname{cos}(\Phi)+\operatorname{sin}(\Phi)M\operatorname{sin}(\Phi)+\partial_0 \Phi ,\\& \hat{M}_2=\operatorname{sin}(\Phi)M\operatorname{cos}(\Phi)-\operatorname{cos}(\Phi)M\operatorname{sin}(\Phi)
\end{aligned}\label{M-hat}\end{equation}
is in general non-diagonal, due to the off-diagonal elements of $\Phi$. We therefore obtain a system of two propagating fermions subject to oscillations due to the time-dependent mixing in their mass matrix.
As a side remark, the matrix $\Phi$ of \eqref{N-exponential} is a phase and defined up to a constant, as long as $N_1=\cos(2\Phi), N_2=\sin(2\Phi)$ are satisfied. On the other hand, the transformation matrix $\exp(\bar{\gamma}^0\Phi)=\cos(\Phi)+\bar{\gamma}^0\sin(\Phi)$ may take a minus sign according to the choice of $\Phi$, but the Lagrangian \eqref{lagrangian-hat} is independent of this choice. Moreover, the constant ambiguity does not change $\partial_0 \Phi $ and the minus signs in $\cos(\Phi)$, $\sin(\Phi)$ are compensated in the expressions of \eqref{M-hat}. As a result, this ambiguity has no effect on the Lagrangian or on the mass matrix.
Since $\hat{M}_2$ is antisymmetric, we can further perform an orthogonal transformation \cite{Nilles:2001ry} in order to eliminate this matrix
\begin{equation}
\hat{\Psi}=L\tilde{\Psi},\quad \text{with } \left( \partial_0 + \hat{M}_2 \right)L=0\,.
\label{L-eq}\end{equation}
Thus, we arrive at a Lagrangian where the mixing comes only from the mass matrix, which is $\bar{\gamma}^0$-independent:
\begin{equation}
\mathcal{L}_{\tilde{\Psi}_1 \tilde{\Psi}_2}=-\bar{\tilde{\Psi}}\left[ \bar{\gamma}^{0} \partial_{0}+\mathrm{i} \bar{\gamma}^{i} k_{i}+L^{T} \hat{M}_{1} L\right]\tilde{\Psi}\,.
\end{equation}
$\tilde{M}\equiv L^{T} \hat{M}_{1} L$ is real and symmetric, hence it can be diagonalised by an orthogonal matrix $C$, with
\begin{equation}
\mu=\operatorname{diag}(\mu_1,\mu_2)=C^T \tilde{M} C\label{mu}\,.
\end{equation}
The energy squared eigenvalues for the fermions are then of the form
\begin{equation}
E_i^2=k^2+\mu_i^2 \label{eigen-E}\,.
\end{equation}
Although the momentum squared in \eqref{eigen-E} is multiplied by 1, one cannot yet conclude that the sound speed of the physical fermions is the speed of light.
To have a closer look at the propagation of physical degrees of freedom, we follow the approach in \cite{Nilles:2001ry,Ema:2016oxl} and expand $\tilde{\Psi}_i$ into creation and annihilation operators:
\begin{equation}
\tilde{\Psi}_i(x)=C_{i j} \int \frac{d^{3} \mathbf{k}}{(2 \pi)^{3 / 2}} e^{i \mathbf{k} \cdot \mathbf{x}}\left[U_{r}^{j \ell}(k, \eta) a_{r}^{\ell}(k)+V_{r}^{j \ell}(k, \eta) b_{r}^{\dagger \ell}(-k)\right]\,,
\label{expansion}
\end{equation}
where $r=\pm$ denotes the helicity components and a summation over repeated indices is understood.
The spinorial Fourier coefficients are written in terms of the helicity eigenfunctions $\psi_\pm$ and mode functions (matrices) $U_\pm$, $V_\pm$:
\begin{equation}
U_{r}^{i j} \equiv\left[\frac{U_{+}^{i j}}{\sqrt{2}} \psi_{r}, r \frac{U_{-}^{i j}}{\sqrt{2}} \psi_{r}\right]^{T}, \quad V_{r}^{i j} \equiv\left[\frac{V_{+}^{i j}}{\sqrt{2}} \psi_{r}, r \frac{V_{-}^{i j}}{\sqrt{2}} \psi_{r}\right]^{T}\,.
\end{equation}
Since $U_\pm$ and $V_\pm$ are related by charge conjugation invariance of $\tilde{\Psi}_i$, we can restrict ourselves to the mode equations of $U_\pm$.
Taking the momentum along the $x^3$-axis and defining the antisymmetric matrix
\begin{equation}
\Gamma \equiv C^{T} \partial_0{C}\,,
\label{defGamma}
\end{equation}
the equations of motion of $\tilde{\Psi}_i$ result in
\begin{equation}
\mathrm{i}\partial_0 \left(\begin{array}{c}
U_+\\U_-
\end{array} \right)= D \left(\begin{array}{c}
U_+\\U_-
\end{array} \right)\quad ,\quad D=\left(\begin{array}{cc}
-\mathrm{i}\Gamma-\mu &-k \mathbb{1}_2 \\
-k \mathbb{1}_2 & -\mathrm{i}\Gamma+\mu
\end{array} \right)
\end{equation}
where $D$ is a $4\times4$ hermitian matrix, whose diagonal blocks encode the time dependence. Its real eigenvalues are
\begin{equation}
\begin{aligned}
& \omega_{1,\pm}=\pm \left[\Gamma_{12}^2+k^2 +\frac{1}{2}\left(\mu_1^2+\mu_2^2 \right)+\left(\frac{1}{4}\left( \mu_1^2-\mu_2^2\right)^2+\Gamma_{12}^2 \left(4k^2+\left(\mu_1+\mu_2 \right)^2 \right) \right)^\frac{1}{2}\right]^\frac{1}{2}
\\ &\omega_{2,\pm}=\pm \left[\Gamma_{12}^2+k^2 +\frac{1}{2}\left(\mu_1^2+\mu_2^2 \right)-\left(\frac{1}{4}\left( \mu_1^2-\mu_2^2\right)^2+\Gamma_{12}^2 \left(4k^2+\left(\mu_1+\mu_2 \right)^2 \right) \right)^\frac{1}{2}\right]^\frac{1}{2}
\end{aligned}\label{dispersion-general}
\end{equation}
where we denoted the $(1,2)$ element of $\Gamma$ by $\Gamma_{12}$.
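The closed-form frequencies \eqref{dispersion-general} can be verified numerically: the numpy sketch below (ours; the numerical values are an arbitrary test point) assembles the matrix $D$ and compares its spectrum with the expressions above:
\begin{verbatim}
import numpy as np

def omega(k, mu1, mu2, G):
    """The four eigenvalues {-w1, -w2, w2, w1} as given in the text."""
    s = G**2 + k**2 + (mu1**2 + mu2**2)/2
    r = np.sqrt((mu1**2 - mu2**2)**2/4 + G**2*(4*k**2 + (mu1 + mu2)**2))
    w1, w2 = np.sqrt(s + r), np.sqrt(s - r)
    return np.array([-w1, -w2, w2, w1])

k, mu1, mu2, G = 1.7, 0.3, 0.9, 0.25        # arbitrary test values
mu = np.diag([mu1, mu2])
Gam = np.array([[0.0, G], [-G, 0.0]])       # Gamma = C^T dC, antisymmetric
I2 = np.eye(2)
D = np.block([[-1j*Gam - mu, -k*I2], [-k*I2, -1j*Gam + mu]])

print(np.allclose(np.linalg.eigvalsh(D),    # ascending eigenvalues of D
                  omega(k, mu1, mu2, G)))   # True
\end{verbatim}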
Note that when the fermions have degenerate mass $\mu_1=\mu_2=\bar{\mu}$, then from \eqref{mu}, $\hat{M}_1=\bar{\mu}\mathbb{1}_2$ and $\Gamma_{12}=0$. Thus, in this case we recover the relativistic dispersion relation with time-dependent mass:
\begin{equation}
\omega_{1,\pm}= \omega_{2,\pm}=\pm \sqrt{k^2+\bar{\mu}^2}
\label{relat-dispersion}
\end{equation}
In general, the dispersion relations of the physical fermions in \eqref{dispersion-general} are very different from the one in \cite{Kolb:2021nob}, which describes the vacuum helicity-$1/2$ mode of the gravitino. We recall the latter for clarity:
\begin{equation}
\omega_{k} \equiv \sqrt{c_{s}^{2} k^{2}+a^{2} m_{3/2}^{2}}\label{kolb-dispersion}
\end{equation}
One of the arguments for catastrophic gravitino production at $c_s=0$ is based on the adiabaticity violation. The dimensionless coefficient of non-adiabaticity is defined as \cite{Kofman:1997yn}
\begin{equation}
\mathcal{A}_k\equiv \frac{\partial_0 \omega_k}{\omega_k^2}
\label{non-adia}\end{equation}
Indeed, in \eqref{kolb-dispersion}, the momentum is multiplied by $c_s$, so when $c_s=0$ the coefficient of non-adiabaticity is independent of $k$, and can even exceed one under some circumstances, implying particle production with arbitrarily large momentum. This is, however, not the case here: the sound speed of \eqref{kolb-speed} does not enter explicitly\footnote{$\Gamma_{12}$ can potentially depend on $c_s$, because $\Gamma$ is related to the diagonalisation of the mixing matrices, which contain $\Delta$.} the physical dispersion relations, and cannot suppress the momentum dependence when it vanishes. Therefore, $c_s$ is not, at least not directly, responsible for the divergent particle production.
To see if particles of arbitrarily large momenta can actually be produced, we assume that the only source of non-adiabaticity is due to the variations of frequencies of the two physical fermions, with the coefficient $\mathcal{A}_k$ being a sum of them. We then consider the limit of high momenta $k\gg\mu_i$ and $k\gg \Gamma_{12}$, where $\Gamma_{12}$ is roughly the time derivative of the logarithm of masses (see its definition \eqref{defGamma}), defining a scale related to the variation of masses.
In this limit, the non-adiabaticity coefficient at leading order becomes
\begin{equation}
\mathcal{A}_k\equiv \frac{\partial_0 \omega_{1,+}}{\omega_{1,+}^2}+\frac{\partial_0 \omega_{2,+}}{\omega_{2,+}^2}\approx -6 \Gamma_{12}\partial_0 \Gamma_{12}\frac{1}{k^3}
\end{equation}
implying that $\mathcal{A}_k$ falls as $k^{-3}$, and thus particles with arbitrarily large $k$ cannot be produced.
\section{Examples}
\label{sec:examples}
\subsection{Two chiral multiplets}
Given two chiral multiplets $(\chi_1, \phi_1)$, $(\chi_2, \phi_2)$, we investigate the case with $\Delta=1$ \textit{at all times}, so that the sound speed defined in \eqref{kolb-speed} vanishes, and according to \cite{Kolb:2021nob,Kolb:2021xfn}, the gravitino production is expected to diverge. In this toy example, the physical fermions as well as their equations of motion can be explicitly obtained. For simplicity, the K\"ahler potential is taken to be canonical, so that the expression of $\Delta$ becomes:
\begin{equation}
\Delta=\frac{2}{\alpha}\left( m_1\dot{\phi}_2-m_2\dot{\phi}_1\right) =1
\end{equation}
On the other hand, from \eqref{notations} one sees that $\Delta=1$ is equivalent to the conditions $\alpha_1=\alpha_2=0$. The latter implies that the gravitino mass is constant, which is equivalent to the condition
$m_1\dot\phi_1+m_2\dot\phi_2=0$. The former implies $\dot\phi_1^2+\dot\phi_2^2=m_1^2+m_2^2$, where we used the expression \eqref{potential} for the potential in the absence of a $D$-term contribution.
One possible setup for a solution is therefore $m_2=\dot{\phi}_1=0$ and $m_1= \dot{\phi}_2\neq0$, in other words, $\phi_1$ breaks supersymmetry via its $F$-term and $\phi_2$ via its kinetic term, by the same amount.
The mixing matrix $N$ then takes the form
\begin{equation}
N=\bar{\gamma}^0\left(\begin{array}{cc}
0 & -1 \\
-1 & 0
\end{array} \right)\,,
\label{N-twochiral}
\end{equation}
while, for convenience, we write $M$ as
\begin{equation}
M=\mathbb{1}_4\left(\begin{array}{cc}
M_{11} & 0 \\
0 & M_{22}
\end{array} \right)\quad,\quad M_{11},M_{22}\neq0\,.
\end{equation}
The angle $\Phi$ in \eqref{N-exponential} is constant because $N$ does not depend on time; we then choose
\begin{equation}
\Phi =\frac{3\pi}{4} \left( \begin{array}{cc}
0 &1 \\
1 &0
\end{array}\right),\quad \operatorname{exp}(\bar{\gamma}^0\Phi )=-\frac{1}{\sqrt{2}}\mathbb{1}_2 + \frac{1}{\sqrt{2}} \bar{\gamma}^0 \left( \begin{array}{cc}
0&1 \\
1&0
\end{array}\right)\,.
\end{equation}
Upon the unitary transformation $\hat{\Psi}=\operatorname{exp}(\bar{\gamma}^0\Phi) \Psi$, we obtain the new mass matrix $\hat{M}$ with
\begin{equation}
\hat{M}_1=\frac{M_{11}+M_{22}}{2}\mathbb{1}_2
\quad ,\quad \hat{M}_2=\frac{M_{11}-M_{22}}{2}\left( \begin{array}{cc}
0& 1 \\
-1&0
\end{array}\right)\,.
\end{equation}
We now look for the orthogonal matrix $L$ cancelling the $\bar{\gamma}^0$ component of $\hat{M}$. Parametrising $L$ by an angle $\tau$, from \eqref{L-eq} we get
\begin{equation}
L=\left(\begin{array}{cc}
\operatorname{cos}\tau(\eta) &- \operatorname{sin}\tau (\eta)\\
\operatorname{sin}\tau(\eta)& \operatorname{cos}\tau(\eta)
\end{array} \right),\quad \tau^\prime(\eta)=\frac{M_{11}-M_{22}}{2}
\end{equation}
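One can check that this $L$ indeed satisfies \eqref{L-eq}; a short sympy sketch (ours), with $m(\eta)$ standing for $(M_{11}-M_{22})/2$:
\begin{verbatim}
import sympy as sp

eta = sp.symbols('eta', real=True)
tau = sp.Function('tau')(eta)
m = sp.Function('m')(eta)                   # shorthand for (M11 - M22)/2

L = sp.Matrix([[sp.cos(tau), -sp.sin(tau)],
               [sp.sin(tau),  sp.cos(tau)]])
M2hat = m*sp.Matrix([[0, 1], [-1, 0]])      # the antisymmetric hat-M_2

expr = (L.diff(eta) + M2hat*L).subs(tau.diff(eta), m)  # impose tau' = m
print(sp.simplify(expr))                    # Matrix([[0, 0], [0, 0]])
\end{verbatim}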
Once such an orthogonal transformation is found, the Lagrangian in the $\tilde{\Psi}=L^T \operatorname{exp}(\bar{\gamma}^0\Phi)\Psi$ basis becomes
\begin{equation}
\mathcal{L}_{\tilde{\Psi}_1\tilde{\Psi}_2}=-\bar{\tilde{\Psi}}\left[\bar{\gamma}^{0} \partial_{0}+i \bar{\gamma}^{i} k_{i}+ \frac{M_{11}+M_{22}}{2} \right]\tilde{\Psi}\,.
\end{equation}
Thus, the mass matrix in this particular case is diagonal and one concludes that $\tilde{\Psi}=\{\tilde{\Psi}_1,\tilde{\Psi}_2\}$ are the physical fermions. They have degenerate mass and their equations of motion are decoupled:
\begin{equation}
\left[\bar{\gamma}^{0} \partial_{0}+i \bar{\gamma}^{i} k_{i}+ \frac{M_{11}+M_{22}}{2} \right]\tilde{\Psi}_j=0,\quad j\in\{1,2\}
\label{eom-psi-tilde}\end{equation}
These are just standard Dirac equations with time-dependent mass, similar to the transverse mode of the gravitino. As a result, the physical fermions, which are linear combinations of $\theta$ and $\Upsilon$, have the dispersion relation
\begin{equation}
\omega^2=k^2+\left( \frac{M_{11}+M_{22}}{2} \right)^2\,.
\end{equation}
The coefficient of non-adiabaticity is suppressed by large momenta, and particle production is not expected to be divergent in this case, despite the fact that the speed of sound \eqref{Sound-speed-1} associated to $\theta$ vanishes.
\subsection{One chiral multiplet and one vector multiplet}
Another example with two fermions is one chiral multiplet $(\chi_1,\phi_1)$ accompanied by a vector multiplet with non-vanishing $D$-term. The simplest model one may consider is that of a constant (Fayet-Iliopoulos) $D$-term when the vector multiplet gauges the R-symmetry~\cite{Freedman}. One can then consider two possibilities. The first consists of a neutral chiral multiplet, in which case the superpotential vanishes. Again, we take the K\"ahler potential to be canonical. The scalar potential is then
\begin{equation}
V =V_{D}=\frac{1}{2}g^2 \mathcal{P}^2
\end{equation}
with $\mathcal{P}$ constant.
Since the gravitino is massless in this model, all mass terms appearing in \eqref{F-Dterm} vanish, and we have $\hat{F}=0$. Let
\begin{equation}
w\equiv \frac{p}{\rho}, \quad \text{with}\quad p=|\dot{\phi}|^2-V_D,\quad \rho=|\dot{\phi}|^2+V_D \,.
\end{equation}
In this model, the scalar equation of motion can be easily solved. We have
\begin{equation}
\ddot{\phi}+3H\dot{\phi}=0
\end{equation}
leading to $\dot{\phi}=e^{-3Ht}$, where the constant of integration is absorbed by a redefinition of the origin of time. The expression of $w$ is then
\begin{equation}
w(t)=\frac{2e^{-6Ht}-g^2\mathcal{P}^2}{2e^{-6Ht}+g^2\mathcal{P}^2}\label{wt}\,.
\end{equation}
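As an illustration, the following minimal Python sketch (with hypothetical parameter values, chosen by us purely for illustration) evaluates \eqref{wt} and confirms that $w$ interpolates between $w\rightarrow1$ as $t\rightarrow-\infty$ and $w\rightarrow-1$ as $t\rightarrow+\infty$, crossing zero when $e^{-6Ht}=V_D=g^2\mathcal{P}^2/2$.
\begin{verbatim}
import numpy as np

# Hypothetical parameter values, for illustration only.
H, g, P = 1.0, 0.3, 0.5
VD = 0.5 * g**2 * P**2            # constant D-term potential V_D

def w(t):
    """Equation-of-state parameter w(t) of eq. (wt)."""
    x = 2.0 * np.exp(-6.0 * H * t)
    return (x - g**2 * P**2) / (x + g**2 * P**2)

t0 = -np.log(VD) / (6.0 * H)      # zero-pressure time: exp(-6 H t0) = V_D
print(w(-5.0), w(t0), w(5.0))     # approximately +1, 0, -1
\end{verbatim}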
Moreover, the parameters in \eqref{notations} and $\Delta$ can be written as
\begin{equation}
\begin{aligned}
\hat{A}=\hat{A}_1=w,\quad \hat{B}=\hat{B}_1=-\frac{3\dot{a}}{2}\left(1-w^2 \right),\quad\Delta^2=1-w^2\,.
\end{aligned}
\end{equation}
The sound speed defined in \eqref{kolb-speed} is therefore simply given by $w$, and the mixing matrices in this case are
\begin{equation}M=0,\quad N=\left(\begin{array}{cc}
w &0 \\
0& w
\end{array}\right)+\bar{\gamma}^0\left(\begin{array}{cc}
0 & -\sqrt{1-w^2} \\
-\sqrt{1-w^2}&0
\end{array}\right)\label{MN-Dterm}
\end{equation}
It follows that the production of the physical fermions and their equations of motion are determined only by $w$. We now study some particular limits:
\begin{itemize}
\item
When $w\rightarrow 1$, the pressure and the energy density are equal, implying that the $D$-term is vanishing. This limit amounts to a theory with a single chiral multiplet. $N$ is the identity matrix whereas $M=0$, so $\Psi_i$ are the physical fermions described by the massless Dirac equation and propagating at the speed of light.\footnote{This agrees also with the literature for the one chiral multiplet case; see the discussion in Section \ref{sec:dispersion}.}
\item
On the other hand, the same situation occurs for $w\rightarrow-1$, which means that $|\dot{\phi}|^2\rightarrow0$, or equivalently $t\rightarrow +\infty$ in \eqref{wt}, so that maximal symmetry is unbroken.
The equations of motion of the physical fermions differ from the massless Dirac equation by a minus sign in front of $\bar{\gamma}^ik_i$, but the dispersion relation $\omega^2=k^2$ is unchanged compared to the previous case.
\item
Another special value is $w=0$, corresponding to zero pressure and $e^{-6Ht}=V_D$, which is attained at some finite time $t$. The mixing matrix $N$ is then identical to \eqref{N-twochiral} for two chiral multiplets. The diagonalisation can be carried out in exactly the same way, and we obtain two physical fermions with degenerate mass (massless here). Their equations of motion are the massless Dirac equation, and we do not expect divergent particle production, even though $c_s=0$.
\end{itemize}
More generally, the matrix $\Phi$ in \eqref{N-exponential} can be chosen as
\begin{equation}
\Phi=-\frac{1}{2}\operatorname{arccos}(w)\left( \begin{array}{cc}
0 &1 \\
1& 0
\end{array}\right)
\end{equation}
while for $w\neq \pm 1$
\begin{equation}\hat{M}=\partial_0 \Phi =\frac{\partial_0 w}{2\sqrt{1-w^{2}}}\left(
\begin{array}{cc}
0&1 \\
1&0
\end{array}\right)\,.
\end{equation}
The matrix $\hat{M}$ has no $\bar{\gamma}^0$ component and can be diagonalised by a constant orthogonal matrix $C$. Thus $\Gamma=0$, leading again to relativistic dispersion relations.
The details of particle production can be worked out by performing the expansion \eqref{expansion} and solving numerically the differential equations for the Bogolyubov coefficients; a schematic numerical sketch is given below, while a detailed treatment is beyond the scope of this work.
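The sketch below is indicative only: it integrates the textbook first-order system for the Bogolyubov coefficients of a Dirac fermion with time-dependent mass $m(t)$ and $\omega^2=k^2+m^2$ (not derived in this paper), namely $\dot{\alpha}_k=C\, e^{2i\int\omega\,dt}\beta_k$ and $\dot{\beta}_k=-C\, e^{-2i\int\omega\,dt}\alpha_k$ with $C=k\dot{m}/(2\omega^2)$ the coefficient of non-adiabaticity mentioned earlier; the mass profile and parameters are hypothetical, not taken from a specific model of this paper.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5                                    # comoving momentum (illustrative)
m  = lambda t: 0.5 * (1.0 + np.tanh(t))    # toy profile for (M11+M22)/2
dm = lambda t: 0.5 / np.cosh(t)**2

def rhs(t, y):
    a = y[0] + 1j * y[1]                   # Bogolyubov alpha
    b = y[2] + 1j * y[3]                   # Bogolyubov beta
    theta = y[4]                           # accumulated phase, int omega dt
    w = np.hypot(k, m(t))
    C = k * dm(t) / (2.0 * w**2)           # non-adiabaticity coefficient
    da =  C * np.exp( 2j * theta) * b
    db = -C * np.exp(-2j * theta) * a
    return [da.real, da.imag, db.real, db.imag, w]

# Adiabatic vacuum in the far past: alpha = 1, beta = 0.
sol = solve_ivp(rhs, [-15.0, 15.0], [1, 0, 0, 0, 0], rtol=1e-8, atol=1e-10)
n_k = sol.y[2, -1]**2 + sol.y[3, -1]**2    # occupation number |beta|^2
print(f"n_k = {n_k:.3e}")
\end{verbatim}
The system preserves $|\alpha_k|^2+|\beta_k|^2=1$, so the occupation number $n_k=|\beta_k|^2$ stays Pauli-bounded, and $C$ indeed falls off at large $k$, as anticipated above.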
Finally, we comment briefly on the second possibility where the chiral field has a non-vanishing R-charge and the K\"ahler potential is non-canonical. Consider for instance a realistic model of inflation driven by supersymmetry breaking \cite{Antoniadis:2017gjr}, where the K\"ahler potential and the superpotential are
\begin{equation}
\begin{aligned}
K=\phi^1\phi_1+A (\phi^1\phi_1)^2,\quad W=f \phi_1
\end{aligned}
\label{KW}
\end{equation}
with $f$ a constant and $|\phi_1|$ playing the role of the inflaton, while its phase is absorbed in the gauge field to make it massive.
$A$ is a small positive constant, so that the potential has a maximum at the origin allowing hilltop inflation with the slow-roll parameter $\eta$ controlled by $A$.
In this case, the $D$-term part of the potential is given by
\begin{equation}
V_D=\frac{q^2}{2}\left(1+\phi^1 \phi_1 + 2A (\phi^1 \phi_1)^2\right)^2\,,
\end{equation}
with $q$ a constant parameter corresponding to the R-charge of $\phi_1$, that must be small compared to the $F$-term so that $V_D$ is subdominant during inflation.
It follows that $\Delta$ has the form:
\begin{equation}
\begin{aligned}
\alpha^{2} \Delta^{2}=4V_D (1+4A\phi^1\phi_1) \dot{\phi}^1 \dot{\phi}_1\quad ,\quad \alpha^2=\left( \rho+3 M_{P}^{-2}|m|^{2}\right)^2\,.
\end{aligned}
\end{equation}
Note that it appears possible to have $\Delta^2<0$ for some negative values of $A$, leading to $c_s>1$ according to equation \eqref{kolb-speed}. However, this region is unphysical since the K\"ahler metric becomes negative. This is actually similar to the situation that can be obtained in pathological models where $\Upsilon$ is dropped out by constraints, leading to $c_s>1$ \cite{Dudas:2021njv}.
We now investigate again the case $\Delta=1$ at all times. From \eqref{notations}, one sees that the condition $\alpha_2=0$ implies that the gravitino mass is constant with $m_1=0$, which is equivalent to saying that $\phi_1$ has a vanishing $F$-term and breaks supersymmetry only through its kinetic energy. This may indeed be satisfied around the vacuum at the minimum of the potential after the end of inflation. There, $\phi_1$ is in general far from the maximum at the origin, where corrections to the K\"ahler potential \eqref{KW} become important and change its form. On the other hand, the condition $\alpha_1=0$ implies $|\dot{\phi}_1|^2=V_D$, where the latter is now given by $V_D=(q^2/2)(1+K^1\phi_1)^2$ with $K^1\equiv\partial K/\partial\phi_1$. The analysis of the two fermions $\theta$ and $\Upsilon$ can now proceed as in the previous subsection on two chiral multiplets, giving rise to two decoupled equations of motion with a relativistic dispersion relation.
\section{Conclusions}
\label{Conclusions}
We have studied the equations of motion for the longitudinal modes of gravitinos in supergravity models where supersymmetry is linearly realised but spontaneously broken. We have considered the general case of two supermultiplets. One contains a scalar field $\phi$ that has non-vanishing kinetic energy, $\partial_\mu \phi \neq 0$. In a cosmological background, this scalar can be identified with the inflaton, which is time dependent. The other multiplet is at the origin of the gravitino mass in the vacuum at late times. We have found that, after diagonalisation of the Hamiltonian, in all cases the dispersion relations of the propagating fermions take relativistic forms with, in general, a time-dependent mass mixing matrix. While this might be expected, it is shown here explicitly. Such cases are not expected to show a catastrophic production of gravitinos.
We did not discuss here the non-linear models, such as those considered in \cite{Ferrara:2015tyn,Dudas:2021njv,Terada:2021rtp}, as it is not clear to us which microscopic supergravity Lagrangian is at the origin of the constraint imposed there on the inflaton superfield. We note that in these cases one ends up with one fermion propagating in peculiar backgrounds.
The result of \cite{Kolb:2021xfn,Kolb:2021nob} constrains the background on which Rarita-Schwinger fields are allowed to propagate.
\section*{Acknowledgements}
Work partially performed by I.A. as International Professor of the Francqui Foundation, Belgium. The work of K.B. is supported by the Agence Nationale de Recherche under grant ANR-15-CE31-0002 ``HiggsAutomator''.
I.A. would like to thank Toine Van Proeyen for enlightening discussions.
\section{Introduction}\setcounter{equation}{0}
The groundbreaking work \cite{LYZActa} by Huang, Lutwak, Yang and Zhang provides an extraordinarily beautiful connection between the Brunn-Minkowski theory for convex bodies and its dual. Among the most elegant concepts in \cite{LYZActa} are the $q$-th dual curvature measures. These measures not only give the conceptual dual of Federer's curvature measures, but can also be derived from the first order variation of the $q$-th dual volume under the logarithmic perturbations of given convex bodies. Let $\mathscr{K}_{(o)}^n$ be the set of all convex compact subsets in ${\mathbb{R}^n}$ containing the origin in their interiors. By the $q$-th dual volume of $K\in \mathscr{K}_{(o)}^n$, we mean $$\widetilde{V}_q(K)=\frac{1}{n}\int_{S^{n-1}}\rho_K^q(\xi)\,d\xi,$$ where $0\neq q\in \mathbb{R}$, $\,d\xi$ is the canonical spherical measure on $S^{n-1}$ and $\rho_K: S^{n-1} \rightarrow [0, \infty)$ is the radial function of $K$. By the logarithmic perturbations of $K\in \mathscr{K}_{(o)}^n$, we mean the family of convex bodies $[h_K\cdot e^{\epsilon g}]\in \mathscr{K}_{(o)}^n$, the Wulff shapes generated by $h_K\cdot e^{\epsilon g}$ (see \eqref{july287} for the definition of the Wulff shape), where $\epsilon\in \mathbb{R}$ is small enough, $h_K: S^{n-1} \rightarrow [0, \infty)$ is the support function of $K$, and $g: S^{n-1} \rightarrow \mathbb{R}$ is a continuous function. Associated with the $q$-th dual curvature measures is the remarkable dual Minkowski problem \cite{LYZActa}: {\em given a real number $q$ and a nonzero finite Borel measure $\mu$ defined on $S^{n-1}$, can one find a $K\in \mathscr{K}_{(o)}^n$ so that $\mu=\widetilde{C}_q(K, \cdot)$, with $\widetilde{C}_q(K, \cdot)$ the $q$-th dual curvature measure of $K\in \mathscr{K}_{(o)}^n$?} Since its introduction, the dual Minkowski problem has received a lot of attention; see e.g., \cite{BHP, BLYZZ2017, ChenLi-2018, Henk, Huangjiang, JiangWu, LiShengWang, WangFangZhou, WangZ, zhao, zhao-jdg}.
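For orientation, a simple example: if $K=rB^n$ is the centred ball of radius $r>0$ (with $B^n$ the unit Euclidean ball), then $\rho_K\equiv r$ and $$\widetilde{V}_q(rB^n)=\frac{r^q}{n}\int_{S^{n-1}}\,d\xi=r^q\, V(B^n),$$ since $\frac{1}{n}\int_{S^{n-1}}\,d\xi$ equals the volume of $B^n$; in particular, $\widetilde{V}_n$ recovers the usual volume.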
The dual Minkowski problem has been pushed forward to the $L_p$ dual Minkowski problem \cite{LYZ-Lp} by Lutwak, Yang and Zhang and to the general dual Orlicz-Minkowski problem \cite{GHWXY,GHXY} by Gardner {\it et al}. The latter one asks: {\em given two continuous functions $G: S^{n-1} \times (0, \infty)\to \mathbb{R}$ and $\psi: (0, \infty)\to (0,\infty)$, under what conditions on a nonzero finite Borel measure $\mu$ defined on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\tau\in \mathbb{R}$ so that $\mu=\tau \widetilde{C}_{G,\psi}(K, \cdot)$?} Here, $\widetilde{C}_{G,\psi}(K, \cdot)$ is the general dual Orlicz curvature measure for $K\in \mathscr{K}_{(o)}^n$ and can be formulated by: for any Borel set $\omega \subset S^{n-1}$, \begin{align} \label{c-g-21-1-10} \widetilde{C}_{G,\psi}(K, \omega)=\frac{1}{n}\int_{\pmb{\alpha}^*_K(\omega)} \frac{\rho_{K}(\xi)\, G_t(\xi, \rho_K(\xi)) }{\psi(h_{K}(\alpha_K(\xi)))}\,d\xi, \end{align} where $\alpha_K$ is the radial Gauss image of $K$, $\pmb{\alpha}^*_K$ is the reverse radial Gauss image of $K$ (see Section \ref{section-2} for detailed information), and $G_t$ is the first order partial derivative of $G$ with respect to its second variable. Like the $q$-th dual curvature measure, the measure $\widetilde{C}_{G,\psi}(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ can be obtained via the first order variation of the general dual volume $$\widetilde{V}_G(K)=\int_{S^{n-1}}G(\xi, \rho_K(\xi))\,d\xi$$ in terms of the Orlicz $L_{\varphi}$ addition $\varphi^{-1}[\varphi(h_K)+\epsilon \varphi(g)],$ where $\varphi: (0, \infty)\to \mathbb{R}$ is a strictly monotonic function whose first order derivative $\varphi'$ satisfies $\psi(t)=t\varphi'(t)$ for $t\in (0,\infty)$.
One of the biggest advantages for the general dual Orlicz-Minkowski problem \cite{GHWXY,GHXY} is its power to integrate various Minkowski type problems into a unified formula. Here we give some special cases. First of all, if $G=t^n/n$, then the $L_p$ and logarithmic Minkowski problems \cite{Lu93, min1897, min1903} are related to $\varphi(t)=t^p$ for $0\neq p\in \mathbb{R}$ and $\varphi(t)=\log t$, respectively. These problems have great impact on the development of the $L_p$ Brunn-Minkowski theory for convex bodies and have received immense attention, see \cite{BHZ2016,BLYZ2013, CLZ-2019, chen, chouw06, HLW-1, HugLYZ, JLW-1, JLZ2016, LuWang-1, Lut-Oli-12, LYZ04, ShengYi, Umanskiy, zhug20141, zhug2015b,GZhu2015II, zhug2017} among others. The Orlicz-Minkowski problem \cite{HLYZ2010} is related to the case when $G=t^n/n$ and $\varphi$ is a non-homogeneous function. Solutions to the Orlicz-Minkowski problem can be found in, e.g., \cite{BBColesanti, Bryanivakisch, Huanghe2012, JianLu, liaijun2014, LiuLu1, sun, SunLong,SunZhang,WuXiLeng2019, WuXiLeng2020}. When $G=t^q/n$ and $\varphi(t)=t^p$ for $0\neq p\in \mathbb{R}$, the general dual Orlicz-Minkowski problem reduces to the $L_p$ dual Minkowski problem \cite{LYZ-Lp}; contributions to this problem can be seen in, e.g., \cite{BorFod, Chen-H-Z, ChenLi, ChenTWX1, HuangZhao, JWW-CVPDE-2021, LiLiuLu, ShengXia}. By letting $G(u, t)=\log t$ for all $(u, t)\in S^{n-1}\times (0, \infty)$, $\widetilde{V}_G(K)$ for $K\in \mathscr{K}_{(o)}^n$ reduces to the dual entropy of $K$; in this case one can get the ($L_p$ and Orlicz) Aleksandrov problems \cite{Alexs1942, FH, HLYZ} (see also \cite{LiShengWang, zhao-pams}). Lastly, the general dual Orlicz-Minkowski problem also extends the dual Orlicz-Minkowski problems \cite{XY2017-1, ZSY2017} and the Minkowski problem for Gaussian measures \cite{HuangXiZhao}. Solutions to the (general) dual Orlicz-Minkowski problem by using the techniques from partial differential equations can be found in \cite{ChenKuyLuXiang,ChenTWX, LiuLu}.
In view of formula \eqref{c-g-21-1-10}, it is the radial Gauss image $\pmb{\alpha}_K: S^{n-1} \to S^{n-1}$ (or more precisely, the reverse radial Gauss image $\pmb{\alpha}_K^*$) which plays an essential role in transferring $\,d\xi$ to $\widetilde{C}_{G,\psi}(K, \cdot)$. Considering the importance of the reverse radial Gauss image, an innovative new problem bearing the flavour of the Minkowski type problems has been proposed in a recent paper \cite{BLYZZ2020} by B\"{o}r\"{o}czky \emph{et al}. This elegant problem was named the Gauss image problem and asks: {\it under what conditions on two given spherical Borel measures $\lambda$ and $\mu$, does there exist a $K\in \mathscr{K}_{(o)}^n$ such that $\mu=\lambda(\pmb{\alpha}_K(\cdot))$?} (See Problem \ref{Gauss-I-p} for a more general version.) The Gauss image problem involves two pre-given measures, and this is a major difference from the Minkowski type problems requiring only one pre-given measure. As mentioned in \cite{BLYZZ2020}, if $\,d\lambda(\xi)=\,d\xi$, the Gauss image problem reduces to a Minkowski type problem. We would like to comment that if $\, d\lambda(\xi)=p_{\lambda}(\xi)\,d\xi$ with a continuous function $p_{\lambda}: S^{n-1}\to (0, \infty)$, the Gauss image problem indeed becomes a special case of the general dual Orlicz-Minkowski problem \cite{GHWXY,GHXY}. However, if the measure $\lambda$ does not have a continuous density with respect to $\,d\xi$ or is not even absolutely continuous with respect to $\,d\xi$, then the Gauss image problem is different from the Minkowski type problems. Under some mild conditions on $\lambda$ and $\mu$, the existence and uniqueness of solutions to the Gauss image problem have been established in \cite{BLYZZ2020}. See \cite{ChenWX} for smooth solutions to the Gauss image problem.
The present paper has two major goals. The first one is to provide a unified formulation to integrate the Minkowski type problems and the Gauss image problem. The second one is to further push forward these problems to their next generation; this has the potential to initiate a brand-new theory: the Musielak-Orlicz-Brunn-Minkowski theory for convex bodies. A closer observation on the Gauss image problem and the general dual Orlicz-Minkowski problem indicates that a triple $\Theta=(G, \Psi, \lambda)$ containing three parameters shall be needed to fulfill these goals. Here $G$ and $\Psi$ are two Musielak-Orlicz functions defined on $S^{n-1}\times (0, \infty)$ and $\lambda$ is a spherical Lebesgue measure on $S^{n-1}$. The function $G$ and the measure $\lambda$ are used to define the general dual volume of $K\in \mathscr{K}_{(o)}^n$ with respect to $\lambda$, namely, \begin{align*}
\widetilde{V}_{G,\lambda}(K)=\int_{S^{n-1}} G(\xi, \rho_K(\xi))\, d\lambda(\xi).
\end{align*} The strictly monotone function $\Psi$ is used to define the addition of functions which produces the perturbation of convex bodies. In fact, the $L_{\Psi}$ addition of continuous functions $f$ and $g$ on $\Omega\subseteq S^{n-1}$ can be formulated by
\begin{align}\label{genplus21-21-1-1} \Psi(\xi, f_{\varepsilon}(\xi))=\Psi(\xi, f(\xi))+\varepsilon g(\xi),\ \ \ \xi\in \Omega,
\end{align} where $\varepsilon\in \mathbb{R}$ is small enough and $f$ is strictly positive.
In Section \ref{Section-var}, we will define the Musielak-Orlicz-Gauss image measure and prove a variational formula to derive this measure. Let $\Theta=(G, \Psi, \lambda)$ be a given triple with $\lambda$ a nonzero finite Lebesgue measure on $S^{n-1}$, $G\in \mathcal{C}$ where $\mathcal{C}$ is defined in \eqref{G-def-2021-01-16}, and $\Psi\in \mathcal{C}_I\cup \mathcal{C}_d$ where $\mathcal{C}_I$ and $\mathcal{C}_d$ are defined in \eqref{G-I-2021-01-16-I} and \eqref{G-d-2021-01-16-d}, respectively. The Musielak-Orlicz-Gauss image measure $\widetilde{C}_{\Theta}(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ is defined as follows (see Definition \ref{m---111}): for each Borel set $\omega\subseteq S^{n-1}$, \begin{align*}
\widetilde{C}_{\Theta}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi),
\end{align*} where $G_t$ and $\Psi_t$ are the first order partial derivatives of $G$ and $\Psi$ with respect to their second variables. Under certain additional conditions on the measure $\lambda$ and the set $\Omega\subseteq S^{n-1}$, the following variational formula can be established in Theorem \ref{ovev-cor}:
\begin{align*}
\lim_{\varepsilon\rightarrow 0}\frac{\widetilde{V}_{G,\lambda}([f_{\varepsilon}])-\widetilde{V}_{G,\lambda}([f])}{\varepsilon}&=
\int_{\Omega} g(u)\, d\widetilde{C}_{\Theta}([f], u),
\end{align*} where $[f]$ denotes the Wulff shape of $f$, and $f_{\varepsilon}$ is given in \eqref{genplus21-21-1-1}.
In Section \ref{M:4}, we will propose our Musielak-Orlicz-Gauss image problem (i.e., Problem \ref{MOGIP}): {\em Under what conditions on $\Theta=(G, \Psi, \lambda)$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\tau\in \mathbb{R}$ such that $\mu =\tau \widetilde{C}_{\Theta}(K,\cdot)?$} As one can see in Section \ref{M:4}, the Musielak-Orlicz-Gauss image problem extends all previously mentioned Minkowski type problems and the Gauss image problem in their (arguably) most general formulations. Some special cases of particular interest are discussed; these include the Musielak-Orlicz-Minkowski problem (i.e., Problem \ref{MOMP-vol}) when $ \widetilde{V}_{G,\lambda}(\cdot)$ is the volume, the dual Musielak-Orlicz-Minkowski problem (i.e., Problem \ref{MOMP-dual}) when $\,d\lambda(\xi)=\,d\xi$, and the Musielak-Orlicz-Aleksandrov problem (i.e., Problem \ref{MOMP-Alex}) when $G=\log t$ and $\,d\lambda(\xi)=\,d\xi$. As byproducts, we obtain some fully nonlinear Monge-Amp\`{e}re partial differential equations. Indeed, if $\mu$ and $\lambda$ have continuous density functions $p_{\mu}$ and $p_{\lambda}$, respectively, then the Musielak-Orlicz-Gauss image problem could be reformulated by \begin{align*}
p_{\mu}= \tau \frac{P(\bar{\nabla}h+h\iota) \,\det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)} p_{\lambda}\! \left(\!\frac{\bar{\nabla} h+h \iota}{|\bar{\nabla} h+h \iota|}\!\right), \end{align*}
where $\tau\in \mathbb{R}$, $P(y)=|y|^{1-n}G_{t}(\frac{y}{|y|}, |y|)$ for $y\in \mathbb{R}^n$ with $|y|$ the Euclidean norm of $y\in \mathbb{R}^n$, $\iota$ denotes the identity map on $S^{n-1}$, $\bar{\nabla}h$ and $\bar{\nabla}^2h$ are the gradient and the Hessian matrix of $h$ with respect to an orthonormal frame on $S^{n-1}$, and $I$ is the identity matrix.
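To illustrate, consider the choice $G=t^n/n$ and $\,d\lambda(\xi)=\,d\xi$ (a consistency check of ours, using only the notation above): then $G_t(\xi, t)=t^{n-1}$, so $P\equiv 1$ and $p_{\lambda}\equiv 1$. Taking $\Psi=t$ gives $\Psi_t\equiv1$ and, up to the constant $\tau$, the equation reduces to the classical Minkowski problem, while $\Psi=\log t$ gives $\Psi_t(\cdot, h)=1/h$ and the logarithmic Minkowski problem:
$$p_{\mu}=\tau \,\det(\bar{\nabla}^2h+hI) \quad \mathrm{and}\quad p_{\mu}=\tau\, h\,\det(\bar{\nabla}^2h+hI),$$ respectively.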
Under the condition that $G$ is strictly decreasing on its second variable, the existence of solutions to the Musielak-Orlicz-Gauss image problem will be established in Sections \ref{solution-8-25} and \ref{even-8-25}. (Section \ref{even-8-25} deals with the Musielak-Orlicz-Gauss image problem for even data). A typical result is Theorem \ref{solution-general-dual-Orlicz-main theorem-11-270}, which is stated below.
\vskip 2mm \noindent {\bf Theorem \ref{solution-general-dual-Orlicz-main theorem-11-270}.} {\em Let $\lambda$ and $\mu$ be two nonzero finite Borel measures on $S^{n-1}$ that are not concentrated on any closed hemisphere. Assume that $\lambda$ is absolutely continuous with respect to $\,d\xi$. Then, there exists a $K\in \mathscr{K}_{(o)}^n$ such that
\begin{align}\label{msol-8-5-1-intro}
\frac{\mu}{|\mu|}=\frac{\widetilde{C}_{\Theta}(K,\cdot)}{\widetilde{C}_{\Theta}(K, S^{n-1})},
\end{align} if $G$ and $\Psi$ satisfy one of the following conditions:
\vskip 2mm \noindent i) $G\in \mathscr{G}_d$ with $\mathscr{G}_d$ given by \eqref{G-2021-1-16}, and $\Psi\in \mathcal{C}_I$ with $\mathcal{C}_I$ given by \eqref{G-I-2021-01-16-I} such that
$$\lim_{t\rightarrow \infty} \Psi(\xi, t)=+\infty \ \ \mathrm{for\ each}\ \xi\in S^{n-1};$$
\vskip 2mm \noindent ii) $\Psi\in \mathscr{G}_I$ with $\mathscr{G}_I$ given by \eqref{G-2021-1-16}, and $G\in \mathcal{C}_d$ with $\mathcal{C}_d$ given by \eqref{G-d-2021-01-16-d} such that $$\lim_{t\rightarrow 0^+} G(\xi, t)=+\infty \ \ \mathrm{for\ each}\ \xi\in S^{n-1}. $$
If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite Borel measure on $S^{n-1}$ that is not concentrated on any closed hemisphere, is also necessary for \eqref{msol-8-5-1-intro} to hold for some $K\in \mathscr{K}_{(o)}^n$.}
Under special choices of $\Theta=(G, \Psi, \lambda)$, one also obtains the solutions to the dual Musielak-Orlicz-Minkowski problem and the Musielak-Orlicz-Aleksandrov problem.
We would like to mention that, under an additional condition on $\mu$ (i.e., $\mu$ vanishes on great subspheres), the Musielak-Orlicz-Gauss image problem for even data can also be solved when $\Psi\in \mathcal{C}_d$ and $G\in \mathscr{G}_d$ (see Theorem \ref{rl-even-ee}). Corollary \ref{MMM-2021-1-16} provides the existence of solutions to the Musielak-Orlicz-Aleksandrov problem for even data, under both $\Psi\in \mathscr{G}_I$ and $\Psi \in \mathscr{G}_d$. Solutions to the Musielak-Orlicz-Gauss image problem related to ``an increasing function'' $G\in \mathcal{C}_I$ will be studied in our future work \cite{HXYZ-2}; solutions to the Musielak-Orlicz-Gauss image problem via the technique of flows will be provided in \cite{LSYY-2021}.
\section{Preliminaries and notations}\label{section-2}\setcounter{equation}{0}
Denote by $\mathbb{N}$ the set of all natural numbers. Let $n\in \mathbb{N}$ be such that $n\geq 2$. In the $n$-dimensional Euclidean space ${\mathbb{R}^n}$, let $S^{n-1}$ be the unit sphere and $B^n$ be the unit Euclidean ball of ${\mathbb{R}^n}$; namely, $$S^{n-1}=\{x\in{\mathbb{R}^n}: |x|=1\} \ \ \mathrm{and} \ \ B^n= \{x\in
{\mathbb{R}^n}:|x|\leq 1\},$$ where $|x|$ denotes the Euclidean norm of $x\in \mathbb{R}^n$. The origin of ${\mathbb{R}^n}$ is denoted by $o$, and the inner product of $x, y
\in {\mathbb{R}^n}$ is written by $x\cdot y$. For $x\neq o$, let $\overline{x}=x/|x|.$ Denote by $\mathscr{H}^{n-1}$ the $(n-1)$-dimensional Hausdorff measure. In particular, let $\,d\xi$ be the canonical spherical Lebesgue measure on $S^{n-1}$.
Let $K\subseteq {\mathbb{R}^n}$ be a nonempty, compact and convex set. Define $h_K: \mathbb{R}^n\rightarrow \mathbb{R}$, the support function of $K$, to be \begin{align}\label{support-function--111}
h_K(x)=\max \big\{x\cdot y: y\in K\big\} \ \ \ \mathrm{for} \ x\in {\mathbb{R}^n}. \end{align} It is easily checked that $h_K(ru)=rh_K(u)$ for $r>0$ and $u\inS^{n-1}$. For two nonempty, compact and convex sets $K, L\subseteq {\mathbb{R}^n}$, let $$d_H(K, L)=\max_{u\in S^{n-1}}|h_K(u)-h_{L}(u)|.$$ The convergence of $K_i\rightarrow K$ in the Hausdorff metric, where $K_i, K\subseteq {\mathbb{R}^n}$ for all $i\in \mathbb{N}$ are nonempty, compact and convex sets, is defined by $$\lim_{i\rightarrow \infty}d_H(K_i, K)=0. $$
A convex body is a compact and convex subset of $\mathbb{R}^n$ whose interior is nonempty. The Blaschke selection theorem asserts that every sequence of convex bodies, if uniformly bounded, must have a subsequence converging to a compact convex set in ${\mathbb{R}^n}$. Let $\mathscr{K}_o^n$ be the set of all convex bodies containing the origin. For $K\in \mathscr{K}_o^n$, let $\rho_K:
\mathbb{R}^n\setminus\{o\}\rightarrow[0,\infty)$ stand for the radial function of $K$. That is,
\begin{align}\label{support-function--222}
\rho_K(x)=\max \big\{\lambda\ge
0:\lambda x\in K\big\},
\end{align} for $x\in \mathbb R^n\backslash\{o\}$. It is easily checked that $\rho_K(ru)=r^{-1}\rho_K(u)$ for $r>0$ and $u\inS^{n-1}$. In this paper, we are mainly interested in the class of convex bodies $\mathscr{K}_{(o)}^n$ which consists of all convex bodies whose interiors contain the origin $o$. We say $K\in \mathscr{K}_{(o)}^n$ is origin-symmetric if $-K=K$, where $aK=\{ax: x\in K\}$ for $a\in \mathbb{R}$. The subclass $\cK_e^n$ of $\mathscr{K}_{(o)}^n$ denotes the set of all origin-symmetric convex bodies. For $K\in \mathscr{K}_{(o)}^n$, let $$
K^*=\big\{x\in\mathbb R^n: x\cdot y\le 1\quad\textrm{for all}~y\in
K\big\}.
$$ It is easily checked that $K^* \in \mathscr{K}_{(o)}^n$ for $K\in \mathscr{K}_{(o)}^n$, with the following properties: \begin{align}\label{bi-polar--12}
h_{K^*}(u) \rho_K(u) =1 \quad \textrm{and}\quad
\rho_{K^*}(u) h_K(u)=1,
\end{align} for $u\in S^{n-1}$. The convex body $K^*$ is called the polar body of $K\in \mathscr{K}_{(o)}^n$.
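As a quick numerical sanity check of \eqref{bi-polar--12} (an illustration of ours, not part of the formal development), consider the ellipsoid $K=\{x\in{\mathbb{R}^n}: \sum_i x_i^2/a_i^2\leq 1\}$, for which $h_K(u)=(\sum_i a_i^2u_i^2)^{1/2}$, $\rho_K(u)=(\sum_i u_i^2/a_i^2)^{-1/2}$, and $K^*$ is the ellipsoid with semi-axes $1/a_i$:
\begin{verbatim}
import numpy as np

a = np.array([2.0, 1.0, 0.5])     # semi-axes of the ellipsoid K (n = 3)
rng = np.random.default_rng(0)

u = rng.normal(size=(1000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)     # directions on S^2

h_K       = np.sqrt(((a * u)**2).sum(axis=1))     # support function of K
rho_K     = 1.0 / np.sqrt(((u / a)**2).sum(axis=1))
h_Kstar   = np.sqrt(((u / a)**2).sum(axis=1))     # K* has semi-axes 1/a
rho_Kstar = 1.0 / np.sqrt(((a * u)**2).sum(axis=1))

assert np.allclose(h_Kstar * rho_K, 1.0)          # first relation
assert np.allclose(rho_Kstar * h_K, 1.0)          # second relation
\end{verbatim}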
Formulas \eqref{support-function--111} and \eqref{support-function--222} naturally bring many key concepts which play important roles in this paper. The first one is the so-called radial map $r_K$ of $K\in \mathscr{K}_{(o)}^n$ which maps $u\in S^{n-1}$ to $r_K(u)=\rho_K(u)u\in \partial K$, the boundary of $K$. The map $r_K$ is invertible for $K\in \mathscr{K}_{(o)}^n$ and its reverse, denoted by $r_K^{-1}$, maps $x\in \partial K$ to $S^{n-1}$ by letting $r_K^{-1}(x)=\overline{x}.$ The second one is the Gauss map $\pmb{\nu}_K: \partial K\mapsto S^{n-1}$ which maps an $x\in \partial K$ to all $u\in S^{n-1}$ satisfying $x\cdot u =h_K(u)$. Note that $\pmb{\nu}_K$ may not be injective on $\partial K$. Let $$\sigma_K=\big\{x\in \partial K: \ \pmb{\nu}_K(x) \ \mathrm{contains\ two\ or\ more \ elements} \big\}\subseteq \partial K.$$ It is well known that $\mathscr{H}^{n-1}(\sigma_K)=0$ \cite[p.84]{Sch}. Clearly, $\pmb{\nu}_K$ is injective in $\mathrm{reg}(K)=\partial K\setminus \sigma_K$; in this case, for simplicity, we write $\nu_K(x)$ for $\pmb{\nu}_K(x)$ if $x\in \mathrm{reg}(K)$.
Likewise, the inverse Gauss map $\pmb{\nu}^{-1}_K: S^{n-1} \to \partial K$ maps $u\in S^{n-1}$ to all $x\in \partial K$ such that $ x\cdot u =h_K(u).$ Note that $\pmb{\nu}^{-1}_K$ may not be injective and it is injective in the set $S^{n-1} \setminus \eta_K$ where $$\eta_K=\big\{u\in S^{n-1}: \ \pmb{\nu}^{-1}_K(u) \ \mathrm{contains\ two \ or \ more\ elements} \big\}\subseteq S^{n-1}.$$
Again $\mathscr{H}^{n-1}(\eta_K)=0$ as shown in \cite[Theorem 2.2.11]{Sch}. Let $\nu^{-1}_K(u)=\pmb{\nu}^{-1}_K(u)$ for $u\in S^{n-1} \setminus \eta_K$.
Let $K\in \mathscr{K}_{(o)}^n$. One can define the radial Gauss image $\pmb{\alpha}_K: S^{n-1}\rightarrow S^{n-1}$ which maps $u\in S^{n-1}$ to the set $\pmb{\alpha}_K(u)=\pmb{\nu}_K(r_K(u)).$ Namely,
$\pmb{\alpha}_K(u)$ is the set of all outer unit normal vectors of $\partial K$ at the point $\rho_K(u)u\in \partial K$. Define
$$\omega_K=\big\{u\in S^{n-1} \ \mathrm{such\ that}\ \pmb{\alpha}_K(u)\ \mathrm{contains\ two \ or \ more\ elements} \big\}.$$ Note that $\mathscr{H}^{n-1}(\omega_K)=0$ as shown in \cite[Theorem 2.2.5]{Sch} (or see \cite[p.340]{LYZActa}). Let $ \alpha_K(u)=\pmb{\alpha}_K(u)$ for $u\in S^{n-1}\setminus \omega_K$.
On the other hand, one can define the reverse radial Gauss image $\pmb{\alpha}^*_K: S^{n-1} \to S^{n-1}$ which maps $u\in S^{n-1}$ to the set $\pmb{\alpha}^*_K(u)=r_K^{-1}(\pmb{\nu}^{-1}_K(u)).$ Moreover, $\pmb{\alpha}^*_K$ is injective on the set $S^{n-1}\setminus \eta_K$, and in this case, $\pmb{\alpha}^*_K(u)$ will often be written as $\alpha^*_K(u)$. According to \cite[Lemma 2.5]{LYZActa}, the radial Gauss image and its reverse can be connected through the polar body:
for any $K\in \mathscr{K}_{(o)}^n$ and $\eta\subseteq S^{n-1}$, one has \begin{align}\label{star}
\pmb{\alpha}^{*}_K(\eta)=\pmb{\alpha}_{K^{*}}(\eta).
\end{align}
Let $\mathcal{B}$ and $\mathcal{L}$ be the $\sigma$-algebras of spherical Borel and Lebesgue measurable subsets of $S^{n-1}$, respectively.
We call $\lambda$ a spherical Lebesgue submeasure if $\lambda: \mathcal{L} \to [0, \infty)$ satisfies the following: $\lambda(\omega)=0$ if $\omega$ is the empty set; $\lambda(\omega_1)\leq \lambda(\omega_2)$ for all $\omega_1, \omega_2\in \mathcal{L}$ such that $\omega_1\subseteq \omega_2$; and
$\lambda(\bigcup_{i=1}^{\infty} \omega_i)\leq \sum_{i=1}^{\infty} \lambda(\omega_i)$ if $\omega_i\in \mathcal{L}$ for all $i\in \mathbb{N}$. A spherical Borel submeasure can be defined in a similar way with $\mathcal{L}$ replaced by $\mathcal{B}$. A nonzero finite Borel measure $\mu$ on $S^{n-1}$ is said to be not concentrated on any closed hemisphere of $S^{n-1}$, if for any $u\in S^{n-1}$, one has \begin{align}\label{not-concentration-1} \int_{S^{n-1}} (u\cdot \xi)_+\,d\mu(\xi)>0\end{align} with $a_+=\max\{a, 0\}$ for $a\in \mathbb{R}$.
This is equivalent to the following statement: $\mu(S^{n-1}\setminus\omega)>0$ if $\omega$ is an arbitrary closed hemisphere of $S^{n-1}$.
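Condition \eqref{not-concentration-1} can be probed numerically for a discrete measure $\mu=\sum_i m_i\delta_{\xi_i}$; the Python sketch below (illustrative only; names and data are ours) approximates $\min_{u\in S^{n-1}}\int_{S^{n-1}}(u\cdot \xi)_+\,d\mu(\xi)$ by sampling directions $u$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def min_halfspace_mass(xis, ms, n_dirs=20000):
    """Approximate min over u of int (u . xi)_+ dmu(xi)."""
    u = rng.normal(size=(n_dirs, xis.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return (np.clip(u @ xis.T, 0.0, None) @ ms).min()

# Atoms in the closed upper hemisphere: the condition fails (min ~ 0).
xis = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(min_halfspace_mass(xis, np.ones(3)))

# Atoms at +/- basis vectors: not concentrated on any closed
# hemisphere, so the minimum is strictly positive (~ 1).
xis2 = np.vstack([np.eye(3), -np.eye(3)])
print(min_halfspace_mass(xis2, np.ones(6)))
\end{verbatim}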
Let $K\in \mathscr{K}_{(o)}^n$. The surface area measure $S(K, \cdot)$ is defined by \begin{align}\label{surface-8-7} S(K, \omega)=\mathscr{H}^{n-1}(\pmb{\nu}_K^{-1}(\omega)
)\ \ \ \mathrm{for} \ \omega\in \mathcal{B}.\end{align} Similarly, the composition of a spherical Lebesgue submeasure $\lambda$ and the reverse radial Gauss image $\pmb{\alpha}^*_K$ naturally defines a spherical Borel submeasure on $S^{n-1}$ (see \cite{BLYZZ2020}); such a spherical Borel submeasure is named as the reverse Gauss image measure of $\lambda$ via $K\in \mathscr{K}_{(o)}^n$ and is denoted by $\lambda^*(K, \cdot)$. That is, for each $\omega\in \mathcal{B}$, \begin{align}\label{r-G-i-measure}
\lambda^*(K, \omega)=\lambda(\pmb{\alpha}_K^*(\omega))=\lambda(\pmb{\alpha}_{K^*}(\omega)).
\end{align} The Gauss image measure of $\lambda$ via $K$ \cite{BLYZZ2020}, another spherical Borel submeasure, can be defined by \begin{align}\label{G-i-measure}
\lambda(K, \omega)=\lambda(\pmb{\alpha}_K(\omega)).
\end{align} According to \eqref{r-G-i-measure} and \eqref{G-i-measure}, it is easily checked that $ \lambda^*(K, \cdot)= \lambda(K^*, \cdot)$ for each $K\in \mathscr{K}_{(o)}^n$. It has been proved in \cite[Lemma 3.3]{BLYZZ2020} that if $\lambda$ is a Borel measure which is absolutely continuous with respect to $\,d\xi$, then $\lambda^*(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ is a spherical Borel measure; moreover, for bounded Borel function $f$ defined on $S^{n-1}$, one has \begin{align} \int_{S^{n-1}} f(u)\,d\lambda^* (K, u)&=\int_{S^{n-1}} f(\alpha_K(\xi))\,d\lambda(\xi) \nonumber; \\ \int_{S^{n-1}} f(u)\,d\lambda (K, u)&=\int_{S^{n-1}} f(\alpha^*_{K}(\xi))\,d\lambda(\xi). \label{int-G-i-measure-1-1} \end{align}
When $\,d\lambda(\xi)=\,d\xi$, $\lambda(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ reduces to Aleksandrov's integral curvature \cite{Alexs1942}, which will be denoted by $J(K, \cdot)$. Similarly, one can let $J^*(K, \cdot)=J(K^*, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$. Hence, the following formulas hold (see \cite{HLYZ}):
\begin{align*} \int_{S^{n-1}} f(u)\,dJ^* (K, u)=\int_{S^{n-1}} f(\alpha_K(\xi))\,d\xi \ \ \mathrm{and}\ \ \int_{S^{n-1}} f(u)\,dJ (K, u)=\int_{S^{n-1}} f(\alpha^*_{K}(\xi))\,d\xi. \end{align*}
We shall need the following lemma, which is essentially a restatement of \cite[Lemmas 5.3 and
5.4]{BLYZZ2020}. We will present a proof
here for completeness.
\begin{lemma}\label{conc} Let
\(\lambda\) be an absolutely continuous Borel measure on \(S^{n-1}\)
that is strictly positive on nonempty open subsets of $S^{n-1}$. Then,
$\lambda(K, \cdot)$ for $K\in
\mathscr{K}_{(o)}^n$ is not
concentrated on any closed hemisphere. In particular, for any $u \in S^{n-1}$, one has \begin{align}\label{not-concentration-1-22} \int_{S^{n-1}} (u\cdot v)_+\,d\lambda(K, v)=\int_{S^{n-1}}(u\cdot \alpha_K^*(\xi))_+\,d\lambda(\xi)>0.\end{align}
\end{lemma}
\begin{proof} Let $\omega\subseteq S^{n-1}$ be a nonempty subset. Define \begin{align*} \mathrm{cone} (\omega)=\big\{ tu: \ u\in \omega \ \mathrm{and}\ t\geq 0\big\}\ \ \mathrm{and}\ \ \omega^*=\big\{v\in S^{n-1}: u\cdot v\leq 0 \ \mathrm{for} \ u\in \omega\big\}.\end{align*} If $\mathrm{cone} (\omega)$ is a proper convex subset of ${\mathbb{R}^n}$, then $\omega$ is called a spherically convex set; in this case, $\omega$ certainly is contained in a closed hemisphere of $S^{n-1}$.
It is easily checked that $\pmb{\alpha}_K(\omega)\subseteq S^{n-1}\setminus \omega^*$ holds for every spherically convex subset $\omega\subseteq S^{n-1}$; this can be seen from \cite[Lemma 3.2]{BLYZZ2020} where the following statement is also given: $(S^{n-1}\setminus \omega^*)\setminus \pmb{\alpha}_K(\omega)$ has interior points. It follows from \eqref{G-i-measure} and the assumption on $\lambda$ (in particular, the positivity of $\lambda$ on nonempty open subsets of $S^{n-1}$) that \begin{align}\label{2021-02-01-1} \lambda(K, \omega)= \lambda\left(\pmb{\alpha}_{K}(\omega)\right)<\lambda\left(S^{n-1} \backslash \omega^*\right)
\end{align} holds for any spherically convex $\omega\subseteq S^{n-1}$; this argument can be seen in \cite[Lemma 3.7]{BLYZZ2020}.
Note that $\lambda(K, S^{n-1})=\lambda\left(\boldsymbol{\alpha}_{K}(S^{n-1})\right)=\lambda\left(S^{n-1}\right)$. Then, for any spherically convex set $\omega\subseteq S^{n-1}$, by \eqref{2021-02-01-1}, one gets
\begin{align*}\lambda\left(\omega^*\right) =\lambda\left(S^{n-1}\right) - \lambda\left(S^{n-1} \setminus
\omega^*\right) <\lambda\left(S^{n-1}\right) -\lambda(K, \omega)=\lambda(K, S^{n-1}\setminus\omega)\,.
\end{align*} Now let $\omega$ be an arbitrary closed hemisphere of $S^{n-1}$. Then $\omega^*$ contains only one vector in $S^{n-1}$ and $S^{n-1}\setminus \omega$ is an open hemisphere of $S^{n-1}$. Consequently, $\lambda(K, \cdot)$ is not concentrated on any closed hemisphere because $\lambda(K, S^{n-1}\setminus\omega)>\lambda\left(\omega^*\right) \geq 0. $
Formula \eqref{not-concentration-1-22} is then an immediate consequence of \eqref{not-concentration-1} and \eqref{int-G-i-measure-1-1}. \end{proof}
Regarding the Gauss image measure, the following Gauss image problem has been posed in \cite{BLYZZ2020}.
\begin{problem}[The Gauss image problem]\label{Gauss-I-p} Let $\lambda$ be a spherical Lebesgue submeasure and $\mu$ be a spherical Borel submeasure. Under what conditions on $\lambda$ and $\mu$, does there exist a $K\in \mathscr{K}_{(o)}^n$ such that $\mu(\omega)=\lambda(K,\omega)$ holds for all $\omega\in \mathcal{B}$?
\end{problem}
Solutions to the Gauss image problem can be found in \cite{BLYZZ2020, ChenWX}. When $\,d\lambda(\xi)=\,d\xi$, Problem \ref{Gauss-I-p} becomes the classical Aleksandrov problem, which aims to characterize Aleksandrov's integral curvature. As mentioned in the introduction, our goal in this paper is to introduce a problem extending the Minkowski type problems and the Gauss image problem in their (arguably) most general setting by making use of Musielak-Orlicz functions. The precise definition of Musielak-Orlicz functions can be found in, e.g., \cite{Harjulehto, Musielak}, and they play important roles in the analysis of the Musielak-Orlicz space (or the generalized Orlicz space).
The Musielak-Orlicz functions in e.g., \cite{Harjulehto, Musielak}, are usually taken to be strictly positive and nondecreasing functions. However, throughout this paper, when we refer to the Musielak-Orlicz functions, we mean $G\in \mathcal{C}$ with \begin{align} \mathcal{C}=\big\{G: S^{n-1}\times(0, \infty)\to \mathbb{R} \ \mathrm{such\ that}\ G \ \mathrm{and}\ G_t\ \mathrm{ are\ continuous\ on}\ S^{n-1}\times(0, \infty)\big\}, \label{G-def-2021-01-16} \end{align} where $G_t$ denotes the partial derivative of $G$ with respect to the second variable, namely, $$G_t(\xi, t)=\frac{\partial G}{\partial t}(\xi, t) \ \ \mathrm{for}\ \ (\xi, t)\in S^{n-1}\times (0, \infty).$$ As the function class $\mathcal{C}$ does contain all (smooth enough) Musielak-Orlicz functions
defined in, e.g., \cite{Harjulehto, Musielak}, making use of the function class $\mathcal{C}$ not only provides convenience in later context but also justifies naming $\widetilde{C}_{\Theta}(K, \cdot)$ the Musielak-Orlicz-Gauss image measure (see Definition \ref{m---111}).
Let $\mathscr{G}_I$ and $\mathscr{G}_d$ be subclasses of $\mathcal{C}$ defined by \begin{align}
\mathscr{G}_I = \big\{G\in \mathcal{C}: G \ \mathrm{satisfies \ condition\ ({\bf A})} \big\} \ \mathrm{and}\ \mathscr{G}_d = \big\{G\in \mathcal{C}: G \ \mathrm{satisfies \ condition\ ({\bf B})} \big\}, \label{G-2021-1-16}
\end{align} where conditions ({\bf A}) and ({\bf B}) are given below: \begin{itemize} \item[({\bf A})] $G: S^{n-1}\times (0, \infty)\to (0, \infty)$ satisfies that, for each $u\in S^{n-1}$, $G_t(u, \cdot)$ is strictly positive on $(0, \infty)$, $\lim_{t\to 0^+}G(u, t)=0$, and $\lim_{t\to \infty}G(u, t)=\infty;$
\item[({\bf B})] $G: S^{n-1}\times (0, \infty)\to (0, \infty)$ satisfies that, for each $u\in S^{n-1}$, $G_t(u, \cdot)$ is strictly negative on $(0, \infty)$, $\lim_{t\to 0^+}G(u, t)=\infty,$ and $\lim_{t\to \infty}G(u, t)=0.$
\end{itemize}
The fact that $G\in \mathscr{G}_I\cup \mathscr{G}_d$ is assumed to be strictly positive is mainly for technical reasons and for convenience. Our arguments in later context mainly rely on the monotonicity of $G$ in its second variable, $\sup_{t>0}G(u, t)=+\infty$ for each $u\in S^{n-1}$, and the fact that $G$ has controllable lower bounds; our results in later context should still hold if the function $\inf_{t>0}G(u, t): S^{n-1} \rightarrow \mathbb{R}$ is continuous on $S^{n-1}$.
We shall also need the following classes of functions: \begin{align}
\mathcal{C}_I&=\big\{G\in \mathcal{C}: \ G_t \ \mathrm{is\ strictly\ positive\ on\ } S^{n-1}\times (0, \infty) \big\}, \label{G-I-2021-01-16-I}\\
\mathcal{C}_d&=\big\{G\in \mathcal{C}: \ G_t \ \mathrm{is\ strictly\ negative \ on\ } S^{n-1}\times (0, \infty) \big\} \label{G-d-2021-01-16-d}.\end{align} When we write $G=\varphi(t)$ for some function $\varphi: (0, \infty)\rightarrow \mathbb{R}$, we mean $G(\xi, t)=\varphi(t)$ holding true for all $(\xi, t)\in S^{n-1}\times (0, \infty).$ For $\Psi\in\mathcal{C}$, let $\psi_{\xi}$ and $\widetilde{\Psi}$ be the functions given by
\begin{align}\label{formula220}
\psi_{\xi}(t)=\Psi(\xi, t) \ \ \mathrm{and}\ \ \widetilde{\Psi}(\xi, t)=\Psi\big(\xi, 1/t\big)\ \ \mathrm{for}\ \ (\xi, t)\in S^{n-1}\times (0, \infty).\end{align}
Clearly, $\widetilde{\Psi}_t(\xi, t)=-\frac{1}{t^2}\Psi_t(\xi, \frac{1}{t})$ and $\widetilde{\Psi}\in \mathcal{C}$.
\section{The Musielak-Orlicz-Gauss image measure and related variational formula}\label{Section-var} \setcounter{equation}{0}
In this section, the Musielak-Orlicz-Gauss image measure is introduced and a variational formula to derive such a measure is established. Hereafter, $\lambda$ is always assumed to be a nonzero finite Lebesgue measure on $S^{n-1}$. For convenience, let \begin{align*} \mathcal{M}&=\big\{\mathrm{nonzero\ finite\ Borel\ measures\ on\ } S^{n-1} \ \mathrm{ that\ are \ absolutely\ continuous\ w.r.t.}\ \,d\xi \big\}. \end{align*}
The definition for the Musielak-Orlicz-Gauss image measure is given below.
\begin{definition}\label{m---111} Let $\Theta=(G, \Psi, \lambda)$ be a given triple with $G\in \mathcal{C}$, $\Psi\in \mathcal{C}_I\cup \mathcal{C}_d$, and $\lambda$ a nonzero finite Lebesgue measure on $S^{n-1}$. Define $\widetilde{C}_{\Theta}(K, \cdot)$, the Musielak-Orlicz-Gauss image measure of $K\in \mathscr{K}_{(o)}^n$, as follows: for each Borel set $\omega\in \mathcal{B}$, \begin{align}\label{gencdef-7-1-115}
\widetilde{C}_{\Theta}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi).
\end{align}
\end{definition}
Let $\Theta_0=(\log t, \log t, \lambda)$. It follows from \eqref{r-G-i-measure} that, for all $\omega\in \mathcal{B}$, one has \begin{align}\label{gencdef-7-1-115-uu}
\widetilde{C}_{{\Theta}_0}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \,d\lambda(\xi)=\lambda(\pmb{\alpha}^{*}_K(\omega))=\lambda^*(K, \omega). \end{align} Formula \eqref{gencdef-7-1-115} implies that $ \widetilde{C}_{\Theta}(K, \cdot)$ is absolutely continuous with respect to $\lambda^*(K, \cdot)$, and
\begin{align} \frac{\,d \widetilde{C}_{\Theta}(K, u)} {\,d \widetilde{C}_{{\Theta}_0}(K, u)} =\frac{\,d \widetilde{C}_{\Theta}(K, u)} {\,d\lambda^*(K, u)} = \frac{\rho_K(\alpha^*_K(u)) G_t(\alpha^*_K(u), \rho_K(\alpha^*_K(u)))}{h_{K}(u)\Psi_t(u, h_{K}(u))}\ \ \ \mathrm{for}\ \ u\in S^{n-1}. \label{relation-G} \end{align}
When $\,d\lambda=\,d\xi$, \eqref{gencdef-7-1-115} reduces to
\begin{align}\label{gencdef-7-1-115--1}
\widetilde{C}_{G, \Psi}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\xi.
\end{align} In this case, if $\Psi=\varphi(t)$ for some function $\varphi: (0, \infty)\to \mathbb{R}$ whose derivative, denoted by $\varphi'$, satisfies $t\varphi'(t)=\psi(t)$, then $\frac{1}{n} \widetilde{C}_{G, \Psi}(K, \cdot)$ becomes the general dual Orlicz curvature measure $\widetilde{C}_{G, \psi}(K, \cdot)$ in \cite[Definition 3.1]{GHWXY}. Hence $\widetilde{C}_{\Theta}(K, \cdot)$ naturally extends $\widetilde{C}_{G, \psi}(K, \cdot)$ to its (arguably) most general setting; it contains many well-known measures appearing in the Minkowski type problems as special cases, including but not limited to the surface area measure \eqref{surface-8-7}, the $L_p$ surface area measure \cite{Lu93}, the Orlicz surface area measure \cite{HLYZ2010}, the $L_{p}$ dual curvature measure \cite{LYZActa, LYZ-Lp}, the dual Orlicz curvature measure \cite{XY2017-1, ZSY2017}, and Aleksandrov's integral curvature and its extensions \cite{Alexs1942, FH, HLYZ} (up to a difference of polarity of convex bodies).
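For concreteness, take $G=t^q/n$ and $\Psi=t^p$ with $p, q\neq0$ in \eqref{gencdef-7-1-115--1}. Then $\rho_K(\xi) G_t(\xi, \rho_K(\xi))=\frac{q}{n}\rho_K^q(\xi)$ and $h_K\Psi_t(\cdot, h_K)=p\,h_K^p$, so that $$\widetilde{C}_{G, \Psi}(K, \omega)=\frac{q}{np}\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho^q_{K}(\xi)}{h^p_{K}(\alpha_K(\xi))}\,d\xi,$$ which, up to the constant factor and normalisation conventions, is the $L_p$ dual curvature measure of \cite{LYZ-Lp}.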
In the following definition, we define another measure which is closely related to the Musielak-Orlicz-Gauss image measure. It provides great convenience in establishing solutions to the Musielak-Orlicz-Gauss image problem.
\begin{definition} Let $\Theta=(G, \Psi, \lambda)$ be a given triple with $G\in \mathcal{C}$, $\Psi\in \mathcal{C}_I\cup \mathcal{C}_d$, and $\lambda$ a nonzero finite Lebesgue measure on $S^{n-1}$. Define $C_{\Theta}(K, \cdot)$,
the polar Musielak-Orlicz-Gauss image measure of $K\in \mathscr{K}_{(o)}^n$,
as follows: for each Borel set $\omega\in \mathcal{B}$,
\begin{align}\label{pd}
C_{\Theta}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t( \xi, \rho_K(\xi)) }{\rho_{K^*}(\alpha_K(\xi))\Psi_t
(\alpha_K(\xi), \rho_{K^*}(\alpha_K(\xi)))}\,d\lambda(\xi).
\end{align}
\end{definition}
Associated to $\Theta=(G, \Psi, \lambda)$, let $\widetilde{\Theta}=(G, \widetilde{\Psi},\lambda)$ with $\widetilde{\Psi}$ defined in (\ref{formula220}). For $K\in \mathscr{K}_{(o)}^n$, one has \begin{align}\label{wc}
C_{\widetilde{\Theta}}(K, \cdot )=- \widetilde{C}_{\Theta}(K, \cdot).
\end{align} Indeed, by \eqref{bi-polar--12},
\eqref{gencdef-7-1-115}, \eqref{pd} and $\widetilde{\Psi}_t(\xi, t)=-\frac{1}{t^2}\Psi_t(\xi, \frac{1}{t})$, one gets, for any $\omega\in \mathcal{B}$,
\begin{align*}
C_{\widetilde{\Theta}}(K, \omega)&=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi)) }{\rho_{K^*}(\alpha_K(\xi))\widetilde{\Psi}_t (\alpha_K(\xi), \rho_{K^*}(\alpha_K(\xi)))}\,d\lambda(\xi)
\nonumber\\
&=-\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_{K^*}(\alpha_K(\xi))\rho_K(\xi) G_t(\xi, \rho_K(\xi)) }{\Psi_t (\alpha_K(\xi), 1/\rho_{K^*}(\alpha_K(\xi)))}\,d\lambda(\xi)\nonumber\\
&=-\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi)
G_t(\xi, \rho_K(\xi))
}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi)
\nonumber\\
&=- \widetilde{C}_{\Theta}(K, \omega).
\end{align*}
It is not hard to prove that both $\widetilde{C}_{\Theta}(K, \cdot)$ and $C_{\Theta}(K, \cdot)$, for $K\in \mathscr{K}_{(o)}^n$, are finite signed Borel measures on $S^{n-1}$. The proof of this argument for $\widetilde{C}_{\Theta}(K,
\cdot)$ (and hence for $C_{\Theta}(K, \cdot)$ due to \eqref{wc}) is rather standard and follows from steps very similar to those in \cite[p.9]{GHWXY} or \cite[p.351-352]{LYZActa}. It can also be proved by using \eqref{relation-G} and the fact that $\lambda^*(K, \cdot)$ is a Borel measure on $S^{n-1}$ (see \cite[Lemma 3.3]{BLYZZ2020}); thus the proof is omitted. A standard argument, based on simple functions and a limit approach, shows that, for any bounded Borel function $g:S^{n-1}\to \mathbb{R}$,
\begin{align} \int_{S^{n-1}} g(u)\, d\widetilde{C}_{\Theta}(K, u) &= \int_{S^{n-1}}\frac{g(\alpha_K(\xi))\rho_K(\xi) G_t(\xi, \rho_K(\xi))}
{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi),\label{new measue-11-27}\\
\int_{S^{n-1}} g(u)\, dC_{\Theta}(K, u)&=\int_{S^{n-1}}\frac{g(\alpha_K(\xi))\rho_K(\xi) G_t(\xi, \rho_K(\xi)) }{\rho_{K^*}(\alpha_K(\xi))\Psi_t
(\alpha_K(\xi), \rho_{K^*}(\alpha_K(\xi)))}\,d\lambda(\xi).\label{nn}
\end{align}
It is well known that, by letting $\xi=\overline{x}$ with $x\in \partial K$ for $K\in \mathscr{K}_{(o)}^n$ and $\,dx=\,d\mathscr{H}^{n-1}(x)$, \begin{align}\label{vol-change-8-8} \int_{S^{n-1}}f(\xi)\,d\xi=\int_{\partial K} (x \cdot \nu_K(x)) f(\overline{x})|x|^{-n}\,dx,\end{align} (see \cite[(2.31)]{LYZActa}). Hence, if $\,d\lambda(\xi)=p_{\lambda}(\xi)\,d\xi$ with $p_{\lambda}: S^{n-1}\rightarrow [0, \infty)$, then
\begin{align} \int_{S^{n-1}} g(u)\, d\widetilde{C}_{\Theta}(K, u) &= \int_{\partial K}g(\nu_K(x)) \frac{p_{\lambda}(\overline{x})|x|^{1-n} G_t(\overline{x}, |x|)} { \Psi_t(\nu_K(x), x\cdot \nu_K(x))}\,dx \label{new measue-11-27-K}, \\
\int_{S^{n-1}} g(u)\, dC_{\Theta}(K, u)&=\int_{\partial K}g(\nu_K(x)) \frac{ [x\cdot \nu_K(x)]^{2} p_{\lambda}(\overline{x}) |x|^{1-n} G_t(\overline{x}, |x|)} { \Psi_t(\nu_K(x), [x\cdot \nu_K(x)]^{-1})}\,dx.\label{nn-K}
\end{align}
We now prove a variational formula to derive the Musielak-Orlicz-Gauss image measure. Let $\lambda$ be a Lebesgue measure on $S^{n-1}$. Suppose that $G: S^{n-1} \times (0, \infty)\to \mathbb{R}$ is a function such that, for
$K\in\mathscr{K}_{(o)}^n$, the function
$\xi\mapsto G(\xi, \rho_K(\xi))$ is measurable on $S^{n-1}$ and is integrable with respect to $\lambda$. Define $\widetilde{V}_{G,\lambda}(K)$, the general dual volume of $K\in \mathscr{K}_{(o)}^n$ with respect to $\lambda$, by \begin{align}\label{vlam}
\widetilde{V}_{G,\lambda}(K)=\int_{S^{n-1}} G(\xi, \rho_K(\xi))\, d\lambda(\xi).
\end{align} In general, one can define $\widetilde{V}_{G,\lambda}$ for all $f\in C^+(S^{n-1})$, where $C^+(\Omega)$ for $\Omega\subseteq S^{n-1}$ denotes the set of all positive
continuous functions defined on $\Omega$. That is, if $f\in C^+(S^{n-1})$,
\begin{align}\label{vlam2}
\widetilde{V}_{G,\lambda}(f)=\int_{S^{n-1}}G(\xi, f(\xi))\,d\lambda(\xi).
\end{align} Clearly, $\widetilde{V}_{G,\lambda}(K)=\widetilde{V}_{G,\lambda}(\rho_K)$ for
$K\in \mathscr{K}_{(o)}^n$. When $\,d\lambda=\,d\xi$, $\widetilde{V}_{G,\lambda}$ becomes the general dual volume $\widetilde{V}_G(K)$ \cite{GHWXY, GHXY} given by \begin{align*}
\widetilde{V}_G(K)=\int_{S^{n-1}} G(\xi, \rho_K(\xi))\, d\xi.
\end{align*}
Hereafter, $\Omega\subseteq S^{n-1}$ is always assumed to be a
closed set not contained in any closed hemisphere of $S^{n-1}$. Denote by $C(\Omega)$ the set of all continuous functions defined on
$\Omega\subseteq S^{n-1}$. For each $f\in C^+(\Omega)$, one can define two convex bodies associated to $f$: the
Wulff shape generated by $f$ \begin{align}\label{july287}
[f]= \bigcap_{\xi\in\Omega}\big\{x\in\mathbb{R}^n: x \cdot \xi \leq f(\xi)\big\},
\end{align} and the convex hull generated by $f$: \begin{align}\label{conv}
\langle f\rangle ={\mathrm{conv}}\,\big\{f(\xi)\xi: \xi\in\Omega\big\}. \end{align} Here, ${\mathrm{conv}}\,(E)$ denotes the convex hull of a set $E\subseteq {\mathbb{R}^n}$, i.e., ${\mathrm{conv}}\,(E)$ is the smallest closed convex set containing $E$. It is easily checked that $[f]\in \mathscr{K}_{(o)}^n$ and $\langle f\rangle \in \mathscr{K}_{(o)}^n$ for $f\in C^+(\Omega)$.
A fundamental relation between the Wulff shape and the convex hull is \begin{align}\label{relation}
[f]^{*}=\langle 1/f \rangle
\end{align} for $f\in C^+(\Omega)$ (see e.g., \cite[Lemma 2.8]{LYZActa}). Obviously, for $f\in C^+(\Omega)$,
\begin{align}\label{le1}h_{[
f]}\leq f \ \ \mathrm{and}\ \ \rho_{\langle
f\rangle}\geq f \ \ \mathrm{on} \ \Omega.\end{align} In particular, if $\Omega=S^{n-1}$ and $K\in \mathscr{K}_{(o)}^n$,
\begin{align}\label{hk}[h_K]=K \ \ \mathrm{and}\ \ \langle \rho_{K}\rangle =K .\end{align}
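These objects are easy to experiment with numerically. For a finite set of directions $\Omega=\{\xi_1, \dots, \xi_N\}$ not contained in any closed hemisphere, one has $\rho_{[f]}(u)=\min\{f(\xi_i)/(u\cdot \xi_i): u\cdot \xi_i>0\}$ and $h_{\langle g\rangle}(u)=\max_i g(\xi_i)(\xi_i\cdot u)$, and relation \eqref{relation} becomes the exact identity $\rho_{[f]}=1/h_{\langle 1/f\rangle}$. The Python sketch below (ours, purely illustrative) checks this:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

# Random finite Omega; with high probability it is not contained
# in any closed hemisphere (illustrative only).
Omega = rng.normal(size=(200, 3))
Omega /= np.linalg.norm(Omega, axis=1, keepdims=True)
f = 1.0 + rng.random(200)                   # some f in C^+(Omega)

def rho_wulff(f, Omega, u):
    """rho_[f](u) = min over {xi : u.xi > 0} of f(xi)/(u.xi)."""
    dots = Omega @ u
    mask = dots > 0
    return np.min(f[mask] / dots[mask])

def h_hull(g, Omega, u):
    """h_<g>(u) = max over xi of g(xi) (xi.u)."""
    return np.max(g * (Omega @ u))

u = rng.normal(size=3)
u /= np.linalg.norm(u)
# Check eq. (relation), i.e. [f]^* = <1/f>, via rho_[f] = 1/h_<1/f>:
assert np.isclose(rho_wulff(f, Omega, u), 1.0 / h_hull(1.0 / f, Omega, u))
\end{verbatim}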
Let $\Psi\in \mathcal{C}_I\cup\mathcal{C}_d$. Recall that
$\psi_{\xi}=\Psi(\xi, \cdot)$ for each fixed $\xi\in S^{n-1}$. Then $\psi_{\xi}(\cdot)$ is a strictly monotonic function on $(0,\infty)$, and hence $\psi_{\xi}^{-1}(\cdot)$ exists and is also strictly monotonic. Let $f\in C^+(\Omega)$ and $g\in C(\Omega)$. As $\Omega\subseteq S^{n-1}$ is compact and $\Psi_t$ is continuous, there exists a constant $\delta>0$, such that, for all $(\xi,\varepsilon)\in \Omega\times(-\delta,\delta)$, it is meaningful to define the {\em Musielak-Orlicz
addition} by
\begin{align}\label{genplus21} f_{\varepsilon}(\xi)= \psi_{\xi}^{-1}\left(\psi_{\xi}(f(\xi))+\varepsilon g(\xi)\right).
\end{align}
Clearly, $f_{0}(\xi)=f(\xi)$ and $ f_{\varepsilon}(\xi) \in C^+(\Omega)$. Using the chain rule, it is easy
to verify that \begin{align}\label{der}
\frac{\partial f_{\varepsilon}(\xi)
}{\partial \varepsilon}\Big|_{\varepsilon=\varepsilon_0}
= \frac{g(\xi)}{\psi'_{\xi} (f_{\varepsilon_0}(\xi))}
\end{align} for any $(\xi,\varepsilon_0)\in
\Omega\times(-\delta,\delta)$, where $$\psi'_{\xi}(t)=\Psi_t(\xi, t)=\frac{\partial \Psi(\xi, t)}{\partial t}.$$
Moreover, due to the compactness of $\Omega$, a standard argument shows that $f_{\varepsilon}\to f$
uniformly on $\Omega$ as $ \varepsilon\rightarrow 0$.
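A familiar special case: taking $\Psi(\xi, t)=t^p$ with $0\neq p\in \mathbb{R}$, so that $\psi_{\xi}(t)=t^p$ for every $\xi$, the addition \eqref{genplus21} becomes the $L_p$ addition $$f_{\varepsilon}(\xi)=\left(f(\xi)^p+\varepsilon g(\xi)\right)^{1/p},$$ and \eqref{der} reduces to $\frac{\partial f_{\varepsilon}(\xi)}{\partial \varepsilon}\big|_{\varepsilon=0}=\frac{g(\xi)}{p f(\xi)^{p-1}}$, consistent with $\psi'_{\xi}(t)=pt^{p-1}$.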
We shall need the following lemma.
\begin{lemma} \label{variation--11-21}Let $\Omega\subseteq S^{n-1}$ be a closed
set that is not contained in any closed hemisphere of $S^{n-1}$. Let $f_{\varepsilon}$ be given as in \eqref{genplus21} with $f\in C^+(\Omega)$, $g\in C(\Omega)$ and
$\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$.
\vskip 2mm \noindent i)
For $v\in
S^{n-1}\setminus \eta_{\langle f\rangle}$, let $u_0=\alpha_{\langle f\rangle^*}(v)$ and then
\begin{align}\label{hf}
\lim_{\varepsilon\rightarrow 0}\frac{\log h_{\langle
f_{\varepsilon}\rangle}(v)-\log h_{\langle
f\rangle}(v)}{\varepsilon}=\frac{g(u_0)}{f(u_0)\,
\Psi_t(u_0, f(u_0))}.
\end{align} ii) For $\xi\in S^{n-1}\setminus \eta_{\langle 1/f\rangle}$, let $u_1=\alpha_{[f]}(\xi)$ and then
\begin{align}\label{414}
\lim_{\varepsilon\rightarrow 0}\frac{\log
\rho_{[f_{\varepsilon}]}(\xi)-\log
\rho_{[f]}(\xi)}{\varepsilon}=\frac{g(u_1)}{f(u_1)\,
\Psi_t(u_1, f(u_1))}.
\end{align}
\end{lemma}
\begin{proof} From \eqref{der}, for any $(\xi,\varepsilon_0)\in \Omega\times(-\delta,\delta)
$, we have
\begin{align*}
\frac{\partial \log f_{\varepsilon}(\xi)
}{\partial \varepsilon}\Big|_{\varepsilon=\varepsilon_0}
=
\frac{g(\xi)}{f_{\varepsilon_0}(\xi)\psi_{\xi}^{\prime}(f_{\varepsilon_0}(\xi))}.
\end{align*}
By the mean value theorem, there exists $\theta(\xi,\varepsilon)\in
(0,1)$ such that
\begin{align}\label{hope1}
\log f_{\varepsilon}(\xi)-\log f(\xi)=\varepsilon\,\frac{g(\xi)}
{f_{\theta(\xi,\varepsilon)\varepsilon}(\xi)\,\psi_{\xi}^{\prime}(f_{\theta(\xi,\varepsilon)\varepsilon}(\xi))}.
\end{align}
Recall that
$\alpha_{\langle f\rangle^*}(S^{n-1}\setminus \eta_{\langle
f\rangle})\subseteq\Omega$ \cite[(4.24)]{LYZActa}. Let $v\inS^{n-1}\setminus \eta_{\langle f\rangle}$. Then there exists a vector, say $v_0\in S^{n-1}$, such that, for $\varepsilon\in (-\delta, \delta)$, \begin{align}\label{u0}
h_{\langle
f\rangle}(v)= ( v_{0}\cdot v) f(v_{0}) \quad \textrm{and} \quad h_{\langle f_{\varepsilon}\rangle}(v)\geq ( v_{0}\cdot v)
f_{\varepsilon}(v_{0}).
\end{align} From \eqref{star} and \eqref{u0}, one clearly has $v_0=\alpha^*_{\langle f\rangle}(v)=\alpha_{\langle f\rangle^*}(v)$ since $f(v_0)v_0\in \partial \langle f\rangle$ and $v\inS^{n-1}\setminus
\eta_{\langle f\rangle}$. As $\alpha_{\langle f\rangle^*}$ is injective on $S^{n-1}\setminus
\eta_{\langle f\rangle}$, one further has $v_0=u_0\in \Omega$.
It follows from \eqref{hope1} and \eqref{u0} that
\begin{align}
\log h_{\langle f_{\varepsilon}\rangle}(v)-\log h_{\langle
f\rangle}(v)
\geq \log f_{\varepsilon}(u_{0})-\log f(u_{0}) =\varepsilon\,\frac{g(u_{0})}
{f_{\theta(u_{0},\varepsilon)\varepsilon}(u_{0})\,\psi_{u_{0}}^{\prime}(f_{\theta(u_{0},\varepsilon)\varepsilon}(u_{0}))}.
\label{comparison-11-21-1}
\end{align} By \eqref{conv}, there exists a $u_{\varepsilon}\in\Omega$ such that
\begin{align}\label{rel}
h_{\langle f_{\varepsilon}\rangle}(v)= (u_{\varepsilon}\cdot v)
f_{\varepsilon}(u_{\varepsilon})\quad \textrm{and} \quad h_{\langle
f\rangle}(v)\geq (u_{\varepsilon}\cdot v) f(u_{\varepsilon}).
\end{align}
Thus, from \eqref{hope1} and \eqref{rel}, we have
\begin{eqnarray}\label{comparison-11-21}
\log h_{\langle f_{\varepsilon}\rangle}(v)-\log h_{\langle
f\rangle}(v)
\leq\log f_{\varepsilon}(u_{\varepsilon})-\log f(u_{\varepsilon})
=\varepsilon\,\frac{g(u_{\varepsilon})}
{f_{\theta(u_{\varepsilon},\varepsilon)\varepsilon}(u_{\varepsilon})\,\psi_{u_{\varepsilon}}^{\prime}(f_{\theta(u_{\varepsilon},\varepsilon)\varepsilon}(u_{\varepsilon}))}.
\end{eqnarray} Note that $\lim_{\varepsilon\rightarrow0} u_{\varepsilon}=u_{0}$, which is a direct consequence of the fact that $f_{\varepsilon}\rightarrow f$ uniformly on $\Omega$; this can be easily proved along the same lines as the proof of formula (4.8) in \cite{LYZActa}.
By letting $\varepsilon\to 0$, the desired argument \eqref{hf} follows immediately from \eqref{comparison-11-21-1}, \eqref{comparison-11-21},
the continuity of $g$ and $\Psi_t,$ and $\theta(\cdot, \varepsilon)\in (0, 1)$ for all $\varepsilon\in (-\delta, \delta)$.
Now let us prove \eqref{414}.
For any $\xi\inS^{n-1}\setminus \eta_{\langle 1/f\rangle}$ and
$\varepsilon\in (-\delta, \delta)$, it follows from \eqref{bi-polar--12} and \eqref{relation} that
\begin{align}\label{f}
\rho_{[f_{\varepsilon}]}(\xi)=\rho_{\langle
\frac{1}{f_{\varepsilon}}\rangle^*}(\xi)=\frac{1}{h_{\langle
\frac{1}{f_{\varepsilon}}\rangle}(\xi)}.
\end{align}
Recall that $\widetilde{\Psi}(\xi, t)=\Psi(\xi, 1/t)$ and let
$\widetilde{\psi}_{\xi}(t)=\psi_{\xi}(1/t)$ for $t\in (0, \infty)$. Clearly, $\widetilde{\psi}_{\xi}(\cdot)$ is a strictly monotonic function on $(0, \infty)$ if $\Psi\in \mathcal{C}_I\cup\mathcal{C}_d$.
Hence, it is meaningful to rewrite
\eqref{genplus21} as follows:
\begin{align}\label{til}
\frac{1}{f_{\varepsilon}(\xi)}=\widetilde{\psi}_{\xi}^{-1}\left(\widetilde{\psi}_{\xi}\left(\frac{1}{f(\xi)}\right)+\varepsilon
g(\xi)\right).
\end{align} It is easily checked that
$\widetilde{\psi}_{\xi}'(t)=-t^{-2}\psi_{\xi}'(1/t)$. By \eqref{relation}, \eqref{hf} (in fact, with $u_0$ replaced by $\alpha_{\langle 1/f\rangle^*}(\xi)=\alpha_{[f]}(\xi)=u_1$ due to \eqref{relation}), \eqref{f} and \eqref{til}, one has
\begin{align*}
\lim_{\varepsilon\rightarrow 0}\frac{\log
\rho_{[f_{\varepsilon}]}(\xi)-\log \rho_{[f]}(\xi)}{\varepsilon} =
-\lim_{\varepsilon\rightarrow 0}\frac{\log h_{\langle
1/f_{\varepsilon}\rangle}(\xi)-\log h_{\langle
1/f\rangle}(\xi)}{\varepsilon} =\frac{g(u_1)}{f(u_1)\,
\Psi_t(u_1, f(u_1))}.
\end{align*}
This completes the proof of \eqref{414}. \end{proof}
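As a quick illustration of \eqref{hf} and \eqref{414}, consider the logarithmic case $\Psi(\xi, t)=\log t$ (the choice occurring in the Gauss image problem; see Section \ref{M:4}). Then $\psi_{\xi}=\log$, \eqref{genplus21} reduces to the family $f_{\varepsilon}=f e^{\varepsilon g}$, and $\Psi_t(u, t)=1/t$ yields $f(u_0)\Psi_t(u_0, f(u_0))=1$, so \eqref{hf} becomes
\begin{align*}
\lim_{\varepsilon\rightarrow 0}\frac{\log h_{\langle f_{\varepsilon}\rangle}(v)-\log h_{\langle f\rangle}(v)}{\varepsilon}=g(u_0),
\end{align*}
in the spirit of the logarithmic variational formulas in \cite{LYZActa}; likewise, the limit in \eqref{414} reduces to $g(u_1)$.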
Note that $[f_i]\rightarrow [f]$ and $\langle f_i\rangle \rightarrow \langle f\rangle$, if $f\in C^+(\Omega)$ and $f_i\in C^+(\Omega)$ for all $i\in \mathbb{N}$ are such that $f_i\to f$ uniformly on $\Omega$ (see, e.g., \cite[p.345]{LYZActa} and \cite[Lemma 7.5.2]{Sch}). It is also true that, for $K_i, K\in \mathscr{K}_{(o)}^n$, the convergence $K_i\rightarrow K$ is equivalent to $\rho_{K_i}\rightarrow \rho_K$ uniformly on $S^{n-1}$. An application of the dominated convergence theorem then yields the continuity in $\varepsilon\in (-\delta, \delta)$ of $\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon}\rangle^*)$ and $\widetilde{V}_{G,\lambda}([f_{\varepsilon}])$. The following theorem provides variational formulas for $\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon}\rangle^*)$ and $\widetilde{V}_{G,\lambda}([f_{\varepsilon}])$, which can be used to derive the Musielak-Orlicz-Gauss image measure and its polar.
\begin{theorem}\label{ovev-cor} Let $\Omega\subseteq S^{n-1}$ be a closed
set that is not contained in any closed hemisphere of $S^{n-1}$. Let $\Theta=(G, \Psi, \lambda)$ be a triple such that $G\in \mathcal{C}$, $\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$, and $\lambda\in \mathcal{M}.$ For $f_{\varepsilon}$ given by \eqref{genplus21} with $f\in C^+(\Omega)$ and $g\in C(\Omega)$, one has
\begin{align}\label{rr}
\lim_{\varepsilon\rightarrow 0}\frac{\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon}\rangle^*)-\widetilde{V}_{G,\lambda}(\langle f\rangle^*)}{\varepsilon} &=
-\int_{\Omega} g(u)\, dC_{\Theta}(\langle f\rangle^*, u), \\ \label{variation-11-27-12}
\lim_{\varepsilon\rightarrow 0}\frac{\widetilde{V}_{G,\lambda}([f_{\varepsilon}])-\widetilde{V}_{G,\lambda}([f])}{\varepsilon}&=
\int_{\Omega} g(u)\, d\widetilde{C}_{\Theta}([f], u).
\end{align}
\end{theorem}
\begin{proof} It has been shown in \cite[p.17]{GHWXY} that for any $h_0\in C^+(\Omega)$, one has
$$ h_{[h_0]}(\alpha_{[h_0]}(\xi))=h_0(\alpha_{[h_0]}(\xi)) \quad \mathrm{for}\ \mathscr{H}^{n-1}-\mathrm{almost\
all}\ \xi\in S^{n-1}.
$$ Applying this result to $h_0=1/f$ for $f\in C^+(\Omega)$, one can obtain that, by
\eqref{bi-polar--12}, \eqref{relation} and the fact that $\lambda$ is absolutely continuous with respect to $\,d\xi$,
$$ \rho_{\langle
f\rangle}(\alpha_{\langle f\rangle^*}(\xi))=f(\alpha_{\langle
f\rangle^*}(\xi))\quad \mathrm{for}\ \lambda-\mathrm{almost\
all}\ \xi\in S^{n-1}.
$$
For $\varepsilon\in (-\delta, \delta)$, let $K_{\varepsilon}=\langle f_{\varepsilon}\rangle^*=[1/f_{\varepsilon}]\in \mathscr{K}_{(o)}^n$ where $f_{\varepsilon}\in C^+(\Omega)$ is given by \eqref{genplus21} with $f\in C^+(\Omega)$ and $g\in C(\Omega)$. When $\varepsilon=0$, $K_{0}=\langle f\rangle^*=[1/f]\in \mathscr{K}_{(o)}^n$. By
\eqref{bi-polar--12}, one sees that $(K^*)^*=K$ for $K\in \mathscr{K}_{(o)}^n$ and hence for all $\varepsilon\in (-\delta, \delta)$, $$\rho_{K_{\varepsilon}}=\frac{1}{h_{K_{\varepsilon}^*}}=\frac{1}{h_{\langle f_{\varepsilon}\rangle}}.$$ This further implies that for each $\xi\in S^{n-1}$,
\begin{align}
\frac{\partial G(\xi, \rho_{K_{\varepsilon}}(\xi))}{\partial \varepsilon} &= \rho_{K_{\varepsilon}}(\xi)
G_t(\xi, \rho_{K_{\varepsilon}}(\xi)) \frac{\,d \log \rho_{K_{\varepsilon}}(\xi)}{\,d\varepsilon} \nonumber \\ &=- \left(\frac{1}{h_{\langle f_{\varepsilon}\rangle}(\xi)}
G_t\Big(\xi, \frac{1}{h_{\langle f_{\varepsilon}\rangle}(\xi)} \Big) \frac{\,d \log h_{\langle f_{\varepsilon}\rangle}(\xi) }{\,d\varepsilon}\right).
\label{chain}
\end{align} Hence, for $\lambda$-almost all $\xi\in S^{n-1}$, by \eqref{hf} and \eqref{chain}, one can get
\begin{align} \frac{\partial G(\xi, \rho_{K_{\varepsilon}}(\xi))}{\partial \varepsilon} \bigg|_{\varepsilon=0} &=- \left(\frac{1}{h_{\langle f\rangle}(\xi)}
G_t\Big(\xi, \frac{1}{h_{\langle f\rangle}(\xi)} \Big) \frac{g(\alpha_{\langle f\rangle^*}(\xi))}{f(\alpha_{\langle f\rangle^*}(\xi))\,
\Psi_t(\alpha_{\langle f\rangle^*}(\xi), f(\alpha_{\langle f\rangle^*}(\xi)))}\right) \nonumber \\ &=- \left(\frac{\rho_{K_0}(\xi)
G_t\big(\xi, \rho_{K_0}(\xi) \big) g(\alpha_{K_0}(\xi))}{f(\alpha_{K_0}(\xi))\,
\Psi_t(\alpha_{K_0}(\xi), f(\alpha_{K_0}(\xi)))}\right). \label{chain-7-31-1}
\end{align} Moreover, there are $\delta_0\in (0,\delta)$ and a constant $M>0$,
such that for $\varepsilon\in (-\delta_0,\delta_0)$ and all
$\xi\in S^{n-1}$,
\begin{align*}
\left|\frac{G(\xi, \rho_{K_{\varepsilon}}(\xi))-G(\xi, \rho_{K_{0}}(\xi))}{\varepsilon}\right|\leq M.
\end{align*} This is possible because the sets involved in \eqref{comparison-11-21-1}, \eqref{comparison-11-21} and \eqref{chain} (i.e., $\Omega$ and $S^{n-1}$) are compact, the functions $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}_I\cup \mathcal{C}_d$ are such that $G_t$ and $\Psi_t$ are continuous,
and the family of functions $f_{\varepsilon}$ is uniformly bounded on $\Omega$ from below by a positive constant and from above by a finite constant. For more details on how to find $M$, see, e.g., the proofs of Lemma 4.2 in \cite{LYZActa}, Theorem 4.1 in \cite{XY2017-1}, and Lemma 4.2 in \cite{ZSY2017}. It follows from \eqref{chain-7-31-1} and the dominated convergence theorem that, for $K_{\varepsilon}=\langle f_{\varepsilon}\rangle^*$ and $K_{0}=\langle f\rangle^*$,
\begin{align}
\lim_{\varepsilon\rightarrow
0}\frac{\widetilde{V}_{G,\lambda}(\langle
f_{\varepsilon}\rangle^*)-\widetilde{V}_{G,\lambda}(\langle
f\rangle^*)}{\varepsilon} &=\lim_{\varepsilon\rightarrow
0}\frac{\widetilde{V}_{G,\lambda}(K_{\varepsilon})-\widetilde{V}_{G,\lambda}(K_{0})}{\varepsilon} \nonumber \\ & =
\lim_{\varepsilon\rightarrow 0} \int_{S^{n-1}}\frac{G(\xi, \rho_{K_{\varepsilon}}(\xi))-G(\xi, \rho_{K_{0}}(\xi))}{\varepsilon}
\,d\lambda(\xi)
\nonumber\\&= \int_{S^{n-1}} \lim_{\varepsilon\rightarrow 0} \frac{G(\xi, \rho_{K_{\varepsilon}}(\xi))-G(\xi, \rho_{K_{0}}(\xi))}{\varepsilon}
\,d\lambda(\xi)
\nonumber\\
&=-\int_{S^{n-1}\setminus \eta_{\langle f\rangle }}\frac{\rho_{K_0}(\xi)
G_t\big(\xi, \rho_{K_0}(\xi) \big) g(\alpha_{K_0}(\xi))}{f(\alpha_{K_0}(\xi))\,
\Psi_t(\alpha_{K_0}(\xi), f(\alpha_{K_0}(\xi)))}\,d\lambda(\xi). \label{var-7-31-2-1}
\end{align}
Recall that
$\alpha_{\langle f\rangle^*}(S^{n-1}\setminus \eta_{\langle
f\rangle})\subseteq\Omega$ \cite[(4.24)]{LYZActa}.
The compactness of $\Omega$, together with the Tietze extension theorem, yields the existence of $\overline{g}: S^{n-1}\rightarrow \mathbb{R}$ such
that $\overline{g}$ is continuous on $S^{n-1}$ and for $\xi\in S^{n-1}\setminus \eta_{\langle f\rangle}$,
$$g(\alpha_{\langle f\rangle^*}(\xi))= g(\alpha_{K_0}(\xi))= (\overline{g}1_{\Omega})(\alpha_{K_0}(\xi))=(\overline{g}1_{\Omega})(\alpha_{\langle f\rangle^*}(\xi)),
$$ where $1_{E}$ is the indicator function of $E$, i.e., $1_{E}(x)=1$ if $x\in E$ and $1_{E}(x)=0$ if $x\notin E$. Applying this to \eqref{var-7-31-2-1}, one further gets
\begin{align}
\lim_{\varepsilon\rightarrow
0}\frac{\widetilde{V}_{G,\lambda}(\langle
f_{\varepsilon}\rangle^*)-\widetilde{V}_{G,\lambda}(\langle
f\rangle^*)}{\varepsilon}
&=-\int_{S^{n-1}} \frac{(\overline{g}1_{\Omega})(\alpha_{K_0}(\xi)) \rho_{K_0}(\xi)
G_t\big(\xi, \rho_{K_0}(\xi) \big) }{f(\alpha_{K_0}(\xi))\,
\Psi_t(\alpha_{K_0}(\xi), f(\alpha_{K_0}(\xi)))} \,d\lambda(\xi)\nonumber\\
&=-\int_{S^{n-1}}(\overline{g}1_{\Omega})(u) \,
dC_{\Theta}(K_0,
u)\nonumber\\
&=-\int_{\Omega}g(u) \, dC_{\Theta}(\langle
f\rangle^*, u),\nonumber
\end{align} where we have used \eqref{nn} in the second equality and the identity $$f(\alpha_{K_0}(\xi))=f(\alpha_{\langle f\rangle^*}(\xi))=\rho_{\langle f\rangle}(\alpha_{\langle f\rangle^*}(\xi))=\rho_{K_0^*}(\alpha_{K_0}(\xi)).$$
This concludes the proof of \eqref{rr}.
The variational formula \eqref{variation-11-27-12} follows along the same lines as the proof for \eqref{rr}, based on \eqref{414}. A more
direct proof for \eqref{variation-11-27-12} can be given by
the combination of \eqref{wc} and \eqref{rr}. Indeed, let $\Theta=(G, \Psi, \lambda)$ be a given triple and $\widetilde{\Theta}=(G, \widetilde{\Psi}, \lambda)$. It follows from \eqref{wc},
\eqref{relation} and \eqref{rr} (applied to $\widetilde{\Theta}$ instead of $\Theta$ due to \eqref{til}) that
\begin{align}
\lim_{\varepsilon\rightarrow
0}\frac{\widetilde{V}_{G,\lambda}([f_{\varepsilon}])-\widetilde{V}_{G,\lambda}([f])}{\varepsilon}
&= \lim_{\varepsilon\rightarrow
0}\frac{\widetilde{V}_{G,\lambda}(\langle
1/f_{\varepsilon}\rangle^*)-\widetilde{V}_{G,\lambda}(\langle
1/f\rangle^*)}{\varepsilon}
\nonumber\\&= -\int_{\Omega} g(u)\, dC_{\widetilde{\Theta}}(\langle 1/f\rangle^*, u)
\nonumber\\&= \int_{\Omega} g(u)\, d\widetilde{C}_{\Theta}(\langle 1/f\rangle^*, u)
\nonumber\\&= \int_{\Omega} g(u)\, d\widetilde{C}_{\Theta}([f], u).\nonumber
\end{align} This completes the proof of \eqref{variation-11-27-12}.\end{proof}
\section{The Musielak-Orlicz-Gauss image problem}\label{M:4} \setcounter{equation}{0}
This section is dedicated to introducing the Musielak-Orlicz-Gauss image problem and some of its special cases. Indeed, it has been explained in Section \ref{Section-var} that both $\widetilde{C}_{\Theta}(K,
\cdot)$ and $C_{\Theta}(K,\cdot)$ are signed Borel measures on $S^{n-1}$. We now prove some basic properties for $C_{\Theta}(K,\cdot)$ and $\widetilde{C}_{\Theta}(K,\cdot)$ for $K\in \mathscr{K}_{(o)}^n$.
\begin{proposition}\label{prop 11-28--1} Let $\Theta=(G, \Psi, \lambda)$ be a triple such that $G\in \mathcal{C}$, $\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$ and $\lambda\in \mathcal{M}.$ Let $K\in \mathscr{K}_{(o)}^n$. Then the following statements hold. \vskip 1mm \noindent i) Both $C_{\Theta}(K_i,\cdot)\rightarrow C_{\Theta}(K,\cdot)$ and $\widetilde{C}_{\Theta}(K_i,\cdot)\rightarrow \widetilde{C}_{\Theta}(K,\cdot)$ weakly for any sequence $\{K_i\}_{i\in \mathbb{N}}$ with $K_i\in \mathscr{K}_{(o)}^n$ for each $i\in \mathbb{N}$ and $K_i\rightarrow K$.
\vskip 1mm \noindent ii) The signed measures $C_{\Theta}(K,\cdot)$ and $\widetilde{C}_{\Theta}(K,\cdot)$ are absolutely continuous with respect to $S(K,\cdot)$.
\vskip 1mm \noindent iii) If $G$ and $\Psi$ are either both in $\mathcal{C}_I$ or both in $\mathcal{C}_d$, then $\widetilde{C}_{\Theta}(K,\cdot)$
and $C_{\Theta}(K,\cdot)$ are nonzero finite Borel
measures. If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then $\widetilde{C}_{\Theta}(K,\cdot)$
and $C_{\Theta}(K,\cdot)$ are not concentrated on any closed hemisphere of
$S^{n-1}$. The same arguments also hold for $-\widetilde{C}_{\Theta}(K,\cdot)$ and $-C_{\Theta}(K,\cdot)$, if one of $G$ and $\Psi$ is in $\mathcal{C}_I$ and the other one is in $\mathcal{C}_d$.
\end{proposition} \begin{proof} Due to \eqref{wc}, only $\widetilde{C}_{\Theta}(K,
\cdot)$ will be discussed. Part i) follows easily from a standard dominated convergence argument, by \eqref{relation-G} and the facts that $\rho_{K_i}\to \rho_K$ and $h_{K_i}\to h_K$ uniformly on $S^{n-1}$, that $\alpha_{K_i}\rightarrow \alpha_K$ off a subset of $S^{n-1}$ of $\mathscr{H}^{n-1}$-measure zero \cite[Lemma 2.2]{LYZActa}, and that $K\mapsto \lambda(K, \cdot)$ (and hence $K\mapsto \lambda^*(K, \cdot)$) is weakly continuous on $\mathscr{K}_{(o)}^n$ \cite[Lemma 3.4]{BLYZZ2020}.
\vskip 2mm \noindent ii) Let $K\in \mathscr{K}_{(o)}^n$ and $\omega\in \mathcal{B}$ be such that $S(K, \omega)=0$. It has been proved in \cite[Lemma 3.5]{BLYZZ2020} that $\lambda(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ is absolutely continuous with respect to the surface area measure $S(K^*, \cdot)$. Applying this to $K^*$, one gets $\lambda(K^*, \omega)=\lambda^*(K, \omega)=0$. As $K\in \mathscr{K}_{(o)}^n$, there exist constants $0<r_0<R_0<\infty$ such that both $h_{K}$ and $\rho_{K}$ take values in $(r_0, R_0)$ on $S^{n-1}$. The continuity of
$G_{t}$ and $\Psi_{t}$ yields that
\begin{align}c_1:=\sup_{\xi\in S^{n-1}} \left| \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\right|<\infty. \label{upp-7-31-1}\end{align} Together with \eqref{gencdef-7-1-115} (or \eqref{new measue-11-27}) and \eqref{gencdef-7-1-115-uu}, one has
\begin{align} \big| \widetilde{C}_{\Theta}(K, \omega) \big| &= \bigg| \int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi)\bigg| \nonumber \\
&\leq c_1\int_{{\pmb{\alpha}^{*}_K(\omega)}}\,d\lambda(\xi)= c_1\lambda\left(\boldsymbol{\alpha}^{*}_{K}(\omega)\right)=c_1\lambda^*(K, \omega)=0. \label{non-conc-2}
\end{align} This concludes that $\widetilde{C}_{\Theta}(K,\cdot)$ is absolutely continuous with respect to $S(K,\cdot)$.
\vskip 2mm \noindent iii) Only the proof for the case when $G\in \mathcal{C}_I$ and $\Psi\in \mathcal{C}_I$ will be given, and the other cases follow along the same lines. A calculation similar to \eqref{upp-7-31-1} yields that
\begin{align}\label{small}
c_2:=\inf_{\xi\in S^{n-1}} \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}>0.
\end{align} This implies that $\widetilde{C}_{\Theta}(K, \cdot)$ is a nonzero measure.
Following the argument in \eqref{non-conc-2}, one can also prove \begin{align*} \widetilde{C}_{\Theta}(K, S^{n-1}) \leq c_1\lambda^*(K, S^{n-1})<\infty.\end{align*} Hence $\widetilde{C}_{\Theta}(K, \cdot)$ is finite.
Assume that, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$. We now claim that $\widetilde{C}_{\Theta}(K,\cdot)$ satisfies \eqref{not-concentration-1}. This is an easy consequence of \eqref{new measue-11-27}, \eqref{small} and Lemma
\ref{conc}: for any $u\in S^{n-1}$,
\begin{align*}
\int_{S^{n-1}}( u\cdot v)_+\, d\widetilde{C}_{\Theta}(K, v)&=
\int_{S^{n-1}}( u \cdot\alpha_K(\xi))_+ \ \frac{\rho_K(\xi) G_t(\xi, \rho_K(\xi))}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi)\nonumber\\
&\geq c_2 \int_{S^{n-1}}(u \cdot\alpha_K(\xi))_+ \,d\lambda(\xi)>0,
\end{align*}
where the second inequality follows from \eqref{star} and \eqref{not-concentration-1-22} (applied to $K^*$). \end{proof}
Proposition \ref{prop 11-28--1} suggests the following Musielak-Orlicz-Gauss image problem.
\begin{problem}[The Musielak-Orlicz-Gauss image problem] \label{MOGIP} Let $G\in \mathcal{C}$, $\Psi\in \mathcal{C}$, and
$\lambda$ be a nonzero finite Lebesgue measure on $S^{n-1}$. Under what conditions on $\Theta=(G, \Psi, \lambda)$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\tau\in \mathbb{R}$ such that $\mu =\tau \widetilde{C}_{\Theta}(K,\cdot)?$
\end{problem} Let $|\mu|=\int_{S^{n-1}}\,d\mu(u)$.
Clearly, if Problem \ref{MOGIP} has $K\in \mathscr{K}_{(o)}^n$ as a solution, then evaluating $\mu =\tau \widetilde{C}_{\Theta}(K,\cdot)$ on $S^{n-1}$ gives $$\tau=\frac{|\mu|}{\widetilde{C}_{\Theta}(K,S^{n-1})}.$$
Problem \ref{MOGIP} asks for a characterization of the Musielak-Orlicz-Gauss image measure. A similar problem can be posed for the polar Musielak-Orlicz-Gauss image measure.
\begin{problem}[The polar Musielak-Orlicz-Gauss image problem] \label{MOGIP-p} Let $G\in \mathcal{C}$, $\Psi\in \mathcal{C}$, and
$\lambda$ be a nonzero finite Lebesgue measure on $S^{n-1}$. Under what conditions on $\Theta=(G, \Psi, \lambda)$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\kappa\in \mathbb{R}$ such that $\mu =\kappa C_{\Theta} (K,\cdot)?$
\end{problem} Again, if Problem \ref{MOGIP-p} has $K\in \mathscr{K}_{(o)}^n$ as a solution, then, by the same evaluation on $S^{n-1}$, $$\kappa=\frac{|\mu|}{C_{\Theta}(K,S^{n-1})}.$$
When $G=\Psi=\log t$, it can be seen from \eqref{relation-G} that Problems \ref{MOGIP} and \ref{MOGIP-p} reduce to the Gauss image problem (i.e., Problem \ref{Gauss-I-p}) introduced in \cite{BLYZZ2020}. From the discussion in Section \ref{Section-var}, one clearly sees that
Problem \ref{MOGIP} also generalizes the Minkowski problem \cite{min1897, min1903}, the $L_p$ Minkowski problem \cite{Lu93}, the Orlicz-Minkowski problem \cite{HLYZ2010}, the ($L_{p}$) dual Minkowski problem \cite{LYZActa, LYZ-Lp}, the dual Orlicz-Minkowski problems \cite{GHWXY, GHXY, XY2017-1, ZSY2017}, and the ($L_p$ and Orlicz) Aleksandrov problem \cite{Alexs1942, FH, HLYZ}.
Problems \ref{MOGIP} and \ref{MOGIP-p} have close connections with Monge-Amp\`{e}re type equations. To see this, let $K\in \mathscr{K}_{(o)}^n$ be smooth enough, in particular, such that $h_K$ is differentiable at each point of $S^{n-1}$ and $\partial K$ has positive Gauss curvature everywhere. Denote by $\nabla h_K(u)$ the gradient of $h_K$ at $u\in S^{n-1}$ and by $\bar{\nabla}h$ the gradient of $h$ with respect to an orthonormal frame on $S^{n-1}$. Then $\nabla h_K=\bar{\nabla}h_K+h_K\iota,$ where $\iota$ denotes the identity map on $S^{n-1}$ (see, e.g., \cite[(2.2)]{LYZ-Lp}). Let $\bar{\nabla}^2h$ be the Hessian matrix of $h$ with respect to an orthonormal frame on $S^{n-1}$. Then, see e.g., \cite[(3.28)]{LYZ-Lp}, for all $u\in S^{n-1}$, $$\frac{\,dS(K, u)}{\,du}=\det (\bar{\nabla}^2h_K(u)+h_K(u)I), $$ where $I$ denotes the identity matrix.
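As a simple sanity check of these formulas, consider the ball $K=rB^n$ with $r>0$: then $h_K\equiv r$, $\bar{\nabla}h_K=0$ and $\nabla h_K(u)=ru$, so that
\begin{align*}
\frac{\,dS(rB^n, u)}{\,du}=\det(0+rI)=r^{n-1},
\end{align*}
in agreement with the surface area measure of $rB^n$.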
Recall that $\nabla h_K(u)=\nu^{-1}_K(u)$ and $\nabla h_K(\nu_K(x))=x$ hold for all $u\in S^{n-1}$ and $x\in \partial K$. Consequently, by \eqref{surface-8-7}, \eqref{new measue-11-27-K} and \eqref{nn-K}, one gets \begin{eqnarray} \, d\widetilde{C}_{\Theta}(K, u) \!\! &=&\!\! \frac{p_{\lambda}\Big(\frac{\nabla h_K(u)}{|\nabla h_K(u)|}\Big)|\nabla h_K(u)|^{1-n} G_t\Big(\frac{\nabla h_K(u)}{|\nabla h_K(u)|}, |\nabla h_K(u)|\Big)} { \Psi_t(u, h_K(u))}\,dS(K, u), \label{change-v-8-7-1} \\ dC_{\Theta}(K, u)\!\!&=&\!\! \frac{p_{\lambda}\Big(\frac{\nabla h_K(u)}{|\nabla h_K(u)|}\Big) \big(h_K(u)\big)^2 |\nabla h_K(u)|^{1-n} G_t\Big(\frac{\nabla h_K(u)}{|\nabla h_K(u)|}, |\nabla h_K(u)|\Big)} { \Psi_t(u, (h_K(u))^{-1})}\,dS(K, u),
\nonumber \end{eqnarray} where $\,d\lambda(\xi)=p_{\lambda}(\xi)\,d\xi$ with $p_{\lambda}: S^{n-1}\rightarrow [0, \infty)$. Subsequently, if $\mu$ has density function $p_{\mu}$ with respect to $\,d\xi$, then \eqref{change-v-8-7-1} yields the following rephrasing of Problem \ref{MOGIP} as a Monge-Amp\`{e}re type equation: \begin{align}\label{new2}
p_{\mu}= \tau \frac{P(\bar{\nabla}h+h\iota) \,\det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)} p_{\lambda}\! \left(\!\frac{\bar{\nabla} h+h \iota}{|\bar{\nabla} h+h \iota|}\!\right), \end{align}
where $P(y)=|y|^{1-n}G_{t}(\bar{y}, |y|)$ for $y\in \mathbb{R}^n$. Thus, finding a solution to Problem \ref{MOGIP} amounts to finding a
$\tau\in \mathbb{R}$ and an $h:S^{n-1}\to (0,\infty)$ satisfying \eqref{new2}. Similarly, Problem \ref{MOGIP-p} can be rephrased as follows:
\begin{align*}
p_{\mu}= \kappa \frac{h^2 P(\bar{\nabla}h+h\iota) \,\det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, 1/h)} p_{\lambda}\! \left(\!\frac{\bar{\nabla} h+h \iota}{|\bar{\nabla} h+h \iota|}\!\right). \end{align*}
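For orientation, suppose one looks for a ball solution $h\equiv r$ of \eqref{new2}: then $\bar{\nabla}h=0$, $|\bar{\nabla}h+h\iota|=r$, $P(ru)=r^{1-n}G_t(u, r)$ and $\det(\bar{\nabla}^2h+hI)=r^{n-1}$, so \eqref{new2} collapses to the pointwise relation
\begin{align*}
p_{\mu}(u)=\tau\,\frac{G_t(u, r)}{\Psi_t(u, r)}\,p_{\lambda}(u)\ \ \ \mathrm{for}\ \ u\in S^{n-1};
\end{align*}
that is, balls solve \eqref{new2} precisely when $p_{\mu}/p_{\lambda}$ is of this form (a similar reduction holds for the polar equation above).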
Some special cases of Problems \ref{MOGIP} and \ref{MOGIP-p} are of particular interest.
\vskip 2mm \noindent {\bf Case 1:} $G=t^n/n$, $\Psi\in \mathcal{C}$, and $\,d\lambda(\xi)=\,d\xi$. In this case, the measure $\widetilde{C}_{\Theta}(K, \cdot)$ will be denoted by $S_{\Psi}(K, \cdot)$ and called the Musielak-Orlicz surface area measure of $K\in \mathscr{K}_{(o)}^n$. Indeed, $S_{\Psi}(K, \cdot)$ has the ($L_p$ and Orlicz) surface area measures \eqref{surface-8-7} as its special cases, and satisfies the following formula due to \eqref{gencdef-7-1-115}:
$$
S_{\Psi}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{\rho_K^n(\xi)}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\xi.
$$ Moreover, letting $u=\alpha_K(\xi)$, it can be verified by \eqref{surface-8-7} and \eqref{vol-change-8-8} that $$\frac{\,dS_{\Psi}(K, u)}{\,dS(K, u)}=\frac{1}{\Psi_t(u, h_K(u))}.$$ A direct consequence of Theorem \ref{ovev-cor} is the following result, which provides a variational formula to derive $S_{\Psi}(K, \cdot)$.
\begin{theorem}\label{ovev-cor-vol} Let $\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$ and $\Omega\subseteq S^{n-1}$ be a closed
set that is not contained in any closed hemisphere of $S^{n-1}$. For $f_{\varepsilon}$ given by \eqref{genplus21} with $f\in C^+(\Omega)$ and $g\in C(\Omega)$, one has
$$
\lim_{\varepsilon\rightarrow 0}\frac{V([f_{\varepsilon}])-V([f])}{\varepsilon}=
\int_{\Omega} g(u)\, dS_{\Psi}([f], u).
$$
\end{theorem}
Thus, the following Musielak-Orlicz-Minkowski problem can be proposed. \begin{problem}[The Musielak-Orlicz-Minkowski problem] \label{MOMP-vol} Under what conditions on $\Psi\in \mathcal{C}$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\tau\in \mathbb{R}$ such that $$\mu =\tau S_{\Psi} (K,\cdot)?$$
\end{problem} Problem \ref{MOMP-vol} is related to ``an increasing function'' $G\in \mathcal{C}_I$, and will be studied in our future work \cite{HXYZ-2}. The Musielak-Orlicz-Minkowski problem deserves its own special attention as it is the direct extension of the $L_p$ and Orlicz Minkowski problems and lies in the framework of (the extension of) the Brunn-Minkowski theory of convex bodies. By \eqref{new2}, the Monge-Amp\`{e}re type equation related to Problem \ref{MOMP-vol} is \begin{align*} p_{\mu}= \tau \frac{\det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)}. \end{align*}
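To see how Problem \ref{MOMP-vol} specializes, take, at least formally, $\Psi(u, t)=t^p/p$ with $p\neq 0$, so that $\Psi_t(u, t)=t^{p-1}$. Then $\,dS_{\Psi}(K, \cdot)=h_K^{1-p}\,dS(K, \cdot)$ is the $L_p$ surface area measure, and the above equation becomes the classical equation of the $L_p$ Minkowski problem:
\begin{align*}
p_{\mu}= \tau\, h^{1-p}\det(\bar{\nabla}^2h+hI).
\end{align*}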
\noindent {\bf Case 2:} $G\in \mathcal{C}$, $\Psi\in \mathcal{C}$, and $\,d\lambda(\xi)=\,d\xi$. In this case, $\widetilde{V}_{G,\lambda}(K)= \widetilde{V}_G(K)$ becomes the general dual volume of $K$, $\widetilde{C}_{G, \Psi}(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ given by \eqref{gencdef-7-1-115--1} defines a Musielak-Orlicz extension of the dual curvature measures \cite{GHWXY, LYZActa, LYZ-Lp, XY2017-1, ZSY2017}, and hence the following dual Musielak-Orlicz-Minkowski problem can be posed.
\begin{problem}[The dual Musielak-Orlicz-Minkowski problem] \label{MOMP-dual} Under what conditions on $G\in \mathcal{C}$, $\Psi\in \mathcal{C}$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\tau\in \mathbb{R}$ such that $\mu =\tau \widetilde{C}_{G, \Psi}(K,\cdot)?$
\end{problem} By \eqref{new2}, the corresponding Monge-Amp\`{e}re type equation related to the dual Musielak-Orlicz-Minkowski problem is \begin{align*} p_{\mu}= \tau \frac{P(\bar{\nabla}h+h\iota) \,\det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)}, \end{align*}
where $P(y)=|y|^{1-n}G_{t}(\bar{y}, |y|)$ for $y\in \mathbb{R}^n$.
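For instance, with the formal choices $G(u, t)=t^q/q$ and $\Psi(u, t)=t^p/p$ ($p, q\neq 0$), one has $G_t(u, t)=t^{q-1}$ and hence $P(y)=|y|^{q-n}$, so the above equation reads
\begin{align*}
p_{\mu}= \tau\, h^{1-p}\,|\bar{\nabla}h+h\iota|^{q-n}\det(\bar{\nabla}^2h+hI),
\end{align*}
which is of the same form as the equation in the $L_p$ dual Minkowski problem \cite{LYZ-Lp}.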
\vskip 2mm \noindent {\bf Case 3:} $G=\log t$, $\Psi\in \mathcal{C}$ and $\lambda$ a nonzero finite Lebesgue measure on $S^{n-1}$. In this case, we shall give the following Musielak-Orlicz extension of $\lambda^*(K,\cdot)$.
\begin{definition} Let $\Theta=(G, \Psi, \lambda)$ be such that $\lambda$ is a nonzero finite Lebesgue measure on $S^{n-1}$, and $\Psi\in \mathcal{C}_I\cup\mathcal{C}_d$. For $K\in \mathscr{K}_{(o)}^n$, define $ \widetilde{J}_{\Psi, \lambda}(K, \cdot)=\widetilde{C}_{\Theta}(K, \cdot)$ with $G=\log t$, namely, for each Borel set $\omega\in \mathcal{B}$, $$
\widetilde{J}_{\Psi, \lambda}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{1}{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi).
$$
\end{definition}
Clearly, one can also have, for all $K\in \mathscr{K}_{(o)}^n$, \begin{align}\label{equiv-8-25} \widetilde{C}_{(-\log t, \Psi, \lambda)}(K, \cdot)=- \widetilde{J}_{\Psi, \lambda}(K, \cdot).\end{align} This formula will be convenient later, when finding solutions to Problem \ref{MOMP-log-8-3}. According to \eqref{relation-G}, it follows that \begin{align} \frac{\,d \widetilde{J}_{\Psi, \lambda}(K, u)} {\,d\lambda^*(K, u)} = \frac{1}{h_{K}(u)\Psi_t(u, h_{K}(u))}\ \ \ \mathrm{for}\ \ u\in S^{n-1}. \label{relation-int-8-3} \end{align}
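For the reader's convenience, we note that both the definition above and \eqref{equiv-8-25} can be read off from the integrand in \eqref{gencdef-7-1-115}: the factor $\rho_K(\xi)\,G_t(\xi, \rho_K(\xi))$ appearing there equals $1$ when $G=\log t$ and $-1$ when $G=-\log t$, since $G_t(\xi, t)=1/t$ and $G_t(\xi, t)=-1/t$ in these two cases, respectively.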
Clearly, $\widetilde{J}_{\Psi, \lambda}(K, \cdot)$ is a finite signed Borel measure on $S^{n-1}$. Moreover, it follows from \eqref{new measue-11-27} that, for any bounded Borel function $g:S^{n-1}\to \mathbb{R}$,
\begin{align} \int_{S^{n-1}} g(u)\, d\widetilde{J}_{\Psi, \lambda}(K, u) = \int_{S^{n-1}}\frac{g(\alpha_K(\xi))}
{h_{K}(\alpha_K(\xi))\Psi_t(\alpha_K(\xi), h_{K}(\alpha_K(\xi)))}\,d\lambda(\xi).\label{new measue-11-27-int}
\end{align} Regarding this measure, one can pose the following problem.
\begin{problem}\label{MOMP-log-8-3} Let $\Psi\in \mathcal{C}$ and $\lambda$ be a nonzero finite Lebesgue measure on $S^{n-1}$. Under what conditions on $\Psi, \lambda$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\kappa\in \mathbb{R}$ such that $\mu =\kappa \widetilde{J}_{\Psi, \lambda}(K, \cdot)?$
\end{problem}
Again, by \eqref{new2}, the corresponding Monge-Amp\`{e}re type equation related to Problem \ref{MOMP-log-8-3} is \begin{align*}
p_{\mu}= \kappa \frac{ \det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)|\bar{\nabla}h+h\iota|^n } p_{\lambda}\! \left(\!\frac{\bar{\nabla} h+h \iota}{|\bar{\nabla} h+h \iota|}\!\right), \end{align*} where $\,d\lambda(\xi)=p_{\lambda}(\xi)\,d\xi$ with $p_{\lambda}: S^{n-1}\rightarrow [0, \infty)$. Note that $ \widetilde{J}_{\log, \lambda}(K, \cdot)=\lambda^*(K, \cdot)$, since for $\Psi=\log t$ one has $\Psi_t(u, t)=1/t$ and hence the density in \eqref{relation-int-8-3} is identically $1$. Consequently, Problem \ref{MOMP-log-8-3} becomes the Gauss image problem \cite{BLYZZ2020} (up to a difference of polarity of convex bodies).
A crucial geometric invariant related to $\widetilde{J}_{\Psi, \lambda}(K, \cdot)$ is $\mathcal{E}_{\lambda}(K)$, the entropy of $K\in \mathscr{K}_{(o)}^n$ with respect to the measure $\lambda$. For $K\in \mathscr{K}_{(o)}^n$, let \begin{align}\label{rela} \mathcal{E}_{\lambda}(K)=
\widetilde{V}_{\log,\lambda}(K^*)=\int_{S^{n-1}} \log \rho_{K^*}(\xi) d \lambda(\xi).
\end{align} Clearly $\mathcal{E}_{\lambda}(B^n)=0$, as $\rho_{(B^n)^*}=\rho_{B^n}\equiv 1$. When $\,d\lambda(\xi)=\,d\xi$, it reduces to the entropy of $K\in \mathscr{K}_{(o)}^n$, which plays an essential role in solving Aleksandrov type problems. Letting $G=\log t$ in \eqref{rr} and \eqref{variation-11-27-12}, by \eqref{new measue-11-27}, \eqref{nn} and \eqref{new measue-11-27-int}, one can easily get the following variational formula.
\begin{theorem}\label{ovev-cor-ent} Let $\Omega\subseteq S^{n-1}$ be a closed
set that is not contained in any closed hemisphere of $S^{n-1}$. Let $\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$ and $\lambda\in \mathcal{M}$. For $f_{\varepsilon}$ given by \eqref{genplus21} with $f\in C^+(\Omega)$ and $g\in C(\Omega)$, one has
\begin{align*}
\lim _{\varepsilon \rightarrow 0}
\frac{\mathcal{E}_{\lambda}\left(\left\langle
f_{\varepsilon}\right\rangle\right)-\mathcal{E}_{\lambda}\left(\left\langle
f\right\rangle\right)}{\varepsilon}&=
-\int_{\Omega} g(u)\, d J_{\Psi, \lambda} (\langle f\rangle^*, u), \\
\lim_{\varepsilon\rightarrow 0}\frac{\mathcal{E}_{\lambda}([f_{\varepsilon}]^*) -\mathcal{E}_{\lambda}([f]^*)}{\varepsilon}&=
\int_{\Omega} g(u)\, d \widetilde{J}_{\Psi, \lambda}([f], u),
\end{align*} where $J_{\Psi, \lambda}(K, \cdot)$ is the measure, such that, for all $\omega\in \mathcal{B}$, \begin{align}\label{pd-int}
J_{\Psi, \lambda}(K, \omega)=\int_{\pmb{\alpha}^{*}_K(\omega)} \frac{1}{\rho_{K^*}(\alpha_K(\xi))\Psi_t
(\alpha_K(\xi), \rho_{K^*}(\alpha_K(\xi)))}\,d\lambda(\xi).
\end{align} \end{theorem} Clearly, $J_{\Psi, \lambda}(K, \cdot)=C_{\Theta}(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ and $\Theta=(\log t, \Psi, \lambda)$. Formula \eqref{wc} yields that, if $\widetilde{\Psi}(\xi, t)=\Psi(\xi, \frac{1}{t})$ for $(\xi, t)\in S^{n-1}\times (0, \infty)$, then \begin{align}\label{J-relation}
J_{\widetilde{\Psi}, \lambda}(K, \cdot )=- \widetilde{J}_{\Psi, \lambda} (K, \cdot).\end{align}
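For completeness, \eqref{J-relation} can also be verified directly: by the chain rule, $\widetilde{\Psi}_t(\xi, t)=-t^{-2}\Psi_t(\xi, 1/t)$, so with $u=\alpha_K(\xi)$ and $\rho_{K^*}(u)=1/h_K(u)$,
\begin{align*}
\rho_{K^*}(u)\,\widetilde{\Psi}_t(u, \rho_{K^*}(u))=-h_K(u)\,\Psi_t(u, h_K(u)),
\end{align*}
and hence the integrand in \eqref{pd-int}, with $\widetilde{\Psi}$ in place of $\Psi$, is exactly the negative of the integrand defining $\widetilde{J}_{\Psi, \lambda}(K, \cdot)$.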
We now state some basic properties for $ \widetilde{J}_{\Psi, \lambda}(K, \cdot)$ and $J_{\Psi, \lambda}(K,\cdot)$, which follow from Proposition \ref{prop 11-28--1} by letting $G=\log t$.
\begin{proposition}\label{prop 11-28--1-log} Let $K\in \mathscr{K}_{(o)}^n$, $\Psi \in \mathcal{C}_I\cup\mathcal{C}_d$ and $\lambda\in \mathcal{M}$. Then the following statements hold. \vskip 1mm \noindent i) Both $J_{\widetilde{\Psi}, \lambda}(K_i,\cdot)\rightarrow J_{\widetilde{\Psi}, \lambda}(K,\cdot)$ and $\widetilde{J}_{\Psi, \lambda}(K_i,\cdot)\rightarrow \widetilde{J}_{\Psi, \lambda}(K,\cdot)$ weakly for any sequence $\{K_i\}_{i\in \mathbb{N}}$ with $K_i\in \mathscr{K}_{(o)}^n$ for each $i\in \mathbb{N}$ and $K_i\rightarrow K$.
\vskip 1mm \noindent ii) Both $J_{\widetilde{\Psi}, \lambda}(K,\cdot)$ and $\widetilde{J}_{\Psi, \lambda}(K,\cdot)$ are absolutely continuous with respect to $S(K,\cdot)$.
\vskip 1mm \noindent iii) If $\Psi\in \mathcal{C}_I$, then $\widetilde{J}_{\Psi, \lambda}(K, \cdot)$
and $J_{\widetilde{\Psi}, \lambda}(K,\cdot)$ are nonzero finite Borel
measures. If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then $\widetilde{J}_{\Psi, \lambda}(K, \cdot)$
and $J_{\widetilde{\Psi}, \lambda}(K,\cdot)$ are not concentrated on any closed hemisphere of
$S^{n-1}$. The same arguments also hold for $-\widetilde{J}_{\Psi, \lambda}(K, \cdot)$ and $-J_{\widetilde{\Psi}, \lambda}(K,\cdot)$, if $\Psi\in \mathcal{C}_d$. \end{proposition}
\vskip 2mm \noindent {\bf Case 4:} $G=\log t$, $\Psi\in \mathcal{C}$, and $\,d \lambda(\xi)=\,d\xi$. In this case, Problem \ref{MOGIP} becomes the Musielak-Orlicz extension of the Aleksandrov problem. Recall that Aleksandrov's integral curvature $J(K, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$ is $\lambda(K,\cdot)$ with $\,d\lambda(\xi)=\,d\xi$. Moreover, $J^*(K, \cdot)=J(K^*, \cdot)$ for $K\in \mathscr{K}_{(o)}^n$. Comparing $J(K, \cdot)$ and \eqref{gencdef-7-1-115-uu}, one sees $J^*(K, \cdot)=\widetilde{C}_{\Theta_1}(K, \cdot)$ with $\Theta_1=(\log t, \log t, \,d\xi)$. So $\widetilde{J}_{\Psi}(K, \cdot)=\widetilde{J}_{\Psi, \,d\xi}(K, \cdot)$ defines a Musielak-Orlicz extension of $J^*(K, \cdot)$ and by \eqref{relation-int-8-3}, \begin{align*} \frac{\,d \widetilde{J}_{\Psi}(K, u)} {\,dJ^*(K, u)} = \frac{1}{h_{K}(u)\Psi_t(u, h_{K}(u))}\ \ \ \mathrm{for}\ \ u\in S^{n-1}.\end{align*} Thus, the following Musielak-Orlicz-Aleksandrov problem can be posed; this provides an extension of the Aleksandrov problems \cite{Alexs1942, FH, HLYZ} (again, up to a difference of polarity of convex bodies). \begin{problem}[The Musielak-Orlicz-Aleksandrov problem] \label{MOMP-Alex} Under what conditions on $\Psi$ and a nonzero finite Borel measure $\mu$ on $S^{n-1}$ do there exist a $K\in \mathscr{K}_{(o)}^n$ and a constant $\kappa\in \mathbb{R}$ such that $\mu =\kappa \widetilde{J}_{\Psi}(K, \cdot)?$
\end{problem} Again, by \eqref{new2}, the corresponding Monge-Amp\`{e}re type equation related to the Musielak-Orlicz-Aleksandrov problem is \begin{align*}
p_{\mu}= \kappa \frac{ \det(\bar{\nabla}^2h+hI)}{\Psi_{t}(\cdot, h)|\bar{\nabla}h+h\iota|^n }. \end{align*}
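For instance, with the formal choice $\Psi(u, t)=t^p/p$, $p\neq 0$, one has $\Psi_t(u, t)=t^{p-1}$ and the above equation becomes
\begin{align*}
p_{\mu}= \kappa\, \frac{h^{1-p}\det(\bar{\nabla}^2h+hI)}{|\bar{\nabla}h+h\iota|^n},
\end{align*}
an $L_p$ analogue of the integral-curvature equation; the case $\Psi=\log t$ corresponds to $p=0$ above.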
\section{A solution to the Musielak-Orlicz-Gauss image problem}\label{solution-8-25} \setcounter{equation}{0}
Our goal in this section is to provide solutions to Problems \ref{MOGIP} and \ref{MOGIP-p}, mainly under the condition that $G$ is strictly decreasing in its second variable. Let $\lambda\in\mathcal{M}$ and $\mu$ be nonzero finite Borel measures on $S^{n-1}$. Let $G: S^{n-1} \times (0, \infty)\to (0,\infty)$ be a continuous function and $\Psi\in \mathcal{C}_{d}$. Consider the following two optimization problems:
\begin{align}
& \inf\left\{\widetilde{V}_{\Psi,\mu}(Q): \widetilde{V}_{G,\lambda}(Q^*)=\widetilde{V}_{G,\lambda}(B^n)\ \mathrm{and}\ Q\in
\mathscr{K}_{(o)}^n\right\}, \label{geom} \\ &\inf\left\{\widetilde{V}_{\Psi,\mu}(f): \widetilde{V}_{G,\lambda}(\langle f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n)\ \mathrm{and}\ f \in C^{+}\left(S^{n-1}\right)\right\}, \label{fun} \end{align} where $\widetilde{V}_{G,\lambda}(K)$ and
$\widetilde{V}_{G,\lambda}(f)$ are given in \eqref{vlam} and
\eqref{vlam2}, i.e.,
$$
\widetilde{V}_{G,\lambda}(K)=\int_{S^{n-1}} G(\xi, \rho_K(\xi))\, d\lambda(\xi) \ \ \mathrm{and} \ \ \widetilde{V}_{G,\lambda}(f)=\int_{S^{n-1}}G(\xi, f(\xi))\,d\lambda(\xi).$$ Recall that $\psi_{\xi}(t)=\Psi(\xi, t)$ and $\psi_{\xi}^{-1}$ is its inverse on $(0, \infty)$.
The following lemma plays an important role in solving Problem \ref{MOGIP-p}.
\begin{lemma}\label{m1} Let $\lambda\in\mathcal{M}$ and $\mu$ be nonzero finite Borel measures on $S^{n-1}$. Suppose that $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}_{d}$ such that $C_{\Theta}(Q, S^{n-1})\neq 0$ for all $Q\in \mathscr{K}_{(o)}^n$. If the optimization problem \eqref{geom} admits a solution, say $K\in \mathscr{K}_{(o)}^n$,
then $K_0=K^*$ is a solution to Problem \ref{MOGIP-p}, namely, the following holds:
\begin{align}\label{solution-20-8-2}
\frac{\mu}{|\mu|}=\frac{ C_{\Theta}(K_0 ,\cdot)}{ C_{\Theta}(K_0, S^{n-1})}.
\end{align}
\end{lemma}
\begin{proof} For any $f\in C^+(S^{n-1})$ such that
$\widetilde{V}_{G,\lambda}(\langle
f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n)$, it follows from \eqref{hk} that $\langle \rho_{\langle f\rangle}\rangle=\langle f\rangle$, which further implies $\widetilde{V}_{G,\lambda}(\langle \rho_{\langle
f\rangle}\rangle^*)=\widetilde{V}_{G,\lambda}(\langle
f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n)$ and
$\widetilde{V}_{\Psi,\mu}(\rho_{\langle
f\rangle})=\widetilde{V}_{\Psi,\mu}(\langle
f\rangle)$. On the other hand, by $\Psi\in \mathcal{C}_{d}$ and
\eqref{le1}, one has
\begin{align*} \widetilde{V}_{\Psi,\mu}(f)=\int_{S^{n-1}}\Psi\left(\xi, f(\xi)\right) \mathrm{d} \mu(\xi) \ge \int_{S^{n-1}}\Psi\left(\xi, \rho_{\langle f\rangle}(\xi)\right) \mathrm{d} \mu(\xi) = \widetilde{V}_{\Psi,\mu}\left(\rho_{\langle f\rangle}\right). \end{align*} Hence if $K\in \mathscr{K}_{(o)}^n$ solves the optimization problem \eqref{geom}, then $\rho_K\in C^+(S^{n-1})$ solves the optimization problem \eqref{fun}.
Let $g\in C(S^{n-1})$ be an arbitrary continuous function on $S^{n-1}$. As $\rho_K\in C^+(S^{n-1})$, for sufficiently small
$\varepsilon_1,\varepsilon_2, \varepsilon$, it is meaningful to define \begin{align*}
f_{\varepsilon_1+\varepsilon,\varepsilon_2}(\xi)=\psi_{\xi}^{-1}\left(\psi_{\xi}(f_{\varepsilon_1,\varepsilon_2}(\xi))+\varepsilon
g(\xi)\right) \ \ \mathrm{and}\ \
f_{\varepsilon_1,\varepsilon_2+\varepsilon}(\xi)=\psi_{\xi}^{-1}\left(\psi_{\xi}(f_{\varepsilon_1,\varepsilon_2}(\xi))+\varepsilon\right),
\end{align*} where $f_{\varepsilon_1, \varepsilon_2}$ is given by
\begin{align}\label{feb172}
f_{\varepsilon_1, \varepsilon_2}(\xi) &=\psi_{\xi}^{-1}\left(\psi_{\xi}(\rho_{K}(\xi))+\varepsilon_1 g(\xi)+\varepsilon_2\right). \end{align} A more convenient formula for $f_{\varepsilon_1, \varepsilon_2}$ given in \eqref{feb172} is \begin{align}\label{feb172-1-2020-12}
\Psi(\xi, f_{\varepsilon_1, \varepsilon_2}(\xi)) &= \Psi(\xi, \rho_{K}(\xi))+\varepsilon_1 g(\xi)+\varepsilon_2. \end{align}
It follows from \eqref{rr} (with $f=f_{\varepsilon_1, \varepsilon_2}$) that
\begin{align}\label{variation-11-27-1---1}
\frac{\partial}{\partial\varepsilon_1}\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1,
\varepsilon_2}\rangle^*)&=
\lim_{\varepsilon\rightarrow 0} \frac{\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1+\varepsilon, \varepsilon_2}\rangle^*)-\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*)}{\varepsilon} \nonumber \\
&=
-\int_{S^{n-1}} g(\xi)\,dC_{\Theta}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*,
\xi).
\end{align} Similarly, one can also have \begin{align}\label{variation-11-27-1---2}
\frac{\partial}{\partial\varepsilon_2}\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*) =-\int_{S^{n-1}} \,dC_{\Theta}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*, \xi) =-C_{\Theta}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*, S^{n-1})\neq 0.
\end{align}
Note that $f_{\varepsilon_1, \varepsilon_2}$ depends continuously on $\varepsilon_1$ and $\varepsilon_2$. Hence, part i) in Proposition \ref{prop 11-28--1} implies that $C_{\Theta}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*,\cdot)$ depends weakly continuously on $\varepsilon_1$ and $\varepsilon_2$. Together with (\ref{variation-11-27-1---1}) and (\ref{variation-11-27-1---2}), $\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*)$ has a nonvanishing gradient that is continuous in $\varepsilon_1$ and $\varepsilon_2$; in particular, $\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*)$ is continuously differentiable in $\varepsilon_1$ and $\varepsilon_2$. Therefore, the method of Lagrange multipliers can be applied to the optimization problem \eqref{fun} to get a constant $\kappa=\kappa(g)$ such that
\begin{align}\label{lagrange method}
\frac{\partial}{\partial \varepsilon_i}\left(\widetilde{V}_{\Psi,\mu}(f_{\varepsilon_1, \varepsilon_2})+\kappa \big(\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*) - \widetilde{V}_{G,\lambda}(B^n)\big)\right)\Big|_{\varepsilon_1=\varepsilon_2=0}=0, \ \ \ i=1, 2.
\end{align}
It follows from \eqref{bi-polar--12}, \eqref{relation}, \eqref{hk} and \eqref{feb172} that
\begin{align*}
\langle f_{0,0}\rangle^*=\langle \rho_{K}\rangle^*=\langle 1/h_{K^*}\rangle^*=[h_{K^*}]=K^*.
\end{align*} Hence, $\widetilde{V}_{G,\lambda}(\langle f_{0,0}\rangle^*)=\widetilde{V}_{G,\lambda}(K^*)=\widetilde{V}_{G,\lambda}(B^n).$ Together with \eqref{vlam}, \eqref{feb172-1-2020-12}, (\ref{variation-11-27-1---1}), (\ref{variation-11-27-1---2}), and \eqref{lagrange method}, one can easily have
\begin{align}\label{identity-2020-1}
\int_{S^{n-1}} g(\xi)\,d\mu(\xi) = \kappa(g) \int_{S^{n-1}} g(\xi)\,dC_{\Theta}(K^*,
\xi) \ \ \ \mathrm{and}\ \ \ |\mu|= \kappa(g) C_{\Theta}(K^*, S^{n-1}).
\end{align} In particular, $\kappa=\kappa(g)$ is a constant independent of the choice of $g\in C(S^{n-1})$:
\begin{align*}
\kappa = \frac{|\mu|}{C_{\Theta}(K^*,S^{n-1})}.
\end{align*} Thus, by \eqref{identity-2020-1}, the following formula holds for any $g\in C(S^{n-1})$: \begin{align*}
\int_{S^{n-1}} g(\xi)\,d\mu(\xi) =\frac{|\mu|}{C_{\Theta}(K^*,S^{n-1})} \ \int_{S^{n-1}} g(\xi)\,dC_{\Theta}(K^*,
\xi).
\end{align*} This concludes that, by letting $K_0=K^*$, \eqref{solution-20-8-2} holds on $\mathcal{B}$.
\end{proof}
\begin{lemma} \label{Lemma-8-23} Let $G\in \mathscr{G}_d$ and $\mu$ be a nonzero finite Borel measure on $S^{n-1}$ that is not concentrated on any closed hemisphere. Assume that $\{K_i\}_{i\in \mathbb{N}}\subseteq \mathscr{K}_{(o)}^n$ is a sequence such that \begin{align} \label{sup-8-25}\sup_{i\in \mathbb{N}}\int_{S^{n-1}}G(\xi, \rho_{K^*_i}(\xi))\,d\mu(\xi)<+\infty. \end{align} Then, the sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded, namely, there exists a finite constant $R$ such that $K_i\subseteq RB^n$ for all $i\in \mathbb{N}$.
\end{lemma}
\begin{proof} For each $i\in \mathbb{N}$, let $R_i=\max_{v\in S^{n-1}}\rho_{K_{i}}(v)$ and $v_i\in S^{n-1}$ be such that $R_i=\rho_{K_{i}}(v_i)$. We now claim that $\sup_{i\in \mathbb{N}} R_{i}<+\infty$, arguing by contradiction. Assume not; then a subsequence $\{R_{i_j}\}_{j\in \mathbb{N}}$ of $\{R_i\}_{i\in \mathbb{N}}$ can be obtained so that $v_{i_j}\rightarrow v_0\in S^{n-1}$ (due to the compactness of $S^{n-1}$) and $\lim_{j\to\infty}R_{i_j}=+\infty$. For $v\in S^{n-1}$ and $\beta \in (0,1)$, let
\begin{align*} \Sigma_{\beta}(v)=\{u\in S^{n-1}:u\cdot v\ge \beta \}. \end{align*}
As $\mu$ is not concentrated on any closed hemisphere, a simple argument by the monotone convergence theorem implies the existence of $\beta_0\in (0, 1)$ such that $\mu(\Sigma_{\beta_0}(v_0))>0$. For any $\xi \in \Sigma_{\beta_0}(v_0)$ and $i\in \mathbb{N}$, one has $\rho_{K_{i}}(v_i)v_i\in K_i$ and hence $$h_{K_i}(\xi)\ge \rho_{K_{i}}(v_i)(\xi\cdot v_i)=R_i(\xi\cdot v_i). $$ The continuity of the function $u\mapsto \xi\cdot u$ and the facts that $\xi\cdot v_0\geq \beta_0$ and $v_{i_j}\to v_0$ yield the existence of $j_0\in \mathbb{N}$ such that for all $j\geq j_0$, $$h_{K_{i_j}}(\xi)\ge R_{i_j}(\xi\cdot v_{i_j})\geq \frac{R_{i_j} \beta_0}{2}. $$
By \eqref{bi-polar--12}, \eqref{vlam},
\eqref{sup-8-25} and $G
\in\mathscr{G}_d$, one gets, for all $j\geq j_0$,
\begin{align*}
+ \infty &>\int_{S^{n-1}}G(\xi, \rho_{K^*_{i_j}}(\xi))\,d\mu(\xi)= \int_{S^{n-1}} G(\xi, h_{K_{i_j}}(\xi)^{-1})\,d\mu(\xi) \ge \int_{\Sigma_{\beta_0}(v_0)} G\Big(\xi, \frac{2} {R_{i_j} \beta_0}\Big)\,d\mu(\xi). \end{align*}
As $\lim_{t\rightarrow 0^+} G(\xi, t)=+ \infty$ for each $\xi\in S^{n-1}$ and $\lim_{j\to\infty}R_{i_j}=+\infty$, Fatou's lemma yields that \begin{align*} + \infty> \liminf_{j \to \infty} \int_{\Sigma_{\beta_0}(v_0)} G\Big(\xi, \frac{2} {R_{i_j} \beta_0}\Big)\,d\mu(\xi) \geq \int_{\Sigma_{\beta_0}(v_0)} \liminf_{j \to \infty} G\Big(\xi, \frac{2} {R_{i_j} \beta_0}\Big)\,d\mu(\xi) = + \infty. \end{align*}
This is a contradiction and hence the sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded. \end{proof}
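We remark, as a guide to the hypotheses, that the only features of $G\in \mathscr{G}_d$ invoked in the above proof are the positivity of $G$, the monotone decrease of $G(\xi, \cdot)$, and the blow-up $\lim_{t\rightarrow 0^+}G(\xi, t)=+\infty$; a model function exhibiting all three is $G(\xi, t)=t^{-q}$ with $q>0$.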
\begin{lemma} \label{Lemma-8-23-2} Let $\Psi\in \mathcal{C}_d$ be such that \begin{align}\label{growth-8-25} \lim_{t\rightarrow 0^+} \Psi(\xi, t)=+\infty \ \ \mathrm{for\ each}\ \xi\in S^{n-1}. \end{align} Let $\mu$ be a nonzero finite Borel measure on $S^{n-1}$ that is not concentrated on any closed hemisphere. Assume that the sequence $\{K_i\}_{i\in \mathbb{N}}\subseteq \mathscr{K}_{(o)}^n$ is uniformly bounded such that \begin{align} \label{max-8-23} \sup_{i\in \mathbb{N}} \int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)\right)\,d\mu(\xi)<+\infty. \end{align} Then, there exists a subsequence of $\{K_i\}_{i\in \mathbb{N}}$ which converges to some $L\in \mathscr{K}_{(o)}^n$.
\end{lemma}
\begin{proof} Let $R$ be a finite constant such that $K_i\subseteq RB^n$ for all $i\in \mathbb{N}$. Applying the Blaschke selection theorem to $\{K_i\}_{i\in \mathbb{N}}$, a convex compact set $L\subseteq {\mathbb{R}^n}$ and a subsequence of $\{K_i\}_{i\in \mathbb{N}}$ can be found (which will still be denoted by $K_i$), such that $K_i\to L$ in the Hausdorff metric.
We now claim that $L\in\mathscr{K}_{(o)}^n$. Assuming the contrary, there exist $w_0\in S^{n-1}$ and $\beta_1>0$ such that $0=h_{L}(w_0)= \lim_{i\rightarrow \infty} h_{K_i}(w_0)$ and $\mu(\Sigma_{\beta_1}(w_0))>0$, where the latter follows from the fact that $\mu$ is a nonzero finite Borel measure not concentrated on any closed
hemisphere. From \eqref{vlam}, \eqref{max-8-23}, $\Psi\in \mathcal{C}_d$, and $K_i\subseteq R B^n$ for all $i\in \mathbb{N}$, one has
\begin{align}
+ \infty&>
\liminf_{i\rightarrow \infty} \int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)\right)\,d\mu(\xi) \nonumber \\ &\ge \liminf_{i\rightarrow \infty} \int_{\Sigma_{\beta_1}(w_0)}\Psi \left(\xi, \rho_{K_i}(\xi) \right)\,d\mu(\xi)+\int_{S^{n-1}\setminus \Sigma_{\beta_1}(w_0)}\Psi \left(\xi, R\right)\,d\mu(\xi) \nonumber \\&
\ge \liminf_{i\rightarrow \infty} \int_{\Sigma_{\beta_1}(w_0)}\Psi \left(\xi, \rho_{K_i}(\xi) \right)\,d\mu(\xi)
+\mu(S^{n-1}\setminus \Sigma_{\beta_1}(w_0)) \min_{u\in S^{n-1}} \Psi \left(u, R \right).\label{con-83}
\end{align} If $v\in
\Sigma_{\beta_1}(w_0)$, then $\beta_1 \rho_{K_i}(v) \leq \rho_{K_i}(v)v\cdot w_0\leq h_{K_i}(w_0).$ Thus $\rho_{K_i}\rightarrow 0$ uniformly on $\Sigma_{\beta_1} (w_0)$ as $i\to\infty$, due to $ \lim_{i\rightarrow \infty} h_{K_i}(w_0)=0$. This further yields, by \eqref{growth-8-25}, for any $\xi\in S^{n-1}$, $$ \liminf_{i\rightarrow \infty}\Psi \left(\xi, \rho_{K_i}(\xi) \right)=+\infty.$$
A contradiction can be obtained by \eqref{con-83}: \begin{align*} +\infty &
> \liminf_{i\rightarrow \infty} \int_{\Sigma_{\beta_1}(w_0)}\Psi \left(\xi, \rho_{K_i}(\xi) \right)\,d\mu(\xi)
+\mu(S^{n-1}\setminus \Sigma_{\beta_1}(w_0)) \min_{u\in S^{n-1}} \Psi \left(u, R \right)\\&
\ge \int_{\Sigma_{\beta_1}(w_0)} \liminf_{i\rightarrow \infty}\Psi \left(\xi, \rho_{K_i}(\xi) \right)\,d\mu(\xi)+\mu(S^{n-1}\setminus \Sigma_{\beta_1}(w_0)) \min_{u\in S^{n-1}} \Psi \left(u, R \right)=+\infty,
\end{align*} where in the second inequality we have applied Fatou's lemma to the nonnegative functions $\Psi \left(\xi, \rho_{K_i}(\xi) \right)-\Psi \left(\xi, R \right)$ on $\Sigma_{\beta_1}(w_0)$. This concludes that $L \in \mathscr{K}_{(o)}^n$ as desired. \end{proof}
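Similarly, a natural model for the hypotheses on $\Psi$ in Lemma \ref{Lemma-8-23-2} is $\Psi(\xi, t)=t^{-p}$ with $p>0$, which is positive, strictly decreasing in $t$, and satisfies \eqref{growth-8-25} (assuming such a choice is admissible for $\mathcal{C}_d$).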
Now let us prove the existence of a solution to Problem \ref{MOGIP-p}.
\begin{theorem}\label{rl} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite Borel measures on $S^{n-1}$ that are not concentrated on any closed hemisphere. There exists a $K\in \mathscr{K}_{(o)}^n$ such that
\begin{align}\label{msol}
\frac{\mu}{|\mu|}=\frac{ C_{\Theta}(K,\cdot)}{ C_{\Theta}(K, S^{n-1})}
\end{align} if either $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $G\in \mathscr{G}_d$, or $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_d$.
\end{theorem}
\begin{proof} Under the assumptions on $G$ and $\Psi$, it follows from Proposition \ref{prop 11-28--1} iii) that $C_{\Theta}(Q, S^{n-1})\neq 0$ for all $Q\in \mathscr{K}_{(o)}^n$. In view of Lemma \ref{m1}, one only needs to find an $L \in \mathscr{K}_{(o)}^n$ which solves the optimization problem \eqref{geom}, i.e., $L$ must satisfy that
$\widetilde{V}_{G,\lambda}(L^*)=\widetilde{V}_{G,\lambda}(B^n)$ and $ \widetilde{V}_{\Psi,\mu}(L)=\alpha$ with
\begin{align}\label{optimization-convex-body-11-28-110}
\alpha:=\inf\left\{\widetilde{V}_{\Psi,\mu}(Q):\ \widetilde{V}_{G,\lambda}(Q^*)=\widetilde{V}_{G,\lambda}(B^n)\ \mathrm{and}\ Q\in
\mathscr{K}_{(o)}^n \right\}.
\end{align} It is clear that the infimum is taken over a nonempty subset of $\mathscr{K}_{(o)}^n$ because $B^n$ satisfies the desired constraint condition in \eqref{optimization-convex-body-11-28-110}. In particular, this shows $$\alpha\leq \widetilde{V}_{\Psi,\mu}(B^n)=\int_{S^{n-1}} \Psi(\xi, 1)\,d\mu(\xi)<+\infty.$$ Moreover, for each $\widetilde{Q}\in \mathscr{K}_{(o)}^n$, the function $c\mapsto \widetilde{V}_{G,\lambda}(c\widetilde{Q}^*)$ is continuous on $c\in (0, \infty)$ and $ \widetilde{V}_{G,\lambda}(c_0\widetilde{Q}^*)= \widetilde{V}_{G,\lambda}(B^n)$ for some $$c_0\in \Big[\min_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi), \max_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi)\Big].$$ The latter statement can be seen as follows: the facts that $G$ is strictly decreasing in its second variable and that $\min_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi)\cdot \widetilde{Q}^*\subseteq B^n \subseteq \max_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi)\cdot \widetilde{Q}^*$ yield that
$$ \widetilde{V}_{G,\lambda}\Big( \max_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi)\cdot \widetilde{Q}^*\Big)\leq \widetilde{V}_{G,\lambda}(B^n)\leq \widetilde{V}_{G,\lambda}\Big( \min_{\xi\in S^{n-1}} \rho^{-1}_{\widetilde{Q}^*}(\xi)\cdot \widetilde{Q}^*\Big), $$ and the existence of $c_0$ then follows from the intermediate value theorem.
In conclusion, the optimization problem \eqref{optimization-convex-body-11-28-110} is well-defined and admits a minimizing sequence, say $\left\{K_{i}\right\}_{i=1}^{\infty} \subseteq \mathscr{K}_{(o)}^n$, such that, by \eqref{vlam}, \begin{align}
&\widetilde{V}_{G,\lambda}(B^n)=\widetilde{V}_{G,\lambda}(K_{i}^*)= \int_{S^{n-1}}G(\xi, \rho_{K^*_i}(\xi))\,d\lambda(\xi)<+\infty, \label{sup-8-25--1}\\
\label{lim}
& \alpha=\lim _{i \rightarrow \infty}
\widetilde{V}_{\Psi,\mu}\left(K_{i}\right)= \lim_{i\rightarrow \infty} \int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)\right)\,d\mu(\xi).
\end{align}
For the case when $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $G\in \mathscr{G}_d$, one sees that \eqref{sup-8-25--1} verifies \eqref{sup-8-25}, and then Lemma \ref{Lemma-8-23} yields the uniform boundedness of $\{K_i\}_{i\in \mathbb{N}}$.
On the other hand, \eqref{lim} implies \eqref{max-8-23}, and Lemma \ref{Lemma-8-23-2} can be applied to obtain that (without loss of generality) $K_i\rightarrow L$ for some $L\in \mathscr{K}_{(o)}^n$.
For the case when $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_d$, one sees that \eqref{lim} verifies \eqref{sup-8-25}, and then Lemma \ref{Lemma-8-23} yields the uniform boundedness of $\{K_i^*\}_{i\in \mathbb{N}}$. Similarly, \eqref{sup-8-25--1} verifies \eqref{max-8-23}, and Lemma \ref{Lemma-8-23-2} can be applied to obtain that (without loss of generality) $K_i^*\rightarrow L^*$ for some $L\in \mathscr{K}_{(o)}^n$. In both cases, one has $K_i\to L\in \mathscr{K}_{(o)}^n$ and $K_i^*\to L^*$ as $i\rightarrow \infty$. It is easily checked, by the dominated convergence theorem and the fact that $\widetilde{V}_{G,\lambda}(K_{i}^*)=\widetilde{V}_{G,\lambda}(B^n)$ for all $i\in \mathbb{N}$, that
\begin{align*}
& \alpha =\lim_{i\rightarrow \infty} \widetilde{V}_{\Psi,\mu}(K_i) = \int_{S^{n-1}} \lim_{i\rightarrow
\infty}\Psi(\xi, \rho_{K_i}(\xi))\,d\mu(\xi) =
\int_{S^{n-1}}\Psi(\xi, \rho_{L}(\xi))\,d\mu(\xi)=\widetilde{V}_{\Psi,\mu}(L),\\ & \widetilde{V}_{G,\lambda}(B^n) = \int_{S^{n-1}} \lim_{i\rightarrow
\infty}G(\xi, \rho_{K^*_i}(\xi))\,d\lambda(\xi) =
\int_{S^{n-1}}G(\xi, \rho_{L^*}(\xi))\,d\lambda(\xi)=\widetilde{V}_{G,\lambda}(L^*).
\end{align*} Consequently $L\in \mathscr{K}_{(o)}^n$ solves the optimization problem \eqref{geom}. By Lemma \ref{m1}, \eqref{msol} holds for $K=L^*$.
\end{proof}
The following corollary is an easy consequence of Theorem \ref{rl} and Proposition \ref{prop 11-28--1} iii).
\begin{corollary}\label{rl-equiv} Assume that either $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $G\in \mathscr{G}_d$, or $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_d$. Let $\lambda\in \mathcal{M}$ be strictly positive on nonempty open subsets of $S^{n-1}$ and $\mu$ be a nonzero finite Borel measure on $S^{n-1}$. Then the following statements are equivalent.
\vskip 1mm \noindent i) The measure $\mu$ on $S^{n-1}$ is not concentrated on any closed hemisphere.
\vskip 1mm \noindent ii) There exists a $K\in \mathscr{K}_{(o)}^n$ such that
$$
\frac{\mu}{|\mu|}=\frac{ C_{\Theta}(K,\cdot)}{ C_{\Theta}(K, S^{n-1})}.
$$
\end{corollary}
Recall that if $\Psi\in\mathscr{G}_I$, then
$\widetilde{\Psi}(\xi, t)=\Psi(\xi, \frac{1}{t})\in\mathscr{G}_d$. Similarly, if $\Psi\in\mathcal{C}_I$, then
$\widetilde{\Psi}\in\mathcal{C}_d$. Moreover, if $\Psi$ satisfies \begin{align}\label{growth-8-25-I} \lim_{t\rightarrow \infty} \Psi(\xi, t)=+\infty \ \ \mathrm{for\ each}\ \ \xi\in S^{n-1},\end{align} then $\widetilde{\Psi}$ satisfies \eqref{growth-8-25}. Applying Theorem \ref{rl} and Corollary \ref{rl-equiv} to the triple $\widetilde{\Theta}=(G, \widetilde{\Psi}, \lambda)$, together with \eqref{wc}, one can easily get a solution to the Musielak-Orlicz-Gauss image problem (i.e., Problem \ref{MOGIP}), which is summarized below.
\begin{theorem} \label{solution-general-dual-Orlicz-main theorem-11-270} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite Borel measures on $S^{n-1}$ that are not concentrated on any closed hemisphere. Assume that either $\Psi\in \mathcal{C}_I$ satisfies \eqref{growth-8-25-I} and $G\in \mathscr{G}_d$, or $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_I$. Then, there exists a $K\in \mathscr{K}_{(o)}^n$ such that
\begin{align}\label{msol-8-5-1}
\frac{\mu}{|\mu|}=\frac{\widetilde{C}_{\Theta}(K,\cdot)}{\widetilde{C}_{\Theta}(K, S^{n-1})}.
\end{align} If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite Borel measure on $S^{n-1}$ that is not concentrated on any closed hemisphere, is also necessary for \eqref{msol-8-5-1} holding true for some $K\in \mathscr{K}_{(o)}^n$.
\end{theorem}
Theorem \ref{solution-general-dual-Orlicz-main theorem-11-270} not only gives a Musielak-Orlicz generalization of \cite[Theorem 6.4]{GHWXY}, but also provides additional, quite different, assumptions on $G$ and $\Psi$ (i.e., $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_I$) that guarantee the existence of solutions to the corresponding Minkowski type problems. In particular, the assumption that $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_I$ easily implies the existence of solutions to Problem \ref{MOMP-log-8-3},
due to \eqref{equiv-8-25}, by letting $G=-\log t\in \mathcal{C}_d$, which of course satisfies \eqref{growth-8-25}.
\begin{theorem}\label{sol-J-d} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite Borel measures on $S^{n-1}$ that are not concentrated on any closed hemisphere. For $\Psi \in \mathscr{G}_{I}$, there exists a $K \in \mathscr{K}_{(o)}^n$ such that
\begin{align}
\frac{\mu}{|\mu|}=\frac{\widetilde{J}_{\Psi, \lambda}(K, \cdot)}{\widetilde{J}_{\Psi, \lambda}\left(K,
S^{n-1}\right)}.\label{msol-8-25-1}
\end{align}
If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite Borel measure on $S^{n-1}$ that is not concentrated on any closed hemisphere, is also necessary for \eqref{msol-8-25-1} holding true for some $K\in \mathscr{K}_{(o)}^n$. \end{theorem}
The existence of solutions to the Musielak-Orlicz-Aleksandrov problem (i.e., Problem \ref{MOMP-Alex}) is an easy consequence of Theorem \ref{sol-J-d} by letting $\,d\lambda(\xi)=\,d\xi$.
\begin{corollary} Let $\Psi \in \mathscr{G}_{I}$ and $\mu$ be a nonzero finite Borel measure on $S^{n-1}$. The following two statements are equivalent.
\vskip 1mm \noindent i) The measure $\mu$ is not concentrated on any closed hemisphere. \vskip 1mm \noindent ii) There exists a $K \in \mathscr{K}_{(o)}^n$ such that $$
\frac{\mu}{|\mu|}=\frac{\widetilde{J}_{\Psi}(K, \cdot)}{\widetilde{J}_{\Psi}\left(K,
S^{n-1}\right)}.$$ \end{corollary}
\section{A solution to the Musielak-Orlicz-Gauss image problem for even data}\label{even-8-25} \setcounter{equation}{0}
In this section, we will discuss the existence of solutions to the Musielak-Orlicz-Gauss image problem for even data. Most of the proofs in this section follow along lines similar to those in Section \ref{solution-8-25}, so we will mainly focus on the differences and necessary modifications.
Recall that a convex body $K\in \mathscr{K}_{(o)}^n$ is said to be origin-symmetric if $-x\in K$ for all $x\in K$. Denote by $\mathscr{K}_e^n\subseteq \mathscr{K}_{(o)}^n$ the collection of all origin-symmetric convex bodies. Let $C_e(\Omega)$ be the set of all even continuous functions defined on $\Omega\subseteq S^{n-1}$, and $C^+_e(\Omega)$ contains all strictly positive functions in $C_e(\Omega)$. Consider the following optimization problems: \begin{align}
& \inf \left\{\widetilde{V}_{\Psi,\mu}(Q): \widetilde{V}_{G,\lambda}(Q^*)=\widetilde{V}_{G,\lambda}(B^n)\ \mathrm{and}\ Q\in
\mathscr{K}_e^n\right\}, \label{geom-int-e-I} \\ & \inf \left\{\widetilde{V}_{\Psi,\mu}(f): \widetilde{V}_{G,\lambda}(\langle f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n) \ \mathrm{and}\ f \in C^+_e \left(S^{n-1}\right)\right\}, \label{fun-int-e-I} \\ &\alpha_s:= \sup \left\{\widetilde{V}_{\Psi,\mu}(Q): \widetilde{V}_{G,\lambda}(Q^*)=\widetilde{V}_{G,\lambda}(B^n)\ \mathrm{and}\ Q\in
\mathscr{K}_e^n\right\}, \label{geom-int-e-I-825} \\ & \sup \left\{\widetilde{V}_{\Psi,\mu}(f): \widetilde{V}_{G,\lambda}(\langle f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n) \ \mathrm{and}\ f \in C^+_e \left(S^{n-1}\right)\right\}. \label{fun-int-e-I-825}
\end{align}
\begin{lemma}\label{m1-even} Let $\lambda\in\mathcal{M}$ and $\mu$ be nonzero finite even Borel measures on $S^{n-1}$. Suppose that $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}$ are such that $G(\xi, t)=G(-\xi, t)$ and $\Psi(\xi, t)=\Psi(-\xi, t)$ for all $(\xi, t)\in S^{n-1}\times (0, \infty)$, and that $C_{\Theta}(Q, S^{n-1})\neq 0$ for all $Q\in \mathscr{K}_e^n$.
\vskip 1mm \noindent i) Let $\Psi\in \mathcal{C}_d$. If \eqref{geom-int-e-I} admits a solution, say $K\in \mathscr{K}_e^n$,
then $K_0=K^*\in \mathscr{K}_e^n$ is a solution to Problem \ref{MOGIP-p}, namely, the following holds:
\begin{align}\label{solution-20-8-2-e-I}
\frac{\mu}{|\mu|}=\frac{ C_{\Theta}(K_0 ,\cdot)}{ C_{\Theta}(K_0, S^{n-1})}.
\end{align}
\noindent ii) Let $\Psi\in \mathcal{C}_I$. If \eqref{geom-int-e-I-825} admits a solution, say $K\in \mathscr{K}_e^n$,
then $K_0=K^*\in \mathscr{K}_e^n$ is a solution to Problem \ref{MOGIP-p}, namely, \eqref{solution-20-8-2-e-I} holds. \end{lemma}
\begin{proof}
Let $f\in C_e^+(S^{n-1})$ be such that
$\widetilde{V}_{G,\lambda}(\langle
f\rangle^*)=\widetilde{V}_{G,\lambda}(B^n)$. It follows from \eqref{hk} that $\widetilde{V}_{G,\lambda}(\langle \rho_{\langle
f\rangle}\rangle^*)=\widetilde{V}_{G,\lambda}(B^n)$ and
$\widetilde{V}_{\Psi,\mu}(\rho_{\langle
f\rangle})=\widetilde{V}_{\Psi,\mu}(\langle
f\rangle)$. If $\Psi\in \mathcal{C}_{d}$,
\eqref{le1} yields
\begin{align*} \widetilde{V}_{\Psi,\mu}(f)=\int_{S^{n-1}}\Psi\left(\xi, f(\xi)\right) \mathrm{d} \mu(\xi) \ge \int_{S^{n-1}}\Psi\left(\xi, \rho_{\langle f\rangle}(\xi)\right) \mathrm{d} \mu(\xi) = \widetilde{V}_{\Psi,\mu}\left(\rho_{\langle f\rangle}\right), \end{align*} and similarly, if $\Psi\in \mathcal{C}_{I}$,
\eqref{le1} yields
\begin{align*} \widetilde{V}_{\Psi,\mu}(f)=\int_{S^{n-1}}\Psi\left(\xi, f(\xi)\right) \mathrm{d} \mu(\xi) \le \int_{S^{n-1}}\Psi\left(\xi, \rho_{\langle f\rangle}(\xi)\right) \mathrm{d} \mu(\xi) = \widetilde{V}_{\Psi,\mu}\left(\rho_{\langle f\rangle}\right). \end{align*} Hence, $\rho_K\in C_e^+(S^{n-1})$ solves the optimization problem \eqref{fun-int-e-I}, if $K\in \mathscr{K}_e^n$ solves \eqref{geom-int-e-I}; while
$\rho_K\in C_e^+(S^{n-1})$ solves the optimization problem \eqref{fun-int-e-I-825}, if $K\in \mathscr{K}_e^n$ solves \eqref{geom-int-e-I-825}.
Let $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}_{I}\cup \mathcal{C}_d$ satisfy the assumptions in Lemma \ref{m1-even}. Let $g\in C_e(S^{n-1})$ be an arbitrary continuous function on $S^{n-1}$. As $\rho_K\in C_e^+(S^{n-1})$, for sufficiently small
$\varepsilon_1$ and $\varepsilon_2$, it is meaningful to define $f_{\varepsilon_1, \varepsilon_2}$ as in \eqref{feb172}.
It follows from \eqref{rr} (with $f=f_{\varepsilon_1, \varepsilon_2}$) that
\begin{align} \label{variation-8-25-even}
\frac{\partial}{\partial\varepsilon_1}\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1,
\varepsilon_2}\rangle^*)&=
-\int_{S^{n-1}} g(\xi)\,dC_{\Theta}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*,
\xi), \\ \label{variation-even-8-25-2}
\frac{\partial}{\partial\varepsilon_2}\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*) &=-C_{\Theta}(\langle f_{\varepsilon_1,\varepsilon_2}\rangle^*, S^{n-1})\neq 0.
\end{align} Again, the method of Lagrange multipliers can be applied to the optimization problems \eqref{fun-int-e-I} or \eqref{fun-int-e-I-825} to get a constant $\kappa$, independent of $g$, such that
\begin{align}\label{lagrange method-8-25-even}
\frac{\partial}{\partial \varepsilon_i}\left(\widetilde{V}_{\Psi,\mu}(f_{\varepsilon_1, \varepsilon_2})+\kappa \big(\widetilde{V}_{G,\lambda}(\langle f_{\varepsilon_1, \varepsilon_2}\rangle^*) - \widetilde{V}_{G,\lambda}(B^n)\big)\right)\Big|_{\varepsilon_1=\varepsilon_2=0}=0, \ \ \ i=1, 2.
\end{align} Note that $\widetilde{V}_{G,\lambda}(\langle f_{0,0}\rangle^*)=\widetilde{V}_{G,\lambda}(K^*)=\widetilde{V}_{G,\lambda}(B^n).$ Together with \eqref{vlam}, \eqref{feb172-1-2020-12}, (\ref{variation-8-25-even}), (\ref{variation-even-8-25-2}), and \eqref{lagrange method-8-25-even}, one can easily have, for all $g\in C_e(S^{n-1})$,
\begin{align*}
\int_{S^{n-1}} g(\xi)\,d\mu(\xi) =\frac{|\mu|}{C_{\Theta}(K^*,S^{n-1})} \ \int_{S^{n-1}} g(\xi)\,dC_{\Theta}(K^*,
\xi).
\end{align*} That is, \eqref{solution-20-8-2-e-I} holds by letting $K_0=K^*$.
\end{proof}
The existence of solutions to Problem \ref{MOGIP-p} for even data is given in the following theorem.
\begin{theorem}\label{rl-even} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite even Borel measures on $S^{n-1}$ that are not concentrated on any great subsphere. Suppose that $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}$ are such that $G(\xi, t)=G(-\xi, t)$ and $\Psi(\xi, t)=\Psi(-\xi, t)$ for all $(\xi, t)\in S^{n-1}\times (0, \infty)$.
\vskip 2mm \noindent i) If either $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $G\in \mathscr{G}_d$, or $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_d$, then there exists a $K_0\in \mathscr{K}_e^n$ such that \eqref{solution-20-8-2-e-I} holds.
If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite even Borel measure on $S^{n-1}$ that is not concentrated on any great subsphere, is also necessary for \eqref{solution-20-8-2-e-I} holding true for some $K\in \mathscr{K}_e^n$.
\vskip 2mm \noindent ii) Assume that, in addition, $\mu$ vanishes on great subspheres. If $\Psi\in \mathcal{C}_I$ and $G\in \mathscr{G}_d$, then there exists a $K_0\in \mathscr{K}_e^n$ such that \eqref{solution-20-8-2-e-I} holds.
\vskip 2mm \noindent iii) Assume that, in addition, $\mu$ vanishes on great subspheres and there exists a constant $C\in (-\infty, \infty)$, such that
\begin{align}\label{con-8-7-int} \inf_{v\in S^{n-1}} \int_{S^{n-1}} \log \left|v \cdot
\xi\right| \,d\lambda(\xi) >C.\end{align} If $\Psi\in \mathcal{C}_I$, then there exists a $K\in \mathscr{K}_e^n$ such that \begin{align}
\frac{\mu}{|\mu|}=\frac{J_{\Psi, \lambda}(K, \cdot)}{J_{\Psi, \lambda}\left(K,
S^{n-1}\right)}.\label{con-8-7-int-21-1}
\end{align}
\end{theorem}
\begin{proof} For $v\in S^{n-1}$ and $\beta\in (0,1)$, let
$$
\widehat{\Sigma}_{\beta}(v)=\{u\in S^{n-1}:| u\cdot v|\ge \beta\}.
$$
\noindent i) We first claim that the optimization problem \eqref{geom-int-e-I} has a solution under the assumptions $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $G\in \mathscr{G}_d$. The case when $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_d$ follows along the same lines.
Following the proof of Theorem \ref{rl}, the optimization problem \eqref{geom-int-e-I} is well-defined and a limiting sequence $\{K_i\}_{i\in \mathbb{N}}$ can be found such that $\widetilde{V}_{G, \lambda} (K_i^*)=\widetilde{V}_{G, \lambda} (B^n)$ for all $i\in \mathbb{N}$ and $\lim_{i\rightarrow \infty} \widetilde{V}_{\Psi, \mu}(K_i)\leq \widetilde{V}_{\Psi, \mu}(B^n).$ In particular, $G\in \mathscr{G}_d$ and $\{K_i\}_{i\in \mathbb{N}}\subseteq \mathscr{K}_e^n$ is a sequence satisfying \eqref{sup-8-25}. This shows that the sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded, which follows from an argument similar to the proof of Lemma \ref{Lemma-8-23}, mainly with $\mathscr{K}_{(o)}^n$ replaced by $\mathscr{K}_e^n$ and with the inner product replaced by its absolute value (consequently, with $\Sigma_{\beta}(\cdot)$ replaced by $\widehat{\Sigma}_{\beta}(\cdot)$), respectively. On the other hand, as $\Psi\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\{K_i\}_{i\in \mathbb{N}}\subseteq\mathscr{K}_e^n$ satisfies \eqref{max-8-23}, there exists a subsequence of $\{K_i\}_{i\in \mathbb{N}}$ converging to some $L\in \mathscr{K}_e^n$; again this follows from an argument similar to the proof of Lemma \ref{Lemma-8-23-2}. Without loss of generality, let $K_i\to L\in \mathscr{K}_e^n$ and then $K_i^*\to L^*$. Consequently, $\widetilde{V}_{G, \lambda} (L^*)=\widetilde{V}_{G, \lambda} (B^n)$ and $\lim_{i\rightarrow \infty} \widetilde{V}_{\Psi, \mu}(K_i)= \widetilde{V}_{\Psi, \mu}(L),$ namely, $L\in \mathscr{K}_e^n$ solves the optimization problem \eqref{geom-int-e-I}. This, together with Lemma \ref{m1-even}, yields $K_0=L^*\in \mathscr{K}_e^n$ satisfying \eqref{solution-20-8-2-e-I}.
Recall that if $\lambda\in \mathcal{M}$ is strictly positive on nonempty open subsets of $S^{n-1}$, then $C_{\Theta}(K, \cdot)$ is not concentrated on any closed hemisphere of $S^{n-1}$. As $C_{\Theta}(K, \cdot)$ is an even measure, then $C_{\Theta}(K, \cdot)$ is in fact not concentrated on any great subsphere. Consequently, if, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite even Borel measure on $S^{n-1}$ not concentrated on any great subsphere, is also necessary for \eqref{solution-20-8-2-e-I} holding true for some $K\in \mathscr{K}_e^n$.
\vskip 2mm \noindent ii) We first claim that the optimization problem \eqref{geom-int-e-I-825} has a solution under the assumptions $\Psi\in \mathcal{C}_I$ and $G\in \mathscr{G}_d$. Indeed, following the proof of Theorem \ref{rl}, the optimization problem \eqref{geom-int-e-I-825} is well-defined and a limiting sequence $\{K_i\}_{i\in \mathbb{N}}$ can be found such that $\widetilde{V}_{G, \lambda} (K_i^*)=\widetilde{V}_{G, \lambda} (B^n)$ for all $i\in \mathbb{N}$ and \begin{align} \alpha_s=\lim_{i\rightarrow \infty} \widetilde{V}_{\Psi, \mu}(K_i)=\lim _{i \rightarrow \infty} \int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)
\right)\,d\mu(\xi) \geq \widetilde{V}_{\Psi, \mu}(B^n) >0.\label{lim-a-s}
\end{align} In particular, $G\in \mathscr{G}_d$ and $\{K_i\}_{i\in \mathbb{N}}\subseteq \mathscr{K}_e^n$ satisfy \eqref{sup-8-25}; this implies that the sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded (by an argument similar to the proof of Lemma \ref{Lemma-8-23}). Let $R$ be the constant such that $K_i\subseteq RB^n$ for all $i\in \mathbb{N}$. The Blaschke selection theorem can be applied to get a compact convex set $L\subseteq{\mathbb{R}^n}$ and a subsequence of $\{K_i\}_{i\in \mathbb{N}}$ (which will still be denoted by $\{K_i\}_{i\in \mathbb{N}}$) such that $K_i\to L$ in the Hausdorff metric.
Clearly $L$ is origin-symmetric. If $L\notin \mathscr{K}_e^n$, then there exists $w_0\in S^{n-1}$, such that \begin{align}\label{L belong} L\subseteq w_{0}^{\perp}=\big\{x\in \mathbb{R}^n: \ x\cdot w_0=0\big\}.\end{align} The fact that $\Psi\in \mathscr{G}_I$ implies $0<C_1:=\max_{u\in S^{n-1}} \Psi \left(u, R\right)<\infty$. As $\mu$ is a nonzero finite even Borel measure that vanishes on all great subspheres of
$S^{n-1}$, it can be checked that $$0=\mu( S^{n-1}\cap w_0^{\perp})=\mu\Big(\bigcap\limits_{\beta\in (0, 1]}(S^{n-1}\setminus
\widehat{\Sigma}_{\beta}(w_0))\Big)=\lim_{\beta\rightarrow 0^+} \mu (S^{n-1}\setminus
\widehat{\Sigma}_{\beta}(w_0)).$$ Let $\varepsilon>0$. Then there exists $\beta_{\varepsilon}\in (0, 1)$ such that
\begin{align*}
\mu(S^{n-1}\setminus \widehat{\Sigma}_{ \beta_{\varepsilon}}(w_0))
<\frac{\varepsilon}{2C_1}.
\end{align*} As $\Psi\in \mathscr{G}_I$ and $K_i\subseteq RB^n$ for all $i\in \mathbb{N}$, one has \begin{align}
\int_{S^{n-1}\setminus \widehat{\Sigma}_{ \beta_{\varepsilon}}(w_0)}\Psi \left(\xi, \rho_{K_i}(\xi)
\right)\,d\mu(\xi)\leq \int_{S^{n-1}\setminus \widehat{\Sigma}_{ \beta_{\varepsilon}}(w_0)}\Psi \left(\xi, R
\right)\,d\mu(\xi)<\frac{\varepsilon}{2}.\label{s-con-8-7}
\end{align}
It follows from \eqref{L belong} that $\lim_{i\rightarrow
\infty} h_{K_i}(w_0)=h_{L}(w_0)=0$. This further implies that $\rho_{K_i}\rightarrow 0$ uniformly on
$\widehat{\Sigma}_{\beta} (w_0)$ as $i\to\infty$ for any $\beta\in (0, 1)$ (see a similar argument in Lemma \ref{Lemma-8-23-2}). The dominated convergence theorem and $\Psi \in \mathscr{G}_{I}$ yield the existence of $i_{\varepsilon} \in \mathbb{N}$, such that, for
all $i\ge i_{\varepsilon} $,
\begin{align*}
\int_{\widehat{\Sigma}_{\beta_{\varepsilon}}(w_0)}\Psi \left(\xi, \rho_{K_i}(\xi)
\right)\,d\mu(\xi)<\frac{\varepsilon}{2}.
\end{align*} Together with \eqref{s-con-8-7}, one sees, for all $i\ge i_{\varepsilon} $,
\begin{align*}
\int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)
\right)\,d\mu(\xi)<\varepsilon.
\end{align*}
Taking \eqref{lim-a-s} into account, one gets a contradiction as follows: $$0= \lim _{i \rightarrow \infty} \int_{S^{n-1}}\Psi \left(\xi, \rho_{K_i}(\xi)\right)\,d\mu(\xi) = \alpha_s \geq \widetilde{V}_{\Psi, \mu}\left(B^n\right)>0.$$ This contradiction shows that $L\in \mathscr{K}_e^n$.
In conclusion, one gets an origin-symmetric convex body $L\in \mathscr{K}_e^n$ such that $K_i\to L$ and then $K_i^*\to L^*$. Moreover, $\alpha_s= \widetilde{V}_{\Psi,\mu}\left(L\right)$ and $ \widetilde{V}_{G,\lambda}(L^*)=\widetilde{V}_{G,\lambda}(B^n)$, namely, $L\in \mathscr{K}_e^n$ solves the optimization problem \eqref{geom-int-e-I-825}. This, together with Lemma \ref{m1-even}, yields $K_0=L^*\in \mathscr{K}_e^n$ satisfying \eqref{solution-20-8-2-e-I}.
\vskip 2mm \noindent iii) In view of \eqref{pd-int}, to find a $K\in \mathscr{K}_e^n$ satisfying \eqref{con-8-7-int-21-1}, one needs to solve the optimization problem \eqref{geom-int-e-I-825} in the case $G=\log t$ (or, equivalently, $G=-\log t$, as can be seen from \eqref{nn} and \eqref{pd-int}). In this case, $\widetilde{V}_{G,\lambda}(\cdot)$ has to be replaced by the functional $\mathcal{E}_{\lambda}(K)$ defined in \eqref{rela}: \begin{align*} \mathcal{E}_{\lambda}(K)=
\widetilde{V}_{\log,\lambda}(K^*)=\int_{S^{n-1}} \log \rho_{K^*}(\xi) d \lambda(\xi).
\end{align*} To be more precise, the optimization problem \eqref{geom-int-e-I-825} now becomes
\begin{align}\label{max-21-1}
\alpha_s:=\sup\left\{\widetilde{V}_{\widetilde{\Psi},\mu}(Q): \mathcal{E}_{\lambda}(Q)=0\
\mathrm{and}\ Q\in
\mathscr{K}_e^n\right\}.
\end{align}
Following the proof of Theorem \ref{rl}, the optimization problem \eqref{max-21-1} is well-defined and a limiting sequence $\{K_i\}_{i\in \mathbb{N}}$ can be found such that
$\mathcal{E}_{\lambda}(K_{i})=0$ for all $i\in \mathbb{N}$ and \begin{align*} \alpha_s=
\lim _{i \rightarrow \infty}
\widetilde{V}_{\widetilde{\Psi},\mu}\left(K_{i}\right)=\lim _{i \rightarrow \infty} \int_{S^{n-1}}\widetilde{\Psi } \left(\xi, \rho_{K_i}(\xi)
\right)\,d\mu(\xi)\geq \widetilde{V}_{\widetilde{\Psi},\mu}\left(B^n\right)>0.
\end{align*}
The sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded. To see this, let $R_i=\max_{v\in S^{n-1}}\rho_{K_{i}}(v)=\rho_{K_{i}}(v_i)$ and, after passing to a subsequence, $v_i\rightarrow v_0\in S^{n-1}$. Assume that $\sup_{i\in \mathbb{N}} R_i=\infty$. As $K_i\in \mathscr{K}_e^n$ for $i\in \mathbb{N}$, one has $h_{K_{i}}(\xi) \geq R_{i}\left|v_{i} \cdot \xi\right|$ for all $\xi \in S^{n-1}$. By \eqref{bi-polar--12}, \eqref{rela}, \eqref{con-8-7-int}, and $ \mathcal{E}_{\lambda}\left(K_{i}\right)=0$ for all $i\in \mathbb{N}$, one gets, for all $i\in \mathbb{N}$,
\begin{align*}
0 =\int_{S^{n-1}} \log h_{K_{i}}(\xi)
d\lambda(\xi) \geq \int_{S^{n-1}} \log \left|v_{i} \cdot
\xi\right| \,d\lambda(\xi)+\lambda(S^{n-1}) \log R_{i} \ge C+\lambda(S^{n-1}) \log R_{i}.
\end{align*}
Consequently, a contradiction can be obtained as follows: \begin{align*}+\infty=\sup_{i\in \mathbb{N}} \log R_{i} \leq \frac{- C} {\lambda(S^{n-1}) } <\infty.
\end{align*} Hence $\sup_{i\in \mathbb{N}} R_i<\infty$ and the sequence $\{K_i\}_{i\in \mathbb{N}}$ is uniformly bounded. The rest of the proof is then identical to the proof for ii). \end{proof}
The existence of solutions to the Musielak-Orlicz-Gauss image problem (i.e., Problem \ref{MOGIP}) for even measures can be obtained by applying Theorem \ref{rl-even} to the triple $\widetilde{\Theta}=(G, \widetilde{\Psi}, \lambda)$ and by \eqref{wc}.
\begin{theorem}\label{rl-even-ee} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite even Borel measures on $S^{n-1}$ that are not concentrated on any great subsphere. Suppose that $G\in \mathcal{C}$ and $\Psi\in \mathcal{C}$ are such that $G(\xi, t)=G(-\xi, t)$ and $\Psi(\xi, t)=\Psi(-\xi, t)$ for all $(\xi, t)\in S^{n-1}\times (0, \infty)$.
\vskip 2mm \noindent i) If either $\Psi\in \mathcal{C}_I$ satisfies \eqref{growth-8-25-I} and $G\in \mathscr{G}_d$, or $G\in \mathcal{C}_d$ satisfies \eqref{growth-8-25} and $\Psi\in \mathscr{G}_I$, there exists a $K_0\in \mathscr{K}_e^n$ such that \begin{align}\label{solution-20-8-2-e-I-II}
\frac{\mu}{|\mu|}=\frac{\widetilde{C}_{\Theta}(K_0 ,\cdot)}{\widetilde{C}_{\Theta}(K_0, S^{n-1})}.
\end{align}
If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is a nonzero finite even Borel measure on $S^{n-1}$ that is not concentrated on any great subsphere, is also necessary for \eqref{solution-20-8-2-e-I-II} holding true for some $K_0\in \mathscr{K}_e^n$.
\vskip 2mm \noindent ii) Assume that, in addition, $\mu$ vanishes on great subspheres. If $\Psi\in \mathcal{C}_d$ and $G\in \mathscr{G}_d$, then there exists a $K_0\in \mathscr{K}_e^n$ satisfying \eqref{solution-20-8-2-e-I-II}.
\end{theorem}
The existence of solutions to Problem \ref{MOMP-log-8-3} for even measures can be established as well. Part i) of the following theorem is obtained from \eqref{equiv-8-25} and Theorem \ref{rl-even-ee}, and by letting $G=-\log t\in \mathcal{C}_d$, which satisfies \eqref{growth-8-25}. Part ii) of the following theorem is obtained by \eqref{J-relation} and by applying Part iii) of Theorem \ref{rl-even} to the function $\widetilde{\Psi}(\xi, t)=\Psi\big(\xi, 1/t\big)$ instead of $\Psi$ itself.
\begin{theorem} \label{th-21-1} Let $\lambda\in \mathcal{M}$ and $\mu$ be two nonzero finite even Borel measures on $S^{n-1}$ that are not concentrated on any great subsphere.
Let $\Psi\in \mathcal{C}$ be such that $\Psi(\xi, t)=\Psi(-\xi, t)$ for all $(\xi, t)\in S^{n-1}\times (0, \infty)$.
\vskip 2mm \noindent i) If $\Psi\in \mathscr{G}_I$, then there exists a $K\in \mathscr{K}_e^n$ such that \begin{align}
\frac{\mu}{|\mu|}=\frac{\widetilde{J}_{\Psi, \lambda}(K, \cdot)}{\widetilde{J}_{\Psi, \lambda}\left(K,
S^{n-1}\right)}.\label{sol-int-e-8-7}
\end{align}
If, in addition, $\lambda$ is strictly positive on nonempty open subsets of $S^{n-1}$, then the assumption on $\mu$, i.e., $\mu$ is not concentrated on any great subsphere of $S^{n-1}$, is also necessary for \eqref{sol-int-e-8-7} holding for some $K\in \mathscr{K}_e^n$.
\vskip 2mm \noindent ii) Assume that, in addition, $\mu$ vanishes on great subspheres and there exists a constant $C\in (-\infty, \infty)$ such that \eqref{con-8-7-int} holds. If $\Psi\in \mathcal{C}_d$, then there exists a $K\in \mathscr{K}_e^n$ satisfying \eqref{sol-int-e-8-7}.
\end{theorem}
Note that $\int_{\{\xi:v\cdot\xi\neq0\}}\log |\xi\cdot v|\,d\xi=C$ for a finite constant $C$ independent of $v\in S^{n-1}$.
Applying Theorem \ref{th-21-1} to the measure $\,d\lambda(\xi)=\,d\xi$, one can obtain a solution to the Musielak-Orlicz-Aleksandrov problem (i.e., Problem \ref{MOMP-Alex}) for even measures.
\begin{corollary} \label{MMM-2021-1-16} Let $\mu$ be a nonzero finite even Borel measure on $S^{n-1}$ and $\Psi\in \mathscr{G}_I\cup \mathscr{G}_d$ be such that $\Psi(\xi, t)=\Psi(-\xi, t)$ for all $(\xi, t)\in S^{n-1}\times (0, \infty)$.
\vskip 1mm \noindent i) If $\Psi\in \mathscr{G}_I$, then
there exists a $K\in \mathscr{K}_e^n$ such that \begin{align}
\frac{\mu}{|\mu|}=\frac{\widetilde{J}_{\Psi}(K, \cdot)}{\widetilde{J}_{\Psi}\left(K,
S^{n-1}\right)}\label{sol-int-e-8-7-sss}
\end{align} if and only if $\mu$ is a nonzero finite even Borel measure on $S^{n-1}$ that is not concentrated on any great subsphere of $S^{n-1}$.
\vskip 1mm \noindent ii) Assume that, in addition, $\mu$ vanishes on all great subspheres of $S^{n-1}$. If $\Psi \in \mathscr{G}_d$, then there exists $K\in \mathscr{K}_e^n$ such that \eqref{sol-int-e-8-7-sss} holds.
\end{corollary}
\vskip 2mm \noindent {\bf Acknowledgement.} The research of QH has been supported by NSFC (No. 11701219) and AARMS postdoctoral fellowship. The research of DY has been supported by an NSERC grant, Canada. The research of BZ has been supported by NSFC (No.\ 11971005). The authors are greatly indebted to Dr. Shaoxiong Hou for the introduction of the Musielak-Orlicz functions.
\section{Introduction}
\subsection{Background}
Let $X$ and $Y$ be metric spaces. A map $\phi: X \to Y$ is a quasiisometry if there exist $\lambda \geqslant 1$ and $c \geqslant 0$ such that
$\lambda^{-1} d(x,x') - c \leqslant d(\phi(x), \phi(x')) \leqslant \lambda d(x,x') + c$ and for every $y$ in $Y$, $d(y, \phi(X)) \leqslant c$.
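As a basic illustration (recorded here only for orientation, and not used later), the inclusion $\phi : \mathbf Z \hookrightarrow \mathbf R$ is a quasiisometry with $\lambda = 1$ and $c = 1$: it is isometric onto its image, every $y \in \mathbf R$ satisfies $d(y, \phi(\mathbf Z)) \leqslant 1/2$, and the floor map $y \mapsto \lfloor y \rfloor$ inverts it up to bounded error.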
Let a locally compact, compactly generated group $G$ act continuously co-compactly properly by isometries on a locally compact geodesic space $X$; we call $X$ a geometric model of $G$. Every such $G$ has a geometric model (e.g.\ Cayley graphs if it is finitely generated, Riemannian metrics if it is a connected Lie group), and two geometric models of a given $G$ will always be equivariantly quasiisometric. Thus one can speak of quasiisometries between compactly generated locally compact groups.
Quasiisometries arose from the interpretation by Margulis of the work of Mostow on the rigidity of locally symmetric spaces \cite{MargulisMostow}.
Specifically, Margulis conjectured that a quasiisometry of a higher rank symmetric space $X$ should lie at bounded distance from an isometry, implying Mostow rigidity for the co-compact lattices in $X$, but also the fact that any finitely generated group $G$ quasiisometric to $X$ must surject with finite kernel onto such a uniform lattice.
This was first proved by Kleiner and Leeb using asymptotic cones, a tool introduced earlier by Gromov, in the form recast by van den Dries and Wilkie \cite{KleinerLeebQI}.
The interplay of quasiisometries and asymptotic cones can actually be expressed in the following way: between geodesic metric spaces, a map is a quasiisometry if and only if it goes through any asymptotic cone (with possibly moving observation centers); see \S\ref{subsec:going-through-cones} for a precise statement.
Kleiner and Leeb's theorem is part of a more general principle which, in contrast with Mostow rigidity, makes sense (and is stated below) for locally compact compactly generated groups.
\begin{theorem*}[Many authors, see {\cite[Theorem 19.25]{CornulierQIHLC}} and the references there]
Let $G$ be a compactly generated locally compact group and let $X$ be a Riemannian symmetric space of non-compact type. The following are equivalent:
\begin{enumerate}[{\rm (1)}]
\item
$G$ is quasiisometric to $X$.
\item
$X$ is a Riemannian geometric model for $G$.
\end{enumerate}
Moreover, if $G$ is a Lie group isomorphic to a closed subgroup of upper triangular real matrices (call such groups completely solvable), then the former are equivalent to:
\begin{enumerate}[{\rm (1)}]
\setcounter{enumi}{2}
\item
$G$ is isomorphic to a maximal completely solvable\footnote{Beware that the maximal solvable subgroups of $\operatorname{Isom}(X)$ (which is a real Lie group) are not always completely solvable; they only have a co-compactly embedded such subgroup.} subgroup of $\operatorname{Isom}(X)$.
\end{enumerate}
\end{theorem*}
The case where $G$ is finitely generated and $X = \mathbb H_{\mathbf R}^n$, $n \geqslant 3$, is, up to the formulation, due to Tukia \cite{TukiaQC2mob}, and was among the early results motivating the first formulation of quasiisometric rigidity by Gromov \cite{GromovQIprogram}.
Gromov almost simultaneously proposed a vast programme of classifying finitely generated groups and isometrically homogeneous spaces up to quasiisometry \cite{GromovAsymptoticHomogeneous}.
For nonsemisimple connected or nonarchimedean Lie groups and their lattices, this is far from being achieved today.
Between geodesic metric spaces, quasiisometries are exactly the coarse equivalences, that is, they respect the bounded coarse structure described as the family of entourages
\begin{equation}
\notag
\mathcal{E}^{O(1)} = \left\{ E \subseteq X \times X: \exists D > 0,\, \sup_{(x,x') \in E} {d_X(x,x')} \leqslant D \right\}.
\end{equation}
A broad interpretation of Gromov's programme is the following: classify the coarse structures generated by compactly generated groups, and characterize those that are generated by particular geometric models, especially the Riemannian symmetric or homogeneous spaces.
Recently, certain extensions of Gromov's questions have been addressed where coarse surjectivity is relaxed.
These are the study of the rigidity of quasiisometric embeddings (see \cite{fisher2015quasi} and \cite{FisherWhyteQIEmbedSym} for symmetric spaces) and of
the (non)-existence of coarse embeddings (see \cite{hume2020poincare} for connected Lie groups).
\subsection{Main results}
In this paper, we are interested in maps more general than quasiisometries.
In contrast with quasiisometries, these can still be characterized as going through asymptotic cones, though not through asymptotic cones for any sequence of basepoints (we elaborate on \cite{CornulierCones11} for this; see \S\ref{subsec:going-through-cones} for a precise statement). The coarse surjectivity assumption is not exactly relaxed, but adapted accordingly.
For the needs of the next definition, say that a function $u: [0,+\infty) \to (0,+\infty)$ is admissible if
$\limsup_{r \to + \infty} u(r)/r = 0$ (that is, $u$ is sublinear)
and for every $A \geqslant 1$ there exists $B<+\infty$ such that for all sequences $(r_n, s_n)$ with $1/A \leqslant \inf s_n/r_n \leqslant \sup s_n/r_n \leqslant A$,
$
\sup u(s_n)/u(r_n) \leqslant B
$.
Examples of admissible functions include $u(r) = r^\alpha \log^\beta (r) $ for $r \geqslant 2$ (and $u(r)=1$ otherwise) when $\alpha \in(-\infty, 1)$ and $\beta \in (-\infty, +\infty)$.
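As a sanity check (the normalization below is ours, chosen only to keep $u$ positive), let us verify that $u(r) = \log r$ for $r \geqslant e$, $u(r) = 1$ otherwise, is admissible: it is sublinear, and if $1/A \leqslant s_n/r_n \leqslant A$, then whenever $r_n \geqslant eA$,
\begin{equation*}
u(s_n) \leqslant \log (A r_n) = \log A + \log r_n \leqslant (1 + \log A)\, u(r_n),
\end{equation*}
while for $r_n < eA$ both $u(r_n)$ and $u(s_n)$ take values in a fixed compact subset of $(0,+\infty)$; so a suitable $B$ exists for every $A$.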
\begin{definition*}[After\footnote{
This is \cite[Definition 2.1]{cornulier2017sublinear} with a mild difference in the definition of the class of admissible functions that we make in order to include functions with limit $0$ at $\infty$ (see \S \ref{sec:coars-geometry} for why).} {\cite{cornulier2017sublinear}}]
\label{def:sbe}
Let $u$ be an admissible function.
A map $\phi: (X,o_X) \to (Y,o_Y)$ between pointed metric spaces realizes a (large-scale) $O(u)$-bilipschitz equivalence if there are $\kappa \geqslant 1$ and $c \geqslant 0$ such that, for all $x,x' \in X$ and $y \in Y$,
\begin{align}
\label{eq:sbe-1}
-cu(\vert x \vert \vee \vert x' \vert) + \frac{d_X(x,x') }{\kappa} \leqslant d_Y(\phi(x), \phi(x')) & \leqslant \kappa d_X(x,x') + cu(\vert x \vert \vee \vert x' \vert) \\
\label{eq:sbe-2}
d_Y(y, \phi(x)) & \leqslant c u (\vert y \vert),
\end{align}
where $\vert x \vert$ denotes $d_X(o_X,x)$, and ``$\vee$'' denotes max.
\end{definition*}
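A convenient source of examples is the following simple observation (ours; we assume additionally that $u$ is nondecreasing): if $\psi : X \to X$ satisfies $d_X(\psi(x), x) \leqslant c\, u(\vert x \vert)$ for every $x \in X$, then $\psi$ realizes an $O(u)$-bilipschitz equivalence with $\kappa = 1$. Indeed, the triangle inequality yields
\begin{equation*}
d_X(x,x') - 2c\, u(\vert x \vert \vee \vert x' \vert) \leqslant d_X(\psi(x), \psi(x')) \leqslant d_X(x,x') + 2c\, u(\vert x \vert \vee \vert x' \vert),
\end{equation*}
which is \eqref{eq:sbe-1}, and \eqref{eq:sbe-2} holds since $d_X(y, \psi(y)) \leqslant c\, u(\vert y \vert)$ for every $y \in X$.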
We also call $o(r)$-bilipschitz equivalence, or sublinear bilipschitz equivalence (abbreviated SBE in some places), a $\phi$ such that \eqref{eq:sbe-1} and \eqref{eq:sbe-2} hold with some unspecified strictly sublinear function in lieu of $cu$.
Quasiisometries correspond to $u\equiv 1$.
Of particular importance in this paper is $u = \log$.
Given an admissible function $u$, we consider the coarse structure on metric spaces with the following entourages:
\begin{equation}
\notag
\mathcal{E}^{O(u)} = \left\{ E \subseteq X \times X: \limsup_{r \to +\infty} \sup_{(x,x') \in E,\, \sup (d(o_X,x), d(o_X,x')) \geqslant r} \frac{d_X(x,x')}{u (\vert x \vert)} < +\infty \right\}.
\end{equation}
These are quantitative refinements of the coarse structure introduced in \cite{DranishnikovSmith}.
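For a concrete instance (ours, for illustration), take $X = \mathbf R$ and $u = \log$, normalized to be positive as before: the set $E = \{ (x, x + \log(1+\vert x \vert)) : x \in \mathbf R \}$ belongs to $\mathcal E^{O(\log)}$, since $\log(1 + \vert x \vert)/u(\vert x \vert)$ remains bounded as $\vert x \vert \to + \infty$, whereas $E \notin \mathcal E^{O(1)}$.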
$O(u)$-bilipschitz equivalences are always $\mathcal E^{O(u)}$-coarse equivalences. We prove that the converse holds between geodesic spaces when $u= \log$:
\begin{theoremintro}
\label{th:coarse-is-sbe}
Assume that $X$ and $Y$ are geodesic. Then $\phi : X \to Y$ is $O(\log)$-bilipschitz if and only if it is a coarse equivalence of $\mathcal{E}^{O(\log)}$.
\end{theoremintro}
This is a variant of the well-known fact that coarse equivalences between geodesic spaces are quasiisometries; however, the proof is significantly more involved.
Keeping quasiisometric rigidity and classification in mind, it is natural to ask:
\begin{ques}[Rigidity]
\label{ques:rigidity}
Let $u$ be as above, $u \geqslant 1$. Which compactly generated locally compact groups $G$ are $O(u)$-bilipschitz equivalent to a given symmetric space $X$?
\end{ques}
\begin{ques}[Classification]
\label{ques:classification}
Given $u$ as above, $u \geqslant 1$, classify isometrically homogeneous spaces up to $O(u)$-bilipschitz equivalence.
\end{ques}
The following theorem was stated in the introduction of the author's thesis.
While it essentially follows from combining \cite{DranishnikovSmith}, \cite{HigesPeng} and the coarse interpretation of $o(r)$-bilipschitz equivalences, it cannot be extracted from the literature at first sight, so we provide a proof here (relying on the cited works).
Recall for the statement that all maximal compact subgroups of a connected Lie group are conjugate \cite{BorelMaximaux}.
\begin{theoremintro}[After {\cite{DranishnikovSmith}} and {\cite{HigesPeng}}]
\label{th:geodim}
Let $G$ and $H$ be connected Lie groups. If there exists a $o(r)$-bilipschitz equivalence $\phi: G \to H$, then
\begin{equation}
\operatorname{geodim}(G) = \operatorname{geodim}(H),
\end{equation}
where $\operatorname{geodim}(G)$ denotes $\dim G/K$ if $K$ is any maximal compact subgroup of $G$.
Especially, if $G$ and $H$ are solvable and simply connected, then $\dim G = \dim H$.
\end{theoremintro}
The theorem actually holds for every $o(r)$-coarse equivalence $\phi$, see \S\ref{subsec:assouad-nagata}. If $G$ and $H$ are nilpotent, then $\operatorname{geodim}$ is the covering dimension of their asymptotic cones, and Theorem \ref{th:geodim} also follows from \cite{PanCBN}.
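For instance (standard computations, recorded only to illustrate the invariant), $\operatorname{geodim}(\operatorname{SL}(2,\mathbf R)) = \dim \operatorname{SL}(2,\mathbf R)/\operatorname{SO}(2) = 2$, while a simply connected solvable Lie group has trivial maximal compact subgroup, so that its $\operatorname{geodim}$ equals its dimension; this accounts for the last assertion of Theorem \ref{th:geodim}.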
Next, building on \cite{CornulierCones11}, \cite{CoTesContracting} and \cite{pallier2019conf} (which was already concerned with Question \ref{ques:classification}) we formulate below a partial answer to Question \ref{ques:rigidity} for connected Lie groups $G$ and real hyperbolic space $X$.
While this is not made apparent in the statement, all the groups obtained are either of Heintze or rank-one type, in the typology of \cite{CoTesContracting} and \cite{CCMT}.
\begin{theoremintro}
\label{th:Tukia-SBE}
Let $G$ be a Lie group with finitely many connected components and $n \geqslant 2$ an integer.
The following are equivalent:
\begin{enumerate}[{\rm (\ref{th:Tukia-SBE}.1)}]
\item
\label{sublinear-characterization}
$G$ is $O(u)$-bilipschitz equivalent to $\mathbb H^n_{\mathbf R}$, for some sublinear admissible $u$.
\item
\label{item:log-characterization}
$G$ is $O(\log)$-bilipschitz equivalent to $\mathbb H^n_{\mathbf R}$.
\item
\label{pinching-characterization}
For every $\varepsilon>0$, $G$ has a Riemannian model $X$ with $-1 \leqslant K \leqslant -1+ \varepsilon$.
\end{enumerate}
Moreover, if $G$ is completely solvable with Lie algebra $\mathfrak g$, the former conditions are equivalent to:
\begin{enumerate}[{\rm (\ref{th:Tukia-SBE}.1)}]
\setcounter{enumi}{3}
\item
\label{item:degeneration-characterization-real}
$\mathfrak g$ degenerates to the (isomorphism class of a) maximal completely solvable subalgebra $\mathfrak g_\infty$ of $\mathfrak o(n,1)$.
\item
\label{item:explicit-characterization-tukia}
The Lie algebra $\mathfrak g$ decomposes as $[\mathfrak g, \mathfrak g] \oplus \mathbf R A$, where $[\mathfrak g, \mathfrak g]$ is abelian and $\operatorname{ad}_A$ is unipotent on $[\mathfrak g, \mathfrak g]$.
\end{enumerate}
\end{theoremintro}
Here saying that $\mathfrak g$ degenerates to $\mathfrak g_\infty$ means that the Zariski closure of the orbit of $\mathfrak g$ in the variety of Lie algebra laws contains $\mathfrak g_\infty$, which occurs especially if there is a continuous family $( \varphi_t)_{t \in [0,+\infty)}$ in $\mathrm{GL}(\mathfrak g)$ and a linear isomorphism $\psi : \mathfrak g \to \mathfrak g_\infty$ such that for every $X,Y \in \mathfrak g$,
\begin{equation*}
\lim_{t \to + \infty} \varphi_t^{-1} [\varphi_t X, \varphi_t Y]_{\mathfrak g} = \psi^{-1} [\psi X, \psi Y]_{\mathfrak g_\infty}.
\end{equation*}
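For instance (a standard degeneration, recalled for illustration), every Lie algebra $\mathfrak g$ degenerates to the abelian Lie algebra of the same dimension: taking $\varphi_t = t^{-1} \operatorname{id}_{\mathfrak g}$ and $\psi = \operatorname{id}$, one computes
\begin{equation*}
\varphi_t^{-1} [\varphi_t X, \varphi_t Y]_{\mathfrak g} = t^{-1} [X, Y]_{\mathfrak g} \longrightarrow 0 \quad (t \to +\infty).
\end{equation*}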
Theorem \ref{th:Tukia-SBE} combines known results.
That (\ref{th:Tukia-SBE}.\ref{sublinear-characterization}) implies (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) rests on \cite{CoTesContracting} and \cite{pallier2019conf}, the equivalence of the last two conditions (\ref{th:Tukia-SBE}.\ref{item:degeneration-characterization-real}) and (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia}) is \cite[Theorem 6.2]{LauretDegenerations} with minor enhancement, the implication from (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) to (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia}) uses \cite{PansuDimConf}, while the fact that (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia}) implies (\ref{th:Tukia-SBE}.\ref{item:log-characterization}) is a consequence of \cite{CornulierCones11}.
When $n=2$, Theorem \ref{th:Tukia-SBE} reduces to a weak form of \cite[Corollary 1.10(2)]{cornulier2017sublinear}. There the statement is simpler since homogeneous planar metrics have constant curvature; it holds under the mere assumption that $G$ be compactly generated locally compact, and the techniques are specific, relying essentially on \cite{GabaiConFuchsAnnals} and \cite{CassonJungreis}.
For general connected Lie groups, the process of going from $\mathfrak g$ to a less complicated $\mathfrak g_\infty$ so that the simply connected $G$ and $G_\infty$ remain $O(u)$-bilipschitz equivalent has an alternative description given in \cite{CornulierCones11} (recalled here in Theorem \ref{th:cornulier-red}) which does not require degenerations.
Our formulation using degeneration is half-successful in this generality. While it also applies well when $\mathfrak g$ is nilpotent (in this case it is due to Pansu \cite{PanCBN}), we do not know whether $\mathfrak g_\infty$ is a degeneration of $\mathfrak g$ in general. This will be discussed in \S\ref{subsec:nilpotent}.
The appearance of the sectional curvature pinching in characterization (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) calls for some comments.
The sphere theorem of Berger and Klingenberg implies that for a Riemannian manifold of positive curvature, a pinching\footnote{Recall that a connected $C^2$ Riemannian manifold is $\delta$-pinched for some $\delta$ with $0 < \vert \delta \vert \leqslant 1$ if all its sectional curvatures lie between $\delta$ and $1$ if it is positively curved, or between $\delta$ and $-1$ if negatively curved.} sufficiently close to $1$ determines the homotopy type of the (finite) universal cover: namely, the latter must be a sphere.
As demonstrated by Gromov and Thurston, there is no counterpart for this in negative curvature as one constructs sequences of closed manifolds supporting negatively curved metrics, arbitrarily pinched close to $1$, albeit with vanishing first cohomology, hence not homotopy equivalent to any locally symmetric space of constant negative curvature \cite{MGromovThurstonPinching}.
This is not even repaired if one replaces homotopy equivalence with quasiisometry, as one constructs isometrically homogeneous manifolds with pinching $> -1/4$ or even arbitrarily close to $-1$ (characterized in \cite{EberleinHeber}, see \S\ref{subsec:pinching}), that are not quasiisometric to $\mathbb H^n_{\mathbf R}$ \cite{XieLargeScale}.
Theorem \ref{th:Tukia-SBE} implies the following as far as Lie groups are concerned.
\begin{corollaryintro}[of Theorem {\ref{th:Tukia-SBE}}]
\label{cor:sbe-corona}
If a connected Lie group $G$ has Riemannian models with pinching arbitrarily close to $-1$, then its sublinear Higson corona $\nu_L G$ is homeomorphic to that of a real hyperbolic space.
\end{corollaryintro}
(We recall the definition of the sublinear Higson corona in \S\ref{subsec:assouad-nagata}.)
Finally, we also characterize the Lie groups $O(u)$-bilipschitz equivalent to $\mathbb H^2_{\mathbf C}$. Following \cite{Cornulier_Focal}, say that the locally compact $G$ and $H$ are commable if there exists a finite sequence of homomorphisms with compact kernels and co-compact images (both directions allowed) between $G$ and $H$.
\begin{theoremintro}
\label{thm:groups-sbe-to-h2c}
Let $G$ be a Lie group with finitely many connected components.
The following are equivalent:
\begin{enumerate}[{\rm (\ref{thm:groups-sbe-to-h2c}.1)}]
\item
\label{item:G-SBE-to-H2C}
$G$ is $O(u)$-bilipschitz equivalent to $\mathbb H^2_{\mathbf C}$.
\item
\label{item:G-log-SBE-to-H2C}
$G$ is $O(\log)$-bilipschitz equivalent to $\mathbb H^2_{\mathbf C}$.
\item
\label{item:G-SBE-to-commable}
$G$ is {\em commable} either to the semisimple $\operatorname{SU}(2,1)$ or to the solvable $S' = H_3 \rtimes \mathbf R$, where $H_3$ is the $3$-dimensional Heisenberg group and $t \in \mathbf R$ acts by
\[ t.\exp (x,y,z) = \exp(e^t x + te^ty, e^t y, e^{2t} z) \]
in a basis of infinitesimal generators $X, Y, Z$ such that $[X,Y] = Z$.
\end{enumerate}
Moreover, if $G$ is completely solvable, the former conditions are equivalent to:
\begin{enumerate}[{\rm (\ref{thm:groups-sbe-to-h2c}.1)}]
\setcounter{enumi}{3}
\label{item:G-SBE-to-H2C-degenerates}
\item
$\mathfrak g$ degenerates to the maximal completely solvable subalgebra of $\mathfrak{u}(2,1)$,
\end{enumerate}
where $\mathfrak g$ denotes the Lie algebra of $G$.
\end{theoremintro}
The restriction that $G$ be a Lie group with finitely many connected components makes Theorems \ref{th:Tukia-SBE} and \ref{thm:groups-sbe-to-h2c} very special compared to the QI rigidity recalled above, and we benefit from some constraints of the structure theory of Lie groups. Unlike Theorem \ref{th:Tukia-SBE}, Theorem \ref{thm:groups-sbe-to-h2c} requires some additional technical work, done in \S\ref{sec:proofE}.
\subsection{Other spaces}
We know little about Question \ref{ques:classification} for higher rank symmetric spaces and in other settings, even when quasiisometric rigidity is known to hold.
At the end of this paper, we summarize the current situation for symmetric spaces of higher rank and for Fuchsian buildings; especially, we explain why their classification is still open at the time of writing.
\subsection{Organization of the paper}
\S\ref{sec:coars-geometry} is a general discussion on the theoretical status of SBE (especially, as compared to QI). It is not concerned with Lie groups and can be read independently. \S\ref{subsec:prelim} provides some preliminaries for \S\ref{sec:coars-geometry}. \S\ref{sec:proofB} and \S\ref{sec:proofE} establish the characterizations of Lie groups $O(u)$-bilipschitz equivalent to real, resp.\ complex hyperbolic space, and follow a similar scheme, so we advise reading \S\ref{sec:proofB} first.
Most of the technical input in this paper serves the proofs of Theorems \ref{th:coarse-is-sbe} and \ref{thm:groups-sbe-to-h2c} and is concentrated in \S\ref{subsec:coarse-structures} and \S\ref{subsec:pointedSphere} respectively.
SBE appears to be quite a new notion and some of the contents of this paper are rather expository in nature, including especially \S\ref{subsec:assouad-nagata} on Theorem \ref{th:geodim}, \S\ref{subsec:pinching} and \S\ref{subsec:degenerations} preparing the proof of Theorem \ref{th:Tukia-SBE}, and \S\ref{subsec:nilpotent} on general connected Lie groups.
\S\ref{subsec:hrss} and \S\ref{subsec:rafb} gather a collection of independent remarks. Finally, a number of actual Lie algebra cohomology computations (for trivial and adjoint modules) are required, in particular in Lemma \ref{lem:deg-to-bnC} and Example \ref{exm:L67}; we summarize these in Appendix \ref{app:cohomcomput}.
\subsection*{Convention, notation}
When $G,H, \ldots $ are simply connected Lie groups, $\mathfrak g, \mathfrak h, \ldots$ denote their respective Lie algebras.
We often consider semi-direct products of the form $N \rtimes \mathbf R$ or $\mathfrak n \rtimes \mathbf R$; we then write $N \rtimes_\alpha \mathbf R$ or $\mathfrak n \rtimes_\alpha \mathbf R$ meaning that the {Lie algebra} representation $\rho : \mathbf R \to \operatorname{Der}(\mathfrak n)$ (and not the Lie group representation) is determined by $1 \mapsto \alpha$.
If $V$ is a module and $n$ a nonnegative integer, we denote by $\Lambda^n V$ its $n$-fold exterior product and by $\Lambda^n V^\ast$ the $n$-fold exterior product of its dual.
If $\mathfrak g$ is a Lie (sub)algebra, $\operatorname{Vect}(\mathfrak g)$ will denote its underlying vector (sub)space. (This is useful to avoid confusions because we may sometimes consider several Lie brackets on a given space.)
\endgroup
\tableofcontents
\section{Coarse geometry and Theorems \ref{th:coarse-is-sbe} and \ref{th:geodim}}
\label{sec:coars-geometry}
This section motivates sublinear bilipschitz equivalence (Definition \ref{def:sbe}) by comparing it to the more standard notions of quasiisometry and coarse equivalence. This comparison will be made through the relations that sublinear bilipschitz equivalence enjoys with asymptotic cones and certain coarse structures. The relation to asymptotic cones is the reason why they were introduced by Cornulier in the first place, in \cite{CornulierDimCone} and then more explicitly\footnote{We should warn the reader about terminology: they were called ``cone bilipschitz'' in \cite{CornulierCones11} and ``asymptotically bilipschitz'' in \cite{DrutuKapovich}.} in \cite{CornulierCones11}, \cite{cornulier2017sublinear} (see \S\ref{subsubsec:unique} for precisely why).
At the end of this section, we show that the geometric dimension of connected Lie groups is an SBE invariant.
\subsection{Preliminaries}
\label{subsec:prelim}
\begin{definition}[Coarse equivalence and quasiisometry]
\label{def:qi-and-coarse}
Let $X$ and $Y$ be two metric spaces and let $\phi : X \to Y$.
$\phi$ is a (uniform) coarse embedding if there exist two proper functions $\rho_-$ and $\rho_+: [0,+\infty) \to [0,+\infty)$ such that for every $x,x' \in X$,
\begin{equation}
\label{eq:coarse-equiv}
\rho_-(d_X(x,x')) \leqslant d_Y(\phi(x), \phi(x')) \leqslant \rho_+(d_X(x, x')).
\end{equation}
$\phi$ is a coarse equivalence if, moreover, there exists a coarse embedding $\psi: Y \to X$ and a constant $R \geqslant 0$ such that for all $x \in X$, $d_X(\psi \circ \phi(x),x) \leqslant R$ and for all $y \in Y$, $d_Y(\phi \circ \psi(y),y) \leqslant R$; we call $\psi$ a coarse inverse.
$\phi$ is a $(\kappa, c)$-quasiisometric embedding if $\rho_-$ and $\rho_+$ can be taken affine in \eqref{eq:coarse-equiv}, namely $\rho_\pm(r) = \kappa^{\pm 1} r \pm c$. If in addition $\phi$ is a coarse equivalence, then $\phi$ is called a quasiisometry and any coarse inverse $\psi$ is also a quasiisometry; equivalently, a quasiisometry is a quasiisometric embedding $\phi$ such that $\sup_{y \in Y} d_Y(y, \phi(X)) < + \infty$.
We may define a quasiisometry only on a net, that is, a closed subspace $X^{(0)} \subseteq X$ such that $\sup_{x \in X} d(x, X^{(0)}) < + \infty$.
\end{definition}
\begin{proposition}[{See e.g. \cite[3.B.9]{CornulierHarpeMetLCGroups}}]
\label{prop:coarse-geod-qi}
Let $X$ and $Y$ be two geodesic metric spaces. Then any coarse equivalence $\phi : X \to Y$ is a quasiisometry.
\end{proposition}
\begin{proposition}
\label{prop:milnor-svarc}
Let $G$ be a compactly generated locally compact group. Then
\begin{enumerate}[{\rm (1)}]
\item
\label{item:svarc-milnor}
If $G$ acts continuously, properly and cocompactly on the locally compact geodesic spaces $X$ and $Y$, then there exists a quasiisometry $\phi: X \to Y$ such that $\sup_{(g,x) \in G \times X} d_Y(\phi(g.x), g.\phi(x)) < +\infty$.
\item
\label{item:svarc-milnor-exists}
There exists $X$ locally compact geodesic metric space and an isometric proper co-compact continuous action of $G$ on $X$.
\end{enumerate}
\end{proposition}
\eqref{item:svarc-milnor} is a consequence of \cite[Theorem 4.C.5]{CornulierHarpeMetLCGroups}. For \eqref{item:svarc-milnor-exists}, see \cite[Proposition 2.1]{CCMT}.
In this paper we call $X$ and $Y$ as in the previous proposition \emph{geometric models} for $G$.
\subsection{Admissible sublinear functions}
\begin{definition}
\label{dfn:admissible}
Call $u: [0,+\infty) \to (0,+\infty)$ admissible if
$\limsup_{r \to + \infty} u(r)/r = 0$
and for every $A \geqslant 1$ there exists $B<+\infty$ (only depending on $A$) such that for all sequences $(r_n, s_n)$ with $r_n \to + \infty$ and $1/A < \inf s_n /r_n \leqslant \sup s_n/r_n < A$, one has
\begin{equation}
\label{eq:admissible}
1/B \leqslant \liminf \frac{u(s_n)}{u(r_n)} \leqslant \limsup \frac{u(s_n)}{u(r_n)} \leqslant B.
\end{equation}
\end{definition}
\begin{lemma}
\label{lem:admissible2adm}
Let $u:[0,+\infty) \to (0,+\infty)$ be a sublinear function.
If $u$ is nondecreasing and $\limsup u(2r)/u(r) < +\infty$, resp.\ if $u$ is nonincreasing and $\liminf u(2r)/u(r) > 0$, then $u$ is admissible.
\end{lemma}
\begin{proof}
Let us consider only the case where $u$ is nondecreasing, the proof in the nonincreasing case going the same way. Let $A > 1$ and $(r_n, s_n)$ be such that $r_n \to + \infty$ and $s_n /r_n \in [1/A, A]$ for all $n$. Set $\beta = \limsup u(2r)/u(r)$. Since $u(s_n)/u(r_n) \leqslant 1$ when $s_n \leqslant r_n$, one has
\begin{align*}
\limsup \frac{u(s_n)}{u(r_n)} & = \sup \left( 1,\, \limsup_{n: s_n \geqslant r_n} \frac{u(s_n)}{u(r_n)} \right) \leqslant \beta^{\lceil \log_2 A \rceil}
\end{align*}
This is the inequality on the right in \eqref{eq:admissible} with $B= \beta^{\lceil \log_2 A \rceil}$, using that $s_n \leqslant A r_n \leqslant 2^{\lceil \log_2 A \rceil} r_n$, that $u$ is nondecreasing, and iterating the bound defining $\beta$. The left inequality is obtained by exchanging the roles of $r_n$ and $s_n$.
\end{proof}
The usefulness of Lemma \ref{lem:admissible2adm} may not be obvious. Let us give two motivations.
The first is that it ensures that the functions $u$ considered in \cite[Definition 2.4]{cornulier2017sublinear} are admissible in our sense. The second is that, while Definition \ref{dfn:admissible} allows a unified treatment for sublinear functions $u$ with $u(r) \to +\infty$ or $u(r) \to 0$ and is sufficient for our purposes in \S\ref{subsec:going-through-cones} and \S\ref{subsec:coarse-structures}, it appears that it is often easier to argue, and to prove the main statements of this section, with monotone functions $u$.
The above notion of admissible function resembles the much-studied class of (not necessarily sublinear) regularly varying functions in real analysis, but we found no implication between the two notions without further assumptions.
\subsection{Going through cones}
\label{subsec:going-through-cones}
Let $(\sigma_n)$ be a sequence of positive real numbers.
For $x_n, x'_n \in X^{\mathbf N}$, denote $x_n \sim_{\sigma_n} x'_n$ if $\sup d(x_n, x'_n)/\sigma_n <+\infty$
and $x_n \approx_{\sigma_n} x'_n$ if $\limsup d(x_n, x'_n)/\sigma_n =0$.
$\operatorname{Precone}(X,x_n, \sigma_n)$ denotes the $\sim_{\sigma_n}$-equivalence class of $(x_n)$ in $X^{\mathbf N}$.
Further, given a nonprincipal ultrafilter $\omega \in \beta \mathbf N \setminus \mathbf N$, $\operatorname{Cone}_\omega(X, x_n, \sigma_n)$ is\footnote{
When $\sigma_n \to 0$ and $x_n$ is constant, the space $\operatorname{Cone}_\omega(X,x_n, \sigma_n)$ is more commonly referred to as a metric tangent. However because our emphasis is on large-scale geometry and moving basepoints, and because the distinction would be artificial here, we denote both by the same name.} the largest quotient of $\operatorname{Precone}(X,x_n, \sigma_n)$ whose points are separated by distance $d_\omega((x'_n), (x''_n))= \lim_{n \to \omega} d(x'_n, x''_n)/\sigma_n$. When $\sigma_n \to + \infty$ and $x_n$ is bounded (or $\vert x_n \vert \ll \sigma_n$), the cone does not depend on $x_n$ so we write it $\operatorname{Cone}^\bullet_\omega(X, \sigma_n)$.
Though our main interest is in homogeneous spaces, it is useful to work out some examples of asymptotic cones of nonhomogeneous spaces in order to appreciate the difference between quasiisometry and $O(u)$-bilipschitz equivalence.
\begin{examples}
\label{exm:riemannian-planes}
For $i \in \lbrace 1, \ldots, 4 \rbrace$ let $P_i$ be a Riemannian plane with metric $ds^2 = dr^2 + A_i(r)^2 d\theta^2$, where $A_1(r) = 1/r$, $A_2(r) = 1$, $A_3(r)= \log r$ and $A_4(r) = r/2$ for $r$ large enough.
See some sketches of $P_i$ on Figure \ref{fig:riemannian-planes}, and various cones on Table \ref{tab:riemannian-planes}.
\end{examples}
\begin{figure}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.5cm,y=0.5cm]
\clip(-5.5,-1.8) rectangle (11.5,5.5);
\draw [samples=50,rotate around={0.:(4.,-0.25)}, shift={(4,-0.5)},line width=0.5pt,domain=-2.3:2.3)] plot (\x,{(\x)^2/2/0.5});
\draw [shift={(-4.,1.)},line width=0.5pt] plot[domain=-3.9269:0.78539,variable=\t]({1.*1.41421*cos(\t r)},{1.*1.414213*sin(\t r)});
\draw [shift={(0.,0.5)},line width=0.5pt] plot[domain=3.14159:6.283,variable=\t]({1.*1.*cos(\t r)},{1.*1.*sin(\t r)});
\draw [line width=0.5pt] (-1.,0.5)-- (-1.,5);
\draw [line width=0.5pt] (1.,0.5)-- (1.,5);
\draw [variable=\y, domain=2:5] plot({-4+exp(2-\y)},{\y});
\draw [variable=\y, domain=2:5] plot({-4-exp(2-\y)},{\y});
\node at (-4,-1.4) {$P_1$};
\node at (0,-1.4) {$P_2$};
\node at (4,-1.4) {$P_3$};
\node at (9,-1.4) {$P_4$};
\draw [rotate around={0.:(-4.,1.875394)},line width=0.5pt,dash pattern=on 2pt off 2pt] (-4.,1.875) ellipse (1.062409 and 0.36899);
\draw [rotate around={0.:(0.,2.001)},line width=0.5pt,dash pattern=on 2pt off 2pt] (0.,2.001) ellipse (1. and 0.42269);
\draw [rotate around={0.:(0.,0.5011)},line width=0.5pt,dash pattern=on 2pt off 2pt] (0.,0.5011) ellipse (1.00000 and 0.4226);
\draw [rotate around={0.:(4.,0.501555)},line width=0.5pt,dash pattern=on 2pt off 2pt] (4.,0.5015) ellipse (0.843405 and 0.35371);
\draw [rotate around={0.:(4.,2.0022)},line width=0.5pt,dash pattern=on 2pt off 2pt] (4.,2.0022) ellipse (1.48754 and 0.6133);
\draw [rotate around={0.:(9,2.)},line width=0.5 pt,dash pattern=on 2pt off 2pt] (9,2.) ellipse (0.827 and 0.3748);
\draw [rotate around={0.:(9,1.)},line width=0.5 pt,dash pattern=on 2pt off 2pt] (9,1.) ellipse (0.4962 and 0.2248);
\draw [rotate around={0.:(9,0.)},line width=0.5 pt,dash pattern=on 2pt off 2pt] (9,0.) ellipse (0.1654 and 0.074);
\draw [shift={(9,0.03612)},line width=0.5 pt] plot[domain=3.463343:5.9614,variable=\t]({1.*0.1695*cos(\t r)},{1.*0.169539*sin(\t r)});
\draw [line width=0.5 pt,domain=1.15:2.8391] plot(6+\x,{(-5.46852+1.9300*\x)/-0.643});
\draw [line width=0.5 pt,domain=3.1608:4.8] plot(6+\x,{(-7.972+2.5174*\x)/0.83});
\end{tikzpicture}
\end{center}
\caption{Sketch view of the four Riemannian planes of Example \ref{exm:riemannian-planes} with $\mathrm{U}(1) \rtimes \mathbf Z/2 \mathbf Z$ symmetry.}
\label{fig:riemannian-planes}
\end{figure}
\begin{proposition}[Characterizing quasiisometries I]
\label{prop:charact-qi-1}
Let $X$ and $Y$ be metric spaces, and $\phi: X \to Y$.
Then, $\phi$ is a quasiisometric embedding if and only if
for every $(\sigma_n)$ such that $\lim_n \sigma_n = +\infty$, it holds:
\begin{align}
\label{item:coarse-embed-precones}
\forall (x_n) \in X^{\mathbf N}, \forall (x'_n) \in X^{\mathbf N},\, x_n \sim_{\sigma_n} x'_n & \implies \phi(x_n) \sim_{\sigma_n} \phi(x'_n) \tag{$\mathrm{I}_\sigma$} \\
\forall (x_n) \in X^{\mathbf N}, \forall (x'_n) \in X^{\mathbf N}, \,\phi(x_n) \approx_{\sigma_n} \phi(x'_n) & \implies x_n \approx_{\sigma_n} x'_n \tag{$\mathrm{II}_\sigma$}
\label{eq:second-condition-precone}
\end{align}
and then, given any such $(\sigma_n)$, for every pair $(x_n)\in X^{\mathbf N}$ and $(y_n) \in Y^{\mathbf N}$, either $\phi(\operatorname{Precone}(X,x_n, \sigma_n)) \cap \operatorname{Precone}(Y,y_n, \sigma_n)$ is empty, or for every $\omega \in \beta \mathbf N \setminus \mathbf N$, $\phi$ induces a bilipschitz embedding
\begin{equation}
\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n): \operatorname{Cone}_\omega(X, x_n, \sigma_n) \to \operatorname{Cone}_\omega(Y, y_n, \sigma_n)
\tag{Cone}
\label{eq:cone}
\end{equation}
whose bilipschitz constant only depends on $\phi$.
Further, $\phi$ is a quasiisometry if and only if for every $\sigma_n$ with $\lim_n \sigma_n = + \infty$, the conditions \eqref{item:coarse-embed-precones} and \eqref{eq:second-condition-precone} hold and in addition
\begin{align}
\forall (y_n) \in Y^{\mathbf N} \, \exists (x_n) \in X^{\mathbf N} & : \phi(x_n) \sim_{\sigma_n} y_n
\label{eq:coarse-surj-prec}
\tag{$\mathrm{III}_\sigma$}
\end{align}
and then for every $x_n, y_n, \sigma_n$ as above $\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n)$ is either completely undefined or a bilipschitz homeomorphism.
\end{proposition}
\begin{proof}
First, assume that $\phi$ is not a quasiisometric embedding. Especially it is not a coarse embedding, so there exist an integer $M \geqslant 0$ and a sequence of points $(x_n, x'_n)$ such that either $d(x_n, x'_n) \leqslant M$ and $d(\phi(x_n), \phi(x'_n)) \geqslant \rho(n)$ or $d(x_n, x'_n) \geqslant \rho(n)$ and $d(\phi(x_n), \phi(x'_n)) \leqslant M$, where $\rho(n) \to + \infty$.
In the first case, $\operatorname{Precone}(X, x_n, \rho(n)^{1/2})$ contains $(x_n)$ and $(x'_n)$, but no $\operatorname{Precone}(Y, y_n, \rho(n)^{1/2})$ will contain $(\phi(x_n))$ and $(\phi(x'_n))$ at the same time, contradicting \eqref{item:coarse-embed-precones}.
In the second case, note that $\phi(x_n) \approx_{\rho(n)} \phi(x'_n)$, while $x_n \approx_{\rho(n)} x'_n$ does not hold, contradicting \eqref{eq:second-condition-precone} with $\sigma = \rho$. Conversely, assume that $\phi$ is a $(\kappa, c)$-quasiisometric embedding; then $x_n \sim_{\sigma_n} y_n$ means that $d_X(x_n, y_n) \leqslant C \sigma_n$ for some $C \geqslant 0$, so that $d_Y(\phi(x_n), \phi(y_n)) \leqslant \kappa C \sigma_n + c \leqslant (\kappa C + 1) \sigma_n$ for $n > \sup \lbrace m : \sigma_m \leqslant c \rbrace$. This proves \eqref{item:coarse-embed-precones}; the proof of \eqref{eq:second-condition-precone} goes the same way using the left inequality in \eqref{eq:coarse-equiv} with $\rho_-(r) = \kappa^{-1}r - c$.
Now assume that $\phi$ is a quasiisometric embedding (or equivalently has \eqref{item:coarse-embed-precones} and \eqref{eq:second-condition-precone}) and let $(x_n)$ and $(y_n)$ be sequences in $X$ and $Y$. Then $\phi \left( \operatorname{Precone}(X, x_n, \sigma_n) \right) \cap \operatorname{Precone}(Y,y_n, \sigma_n)$ equals
\[
\begin{cases}
\emptyset & y_n \nsim_{\sigma_n} \phi(x_n) \\
\phi(\operatorname{Precone}(X,x_n, \sigma_n)) & y_n \sim_{\sigma_n} \phi(x_n).
\end{cases}
\]
If the second case occurs, let $\kappa$ be the large-scale bilipschitz constant of $\phi$. For any $\omega \in \beta \mathbf N \setminus \mathbf N$, $\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n)$ exists and is $\kappa$-bilipschitz.
To prove that $\phi$ is a quasiisometry if and only if it has \eqref{item:coarse-embed-precones}, \eqref{eq:second-condition-precone} and \eqref{eq:coarse-surj-prec} for all $\sigma$ with limit $+\infty$, it remains only to prove that \eqref{eq:coarse-surj-prec} implies coarse surjectivity (the converse being clear). If $\phi$ is not coarsely surjective, then there exists a sequence $(y_n)$ in $Y$ and $\rho_n \to + \infty$ such that $B(y_n, \rho_n) \cap \phi(X) = \emptyset$. This disproves \eqref{eq:coarse-surj-prec} with $\sigma_n = \sqrt{\rho_n}$.
Finally, if $\phi$ is a quasiisometry, then for all parameters $x_n, y_n, \sigma_n$ with $\sigma_n \to + \infty$, $\phi \left( \operatorname{Precone}(X, x_n, \sigma_n) \right) \cap \operatorname{Precone}(Y,y_n, \sigma_n)$ equals
\[
\begin{cases}
\emptyset & y_n \nsim_{\sigma_n} \phi(x_n) \\
\phi(\operatorname{Precone}(X,x_n, \sigma_n)) & y_n \sim_{\sigma_n} \phi(x_n)
\end{cases}
\]
and in the latter case, for every $\omega \in \beta \mathbf N \setminus \mathbf N$, $\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n)$ is a bilipschitz homeomorphism, with bilipschitz constant $\kappa$ independent of $\omega$.
\end{proof}
\begin{proposition}[Characterizing quasiisometries, II]
\label{prop:charact-qi-2}
Let $X$ and $Y$ be metric spaces and $\phi : X \to Y$.
If for all $(x_n, y_n) \in X^{\mathbf N} \times Y^{\mathbf N}$ and $(\sigma_n)$ with limit $+\infty$, either
\[ \phi(\operatorname{Precone}(X,x_n, \sigma_n)) \cap \operatorname{Precone}(Y,y_n, \sigma_n) = \emptyset \]
or $\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n)$ is well-defined and a bilipschitz embedding for all $\omega \in \beta \mathbf N \setminus \mathbf N$, then $\phi$ is a quasiisometric embedding.
If for all $(x_n, y_n, \sigma_n)$ as above, either \[ \phi(\operatorname{Precone}(X,x_n, \sigma_n)) \cap \operatorname{Precone}(Y,y_n, \sigma_n) = \emptyset \] or $\operatorname{Cone}_\omega(\phi, x_n, y_n, \sigma_n)$ is well-defined and a bilipschitz homeomorphism for all $\omega$, then $\phi$ is a quasiisometry.
\end{proposition}
\begin{proof}
The first hypothesis implies, for every $\sigma$, the conditions \eqref{item:coarse-embed-precones} and \eqref{eq:second-condition-precone} of Proposition \ref{prop:charact-qi-1} for $\phi$ (where the injectivity of the coned map implies \eqref{eq:second-condition-precone}).
Similarly, the second hypothesis implies, for every $\sigma$, \eqref{item:coarse-embed-precones}, \eqref{eq:second-condition-precone} and \eqref{eq:coarse-surj-prec}.
\end{proof}
The characterization given by Proposition \ref{prop:charact-qi-2} may be summarized as follows:
a quasiisometry is a map between metric spaces which, when photographed between any pair of asymptotic cones with equal scaling factors, is either completely undefined or induces a bilipschitz homeomorphism.
As mentioned in the introduction, $o(r)$-bilipschitz equivalences are the maps inducing bilipschitz homeomorphisms between asymptotic cones with fixed basepoints. This is less demanding than the previous characterization. We recall Cornulier's characterization below.
\begin{proposition}[Cornulier]
\label{prop:sbe-cor}
Let $X$ and $Y$ be metric spaces. Let $\phi : X \to Y$.
The following are equivalent:
\begin{enumerate}[{\rm (\ref{prop:sbe-cor}.1)}]
\item
\label{item:sbe-cor-1}
$\phi$ is $o(r)$-bilipschitz, i.e.\ there exist $\kappa \geqslant 1$ and $v: \mathbf R_{\geqslant 0} \to \mathbf R_{\geqslant 0}$ with $\lim_{r \to + \infty} v(r)/r = 0$ such that for every $x,x'\in X$ and $y \in Y$,
\begin{align*}
-v(\vert x \vert \vee \vert x' \vert) + \frac{1}{\kappa} d_X(x,x') & \leqslant d_Y(\phi(x), \phi(x')) \\
& \leqslant \kappa d_X(x,x') + v(\vert x \vert \vee \vert x' \vert) \\
d_Y(y, \phi(x)) & \leqslant v (\vert y \vert),
\end{align*}
\item
\label{item:sbe-cor-2}
For every sequence $(\sigma_n)$ of positive real numbers with $\sigma_n \to + \infty$ and every $\omega \in \beta \mathbf N \setminus \mathbf N$,
there is a well-defined bilipschitz homeomorphism
\begin{equation}
\tag{$\operatorname{Cone}^\bullet$}
\operatorname{Cone}_\omega^\bullet(\phi, \sigma_n): \operatorname{Cone}_\omega^\bullet(X, \sigma_n) \to \operatorname{Cone}^\bullet_\omega(Y, \sigma_n)
\label{eq:conep}
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
This results from the combination of \cite[Propositions 2.4, 2.5, 2.9, 2.12 and 2.13]{CornulierCones11}.
There is no sequence $(\sigma_n)$ in Cornulier's statements; however, the formulations are easily seen to be equivalent to ours.
\end{proof}
In this way, the groupoids of quasiisometries and of $o(r)$-bilipschitz equivalences are the largest groupoids over metric spaces such that the parametrized families of functors $\operatorname{Cone}$ and $\operatorname{Cone}^\bullet$, respectively, are well defined into the groupoid of metric spaces with bilipschitz homeomorphisms.
Note that bilipschitzness at the level of asymptotic cones came for free in Proposition \ref{prop:charact-qi-1}, while it is explicitly required in Proposition \ref{prop:sbe-cor}. There is indeed a strictly larger groupoid, that of isomorphisms in the category of cone-defined maps, whose pictures through $\operatorname{Cone}^\bullet$ only have nonzero and finite local lipschitz and expansion constants at the basepoint; see \cite[\S 2.2]{CornulierCones11} for characterizations of this category.
Let us state a refinement of the implication (\ref{prop:sbe-cor}.\ref{item:sbe-cor-1}) $\implies$ (\ref{prop:sbe-cor}.\ref{item:sbe-cor-2}) of the last proposition.
\begin{proposition}
\label{prop:Ou-to-cone}
Let $X$ and $Y$ be metric spaces. Let $\phi : X \to Y$ and assume that (\ref{prop:sbe-cor}.\ref{item:sbe-cor-1}) holds for some $\kappa$ and $v$, where $v$ is admissible (Definition \ref{dfn:admissible}).
Then for every sequence $(\sigma_n)$ of positive real numbers and for every $(x_n) \in X^{\mathbf N}$ such that $\limsup v(\vert x_n \vert) / \sigma_n = 0$,
$\phi$ induces a bilipschitz homeomorphism
\begin{equation}
\operatorname{Cone}_\omega(\phi, x_n, \sigma_n): \operatorname{Cone}_\omega(X, x_n, \sigma_n) \to \operatorname{Cone}_\omega(Y, \phi(x_n), \sigma_n)
\tag{Cone}
\label{eq:conelast}
\end{equation}
\end{proposition}
\begin{proof}
This conveniently follows from \cite{KramerWeiss}, by setting for any $\rho >0$, $X_n = B(x_n, \rho \sigma_n)$, $t_n = v((1+\rho) \vert x_n \vert)$ and $\phi_n = \phi_{\mid X_n}$. Since $t_n/\sigma_n$ is infinitesimal, by \cite[Lemma 1.16]{KramerWeiss} the sequence $(\phi_n)$ defines $\phi_\omega$ between the ultralimits of the spaces $X_n/\sigma_n$, namely, the balls of radius $\rho$ in the asymptotic cones.
\end{proof}
In Proposition \ref{prop:Ou-to-cone} the assumption that $v$ be admissible is necessary. Otherwise $t_n$ may fail to be negligible compared to $\sigma_n$, which is needed so that the sequence $t_n/\sigma_n$ defines an infinitesimal number in the real field $\prod_\omega \mathbf R$ for every ultrafilter $\omega$.
As an application, we can now distinguish the nonhomogeneous spaces from Examples \ref{exm:riemannian-planes}:
\begin{itemize}
\item
None of $P_1$, $P_2$, $P_3$ is $o(r)$-bilipschitz equivalent to $P_4$, since $\dim \operatorname{Cone}^\bullet_\omega(P_i)$ is $1$ for $i=1,2,3$ and $2$ for $i=4$.
\item
$P_3$ and $P_4$ are $O(\log)$-bilipschitz equivalent through the identity map in polar coordinates, but they are not $O(\log^{1-\epsilon})$-bilipschitz equivalent for any $\epsilon >0$, since $\dim \operatorname{Cone}_\omega(P_4, x_n, n^2) = 2$ while $\dim \operatorname{Cone}_\omega(P_3, x_n, n^2) = 1$ if $\vert x_n \vert = e^n$ (see Table \ref{tab:riemannian-planes}) and $\log (e^{n})^{1-\epsilon} = n^{1-\epsilon} \ll n$.
\item
$P_1$ and $P_2$ are quasiisometric; however, they are not $O(u)$-bilipschitz equivalent for any $u$ with $u(r) \to 0$ as $r \to + \infty$.
\end{itemize}
\iffalse
\begin{lemma}[Variant of {\cite[\S 2.2]{CornulierCones11}}]
Let $X$ and $Y$ be metric spaces, let $o_X \in X$ and $o_Y \in Y$. Let $\phi : X \to Y$.
If for every sequence $(\tau_n)$ of positive numbers with $\lim \tau_n = + \infty$ one has
\begin{align}
\tag{$\mathrm{IV}^\tau$}
\label{eq:I^tau}
\forall (x_n) \in X^{\mathbf N}:\, o_X \sim_{\tau_n} x_n & \iff o_Y \sim_{\tau_n} \phi(x_n) \\
\tag{$\mathrm{V}^\tau$}
\label{eq:II^tau}
\forall (x_n), (x'_n) \in X^{\mathbf N}:\, x_n \approx_{\tau_n} x'_n & \iff \phi(x_n) \approx_{\tau_n} \phi(x'_n)
\end{align}
then for every $\omega \in \beta \mathbf N \setminus \mathbf N$ and for every sequence $(\tau_n)$ of positive numbers with $\lim \tau_n = + \infty$, $\phi$ induces a continuous map between the based cones
\begin{equation}
\operatorname{Cone}^\bullet_\omega(\phi, \tau_n) : \operatorname{Cone}_\omega(X, o_X, \tau_n) \longrightarrow \operatorname{Cone}_\omega(Y, o_Y, \tau_n).
\tag{Cone}
\end{equation}
Moreover, $\operatorname{Cone}_\omega(\phi, \tau_n)$ has finite and nonzero local lipschitz and expansion constant at basepoint.
\end{lemma}
\begin{proof}
Assume that $\phi$ has properties \eqref{eq:I^tau} and \eqref{eq:II^tau} for every $\tau$.
Then $\phi$ is cone-defined according to \cite[Definition 2.2]{CornulierCones11}.
So $\operatorname{Cone}^\bullet_\omega(\phi, \tau_n)$ is well defined and continuous for all $\omega \in \beta \mathbf N \setminus \mathbf N$ by the combination of \cite[Proposition 2.4 and Proposition 2.5]{CornulierCones11}.
It remains to prove the part about local lipschitz and expansion constants.
By \cite[Proposition 2.4]{CornulierCones11}, $\vert \phi(x) \vert \leqslant C \sup (R, \vert x \vert)$ for some $C$ and $R>0$; observe that this only uses the implication from left to right in \eqref{eq:I^tau}. Let us prove that, possibly for larger $C$ and $R$, $\sup( R, \vert \phi(x) \vert) \geqslant (\vert x \vert)/C$ as well. If it were not the case, then there is a sequence $(x_n)$ with $\lim \vert x_n \vert = + \infty$ and $\lim \vert \phi(x_n) \vert /\vert x_n \vert = 0$. Taking $\tau_n = \sup \lbrace \vert \phi(x_m) \vert : m \leqslant n \rbrace$ we have that either $\tau_n$ is bounded, or $x_n \sim_{\tau_n} o_X$, which implies the desired inequality.
\end{proof}
\fi
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
& \small{$\operatorname{Cone}_\omega(P_1)$}
& \small{$\operatorname{Cone}_\omega(P_2)$}
& \small{$\operatorname{Cone}_\omega(P_3)$}
& \small{$\operatorname{Cone}_\omega(P_4)$} \\
\hline
bounded $x_n$, $\sigma_n \equiv 1$ & $(P_1, x_\omega)$ & $(P_2, x_\omega)$ & $(P_3, x_\omega)$ & $(P_4, x_\omega)$ \\
bounded $x_n$, $\sigma_n \to + \infty${} & $\mathbf R_{\geqslant 0}$ & $\mathbf R_{\geqslant 0}$ & $\mathbf R_{\geqslant 0}$ & $(C,0)$ \\
bounded $x_n$, $\sigma_n \to 0$ & $\mathbf E^2$ & $\mathbf E^2$ & $\mathbf E^2$ & $\mathbf E^2$ \\
$\vert x_n \vert = n, \sigma_n = 1/n$ & $S^1 \times \mathbf R$ & $\mathbf E^2$ & $\mathbf E^2$ & $\mathbf E^2$ \\
$\vert x_n \vert = n, \sigma_n = 1$ & $\mathbf R$ & $S^1 \times \mathbf R$ & $\mathbf E^2$ & $\mathbf E^2$ \\
$\vert x_n \vert = n, \sigma_n = n$ & $\mathbf R_{\geqslant -1}$ & $\mathbf R_{\geqslant -1}$ & $\mathbf R_{\geqslant -1}$ & $(C,i)$ \\
$\vert x_n \vert = e^n, \sigma_n = 1/n$ & $\mathbf R$ & $\mathbf E^2$ & $\mathbf E^2$ & $\mathbf E^2$ \\
$\vert x_n \vert = e^n, \sigma_n = 1$ & $\mathbf R$ & $S^1 \times \mathbf R$ & $\mathbf E^2$ & $\mathbf E^2$ \\
$\vert x_n \vert = e^n, \sigma_n = n$ & $\mathbf R$ & $\mathbf R$ & $S^1 \times \mathbf R$ & $\mathbf E^2$ \\
$\vert x_n \vert = e^n, \sigma_n = n^2$ & $\mathbf R$ & $\mathbf R$ & $\mathbf R$ & $\mathbf E^2$ \\
\hline
\end{tabular}
\vskip 10pt
\caption{Various cones on the Riemannian planes $P_i$ from Example \ref{exm:riemannian-planes}.
We provide the cones as pointed metric spaces (on the second line they do not depend on $\sigma_n$ as soon as it goes to $+\infty$). Here $C$ denotes $\lbrace z \in \mathbf C : \Im z \geqslant 0 \rbrace /(z \sim -z)$ with the distance induced from the absolute value.
}
\label{tab:riemannian-planes}
\end{table}
\subsubsection{On cone dimension}
\label{subsubsec:unique}
We have seen that the covering dimension of (moving) cones is an efficient tool to discriminate between the Examples \ref{exm:riemannian-planes} up to quasiisometry or $O(u)$-bilipschitz equivalence.
When $X$ is acted upon coboundedly, however (one case of interest for geometric group theorists), all its asymptotic cones with a given scaling sequence are isometric once the ultrafilter is fixed, regardless of the basepoints.
Hence, computing $\dim \operatorname{Cone}_\omega$ for fixed $\omega$ will provide the same information with respect to QI or SBE.
Beyond geometric models of groups $G$ of polynomial growth, it should not be expected that different ultrafilters will yield isometric or even just homeomorphic asymptotic cones; an extensive literature, and even the notion of a lacunary hyperbolic group, has been built on this distinction (\cite{VT}, \cite{KSTT}, \cite{OOS}).
Nevertheless, if $G$ is a simply connected Lie group with completely solvable Lie algebra $\mathfrak g$, then for every geometric model $X$, every $\omega \in \beta \mathbf N \setminus \mathbf N$ and every $(\sigma_n)$ with $\lim \sigma_n = +\infty$,
\begin{equation}
\label{eq:cornlier-formula}
\dim \operatorname{Cone}^\bullet_\omega(X,\sigma_n) = \dim G^{\mathrm{nil}}
\tag{conedim}
\end{equation}
where $G^{\mathrm{nil}}$ is the largest nilpotent quotient of $G$ \cite{CornulierDimCone}.
Following Cornulier we denote this integer $\operatorname{conedim}$. This is the first, and perhaps the most natural numerical SBE invariant.
In the special case when $G$ is nilpotent, \eqref{eq:cornlier-formula} follows from the earlier construction of Pansu, which can be formulated in terms of Gromov-Hausdorff convergence with no reference to an ultrafilter \cite{PanCBN}. Beware that this limit is not functorial, however.
When no homogeneity assumption is made, the dimension of the asymptotic cone (even with fixed basepoint) depends not only on the ultrafilter but also on the scaling sequence.
There exist four-dimensional complete Riemannian manifolds with positive Ricci curvature and $\operatorname{SU}(2)$ symmetry for which the covering dimension of the asymptotic cones can be $2$ or $4$ depending on how one chooses the scaling factors \cite{PerelmanCone}. These cones are genuine rescaled Gromov-Hausdorff limits, obtained without passing to a subsequence, and thus do not depend on the ultrafilter.
\subsection{Coarse structures}
\label{subsec:coarse-structures}
In the 1930s, Weil abstracted the notion of a uniform structure from the topology of locally compact groups.
Coarse structures are large-scale counterparts of uniform structures; they were introduced by Roe in the 1990s.
We recall below the definition of a coarse space.
Let $X$ be a set. The square $X \times X$ is a groupoid for the composition law $(x_0, x_1) \circ (x_1, x_2) = (x_0, x_2)$ and the inverse $(x_0, x_1)^{-1} = (x_1, x_0)$, for $x_0, x_1, x_2 \in X$. For $E, F \subseteq X \times X$, define $E \circ F = \lbrace e \circ f : e \in E,\, f \in F,\, e \circ f \text{ is defined}\rbrace$ and $E^{-1} = \lbrace e^{-1} : e\in E \rbrace$.
\begin{definition}[{\cite[Definition 2.3]{RoeLectures}}]
A collection $\mathcal{E}$ of subsets of $X \times X$ is called a coarse structure if it contains the diagonal $\Delta_{X \times X}$ and is stable under composition, inverses, taking subsets, and finite unions; the sets $E \in \mathcal{E}$ are called entourages.
\end{definition}
A coarse structure $\mathcal{E}$ is called monogenic if it is generated by a single entourage, that is, if there exists $E \in \mathcal E$ such that $\mathcal E$ is the smallest coarse structure containing $E$. Note that this notion has no analog among uniform structures.
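For instance, if $(X, d_X)$ is a geodesic metric space, the collection of all $E \subseteq X \times X$ with $\sup_{(x,x') \in E} d_X(x,x') < +\infty$ is a monogenic coarse structure: it is generated by $E_1 = \lbrace (x,x') : d_X(x,x') \leqslant 1 \rbrace$, since by subdividing geodesics any such $E$ is contained in the $n$-fold composition $E_1^n$ as soon as $n \geqslant \sup_{(x,x') \in E} d_X(x,x')$.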
\begin{definition}[Coarse equivalence]
\label{def:coars-equivalence}
Given two coarse spaces $(X, \mathcal{E}_X)$ and $(Y, \mathcal E_Y)$ and a map $\phi : X \to Y$, we say that $\phi$ is coarse if
\begin{enumerate}[(\ref{def:coars-equivalence}.1)]
\item
\label{item:Coarse1}
for all $B \subseteq Y$, $B \times B \in \mathcal{E}_Y \implies \phi ^{-1}(B) \times \phi^{-1}(B) \in \mathcal{E}_X$
\item
\label{item:Coarse2}
for all $E\in \mathcal E_X$, $(\phi \times \phi) (E) \in \mathcal{E}_Y$, where $\phi \times \phi(x,y) = (\phi(x), \phi(y))$.
\end{enumerate}
A pair of coarse maps $\lbrace \phi: X \to Y$, $\psi: Y \to X \rbrace$ realizes a coarse equivalence if the graphs of $\phi \circ \psi$ and $\psi \circ \phi$ are contained in entourages of $\mathcal E_Y$ and $\mathcal E_X$ respectively.
\end{definition}
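For example, the inclusion $\iota : \mathbf Z \hookrightarrow \mathbf R$ and the floor map $x \mapsto \lfloor x \rfloor$, with the coarse structures attached to the usual metrics (the bounded coarse structures introduced below), realize a coarse equivalence: $\lfloor \cdot \rfloor \circ \iota = \operatorname{id}_{\mathbf Z}$, while the graph of $\iota \circ \lfloor \cdot \rfloor$ is contained in the entourage $\lbrace (x,x') : \vert x - x' \vert \leqslant 1 \rbrace$.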
\begin{proposition}
[$O(u)$-coarse structure, $o(v)$-coarse structure]
\label{prop:Ou-coarse-structure}
Let $u: [0, +\infty) \to (0,+\infty)$ be an admissible function, let $v$ be either an admissible function or $v(r) = r$, and let $(X, d_X)$ be a metric space.
Given some $o \in X$, define
\begin{align}
\mathcal{E}^{O(u)} & = \left\{ E \subseteq X \times X: \exists M,\, \limsup_{(x,x') \in E} \frac{d_X(x,x')}{u(\vert x \vert)} \leqslant M \right\}
\label{eq:EOu}
\\
\mathcal{E}^{o(v)} & = \left\{ E \subseteq X \times X: \limsup_{(x,x') \in E} \frac{d_X(x,x')}{v(\vert x \vert)} =0 \right\}
\label{eq:Eov}
\end{align}
where $\vert x\vert = d_X(o,x)$ and the $\limsup$ is taken as $(x,x')$ leaves every bounded set fixed in advance (for the sup distance in $X \times X$).
$\mathcal{E}^{O(u)}$ and $\mathcal E^{o(v)}$ define coarse structures on $X$.
\end{proposition}
The bounded coarse structure is $\mathcal{E}_X^{O(1)}$, and the coarse equivalences between metric spaces equipped with $\mathcal{E}_X^{O(1)}$ are the coarse equivalences as defined in \eqref{eq:coarse-equiv}. Wright's $c_0$ coarse structure is $\mathcal{E}^{o(1)}$ \cite[Definition 1.1]{WrightC0scalar}.
Dranishnikov and Smith's sublinear coarse structure is $\mathcal{E}^{o(r)}$ (See \S\ref{subsec:assouad-nagata}) \cite{DranishnikovSmith}.
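To fix ideas, let $X = \mathbf R$ with basepoint $o = 0$: the graph $\lbrace (x, x + \log(1+x)) : x \geqslant 0 \rbrace$ belongs to $\mathcal{E}^{O(\log)}$ and to $\mathcal{E}^{o(r)}$ but not to $\mathcal{E}^{O(1)}$, while $\lbrace (x, 2x) : x \geqslant 0 \rbrace$ belongs to none of the three.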
\begin{proof}
We need to check Roe's axioms. In view of \eqref{eq:EOu} and \eqref{eq:Eov} it is clear that $\mathcal{E}_X^{O(u)}$ and $\mathcal{E}_X^{o(v)}$ are closed under finite unions and taking subsets. What may not be obvious is the stability under taking inverses and under composition.
\par {\em Inverses.}
Fix a basepoint $o$ and take a sequence $x_n, x'_n$ such that
$\sup(\vert x_n\vert, \vert x'_n \vert) \to +\infty$, with $d_X(x_n, x'_n) \leqslant K u(\vert x_n \vert)$ for some $K \geqslant 0$ when $n$ is large enough, resp. $d_X(x_n, x'_n) \leqslant k_n v(\vert x_n \vert)$ where $k_n \to 0$.
We need to prove that $d_X(x_n, x'_n) \leqslant L u(\vert x'_n \vert)$ for some $L\geqslant 0$ when $n$ is large enough, resp. $d_X(x_n, x'_n) \leqslant \ell_n v(\vert x'_n \vert)$ for some sequence $\ell_n \to 0$.
We claim that
\begin{equation}
\label{eq:bounded-quotients}
0 < \liminf \frac{\vert x'_n \vert}{\vert x_n \vert} \leqslant \limsup \frac{\vert x'_n \vert}{\vert x_n \vert} < +\infty.
\end{equation}
Indeed, if it were not the case there would be a sequence $R_n$ such that for arbitrarily large values of $D$, either for arbitrarily large $n$, $\vert x_n \vert \leqslant R_n \leqslant DR_n \leqslant \vert x'_n \vert$, or for arbitrarily large $n$, $\vert x'_n \vert \leqslant R_n \leqslant DR_n \leqslant \vert x_n \vert$. In the first case, along a subsequence, by the triangle inequality $\vert x'_n \vert \leqslant R_n + Ku(R_n)$ (where we may replace $u$ by $v$ and $K$ by some $k_{n_0}$ if necessary), contradicting the hypothesis that $\vert x'_n \vert \geqslant DR_n$ for $n$ large enough (observe that $\vert x'_n \vert \to +\infty$ along that subsequence). In the second case, again by the triangle inequality one would have $R_n \geqslant \vert x_n \vert - Ku(\vert x_n \vert)$ (or $\vert x_n \vert - k_{n_0} \vert x_n \vert$ if necessary); but the right-hand side can be assumed greater than $\vert x_n \vert/2$ for $n$ large enough if $D$ is set large enough; this is a contradiction. Now from \eqref{eq:bounded-quotients} and the property that $u$, resp. $v$, is admissible, we obtain that also
\begin{equation}
0 < \liminf \frac{u(\vert x'_n \vert)}{u(\vert x_n \vert)} \leqslant \limsup \frac{u(\vert x'_n \vert)}{u(\vert x_n \vert)} < +\infty \notag
(resp. the same with $v$ replacing $u$), which provides the requested constant $L$ (resp. $\ell_n$) as a function of $K$ (resp. of $k_n$) and $u$, resp. $v$.
At this point it is useful to record that we can rewrite $\mathcal E^{O(u)}$ in a more symmetric way:
\begin{align*}
\mathcal{E}_X^{O(u)} & = \left\{ E \subseteq X \times X: \exists r > 0,\, \sup_{(x,x') \in E \setminus (B_r(o) \times B_r(o))} {d_X(x,x')}/{(u(\vert x \vert) + u(\vert x' \vert))} < +\infty \right\}
\end{align*}
\par {\em Composition.}
Assume first that $u$ is nondecreasing; we will explain at the end how to adapt the proof when this is not the case (this philosophy was alluded to after Lemma \ref{lem:admissible2adm}).
For every $K, r \geqslant 0$, introduce
\begin{equation}
\notag
E_K^r(X,o) = \left\{ (x,x') : \inf (\vert x \vert, \vert x' \vert) \geqslant r, \; d_X(x,x') \leqslant K(u(\vert x \vert) + u(\vert x' \vert)) \right\}.
\end{equation}
We need to prove that for every $K,L$ there are $r,s,t$ and $\eta(K,L)$ such that
\begin{equation}
\label{eq:composing}
E_L^s \circ E_K^r \subseteq E_{\eta(K,L)}^t.
\end{equation}
Let $(x,x'') \in E_L^s \circ E_K^r$. By definition, there exists $x' \in X$ such that $d_X(x,x') \leqslant K(u(\vert x \vert) + u(\vert x' \vert))$ and $d_X(x',x'') \leqslant L(u(\vert x' \vert) + u(\vert x'' \vert))$.
Set a radius $R = \sup \left\{ r \geqslant 0 : u(r) > r/(2K+1) \right\}$. We claim that
\begin{equation}
u(\vert x' \vert) \leqslant \sup (u(3R), u(3 \vert x \vert))
\label{eq:to-prove-by-exhaustion}
\end{equation}
To prove \eqref{eq:to-prove-by-exhaustion} we proceed by exhausting all the cases arising from the comparison of $\vert x \vert$ and $\vert x' \vert$ with $R$.
First, note that either $\vert x' \vert \leqslant R$, or $\vert x' \vert > R$ and then $u(\vert x' \vert) \leqslant \frac{\vert x' \vert}{2K+1}$. In the second case, by the triangle inequality
\begin{equation}
\vert x' \vert \leqslant \vert x \vert + Ku(\vert x \vert) + Ku(\vert x' \vert) \leqslant \vert x \vert + Ku(\vert x \vert) + \frac{\vert x' \vert}{2},
\notag
\end{equation}
so that $\vert x' \vert \leqslant 2 \vert x \vert + 2 K u(\vert x \vert)$.
So we always have
$\vert x' \vert \leqslant \sup(R, 2 \vert x \vert + 2 Ku(\vert x \vert))$.
Since $u$ has been assumed nondecreasing,
\begin{equation}
\notag
u(\vert x' \vert) \leqslant \sup(u(R), u(2 \vert x \vert + 2 Ku(\vert x \vert))).
\end{equation}
Now, either $\vert x \vert \leqslant R$, in which case $u(\vert x' \vert) \leqslant \sup(u(3R), u(3 \vert x \vert))$ and \eqref{eq:to-prove-by-exhaustion} holds, or $\vert x \vert > R$ and then $2Ku(\vert x \vert) \leqslant \vert x \vert$, so $u(\vert x' \vert) \leqslant \sup (u(3R), u(3 \vert x \vert))$: \eqref{eq:to-prove-by-exhaustion} holds as well.
We can now finish the proof using the claim. By the triangle inequality,
\begin{align*}
d_X(x, x'') & \leqslant Ku(\vert x \vert) + (K+L) u(\vert x' \vert) + L u (\vert x'' \vert) \\
& \leqslant (K+L) \left[ u(\vert x \vert) + \sup(u(3R), u (3 \vert x \vert)) + u(\vert x'' \vert) \right]
\end{align*}
so we may set $\eta(K,L) = 2(K+L) \limsup_{r \to + \infty} u(3r) /u(r)$; then for $r$ large enough and arbitrary $s$, \eqref{eq:composing} holds.
We now return to the general case, where $u$ is not assumed nondecreasing.
If $\vert x' \vert \leqslant R$ then there is a uniform bound on $\vert x \vert$.
If $\vert x' \vert > R$ then by the triangle inequality,
\[ \vert x' \vert \geqslant \vert x \vert - Ku(\vert x \vert) - Ku(\vert x' \vert) \geqslant \vert x \vert - \frac{\vert x' \vert}{2} - Ku(\vert x \vert), \]
so that $\vert x' \vert \geqslant 2 \vert x \vert /3 - 2 Ku(\vert x \vert)/3$.
As soon as $\vert x \vert \geqslant R$, $\vert x' \vert \geqslant \vert x \vert /3$.
Using the assumption that $u$ is admissible, then, $u(\vert x'\vert) \leqslant Bu(\vert x \vert)$ for some $B \geqslant 1$. Using the same line of reasoning as before, this implies \eqref{eq:composing} with $\eta(K,L) = B(K+L)$. \qedhere
\end{proof}
Let $u_1, u_2$ and $v$ be as $u$ and $v$ in the previous proposition. Let $X$ and $Y$ be metric spaces.
If $u_1 = O(u_2)$, resp. if $u_2=o(v)$, then the identity map $(X, \mathcal{E}^{O(u_1)}) \to (X, \mathcal{E}^{O(u_2)})$, resp. $(X, \mathcal{E}^{O(u_2)}) \to (X, \mathcal{E}^{o(v)})$, is coarse. In particular, if $\phi : (X, d_X, \mathcal{E}^{O(u_1)}) \to (Y, d_Y, \mathcal{E}^{O(u_1)})$ is a coarse equivalence, then $\phi : (X, d_X, \mathcal{E}^{O(u_2)}) \to (Y, d_Y, \mathcal{E}^{O(u_2)})$ is a coarse equivalence, as summarized in the diagram below where the arrows are coarse, $u_1 = 1$, $u_2 = \log$ and $v(r) =r$, namely the three most important coarse structures in this paper.
\[
\begin{tikzcd}
(X, \mathcal{E}^{O(1)}) \arrow[r, "\mathrm{id}_X"] \arrow[d, "\phi"]
& (X, \mathcal{E}^{O(\log)}) \arrow[d, "\phi" ] \arrow[r, "\mathrm{id}_X" ]
& (X, \mathcal{E}^{o(r)}) \arrow[d, "\phi"] \\
(Y, \mathcal{E}^{O(1)}) \arrow[r, "\mathrm{id}_Y" ]& (Y, \mathcal{E}^{O(\log)}) \arrow[r, "\mathrm{id}_Y" ] & (Y, \mathcal{E}^{o(r)})
\end{tikzcd}
\]
\begin{figure}
\begin{center}
\begin{tikzpicture}[line cap=round,line join=round,>=angle 45,x=0.5cm,y=0.5cm]
\clip(-0.5,-0.5) rectangle (20.5,5.5);
\draw [-, line width=0.5pt] (0,0)-- (0,5);
\draw [-, line width=0.5pt] (0,0)-- (5,0);
\draw [-, line width=0.5pt] (7,0)-- (7,5);
\draw [-, line width=0.5pt] (7,0)-- (12,0);
\draw [-, line width=0.5pt] (14,0)-- (14,5);
\draw [-, line width=0.5pt] (14,0)-- (19,0);
\draw [dash pattern= on 2pt off 2pt] (0,0) -- (4.5,4.5);
\draw [dash pattern= on 2pt off 2pt] (7,0) -- (11.5,4.5);
\draw [dash pattern= on 2pt off 2pt] (14,0) -- (18.5,4.5);
\draw [line width= 1pt, variable=\x, samples = 200, domain=0:3.5] plot({\x},{\x + sqrt(\x)+0.5});
\draw [line width= 1pt, variable=\y, domain=0:3.5] plot({\y + sqrt(\y)+0.5},{\y});
\draw [line width= 1pt, variable=\x, domain=7:11.5] plot({\x},{\x-7 + 0.5});
\draw [line width= 1pt, variable=\y, domain=0:4.5] plot({7+ \y + 0.5},{\y});
\draw [line width= 1pt, variable=\x, domain=14:18.5] plot({\x},{sqrt((\x-14)*(\x-14) + 1)});
\draw [line width= 1pt, variable=\y, domain=0:4.5] plot({14+ sqrt((\y)*(\y) + 1)},{\y});
\end{tikzpicture}
\end{center}
\caption{Some entourages of the $O(u)$-coarse structure on the half real line $X = [0, +\infty)$, with $u(r) = \sqrt{r}, u(r) = 1$ and $u(r) = 1/r$.}
\label{fig:my_label}
\end{figure}
\begin{proposition}
\label{prop:sbe-is-coarse}
Let $X$ and $Y$ be metric spaces. Let $u:[0,+\infty) \to (0,+\infty)$ be a regularly varying function.
Let $\phi : X\to Y$ be a $O(u)$-bilipschitz equivalence. Then $\phi$ induces a coarse equivalence $(X, d_X, \mathcal{E}^{O(u)}) \to (Y, d_Y, \mathcal E^{O(u)})$.
\end{proposition}
\begin{proof}
Let $(x_n, x'_n)$ be a sequence of pairs of points with $d(x_n, x'_n) \leqslant Mu(\vert x_n \vert)$ and $\vert x_n \vert \to + \infty$. Then, for $n$ large enough, $\vert x'_n \vert \leqslant 2 \vert x_n \vert$.
Hence
\begin{align*}
d(\phi(x_n), \phi(x'_n)) & \leqslant \kappa M u(\vert x_n \vert) + cu(\vert x_n \vert \vee \vert x'_n \vert) \leqslant (\kappa M + cC) u(\vert x_n \vert)
\end{align*}
for some $C \geqslant 1$, since $u$ is regularly varying. But also, for $n$ large enough,
\begin{equation}
\label{eq:phi-boundedly-proper}
\vert \phi(x_n) \vert \geqslant \vert x_n \vert/(2\kappa).
\end{equation}
So there exists a constant $C'$ so that
$
d(\phi(x_n), \phi(x'_n)) \leqslant C'u(\vert \phi(x_n) \vert).
$
On the other hand, $\phi$ has axiom (\ref{def:coars-equivalence}.\ref{item:Coarse1}) by \eqref{eq:phi-boundedly-proper}.
This proves that $\phi$ is a coarse map.
$\phi$ has a coarse inverse $\widetilde{\phi}$ such that $d(\widetilde{\phi} \circ \phi(x), x) \leqslant c'u(\vert x \vert)+c'$ for all $x \in X$ and $d({\phi} \circ \widetilde{\phi}(y),y) \leqslant c'u(\vert y \vert) + c'$ for all $y\in Y$ \cite{cornulier2017sublinear}. So $\phi$ is a coarse equivalence.
\end{proof}
\begin{lemma}
\label{lem:constructing-hatd}
Assume that $(X, d_X)$ is a geodesic metric space. Let $u$ be admissible and unbounded. Then
$ E_u = \left\{ (x,x') \in X \times X: d_X(x,x') \leqslant 1 + u(\vert x \vert) + u(\vert x' \vert) \right\} $
is a symmetric entourage generating $\mathcal{E}^{O(u)}$ on $X$.
Define $\widehat d_X$ on $X$ by
\[ \widehat{d}_X(x,x') = \inf \left\{ n: (x,x') \in E_u^n \right\}. \]
Then, the identity map
$ \left(X, d_X, \mathcal E^{O(u)} \right) \to \left( X, \widehat d_X, \mathcal E^{O(1)} \right) $
is a coarse equivalence.
\end{lemma}
\begin{proof}
Let us check first that $E_u$ generates $\mathcal E^{O(u)}$. Take $E \in \mathcal{E}^{O(u)}$; then by definition
\begin{equation}
\notag
\sup_{(x,x') \in E} \frac{d_X(x,x')}{1+u(\vert x \vert) + u(\vert x' \vert)} = M < +\infty.
\end{equation}
For all $(x,x') \in E$ and every geodesic segment $\gamma : [0, d_X(x,x')] \to X$ from $x$ to $x'$, set $x_1 = \gamma(1+u(\vert x \vert))$, $x_2 = \gamma(2+u(\vert x \vert) + u(\vert x_1 \vert))$, and so on.
Let
\[ N_\gamma(x,x') = \inf \left\{ n : n+u(\vert x \vert) + \cdots + u(\vert x_n \vert) > d_X(x,x') \right\}. \]
We claim that $\sup_{(x,x')\in E} \inf_\gamma N < +\infty$.
Indeed, if $x$ and $x'$ are far enough there exists some constant $\mu>0$ such that $u(\vert x_k \vert) \geqslant \mu u(\vert x \vert)$ as long as $\vert x_k \vert \geqslant \vert x \vert/2$, especially as long as $k + u(\vert x \vert) + \cdots + u(\vert x_k \vert) \leqslant \vert x \vert/2$. So either $N(x,x') \leqslant \lceil M/\mu \rceil$ or $N +u(\vert x \vert) + \cdots + u(\vert x_N \vert) > \vert x \vert /2$. But in the latter case,
\begin{equation}
\label{eq:contrad-with-N}
M(1 +u(\vert x \vert) + u(\vert x' \vert)) \geqslant d_X(x,x') > \frac{\vert x \vert}{2} - 1 - u(\vert x_{N} \vert)
\end{equation}
where we used the definition of $N$ on the right. To reach a contradiction, note that again by the definition of $N$, $d(x_N, x') < 1+ u(\vert x_N \vert)$, so there exists $L$ such that $d(x_N, x') \leqslant 1+ Lu(\vert x' \vert)$, reproducing the reasoning in the ``Inverses'' part of the proof of Proposition \ref{prop:Ou-coarse-structure}. Hence, there exists some constant $M'$ such that if $x'$ is far enough, $u(\vert x_N \vert) \leqslant M' u(\vert x' \vert)$. Plugging this into \eqref{eq:contrad-with-N} yields an inequality of the form $u(\vert x' \vert) + u(\vert x \vert) \geqslant \rho \vert x \vert$ for some $\rho >0$, which can only occur for bounded $\vert x \vert$.
We conclude that $E \subseteq E_u^{N_{\max}}$, where $N_{\max} = \sup_{(x,x')\in E} \inf_\gamma N$ is a finite integer.
This proves that $(X, d_X, \mathcal E^{O(u)} ) \to ( X, \widehat d_X, \mathcal E^{O(1)} )$ has the axiom (\ref{def:coars-equivalence}.\ref{item:Coarse2}) of a coarse map. In order to check (\ref{def:coars-equivalence}.\ref{item:Coarse1}) we must prove that if $B \times B$ is in $\mathcal{E}^{O(u)}$ then $B$ is bounded; fixing $x \in B$, by \eqref{eq:EOu} a sequence $(x'_n)$ in $B$ escaping to infinity could not stay in any entourage of $\mathcal{E}^{O(u)}$ fixed in advance.
Conversely, if $B$ is bounded then $B \times B$ is in $\mathcal{E}^{O(u)}$, while axiom (\ref{def:coars-equivalence}.\ref{item:Coarse2}) holds for $(X, \widehat d_X, \mathcal E^{O(1)} ) \to ( X, d_X, \mathcal E^{O(u)} )$ by definition of $\widehat d_X$.
\end{proof}
The new distance $\widehat d_X$ may be made geodesic as well, by adding metric edges between pairs of points at distance $1$. Note however that one may lose properness in this process. Isometric group actions are also lost when passing to $\widehat d_X$; in fact its main interest is theoretical, and appears in the next proposition.
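As a back-of-the-envelope instance, take $X = [0, +\infty)$ and $u(r) = \log(e+r)$ (an unbounded positive substitute for $\log$, which we assume admissible): a chain of $E_u$-jumps starting from $0$ advances by roughly $1 + 2\log$ of the current position, so that $\widehat d_X(0,x) \asymp x/\log x$ as $x \to +\infty$; compare the estimates \eqref{eq:lower-estimate-on-hat-distance-to-origin} and \eqref{eq:upper-estimate-on-hat-distance-to-origin} in the proof of Proposition \ref{prop:fromOucoasretoSBE} below.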
Say that a map $\phi: X \to Y$ between pointed metric spaces is radial if there exists $\kappa \geqslant 1$ and $R, R' \geqslant 0$ such that for all $x \in X$,
\begin{equation}
\frac{1}{2 \kappa} \sup(R, \vert x \vert) \leqslant \sup(R', \vert \phi(x) \vert) \leqslant 2 \kappa \sup(R, \vert x \vert).
\end{equation}
Also, call a discrete geodesic between $x$ and $x'$ at distance $n$ in $X$ a finite sequence of points $x_i$ with $x=x_0$, $x_n=x'$ and $d(x_i, x_{i+1}) = 1$ for $0 \leqslant i < n$.
\begin{proposition}
\label{prop:fromOucoasretoSBE}
Let $X$ and $Y$ be geodesic metric spaces, and let $\phi : X \to Y$ be a $O(\log)$-coarse equivalence. Then
\begin{enumerate}[{\rm (1)}]
\item
\label{item:phi-radial}
$\phi$ is radial.
\item
\label{item:phi-is-SBE}
$\phi$ is a $O(\log)$-bilipschitz equivalence.
\end{enumerate}
\end{proposition}
We need a preliminary Lemma.
\begin{lemma}
\label{lem:s-and-t}
For every $M > 0$, there exist $R\geqslant 1$ and $M' > 0$ such that for all positive real numbers $s$ and $t$,
\[
\begin{cases}
\frac{t}{\log t} & \leqslant M \frac{s}{\log s} \\
\inf (s,t) & \geqslant R
\end{cases}
\implies t \leqslant M's
\]
\end{lemma}
\begin{proof}
We will prove first a weaker inequality and then self-improve it.
Taking logarithms on both sides of the first hypothesis we get $\log t - \log \log t \leqslant \log M + \log s - \log \log s$, so for every $\varepsilon \in (0,1]$, provided $R$ is large enough, $(1-\varepsilon/2) \log t \leqslant (1+ \varepsilon/2) \log s$, and then $\log t \leqslant (1+2\varepsilon) \log s$. Plugging this back into the first hypothesis yields $t \leqslant M s \frac{\log t}{\log s} \leqslant M(1+2\varepsilon) s$, which is the desired inequality with $M' = M(1+2\varepsilon)$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:fromOucoasretoSBE}]
Consider the metrics $\widehat d_X$ and $\widehat d_Y$ provided by Lemma \ref{lem:constructing-hatd} on $X$ and $Y$.
Then $\phi : (X, \widehat d_X) \to (Y, \widehat d_Y)$ becomes a $O(1)$-coarse equivalence. Since $\widehat d_X$ and $\widehat d_Y$ are discretely geodesic, $\phi$ is a $\widehat d$-quasiisometry; in particular it is $\widehat d$-radial.
Now, we need to compare $\widehat d$ and $d$.
Start with \eqref{item:phi-radial}; for this we need to compare $\vert x \vert$ and $\widehat d_X(o,x)$ for all $x \in X$.
Let $(x_n)$ be a discrete $\widehat d$-geodesic segment from $o$ (we do not specify an endpoint yet).
We claim that $\vert x_n \vert \leqslant 2n \log n + 2n$ for $n >0$.
Let us proceed by induction on $n$. This holds for $n=1$.
Assume it holds for some $n >0$.
Then,
\begin{align*}
\vert x_{n+1} \vert & \leqslant \vert x_n \vert + d(x_n, x_{n+1}) \\
& \leqslant \vert x_n \vert + 1 + \log (\vert x_n \vert) \\
& \leqslant 2n + 2n\log n + 1 + \log 2 + \log n + \log (1 + \log n) \\
& \leqslant 2n+2n\log n + 2 + 2 \log n \\
& = (2n+2) + (2n+2) \log n \leqslant (2n+2) + (2n+2) \log (n+1)
\end{align*}
where we used $\log 2 < 1$ and $\log n \leqslant n-1$.
Using this inequality, we deduce
\begin{align}
\widehat{d}_X(o,x) \geqslant \inf \left\{ n : 2n(1+\log n) \geqslant
\vert x \vert \right\} \geqslant \frac{\vert x \vert}{1+3 \log \vert x \vert}
\label{eq:lower-estimate-on-hat-distance-to-origin}
\end{align}
Conversely, repeating a construction made in the proof of Lemma \ref{lem:constructing-hatd}, consider a geodesic segment $\gamma : [0, \vert x \vert] \to X$ from $o$ to $x$, and a sequence \[ x_0 = o, \; x_1 = \gamma(2), \; \ldots, \; x_{i+1} = \gamma(\vert x_i \vert + 1 + \log \vert x_i \vert) \]
and define $N$ such that $x_N$ is the farthest element from $o$ before reaching $x$; in this way, $\widehat d_X(o,x) \leqslant N+1$. By induction on $n$, we can prove that $\vert x_n \vert \geqslant n \log n$ for all $n$. So
\begin{equation}
\widehat d_X(o,x) \leqslant 1+ \inf \left\{ n : n \log n \geqslant
\vert x \vert \right\} \leqslant 1 + \frac{\vert x \vert}{1+ \log \vert x \vert}.
\label{eq:upper-estimate-on-hat-distance-to-origin}
\end{equation}
We are now ready to prove \eqref{item:phi-radial}. We know that $\phi$ is $(\widehat d_X, \widehat d_Y)$-radial; so there exists $\kappa_0$ such that
\begin{equation}
\frac{\vert \phi(x) \vert}{1+ 3 \log \vert \phi(x) \vert} \leqslant
\widehat{d}_Y(o, \phi(x)) \leqslant 2 \kappa_0 \left( 1+ \frac{\vert x \vert}{1+\log \vert x \vert} \right)
\end{equation}
Combining both inequalities, $\vert \phi(x) \vert$ and $\vert x \vert$ satisfy the hypotheses on $t$ and $s$ in Lemma \ref{lem:s-and-t}. We conclude from the Lemma that $\phi$ is radial.
The proof of \eqref{item:phi-is-SBE} will now rely on \eqref{item:phi-radial} together with an estimate akin to \eqref{eq:lower-estimate-on-hat-distance-to-origin} and \eqref{eq:upper-estimate-on-hat-distance-to-origin}, but where we replace $o$ with $x' \in X$.
Let $x, x' \in X$; assume $2 \leqslant \vert x \vert \leqslant \vert x' \vert$, and let $\gamma$ be a geodesic segment from $x$ to $x'$. Define $x_0 = x$, $x_{i+1} = \gamma (d(x_0, x_i) + 1 + \log \vert x_i \vert)$ as long as this makes sense (let $n$ be the largest such index, so that $x_n$ is the closest to $x'$ among all the $x_i$'s).
By the triangle inequality, for all $i$ such that $0 \leqslant i \leqslant n$,
\begin{equation}
\notag
\vert x_i \vert \leqslant \vert x \vert + d(x, x_i) \leqslant \vert x \vert + d(x,x') \leqslant 2 \vert x' \vert + \vert x \vert \leqslant 3 \vert x' \vert.
\label{eq:xi-leq-3x}
\end{equation}
From this inequality, we deduce that
\begin{equation}
\notag
\widehat d_X(x,x') \geqslant \operatorname{length}(\gamma)/(2\log (3 \vert x' \vert)) \geqslant \frac{d(x,x')}{4 \log \vert x' \vert}.
\end{equation}
Conversely, if $\inf_t \vert \gamma(t) \vert \leqslant \vert x' \vert/2$, then $d(x,x') \geqslant \vert x' \vert /2$. So
\begin{align*}
\widehat d(x,x') & \leqslant \widehat d(x,o) + \widehat d(o,x') \leqslant 2 + \frac{2 \vert x' \vert}{1 + \log \vert x' \vert}
\leqslant 2 + \frac{4 d(x,x')}{1 + \log \vert x' \vert}.
\end{align*}
Otherwise, $\inf_t \vert \gamma(t) \vert > \vert x' \vert /2$, and then
$\widehat d(x,x') \leqslant \frac{d(x,x')}{\log(\vert x' \vert /2)}$.
Combining the previous inequalities, we get that for every pair $x,x'$ with $\sup(\vert x \vert, \vert x' \vert)$ large enough,
\begin{equation}
\label{eq:compare-d-dhat}
\frac{1}{\lambda_X} \frac{d_X(x,x')}{\log \sup(\vert x \vert, \vert x' \vert)}
\leqslant \widehat d_X(x,x')
\leqslant \lambda_X \frac{d_X(x,x')}{\log \sup(\vert x \vert, \vert x' \vert)}
\end{equation}
for some $\lambda_X >1$. A similar inequality holds for pairs of points in $Y$, with a multiplicative factor $\lambda_Y$.
We are ready to finish the proof. Assume that $\phi$ is a $(\kappa_0, c_0)$ quasiisometry with respect to $\widehat d_X$ and $\widehat d_Y$.
Then
\begin{equation}
\notag
- c_0 + \frac{1}{\kappa_0} \widehat d_X(x,x') \leqslant \widehat d_Y(\phi(x), \phi(x')) \leqslant \kappa_0 \widehat d_X(x,x') + c_0
\end{equation}
for all $x, x'$. So, setting $\lambda = \sup(\lambda_X, \lambda_Y)$ and using \eqref{eq:compare-d-dhat} and its counterpart in $Y$,
\begin{equation}
\notag
- c_0 + \frac{1}{\lambda^2 \kappa_0} \frac{d_X(x,x')}{\log \sup(\vert x\vert, \vert x' \vert)} \leqslant \frac{d_Y(\phi(x), \phi(x'))}{\log \sup(\vert \phi(x) \vert, \vert \phi(x') \vert)} \leqslant \lambda^2 \kappa_0 \frac{d_X(x,x')}{\log \sup(\vert x\vert, \vert x' \vert)} + c_0
\end{equation}
Using that $\phi$ is radial, we know that $\vert \phi(x) \vert$ and $\vert \phi(x') \vert$ are within linear control from $\vert x \vert$ and $\vert x' \vert$. So we may rewrite the previous estimate as
\begin{equation}
\notag
- c_1 + \frac{1}{\kappa_1} \frac{d_X(x,x')}{\log \sup(\vert x\vert, \vert x' \vert)} \leqslant \frac{d_Y(\phi(x), \phi(x'))}{\log \sup(\vert x \vert, \vert x' \vert)} \leqslant \kappa_1 \frac{d_X(x,x')}{\log \sup(\vert x\vert, \vert x' \vert)} + c_1
\end{equation}
where $\kappa_1 \geqslant 1$ and $c_1 \geqslant 0$. Multiplying by $\log \sup(\vert x \vert, \vert x' \vert)$ on both sides yields the required \eqref{eq:sbe-1}.
\end{proof}
\begin{remark}
The assumption $u =\log$ made in Proposition \ref{prop:fromOucoasretoSBE} is possibly too strong.
On the other hand, it is not true that every coarse equivalence between $o(r)$-coarse structures is a $o(r)$-bilipschitz equivalence: consider $\phi : \mathbf R^n \to \mathbf R^n$ such that $\phi(x) = \Vert x \Vert x$. A notable distinction between $\mathcal{E}^{O(\log)}$ and $\mathcal{E}^{o(r)}$ is that the former is monogenic whereas the latter is not.
Also, observe that Lemma \ref{lem:s-and-t} breaks down for $u(t)= t^e$, $e >0$.
\end{remark}
\subsection{Invariance of the geometric dimension for connected Lie groups}
\label{subsec:assouad-nagata}
\begin{definition}[sublinear Higson function]
Let $X$ be a proper metric space.
Define the $\operatorname{C}^\ast$-algebra $C_{h_L}(X)$ of sublinear Higson functions on $X$ as
\[ \left\{ f \in C_b(X, \mathbf C) \;: \, \forall E \in \mathcal E^{o(r)}, \lim_{r \to + \infty} \sup_{(x,x') \in E, \inf (\vert x \vert, \vert x' \vert) \geqslant r} \vert df (x,x') \vert = 0 \right\} \]
where $f \in C_b$ means $\sup \vert f \vert < + \infty$ and $df(x,x') = f(x) - f(x')$.
\end{definition}
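For example, on $X = \mathbf R^n$ the function $f(x) = \sin \log (1 + \vert x \vert)$ is a sublinear Higson function: if $(x,x') \in E \in \mathcal E^{o(r)}$, then by the mean value theorem $\vert df(x,x') \vert \leqslant \frac{d_X(x,x')}{1 + \inf(\vert x \vert, \vert x' \vert)}$, which tends to $0$ as $(x,x')$ escapes to infinity in $E$.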
\begin{figure}
\begin{tikzcd}
\nu X \arrow[d, two heads] & \text{Higson corona} \\
\nu_L X \arrow[d, two heads] & \text{sublinear Higson corona} \\
\partial_\infty X & \text{Gromov boundary}
\end{tikzcd}
\caption{Coronae and Gromov boundary for hyperbolic $X$.}
\label{fig:coronae}
\end{figure}
\begin{remark}[Compare Fukaya \cite{Fukaya}, 3.1]
$f$ is Higson sublinear if and only if there exists $C_f < + \infty$ such that
for all $x, x'$ in $X$ and $R >0$ large enough, if $\inf(\vert x \vert, \vert x' \vert) \geqslant R$ and $d_X(x,x') \leqslant R/2$, then $\vert f(x) - f(x') \vert \leqslant C_f \frac{d_X(x,x')}{R}$.
\end{remark}
The closure $\overline{C_{h_L}(X)}$ is a unital $\operatorname{C}^\ast$-algebra; once it is modded out by the ideal of functions vanishing at infinity, the spectrum of the resulting algebra is a compact space, the sublinear Higson corona $\nu_L X$ of $X$ \cite[Definition 2.37]{RoeLectures}.
\begin{remark}[See Figure \ref{fig:coronae}]
If $X$ is a Gromov-hyperbolic space, then the Gromov functions on $X$ are Higson sublinear.
The Higson sublinear functions are Higson. It follows that the sublinear Higson corona sits in between the Higson corona $\nu X$ and the Gromov boundary $\partial_\infty X$ seen in the topological category.
\end{remark}
The following is a generalization of \cite[Proposition 2.1]{DranishnikovSmith}.
\begin{proposition}
\label{prop:SBE-to-corona}
Let $X$ and $Y$ be metric spaces. Let $\nu_L X$ and $\nu_L Y$ be their sublinear Higson coronae.
Then, any $o(r)$-bilipschitz equivalence $f : X \to Y$ induces a homeomorphism
$\nu_L f : \nu_L X \to \nu_L Y$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:sbe-is-coarse}, a $o(r)$-bilipschitz equivalence $X \to Y$ represents a coarse equivalence $(X, d_X, \mathcal{E}^{o(r)}) \to (Y, d_Y, \mathcal{E}^{o(r)})$, and then induces a homeomorphism between the sublinear Higson coronae \cite[Corollary 2.42]{RoeLectures}.
\end{proof}
\begin{theorem}[{\cite[Theorem 3.10 and Corollary 3.11]{DranishnikovSmith}}; see also {\cite{DydakLipExt}}]
\label{th:DS}
Let $X$ be a proper connected metric space. Assume that $\operatorname{Isom}(X)$ acts cocompactly on $X$, and that $\operatorname{asdim}_{\mathrm{AN}}(X) < + \infty$.
Then
\begin{equation}
\dim \nu_L X = \operatorname{asdim}_{\mathrm{AN}}(X).
\end{equation}
\end{theorem}
\begin{theorem}[{\cite[Theorem 7.9]{HigesPeng}}]
\label{th:HP}
Let $G$ be a connected Lie group, and let $X$ be any geometric model of $G$. Then
\begin{equation}
\operatorname{asdim}_{\mathrm{AN}}(X) = \dim G - \dim K,
\end{equation}
where $K$ is any maximal compact subgroup of $G$.
\end{theorem}
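For instance, $\operatorname{asdim}_{\mathrm{AN}}(\mathbf R^n) = n$ (here $K = 1$), while for $G = \operatorname{SL}(2,\mathbf R)$, whose maximal compact subgroups are the conjugates of $\operatorname{SO}(2)$, any geometric model (e.g.\ the hyperbolic plane) has $\operatorname{asdim}_{\mathrm{AN}} = 3 - 1 = 2$.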
Theorem \ref{th:geodim} from the introduction now follows by combining Proposition \ref{prop:SBE-to-corona} with Theorems \ref{th:DS} and \ref{th:HP}.
To the best of the author's knowledge, the only connected Lie group for which some description of the sublinear Higson corona is currently available is $\mathbf R^n$: Fukaya proved that $\nu_L \mathbf R^n \simeq S^{n-1} \times \nu_L \mathbf R$ \cite{Fukaya}. These spaces are ``big'' and not metrizable, so it does not seem easy to extract fine topological invariants from them as one would do for, say, the Gromov boundary.
\begin{ques}
Let $X$ be a proper metric space. Is the Čech cohomology group $\check{H}^1(\nu_L X, \mathbf Q)$ finitely generated?
\end{ques}
The answer is known to be negative for the Higson coronae associated to bounded coarse structures \cite{keesling}; nevertheless Fukaya proves that $\nu_L \phi$ is homotopic to the identity whenever $\phi \in \operatorname{GL}(n,\mathbf R)$ has positive determinant.
\section{Real hyperbolic spaces and Theorem \ref{th:Tukia-SBE}}
\label{sec:proofB}
In this section we prove Theorem \ref{th:Tukia-SBE} on Lie groups $O(u)$-bilipschitz equivalent to real hyperbolic spaces. \S\ref{subsec:pinching} gathers preliminary results on pinching and conformal dimension, and \S\ref{subsec:degenerations} sets the terminology of degenerations and deformations. The equivalences of Theorem \ref{th:Tukia-SBE} are proved in \S\ref{subsec:proofA}.
\subsection{Heintze groups, conformal dimension and pinching}
\label{subsec:pinching}
In 1955, Jacobson proved that all real Lie algebras that possess a derivation with no purely imaginary eigenvalue are nilpotent \cite{JacobsonCNLA}. Later, Heintze characterized the semidirect products of a nilpotent Lie algebra by a derivation whose spectrum has positive real part as the Lie algebras of Lie groups that possess at least one negatively curved left-invariant metric (note that these are centerless) \cite{Heintze}. Most importantly, Heintze showed that the negatively curved metrics on these groups exhaust all the isometrically homogeneous negatively curved manifolds, shedding light on the earlier result of Kobayashi that these spaces had to be simply connected \cite{KobayashiHMN}.
\begin{definition}[{\cite{CoTesContracting}}]
\label{def:heintze-type}
Let $G$ be a Lie group with finitely many components. Then $G$ is of Heintze type if there exist a simply connected nilpotent Lie group $N$, a derivation $\alpha \in \operatorname{Der}(\mathfrak n)$ with $\inf \left\{ \Re \lambda : \lambda \in \operatorname{Sp}(\alpha) \right\} > 0$ and a compact group $K$ with a representation $\rho: K \to \operatorname{Aut}(N)$ such that
\begin{equation}
\label{eq:heintze}
G = (K \times \mathbf R) \ltimes N,
\end{equation}
where $(k,t).n = \rho(k)\left( e^{t \alpha}(n) \right)$, $e^{t\alpha}$ denoting the automorphism of $N$ with differential $e^{t\alpha}$ on $\mathfrak n$.
A Heintze group is a group of Heintze type with $K=1$.
\end{definition}
By normalized Jordan form of a derivation $\alpha$ as in Definition \ref{def:heintze-type}, we mean the Jordan form of the unique positive multiple $[\alpha]$ of $\alpha$ such that
\begin{equation}
\label{eq:normalHeintze}
\inf \left\{ \Re \lambda : \lambda \in \operatorname{Sp}([\alpha]) \right\} = 1.
\end{equation}
Note that $N \rtimes_\alpha \mathbf R \simeq N \rtimes_{[\alpha]} \mathbf R$ (Compare Example \ref{exm:affine}.)
The following useful fact is proved in E. Sequeira's thesis using a highest weight argument \cite[Proposition 5.2.2]{Sequeirathesis}\footnote{\cite{Sequeirathesis} has the assumption that $G_\alpha$ and $G_\beta$ are purely real, but the general proof goes along the same lines.}.
\begin{proposition}
Let $N$ be a simply connected nilpotent Lie group.
If the Heintze groups $G_\alpha = N \rtimes_\alpha \mathbf R$ and $G_\beta = N \rtimes_\beta \mathbf R$ are isomorphic, then $\alpha$ and $\beta$ have the same normalized Jordan form.
\end{proposition}
\begin{definition}[after {\cite[Section 4]{EberleinHeber}}]
\label{def:amalgam}
Given two Heintze groups $G = N \rtimes_\alpha \mathbf R$ and $G' = N' \rtimes_{\alpha'} \mathbf R$ and $\lambda >0$, we write $G \; \sharp \; (G')^\lambda = (N \times N') \rtimes \mathbf R$ where $t.(n,n') = (e^{t\alpha} n, e^{\lambda t \alpha'} n')$, with the convention that both $\alpha$ and $\alpha'$ are normalized as in \eqref{eq:normalHeintze}, and call this group the Heintze amalgam of $G$ and $G'$. We denote the Lie algebra $\operatorname{Lie}(G \; \sharp \; (G')^\lambda )$ by $\mathfrak g \; \sharp \; \lambda \mathfrak g'$.
\end{definition}
A Heintze group is purely real if it is completely solvable, i.e. if $\operatorname{Sp}(\alpha) \subseteq \mathbf R$; every group of Heintze type has a Riemannian model in common with a purely real Heintze group, which we call its shadow (See \cite{AlekHMN} and \S\ref{subsec:nilpotent}).
If $G$, $N$, $\alpha$ are as in Definition \ref{def:heintze-type} with $K=1$ and if $\mathfrak n = \operatorname{Liespan}(\ker([\alpha] - 1))$, then we say that $G$, resp. $\mathfrak g$, is a Carnot-type Heintze group, resp. algebra. In this case the isomorphism type of $G$ does not depend on $\alpha$, so we abbreviate $G = N \rtimes_{\mathrm{Carnot}} \mathbf R$ \cite[Proposition 3.5]{CrnulierSystolicGrowth}. Carnot-type Heintze groups are purely real.
\begin{example}
\label{exm:bnK}
Let $\mathbf K$ be a division algebra over $\mathbf R$ and $n$ a positive integer, with $n=2$ if $\mathbf K = \mathbf {Ca}$. $\mathfrak b(n,\mathbf K)$ is the solvable Lie algebra over the vector space $V = \mathbf K^{n-1} \oplus \Im \mathbf K \oplus \mathbf R$ (where $\Im \mathbf K = 0$ if $\mathbf K = \mathbf R$) with Lie bracket
\begin{equation*}
\left[ (z_i, \tau, s), (z'_i, \tau', s') \right] =\left( sz_i'- s'z_i,\; 2s\tau' - 2s'\tau + \sum_{i=1}^{n-1} \Im(z_i \overline{z'_i}),\; 0 \right).
\end{equation*}
$\mathfrak b(n, \mathbf K)$ for $\mathbf K = \mathbf R, \mathbf C, \mathbf H$ is the maximal completely solvable subalgebra of $\mathfrak o(n,1)$, $\mathfrak u(n,1)$, $\mathfrak {sp}(n,1)$ respectively.
\end{example}
The Heintze groups with Lie algebra $\mathfrak b(n, \mathbf K)$ are exactly those that carry (rank one) symmetric metrics \cite{Heintze} (for $\mathbf K= \mathbf R$, all the left-invariant metrics are symmetric, see e.g. \cite{LauretDegenerations}).
The topological dimension $\operatorname{Topdim} \partial_\infty$ and the conformal dimension $\operatorname{Cdim} \partial_\infty$ are quasiisometry invariants of Gromov-hyperbolic locally compact compactly generated groups (\cite{MTconfdim}, \cite{CCMT}). For a group of Heintze type $G = (K \times \mathbf R) \ltimes_\alpha N$,
\begin{align}
\label{Topdim-Heintze}
\operatorname{Topdim} \partial_\infty G & = \dim G - \dim K - 1 = \operatorname{geodim} G - 1; \\
\label{Cdim-Heintze}
\operatorname{Cdim} \partial_\infty G & = \operatorname{Tr} [\alpha] .
\end{align}
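For example, for the real hyperbolic space $\mathbf H^n_{\mathbf R} = \mathbf R^{n-1} \rtimes_{\operatorname{id}} \mathbf R$ both invariants equal $n-1$, while for the purely real Heintze group with Lie algebra $\mathfrak b(2, \mathbf C)$ from Example \ref{exm:bnK} (a model of the complex hyperbolic plane), where $[\alpha] = \operatorname{diag}(1,1,2)$ on the Heisenberg algebra, one gets $\operatorname{Topdim} \partial_\infty = 3$ while $\operatorname{Cdim} \partial_\infty = \operatorname{Tr}[\alpha] = 4$.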
Though not explicitly stated there, the following is a direct consequence of \cite[Section 5]{PansuDimConf}.
\begin{theorem}[After Pansu]
\label{th:pansuconf}
Let $(M,g)$ be a complete, simply connected Riemannian manifold of dimension $n \geqslant 2$. Let $b \geqslant 1$. Assume that $M$ is $-1/b^2$-pinched, i.e. (up to normalization of $g$) $- b^2 \leqslant K_g \leqslant -1$. Then
\begin{equation}
\label{eq:Cdim-pinching}
\operatorname{Cdim} \partial_\infty M \leqslant (n-1)b.
\end{equation}
\end{theorem}
\begin{proof}
It follows from the lower bound on the sectional curvature that $\operatorname{Ric} \geqslant -(n-1)b^2g$. Then, by the Bishop-Gromov inequality
\begin{equation*}
\operatorname{vol}(B(x,r)) \leqslant \operatorname{cst.} \int_0^r \sinh^{{n-1}}(b t) dt,
\end{equation*}
so that the volume-theoretic entropy $h = \limsup_{r \to + \infty} r^{-1} \log \operatorname{vol}(B(x,r))$ is bounded above by $(n-1)b$.
Pansu proves $\operatorname{Cdim} \partial_\infty M \leqslant h$ \cite[Lemme 5.2]{PansuDimConf}.
Combining these inequalities yields the desired \eqref{eq:Cdim-pinching}.
\end{proof}
\begin{corollary}
Let $G$ be a group of Heintze type; then every Riemannian model of $G$ has a pinching of at least
\begin{equation}
\label{eq:pansu-bound}
- \left( \frac{\operatorname{geodim} G -1}{\operatorname{Tr}[\alpha]} \right)^2.
\end{equation}
\end{corollary}
The bound \eqref{eq:pansu-bound} is not optimal. Building on a theorem of Belegradek and Kapovitch and curvature computations, Healy determined the exact optimal pinching (which is attained) when $G$ is Carnot-type and $N$ has a lattice (equivalently, when $\mathfrak n$ has a $\mathbf Q$-form) and found
an optimal pinching of $-1/s^2$, where $s$ is the nilpotency step of $N$ \cite[Theorem 4.3]{healy2021pinched}. Note that for Carnot type groups, $s$ is the spectral radius of $[\alpha]$ so $\operatorname{Tr}[\alpha] \leqslant s(\operatorname{Topdim} \partial_\infty G) = s(\operatorname{geodim} G -1)$.
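As a sanity check, take the model of the complex hyperbolic plane with Lie algebra $\mathfrak b(2, \mathbf C)$: there $s = 2$, $\operatorname{geodim} G - 1 = 3$ and $\operatorname{Tr}[\alpha] = 4$, so \eqref{eq:pansu-bound} gives $-9/16$, while Healy's optimal pinching is $-1/s^2 = -1/4$, attained by the symmetric metric (complex hyperbolic spaces are quarter-pinched).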
\begin{corollary}
\label{cor:on-pinching}
Let $G$ be a group of Heintze type. Assume that $G$ has Riemannian models with pinching arbitrarily close to $-1$. Then $\alpha$ has all its eigenvalues with the same real part, and $N$ is abelian.
\end{corollary}
\begin{proof}
Order the real parts of the eigenvalues of $\alpha$ as $\sigma_1 \leqslant \cdots \leqslant \sigma_r$. Pansu's theorem forces the equality in
\[ \sigma_1 \dim \mathfrak n \leqslant \sum_{\lambda} \Re \lambda = \operatorname{tr}(\alpha), \]
so that all the eigenvalues of $\alpha$ have real part $\sigma_1$.
Denote by $\mathfrak n^\lambda$ the generalized eigenspace of $\alpha$ with eigenvalue $\lambda$, and observe that $[\mathfrak n^\lambda, \mathfrak n^\mu] \subseteq \mathfrak n^{\lambda + \mu}$ for any $\lambda, \mu$.
Consequently, since $\oplus_{\tau \in \mathbf R} \mathfrak n^{\sigma+i\tau} = \mathfrak n$ for $\sigma = \sigma_1 > 0$, we get $[\mathfrak n, \mathfrak n] \subseteq \oplus_{\tau \in \mathbf R} \mathfrak n^{2\sigma+i\tau} = 0$, and $N$ is abelian.
\end{proof}
\begin{remark}
The conclusion that $N$ is abelian remains valid if a single left-invariant metric on $G$ is assumed to be strictly more than quarter-pinched, by a theorem of Eberlein and Heber, who also characterized the Heintze groups with a quarter-pinched Riemannian metric \cite{EberleinHeber}.
\end{remark}
We note that the converse of Corollary \ref{cor:on-pinching} also holds.
\begin{proposition}
\label{prop:Riemannian-computation}
Let $S=\mathbf R^{n-1}\rtimes_\alpha \mathbf R$, where $\operatorname{Sp}(\alpha) \subseteq \lbrace 1 + i \tau : \tau \in \mathbf R \rbrace$.
Then, $S$ has left invariant Riemannian metrics with pinching arbitrarily close to $-1$.
Moreover, if $K$ is a compact group of automorphisms of $S$, then one can assume that those metrics are all $K$-invariant.
\end{proposition}
\begin{proof}
Let $\varepsilon >0$ be a parameter.
Let $(e_1, \ldots e_{n-1})$ be a basis of $\mathbf R^{n-1}$ in which $\alpha$ appears in real Jordan normal form; group the generalized eigenspaces with non-real eigenvalue first (if there are any), so that there is $m$ such that in the basis
\[ \mathcal F_\varepsilon = (e_1, e_2, \varepsilon e_3, \varepsilon e_4, \ldots, \varepsilon^{m-1} e_{2m-1}, \varepsilon^{m-1}e_{2m}, e_{2m+1}, \ldots, \varepsilon^{n-2m-2} e_{n-1}), \]
$\alpha$ has a block upper triangular form with blocks of the form
\[
J'_{2d}(1+i \tau) = \begin{bmatrix}
A_\tau & \varepsilon I & & \\
& \ddots & \ddots & \\
& & \ddots & \varepsilon I \\
& & & A_\tau
\end{bmatrix} \text{ where } \; A_\tau = \begin{pmatrix} 1 & \tau \\ -\tau & 1
\end{pmatrix}\]
and
\[
J_d(1) = \begin{bmatrix}
1 & \varepsilon & & \\
& \ddots & \ddots & \\
& & \ddots & \varepsilon \\
& & & 1
\end{bmatrix}
\]
where $d$ denotes the size of the block.
Consider the left invariant metric $ \langle \cdot , \cdot \rangle_\varepsilon$ such that $\mathcal F_\varepsilon$ is orthonormal and $T \perp [\mathfrak s, \mathfrak s]$, $\langle T, T \rangle = 1$ for some $T$ such that $\alpha = \operatorname{ad}(T)$.
Decompose $\operatorname{ad}(T) = D_\varepsilon + S_\varepsilon$, where $D_\varepsilon$ is symmetric and $S_\varepsilon$ is skew-symmetric with respect to $\langle \cdot, \cdot \rangle_\varepsilon$. To express the Riemann curvature tensor, following Heintze, Eberlein and Heber it is convenient to introduce\footnote{They are denoted $D_0, S_0$ in \cite{Heintze} and $D_0, S_0, N_0$ in \cite{EberleinHeber}.} $N_\varepsilon = D_\varepsilon^2 +[D_\varepsilon,S_\varepsilon]$. For all $X$, $Y$, $Z$ in $\mathfrak s$,
\begin{align*}
R_{X,Y} Z = & - \langle D_\varepsilon \underline Y, Z \rangle D_\varepsilon \underline X + \langle D_\varepsilon \underline X, Z \rangle D_\varepsilon \underline Y \\
& - \left\langle \underline Z, \langle X,T \rangle N_\varepsilon \underline Y - \langle Y, T \rangle N_\varepsilon \underline X \right\rangle T \\
& + \langle Z, T \rangle (\langle X, T \rangle N_\varepsilon \underline Y - \langle Y,T \rangle N_\varepsilon \underline X ) ,
\end{align*}
where $\underline X$, $\underline Y$ and $\underline Z$ are the orthogonal projections of $X,Y,Z$ to $[\mathfrak s, \mathfrak s]$.
(This is differently expressed as in, but still in agreement with, \cite{EberleinHeber} who performed a more general computation where $[\mathfrak s, \mathfrak s]$ is not assumed abelian and provided $R_{X,Y}Z$ for $X,Y,Z \in [\mathfrak s, \mathfrak s]$ and the sectional curvature of all planes.)
Any $2$-plane $\pi$ in $\mathfrak s$ can be generated by $u, v \in \mathfrak s$ such that $v\in [\mathfrak s, \mathfrak s]$, so that $v = \underline v$.
Observe that as $\varepsilon \to 0$, $D_\varepsilon \to I$ and $N_\varepsilon \to I$ so that
\begin{align*}
K^\varepsilon(\pi) &= \frac{\langle R^\varepsilon (u,v)v,u \rangle}{\langle u, u \rangle \langle v,v \rangle - \langle u,v \rangle^2} \\
& = \frac{\langle -D_\varepsilon \underline u, \underline u \rangle \langle D_\varepsilon v, v \rangle + \langle D_\varepsilon \underline u, v \rangle^2
- \langle u, T \rangle^2 \langle v, N_\varepsilon v \rangle
}{\langle u, u \rangle \langle v,v \rangle - \langle u,v \rangle^2} \\
& \longrightarrow_{\varepsilon \to 0}
\frac{- \langle \underline u, \underline u \rangle \langle v, v \rangle + \langle u, v \rangle^2 - \langle u,T \rangle^2 \langle v, v \rangle}{\langle u, u \rangle \langle v,v \rangle - \langle u,v \rangle^2} = -1,
\end{align*}
using that $\langle u, u \rangle = \langle \underline u, \underline u \rangle + \langle u, T \rangle^2$ and $\langle \underline u, \underline v\rangle = \langle u, v \rangle$.
Finally, the pointwise convergence of a rational function on a compact Grassmannian implies its uniform convergence, so $\sup K - \inf K$ goes to zero and $\sup K / \inf K$ goes to $1$ as $\varepsilon \to 0$.
Let us now prove the ``moreover'' part.
$K$ has a normal connected subgroup $K_0$ whose adjoint representation stabilizes the direct sum $V$ of the $J'_{2}$- and $J_1$-type blocks, while $\Phi = K/K_0$ is a finite retract of $K$ that permutes the blocks of equal size; $K$ splits as $K_0 \rtimes \Phi$. In our construction, $\langle \cdot , \cdot \rangle_\varepsilon$ is already $\operatorname{Ad}_\Phi$-invariant. In order to make it $\operatorname{Ad}_K$-invariant, we can average the metric on $V$ using Weyl's unitary trick, decompose $\mathbf R^{n-1} = V \oplus V'$ and reproduce our variation of orthonormal basis $\mathcal F_\varepsilon$ only on the complementary subspace $V'$, which is the sum of the Jordan blocks of larger size.
\qedhere
\end{proof}
\begin{remark}
Using Eberlein and Heber's amalgams (Definition \ref{def:amalgam}) and curvature estimates would simplify the proof of the first part of Proposition \ref{prop:Riemannian-computation} (yet not drastically so) by reducing it to the case where $\alpha$ has a single Jordan block as Jordan normal form. See also Remark \ref{rem:dejavu}.
\end{remark}
\begin{ques}
\label{ques:minimize-pinching}
Let $G = (K \times \mathbf R) \ltimes N$ be a group of Heintze type.
Is it true that among all negatively curved Riemannian models of $G$, an optimal pinching is attained if and only if $\alpha$ is diagonalizable over $\mathbf C$?
\end{ques}
Note that the (Ahlfors-regular) conformal dimension of $\partial_\infty [\mathbf R^{n-1} \rtimes_{\alpha} \mathbf R]$ is attained if and only if $\alpha$ is diagonalizable over $\mathbf C$ \cite{BonkKleinerCdim}.
\subsection{Degenerations and deformations}
\label{subsec:degenerations}
We provide more information here than is strictly needed for Theorem \ref{th:Tukia-SBE}; this will be useful in the discussion in \S\ref{subsec:nilpotent}.
\subsubsection{Setting}
Let $\mathcal L_n(\mathbf R) \subseteq (\Lambda^2 \mathbf R^n)^\ast \otimes \mathbf R^n$ be the subset of Lie algebra laws on $\mathbf R^n$.
Note that $\mu \in (\Lambda^2 \mathbf R^n)^\ast \otimes \mathbf R^n$ is in $\mathcal L_n(\mathbf R)$ if and only if the Jacobi identity holds for $\mu$, that is, if and only if
\begin{equation}
\label{eq:mu-circ-nu}
\mu^2(X_1 \wedge X_2 \wedge X_3) = \sum_{\sigma} \mu \left( \mu(X_{\sigma(1)} \wedge X_{\sigma(2)}) \wedge X_{\sigma(3)} \right) = 0
\end{equation}
for every $X_1, X_2, X_3 \in \mathbf R^n$, the sum being taken over the three cyclic permutations $\sigma$ of $\lbrace 1, 2, 3 \rbrace$.
$\mathcal L_n(\mathbf R)$ has two topologies: the Zariski topology, and the topology it inherits as a subspace of $\Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$ with the operator norm, that we will call the metric topology.
It follows from Engel's theorem that the nilpotent laws form a Zariski closed subset $\mathcal N_n(\mathbf R)$.
Let $\lambda \in \mathcal L_n(\mathbf R)$.
$\mathbf R$, resp. $\lambda$, is a $\lambda$-module for the trivial, resp. the adjoint representation of $\lambda$. Following Chevalley and Eilenberg \cite[Theorem 10.1]{ChevalleyEilenberg} there are differential complexes $K_\lambda$ and $K'_\lambda$ on $\Lambda^\bullet(\mathbf R^n)^\ast$ and $\Lambda^\bullet(\mathbf R^n)^\ast \otimes \mathbf R^n$ with the following exterior derivatives $d_\lambda$, resp. $d_\lambda'$ on degree $q$-forms, resp.\ on $\lambda$-valued degree $q$-forms $\omega$:
\begin{align}
d_\lambda \omega (x_1, \ldots, x_{q+1}) & = \sum_{k < \ell} (-1)^{k+\ell+1} \omega([x_k, x_{\ell}], x_1, \ldots , \widehat {x_k}, \ldots, \widehat {x_\ell}, \ldots , x_{q+1}) \label{eq:chevalley-eilenberg-trivial} \\
d'_\lambda \omega (x_1, \ldots, x_{q+1}) & = \sum_{k < \ell} (-1)^{k+\ell+1} \omega([x_k, x_{\ell}], x_1, \ldots , \widehat {x_k}, \ldots, \widehat {x_\ell}, \ldots , x_{q+1}) \notag \\
& \quad + \sum_{k} (-1)^{k+1} [x_k, \omega(x_1, \ldots, \widehat{x_k}, \ldots, x_{q+1})].
\label{eq:chevalley-eilenberg-adjoint}
\end{align}
The group $\operatorname{GL}(n,\mathbf R)$ acts on $\mathcal{L}_n(\mathbf R)$ by restricting its natural action on $\Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$.
We denote the orbit of $\lambda$ by $O(\lambda)$ or $O_\mathfrak g$ if $\mathfrak g$ is a Lie algebra isomorphic to $\lambda$; it is a smooth submanifold of $\Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$ of dimension $n^2 - \dim \operatorname{Der}(\mathfrak g)$, embedded in $\mathcal L_n(\mathbf R)$.
Moreover, $T_\lambda {O_{\mathfrak g}} = B^2(\lambda, \lambda)$, as is most conveniently seen by differentiating the action of $\mathrm{GL}(n, \mathbf R)$ at $\lambda$: for every $\eta \in \mathfrak{gl}(\mathbf R^n)$,
\begin{align}
\label{eq:linearization}
e^\eta \lambda( e^{-\eta} X \wedge e^{-\eta} Y) - \lambda(X \wedge Y) = d'_\lambda \eta (X \wedge Y) + O(\Vert \eta \Vert^2).
\end{align}
\begin{example}
\label{exm:affine}
Let $\mathfrak g = \mathfrak {aff}$ be the $2$-dimensional affine Lie algebra with basis $(X,T)$ such that $[T,X] = X$, and dual basis $(dx, dt)$. Then $X \otimes dx \wedge dt \in B^2(\mathfrak g, \mathfrak g)$; in the language of \S \ref{subsec:pinching}, $\mathbf R \rtimes_{1+\varepsilon} \mathbf R \simeq \mathbf R \rtimes_1 \mathbf R \simeq \mathfrak g$.
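A quick verification, with the sign conventions of \eqref{eq:chevalley-eilenberg-adjoint}: for $\eta = dt \otimes T$ one has $d'_{\mathfrak g} \eta (T \wedge X) = \eta([T,X]) + [T, \eta(X)] - [X, \eta(T)] = 0 + 0 - [X,T] = X$, so that $X \otimes dt \wedge dx = d'_{\mathfrak g}(dt \otimes T)$, and $X \otimes dx \wedge dt = - d'_{\mathfrak g}(dt \otimes T)$ is indeed a coboundary.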
\end{example}
\begin{definition}
Let $\mathfrak g$ and $\mathfrak h$ be Lie algebras of dimension $n$ over $\mathbf R$.
We say that $\mathfrak g$ degenerates to $\mathfrak h$, denoted $\mathfrak g \to_{\mathrm {deg}} \mathfrak h$, if $\overline {O_\mathfrak h} \subsetneq \overline {O_{\mathfrak g}}$ where the closure is taken for the Zariski topology.
\end{definition}
Note that this is equivalent to requiring that a single $\mu \in O_{\mathfrak h}$ lies in $\overline{O_{\mathfrak g}}$.
Since the metric topology is finer than the Zariski topology, a sufficient condition to have $\mathfrak g \to_{\mathrm {deg}} \mathfrak h$ is that there is a sequence $\lambda_0, \ldots, \lambda_r$ such that
\begin{equation}
\label{eq:sequence-of-metric-degenerations}
\begin{cases}
\lambda_0 \in O_{\mathfrak g}, \; \lambda_r \in O_{\mathfrak h} & \\
\forall X \in \Lambda^2 (\mathbf R^n),\,
\underset{t \to + \infty}{\lim} (\varphi_{t,i} . \lambda_i)(X) = \lambda_{i+1} (X) & i=0,\ldots, r-1.
\end{cases}
\end{equation}
where each $\varphi_{t,i} \in \mathrm{GL}(n, \mathbf R)$ is continuous with respect to $t$.
When $r=1$, \eqref{eq:sequence-of-metric-degenerations} amounts to $\mu \in \overline{O(\lambda)}^{\, \mathrm{met}}$ and is called a contraction (especially by physicists).
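For instance, writing the action as in \eqref{eq:linearization}, every $\lambda \in \mathcal L_n(\mathbf R)$ contracts to the abelian law through the homotheties $\varphi_t = t \operatorname{id}$: indeed $(\varphi_t.\lambda)(X \wedge Y) = t \lambda(t^{-1} X \wedge t^{-1}Y) = t^{-1}\lambda(X \wedge Y)$, so that $0 \in \overline{O_{\mathfrak g}}^{\,\mathrm{met}}$ for every $\mathfrak g$.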
The author does not know whether the existence of a sequence of contractions as in \eqref{eq:sequence-of-metric-degenerations} is a necessary condition for $\mathfrak g \to_{\mathrm {deg}} \mathfrak h$ to hold.
\begin{example}[Nilpotent Lie algebras]
\label{ex:nilpotent}
Let $\mathfrak n$ be a nilpotent Lie algebra.
Let $\mathfrak n = \oplus_i V_i$ be a linear splitting such that $V_i \oplus C^{i+1} \mathfrak n = C^i \mathfrak n$ for all $i$. For $t \geqslant 0$, let $(\varphi_t)$ be the one parameter subgroup of $\operatorname{GL}(\mathfrak n)$ such that
\begin{align}
\varphi_t(X) = t^i X, & \qquad X \in V_i.
\end{align}
Then, the $V_i$ become a Lie algebra grading on $\varphi_t.\mathfrak n$ in the limit when $t \to + \infty$: $\mathfrak n$ degenerates metrically to the graded Lie algebra $\operatorname{gr} (\mathfrak n)$ associated with the central filtration of $\mathfrak n$, which supports the asymptotic cone of the simply connected group $N$ by \cite{PanCBN}. In particular, $\mathfrak n \to_{\mathrm{deg}} \operatorname{gr}(\mathfrak n)$.
(This description of the law in $\operatorname{gr}(\mathfrak n)$ as a limit is the one given in \cite[\S 2.1]{CantFur}, who prove a generalization of \cite{PanCBN}.)
\end{example}
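Here is a minimal illustration of the mechanism (minimal to the point that $\mathfrak n \simeq \operatorname{gr}(\mathfrak n)$, so the contraction does not even leave the orbit closure for interesting reasons): take $\mathfrak n$ with basis $(X_1, \ldots, X_4)$ and nonzero brackets $[X_1, X_2] = X_3 + X_4$, $[X_1, X_3] = X_4$, with $V_1 = \langle X_1, X_2\rangle$, $V_2 = \langle X_3 \rangle$, $V_3 = \langle X_4 \rangle$. Writing the action as $\varphi_t.\mu = \varphi_t^{-1}\mu(\varphi_t \cdot, \varphi_t \cdot)$, the law $\varphi_t.\mu$ has brackets $[X_1, X_2] = X_3 + t^{-1}X_4$ and $[X_1,X_3] = X_4$, converging to the graded law of the filiform algebra $\mathfrak l_{4,3}$.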
For $\lambda \in \Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n \otimes_{\mathbf R} \mathbf R[[1/t]]$, we denote by $\lambda(t)$ its evaluation at $t$, provided that $t$ is in the convergence domain of every coefficient of $\lambda$, and by $\lambda[1/t^d]$ its monomial of degree $d$. (The choice of $\mathbf R[[1/t]]$ over $\mathbf R[[t]]$ is merely a convention for our convenience.) We also denote by $\lambda(\infty)$ the constant term of $\lambda$. If $\lambda(t) \in \mathcal L_n(\mathbf R)$ for all $t \geqslant 1$, then $\lambda$ is called a formal deformation.
Differentiating \eqref{eq:mu-circ-nu} to express that $\lambda$ is a formal deformation with $\lambda(\infty) = \mu$ yields an infinite system of equations, the first of which after \eqref{eq:mu-circ-nu} being
\begin{align}
\label{eq:deformation-equation}
d'_\mu \lambda[1/t] = 0,
\end{align}
that is, $\lambda[1/t] \in Z^2(\mu, \mu)$.
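Identifying the coefficients of $1/t^2$ in the same way yields, with the notation of \eqref{eq:mu-circ-nu} and up to a sign/normalization convention, $d'_\mu \lambda[1/t^2] + (\lambda[1/t])^2 = 0$: the class of $(\lambda[1/t])^2$ in $H^3(\mu, \mu)$ is thus the first obstruction to continuing a given cocycle $\lambda[1/t]$ into a formal deformation (compare Remark \ref{remark:not-always-integrable}).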
\begin{definition}
Let $\mathfrak g$ be a Lie algebra over $\mathbf R$. Let $\mu \in \mathcal{L}_n(\mathbf R)$ represent $\mathfrak g$, and let $\omega \in H^2(\mathfrak g, \mathfrak g)$ be nonzero.
We say that the formal deformation $\lambda$ integrates the infinitesimal deformation $\omega$ at $\mu$ if $\lambda(\infty) = \mu$, $\lambda$ is convergent on $\mathbf C \setminus \lbrace 0 \rbrace$ and $\lambda[1/t] \in Z^2(\mu, \mu)$ represents $\omega$.
We say that $\omega$ is integrable, resp.\ linearly expandable (following the authors of \cite{AncocheaCampoamor}), if some formal deformation $\lambda$ integrates $\omega$, resp.\ if some formal deformation $\lambda$ integrating $\omega$ has the form $\lambda = \lambda(\infty) + \lambda_1/t$ for some $\lambda_1 \in \Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$.
\end{definition}
In the last definition, we insisted on the cohomology class rather than on the particular cocycle $\lambda[1/t]$ for the following reason.
Two formal deformations $\lambda, \lambda'$ of $\mu$ are called equivalent if $\lambda(t) = \varphi(t). \lambda'(t)$ for some $\varphi \in \operatorname{GL}(\mathbf R[[1/t]])$ with $\varphi(\infty) = 1$. If $\lambda$ and $\lambda'$ are equivalent, then $\lambda[1/t] - \lambda'[1/t] \in B^2(\mu, \mu)$; this is a sharper version of \eqref{eq:linearization}, see e.g.\ the Proposition just before \S 2.5 in \cite{AncocheaCampoamor}.
In view of \eqref{eq:linearization}, \eqref{eq:deformation-equation} and the above, $H^2(\mathfrak g, \mathfrak g)$ encodes the extent to which $\mathfrak g$ can be deformed; one should nevertheless beware that infinitesimal deformations are not always integrable (see Remark \ref{remark:not-always-integrable}).
\subsubsection{Degenerations to $\mathfrak b(n, \mathbf R)$}
Let $\mathfrak{b}(n, \mathbf R)$ denote the maximal completely solvable subalgebra of $\mathfrak {o}(n,1)$, namely $\mathfrak{b}(n, \mathbf R) = \mathbf R^{n-1} \rtimes_1 \mathbf R$, where the adjoint action of $1 \in \mathbf R$ on $\mathbf R^{n-1}$ is by the identity.
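For $n=2$, $\mathfrak b(2, \mathbf R)$ is the affine algebra of Example \ref{exm:affine}; in general, the simply connected group with Lie algebra $\mathfrak b(n, \mathbf R)$ is the $AN$ factor in an Iwasawa decomposition of the isometry group of $\mathbb H^n_{\mathbf R}$, and it acts simply transitively by isometries on $\mathbb H^n_{\mathbf R}$.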
The situation of $\mathfrak{b}(n, \mathbf R)$ with respect to degenerations and deformations is favorable:
\begin{theorem}[After Lauret]
\label{th:Lauret}
Let $\mathfrak g$ be a completely solvable Lie algebra and $n \geqslant 2$ an integer. The following are equivalent:
\begin{enumerate}[{\rm (\ref{th:Lauret}.1)}]
\item
\label{item:metric-deg}
$\mathfrak g$ contracts to $\mathfrak b(n, \mathbf R)$.
\item
\label{item:deg}
$\mathfrak g \to_{\mathrm{deg}} \mathfrak b(n, \mathbf R)$.
\item
\label{item:explicit-met-deg}
$\mathfrak g$ decomposes as $\mathbf R^{n-1} \rtimes_{\nu} \mathbf R$ where $\nu$ is unipotent.
\end{enumerate}
Moreover, under these conditions there exists $\omega \in H^2(\mathfrak{b}(n, \mathbf R), \mathfrak{b}(n, \mathbf R))$ linearly expandable into a formal deformation $\lambda$ such that $\lambda(1) \in O_{\mathfrak g}$ and $\lambda(\infty) \in O_{\mathfrak b(n, \mathbf R)}$.
\end{theorem}
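To illustrate the contraction with $n = 3$: for $\mathfrak g = \mathbf R^2 \rtimes_\nu \mathbf R$ with $\nu = J_2(1)$ (so $[A, X_1] = X_1 + X_2$ and $[A, X_2] = X_2$), the maps $\varphi_t A = A$, $\varphi_t X_i = t^{-i} X_i$ used at the end of the proof below send $[A, X_1]$ to $X_1 + t^{-1} X_2$, and in the limit one recovers the law of $\mathfrak b(3, \mathbf R)$.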
Lauret proved (\ref{th:Lauret}.\ref{item:metric-deg}) $\iff$ (\ref{th:Lauret}.\ref{item:explicit-met-deg}) \cite[Theorem 6.2]{LauretDegenerations} with no a priori assumption on $\mathfrak g$. The core of the proof below uses the same idea. (\cite{LauretDegenerations} additionally used bounds on pinching and \cite{EberleinHeber}, which give a priori constraints on $\mathfrak g$.)
We need a Lemma which is well known; however, we could only find proofs for the metric topology in the literature.
\begin{lemma}
\label{lem:lsc-zariski}
Let $n$ be a positive integer and $0 \leqslant i \leqslant n$.
Then, the following are upper semi-continuous with respect to the Zariski topology on $\mathcal L_n (\mathbf R)$:
\begin{enumerate}[{\rm (a)}]
\item
\label{item:betti-lsc}
The Betti number $b_p (\lambda) = \dim H^p(\lambda, \mathbf R)$, for all $p \geqslant 0$.
\item
\label{item:adjoint-lsc}
The dimension of the space of outer derivations $H^1(\lambda, \lambda) = \operatorname{Der}(\lambda) / \operatorname{InnDer} (\lambda)$.
\item
\label{item:dim-center-lsc}
The dimension of the center $\dim Z(\lambda)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Note that $Z(\lambda) = H^0(\lambda, \lambda)$, so to prove \eqref{item:betti-lsc}, \eqref{item:adjoint-lsc} and \eqref{item:dim-center-lsc} it is actually sufficient to prove that $\lambda \mapsto b_p(\lambda)$ and $\lambda \mapsto \dim H^{p}(\lambda, \lambda)$ are upper semicontinuous on $\mathcal L_n(\mathbf R)$.
We will prove this by a change of basis argument.
Denote by $x_{ij}^k$ the coordinate functions on $\Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$, and
let $\mathcal I$ be the ideal of $\mathbf R[x_{ij}^k]$ generated by the relation \eqref{eq:mu-circ-nu}.
Let $A = \mathbf R[x_{ij}^k] /\mathcal I$.
Then $A$ is a noetherian ring by Hilbert's basis theorem, and $\mathcal L_n(\mathbf R)$ with the Zariski topology is a closed subspace of $\operatorname{Spec}(A)$ with the Zariski topology; all the points in $\mathcal L_n(\mathbf R)$ are maximal ideals.
Consider the graded $A$-modules
\begin{align*}
K & = \Lambda^\bullet (A^n)^\ast, \\
K' & = \Lambda^\bullet (A^n)^\ast \otimes_A {A}^n.
\end{align*}
Then, for every $y_1, y_2 \in \mathbf R[x_{ij}^k]^n$, we set $z = [y_1, y_2]$ to be an element of $\mathbf R[x_{ij}^k]^n$ such that $z(\lambda) = [y_1(\lambda), y_2(\lambda)]$ for all $\lambda \in \Lambda^2 (\mathbf R^n)^\ast \otimes \mathbf R^n$. Note that $z$ only depends on $y_1$ and $y_2$ modulo $\mathcal I^n$, and is itself well-defined only modulo $\mathcal I^n$; thus $[\cdot, \cdot]$ defines an element of $\Lambda^2(A^n)^\ast \otimes A^n$.
The differentials on $K$ and $K'$ are defined as in \eqref{eq:chevalley-eilenberg-trivial} and \eqref{eq:chevalley-eilenberg-adjoint}.
Then $K_\lambda = K \otimes_A A/\lambda$ and $K'_\lambda = K' \otimes_A A/\lambda$, where $A/\lambda$ is the residue field of $A$ at the maximal ideal $\lambda$.
$K$ and $K'$ are flat $A$-modules, because flatness is preserved under taking exterior powers and tensor products over the base ring \cite[Proposition 2.3]{LazardPlat}.
We may now conclude by applying the following \cite[Théorème 7.6.9(i)]{EGAIII2}: if $A$ is noetherian and $K$ is a differential complex of finitely generated flat modules, then for every $p \geqslant 0$, the function $y \mapsto \dim H^p(K \otimes_A k(y))$ is upper semi-continuous on $\operatorname{Spec}(A)$, where $k(y)$ denotes the residue field at $y$.
In particular, it is upper semi-continuous on the closed subspace $\mathcal L_n(\mathbf R)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:Lauret}]
\eqref{item:metric-deg} $\implies$ \eqref{item:deg} is clear.
Assume \eqref{item:deg}.
By Lemma \ref{lem:lsc-zariski}, $b_1(\mathfrak g) \leqslant 1$. If it were zero, then $\mathfrak g$ would be perfect, in particular not solvable; hence
$b_1(\mathfrak g) = 1$, and $\mathfrak g$ splits as a semidirect product
\begin{equation}
[\mathfrak g, \mathfrak g] \oplus \mathbf R A
\end{equation}
where $\operatorname{ad}_A$ is nonsingular in view of the fact that $Z(\mathfrak g) = 0$, again by Lemma \ref{lem:lsc-zariski}. Choosing an adequate representative $\lambda_0$ in $O_{\mathfrak g}$ and an adequate basis, we may as well assume that $[\lambda_0, \lambda_0] = \mathbf R^{n-1} \times \lbrace 0 \rbrace$ and $A = (0^{n-1},1)$.
The coefficients of the characteristic polynomial $P_{\mu, X}$ of $\operatorname{ad}_X: Y \mapsto \mu(X,Y)$ are polynomial functions on $\mathcal L_n(\mathbf R)$, and for every $\lambda_1 \in O(\lambda_0)$ the set of roots of $P_{\lambda_1, X}$ is either a nonzero multiple of that of $P_{\lambda_0, A}$, or $\lbrace 0 \rbrace$ with multiplicity $n$, the latter case occurring exactly when $X \in [\lambda_1, \lambda_1]$.
So this holds for $\mu \in \overline{O(\lambda_0)}$ as well. But for $\mu \in O_{\mathfrak b(n, \mathbf R)}$, this set of roots is always a singleton.
So $\operatorname{ad}_A$ cannot have two distinct eigenvalues; its single eigenvalue is real (as $\mathfrak g$ is completely solvable) and nonzero, and after rescaling $A$ we may assume it is $1$, so that $\nu := \operatorname{ad}_A$ is unipotent. Moreover $[\mathfrak g, \mathfrak g]$ is then abelian: the bracket of two generalized eigenvectors for the eigenvalue $1$ lies in the generalized eigenspace for the eigenvalue $2$, which is trivial. This proves \eqref{item:explicit-met-deg}.
Assume \eqref{item:explicit-met-deg}.
Then $\nu-1$ is nilpotent; let $X_1, \ldots , X_{n-1}$ be a basis of $[\mathfrak g, \mathfrak g]$ in which it appears in lower-triangular Jordan form, $\nu-1 = \sum_i \delta_i X_{i}^\ast \otimes X_{i+1}$ where $\delta_i \in \lbrace 0,1 \rbrace$. One computes that $d(A^\ast \wedge X_i^\ast \otimes X_{i+1}) = 0$ (Lemma \ref{lem:differentials-of-2-cochains-R}; beware that $S$ replaces $A$ there) and that no nonzero linear combination of these cochains is a coboundary (Lemma \ref{lem:differentials-of-1-cochains-R}).
Setting $\mu$ to be the law of $\mathfrak b(n, \mathbf R)$ in the basis $(X_1, \ldots, X_{n-1}, A)$ and
$\omega = A^\ast \wedge \sum_i \delta_i X_i^\ast \otimes X_{i+1}$, we find that $\lambda_0 = \mu + \omega$.
Then $\mu$ is the degeneration of $\lambda_0$ through $(\varphi_t)$, where $\varphi_t A = A$ and $\varphi_t X_i = t^{-i} X_i$ for all $t$ and all $i$.
\end{proof}
\begin{remark}
\label{rem:dejavu}
A contraction to (a deformation of) $\mathfrak b(n, \mathbf R)$ was already implicitly used in the proof of Proposition \ref{prop:Riemannian-computation}; in accordance with \cite{LauretDegenerations}, contractions can be considered as limit points in the space of left-invariant Riemannian metrics over a given group.
\end{remark}
\begin{remark}
\label{remark:not-always-integrable}
We can additionally check that $H^3(\mathfrak b, \mathfrak b)=0$ when $\mathfrak b = \mathfrak b(2, \mathbf R)$, though we will not need this.
This vanishing ensures that the deformation system can be solved and every infinitesimal deformation of $\mathfrak b$ is integrable into a formal deformation {\cite[p.98]{NijenhuisRichardsonStruc}}.
For nilpotent Lie algebras $\mathfrak n$, this will be discussed in more detail in \S\ref{subsec:nilpotent}; on the other hand, one must beware that $H^3(\mathfrak n, \mathfrak n)$ can be large, for instance $\dim H^3(\mathfrak n, \mathfrak n) \geqslant 8$ for all $6$-dimensional nilpotent $\mathfrak n$ \cite[Table 11]{Magnin08}.
\end{remark}
\subsection{Groups $O(u)$-bilipschitz equivalent to $\mathbb H^n_{\mathbf R}$}
\label{subsec:proofA}
We prove here Theorem \ref{th:Tukia-SBE}.
Let us first recall some terminology from \cite{CoTesContracting} and \cite{CCMT}.
\begin{definition}
Let $G$ be a connected Lie group with finitely many components.
$G$ is of rank-one type if it has a maximal normal compact subgroup $W$ such that $G/W$ is isomorphic to a simple Lie group $G_{\mathbf R}$ of real rank one, with $Z(G_{\mathbf R}) = 1$.
\end{definition}
\begin{wrapfigure}[9]{r}[14pt]{0cm}
\begin{tikzcd}
(\ref{th:Tukia-SBE}.\ref{sublinear-characterization}) \arrow[dr, Rightarrow, "\ref{1to3}"]
&
& \\
(\ref{th:Tukia-SBE}.\ref{item:log-characterization}) \arrow[u, Rightarrow, "\ref{2to1}"]
& (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) \arrow[dr, Rightarrow, swap, "G \text{ completely solvable: } \ref{3to5}"] \arrow[l, Rightarrow, "\ref{3to2}"]
& (\ref{th:Tukia-SBE}.\ref{item:degeneration-characterization-real}) \arrow[l, Rightarrow, swap, "\ref{4to3}"] \\
&
& (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia}) \arrow[u, Leftrightarrow, swap, "\ref{4iff5}"]
\end{tikzcd}
\end{wrapfigure}
\subsubsection{{\rm (\ref{th:Tukia-SBE}.\ref{sublinear-characterization})} implies {\rm (\ref{th:Tukia-SBE}.\ref{pinching-characterization})}}
\label{1to3}
Let $G$ be a Lie group with finitely many connected components. Assume that $G$ is $O(u)$-sublinear bilipschitz equivalent to $\mathbb H_{\mathbf R}^n$ for some $n$.
Then all asymptotic cones of $G$ being $\mathbf R$-trees, $G$ is Gromov-hyperbolic.
By Cornulier and Tessera's theorem \cite{CoTesContracting}, $G$ is either of Heintze or rank-one Lie type.
First assume that $G$ is of Heintze type, write $G = (K \times \mathbf R) \ltimes N$ and call $H$ the co-compact normal subgroup $\mathbf R \ltimes N$ so that $G/K$ is simply transitively acted upon by $H$.
By \cite{pallier2019conf}, $\operatorname{Cdim}_{O(u)} \partial_\infty H = \operatorname{Cdim}_{O(u)} \partial_\infty \mathbb H_{\mathbf R}^n = n-1$.
By \cite{cornulier2017sublinear}, $ \operatorname{Topdim} \partial_\infty H = n-1$. So $H$ is metabelian and every eigenvalue of $\alpha$ has real part $1$. By Proposition \ref{prop:Riemannian-computation}, (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) holds, while by \cite[Theorem 1.2]{CornulierCones11}, ({\rm \ref{th:Tukia-SBE}.\ref{item:log-characterization}}) holds.
If $G$ is of rank-one type, then it acts properly co-compactly by isometries on a rank one symmetric space, which can only be $\mathbb H^n_{\mathbf R}$ in view of the equality of conformal dimension and topological dimension of the boundary; especially, ({\rm \ref{th:Tukia-SBE}.\ref{item:log-characterization}}) and (\ref{th:Tukia-SBE}.\ref{pinching-characterization}) hold as well.
\subsubsection{{\rm (\ref{th:Tukia-SBE}.\ref{pinching-characterization})} implies {\rm (\ref{th:Tukia-SBE}.\ref{item:log-characterization})}}
\label{3to2}
Since it acts geometrically on a Gromov-hyperbolic space, $G$ is Gromov-hyperbolic. Again by \cite{CoTesContracting}, it is of Heintze type or of rank-one type. If it is of rank-one type, then it is quasiisometric to a rank one symmetric space $X$; by Pansu's Theorem \ref{th:pansuconf}, $\operatorname{Cdim}(\partial_\infty G) = \operatorname{Topdim}(\partial_\infty G)$, so $X = \mathbb H_{\mathbf R}^n$.
If it is of Heintze type, then it is $O(\log)$-bilipschitz equivalent to a purely real Heintze group.
\subsubsection{{\rm (\ref{th:Tukia-SBE}.\ref{item:log-characterization})} implies {\rm (\ref{th:Tukia-SBE}.\ref{sublinear-characterization})}}
\label{2to1}
$u= \log$ is an admissible function.
\subsubsection{If $G$ is completely solvable then {\rm (\ref{th:Tukia-SBE}.\ref{pinching-characterization})} implies {\rm (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia})}}
\label{3to5}
By \cite{CoTesContracting}, $G$ is of Heintze type, and by Corollary \ref{cor:on-pinching}, $N$ is abelian and all the eigenvalues of $\alpha$ have real part $1$.
\subsubsection{{\rm (\ref{th:Tukia-SBE}.\ref{item:degeneration-characterization-real})} and {\rm (\ref{th:Tukia-SBE}.\ref{item:explicit-characterization-tukia})} are equivalent}
\label{4iff5}
This is our version of Lauret's theorem, Theorem \ref{th:Lauret}.
\subsubsection{{\rm (\ref{th:Tukia-SBE}.\ref{item:degeneration-characterization-real})} implies {\rm (\ref{th:Tukia-SBE}.\ref{pinching-characterization})}}
\label{4to3}
This is a special case of Proposition \ref{prop:Riemannian-computation} where all the eigenvalues of $\operatorname{ad}_A$ are real.
\subsection{From the pinching condition to Higson coronae}
Corollary \ref{cor:sbe-corona} follows by applying {(\ref{th:Tukia-SBE}.\ref{pinching-characterization})} $\implies$ {\rm (\ref{th:Tukia-SBE}.\ref{sublinear-characterization})} together with Proposition \ref{prop:sbe-is-coarse}.
\section{Proof of Theorem \ref{thm:groups-sbe-to-h2c}}
\label{sec:proofE}
\subsection{Pointed sphere}
\label{subsec:pointedSphere}
We will prove the implication (\ref{thm:groups-sbe-to-h2c}.\ref{item:G-SBE-to-H2C}) $\implies$ (\ref{thm:groups-sbe-to-h2c}.\ref{item:G-SBE-to-commable}) in Theorem \ref{thm:groups-sbe-to-h2c} by establishing a baby case of a variant of Cornulier's pointed sphere conjecture \cite[Conjecture 19.104]{CornulierQIHLC}. Precisely, we establish a special case of the conjecture in the setting of sublinear bilipschitz equivalences, rather than the quasiisometries for which it is usually formulated. We denote by $\operatorname{SBE}^{O(u)}(X)$ the group of self $O(u)$-bilipschitz equivalences of the metric space $X$ (modulo the relation of $O(u)$-closeness).
Let us first recall that sublinear bilipschitz equivalences induce homeomorphisms of the compact boundary sphere $\partial_\infty X$ when $X$ is Gromov-hyperbolic \cite{cornulier2017sublinear}.
\begin{lemma}
\label{lem:baby-sphere}
Let $u$ be an admissible function.
Let $S$ be a purely real Heintze group such that $[S,S]$ is abelian, and let $\Omega$ be the unique closed orbit of $\operatorname{SBE}^{O(u)}(S)$ acting by homeomorphisms on $ \partial_\infty S$. The following are equivalent:
\begin{enumerate}[{\rm (\ref{lem:baby-sphere}.1)}]
\item
\label{item:metab-two-distinct-egv}
$\alpha$ has at least two distinct eigenvalues
\item
\label{item:focal-group}
$\Omega$ is reduced to a single point.
\end{enumerate}
\end{lemma}
\begin{proof}
The reasoning is inspired from \cite[6.9 Corollaire]{PansuDimConf}.
Let $\omega$ be the endpoint of an $\mathbf R$-section in $S$, so that $\partial_\infty S \setminus \lbrace \omega \rbrace$ is simply transitively acted upon by $[S,S]$.
Assume \eqref{item:metab-two-distinct-egv} and let $\mathcal F$ be the foliation on $\partial_\infty S \setminus \lbrace \omega \rbrace$ determined by the cosets of $\ker(\alpha - \lambda)$, where $\lambda$ is the minimal eigenvalue of $\alpha$ (since $[S,S]$ is abelian, we may identify it with its Lie algebra).
Then by \cite[Lemma 3.9]{pallier2019conf}, for every sublinear bilipschitz equivalence $f:S \to S$, the boundary map $\partial_\infty f$ preserves $\mathcal{F}$. Now let $F$ be any leaf of $\mathcal F$. Then $\lbrace \omega \rbrace$ can be written both as $\overline{ F} \setminus F$ and as $\overline{(\partial_\infty f) F} \setminus (\partial_\infty f) F$, so that $\partial_\infty f (\omega) = \omega$.
Conversely, if $\alpha$ only has a single eigenvalue, then $S$ is sublinearly bilipschitz equivalent to real hyperbolic space. Since $\operatorname{Isom}(\mathbb H^n_{\mathbf R})$ is transitive on $\partial_\infty \mathbb H^n_{\mathbf R}$, $\operatorname{SBE}^{O(u)}(S)$ is transitive on $\partial_\infty S$.
\end{proof}
\begin{proposition}
\label{prop:SBE-toh2c}
Let $u$ be an admissible function.
Let $S$ be a Heintze group. Assume that $S$ is $O(u)$-bilipschitz equivalent to $\mathbb H_{\mathbf C}^2$. Then the shadow of
$S$ is isomorphic to $\mathbf {Heis} \rtimes_{\alpha} \mathbf R$ where
$\mathbf {Heis}$ is the three-dimensional Heisenberg group and
\[ \alpha = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \qquad \text{or} \qquad \alpha = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \]
in a basis $(X, Y, Z)$ of $\mathfrak {heis}$ such that $[X,Y] = Z$.
\end{proposition}
\begin{proof}
Let $S_0 = N \rtimes_\alpha \mathbf R$ be a semidirect product decomposition of the shadow $S_0$ of $S$, where $N$ is three-dimensional and $\alpha$ is normalized so that its lowest eigenvalue is $1$.
Since $S$ has been assumed $O(u)$-bilipschitz equivalent to $\mathbb H_{\mathbf C}^2$, by Theorem \ref{th:geodim}, $\dim N = \operatorname{asdim}_{\operatorname{AN}} \mathbb H^2_{\mathbf C} - \operatorname{conedim} \mathbb H^2_{\mathbf C} = 3$. So $N$ is isomorphic either to $\mathbf R^3$ or to the $3$-dimensional Heisenberg group.
In the first case, since $\operatorname{Tr}(\alpha) = \operatorname{Cdim}_{O(u)}(S) = 4 > 3$, $\alpha$ has at least two distinct eigenvalues, and by Lemma \ref{lem:baby-sphere}, the unique closed orbit of $\operatorname{SBE}^{O(u)}(S_0)$ acting on $\partial_\infty S_0$ has only one element (namely, $\omega$ from the proof of Lemma \ref{lem:baby-sphere}). This contradicts the transitivity of $\operatorname{SBE}^{O(u)}(\mathbb H^2_{\mathbf C})$ on $\partial_\infty \mathbb H^2_{\mathbf C}$, so this case cannot occur.
Consequently, $N$ is isomorphic to the three-dimensional Heisenberg group. Let $1, \lambda, \mu$ be the eigenvalues of $\alpha$, where $\mu$ corresponds to the eigenvector generating the center of $\mathbf{Heis}$, and $1\leqslant \lambda \leqslant \mu$. Necessarily, $1+\lambda= \mu$ and $1+\lambda + \mu = 4$, so $2 + 2 \lambda = 4$, and then $\lambda = 1$, $\mu = 2$.
We deduce from there that $\alpha$ can only be one of the two derivations in the conclusion.
\end{proof}
\begin{table}[]
\begin{tabular}{|cll|}
\hline
Nilradical & $\operatorname{Jordan}(\alpha)$ & $\mathbb H^n_{\mathbf K}$ \\
\hline
\hline
$\mathbf R^2$ & $\operatorname{diag}(1, \lambda)$ & \\
\hline
$\mathbf R^2$ & $\operatorname{diag}(1,1)$
& $\mathbb{H}_{\mathbf R}^3$ \\
$\mathbf R^2$ & $J_2(1)$
& \\
\hline
$\mathbf R^3$ & $\operatorname{diag}(1,\lambda,\lambda)$
& \\
$\mathbf R^3$ & $\operatorname{diag}(1,J_2(\lambda))$
& \\
\hline
$\mathbf R^3$ & $\operatorname{diag}(1,1,\lambda)$
& \\
$\mathbf R^3$ & $\operatorname{diag}(J_2(1),\lambda)$
& \\
\hline
\end{tabular}
\begin{tabular}{|cll|}
\hline
Nilradical & $\operatorname{Jordan}(\alpha)$ & $\mathbb H^n_{\mathbf K}$ \\
\hline
\hline
$\mathbf R^3$ &
$\operatorname{diag}(1,1,1)$
& $\mathbb{H}_{\mathbf R}^4$ \\
$\mathbf R^3$ &
$\operatorname{diag}(1,J_2(1))$
& \\
$\mathbf R^3$ &
$J_3(1)$
& \\
\hline
$\mathbf R^3$ & $\operatorname{diag}(1, \lambda, \mu)$ & \\
\hdashline
$\mathbf {Heis}_3$ & $\operatorname{diag}(1,\lambda, 1+\lambda)$ & \\
\hline
$\mathbf {Heis}_3$ & $\operatorname{diag}(1,1, 2)$ & $\mathbb{H}_{\mathbf C}^2$ \\
$\mathbf {Heis}_3$ & $\operatorname{diag}(J_2(1), 2)$ & \\
\hline
\end{tabular}
\vskip 10pt
\caption{Purely real Heintze groups of dimension $3$ or $4$, with parameters $1 <\lambda < \mu$. The plain horizontal lines denote the separations between $O(\log)$-bilipschitz equivalence classes that can be deduced from \cite{pallier2019conf} and Theorem \ref{thm:groups-sbe-to-h2c}. The dashed line indicates a separation that remains unknown when $\mu = 1+\lambda$.
The isomorphism type of $N \rtimes_{\alpha} \mathbf R$ is generally not determined by $N$ and $\operatorname{Jordan}(\alpha)$ alone; see the $6$-dimensional example after Theorem 1.3 in \cite{CarrascoSequeira}.
}
\label{tab:SBE-Heintze-4}
\end{table}
\begin{proof}[Proof of {\rm (\ref{thm:groups-sbe-to-h2c}.\ref{item:G-SBE-to-H2C}) $\implies$ (\ref{thm:groups-sbe-to-h2c}.\ref{item:G-SBE-to-commable})}]
Let $G$ be as in the statement of Theorem \ref{thm:groups-sbe-to-h2c}, namely $G$ is a connected Lie group sublinear bilipschitz equivalent to $\mathbb H^2_{\mathbf C}$. Then $G$ is commable to a completely solvable group $G_0$ \cite[Lemma 6.7]{CornulierDimCone}. Since $G_0$ is Gromov-hyperbolic, by \cite{CoTesContracting} it is a purely real Heintze group. We may then apply Proposition \ref{prop:SBE-toh2c} to $G_0$: in the first case, where $\alpha$ is diagonalizable, $G_0$ (hence $G$) is commable to $\mathrm{SU}(2,1)$; in the second case, it is commable to $S'$.
\end{proof}
Let us mention an application to the quasiisometry classification of Heintze groups.
The result below also follows from \cite[Theorem A]{KiviojaThesis} which appeared during the writing of this paper.
\begin{corollary}
\label{cor:qi}
The groups $S'$ and
$S'' = \mathbf R^3 \rtimes_\alpha \mathbf R$ where
\[ \alpha = \operatorname{diag}(J_2(1), 2) = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix} \]
are not quasiisometric.
\end{corollary}
Indeed, if $S'$ and $S''$ were quasiisometric, they would be $O(\log)$-bilipschitz equivalent. But $S'$ is $O(\log)$-bilipschitz equivalent to $\mathbb H^2_{\mathbf C}$, whereas $S''$ is not.
See Table \ref{tab:SBE-Heintze-4} for the Heintze groups of dimension at most $4$ and the current knowledge on their $O(\log)$-bilipschitz classification (their quasiisometry classification is known and reduces to isomorphism, see \cite[Theorem C]{KiviojaThesis}).
\begin{remark}
\label{rem:difficulty-extendE}
We can start the same reasoning with $X = \mathbb H_{\mathbf C}^n$, $n>2$. By conformal dimension, any purely real Heintze group $S$ that is $O(u)$-bilipschitz equivalent to $X$ has $[S,S]$ isomorphic to $\mathbf {Heis}^{2k-1} \times \mathbf R^{2(n-k)}$ for some $k \in \lbrace 1, \ldots, n \rbrace$, where $\mathbf {Heis}^{2k-1}$ denotes the $(2k-1)$-dimensional Heisenberg group for $k \geqslant 2$, and $\mathbf{Heis}^1 = \mathbf R$.
Otherwise said, using the amalgam notation (Definition \ref{def:amalgam})
\[ \mathfrak s = \mathfrak b(k, \mathbf C)\; \sharp \; \mathfrak b(2(n-k)+1, \mathbf R), \]
where we recall that $\mathfrak b(k, \mathbf C)$ is the maximal completely solvable subalgebra of $\mathfrak u(k, 1)$.
But we are only able to prove the pointed sphere conjecture for $S$ when $k=1$: for $k\geqslant 2$ the invariant foliation in $\partial_\infty S$ provided by \cite[Lemma 3.9]{pallier2019conf} becomes a single leaf.
The same reasoning also falls short of characterizing the triangulable groups $S$ that are $O(u)$-bilipschitz equivalent to $X = \mathbb{H}_{\mathbf H}^2$, for it leaves open the possibility that the Lie algebra of their shadow is
\begin{align*}
\mathfrak s_0 \in & \left\{ \mathfrak b(5, \mathbf R) \; \sharp \; 2 \mathfrak b(4, \mathbf R), \mathfrak b(2, \mathbf C) \; \sharp \; \mathfrak b(2, \mathbf R) \; \sharp \; 2 \mathfrak b(3, \mathbf R), \mathfrak b(3, \mathbf C)\; \sharp \; 2 \mathfrak b(2, \mathbf R) , \right. \\
& \left.
\mathfrak n_{6} \rtimes_{\operatorname{Carnot}} \mathbf R \; \sharp \; 2 \mathfrak b(2, \mathbf R) ,
\mathfrak n_7 \rtimes_{\operatorname{Carnot}} \mathbf R, \mathfrak b(4, \mathbf R) \; \sharp \; \mathfrak l_{4,3} \rtimes_{\mathrm{Carnot}} \mathbf R, \mathfrak b(2, \mathbf H) \right\}
\end{align*}
where $\mathfrak l_{4,3}$ denotes the $4$-dimensional filiform algebra, $\mathfrak n_6$ is among $\mathfrak l_{6,8}$, $\mathfrak l_{6,22}(-1)$ and $\mathfrak l_{6,22}(0)$ (See \cite{deGraafclass} for structure constants), $\mathfrak n_7$ is one among the real forms of the $4$ complex nilpotent algebras denoted $\mathfrak g_{7,3.12}$ ($2$ real forms), $\mathfrak g_{7,3.24}$, $\mathfrak g_{7,4.1}$ ($2$ real forms) or $\mathfrak g_{7,4.2}$ in \cite{magnin2007adjoint}. Using \cite{pallier2019conf} one can only deduce the pointed sphere conjecture (Lemma \ref{lem:baby-sphere}) for the first $6$ out of these $14$ Lie algebras, while it is expected that it holds for all but the last one.
\end{remark}
\subsection{Degenerations to $\mathfrak b(2, \mathbf C)$}
We prove here a variant of Lauret's theorem \ref{th:Lauret}.
\begin{lemma}
\label{lem:deg-to-bnC}
Let $\mathfrak g$ be a completely solvable Lie algebra of dimension $4$.
The following are equivalent:
\begin{enumerate}[{\rm (\ref{lem:deg-to-bnC}.1)}]
\item
\label{item:deg-to-bnC-1}
$\mathfrak g$ contracts to $\mathfrak b(2, \mathbf C)$
\item
\label{item:deg-to-bnC-2}
$\mathfrak g \longrightarrow_{\mathrm{deg}} \mathfrak b(2, \mathbf C)$
\item
\label{item:deg-to-bnC-3}
$\mathfrak g$ decomposes as $\mathfrak [\mathfrak g, \mathfrak g] \oplus \mathbf R A$, where $[\mathfrak g, \mathfrak g] = \mathfrak{heis}$ and $\operatorname{ad}_A$ is unipotent on $[\mathfrak g, \mathfrak g]/D^3 \mathfrak g$.
\end{enumerate}
\end{lemma}
\begin{proof}
As in the proof of Theorem \ref{th:Lauret}, the core of the proof is that (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-2}) implies (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-3}), so let us focus on this part. Assume that $\mathfrak g \longrightarrow_{\mathrm{deg}} \mathfrak b(2, \mathbf C)$. Then $b_1(\mathfrak g) = 1$ by Lemma \ref{lem:lsc-zariski}. The ideal $\mathfrak n = [\mathfrak g, \mathfrak g]$ is nilpotent by Lie's theorem, and $\mathfrak g = \mathfrak n \rtimes_\beta \mathbf R$ for some nonsingular $\beta \in \operatorname{Der}(\mathfrak n)$. Without loss of generality we can assume that $\operatorname{Sp}(\beta) = \lbrace 1, 2 \rbrace$, and that $2$ has multiplicity $1$. So the nilpotency class of $\mathfrak n$ is at most $2$ and $\dim [\mathfrak n, \mathfrak n] \leqslant 1$; thus $\mathfrak n$ is either $\mathbf R^3$ or $\mathfrak {heis}$, and $\mathfrak g$ is among the four algebras
\[ \mathfrak b(2, \mathbf C), \mathfrak b(3, \mathbf R) \; \sharp \; 2 \mathfrak b(2, \mathbf R) , \mathfrak s', \mathfrak s'', \]
where we recall that $\mathfrak s' = \mathfrak{heis} \rtimes_{\alpha} \mathbf R$ with $\alpha = \operatorname{diag}(J_2(1), 2)$ in the basis $(X,Y,Z)$, and $\mathfrak s''$ is the Lie algebra of $S''$ defined in Corollary \ref{cor:qi}. Observe that
\begin{equation*}
\dim H^1(\mathfrak g, \mathfrak g) =
\begin{cases}
2 & \mathfrak g = \mathfrak b(2, \mathbf C) \; \text{by Proposition} \; {\rm \ref{prop:first-adjoint-C}} \\
4 & \mathfrak g = \mathfrak s'' \; \text{by Proposition} \; {\rm \ref{prop:first-adjoint-S''}}
\end{cases}
\end{equation*}
Note that $\mathfrak s''$ degenerates to $\mathfrak b(3, \mathbf R) \; \sharp \; 2 \mathfrak b(2, \mathbf R)$.
Hence by Lemma \ref{lem:lsc-zariski}, $\dim H^1(\mathfrak g, \mathfrak g) \geqslant 4$ for $\mathfrak g = \mathfrak b(3, \mathbf R) \; \sharp \; 2 \mathfrak b(2, \mathbf R)$, which, again by Lemma \ref{lem:lsc-zariski}, forbids a degeneration of the latter algebra to $\mathfrak b(2, \mathbf C)$ (and the same upper semicontinuity directly forbids $\mathfrak s'' \longrightarrow_{\mathrm{deg}} \mathfrak b(2, \mathbf C)$, since $4 > 2$).
This establishes (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-2}) $\implies$ (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-3}).
Finally let us prove that $\mathfrak s' \longrightarrow_{\mathrm{deg}} \mathfrak b(2, \mathbf C)$. Take
\[
\begin{array}{cccc}
\varphi_t X = X & \varphi_t Y = e^{-t} Y &
\varphi_t Z = e^{-t} Z & \varphi_t A = A.
\end{array}
\]
Then $\mathfrak s'$ contracts\footnote{This was recorded by Burde and Steinhoff in their list of degenerations between $4$-dimensional complex Lie algebras: $\mathfrak s' \otimes \mathbf C$ is $\mathfrak g(1/64, 5/16)$ in \cite{BurdeSteinhoff} and $\mathfrak s' \otimes \mathbf C \longrightarrow_{\mathrm{deg}} \mathfrak b(2, \mathbf C) \otimes \mathbf C$ is the case $\gamma = 2$ in Table IV p.\ 736 op.\ cit.} to $\mathfrak b(2, \mathbf C)$ through $(\varphi_t)$.
This establishes (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-3}) $\implies$ (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-1}).
\end{proof}
The author expects that Lemma \ref{lem:deg-to-bnC} should hold with $\mathfrak b(2, \mathbf C)$ replaced by $\mathfrak b(n, \mathbf C)$ and $\mathfrak{heis}$ by $\mathfrak{heis}^{2n-1}$ in (\ref{lem:deg-to-bnC}.\ref{item:deg-to-bnC-3}), though generalizing Proposition {\rm \ref{prop:first-adjoint-S''}} to higher-dimensional algebras involves some computational hurdles.
The greatest theoretical difficulty in generalizing Theorem \ref{thm:groups-sbe-to-h2c} (if it holds) from $\mathbb H_{\mathbf C}^2$ to $\mathbb H_{\mathbf C}^n$ with $n>2$ seems to lie on the analytical side, cf. Remark \ref{rem:difficulty-extendE} above.
\section{Some remarks on spaces other than $\mathbb H^n_{\mathbf R}$ and $\mathbb H^n_{\mathbf C}$}
\subsection{Connected Lie groups}
\label{subsec:nilpotent}
In the attempts to relate the large-scale geometry of pairs of connected Lie groups, several sufficient criteria have been found (e.g.\ for quasiisometry in \cite{BreuillardLarge}, \cite{CornulierCones11}, for sharing simply transitive Riemannian models in \cite{cowling2021homogeneous}, and for $O(u)$-bilipschitz equivalence in \cite{CornulierCones11}).
These criteria consist for a large part\footnote{Additional subtlety comes from the ``medium-scale'' topology of the groups when it is nontrivial.} in going back to the Lie algebra and simplifying its structure.
These criteria can sometimes be formulated using deformations and degenerations of Lie algebras.
\begin{itemize}
\item
Pansu's theorem on asymptotic cones: those are degenerations.
\item
Cornulier's theorem on asymptotic cones: when the exponential radical is abelian, those are degenerations. (This is the case for the Heintze groups considered in Section \ref{sec:proofB}.)
\item
Twistings (or normal modifications) introduced by \cite{AlekHMN} and \cite{GordonWilson} and studied in relation to large-scale geometry in \cite{cowling2021homogeneous}: those are deformations.
\end{itemize}
\subsubsection{Cornulier's Theorem}
Let $\mathfrak g$ be a completely solvable Lie algebra. Let $\mathfrak h$ be a Cartan subalgebra (maximal nilpotent self-normalizing in $\mathfrak g$), and let $\mathfrak r = \liminf_i C^i \mathfrak g$ be the limit of the descending central series of $\mathfrak g$.
Decompose the adjoint representation of $\mathfrak h$ in $\mathfrak r$ into primary components,
\[ \mathfrak r = \bigoplus_{\omega \in \operatorname{Hom}(\mathfrak h, \mathbf R)} \mathfrak r^\omega = \bigoplus_{\omega \in \operatorname{Hom}(\mathfrak h, \mathbf R)} \limsup_{i \to + \infty} \ker (\alpha - \omega)^i \]
where $\alpha$ is the structural morphism $\mathfrak h \to \operatorname{Der}(\mathfrak r)$.
Note that since $\mathfrak h$ is nilpotent, its ideal $\mathfrak w = \mathfrak h \cap \mathfrak r$ lies within $\mathfrak r^0$.
So the semisimple part $\delta$ of $\alpha$ factors through $\pi: \mathfrak h \to \mathfrak h / \mathfrak w$, and the resulting $\mathfrak h/\mathfrak w$-module decomposes as
\begin{equation}
\label{eq:decomposition-of-exprad}
\mathfrak r = \mathfrak r^0 \oplus \bigoplus_{\underline \omega \in \operatorname{Hom}(\mathfrak h/\mathfrak w, \mathbf R), \, \underline \omega \neq 0} \mathfrak r^{\underline \omega} =
\mathfrak r^0 \oplus \bigoplus_{\underline \omega \neq 0} \ker(\underline \delta - \underline \omega)
\end{equation}
where $\delta = \underline \delta \circ \pi$ and $\omega = \underline \omega \circ \pi$.
There is a Lie algebra homomorphism $\underline \delta_\infty : \operatorname{gr}(\mathfrak h /\mathfrak w) \to \operatorname{Der}(\mathfrak r)$ and the following diagram:
\[ \begin{tikzcd}
\mathfrak h \arrow[r, swap,"\pi"] \arrow[bend left=30, rrrd, "\delta"] & \mathfrak h / \mathfrak w \arrow[rd] \arrow[bend left=0, rrd, "\underline \delta"] & & \\
& & \mathfrak h/ [\mathfrak h, \mathfrak h] \arrow[r] & \operatorname{Der}(\mathfrak r). \\
\operatorname{gr} (\mathfrak h) \arrow[r] & \operatorname{gr}(\mathfrak h / \mathfrak w) \arrow[ur] \arrow[urr, swap, "\underline \delta_\infty"]
\end{tikzcd} \]
\begin{theorem}[Cornulier {\cite{CornulierCones11}}]
\label{th:cornulier-red}
Let $\mathfrak g$ be a completely solvable Lie algebra.
With notation as above, define
$\mathfrak g_1 = \mathfrak r \rtimes_\delta (\mathfrak h /\mathfrak w)$ and $\mathfrak g_\infty$ as $\mathfrak r \rtimes_{\underline \delta_\infty} \operatorname{gr}(\mathfrak h/\mathfrak w)$.
Let $G$, $G_1$, $G_\infty$ be simply connected with Lie algebras $\mathfrak g$, $\mathfrak g_1$, $\mathfrak g_\infty$ respectively.
Then
\begin{enumerate}[{\rm (a)}]
\item
\label{item:cornulier-def}
$G$ and $G_1$ are $O(\log)$-bilipschitz equivalent.
\item
\label{item:pansu-goodman-def}
If $C^{s+1} \mathfrak h = 0$, then $G_1$ and $G_\infty$ are $O(r^{1-1/s})$-bilipschitz equivalent.
\end{enumerate}
\end{theorem}
\begin{proposition}
\label{prop:cornulier-and-deg}
Let $\mathfrak g$ be a completely solvable Lie algebra.
Assume that $\mathfrak r = \liminf C^i \mathfrak g$ is abelian.
Let $\mathfrak g_1$, $\mathfrak g_\infty$ be as in Theorem \ref{th:cornulier-red}.
Then
\begin{equation}
\label{eq:cornulier-degenerations}
\mathfrak g \longrightarrow_{\mathrm{deg}} \mathfrak g_1 \longrightarrow_{\mathrm{deg}} \mathfrak g_\infty.
\end{equation}
\end{proposition}
We already encountered examples of this:
\begin{itemize}
\item
When $\mathfrak g$ is nilpotent, the right degeneration in \eqref{eq:cornulier-degenerations} is Example \ref{ex:nilpotent}. Note that $\mathfrak r = 0$ in this case.
\item
When $\mathfrak g = [\mathfrak g, \mathfrak g] \oplus \mathbf RA$ and $\operatorname{ad}_A$ is unipotent, the left degeneration in \eqref{eq:cornulier-degenerations} is the contraction occurring in Theorem \ref{th:Lauret} \eqref{item:deg}. Here $\mathfrak r$ is abelian and has codimension $1$.
\end{itemize}
\begin{proof}
Start with the decomposition \eqref{eq:decomposition-of-exprad}.
Decompose further $\mathfrak r$ into $\mathfrak r^0$ and a direct sum of subspaces $U_i$ such that
\begin{equation}
\bigoplus_{j \leqslant i} U_j = \bigoplus_{\underline \omega \neq 0} \ker (\alpha - \omega)^i
\end{equation}
Since $\mathfrak h$ is nilpotent, we have that $\mathfrak w = \mathfrak r \cap \mathfrak h \subseteq \mathfrak r^0$.
Decompose the underlying vector space $\operatorname{Vect}(\mathfrak g)$ of $\mathfrak g$ into a direct sum
\begin{align}
\mathfrak g & = \bigoplus_{i \geqslant 1} U_i \oplus \mathfrak r^0 \oplus \mathcal H
\end{align}
where $\mathcal H$ is a linear subspace of $\operatorname{Vect}(\mathfrak g)$ representing $\mathfrak h/\mathfrak w$.
Denote by $\mu$, resp. $\mu_1$, resp. $\mu_\infty$ the brackets of the three laws on $\operatorname{Vect}(\mathfrak g)$.
Set $\varphi_t(u) = t^{n-i} u$ for any $u \in U_i$.
Then for all $h \in \mathcal H$ and $u \in U_i \cap \mathfrak r^\omega$,
\begin{align*}
\varphi_t . \mu(h,u) = \varphi_t^{-1} \mu( h, t^{n-i}u)
& =\varphi_t^{-1} t^{n-i} (\omega(h) u + v) \quad \text{where} \quad v \in \bigoplus_{j \leqslant i-1} U_j \\
& = \omega(h) u + t^{n-i} \varphi_t^{-1} v \\
& = \omega(h) u + O(t^{-1}),
\end{align*}
since $\varphi_t^{-1}$ scales $U_j$ by $t^{-(n-j)} \leqslant t^{-(n-i+1)}$ for $j \leqslant i-1$; so $\mu$ contracts to $\mu_1$ through $\varphi$.
\end{proof}
\begin{remark}
\label{rem:degeneration-perturbs-brackets}
We do not know whether Proposition \ref{prop:cornulier-and-deg} holds in general, because the contraction used in the proof perturbs, in general, the brackets within $\mathfrak r$. We know of no obstruction of the kind expressed in Lemma \ref{lem:lsc-zariski} to a degeneration from $\mathfrak g$ to $\mathfrak g_1$.
\end{remark}
A question we would like to raise, in view of Remark \ref{rem:degeneration-perturbs-brackets} in particular, is whether the group $R = \exp(\mathfrak r)$ is a large-scale invariant (if the completely solvable $G$ and $G'$ are $O(u)$-bilipschitz equivalent, does it hold that $\liminf C^i G \simeq \liminf C^i G'$?). This appears quite difficult to determine in general, because this subgroup is exponentially distorted and becomes totally disconnected in the asymptotic cones \cite{AsInv}. Nevertheless, it holds by Cornulier's formula \eqref{eq:cornlier-formula} and Theorem \ref{th:geodim} that the dimension loss
\begin{equation}
\label{eq:dydak-higes-solvable}
\dim R = \operatorname{geodim}(G) - \operatorname{conedim}(G)
\end{equation}
is indeed a $o(r)$-bilipschitz invariant.
When $G$ is of Heintze type, the $o(r)$-bilipschitz invariance of \eqref{eq:dydak-higes-solvable} is materialized in the Gromov boundary; note also that the quasiisometry class of $R$ is a quasiisometry invariant of $G$ \cite[Theorem A]{KiviojaThesis}; but we have no asymptotic invariant in general.
We also note that the nonnegativity $\operatorname{asdim}_{\mathrm{AN}}(X) - \operatorname{conedim} X \geqslant 0$ holds more generally, by a result of Dydak and Higes \cite{DydakHiges}.
\subsubsection{Shadows and deformations}
Let $\mathfrak g_0$ be a completely solvable Lie algebra.
We call a torus any abelian subalgebra of $\operatorname{Der}(\mathfrak g_0)$ consisting of semisimple derivations.
A torus $\mathfrak t$ is compactly embedded if every $T \in \mathfrak t$ has purely imaginary spectrum.
Maximal tori are conjugate.
\begin{definition}[Special case of {\cite[2.2]{GordonWilson}}]
Let $\mathfrak t$ be a maximal compactly embedded torus.
A modification\footnote{Modification is a more general notion; we only consider modifications of completely solvable Lie algebras for our purposes in the present paper.} of $\mathfrak g_0$ is a subalgebra $\mathfrak g$ of $\mathfrak g_0 \rtimes \mathfrak t$ transversal to $\mathfrak t$.
We call $\mathfrak g_0$ the shadow of $\mathfrak g$.
\end{definition}
The modification $\mathfrak g$ is the graph of a linear map $\tau : \mathfrak g_0 \to \mathfrak t$, called the modification map: for $X \in \mathfrak g_0$, $\tau(X)$ is the only $T \in \mathfrak t$ such that $X+T \in \mathfrak g$. Note that $\mathfrak t$ being abelian, $[\mathfrak g, \mathfrak g] \subset \mathfrak g_0$.
\begin{definition}
Let $\mathfrak g$, $\mathfrak g_0$ and $\tau$ be as above.
We say that $\mathfrak g$ is a twisting (and $\tau$ a twisting map) if in addition $[\mathfrak g, \tau(\mathfrak g_0)] \subseteq \mathfrak g$.
\end{definition}
If $\mathfrak g_0$ is nilpotent, all its modifications are twistings \cite{GordonWilson}.
Early works on modifications (\cite{AlekHMN}, \cite{GordonWilson}) were concerned with the problem of finding adequate data for the classification of solvmanifolds.
Modifications have attracted attention more recently because, if $\mathfrak g$ is a modification of $\mathfrak g_0$, then $G$, $G_0$ and $G_0 \rtimes T$ (where $T$ is the compact torus of $\operatorname{Aut}(G_0)$ with Lie algebra $\mathfrak t$) share a common Riemannian model; in particular they are pairwise quasiisometric (\cite{CornulierDimCone}, \cite{cowling2021homogeneous}).
\begin{proposition}
Let $\mathfrak g_0$, $\mathfrak t$, $\mathfrak g$ and $\tau$ be as above. Assume that $\mathfrak g$ is a twisting.
Define $\omega_\tau (X \wedge Y) = [\tau(X), Y] + [X, \tau(Y)] = d\tau(X \wedge Y) + \tau[X,Y]$ for $X, Y \in \mathfrak g_0$. Then,
\begin{enumerate}[{\rm (1)}]
\item
\label{item:omega-phi-cocycle}
$\omega_\tau \in Z^2(\mathfrak g_0, \mathfrak g_0)$, where $\mathfrak g_0$ acts in $\mathfrak g_0$ through the adjoint representation.
\item
\label{item:omega-phi-deformation}
$[\omega_\tau]$ is a linearly expandable infinitesimal deformation of $\mathfrak g_0$. The associated formal deformation goes through $O_{\mathfrak g}$.
\end{enumerate}
\end{proposition}
\begin{proof}
\eqref{item:omega-phi-cocycle}
By definition,
\begin{align*}
d \omega_\tau (X \wedge Y \wedge Z) & = [X, [\tau (Y), Z] + [Y, \tau (Z)]] - [Y, [\tau(X), Z] + [X, \tau(Z)]] \\
& \quad + [Z, [\tau(X), Y] + [X, \tau(Y)]] - [ \tau[X,Y], Z ] - [[X,Y], \tau(Z)] \\
& \quad + [ \tau[X,Z], Y ] + [[X,Z], \tau(Y)]- [ \tau[Y,Z], X ] - [[Y,Z], \tau(X)] \\
& =
[\tau [X,Z], Y] - [\tau[Y,Z], X] - [[Y, Z], \tau(X)]
\end{align*}
where we used the Jacobi identity in $\mathfrak g_0 \rtimes \mathfrak t$ three times.
If $\mathfrak g$ is a twisting, then $\tau$ is a homomorphism \cite{GordonWilson}, hence the remaining terms cancel after applying the Jacobi identity again.
\eqref{item:omega-phi-deformation}
Set $\mu_0 \in O_{\mathfrak g_0}$ and put $\lambda = \mu_0 + \omega_\tau /t$. Then $\lambda(1) \in O_\mathfrak g$.
\end{proof}
Beware that it is not true in general that a twisting $\mathfrak g$ degenerates to its shadow $\mathfrak g_0$. Here is an example we already encountered.
\begin{example}[Solvable example]
\label{ex:twisting-non-deg}
Let $\mathfrak g_0 = \mathfrak b(3, \mathbf R)$, with basis $(X_1, X_2, T)$ and brackets
\begin{equation}
[X_1, X_2] = 0, \; [T, X_1] = X_1,\; [T,X_2] = X_2.
\end{equation}
Let $(dx_1, dx_2, dt)$ be the dual basis.
Then $H^2(\mathfrak g_0, \mathfrak g_0)$ is $3$-dimensional and contains the classes $\omega_1 = [dt \wedge dx_1 \otimes X_2]$ and $\omega_2 = [dt \wedge dx_2 \otimes X_1]$.
$\omega_1$ and $\omega_2$ are linearly expandable into degenerations,
but $\omega_1 - \omega_2$ is linearly expandable into a family of twistings that are not degenerations. See Appendix \ref{sec:adjoint-computations} for a more general computation.
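Explicitly (a direct computation, with the convention $(dt \wedge dx_i)(T \wedge X_i) = 1$): the linear expansion $\lambda(t) = \mu_0 + (\omega_1 - \omega_2)/t$ at $\mu_0 \in O_{\mathfrak g_0}$ has brackets $[T, X_1] = X_1 + t^{-1}X_2$, $[T, X_2] = X_2 - t^{-1}X_1$ and $[X_1, X_2] = 0$. For finite $t$, $\operatorname{ad}_T$ has the non-real eigenvalues $1 \pm i/t$, so the $\lambda(t)$ are pairwise non-isomorphic and lie outside $O_{\mathfrak g_0}$; the spectral argument from the proof of Theorem \ref{th:Lauret} shows moreover that $\mathfrak b(3, \mathbf R) \notin \overline{O(\lambda(t))}$ for any finite $t$.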
\end{example}
If $\mathfrak h$ is a graded Lie algebra and $\mu \in O_{\mathfrak h}$, the groups $H^2(\mu, \mu)$ are naturally graded.
This is the case, for instance, when $\mathfrak h$ is Carnot-graded.
\begin{figure}
\centering
\begin{tikzpicture}[line cap=round,line join=round,>=angle 45,x=0.6cm,y=0.6cm]
\clip(-11.55,0) rectangle (13.36,16.33);
\draw [shift={(-5.80,-32.77)}] plot[domain=1.22:1.63,variable=\t]({1*34.84*cos(\t r)},{1*34.84*sin(\t r)});
\draw [shift={(-3.82,-24.77)}] plot[domain=1.22:1.63,variable=\t]({34.84*cos(\t r)},{1*34.84*sin(\t r)});
\draw [shift={(20.66,-0.91)}] plot[domain=2.75:3.04,variable=\t]({1*28.81*cos(\t r)},{28.81*sin(\t r)});
\draw [shift={(34.67,-2.92)}] plot[domain=2.75:3.04,variable=\t]({1*28.81*cos(\t r)},{28.81*sin(\t r)});
\draw [->] (0,6) -- (-2,14);
\draw (0,6)-- (3.4,9.31);
\draw (0,6)-- (6.35,2.43);
\draw (0,6)-- (7.28,6.03);
\draw (-6.18,14.96) node[anchor=north west] {$\mathrm{weight} <0$};
\draw (2.47,10.8) node[anchor=north west] {$ \mathrm{weight}\; 1 $};
\draw (6.39,2.57) node[anchor=north west] {\parbox{7.71 cm}{$ \mathrm{weight}\; \\ 2 $}};
\draw (-0.93,5.91) node[anchor=north west] {$ \mu $};
\draw (-0.46,9.28) node[anchor=north west] {$ \mu +k \xi_1 $};
\draw (1.73,4.07) node[anchor=north west] {$ \mu + k \xi_2 $};
\draw (2.81,6.04) node[anchor=north west] {$ \mu + k(\xi_1 + \xi_2) $};
\draw (-4.57,12.01) node[anchor=north west] {$ \mu + \omega/t $};
\draw (-5.76,4.08)-- (4.43,14.01);
\draw (2.07,12.01) node[anchor=north west] {$ \mu + \omega + k \xi_1 $};
\draw [->] (0,6) -- (3.4,9.31);
\draw [->] (0,6) -- (6.35,2.43);
\draw (-5.39,3.03) node[anchor=north west] {$ \mathcal N_6(\mathbf R) $};
\draw (-0.61,15.01) node[anchor=north west] {$ \mathcal L_6(\mathbf R) $};
\draw [shift={(-5.4,1.68)},line width=1.6pt] plot[domain=0.23:1.44,variable=\t]({1*6.92*cos(\t r)+0*6.92*sin(\t r)},{0*6.92*cos(\t r)+1*6.92*sin(\t r)});
\draw (-6.36,9.35) node[anchor=north west] {$ O(\mu) $};
\draw [shift={(0.08,12.55)},dash pattern=on 3pt off 3pt] plot[domain=4.7:5.33,variable=\t]({1*6.55*cos(\t r)+0*6.55*sin(\t r)},{0*6.55*cos(\t r)+1*6.55*sin(\t r)});
\draw [shift={(-0.08,-1.06)},dash pattern=on 3pt off 3pt] plot[domain=0.84:1.56,variable=\t]({1*7.06*cos(\t r)+0*7.06*sin(\t r)},{0*7.06*cos(\t r)+1*7.06*sin(\t r)});
\draw (-3.48,9.86) node[anchor=north west] {$ O(\mathfrak l_{6,6}) $};
\draw (2.74,2.06) node[anchor=north west] {$ O(\mathfrak l_{6,11}) $};
\draw (4.15,7.34) node[anchor=north west] {$ O(\mathfrak l_{6,12}) $};
\begin{scriptsize}
\fill [color=black] (-5.82,-32.77) circle (1.5pt);
\fill [color=black] (0,6) circle (1.5pt);
\fill [color=black] (2.14,8.09) circle (1.5pt);
\fill [color=black] (3.97,3.77) circle (1.5pt);
\fill [color=black] (4.27,6.02) circle (1.5pt);
\fill [color=black] (-1.32,11.27) circle (1.5pt);
\fill [color=black] (1.39,11.04) circle (1.5pt);
\end{scriptsize}
\end{tikzpicture}
\caption{Sketch of $\mathcal N_6(\mathbf R) \subseteq \mathcal L_6(\mathbf R)$ around the Carnot Lie algebra from Example \ref{exm:L67}, and some deformations.}
\label{fig:N6L6}
\end{figure}
\begin{example}[A nilpotent example]
\label{exm:L67}
Let $G_0$ be the simply connected $6$-dimensional Lie group whose Lie algebra $\mathfrak g_0$ has basis $(X_1, \ldots , X_6)$ and the nonzero brackets
\begin{equation*}
[X_1, X_2] = X_3, \, [X_1, X_3] = X_4, \, [X_1, X_4] = X_5.
\end{equation*}
(This algebra is denoted $\mathfrak l_{6,7}$ in \cite{deGraafclass}.)
Note that $X_6$ generates an abelian direct factor.
$\mathfrak g_0$ is a Carnot-graded algebra under the grading
\begin{equation*}
\left\langle X_1, X_2, X_6 \right\rangle \oplus \langle X_3 \rangle \oplus \langle X_4 \rangle \oplus \langle X_5 \rangle.
\end{equation*}
Let $(X^1, \ldots , X^6)$ be the dual basis, and denote by $\mu$ the law. Consider the following cochains:
\[
\begin{array}{llll}
\omega = X_2^{16} + X_1^{62}; & \xi_1 = X_5^{23}; &
\xi_2 = X_5^{26}; & \xi_3 = X_4^{26} + X_5^{36}.
\end{array}
\]
where we abbreviate $X_k \otimes X^i \wedge X^j$ into $X_k^{ij}$.
These are cocycles but not coboundaries, hence they represent nonzero classes in $H^2(\mu, \mu)$
(See the computations in Appendix \ref{subsec:nilpotent-adjoint}).
The cohomology classes of $\omega$, $\xi_1$, $\xi_2$ and $\xi_3$ have weight $-1$, $1$, $2$ and $1$ respectively under the grading.
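Here, the weight of an elementary cochain $X_k^{ij}$ is $\deg X_k - \deg X_i - \deg X_j$ with respect to the grading above (this convention reproduces the values just listed): for instance $\xi_2 = X_5^{26}$ has weight $4 - 1 - 1 = 2$, while both terms of $\omega$ have weight $1 - 1 - 1 = -1$.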
The classes of $\xi_1$, $\xi_2$, $\xi_3$ and $\xi_1 + \xi_2$ linearly expand into formal deformations; the corresponding laws are $\mathfrak l_{6,6}$, $\mathfrak{l}_{6,12}$, $\mathfrak{l}_{6,13}$ and $\mathfrak{l}_{6,11}$ respectively in \cite{deGraafclass}. All these are also degenerations, entering into the description of Example \ref{ex:nilpotent} (adding cocycles with positive weights to the Lie algebra law does not change the lower central filtration).
For every $k \in \mathbf R$, the cohomology class of $k \xi_1 + \omega$ is also integrable into a formal deformation, going through families of twistings of $\mathfrak g_0 = \mathfrak l_{6,7}$ if $k=0$, or through families of twistings of $\mathfrak l_{6,6}$ if $k \neq 0$.
One can check that all the simply connected solvable Lie groups that are $O(u)$-bilipschitz equivalent to $G_0$ appear as deformations of $G_0$ of the form described above.
Let us say a few words about this. If $H$ is such a group, then by \cite{PansuCCqi} and \cite{BreuillardLarge}, the shadow $\mathfrak h_0$ of its Lie algebra $\mathfrak h$ is isomorphic to $\mathfrak l_{6,i}$ with $i \in \lbrace 6,7, 11, 12, 13 \rbrace$. The three last algebras are irreducible (they have no direct factor), and maximal tori for those are computed in \cite{magnin2007adjoint}; in this way we check that only $\mathfrak l_{6,6}$ and $\mathfrak l_{6,7}$ possess derivations with purely imaginary spectra. Thus $\mathfrak h$ is either $\mathfrak l_{6,11}$, $\mathfrak l_{6,12}$, $\mathfrak l_{6,13}$, or a twisting of $\mathfrak l_{6,7}$ or of $\mathfrak l_{6,6}$. Note that the quasiisometry classes in this family are not completely known, though it is expected that they are given by the isomorphism type of the shadow \cite[Conjecture 19.114]{CornulierQIHLC}.
The real cohomology rings $H^\ast(\mathfrak l_{6,6}, \mathbf R)$ and $H^\ast(\mathfrak l_{6,7}, \mathbf R)$ are isomorphic \cite[\S 19.6.6]{CornulierQIHLC}.
However, $b_2(\mathfrak l_{6,13}) = 4$ while $b_2 = 5$ for all the others; and we can check that the rank of the cup product $H^2(\mathfrak l_{6,i}, \mathbf R) \odot H^2(\mathfrak l_{6,i}, \mathbf R) \to H^4(\mathfrak l_{6,i}, \mathbf R)$ is $2$ for $i=6$ while it is $3$ for $i=11$ and $i=12$ (see \ref{cohom-ring} for some details). All these algebras have $\mathbf Q$-forms.
So none of the Lie groups $L_{6,11}$, $L_{6,12}$, $L_{6,13}$ are quasiisometric to $G_0$ by \cite{SauerHom}.
\end{example}
It seems natural to expect Carnot-graded algebras to have more twistings than their nilpotent deformations (at the other extreme, observe that characteristically nilpotent Lie algebras, which lie ``deep down'' in $\mathcal N_n(\mathbf R)$, have no twistings). This is indeed the situation in the previous example. Thus we ask:
\begin{ques}
\label{ques:deformations}
Let $\mathfrak h$ be a solvable Lie algebra over $\mathbf R$, let $H$ be its associated simply connected Lie group, and let $G_0$ be a simply connected Carnot group with Lie algebra $\mathfrak g_0$.
Assume that $H$ and $G_0$ are $O(u)$-bilipschitz equivalent.
Is there a formal deformation of $\mathfrak g_0$ going through $\mathfrak h$?
\end{ques}
The author does not know whether the dimension of compactly embedded maximal tori is upper semicontinuous on $\mathcal N_n(\mathbf R)$ (which would hint towards a positive answer to Question \ref{ques:deformations}). These tori embed linearly in $H^1(\lambda, \lambda)$ whose dimension we have seen to be upper semicontinuous in Lemma \ref{lem:lsc-zariski}. However the codimension of the tori may be high (see Table \ref{tab:max-tori}).
\begin{table}[t]
\centering
\begin{tabular}{|c|c|l|l|}
\hline
$\mathfrak g$ & $\dim \mathfrak t_{\max}^{\mathrm{c}}$ & $\dim \mathfrak t_{\max}$ & $\dim H^1(\mathfrak g, \mathfrak g)$ \\
\hline
$\mathfrak l_{6,7}$ & 1 & 3 & 9 \cite[$\mathfrak g_{5,5} \times \mathbf C$]{Magnin08} \\
$\mathfrak l_{6,6}$ & 1 & 2 & 8 \cite[$\mathfrak g_{6,5} \times \mathbf C$]{Magnin08} \\
$\mathfrak l_{6,12}$ & 0 & 2 \cite[4.2.5]{magnin2007adjoint} & 7 \cite[$\mathfrak g_{6,11}$]{Magnin08} \\
$\mathfrak l_{6,11}$ & 0 & 1 \cite[4.1.1]{magnin2007adjoint} & 6 \cite[$\mathfrak g_{6,12}$]{Magnin08} \\
$\mathfrak l_{6,13}$ & 0 & 2 \cite[4.2.6]{magnin2007adjoint} & 5 \cite[$\mathfrak g_{6,13}$]{Magnin08} \\
\hline
\end{tabular}
\vskip 10pt
\caption{Dimensions of maximal tori, compactly embedded maximal tori, and outer derivation spaces for the nilpotent Lie algebras of Example \ref{exm:L67}.}
\label{tab:max-tori}
\end{table}
\iffalse
\subsubsection{Some history}
In the case $\mathfrak g= \mathfrak h$ (i.e. $\mathfrak g$ nilpotent), Goodman \cite{Goodman77} proved a statement close to \eqref{item:pansu-goodman-def} in Theorem \ref{th:cornulier-red}.
Goodman also considered the deformation theory of $\operatorname{gr}(\mathfrak g)$ in \cite{GoodmanLNM}, although the latter was not concerned about large-scale geometry.
Then Pansu proved asymptotic cone theorem \cite{PanCBN}.
Breuillard extended the asymptotic cone theorem to the groups of type (R), using the nilshadow construction of Auslander and Greene \cite{BreuillardLarge}.
\fi
\subsection{Higher-rank symmetric spaces}
\label{subsec:hrss}
The real rank of a symmetric space $X$ is an $o(r)$-bilipschitz invariant, as it is the covering dimension of the asymptotic cone \cite{CornulierDimCone} or, more in line with \cite[Corollary 6.11]{KleinerLeebQI}, the minimal degree above which all relative homology groups of subspaces in $\operatorname{Cone}^\bullet_\omega X$ vanish. This can be refined: the restricted root system is invariant.
\begin{proposition}[After Kleiner and Leeb]
Let $\phi : X \to Y$ be a sublinear bilipschitz equivalence between irreducible symmetric spaces $X$ and $Y$ of rank $\geqslant 2$.
Then, the restricted root systems associated with $X$ and $Y$ are isomorphic.
\end{proposition}
\begin{proof}
The spherical Tits building at infinity in $\operatorname{Cone}_\omega(X)$ has the same apartments as the Tits boundary of $X$ \cite[Theorem 5.2.1]{KleinerLeebQI}.
\end{proof}
We note that the rank $p$ irreducible symmetric spaces of noncompact type \[ \operatorname{SU}(p,2q)/S(\mathbf {U}_p \times \mathbf {U}_{2q}) \text{ and } \operatorname{Sp}(p,q)/\operatorname{Sp}(p) \times \operatorname{Sp}(q) \] have the same restricted root system $BC_p$ and the same asymptotic Assouad--Nagata dimension $4pq$ \cite[Table V p.~518]{HelgaDiffSym}.
Thus, we could not distinguish them with our techniques, and Question \ref{ques:classification} remains open so far for them.
The author is grateful to P. Pansu and G. Rousseau for bringing these pairs to his attention.
\subsection{Right-angled Fuchsian buildings of uniform thickness}
\label{subsec:rafb}
Given $(p,q)$ such that $p \geqslant 5$ and $q \geqslant 2$, the finitely presented group
\[ \Gamma_{p,q} = \langle s_1, \ldots, s_p \mid [s_i, s_{i+1}], s_i^q \rangle \]
(indices taken mod $p$) has a model $I_{p,q}$, which is a $\operatorname{CAT}(-1)$ cellular complex generalizing the cellular action of the hyperbolic Coxeter group $\Gamma_{p,2}$ on $\mathbb H^2_{\mathbf R}$ tessellated by right-angled $p$-gons and, following \cite{BourdonFuchsI},
\begin{equation}
\label{eq:confdim-buildings}
\operatorname{Cdim} \partial_\infty I_{p,q} = \frac{\log \tau(p,q)}{\log \tau(p,2)} = 1 + \frac{\log (q-1)}{\operatorname{argch}\left( \frac{p-2}{2} \right)}.
\end{equation}
The conformal dimension of $\partial_\infty I_{p,q}$ is not rational unless $q=2$.
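To fix ideas, for $(p,q)=(5,3)$ formula \eqref{eq:confdim-buildings} evaluates numerically (our computation) to
\[ \operatorname{Cdim} \partial_\infty I_{5,3} = 1 + \frac{\log 2}{\operatorname{argch}(3/2)} = 1 + \frac{0.6931\ldots}{0.9624\ldots} = 1.7202\ldots \]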
It is proven in \cite{pallier2019conf} that $\operatorname{Cdim}_{O(u)} \partial_\infty I_{p,q} = \operatorname{Cdim} \partial_\infty I_{p,q}$, so that it is an $O(u)$-bilipschitz invariant.
Using Poincaré profiles, Hume, Mackay and Tessera proved that there can be no coarse embedding $I_{p,q} \to I_{p', q'}$ when $\operatorname{Cdim} \partial_\infty I_{p,q} > \operatorname{Cdim} \partial_\infty I_{p',q'}$ \cite[Theorem 13.2]{HmTPoinc}.
We found the equality case in \eqref{eq:confdim-buildings} to be related to the following conjecture.
\begin{conjecture}[Four exponential conjecture, {\cite[p.11]{LangT}}]
\label{conj:four-exponentials}
Let $\beta_1, \beta_2$ be complex numbers, linearly independent over $\mathbf Q$, and let $z_1, z_2$ be complex numbers, also linearly independent over $\mathbf Q$.
Then, at least one of the numbers $e^{\beta_i z_j}$ is transcendental.
\end{conjecture}
The analogous statement with two triples $\beta_1, \beta_2, \beta_3$ and $z_1, z_2, z_3$ is known as the six exponentials theorem \cite{LangT}.
The unconditional form of the following Proposition is stated as a conjecture in \cite{TysonThesis} and \cite{MTconfdim}. (We indicate with an asterisk that our statement is conditional.)
\begin{cproposition}
\label{prop:tyson's conjecture}
Assume that Conjecture \ref{conj:four-exponentials} holds, and let $(p,q,p',q')$ be a quadruple of integers such that $p,p' \geqslant 5$ and $q, q' \geqslant 3$. Then the buildings $I_{p,q}$ and $I_{p', q'}$ have equal conformal dimension if and only if there exist positive integers $M,N$ such that
\begin{align}
\left( q-1 \right)^N & = (q'-1)^M
\label{eq:tyson-first-CN} \\
T_N \left( \frac{p-2}{2} \right) & = T_M \left( \frac{p'-2}{2} \right)
\label{eq:tyson-second-CN}
\end{align}
where $T_k$ is the Tchebychev polynomial of the first kind and degree $k$.
\end{cproposition}
\begin{proof}
If \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN} hold, then by the identity $T_k(\cosh \theta) = \cosh(k\theta)$,
\[ \frac{\log(q-1)}{\log(q'-1)} = \frac{M}{N} = \frac{\operatorname{argch}((p-2)/2)}{\operatorname{argch}((p'-2)/2)}, \]
so the conformal dimensions in \eqref{eq:confdim-buildings} agree. Conversely, assume that the conformal dimensions are equal, and set $z = \log(q-1)/\log(q'-1)$, so that $z \log (q'-1) = \log (q-1)$ and $z \operatorname{argch}((p'-2)/2) = \operatorname{argch}((p-2)/2)$. If $z$ were a rational number $M/N$, the same identity would produce \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN}; so suppose that $z$ is irrational.
Define $\beta_1 = \log (q'-1)$,
\begin{align*}
\beta_2 & = \operatorname{argch}((p'-2)/2) = \log \left( \frac{p'-2}{2} + \sqrt{p'(p'/4-1)} \right),
\end{align*}
$z_1=1$ and $z_2 = z$.
Then $\beta_1, \beta_2$ are linearly independent over $\mathbf Q$: otherwise the conformal dimension of $\partial_\infty I_{p',q'}$ would be rational, while $q' \geqslant 3$. Likewise $z_1, z_2$ are linearly independent over $\mathbf Q$, since $z$ is irrational. But $e^{\beta_1} = q'-1$, $e^{z \beta_1} = q-1$, $e^{\beta_2}$ and $e^{z\beta_2} = \frac{p-2}{2} + \sqrt{p(p/4-1)}$ are all algebraic, contradicting Conjecture \ref{conj:four-exponentials}.
\end{proof}
Note that the $I_{p,q}$ are quasiisometrically rigid for $q \geqslant 3$ \cite{XieBuildings}; in particular, they are classified up to quasiisometry by the pair $(p,q)$ for $q \geqslant 3$.
\begin{cproposition}
Assume that Conjecture \ref{conj:four-exponentials} holds.
If there exists an $O(u)$-bilipschitz equivalence $\phi: I_{p,q} \to I_{p',q'}$ then \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN} hold for some $M,N \geqslant 1$.
\end{cproposition}
\begin{proof}
This directly follows from Proposition \ref{prop:tyson's conjecture} and \cite{pallier2019conf}.
\end{proof}
Let us finish with some questions.
Though we expect that \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN} are perhaps not sufficient for $O(u)$-equivalence between $I_{p,q}$ and $I_{p', q'}$, we could not distinguish such buildings up to this relation. In a slightly different direction, one can ask:
\begin{ques}
Assume that $p,q,p', q'$ are as in \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN}. Are the groups $\Gamma_{p,q}$ and $\Gamma_{p',q'}$ (non)-measure equivalent? If yes, are they $L^p$-measure equivalent for some $p < +\infty$?
\end{ques}
(We recall that a measure equivalence between the finitely generated groups $\Gamma$ and $\Lambda$ is given by a couple of free, commuting, measure-preserving actions of $\Gamma$ and $\Lambda$ on a Lebesgue space $(\Omega,m)$ with Borel fundamental domains $X$ and $Y$ of finite measure; it is an $L^p$-measure equivalence if, in addition, the associated cocycles $c: \Gamma \times X \to \Lambda$ and $c': \Lambda \times Y \to \Gamma$ have $c(g, \cdot) \in L^p$ for all $g \in \Gamma$ and $c'(h, \cdot) \in L^p$ for all $h \in \Lambda$.)
Closer to the problems of this paper, the author also believes the following question to be currently open, and of some interest in view of \cite{HmTPoinc}.
\begin{ques}
Assume that $p,q,p', q'$ are as in \eqref{eq:tyson-first-CN} and \eqref{eq:tyson-second-CN}, and $p \neq p'$. Is there a coarse embedding $I_{p,q} \to I_{p',q'}$?
\end{ques}
\begin{appendix}
\section{Methods used for the cohomology computations}
The cohomology groups used in this paper are obtained by direct methods (i.e.\ by somewhat explicit computations of differentials, cocycles and coboundaries).
We summarize them below.
\label{app:cohomcomput}
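As an independent illustration of these direct methods, the following self-contained sketch (ours; it is not the computation used in the text) assembles the Chevalley--Eilenberg differential of a nilpotent Lie algebra, with trivial coefficients, in the monomial basis, and computes the Betti numbers $b_k = \dim H^k(\mathfrak g, \mathbf R)$ by rank--nullity. It is tested on the $3$-dimensional Heisenberg algebra, whose Betti numbers $(1,2,2,1)$ are classical; the routine can equally be fed the structure constants of the $\mathfrak l_{6,i}$ to double-check the dimensions quoted in \ref{cohom-ring}.
\begin{verbatim}
from itertools import combinations
from math import comb
import numpy as np

def insert_sorted(m, rest):
    # Sort (m,) + rest, return (tuple, sign); (None, 0) if m repeats an index
    if m in rest:
        return None, 0
    pos = sum(1 for x in rest if x < m)
    return tuple(sorted((m,) + rest)), (-1) ** pos

def ce_differential(brackets, n, k):
    # Matrix of d : Lambda^k g* -> Lambda^{k+1} g* (trivial coefficients);
    # brackets[(i, j)] = {m: c} means [X_i, X_j] = sum_m c X_m (i < j, 0-based)
    rows = list(combinations(range(n), k + 1))
    cols = list(combinations(range(n), k))
    D = np.zeros((len(rows), len(cols)))
    for r, idx in enumerate(rows):
        for i, j in combinations(range(k + 1), 2):
            rest = tuple(idx[a] for a in range(k + 1) if a not in (i, j))
            for m, c in brackets.get((idx[i], idx[j]), {}).items():
                col, sign = insert_sorted(m, rest)
                if col is not None:
                    D[r, cols.index(col)] += (-1) ** (i + j) * sign * c
    return D

def betti(brackets, n):
    def rank(k):
        D = ce_differential(brackets, n, k)
        return 0 if D.size == 0 else int(np.linalg.matrix_rank(D))
    rk = [rank(k) for k in range(n + 1)]          # rank of d_k
    return [comb(n, k) - rk[k] - (rk[k - 1] if k > 0 else 0)
            for k in range(n + 1)]

heisenberg = {(0, 1): {2: 1.0}}                   # [X_1, X_2] = X_3
assert betti(heisenberg, 3) == [1, 2, 2, 1]
\end{verbatim}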
\subsection{Solvable Lie algebras}
\label{sec:adjoint-computations}
Let $\mathfrak b(n, \mathbf K)$ be defined as in Example \ref{exm:bnK}, with coordinates $(z_\alpha, \tau,s)$.
Decompose $z_\alpha = x_\alpha + iy_\alpha$; for $1 \leqslant \alpha_1 < \cdots < \alpha_s \leqslant n-1$ and $1 \leqslant \beta \leqslant n-1$, denote
\begin{align*}
X^{\alpha_1, \ldots ,\alpha_s}_{\beta} & = dx_{\alpha_1} \wedge \cdots \wedge dx_{\alpha_s} \otimes \frac{\partial}{\partial x_{\beta}}
\\
Y^{\alpha_1, \ldots ,\alpha_s}_{\beta} & = dy_{\alpha_1} \wedge \cdots \wedge dy_{\alpha_s} \otimes \frac{\partial}{\partial{y_{\beta}}}
\\
T & = \partial_\tau, \; T^\ast = d \tau, \; S = \partial_s, \; S^\ast = d s.
\end{align*}
We apply the summation convention: we abbreviate $\sum_\mu X^{\alpha \mu}_\alpha$ to $X^{\alpha \mu}_\alpha$ in any equality between tensors whenever $\mu$ is unbound in the right-hand side.
The Lie algebra grading $\mathfrak s_0 = \langle S \rangle$, $\mathfrak s_1 = \langle X_\alpha, Y_\alpha \rangle$ and $\mathfrak s_2 =\langle T \rangle$ extends to a grading of the mixed exterior/tensor product so that,
say,
$X^{\alpha_1, \ldots \alpha_s}_{\beta} \wedge T$
has weight $ 1 -s + 2$.
The differentials have degree $0$, hence the cohomology groups are graded accordingly. Finally, $\mathfrak b(n , \mathbf C)$ has a preferred complex structure: $JX_\alpha = Y_\alpha$, $JY_\alpha = -X_\alpha$ and $JS = T$. This is because $\mathbf H^n_{\mathbf C}$ is Hermitian. $J$ is not an automorphism; nevertheless,
\begin{equation*}
\widetilde{J}(Z) =
\begin{cases}
J(Z) & Z \in \mathfrak s_1 \\
Z & Z \in \mathfrak s_0 \oplus \mathfrak s_2
\end{cases}
\end{equation*}
is an automorphism, and we will use it in order to simplify the computations.
\subsubsection{Results}
\begin{proposition}
\label{prop:first-adjoint-R}
$H^1(\mathfrak b(n, \mathbf R), \mathfrak b(n, \mathbf R)) = \bigoplus_{(\alpha, \beta) \neq (n-1, n-1)} \langle X_\alpha^\beta \rangle$.
\end{proposition}
\begin{proposition}
\label{prop:second-adjoint-R}
$H^2(\mathfrak b(n, \mathbf R), \mathfrak b(n, \mathbf R)) = \bigoplus_{(\alpha, \beta) \neq (n-1, n-1)} \langle [X_\alpha \otimes S^\ast \wedge X^\beta] \rangle$.
\end{proposition}
Since the computation of $H^1(\mathfrak b(n, \mathbf C), \mathfrak b(n,\mathbf C))$ proves useful for Proposition \ref{prop:first-adjoint-R} and is not significantly harder than the case $n=2$ used in Section \ref{sec:proofE}, we provide the result for all $n$ and the weight decomposition below.
\begin{proposition}
\label{prop:first-adjoint-C}
$\dim H^1 (\mathfrak b(n, \mathbf C), \mathfrak b(n, \mathbf C)) = (n-1)^2 + 1$ and
\[ H^1 (\mathfrak b(n, \mathbf C), \mathfrak b(n, \mathbf C))
= \operatorname{span}
\begin{cases}
[T \otimes S^\ast] & \mathrm{weight} -2 \\
[X_\alpha^\beta - Y_\beta^\alpha]_{1 \leqslant \alpha, \beta \leqslant n-1}, \alpha \neq \beta & \mathrm{weight}\, 0 \\
[X_\alpha \otimes Y^{\alpha}]_{1 \leqslant \alpha \leqslant n-1} & \mathrm{weight} \, 0. \\
\end{cases} \]
\end{proposition}
\begin{proposition}
\label{prop:first-adjoint-S''}
Let $\mathfrak s''$ be the four-dimensional Lie algebra $\mathbf R^3 \rtimes_{\alpha} \mathbf R$, where $\alpha = \operatorname{diag}(J_2(1), 2)$. Then $\dim H^1(\mathfrak s'', \mathfrak s'') = 4$.
\end{proposition}
\subsubsection{Method}
To save space in the proofs of Propositions \ref{prop:first-adjoint-R} to \ref{prop:first-adjoint-C}, we gather the computations for $\mathbf R$ and $\mathbf C$ and then extract the case $\mathbf K = \mathbf R$. We abbreviate the differential of the complex $C^\bullet(\mathfrak b(n, \mathbf K), \mathfrak b(n, \mathbf K))$ to $d'_{\mathbf K}$.
\begin{lemma}
\label{lem:differentials-of-1-cochains-C}
For all $\alpha, \beta$ such that $1 \leqslant \alpha, \beta \leqslant n-1$,
\[
\begin{array}{rlrl}
d'_{\mathbf C} X_\alpha^\beta & = X^\beta \wedge Y^\alpha \otimes T;
&
d'_{\mathbf C}Y_\alpha^\beta & = X^\alpha \wedge Y^\beta \otimes T;
\\
d'_{\mathbf C} (X_\alpha \otimes Y^\beta) & = - 2 Y^{\alpha \beta} \otimes T; &
d'_{\mathbf C} (Y_\alpha \otimes X^\beta) &= 2 X^{\alpha \beta} \otimes T; \\
d'_{\mathbf C}(X_\alpha \otimes S^\ast) & = - Y^\alpha \wedge S^\ast \otimes T; &
d'_{\mathbf C}(Y_\alpha \otimes S^\ast) & = X^\alpha \wedge S^\ast \otimes T. \\
d'_{\mathbf C}(T \otimes X^\alpha) & = X^\alpha \wedge S^\ast \otimes T; &
d'_{\mathbf C}(T \otimes Y^\alpha) & = Y^\alpha \wedge S^\ast \otimes T \\
d'_{\mathbf C}(T \otimes S^\ast) & = 0 &
d'_{\mathbf C}(S\otimes T^\ast) & = - 2 S^\ast \wedge T \otimes S.
\end{array}
\]
\begin{align*}
d'_{\mathbf C}(S \otimes S^\ast) & = - X^\mu \wedge S^\ast \otimes X_\mu - Y^\mu \wedge S^\ast \otimes Y_\mu - 2T \wedge S^\ast \otimes T. \\
d'_{\mathbf C}(T \otimes T) & = - X^\mu \wedge Y^\mu \otimes T.
\\
d'_{\mathbf C}(S \otimes X^\alpha) & = X_{\ell}^{\alpha \ell} - X^\alpha \wedge Y^\ell \otimes Y_\ell + X^\alpha \wedge S^\ast \otimes S - 2 X^\alpha \wedge T^\ast \otimes T;
\\
d'_{\mathbf C}(S \otimes Y^\alpha) & = Y_{\ell}^{\alpha \ell} - Y^\alpha \wedge X^\ell \otimes X_\ell + Y^\alpha \wedge S^\ast \otimes S - 2 Y^\alpha \wedge T^\ast \otimes T;
\\
d'_{\mathbf C}(X_\alpha \otimes T^\ast) & = ( X^\mu \wedge Y^\mu - S^\ast \wedge T) \otimes X_\alpha + Y^\alpha \wedge T^\ast \otimes T
\\
d'_{\mathbf C}(Y_\alpha \otimes T^\ast) & = ( - Y^\mu \wedge X^\mu - S^\ast \wedge T) \otimes Y_\alpha - X^\alpha \wedge T^\ast \otimes T
\end{align*}
\end{lemma}
\begin{proof}
The whole computation being of little interest, let us explain in detail only how one computes $d'_{\mathbf C} X_\alpha^\beta$, $d'_{\mathbf C} Y_\alpha^\beta$ and $d'_{\mathbf C}(X_\alpha \otimes S^\ast)$, as a sample of the techniques employed. Applying \eqref{eq:chevalley-eilenberg-adjoint},
\begin{align*}
d'_{\mathbf C}X_{\alpha}^\beta (X_{\mu \ell}) & = - X_\alpha^\beta [X_\mu, X_\ell] + [X_\mu, X_\alpha^\beta X _\ell] - [X_{\ell}, X_\alpha^\beta X_\mu] \\
& = [X_\mu, \delta_{\beta\ell} X_\alpha] - [X_\ell, \delta_{\beta\mu} X_\alpha] =0; \\
d'_{\mathbf C}X_\alpha^\beta (Y_{\mu \ell}) & = - X_\alpha^\beta [Y_\mu, Y_\ell] + [Y_\mu, X_\alpha^\beta Y _\ell] - [Y_{\ell}, X_\alpha^\beta Y_\mu] = 0; \\
d'_{\mathbf C} X_\alpha^\beta (X_\mu \wedge Y_\ell) & = - X_\alpha^\beta [X_\mu, Y_\ell] + [X_\mu , X_\alpha^\beta Y_\ell] - [Y_\ell, X_\alpha^\beta X_\mu] \\
& = - \delta_{\mu \ell} X_\alpha^\beta T - \delta_{\beta\mu} [Y_\ell, X_\alpha] = \delta_{\alpha \ell} \delta_{\beta \mu} T; \\
d'_{\mathbf C} X_\alpha^\beta (X_\mu \wedge S) & = - X_\alpha^\beta [X_\mu, S] + [X_\mu, X_\alpha^\beta S] - [S, X_\alpha^\beta X_\mu] \\ & = X_\alpha^\beta X_\mu - [S, \delta_{\beta\mu}X_\alpha] = \delta_{\beta\mu} (X_\alpha - X_\alpha) = 0, \\
d'_{\mathbf C}X_\alpha^\beta (X_\mu \wedge T)
&= - X_\alpha^\beta [X_\mu, T] + [X_\mu, X_\alpha^\beta T] - [T, X_\alpha^\beta X_\mu] = - [T, \delta_{\beta \mu} X_\alpha] = 0. \\
d'_{\mathbf C}X_\alpha^\beta (Y_\mu \wedge S) & = - X_\alpha^\beta [Y_\mu, S] + [Y_\mu, X_\alpha^\beta S] - [S, X_\alpha^\beta Y_\mu] = X_\alpha^\beta Y_\mu =0. \\
d'_{\mathbf C}X_\alpha^\beta(Y_\mu \wedge T) &= - X_\alpha^\beta [Y_\mu, T] + [Y_\mu, X_\alpha^\beta T] - [T, X_\alpha^\beta Y_\mu] = 0; \\
d'_{\mathbf C} X_\alpha^\beta (S \wedge T) & = - X_\alpha^\beta [S, T] + [S, X_\alpha^\beta T ] - [T, X_\alpha^\beta S]= - 2 X_\alpha^\beta T = 0,
\end{align*}
which yields the expression of $d'_{\mathbf C} X_\alpha^\beta$.
Applying $\widetilde J$ produces $d'_{\mathbf C} Y_\alpha^\beta$:
\begin{equation*}
d'_{\mathbf C} Y_\alpha^\beta = d'_{\mathbf C} \widetilde{J}X_\alpha^\beta = \widetilde J X^\beta \wedge \widetilde{J} Y^\alpha \otimes T = Y^\beta \wedge (-X^\alpha) \otimes T = X^\alpha \wedge Y^\beta \otimes T.
\end{equation*}
\iffalse
In the same way, in order to show \eqref{C2ss-11inZ2ss} it is sufficient to compute $d X_\alpha \otimes Y^\beta$.
\begin{align*}
d'_{\mathbf C}(X_\alpha \otimes Y^\beta)(X_{\mu \ell}) & = 0. \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta)(Y_{\mu \ell}) & = \left( - \delta_{\alpha \mu} \delta_{\beta \ell} + \delta_{\alpha \ell} \delta_{\beta \mu} \right) T. \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta) (X_\mu \wedge Y_\ell) & = - X_\alpha \otimes Y^\beta [X_\mu, Y_\ell] + [X_\mu, X_\alpha \otimes Y^\beta Y_\ell] - [Y_\ell, X_\alpha \otimes Y^\beta X_\mu] \\
& = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta)(X_{\mu} \wedge S) & = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta)(X_{\mu} \wedge T) & = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta)(Y_{\mu} \wedge S) & = X_\alpha \otimes Y^\beta Y_\mu - [S, X_\alpha \otimes Y^\beta Y_\mu ] = \delta_{\beta \mu} (X_\alpha - [S, X_\alpha]) = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta) (Y_\mu \wedge T) & = - X_\alpha \otimes Y^\beta[Y_\mu, T] + [Y_\mu, X_\alpha \otimes Y^\beta T] - [T, X_\alpha \otimes Y^\beta Y_\mu] \\
& = - \delta_{\beta \mu} [T, X_\alpha] = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta) (S\wedge T) & = 0; \\
d'_{\mathbf C}(X_\alpha \otimes Y^\beta) (T_{\mu \ell}) & = 0.
\end{align*}
\fi
Now for $d'_{\mathbf C}(X_\alpha \otimes S^\ast)$: using the observation that $\ker S^\ast = \mathfrak s_1 \oplus \mathfrak s_2 =[\mathfrak s, \mathfrak s]$, we can reduce the number of terms needed in the computation, since $d'_{\mathbf C}(X_\alpha \otimes S^\ast)$ evaluates to zero on any bivector in which $S$ is not a factor. The remaining terms are:
\begin{align*}
d'_{\mathbf C}(X_\alpha \otimes S^\ast) (X_\mu \wedge S) & = [X_\mu, X_\alpha \otimes S^\ast S] = 0 ;\\
d'_{\mathbf C}(X_\alpha \otimes S^\ast) (Y_\mu \wedge S) & = [Y_\mu, X_\alpha \otimes S^\ast S] = [Y_\mu, X_\alpha]= - \delta_{\alpha \mu} T; \\
d'_{\mathbf C}(X_\alpha \otimes S^\ast) (S \wedge T ) & = - [T, X_\alpha \otimes S^\ast S] + [S, X_\alpha \otimes S^\ast T] = 0.\qedhere
\end{align*}
\iffalse
\eqref{eq:dSXa} is determined by
\begin{align*}
d'_{\mathbf C}(S \otimes X^\alpha)(X_{\mu \ell}) & = - \delta_{\alpha \ell} X_\mu + \delta_{\alpha_\mu} X_{\ell}; \\
d'_{\mathbf C}(S \otimes X^\alpha) (X_\mu\wedge Y_\ell) & = S \otimes X^\alpha [X_\mu, Y_\ell] - [X_\mu, S \otimes X^\alpha Y_\ell] +[Y_\ell, S \otimes X^\alpha X_\mu] \\
& = \delta_{\alpha \mu} [Y_\ell, S] = \delta_{\alpha \mu } Y_\ell; \\
d'_{\mathbf C}(S \otimes X^\alpha) (Y_{\mu\ell}) & = 0; \quad
d'_{\mathbf C}(S \otimes X^\alpha) (X_\mu \wedge S) = \delta_{\alpha \mu }S; \\
d'_{\mathbf C}(S \otimes X^\alpha) (X_\mu \wedge T) & = 2 \delta_{\alpha \mu} T; \quad
d'_{\mathbf C}(S \otimes X^\alpha)(Y_\mu \wedge S) = S \otimes X^\alpha Y_\mu = 0; \\
d'_{\mathbf C}(S \otimes X^\alpha)(Y_\mu \wedge T) & = 0; \\
d'_{\mathbf C}(S \otimes X^\alpha)(S \wedge T) & = S \otimes X^\alpha (-2 T) = 0; \\
d'_{\mathbf C}(S \otimes X^\alpha)(T_{\mu \ell}) & = 0;
\end{align*}
and then for \eqref{eq:dSYa}:
\begin{align*}
d(S \otimes Y^\alpha) & = \widetilde J d(S \otimes X^\alpha) \\
&= Y^{\alpha \ell} - Y^\alpha \wedge (-X^\ell) \otimes (-X_\ell) + Y^\alpha \wedge S^\ast \otimes S - 2 Y^\alpha \wedge T^\ast \otimes T \\
& = Y_{\ell}^{\alpha \ell} - Y^\alpha \wedge X^\ell \otimes X_\ell + Y^\alpha \wedge S^\ast \otimes S - 2 Y^\alpha \wedge T^\ast \otimes T. \label{eq:dSYa} \\
\end{align*}
For \eqref{eq:dXaT}
\[
\begin{array}{rlr}
d(X_\alpha \otimes T) (X_{\mu \ell}) & = 0; \qquad \qquad \qquad \; d(X_\alpha \otimes T) (Y_{\mu \ell}) = 0; \\
d(X_\alpha \otimes T) (X_\mu \wedge Y_\ell) & = - \delta_{\mu \ell} \delta_{1 \nu} X_\alpha; \qquad d(X_\alpha \otimes T) (X_{\mu} \wedge S) = 0; \\
d(X_\alpha \otimes T) (X_{\mu} \wedge T) & = - X_\alpha \otimes T [X_\mu, T] + [X_\mu, X_\alpha \otimes T T] - [T, X_\alpha \otimes T X_\mu] \\
& = \delta_{\nu \ell} [X_\mu, X_\alpha] = 0; \\
d(X_\alpha \otimes T) (Y_{\mu} \wedge S) & = 0; \qquad
d(X_\alpha \otimes T) (Y_{\mu} \wedge T) = [Y_\mu, X_\alpha \otimes T T] = \delta_{\nu \ell} \delta_{\alpha \mu} T_1; \\
d(X_\alpha \otimes T)(S \wedge T) & = - X_\alpha \otimes T [S, T] + [S, X_\alpha \otimes T T] - [T, X_\alpha \otimes T S] \\
& = - 2 X_\alpha \otimes T T + \delta_{\nu \ell} [S, X_\alpha] = - 2 \delta_{\nu \ell} X_\alpha + \delta_{\nu \ell} X_\alpha = - {\delta_{\nu\ell}} X_\alpha; \\
d(X_\alpha \otimes T)(T_{\mu \ell}) & = 0.
\end{array} \]
\begin{align*}
d(T \otimes X^\alpha) (X_{\mu \ell}) & = 0 \\
d(T \otimes X^\alpha) (Y_{\mu \ell}) & = 0 \\
d(T \otimes X^\alpha) (X_{\mu} \wedge Y_{\ell}) & = 0 \\
d(T \otimes X^\alpha) (X_{\mu} \wedge S) & = T \otimes X^\alpha [X_\mu, S] - [X_\mu, T \otimes X^\alpha S] + [S, T \otimes X^\alpha X_\mu] \\
& = - \delta_{\alpha \mu} T + 2 \delta_{\alpha \mu} T_nu = \delta_{\alpha \mu} T. \\
d(T \otimes X^\alpha) (X_{\mu} \wedge T) & = - [X_\mu, T \otimes X^\alpha T] + [T, T \otimes X^\alpha X_\mu] \\
& = \delta_{\alpha \mu} [T, T] = 0; \\
d(T \otimes X^\alpha) (Y_{\mu} \wedge S) & = 0; \\
d(T \otimes X^\alpha) (Y_{\mu} \wedge T) & = 0; \\
d(T \otimes X^\alpha) (S \wedge T) & = 0; \\
d(T \otimes X^\alpha) (T_{\mu \ell}) & = 0. \\
\end{align*}
We need only evaluate $d(T \otimes S^\ast)$ on $X_\mu \wedge S$ and $S \wedge T$:
\begin{align*}
d(T \otimes S^\ast) (X_\mu \wedge S) & = - T \otimes S^\ast [X_\mu, S] + [X_\mu, T \otimes S^\ast S] - [T, S \otimes S X_\mu] = 0; \\
d(T \otimes S^\ast) (S \wedge T)
& = - T \otimes S^\ast [S, T] + [S, T \otimes S^\ast T] - [T, T \otimes S^\ast S] \\
& = - [T, T] = 0.
\end{align*}
\begin{align*}
d(S\otimes T) (X_\mu \wedge S) & = 0; \\
d(S\otimes T) (S \wedge T)
& = - S \otimes T [S, T] + [S, S \otimes T T] - [S, S \otimes T X_\mu] \\
& = - S \otimes T (2 T) + \delta_{\mu \nu} [S,S] \\
& = - 2 \delta_{\mu \nu} S,
\end{align*}
and $d(S\otimes T)$ vanishes on the remaining bivectors.
Finally
\begin{align*}
d(T \otimes T^\ast)(X_{\alpha \beta}) & = 0; \\
d(T \otimes T^\ast) (X_\alpha \wedge Y_\beta) & = - T[X_\alpha, Y_\beta] = \delta_{\alpha \beta} T; \\
d(T \otimes T^\ast)(X_\alpha \wedge S) & = 0; \\
d(T \otimes T^\ast)(X_\alpha \wedge T) & = 0; \\
d(T \otimes T^\ast)(S \wedge T) & = - 2 T ^\nu T + \delta_{\nu \ell} [S, T] = 0. \qedhere
\end{align*}
\fi
\end{proof}
\begin{lemma}
\label{lem:differentials-of-1-cochains-R}
For all $\alpha, \beta$ such that $1 \leqslant \alpha, \beta \leqslant n-1$,
\begin{align*}
d'_{\mathbf R} X_\alpha^\beta & = 0 \\
d'_{\mathbf R}(S \otimes X^\alpha) & = X^{\alpha \ell}_\ell + X^\alpha \wedge S^\ast \otimes S \\
d'_{\mathbf R}(X_\alpha \otimes S^\ast) & = 0 \\
d'_{\mathbf R}(S \otimes S^\ast) & = - X^\mu \wedge S^\ast \otimes X_\mu.
\end{align*}
\end{lemma}
\begin{proof}
Discard the terms involving $Y$ and $T$ in the results of Lemma \ref{lem:differentials-of-1-cochains-C}.
\end{proof}
\begin{lemma}[Differentials of $2$-cochains]
\label{lem:differentials-of-2-cochains-R}
\begin{align*}
d'_{\mathbf R}(X_\alpha^{\beta \gamma}) & = - 2 X_\alpha \otimes X^{\beta \gamma} \wedge S^\ast \\
d'_{\mathbf R}(S \otimes X^{\alpha \beta}) & = - 2 S \otimes X^{\alpha \beta} \wedge S^\ast \\
d'_{\mathbf R}(X_\alpha \otimes S^\ast \wedge X^{\beta}) & = 0 \\
d'_{\mathbf R}(S \otimes S^\ast \wedge X^{\alpha}) & = 2 X_\mu \otimes X^{\mu \alpha} \wedge S^\ast.
\end{align*}
\end{lemma}
\begin{proof}
Let us concentrate on $d'_\mathbf R(X_\alpha^{\beta \gamma})$. First recall that if $\mathfrak g$ is a Lie algebra and $\gamma$ is a $\mathfrak g$-valued $2$-form, then for all $U, V, W \in \mathfrak g$,
\begin{align*}
d\gamma(U\wedge V \wedge W) & = [U, \gamma(V \wedge W)] + [V, \gamma(W \wedge U)] + [W, \gamma(U \wedge V)] \\
& \quad - \gamma([U,V] \wedge W) - \gamma([W,U] \wedge V) - \gamma([V,W] \wedge U).
\end{align*}
Applying this, one checks readily that $d'_\mathbf R(X_\alpha^{\beta \gamma}) (X_{\mu\nu \ell})= 0$ while
\begin{align*}
d'_\mathbf R(X_\alpha^{\beta \gamma}) (X_{\mu\nu} \wedge S)
& = 0 - X_\alpha^{\beta\gamma} \left( [X_\mu, X_\nu] \wedge S + [S, X_\mu] \wedge X_\nu + [X_\nu, S] \wedge X_\mu \right) \\
& = 2 \left( - \delta_{\beta\mu} \delta_{\gamma \nu} +\delta_{\beta\nu} \delta_{\gamma\mu} \right) X_\alpha. \qedhere
\end{align*}
\end{proof}
\begin{remark}
The Lie algebra-valued forms have a wedge product. However we did not find a clear computational advantage in using formulae for the derivative of $2$-forms using this wedge product.
\end{remark}
\begin{proof}[Proof of Proposition \ref{prop:first-adjoint-R}]
By Lemma \ref{lem:differentials-of-1-cochains-R},
\[Z^1(\mathfrak b(n,\mathbf R), \mathfrak b(n,\mathbf R)) = \operatorname{span} \left\{ X_\alpha^\beta, X_\alpha \otimes S^\ast \right\}_{1 \leqslant \alpha, \beta \leqslant n-1}, \]
while $d'_{\mathbf R}X_\alpha = X_\alpha \otimes S^\ast$ and $d'_{\mathbf R}S = - X_\mu^\mu$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:second-adjoint-R}]
By Lemma \ref{lem:differentials-of-1-cochains-R},
\begin{equation}
\notag
B^2(\mathfrak b(n, \mathbf R), \mathfrak b(n, \mathbf R)) = \operatorname{span} \left\{ X^{\alpha \ell}_\ell + X^\alpha \wedge S^\ast \otimes S, X_\mu \otimes X^\mu \wedge S^\ast \right\}_{\alpha = 1, \ldots , n-1}
\end{equation}
while by Lemma \ref{lem:differentials-of-2-cochains-R},
\begin{equation}
\notag
Z^2 (\mathfrak b(n, \mathbf R), \mathfrak b(n, \mathbf R)) = \operatorname{span} \left\{
X_\mu^{\mu \alpha} + S \otimes S^\ast \wedge X^\alpha,
X_\alpha \otimes X^\beta \wedge S^\ast \right\}_{\alpha = 1, \ldots, n-1}. \qedhere
\end{equation}
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:first-adjoint-C}]
The $1$-coboundaries are computed as
\begin{align*}
d'_{\mathbf C}X_\alpha & = X_\alpha \otimes S^\ast - T \otimes Y^\alpha \\
d'_{\mathbf C}Y_\alpha & = Y_\alpha \otimes S^\ast + T \otimes X^\alpha \\
d'_{\mathbf C}S & = - X_\mu^\mu - Y_\mu^\mu - 2 T \otimes T^\ast \\
d'_{\mathbf C}T & = 2T \otimes S^\ast.
\end{align*}
The $1$-cocycles are then read off from the right-hand sides of the equations in Lemma \ref{lem:differentials-of-1-cochains-C}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:first-adjoint-S''}]
We recall that
$\mathfrak s''$ is the Lie algebra with basis $X_1, \ldots ,X_4$, with
$\mathfrak s'' = \langle X_1, X_2, X_3 \rangle \rtimes \langle X_4 \rangle$ and
\begin{equation*}
\operatorname{ad}(X_4) = \begin{pmatrix} 1 & 1 & 0 \\
0 & 1 & 0 \\ 0 & 0 & 2
\end{pmatrix}
\end{equation*}
in the basis $(X_1, X_2, X_3)$.
Omitting the symbol $\sum_{i<j}$ and using $d'(X_k^\ell)(x^{ij} X_{ij}) = - x^{ij} X_k^\ell[X_i, X_j] + \delta_{j\ell} x^{ij} [X_i, X_{k}] - \delta_{i\ell} x^{ij} [X_j, X_k]$ one finds
\[
\begin{array}{ll}
d' (X_1^1) = 2 X_1^{14} + X_1^{24}
& d'(X_1^2) = -X_1^{24} + X_2^{24} \\
d'(X_1^3) = - X_1^{12} + 3 X_1^{34}
& d'(X_1^4) = 0 \\
d'(X_2^1) = X_2^{14}
& d'( X_2^2) = X_1^{24} \\
d'(X_2^3) = - X_1^{34} - 3 X_2^{34}
& d'(X_2^4) = 0
\\
d'(X_3^1) = - X_3^{14}
& d'(X_3^2) = - X_3^{24} \\
d'(X_3^3) = 0
& d'(X_3^4) = 0 \\
d'(X_4^1) = X_4^{14} + X_4^{24} + X_1^{14}
& d'(X_4^2) = X_4^{24} - X_1^{12} + 2 X_3^{23} \\
d'(X_4^3) = 2X_4^{34} - X_1^{13} - X_1^{23} - X_2^{23}
& d'(X_4^4) = - X_1^{14} - X_1^{24} - X_2^{24} - X_3^{34}.
\end{array}
\]
All the nonzero coboundaries obtained are linearly independent, hence
\begin{equation*}
H^1(\mathfrak s'', \mathfrak s'') = \operatorname{span}(X_3^3, X_1^4, X_2^4, X_3^4). \qedhere
\end{equation*}
\end{proof}
\subsection{The Lie algebra $\mathfrak l_{6,7}$ and its nilpotent deformations}
We expand below on the computations needed for Example \ref{exm:L67}.
\subsubsection{Adjoint cohomology of \texorpdfstring{$\mathfrak l_{6,7}$}{l67}}
\label{subsec:nilpotent-adjoint}
One computes the $2$-coboundaries as:
\[
\begin{array}{lll}
d'_{\mu}(X_1^1) = X_{3}^{12} + X_4^{13} + X_5^{14}
& d'_{\mu}(X_2^1) = 0
& d'_{\mu}(X_3^1) = 0 \\
d'_{\mu}(X_1^2) = X_4^{23} + X_5^{24}
& d'_{\mu}(X_2^2) =X_3^{12}
& d'_{\mu}(X_3^2 ) = X_4^{12} \\
d'_{\mu}(X_1^3) = - X_3^{23} + X_5^{34}
& d'_{\mu}(X_2^3) = X_3^{13}
& d'_{\mu}(X_3^3 ) = X_4^{13} \\
d'_{\mu}(X_1^4) = - X_3^{24} - X_4^{34}
& d'_{\mu}(X_2^4) = X_3^{14}
& d'_{\mu}(X_3^4) = X_4^{14} \\
d'_{\mu}(X_1^5) = -X_3^{25} - X_4^{35} - X_5^{45}
& d'_{\mu}(X_2^5) = X_3^{15}
& d'_{\mu}(X_3^5) = X_4^{15} \\
d'_{\mu}(X_1^6) = - X_3^{26} - X_4^{36} - X_5^{46}
& d'_{\mu}(X_2^6) = X_3^{16}
& d'_{\mu}(X_3^6) = X_4^{16} \\
d'_{\mu}(X_4^1) = 0
& d'_{\mu}(X_5^1) = 0
& d'_{\mu}(X_6^1) = 0 \\
d'_{\mu}(X_4^2) = X_5^{12}
& d'_{\mu}(X_5^2) = 0
& d'_{\mu}(X_6^2) = 0 \\
d'_{\mu}(X_4^3) = - X_4^{12} + X_5^{13}
& d'_{\mu}(X_5^3) = - X_5^{12}
& d'_{\mu}(X_6^3 ) = - X_6^{12} \\
d'_{\mu}(X_4^4) = - X_4^{14} + X_5^{15}
& d'_{\mu}(X_5^4) = - X_5^{13}
& d'_{\mu}(X_6^4) = - X_6^{13} \\
d'_{\mu}(X_4^5) = - X_4^{14} +X_5^{15}
& d'_{\mu}(X_5^5) = - X_5^{14}
& d'_{\mu}(X_6^5) = - X_6^{14} \\
d'_{\mu}(X_4^6) = X_5^{16}
& d'_{\mu}(X_5^6) = 0
& d'_{\mu}(X_6^6) = 0.
\end{array}
\]
This justifies the assertion that $\omega$, $\xi_1$, $\xi_2$, $\xi_3$ and $\xi_1 + \xi_2$ are not coboundaries. We now check that they are cocycles. In the computations below, we omit the symbol $\sum_{i<j<k}$ and discard the terms that can be checked to vanish by direct inspection.
\begin{align*}
d'_\mu \omega (x^{ijk} X_{ijk}) & = d'_\mu (X_2^{16}+X_1^{62}) (x^{ijk} X_{ijk}) \\
& = - x^{ijk} [X_j, X_2^{16} X_{ik}] - x^{ijk} [X_i, X_1^{26} X_{jk}] + x^{ijk} [X_j, X_1^{26} X_{ik}] = 0;
\end{align*}
\begin{align*}
d'_\mu \xi_1 (x^{ijk} X_{ijk})
& = d'_\mu (X_5^{23}) (x^{ijk} X_{ijk}) \\
& = -x^{ijk} X_5^{23}([X_i, X_j] \wedge X_k) + x^{ijk} X_5^{23}([X_i, X_k] \wedge X_j) \\
& \quad - x^{ijk} X_5^{23}([X_j, X_k] \wedge X_i) + x^{ijk} [X_i, X_5^{23} X_{jk}] - x^{ijk} [X_j, X_5^{23} X_{ik}] \\
& \quad + x^{ijk} [X_k, X_5^{23} X_{ij}] \\
& = x^{123} [X_1, X_5^{23} X_{23}] + x^{234}[X_4,X_5] + x^{235} [X_5,X_5] = 0;
\end{align*}
\begin{align*}
d'_\mu \xi_2 (x^{ijk} X_{ijk}) & = d'_\mu (X_5^{26}) (x^{ijk} X_{ijk}) \\
& = x^{ijk} [X_i, X_5^{26} X_{jk}] - x^{ijk} [X_j, X_5^{26} X_{ik}] + x^{ijk} [X_k, X_5^{26} X_{ij}] = 0; \\
d'_\mu \xi_3 (x^{ijk} X_{ijk}) & = d'_\mu (X_4^{26} + X_5^{36}) (x^{ijk} X_{ijk}) \\
& =
x^{126}[X_1, X_4^{26} X_{26}]
-
x^{126} X_5^{36} ([X_1, X_2] \wedge X_6) \\
& = x^{126}(X_5 - X_5) = 0 .
\end{align*}
Note that $\dim H^2(\mu, \mu) = 18$, by the computer-assisted \cite[Table 11]{Magnin08}.
\subsubsection{Cohomology rings}
\label{cohom-ring}
Let $d_i$ denote the differential of $C^\ast(\mathfrak l_{6,i}, \mathbf R)$.
Then
\[
\begin{array}{lll}
d_7 X^3 = - X^{12} & d_7 X^4 = - X^{13} & d_7 X^5 = - X^{14} \\
d_{11} X^3 = - X^{12} & d_{11} X^4 = - X^{13} & d_{11} X^5 = - X^{14} - X^{23} \\
d_{12} X^3 = - X^{12} & d_{12} X^4 = - X^{13} & d_{12} X^5 = - X^{14} - X^{26}.
\end{array}
\]
In the notation of \cite{magnin2007adjoint}, $\mathfrak l_{6,11} \otimes \mathbf C$ is $\mathfrak g_{6,12}$ while $\mathfrak l_{6,12} \otimes \mathbf C$ is $\mathfrak g_{6,11}$. We compute that
\iffalse
\begin{align*}
B^2 (\mathfrak l_{6,7}, \, \mathbf R) & = \langle X^{12}, \, X^{13}, \, X^{14} \rangle
\\
B^2 (\mathfrak l_{6,11}, \, \mathbf R) & = \langle X^{12}, \, X^{14}, \, X^{15} + X^{23} + X^{24} \rangle
\\
B^2 (\mathfrak l_{6,12}, \, \mathbf R) & = \langle X^{12}, \, X^{14}, \, X^{15} + X^{23} \rangle
\end{align*}
and
\begin{align*}
Z^2 (\mathfrak l_{6,7},\, \mathbf R) & = \langle X^{12}, X^{13},\, X^{14},\, X^{15},\, X^{16},\, X^{23}, \, X^{25} - X^{34} , X^{26} \rangle
\\
Z^2 (\mathfrak l_{6,11},\, \mathbf R) & = \langle X^{12}, X^{13},\, X^{14},\, X^{15},\, X^{23},\, X^{24},\, X^{16} + X^{25} - X^{34},\, X^{26} - X^{45} \rangle
\\
Z^2 (\mathfrak l_{6,12},\, \mathbf R) & = \langle X^{12}, X^{13},\, X^{14},\, X^{15},\, X^{16} - X^{34},\, X^{26} - X^{45}, X^{23}, X^{24} \rangle
\end{align*}
\fi
\begin{align*}
H^2 (\mathfrak l_{6,7},\, \mathbf R)
& = \langle [X^{15}],\, [X^{16}],\, [X^{23}],\, [X^{25} - X^{34}], [X^{26}] \rangle \\
H^2 (\mathfrak l_{6,11},\, \mathbf R)
& = \langle [X^{13}],\, [X^{15}],\, [X^{23}],\, [X^{16} + X^{25} - X^{34}], [X^{26} - X^{45}] \rangle \\
H^2 (\mathfrak l_{6,12},\, \mathbf R)
& = \langle [X^{13}],\, [X^{15}],\, [X^{16} - X^{34}],\, [X^{26} - X^{45}], [X^{24}] \rangle
\end{align*}
The computations for $\mathfrak l_{6,11}$ and $\mathfrak l_{6,12}$ can be checked with the help of the differentials $d_i$, written down with computer assistance in \cite{magnin2007adjoint} on p.~44 and p.~72 respectively (\cite{magnin2007adjoint} uses the $\mathfrak g_{6,i}$ notation recalled above for the Lie algebras and writes $\omega^{i,j}$ for $X^{ij}$).
Moreover,
\begin{align*}
B^4 (\mathfrak l_{6,7},\, \mathbf R)
& = \langle X^{1234},\, X^{1235},\, X^{1236},\, X^{1245}, X^{1246},\, X^{1256},\, X^{1356} \rangle \\
B^4 (\mathfrak l_{6,11},\, \mathbf R)
& = \langle X^{1234},\, X^{1235},\, X^{1245},\, X^{1246}, X^{1236} - X^{1345},\, X^{1346},\, X^{1256} + X^{2345} \rangle \\
B^4 (\mathfrak l_{6,12},\, \mathbf R)
& = \langle X^{1234},\, X^{1235},\, X^{1245},\, X^{1246}, X^{1236} - X^{1345},\, X^{1346},\, X^{1256} + X^{2345} \rangle
\end{align*}
If $\pi_i$ denotes the cup product $H^2(\mathfrak l_{6,i}, \mathbf R) \times H^2(\mathfrak l_{6,i}, \mathbf R) \to H^4(\mathfrak l_{6,i}, \mathbf R)$, then
\begin{align*}
\operatorname{Im}(\pi_7) & = \langle [X^{1345}],[X^{2346}] \rangle \\
\operatorname{Im}(\pi_{11}) & = \langle [X^{1236}],[X^{2345}],[X^{1456} +X^{2346}] \rangle \\
\operatorname{Im}(\pi_{12}) & = \langle [X^{1236}],[X^{1456}+2X^{2346}], [X^{1256}] \rangle
\end{align*}
One checks, using the coboundary spaces $B^4$ listed above, that these classes are linearly independent, completing the proof that the cohomology rings of $\mathfrak l_{6,11}$ and $\mathfrak l_{6,12}$ are not isomorphic to that of $\mathfrak l_{6,7}$.
\end{appendix}
\section{Introduction}
Block designs play a major role in combinatorics and finite geometry, and have many applications in statistics, specifically in the design of experiments. In~\cite{ADP}, Amarra, Devillers and Praeger have recently constructed families of highly symmetric $2$-designs which maximise certain parameters. Their constructions depend on certain quadratic polynomials with integer coefficients taking prime power values. Many of their polynomials satisfy three simple necessary conditions which Bunyakovsky~\cite{Bun-1857} in 1857 conjectured were also sufficient for any polynomial to take infinitely many prime values. Unfortunately, this conjecture has been proved only for polynomials of degree~$1$ (Dirichlet's Theorem on primes in an arithmetic progression). Nevertheless, the Bateman--Horn Conjecture~\cite{BH}, dating from 1962 and also proved only for degree $1$, gives estimates $E(x)$ for the number $Q(x)$ of positive integers $t\le x$ at which a given polynomial takes prime values. Using a recent improvement to the Bateman--Horn Conjecture due to Li~\cite{W.Li}, we calculated these estimates $E(x)$ for some of the simpler polynomials arising in~\cite{ADP}, taking $x=10^{8}$, and compared them with the actual numbers $Q(x)$ found by computer searches. As in various other applications of this conjecture (see~\cite{JZ21, JZ21a} for example), the estimates $E(x)$ are remarkably close to the actual values $Q(x)$. Although this does not prove the existence of infinite families of block designs, the accuracy of the estimates, together with the abundance of examples found, provides strong evidence for it, and it also adds to the growing body of evidence in favour of the more general Bunyakovsky and Bateman--Horn Conjectures.
There have been many number-theoretic applications of the Bateman--Horn Conjecture (see~\cite{AFG} for a survey), and a handful in areas such as combinatorics~\cite{EKN}, cryptography~\cite{BG, Scholl17, Scholl19, Sha}, elliptic curves~\cite{BPS, DS}, error-correcting codes~\cite{Kim} and fast integer multiplication~\cite{CT}. It seems likely that the present paper and~\cite{ADP} represent its first application to block designs, just as~\cite{JZ21, JZ21a} are the first in the areas of dessins d'enfants and permutation groups.
\section{Primes versus prime powers}\label{sec:versus}
Although the problem in~\cite{ADP} requires prime power values of certain polynomials $f_{n,r}(t)\in{\mathbb Z}[t]$, it is easier to estimate the distribution of their prime values, using the Prime Number Theorem and conjectures based on it. This restriction is no great loss, as the vast majority of prime powers, up to any given large bound, are in fact prime: if $\pi(x)$ is the usual function counting primes \linebreak $p\le x$, and $\Pi(x)$ is its analogue for prime powers $p^e\le x$, then $\pi(x)/\Pi(x)\to 1$ (quite rapidly) as $x\to\infty$. For example, $\pi(10^6)/\Pi(10^6)=78\,498/78\,734=0.9970002\ldots$, while $\pi(10^9)/\Pi(10^9)=50\,847\,534/50\,851\,223=0.999927\ldots$ (see~\cite{Cook}).
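The counts just quoted are easy to reproduce. The following short script (our sketch, not part of~\cite{ADP}) sieves the primes up to $10^6$ and counts the prime powers $p^e\le 10^6$ by iterating the powers of each prime; it returns $\pi(10^6)=78\,498$, $\Pi(10^6)=78\,734$ and their ratio.
\begin{verbatim}
def primes_up_to(n):
    s = bytearray([1]) * (n + 1)       # sieve of Eratosthenes
    s[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if s[i]]

N = 10 ** 6
primes = primes_up_to(N)
pi = len(primes)                       # pi(10^6) = 78498
Pi = 0
for p in primes:                       # count p, p^2, p^3, ... up to N
    q = p
    while q <= N:
        Pi += 1
        q *= p
print(pi, Pi, pi / Pi)                 # 78498  78734  0.9970...
\end{verbatim}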
Nevertheless, we carried out a more restricted search,
over $t=1,\ldots,10^7$, for prime power values $f_{n,r}(t)$ of the chosen polynomials,
finding just a few squares and one cube (see Section~\ref{sec:powers}). However, in Section~\ref{t=0} we show how to realise any even power $p^{2i}>9$ of an odd prime $p$ as $f_{n,r}(0)$ for some polynomial $f_{n,r}$, a situation which has some interest for the construction of block designs.
\section {The Bunyakovsky Conjecture}\label{sec:Bunyakovsky}
If a non-constant polynomial $f(t)\in{\mathbb Z}[t]$ is to take infinitely many prime values for $t\in{\mathbb N}$ (equivalently, if it is prime for infinitely many such $t$), then the following conditions must be satisfied:
\begin{itemize}
\item[(a)] $f$ must have a positive leading coefficient (otherwise it will take only finitely many positive values);
\item[(b)] $f$ must be irreducible in ${\mathbb Z}[t]$ (otherwise all but finitely many of its values will be composite);
\item[(c)] $f$ must not be identically zero modulo any prime $p$ (otherwise all its values will be divisible by $p$).
\end{itemize}
In 1857 Bunyakovsky~\cite{Bun-1857} conjectured that these three necessary conditions are also sufficient. (Condition~(c) is needed to avoid examples such as $t^2+t+2$, which satisfies (a) and (b) but takes only even values.) For instance, if this were true it would imply Landau's conjecture (studied also by Euler~\cite{Eul}) that there are infinitely many primes of the form $t^2+1$. However, the Bunyakovsky Conjecture has been proved only in the case where $f$ has degree $1$: this is Dirichlet's Theorem, that if $a$ and $b$ are coprime integers then there are infinitely many primes of the form $at+b$ (see~\cite[\S 5.3.2]{BS} for a proof).
\section{The Bateman--Horn Conjecture}\label{sec:BHC}
In 1962 Bateman and Horn~\cite{BH} proposed a very general conjecture (in what follows
we will use the abbreviation BHC) which comprises many previous conjectures and theorems
and gives quantified versions of them. It deals with a finite set of polynomials simultaneously
taking prime values. Though for our purposes it is sufficient to consider the case
of a single polynomial, we give here the full version of the BHC. If we incorporate a
recent improvement due to Li~\cite{W.Li}, we get the following statement:
\begin{conj}[Bateman and Horn, 1962; Li, 2019]
Let $f_1,\ldots,f_k\in\mathbb{Z}[t]$ be coprime polynomials satisfying conditions
(a) and (b) of the Bunyakovsky Conjecture, and let their product $f=f_1\cdots f_k$
satisfy condition (c). Denote by $Q(x)$ the number of $t\in\mathbb{N}$, $t\le x$,
such that all $f_i(t)$, $i=1,\ldots,k$, are prime. Then the asymptotic estimate
$E(x)$ for the number $Q(x)$ is given by the following formula:
\begin{equation}\label{eq:BH-Q}
Q(x)\sim E(x):=C\negthinspace\int_a^x\negthinspace\frac{dt}{\prod_{i=1}^k\ln f_i(t)}
\quad\hbox{as}\quad x\to\infty
\end{equation}
where
\begin{equation}\label{eq:BH-C}
C=C(f):=\prod_p\left(1-\frac{1}{p}\right)^{-k}\left(1-\frac{\omega_f(p)}{p}\right)
\end{equation}
with the product over all primes $p$, and where $\omega_f(p)$ is the number of congruence
classes $t\in{\mathbb Z}_p$ such that $f(t)=0$. In (\ref{eq:BH-Q}), one chooses $a\ge 2$
large enough that the range of integration avoids singularities, where some $f_i(t)=1$.
(In our applications we can always take $a=2$.)
\end{conj}
\begin{lemm}[Constant $C(f)$]
The product in (\ref{eq:BH-C}) converges to a constant $C>0$.
\end{lemm}
This statement is far from trivial. Bateman and Horn, in their original paper
\cite{BH}, limit themselves to a few hints. The first detailed proof was recently
published in \cite[Theorem 5.4.3]{AFG}, and it takes seven pages.
Since the integral in (\ref{eq:BH-Q}) diverges, we get the following
\begin{coro}[Infinitely many prime values]
The estimate $E(x)\to\infty$ as $x\to\infty$; therefore, $Q(x)$ also goes to infinity:
there are infinitely many integers $t\in\mathbb{N}$ such that all $f_i(t)$, $i=1,\ldots,k$,
are simultaneously prime.
\end{coro}
As in the case of the Bunyakovsky Conjecture, the BHC, even when restricted to a single polynomial $f$, has been proved only in the case where $\deg f=1$. This is the quantified version of Dirichlet's Theorem, that for fixed coprime $a$ and $b$ the number of $t\le x$ such that $at+b$ is prime is asymptotic to
\[\frac{1}{\varphi(a)}\int_2^x\frac{dt}{\ln(at+b)},\]
where $\varphi$ is Euler's totient function. (Equivalently, the primes in the arithmetic progression $at+b$ are asymptotically equally distributed among the $\varphi(a)$ congruence classes of units mod~$a$; see \linebreak \cite[\S5.3.2]{BS} for a proof.)
An earlier special case of the BHC, applicable to a single quadratic polynomial $f$, is the Conjecture F of Hardy and Littlewood~\cite{HL}, giving similar estimates. For this reason, the constants $C(f)$ are sometimes known as Hardy--Littlewood constants.
\section{Heuristic argument for the ingredients of the Bateman--Horn Conjecture}\label{sec:heuristic}
Here we give a heuristic argument to explain certain ingredients
of the formula (\ref{eq:BH-Q}) for the Bateman--Horn estimate $E(x)$.
The Prime Number Theorem provides two asymptotic estimates for the number $\pi(x)$ of primes $p\le x$ as $x\to\infty$, namely
\begin{equation}\label{eq:PNT}
\pi(x)\sim\frac{x}{\ln x} \quad\quad\hbox{and}\quad\quad \pi(x)\sim {\rm Li}(x):=\int_2^x\frac{dt}{\ln t}.
\end{equation}
The first is easy to use, but not very accurate; the second, involving the {\sl offset
logarithmic integral function}\/ ${\rm Li}(x)$, is harder to use but much more accurate.
For example, the number of primes up to $10^{25}$ was computed in 2013 by J. Buethe,
J. Franke, A. Jost, and T. Kleinjung (see \cite{OEIS-A006880}): it is equal to
$\pi(10^{25})=176\,846\,309\,399\,143\,769\,411\,680$.
The formula $x/\ln x$ approximates this number with the relative error $-1.77\%$,
while the relative error of the estimate ${\rm Li}(10^{25})$ is $3.12\cdot 10^{-11}\%$.
In either case, (\ref{eq:PNT}) suggests that one can regard $1/\ln x$ as the probability
that $x$ (or, rather, a randomly-chosen number close to $x$) is prime.
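These relative errors can be reproduced from the quoted value of $\pi(10^{25})$ with multiprecision arithmetic; here is a small sketch of ours, using the mpmath library:
\begin{verbatim}
from mpmath import mp, mpf, li, log

mp.dps = 40                                     # 40 significant digits
x = mpf(10) ** 25
pi_x = mpf("176846309399143769411680")          # pi(10^25), quoted above
for est in (x / log(x), li(x, offset=True)):    # x/ln x, then Li(x)
    print(100 * (est - pi_x) / pi_x)            # relative errors in percent:
                                                # about -1.77 and 3.12e-11
\end{verbatim}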
Consider the ``random variables'' $\xi_i(t)=1$ if $f_i(t)$ is prime, and $\xi_i(t)=0$
otherwise. The ``probability'' that $\xi_i(t)=1$ is $1/\ln f_i(t)$. If, in addition, we
presume that these variables, for any given $t$, are independent, then the probability
that all $f_i(t)$ are prime, or, in other words, the probability that the product
$\eta(t):=\xi_1(t)\cdots\xi_k(t)$ is equal to $1$, is
$\displaystyle P(t)=\frac{1}{\prod_{i=1}^k\ln f_i(t)}$. Notice that the mean value
of $\eta(t)$ is the expected value ${\rm E}(\eta(t))=P(t)$.
The random variable $\eta(t)$ is a ``counting function'': as a first estimate for
the number of $t\le x$ such that all $f_i(t)$ are prime, we may take the average
number of times this variable is equal to 1. Let us choose $a$ so that all
$f_i(t)>1$ for $t\ge a$. Then, as $t$ goes from $a$ to $x$, we have
\begin{equation}\label{eq:naive}
{\rm E}\left(\sum_{t=a}^x\eta(t)\right) = \sum_{t=a}^x {\rm E}(\eta(t)) =
\sum_{t=a}^x P(t) \approx \int_a^x P(t)dt = \int_a^x\frac{dt}{\prod_{i=1}^k \ln f_i(t)}.
\end{equation}
We cannot present any profound reasons for considering $f_1(t),\ldots,f_k(t)$ as
independent for any given $t$, but at least this assumption stands the test of a
great number of experiments. However, the same is not true when we vary the variable $t$.
Therefore, a correcting factor may be needed, and this is the constant $C(f)$.
First, if $f(t_0)\equiv 0$ mod~$(p)$ for some integer $t_0$ and prime $p$, then
$f(t)\equiv 0$ mod~$(p)$ for all $t\equiv t_0$ mod~$(p)$. We would like to avoid
the situation when $f(t)$ is divisible by $p$ (or, equivalently, at least one of
$f_i(t)$ is divisible by $p$). The ``probability'' of the opposite event is
$a_p=1-\frac{\omega_f(p)}{p}$.
Second, the probability that a ``randomly chosen'' $k$-tuple of integers (whatever that
means) does not contain any element divisible by $p$ is
$b_p=\left(1-\frac{1}{p}\right)^k$. The ratio $a_p/b_p$ used in the
product (\ref{eq:BH-C}) resembles a conditional probability, though it is not one,
since it may well be $>1$.
What remains is to assemble the different parts of this Lego, but the corresponding procedure
would need a long discussion and a self-coherent construction of a ``probabilistic model''
of what takes place, so we stop here. Anyway, we are not supposed to give a proof of the
BHC; we only provide some plausible speculations on the matter. ``The proof of the pudding
is in the eating'': the conjecture works well, even surprisingly well, and this is what
is important about it.
The BHC is a statement involving a $k$-tuple of polynomials $f_1,\ldots,f_k$. In what follows
we will work with a single polynomial, so that $k=1$, and $f_1$ will from now on be denoted
by $f$. There is no contradiction with the previous notation where $f$ denoted the product
of the polynomials $f_i$.
\section{The constant $C(f)$}\label{sec:const_C}
Computing the constant $C(f)$ is a challenging problem in itself. As already mentioned
above, the mere existence of a limit is a non-trivial fact. By the way, the convergence
is not absolute: by changing the order of factors we may get a different limit value.
This is perhaps one of the manifestations of the fact that the ``probabilistic measure'' on
$\mathbb{N}$ represented by the density is finitely additive but not countably additive.
To make matters worse, the rate of convergence is, as one of our colleagues has put it,
``frustratingly slow''.
In Section \ref{sec:omega} we discuss the computation of $\omega_f(p)$ for a single
quadratic polynomial $f$. We will see that it involves rather subtle number-theoretic
methods, mainly the quadratic reciprocity law. The case of cubic polynomials is
treated in \cite{ShL}.
A highly advanced method, though still for a single polynomial, was proposed by
H.~Cohen~\cite{Coh}. For a quadratic polynomial it involves the techniques
of $L$-functions and, in particular, of the Riemann $\zeta$-function. For polynomials
of degree greater than 2 one also needs to know the Galois group of the polynomial
in question as well as the irreducible representations of this group. An intermediate
way is chosen by Li \cite{W.Li} (with a reference to K.~Conrad): he also uses
$L$-functions but in a simpler way than in~\cite{Coh}.
We have no intention to compete with the above specialists. Therefore, we have computed
the products only over primes $p\le 10^8$. The constants $C(f)$ thus obtained already
give excellent results in approximating the numbers of prime values of the polynomials
we study in this paper.
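To give the flavour of such computations, here is a small-scale version of the whole experiment (our sketch, with the prime bound lowered from $10^8$ to $10^6$, with $x=10^5$, and assuming sympy is available for primality testing), applied to Euler's polynomial $t^2+t+41$ discussed just below; $\omega_f(p)$ is evaluated through the Legendre symbol of the discriminant, computed by Euler's criterion.
\begin{verbatim}
import math
from sympy import isprime

def primes_up_to(n):                   # sieve of Eratosthenes
    s = bytearray([1]) * (n + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if s[i]:
            s[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if s[i]]

def legendre(a, p):                    # Euler's criterion, p an odd prime
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

f = lambda t: t * t + t + 41           # Euler's polynomial
disc = 1 - 4 * 41                      # discriminant, = -163

C = 1.0
for p in primes_up_to(10 ** 6):        # truncated Euler product for C(f)
    w = 0 if p == 2 else legendre(disc, p) + 1   # omega_f(p); f(t) is always odd
    C *= (1 - w / p) / (1 - 1 / p)

x = 10 ** 5
E = C * sum(1 / math.log(f(t)) for t in range(2, x + 1))  # Riemann sum for E(x)
Q = sum(1 for t in range(2, x + 1) if isprime(f(t)))      # the true count Q(x)
print(C, E, Q)   # compare C with the 3.31977... quoted below
\end{verbatim}
Because of the slow convergence mentioned above, only the first digits of the truncated product are reliable; they already suffice for estimates of the quality reported in this paper.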
Another interesting question is how large (or how small)
the constant $C(f)$ can be. For example, for the well-known Euler polynomial
$t^2+t+41$ (taking prime values for $t=0,1,2,\ldots,39$) the constant found in
\cite{Coh} is $3.31977318$, while for the polynomial $t^2+t+75$ it is $0.31097668$
(in fact, the author gives 39 correct digits for both numbers). Since the integrals
$\displaystyle \int_2^x\frac{dt}{\ln f(t)}$ for these two polynomials
are very close to each other for large $x$, we conclude that the first polynomial produces,
approximately, $10.7$ times as many primes as the second one.
In \cite{AFG}, an example of a polynomial $f(t)=t^2+t+a$ is given, with the constant
$5.4972\ldots$. In this example, the coefficient $a$ is a 219-digit number. This number
is not prime, and the discriminant is not prime either. We did not try to factorize
them.
A more systematic search was carried out in \cite{JW} (which contains many interesting
examples) and \cite{Rivin} (which is, mostly, an experimental work). The champion,
found by Rivin, among the {\em monic quadratic}\/ polynomials is $t^2-2619t+1291$,
with the corresponding constant equal to $6.3722$. The discriminant of this polynomial
is $6\,853\,997$, which is a prime.
The author has also found a monic polynomial of degree 6 with an even greater constant $C(f)$;
the discriminant of the polynomial in question is $53\times\mbox{(a prime with 37 digits)}$.
The data based on a large random sample of monic polynomials $f$ with their coefficients
bounded by a large constant $N$ suggest that $C(f)$ obeys a log-normal distribution.
If this observation turns out to be true then, in general, $C(f)$ is unbounded. Other
plausible observations based on the experimental data are as follows: (a) the mean
value of $C(f)$ is 1; (b) for monic polynomials whose coefficients (other than the
leading one) are bounded by $N$, the maximum value of $C(f)$ grows like
$C(f)=O(\log\log N)$. At the same time, in \cite{JW} it is conjectured that among the
polynomials of the type $f(t)=t^2+t-a$ the maximum value of $C(f)$ is equal to
$5.65726388$, and it is attained for the 71-digit number
{\small
\[
a = 33\,251\,810\,980\,696\,878\,103\,150\,085\,257\,129\,508\,857\,312\,847\,751\,498\,190\,
{349\,983\,874\,538\,507\,313.}
\]}
\indent Finally, an example of a non-monic polynomial (also from \cite{Rivin}): taking $f(t)=At^2+1$
with $A=\prod_{i=1}^{30}p_i$ the product of the first 30 primes from $p_1=2$ to
$p_{30}=113$, we get a rather large constant $C(f)\approx 9.5$. There is no mystery:
all the factors $(1-1/p)^{-1}$ in (\ref{eq:BH-C}) are greater than~1, while
$\omega_f(p)=0$ for $p=2,3,\ldots,113$ (in fact, also for $p=127$ but not for 131),
making the factors $(1-\omega_f(p)/p)$ equal to~1, so that they do not compensate for
a steadily growing product which reaches, at this initial stage, approximately $8.78$.
\section{Block designs}\label{sec:designs}
Here, in order to provide motivation for our particular choice of polynomials $f$, we briefly summarise the construction in~\cite{ADP} of block designs requiring certain polynomials to take prime power values. Readers who are interested only in the number-theoretic aspects of this problem can safely omit this section.
A 2-$(v, k, \lambda)$ {\em design\/} $\mathcal D$ consists of a set $\mathcal P$ of $v$ points, together with a set $\mathcal B$ of $k$-element subsets of $\mathcal P$ called blocks, such that each pair of points lies in exactly $\lambda$ blocks. (This implies that each point lies in the same number of blocks.) The {\em automorphisms\/} of $\mathcal D$ are the permutations of $\mathcal P$ which leave the set $\mathcal B$ invariant; they form a group ${\rm Aut}\,{\mathcal D}$.
If a subgroup $G\le{\rm Aut}\,{\mathcal D}$ acts transitively on blocks then it also acts transitively on points. The latter action could be imprimitive, leaving invariant a partition $\mathcal C$ of $\mathcal P$ with $d\ge 2$ classes, each of size $c\ge 2$, so that $cd=v$. Delandtsheer and Doyen showed in~\cite{DD} that in this case there exist positive integers $m$ and $n$ such that
\[mc+n=\binom{k}{2}=nd+m.\]
These integers $m$ and $n$ are the Delandtsheer-Doyen parameters of $\mathcal D$, with $n$ and $mc$ the numbers of unordered pairs of points in any given block, lying in the same or in different classes of $\mathcal C$.
In~\cite{ADP}, Amarra, Devillers and Praeger have explored the restrictions these parameters place on subgroups $G$ of ${\rm Aut}\,\mathcal D$. Let $K$ denote the permutation group of degree $d$ induced by $G$ on the set of classes in $\mathcal C$, and let $H$ be the permutation group of degree $c$ induced on any class in $\mathcal C$ by its setwise stabiliser in $G$, so that $G$ is embedded in the wreath product $H\wr K\le {\rm S}_c\wr{\rm S}_d$. The {\sl rank\/} ${\rm Rank}(X)$ of any transitive permutation group $X$ on a set $\Omega$ is the number of orbits of a point-stabiliser $X_{\alpha}\;(\alpha\in\Omega)$, or equivalently of $X$ on $\Omega\times\Omega$; similarly, the {\em pair-rank\/} ${\rm PairRank}(X)$ is the number of orbits of $X$ on unordered pairs of distinct elements of $\Omega$, so that $({\rm Rank}(X)-1)/2\le{\rm PairRank}(X)\le{\rm Rank}(X)-1$. The main result of~\cite{ADP} is that in the above circumstances
\[\frac{{\rm Rank}(H)-1}{2}\le{\rm PairRank}(H)\le n \quad\hbox{and}\quad \frac{{\rm Rank}(K)-1}{2}\le{\rm PairRank}(K)\le m.\]
The authors of~\cite{ADP} give several constructions of designs $\mathcal D$ in which the ranks and pair-ranks of $H$ and $K$ attain these upper bounds. One construction requires {\em useful pairs} of integers $n, c$ with the properties that $n\ge 2$ and $c$ is a prime power such that
\[c\equiv 1\, {\rm mod}\,(2n)\quad\hbox{and}\quad c+n=\binom{k}{2}\;\,
\hbox{for some integer}\;\, k\ge 2n.\]
They need $c$ to be a prime power in order to define $H$ to be the unique subgroup of index $n$ in ${\rm AGL}_1(c)$, acting naturally on the field ${\mathbb F}_c$, while they take $K={\rm S}_d$ acting naturally on ${\mathbb Z}_d$, so that $G:=H\wr K$ has a transitive but imprimitive induced action on ${\mathcal P}={\mathbb F}_c\times{\mathbb Z}_d$ with $d$ classes of size $c$. By taking $d=1+\frac{c-1}{n}$ (the number of orbits of $H$ on ${\mathbb F}_c$) and defining ${\mathcal B}$ to be the set of images under $G$ of a carefully-chosen $k$-element subset $B\subset{\mathcal P}$ they obtain a 2-$(cd, k, \lambda)$ design $\mathcal D$ for some $\lambda$, admitting $G$ as a block-transitive and point-imprimitive group of automorphisms. This design has Delandtsheer--Doyen parameters $m=1$ and $n$, with ${\rm Rank}(H)={\rm PairRank}(H)+1=n+1$ and ${\rm Rank}(K)={\rm PairRank}(K)+1=2$.
The conditions for the pair $n, c$ to be useful imply that, if $r$ denotes the least positive remainder of $k$ mod~$(4n)$, then $\binom{r}{2}\equiv\binom{k}{2}\equiv n+1$ mod~$(2n)$. Thus, for fixed positive integers $n\ge 2$ and $r<4n$ with $\binom{r}{2}\equiv n+1$ mod~$(2n)$ they need integers $k=4nt+r$ for some integer $t\ge 0$ such that
\[f_{n,r}(t):=\binom{k}{2}-n=8n^2t^2+2n(2r-1)t+\left(\frac{r(r-1)}{2}-n\right)\]
is a prime power $c$. If the polynomial $f_{n,r}$ takes prime power values for infinitely many integers $t\ge 0$ then this construction yields an infinite family of block designs with the required parameters and symmetry properties.
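Such a search is easily coded; here is a sketch of ours (using sympy for factoring), for the pair $(n,r)=(2,6)$ of the example below:
\begin{verbatim}
from sympy import factorint

def f(n, r, t):
    return 8*n*n*t*t + 2*n*(2*r - 1)*t + (r*(r - 1)//2 - n)

def is_prime_power(m):
    return m > 1 and len(factorint(m)) == 1   # m = p^e for a single prime p

# t <= 100 for which f_{2,6}(t) is a prime power; t = 8 is the first
# failure, since f_{2,6}(8) = 2413 = 19 * 127
print([t for t in range(1, 101) if is_prime_power(f(2, 6, t))])
\end{verbatim}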
\begin{exam}
The smallest useful pair $(n,c)$ is $(2,13)$, with $r=k=6$ and $d=7$, so that the corresponding design $\mathcal D$ has $cd=91$ points and $|G|=78^7\cdot 7!$ automorphisms. This example arises from the polynomial $f_{n,r}(t)=f_{2,6}(t)=32t^2+44t+13$ taking the value $c=13$ at $t=0$. Note that $f_{2,6}(1)=89$ is prime, giving a design on $cd=89\cdot 45=4005$ points, whereas $f_{2,6}(8)=2413=19\cdot 127$ is not a prime power and therefore does not correspond to a design in this family.
The smaller polynomial $f_{2,3}(t)=32t^2+20t+1$ has its first prime power value $f_{2,3}(1)=53$, giving a design on $53\cdot27=1431$ points.
\end{exam}
Note that, although this construction of block designs applies to any integer $t\ge 0$ such that $f_{n,r}(t)$ is a prime power, the number-theoretic conjectures and estimates we use are stated in terms of integers $t\ge 1$. This is not a problem here, since we are not concerned with individual block designs but with the existence or otherwise of infinite families of them. In any case, the value $f_{n,r}(0)=\frac{r(r-1)}{2}-n$ is easily dealt with (see Section~\ref{t=0}).
\section{Verifying the Bunyakovsky conditions}\label{sec:verifying}
The polynomials $f$ of interest in~\cite{ADP}, and hence the main focus of this note, are those of the form
\begin{equation}\label{eq:f}
f(t)=f_{n,r}(t)=8n^2t^2+2n(2r-1)t+\left(\frac{r(r-1)}{2}-n\right)
\end{equation}
for integers $n\ge 2$ and $r\ge 1$ with
\begin{equation}\label{eq:conditions}
r<4n\quad \hbox{and} \quad \frac{r(r-1)}{2}\equiv n+1\; \hbox{mod}\,(2n).
\end{equation}
Note that this last condition implies that $r\ge 3$.
\begin{lemm}\label{le:abc}
If a polynomial $f=f_{n,r}$ of the form~(\ref{eq:f}) satisfies~(\ref{eq:conditions}), it also satisfies Bunyakovsky's conditions~(a) and (c); it satisfies his condition~(b) if and only if $n$ is not a triangular number $a(a+1)/2$, $a\in{\mathbb N}$.
\end{lemm}
\begin{proof} Clearly $f$ satisfies condition~(a), since its leading coefficient is $8n^2>0$. As a quadratic polynomial, $f$ is reducible over~$\mathbb Z$ if and only if its discriminant $\Delta$ is a perfect square. Here
\begin{equation}\label{eq:Delta}
\Delta=4n^2(2r-1)^2-32n^2\left(\frac{r(r-1)}{2}-n\right)=4n^2(8n+1),
\end{equation}
and this is a square if and only if $8n+1$ is. Simple algebra shows that the solutions $n\in{\mathbb N}$ of $8n+1=l^2\;(l\in{\mathbb Z})$ are the triangular numbers $n=1, 3, 6, 10,\ldots$, those of the form $a(a+1)/2$ for some $a=(l-1)/2\in{\mathbb N}$ (readers may enjoy finding a geometric `proof without words' for this), so $f$ will satisfy~(b) if and only if $n$ does not have this form.
We now check condition~(c). If a prime $p$ divides $2n$ then $f$ reduces mod~$(p)$ to a constant polynomial; this takes the value $1$ since $r(r-1)/2\equiv n+1$ mod~$(2n)$, so $f$ is not identically zero mod~$(p)$. If $p$ does not divide $2n$ then $f$ reduces to a quadratic polynomial, with at most two roots, so again it cannot be identically zero.
\end{proof}
In order to apply the Bateman--Horn Conjecture to the polynomials $f_{n,r}$, we therefore restrict attention to those for which $n$ is not a triangular number.
\section{Calculating $\omega_f(p)$ for $f_{n,r}$}\label{sec:omega}
Recall that $\omega_f(p)$, which appears in the infinite product (\ref{eq:BH-C}), is the number of roots of $f$ mod~$(p)$ for each prime $p$. We saw in the proof of Lemma~\ref{le:abc} that $\omega_f(p)=0$ for any prime $p$ dividing $2n$. Primes $p$ dividing $8n+1$ (and thus not dividing $2n$) give $\Delta\equiv 0$ mod~$(p)$ by (\ref{eq:Delta}), and hence $\omega_f(p)=1$ by the quadratic formula. Similarly, all other primes $p$ give $\omega_f(p)=2$ or $0$ as $8n+1$ is or is not a quadratic residue (non-zero square) mod~$(p)$.
In general, given any prime $p$ and integer $q$, one can determine whether or not $q$ is a quadratic residue mod~$(p)$ by using the Legendre symbol
\[\left(\frac{q}{p}\right)=
\begin{cases}
0\quad\; \hbox{if $q\equiv 0$ mod~$(p)$;}\\
1\quad\; \hbox{if $q$ is a quadratic residue mod~$(p)$;}\\
-1 \;\; \hbox{otherwise.}
\end{cases}.\]
(See~\cite[Chapter~7]{JJ} for quadratic residues and the Legendre symbol.) Clearly
\[\left(\frac{q}{p}\right)=\left(\frac{q'}{p}\right)\quad \hbox{if $q\equiv q'$ mod~$(p)$,}\]
and since the quadratic residues form a subgroup of index~$2$ in the group of units mod~$(p)$ we have the multiplicative property, that
\[\left(\frac{qq'}{p}\right)=\left(\frac{q}{p}\right)\left(\frac{q'}{p}\right)\]
for all $q, q'\in{\mathbb Z}$. Using these rules one can reduce the calculation of the Legendre symbol to the cases where $q$ is an odd prime. In such cases one can use the Law of Quadratic Reciprocity, that if $p$ and $q$ are distinct odd primes then
\[\left(\frac{q}{p}\right)=\left(\frac{p}{q}\right)\quad \hbox{if $p$ or $q\equiv 1$ mod~$(4)$,}\]
while
\[\left(\frac{q}{p}\right)=-\left(\frac{p}{q}\right)\quad \hbox{if $p\equiv q\equiv -1$ mod~$(4)$.}\]
We also have
\[\left(\frac{2}{p}\right)=1\;\hbox{or}\;-1\quad\hbox{as}\quad p\equiv\pm 1\;\hbox{or}\;\pm 3\;{\rm mod}\,(8),\]
and
\[\left(\frac{-1}{p}\right)=1\;\hbox{or}\;-1\quad\hbox{as}\quad p\equiv 1\;\hbox{or}\;-1\;{\rm mod}\,(4).\]
By iterating these rules one can reduce the values of $p$ and $q$ until they are small enough to be dealt with by inspection.
We have seen that if $f=f_{n,r}$ then $\omega_f(p)=0$ for all primes $p$ dividing $2n$. For primes $p$ not dividing $2n$, by the definitions of the function $\omega_f$ and the Legendre symbol, the quadratic formula gives $\omega_f(p)=\left(\frac{\Delta}{p}\right)+1$. We will use this in the following examples.
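Although not part of the original computations (which were carried out in Maple), the following Python sketch may help to fix ideas. It evaluates the Legendre symbol by Euler's criterion, $\left(\frac{q}{p}\right)\equiv q^{(p-1)/2}$ mod~$(p)$, rather than by the reciprocity recursion described above, and then applies the formula $\omega_f(p)=\left(\frac{\Delta}{p}\right)+1$; note that $\left(\frac{\Delta}{p}\right)=\left(\frac{8n+1}{p}\right)$ for $p$ not dividing $2n$, since $4n^2$ is then a non-zero square mod~$(p)$.
\begin{verbatim}
# Python sketch (ours; the paper's computations used Maple).
def legendre(q, p):
    """Legendre symbol (q/p) for an odd prime p, via Euler's criterion."""
    q %= p
    if q == 0:
        return 0
    return 1 if pow(q, (p - 1) // 2, p) == 1 else -1

def omega(n, p):
    """Number of roots of f_{n,r} mod p; depends only on n, not on r."""
    if (2 * n) % p == 0:
        return 0                          # f is the constant 1 mod p
    return legendre(8 * n + 1, p) + 1     # (Delta/p) = ((8n+1)/p)

# e.g. omega(2, 17) == 1, omega(2, 13) == 2, omega(2, 3) == 0,
# matching the case n = 2 treated in the next example.
\end{verbatim}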
\begin{exam}\label{ex:n=2}
The smallest value of $n$ which is not a triangular number is $n=2$, giving $8n+1=17$. Since $17\equiv 1$ mod~$(4)$ we have
\[\left(\frac{17}{p}\right)=\left(\frac{p}{17}\right)\]
for any odd prime $p$. By squaring integers one sees that the quadratic residues mod~$(17)$ are $\pm 1,\pm 2, \pm 4$ and $\pm 8$ (in fact, under multiplication mod~$(17)$ they form a cyclic group of order $8$, generated by $2$). Thus, if $f=f_{2,r}$ for some $r$ then $\omega_f(p)=2$ for odd primes $p\equiv\pm 1,\pm 2, \pm 4$ or $\pm 8$ mod~$(17)$, while $\omega_f(p)=0$ for primes $p\equiv\pm 3,\pm 5, \pm 6$ or $\pm 7$ mod~$(17)$. For the remaining primes $p$ we have $\omega_f(2)=0$ and $\omega_f(17)=1$.
\end{exam}
\begin{exam}\label{ex:n=4}
The second smallest value of $n$ which is not a triangular number is $n=4$, giving $8n+1=33$. In this case multiplicativity and quadratic reciprocity give
\[\left(\frac{33}{p}\right)=\left(\frac{3}{p}\right)\left(\frac{11}{p}\right)=\left(\frac{p}{3}\right)\left(\frac{p}{11}\right)\]
for all odd primes $p\ne 3, 11$, since $3\equiv 11$ mod~$(4)$ so that any minus signs cancel. Now the quadratic residues mod~$(3)$ and mod~$(11)$ are $1$ and $1, 3, 4, 5, 9$ respectively.
The primes $p$ for which $33$ is a quadratic residue mod~$(p)$ are those which are both residues or both non-residues mod~$(3)$ and mod~$(11)$, so solving the relevant pairs of simultaneous congruences gives the classes $\pm1, \pm 2, \pm 4, \pm 8, \pm 16$ mod~$(33)$ (forming a cyclic group generated by $2$). If $f=f_{4,r}$ for some $r$ then for odd primes $p$ in these classes we have $\omega_f(p)=2$, whereas for $p\equiv\pm 5, \pm 7, \pm 10, \pm 13, \pm 14$ mod~$(33)$ we have $\omega_f(p)=0$. For the remaining primes $p$ we have $\omega_f(2)=0$ and $\omega_f(3)=\omega_f(11)=1$.
Notice that the classes $\pm 3,\pm 6,\pm 9,\pm 12,\pm 15$ and $\pm 11$ are
not present in the above two lists: they are not coprime with~$33$ and therefore cannot
be residues of a prime $p>11$ modulo $33$.
\end{exam}
\begin{exam}\label{ex:n=5}
The next case $n=5$ is similar to Example~\ref{ex:n=2} since $8n+1=41$ is prime.
We find that $\omega_f(2)=\omega_f(5)=0$ and $\omega_f(41)=1$, while for other primes $p$ we have $\omega_f(p)=2$ or $0$ as $p$ is or is not a quadratic residue mod $(41)$. These are
$\pm 1, \pm 2, \pm 4, \pm 5, \pm 8, \pm 9, \pm 10, \pm 16, \pm 18, \pm 20$.
\end{exam}
For other permitted values of $n$ the process is similar: thus for $n=7$, $8$ and $9$ we have $8n+1=57=3\cdot 19$, $65=5\cdot 13$ and $73$ which is prime. However, in some cases the process can be lengthier, depending on the factorisation of $8n+1$. We give just one more typical example.
\begin{exam}\label{ex:n=13}
If $n=13$ then $8n+1=105=3\cdot 5\cdot 7$. Since $3\equiv 7\equiv -1$ mod~$(4)$ while $5\equiv 1$ mod~$(4)$ we have
\[\left(\frac{105}{p}\right)=\left(\frac{3}{p}\right)\left(\frac{5}{p}\right)\left(\frac{7}{p}\right)
=\left(\frac{p}{3}\right)\left(\frac{p}{5}\right)\left(\frac{p}{7}\right)\]
for all primes $p\ge 11$, so for such $p$ we have $\omega_f(p)=0$ or $2$ as $p$ is a quadratic residue modulo an even or odd number of the primes $3, 5$ and $7$. Since the quadratic residues modulo these primes are generated by $1$, $-1$ and $2$ respectively, this is easily determined in terms of congruences mod~$(105)$. (For some primes $p$, short-cuts are possible: for instance $105\equiv -1$ mod~$(53)$, so $\left(\frac{105}{53}\right)=\left(\frac{-1}{53}\right)=1$, and similarly $\left(\frac{105}{107}\right)=\left(\frac{-1}{107}\right)\left(\frac{2}{107}\right)=(-1)^2=1$.) For $p=3$, $5$ or $7$ we have $\omega_f(p)=1$, while $\omega_f(2)=\omega_f(13)=0$.
\end{exam}
Since the values of $\omega_f(p)$ depend only on a few simple congruences for $p$, it is straightforward to program Maple to determine the factors in the infinite product (\ref{eq:BH-C}) and hence to evaluate $C$. Note also that this part of the process depends only on $n$, and not on $r$, so that polynomials $f_{n,r}$ with the same parameter $n$ can be dealt with simultaneously.
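For concreteness, here is a Python sketch of this step (ours, not the paper's Maple code). It assumes that the product in (\ref{eq:BH-C}) takes the standard single-polynomial Bateman--Horn form $C=\prod_p\left(1-\frac{1}{p}\right)^{-1}\left(1-\frac{\omega_f(p)}{p}\right)$, and uses the \texttt{omega} function sketched above.
\begin{verbatim}
# Partial product for C over primes p <= bound, assuming the standard
# single-polynomial form of the Bateman--Horn constant (an assumption
# of this sketch). Pure Python is slow at bound = 10^8; a smaller
# bound already gives several correct digits.
from sympy import primerange

def bh_constant(n, bound):
    C = 1.0
    for p in primerange(2, bound + 1):
        C *= (1 - omega(n, p) / p) / (1 - 1 / p)
    return C

# bh_constant(2, bound) should approach 4.72124... as bound grows.
\end{verbatim}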
\section{Evaluating the estimates}\label{sec:evaluating}
Since the factors in (\ref{eq:BH-C}) approach $1$ quite slowly as $p\to\infty$,
convergence of this infinite product is rather slow, and one needs to multiply
many terms in order to obtain good approximations for $C$. In our computations
we used all the primes $p\le10^8$.
Maple calculates the definite integral in (\ref{eq:BH-Q}) by numerical quadrature. We found that running times were less than a second. Bateman and Horn simplified this part of the process by replacing $\ln(f(t))$ with $\deg(f)\ln(t)$ in (\ref{eq:BH-Q}), thus ignoring the leading coefficient of $f$ together with all non-leading terms. No doubt, working in the early 1960s without resources such as Maple, they found that this shortcut was essential, especially in cases involving more than one polynomial. Li's recent improvement~\cite{W.Li}, using $\ln(f(t))$, certainly leads to more accurate estimates. In fact, the non-leading terms have remarkably little effect on the value of the integral (so again, $r$ is almost irrelevant), whereas most of the extra accuracy comes from including the leading coefficient. For instance, the estimates $E(10^8)$ for the polynomials $f_{2,3}(t)=32t^2+20t+1$ and $f_{2,6}(t)=32t^2+44t+13$, given in the next section, differ by only $0.29$.
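A sketch of this quadrature in Python (again ours), computing the estimate $E(x)=C\int_2^x dt/\ln(f(t))$, the form written out explicitly as $E_{\rm Li}$ in the Remark below:
\begin{verbatim}
# Li's estimate E(x) = C * int_2^x dt / ln(f(t)), f(t) = a t^2 + b t + c,
# by numerical quadrature with scipy.
import math
from scipy.integrate import quad

def li_estimate(C, a, b, c, x):
    integrand = lambda t: 1.0 / math.log(a * t * t + b * t + c)
    value, _err = quad(integrand, 2, x, limit=200)
    return C * value

# e.g. li_estimate(4.72124, 32, 20, 1, 10**8) for the polynomial f_{2,3}.
\end{verbatim}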
For each polynomial $f$ we used Maple to find the actual number $Q(x)$ of prime values of $f(t)$ for $t\le x=10^8$. Since, for example, $f_{5,r}(10^8)\approx 2\cdot10^{18}$, this was the most time-consuming part of our computations, with running times of about two hours on a modest laptop. Maple uses the Rabin--Miller primality test, which is probabilistic rather than deterministic. If an integer is prime, the test will always declare it to be prime. If an integer is composite, the test may incorrectly declare it to be prime, but the probability of this happening is so small that in forty years of use of the test, no such incident has ever been reported. In our case, we found so many prime values of the polynomials $f$ which we considered that, even if we have been very unlucky and a few of them are actually composite, this will have a negligible effect on our evidence.
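A sketch of this count in Python (ours), with \texttt{sympy}'s Miller--Rabin-based \texttt{isprime} standing in for Maple's test; as in Maple, this is the slow step at $x=10^8$.
\begin{verbatim}
# Q(x): the number of prime values f(t) = a t^2 + b t + c for
# 1 <= t <= x (following the paper's convention t >= 1).
from sympy import isprime

def Q(a, b, c, x):
    return sum(1 for t in range(1, x + 1)
               if isprime(a * t * t + b * t + c))
\end{verbatim}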
\section{The estimates and their accuracy}\label{sec:estimates}
\subsection{The case $n=2$} The smallest allowed value for the parameter $n$ is $2$, so condition~(\ref{eq:conditions}) implies that $r=3$ or $6$.
In either case, evaluating $\omega_f(p)$ as in Example~\ref{ex:n=2}, and taking the product
in (\ref{eq:BH-C}) over all primes $p\le 10^8$, we found that $C=4.721240276\ldots$. Putting
$r=3$ gives
\[f(t)=f_{2,3}(t)=32t^2+20t+1.\]
Taking $x=10^i$ for $i=3, 4, \ldots, 8$ we found the estimates $E(x)$ for the values of $Q(x)$ shown in Table~\ref{tab:n=2,r=3}. The final column, showing the relative error, reveals the accuracy of these estimates.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{c|c|c|c}
$x$ & $Q(x)$ & $E(x)$ & relative error \\
\hline
$10^3$ & 326 & $314.49$ & $-3.53\%$ \\
$10^4$ & 2421 & $2404.86$ & $-0.67\%$ \\
$10^5$ & 19\,394 & $19\,438.26$ & $0.23\%$ \\
$10^6$ & 162\,877 & $163\,182.75$ & $0.19\%$ \\
$10^7$ & 1\,405\,448 & $1\,406\,630.14$ & $0.084\%$ \\
$10^8$ & 12\,357\,532 & $12\,362\,961.06$ & $0.044\%$
\end{tabular}
\end{center}
\caption{Numbers $Q(x)$ and estimates $E(x)$ for $f_{2,3}$.}
\label{tab:n=2,r=3}
\end{table}
\subsection{The cases with $n\le 9$}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{c|c|c|c|c|c}
$(n,r)$ & $f_{(n,r)}(t)$ & $C(f)$ & $Q(10^8)$ & $E(10^8)$ & relative error \\
\hline\hline
$(2,3)$ & $32t^2+20t+1$ & $4.72124$ & $12\,357\,532$ & $12\,362\,961.06$ & $0.0439\%$ \\
$(2,6)$ & $32t^2+44t+13$ & & $12\,363\,849$ & $12\,362\,960.77$ & $-0.0072\%$ \\
\hline
$(4,7)$ & $128t^2+104t+17$ & $3.20688$ & $8\,100\,174$ & $8\,102\,333.64$ & $0.0267\%$ \\
$(4,10)$ & $128t^2+152t+41$ & & $8\,104\,531$ & $8\,102\,333.57$ & $-0.0271\%$ \\
\hline
$(5,4)$ & $200t^2+70t+1$ & $5.62398$ & $14\,052\,016$ & $14\,050\,339.22$ & $-0.012\%$ \\
$(5,9)$ & $200t^2+170t+31$ & & $14\,049\,951$ & $14\,050\,339.05$ & $0.003\%$ \\
$(5,12)$ & $200t^2+230t+61$ & & $14\,057\,558$ & $14\,050\,338.95$ & $-0.051\%$ \\
$(5,17)$ & $200t^2+330t+131$ & & $14\,049\,868$ & $14\,050\,338.79$ & $0.003\%$ \\
\hline
$(7,9)$ & $392t^2+238t+29$ & $3.82010$ & $9\,381\,546$ & $9\,385\,428.26$ & $0.0415\%$ \\
$(7,13)$ & $392t^2+350t+71$ & & $9\,387\,937$ & $9\,385\,428.21$ & $-0.0267\%$ \\
$(7,16)$ & $392t^2+434t+113$ & & $9\,385\,853$ & $9\,385\,428.17$ & $-0.0045\%$ \\
$(7,20)$ & $392t^2+546t+183$ & & $9\,387\,135$ & $9\,385\,428.11$ & $-0.0182\%$ \\
\hline
$(8,15)$ & $512t^2+464t+97$ & $3.22754$ & $7\,879\,429$ & $7\,877\,750.61$ & $-0.0213\%$ \\
$(8,18)$ & $512t^2+560t+145$ & & $7\,879\,013$ & $7\,877\,750.57$ & $-0.0160\%$ \\
\hline
$(9,5)$ & $648t^2+162t+1$ & $5.41032$ & $13\,129\,138$ & $13\,129\,743.85$ & $0.0046\%$ \\
$(9,8)$ & $648t^2+270t+19$ & & $13\,127\,661$ & $13\,129\,739.69$ & $0.0158\%$ \\
$(9,17)$ & $648t^2+594t+127$ & & $13\,129\,080$ & $13\,129\,739.55$ & $0.0050\%$ \\
$(9,20)$ & $648t^2+702t+181$ & & $13\,130\,890$ & $13\,129\,743.63$ & $-0.0087\%$ \\
$(9,29)$ & $648t^2+1026t+397$ & & $13\,128\,036$ & $13\,129\,743.50$ & $0.0130\%$ \\
$(9,32)$ & $648t^2+1134t+487$ & & $13\,128\,979$ & $13\,129\,743.46$ & $0.0058\%$
\end{tabular}
\end{center}
\smallskip
\caption{Complete list of irreducible polynomials $f_{n,r}$ defined in (\ref{eq:f}) and satisfying
conditions (\ref{eq:conditions}), for $n\le 9$. The constants $C(f)$ are computed over
primes $p\le 10^8$.}
\label{tab:up-to-n=9}
\end{table}
The process for the remaining polynomials $f_{n,r}$ with non-triangular numbers $n\le 9$ was similar, with $x=10^8$ in all cases. Table~\ref{tab:up-to-n=9} summarises the results.
\begin{rema}
The greater the leading coefficient of a polynomial, the more
significant is Li's improvement in~\cite{W.Li} as compared with the initial
Bateman--Horn formula in~\cite{BH}. For example, in the case of $f(t)=f_{9,5}(t)=648t^2+162t+1$,
we have two corresponding estimates
\[E_{\rm Li} = C\cdot\int_2^{10^8}\frac{dt}{\ln(f(t))}
\quad\hbox{and}\quad
E_{\rm BH} = \frac{C}{2}\cdot\int_2^{10^8}\frac{dt}{\ln(t)}\]
for $x=10^8$, with relative errors $0.0046\%$ and $18.7\%$ respectively.
This does not contradict the fact that the two estimates are asymptotically
equivalent. Indeed, the relative error of $E_{\rm BH}$ steadily decreases to
approximately $2\%$ when the upper limit $x$ of the integration approaches $10^{70}$.
(Of course, we did not count the true number $Q(x)$ of prime values of this
polynomial: instead, we took $E_{\rm Li}(x)$ as if it were the true
value of $Q(x)$.)
\end{rema}
\section{Prime power values}\label{sec:powers}
We restricted our estimates to prime values of the polynomials $f_{n,r}$, since the Bunyakovsky and Bateman--Horn Conjectures have nothing to say about composite values. However, since the constructions of block designs in~\cite{ADP} apply to values which are prime powers, not just primes, we extended our computer searches to proper prime power values of some of these polynomials, for $t\le x=10^7$.
As predicted in Section~\ref{sec:versus}, we found very few proper prime power values, in comparison with the abundance of prime values.
The values we found for $n\le 9$ and $t\le 10^7$ are shown in Table~\ref{tab:primepowers}. We observe that there is only one cube: all the other prime powers are squares.
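Such a search is easy to express in Python (a sketch of ours, not the code used for the paper): \texttt{sympy}'s \texttt{perfect\_power} returns a pair $(b,e)$ with $m=b^e$ and $e>1$, or \texttt{False}.
\begin{verbatim}
# Proper prime power values f(t) = p^e, e >= 2, for 1 <= t <= x.
from sympy import isprime, perfect_power

def proper_prime_powers(a, b, c, x):
    hits = []
    for t in range(1, x + 1):
        pp = perfect_power(a * t * t + b * t + c)
        if pp and isprime(pp[0]):
            hits.append((t, pp[0], pp[1]))   # f(t) = pp[0] ** pp[1]
    return hits

# proper_prime_powers(32, 20, 1, 10**7) should recover the f_{2,3} rows
# of the table below, e.g. (2, 13, 2) since f_{2,3}(2) = 169 = 13^2.
\end{verbatim}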
The polynomials $f_{n,r}$ for the following pairs $(n,r)$ with non-triangular parameters $n\le 9$ gave no proper prime power values for $t\le 10^7$, so they have been omitted from the table:
\[(2,6),\; (4,7),\; (4,10),\; (5,9),\; (5,12),\; (5,17),\; (7,9),\; (7,13),\; (7,16),\; (7,20),\]
\[(8,15),\; (8,18),\; (9,8),\; (9,20),\; (9,32). \]
\begin{table}[htbp]
\begin{center}
\begin{tabular}{c|c|c|c|c}
$(n,r)$ & polynomial $f_{n,r}$ & $t\le 10^7$ & $f_{n,r}(t)$ & power \\
\hline
$(2,3)$ & $32t^2+20t+1$ & 2 & 169 & $13^2$ \\
&& 8 & 2\,209 & $47^2$ \\
&& 78 & 196\,249 & $443^2$ \\
&& 282 & 2\,550\,409 & $1\,597^2$ \\
&& 9\,590 & 2\,943\,171\,001 & $54\,251^2$ \\
&& 23\,666 & 17\,923\,019\,113 & $2\,617^3$ \\
&& 90\,372 & 261\,348\,955\,729 & $511\,223^2$ \\
&& $3\,069\,998$ & $301\,596\,468\,440\,089$ & $17\,366\,533^2$ \\
\hline
$(5,4)$ & $200t^2+70t+1$ & 4 & 3\,481 & $59^2$ \\
&& 2\,044 & 835\,730\,281 & $28\,909^2$ \\
&& 4\,816 & 4\,639\,108\,321 & $68\,111^2$ \\
&& 163\,608 & 5\,353\,526\,985\,361 & $2\,313\,769^2$ \\
\hline
$(9,5)$ & $648t^2+162t+1$ & 3\,220 & 6\,719\,244\,841 & $81\,971^2$ \\
\hline
$(9,17)$ & $648t^2+594t+127$ & $1$ & $1\,369$ & $37^2$ \\
&& $49$ & $1\,585\,081$ & $1\,259^2$ \\
\hline
$(9,29)$ & $648t^2+1\,026t+397$ & 2 & 5\,041 & $71^2$
\end{tabular}
\end{center}
\caption{Proper prime power values for irreducible polynomials $f_{n,r}$
with $n\le 9$, $t\le 10^7$}
\label{tab:primepowers}
\end{table}
\section{Prime power values of reducible polynomials}\label{sec:reducible}
A reducible polynomial $f(t)=g(t)h(t)\in{\mathbb Z}[t]$ can take only finitely many prime values (with $g(t)$ or $h(t)$ equal to $\pm 1$), but could it take infinitely many prime power values? One way it might do so is if $g=h$ and this polynomial takes infinitely many prime values: Dirichlet's Theorem shows that this can happen with $\deg g=1$, and the Bunyakovsky Conjecture suggests that it can happen with $\deg g>1$. More generally, if $g$ is irreducible and takes infinitely many prime values $p$, then any power $f=g^e$ of $g$ takes infinitely many prime power values $p^e$. But what happens if $f$ has two or more distinct irreducible factors?
\begin{theo}\label{th:powers}
If $f$ is a polynomial in ${\mathbb Z}[t]$ with at least two different irreducible factors, then $f(t)$ is a prime power for only finitely many $t\in{\mathbb Z}$.
\end{theo}
\begin{proof} We first deal with a simple special case, and with $t\ge 0$. Suppose that $f=gh$ for non-proportional factors $g(t)=a_kt^k+\cdots$ and $h(t)=b_kt^k+\cdots$ in ${\mathbb Z}[t]$ of the same degree $k\ge 1$, so that $g/h$ is a non-constant rational function.
If there is some $t\in{\mathbb N}$ with $f(t)=p^e$ for a prime $p$ and integer $e\ge1$ then $g(t)=\pm p^i$ and $h(t)=\pm p^j$ for some integers $i, j\ge 0$ with $i+j=e$. If $i\ge j$ then
\[\frac{g(t)}{h(t)}=p^{i-j}\in{\mathbb Z}.\]
However, since $g/h$ is a non-constant rational function, it is strictly monotonic for all sufficiently large $t\in{\mathbb R}$, with
\[\frac{g(t)}{h(t)}=\frac{a_kt^k+\cdots}{b_kt^k+\cdots}\to\frac{a_k}{b_k}\quad\hbox{as}\;\; t\to+\infty,\]
so if there are infinitely many such $t\in{\mathbb N}$ with $i\ge j$ we have a sequence of integers $p^{i-j}$ converging strictly monotonically to $a_k/b_k$, which is impossible. A similar argument, with the factors $g$ and $h$ transposed, shows that there can be only finitely many such $t\in{\mathbb N}$ with $i<j$, so $f(t)$ is a prime power for only finitely many $t\in{\mathbb N}$.
We can now deal with the general case, where $f$ is reducible and not a power of a single irreducible polynomial. This allows us to factorise $f$ in ${\mathbb Z}[t]$ as $f=gh$ where $g$ and $h$ have different irreducible factors. If $\deg g\ne\deg h$ we can replace $f$ with
\[f^*=g^*h^*\quad\hbox{where}\quad g^*=g^{\deg h}\quad\hbox{and}\quad h^*=h^{\deg g},\]
so that $f^*$ takes prime power values at the same integers $t$ as $f$ does. Since $g^*$ and $h^*$ have the same degree and, having different irreducible factors, are not proportional, we can apply the preceding argument to show that $f^*$ takes prime power values at only finitely many $t\in{\mathbb N}$, and hence the same applies to $f$. Finally, we can extend this result to all $t\in{\mathbb Z}$ either directly as above, using the fact that $g(t)/h(t)$ has similar limiting behaviour when $t\to-\infty$, or by applying the above argument for $t>0$ to $f(-t)$, which factorises in the same way as $f(t)$.
\end{proof}
In particular, let $f=f_{n,r}$ in a case where this polynomial is reducible, or equivalently $n$ is a triangular number $a(a+1)/2$ and $\Delta$ is a non-zero square $4n^2(8n+1)=4n^2(2a+1)^2$. Then $f$ factorises in ${\mathbb Z}[t]$ as
\[f(t)=g(t)h(t)=\frac{1}{2}(4nt+r+a)(4nt+r-a-1),\]
where the first or second displayed linear polynomial has both of its coefficients even as $r\equiv a$ mod~$(2)$ or not, so that it absorbs the factor $\frac{1}{2}$. In either case, the resulting linear factors $g$ and $h$ of $f$ in ${\mathbb Z}[t]$ are distinct and irreducible, so Theorem~\ref{th:powers} implies that $f(t)$ is a prime power for only finitely many $t\in{\mathbb Z}$.
\begin{prop}\label{prop:red}
If $f_{n,r}$ is reducible and $n>1$ then $f_{n,r}(t)$ is not a prime power for any
integer $t\ge 1$.
\end{prop}
\begin{proof} Suppose that $f:=f_{n,r}$ is reducible and $n>1$, so $n=a(a+1)/2$ for some integer $a\ge 2$ by Lemma~\ref{le:abc}, and that $f(t)=p^e$ for some prime $p$ and integers $e, t\ge 1$.
\medskip
\noindent{\bf Case 1} If $r\equiv a$ mod~$(2)$ then $f=gh$ where
\[g(t)=2nt+\frac{r+a}{2}\quad\hbox{and}\quad h(t)=4nt+r-a-1.\]
Since $t\ge 1$ we have $g(t), h(t)>1$ so $g(t)=p^i$ and $h(t)=p^j$ for integers $i,j\ge 1$ with $i+j=e$.
If $i<j$ then
\[\frac{h(t)}{g(t)}=p^{j-i}\ge p\ge2,\]
giving $-a-1\ge a$, which is impossible since $a\ge 1$. Thus $i\ge j$, so $g(t)\ge h(t)$, leading to
\[t\le\frac{3a-r+2}{4n}\le\frac{3a+1}{2a(a+1)}<1\]
(since $r\ge 1$ and $a\ge 2$), against our hypothesis.
\medskip
\noindent{\bf Case 2} If $r\not\equiv a$ mod~$(2)$ then $f=gh$ where
\[g(t)=4nt+r+a\quad\hbox{and}\quad h(t)=2nt+\frac{r-a-1}{2}.\]
As before we have $g(t)=p^i$ and $h(t)=p^j$ for integers $i,j\ge 1$. If $i\le j$ then $g(t)\le h(t)$, leading to
\[2nt\le \frac{r-a-1}{2}-(r+a)<0,\]
which is impossible. Thus $i>j$, so
\[\frac{g(t)}{h(t)}=p^{i-j}\ge p.\]
If $p^{i-j}=2$ then $g(t)=2h(t)$, giving $a=-a-1$, which is impossible. Hence $p^{i-j}\ge 3$, so $g(t)\ge 3h(t)$, leading to
\[t\le\frac{5a-r+3}{4n}\le\frac{5a}{4n}=\frac{5}{2(a+1)}<1\]
(since $r\ge 3$ by~(\ref{eq:conditions}) and $a\ge 2$), again contradicting our hypothesis.
\end{proof}
\begin{rema}\label{re:negative}
Although Theorem~\ref{th:powers} applies to all $t\in{\mathbb Z}$, Proposition~\ref{prop:red} applies only to integers $t\ge 1$ and cannot be extended to $t\le 0$: the case $t=0$ is treated in Section~\ref{t=0}, while for $t<0$ the polynomial
\[f(t)=f_{3,5}(t)=72t^2+54t+7=(12t+7)(6t+1),\]
satisfies $f(-1)=5^2$, with $g(-1)=h(-1)=-5$. Of course, negative values of $t$ are not relevant to the 2-designs considered in this paper.
The condition $n>1$ is required in Proposition~\ref{prop:red}, since the polynomial
\[f(t)=f_{1,1}(t)=8t^2+2t-1=(2t+1)(4t-1)\]
satisfies $f(1)=3^2$. The block designs $\mathcal D$ considered here all satisfy this condition.
\end{rema}
\section{Values at $t=0$}\label{t=0}
Proposition~\ref{prop:red} leaves open the possibility, which is relevant to 2-designs, that
\[f(0)=f_{n,r}(0)=\frac{r(r-1)}{2}-n\]
could be a prime power. Prime values $f_{n,r}(0)$ seem to arise quite frequently when $f_{n,r}$ is irreducible:
for example, of the twenty polynomials in Table~\ref{tab:up-to-n=9}, fifteen have prime values at $t=0$, three have the value $1$, and the remaining two, $f_{7,20}$ and $f_{8,18}$, have the composite values $183=3\cdot 61$ and $145=5\cdot 29$.
However, the situation is rather different for reducible polynomials $f_{n,r}$,
those for which $n$ is a triangular number $a(a+1)/2$.
\begin{prop}\label{prop:realize}
Let $f_{n,r}$ be reducible, and satisfy (\ref{eq:f}) and (\ref{eq:conditions}). Then $f_{n,r}(0)$ is a prime power
$p^e$, $e\ge 1$, if and only if $p$ is odd and one of the following occurs:
\begin{itemize}
\item[a)] $e=2i$ is even, with $n=(p^e-1)/8>1$, $a=(p^i-1)/2$ and $r=(3p^i+1)/2$, or
\item[b)] $p^e=7$, with $n=3$, $a=2$ and $r=5$ $($as in\/ {\rm Remark~\ref{re:negative})}.
\end{itemize}
\end{prop}
Note that by (a) every even power $p^e>9$ of an odd prime $p$
can be realised as a value $f_{n,r}(0)$ of a reducible polynomial $f_{n,r}$.
\begin{exam}
One can realise $5^2$ as a value by taking $n=3$, $a=2$ and $r=8$. This gives
\[f(t)=f_{3,8}(t)=72t^2+90t+25=(6t+5)(12t+5)\]
with $f(0)=5^2$. Similarly, one can realise $7^2$ by taking $n=6$, $a=3$ and $r=11$,
so that
\[f(t)=f_{6,11}(t)=288t^2+252t+49=(12t+7)(24t+7)\]
with $f(0)=7^2$. Taking $n=(13^4-1)/8=3\,570$ and $r=(3\cdot 13^2+1)/2=254$ we get
\[f(t)=f_{n,r}(t)=101\,959\,200\,t^2+3\,619\,980\,t+28\,561 = (7\,140t+169)(14\,280t+169)\]
with $f(0)=28\,561=13^4$.
\end{exam}
\noindent
{\em Proof}\/ of Proposition \ref{prop:realize}.
If we put $t=0$ in Case~(1) of the proof of Proposition~\ref{prop:red}, where $r\equiv a$ mod~$(2)$, we have
\[\frac{r+a}{2}=p^i\quad\hbox{and}\quad r-a-1=p^j\]
for integers $i, j\ge 0$ with $i+j=e\ge 1$ and $i\ge j$. Solving these simultaneous equations gives
\[r=p^i+\frac{p^j+1}{2}\quad\hbox{and}\quad a=p^i-\frac{p^j+1}{2},\]
so that
\[n=\frac{a(a+1)}{2}=\frac{1}{8}\left((2p^i-p^j)^2-1\right).\]
(Recall that $f_{n,r}$ is reducible if and only if $8n+1$ is a perfect square.)
Here we require $p^j$ to be odd, so that $r\equiv a$ mod~$(2)$; however, we reject solutions with $p=2$ and $j=0$ since they give $c=2^i$
and $r(r-1)/2\not\equiv n+1$ mod~$(2n)$, contradicting condition (\ref{eq:conditions}), so $p$ must be an odd prime.
The condition that $r(r-1)/2\equiv n+1$ mod~$(2n)$ also excludes many solutions when $p$ is odd. We have
\[\frac{r(r-1)}{2}-n-1=p^{i+j}-1\quad\hbox{and}\quad 2n=\frac{(2p^i-p^j)^2-1}{4},\]
so if $i>j$ then
\[0<\frac{r(r-1)}{2}-n-1<2n\]
and hence $r(r-1)/2\not\equiv n+1$ mod~$(2n)$. However, if we take $i=j$ then
\[r=\frac{3p^i+1}{2},\quad a=\frac{p^i-1}{2}\quad\hbox{and}\quad n=\frac{p^{2i}-1}{8},\]
so that $r<4n$ provided $p^i>3$, and
\[\frac{r(r-1)}{2}-n-1=p^{2i}-1=8n\equiv 0\; {\rm mod}\, (2n)\]
as required. Thus every even power $p^e=p^{2i}>9$ of an odd prime $p$ is the value
of some reducible polynomial $f_{n,r}$ at $t=0$, giving conclusion (a).
A similar argument applies in Case~(2) of the proof of Proposition~\ref{prop:red}, where $r\not\equiv a$ mod~$(2)$. We now have
\[r+a=p^i\quad\hbox{and}\quad \frac{r-a-1}{2}=p^j,\]
with $i>j$, so that
\[r=\frac{p^i+1}{2}+p^j\quad\hbox{and}\quad a=\frac{p^i-1}{2}-p^j,\]
giving
\[n=\frac{a(a+1)}{2}=\frac{1}{8}\left((p^i-2p^j)^2-1\right).\]
In this case
\[\frac{r(r-1)}{2}-n-1=p^{i+j}-1\quad\hbox{and}\quad 2n=\frac{(p^i-2p^j)^2-1}{4}.\]
We need
\[p^{i+j}-1=\frac{r(r-1)}{2}-n-1\ge 2n=\frac{p^{2i}-1}{4}-p^{i+j}+p^{2j}\]
so that
\[2p^{i+j}\ge\frac{p^{2i}+3}{4}+p^{2j}>\frac{p^{2i}}{4}\]
and hence $p^{i-j}\le 8$. Since $i>j$ and $p\ge 3$ this implies that $i-j=1$ and $p=3, 5$ or $7$. Thus only odd powers $p^e$ of these three primes can arise in Case~2.
Putting $i=j+1$ gives
\[\frac{r(r-1)}{2}-n-1=p^{2j+1}-1
\quad\hbox{and}\quad
2n=\frac{(p^{j+1}-2p^j)^2-1}{4}=\frac{(p-2)^2p^{2j}-1}{4}.\]
Now $2n$ divides $\frac{r(r-1)}{2}-n-1$, so multiplying by $4$ shows that
\[8n=(p-2)^2p^{2j}-1\quad\hbox{divides}\quad 4\left(\frac{r(r-1)}{2}-n-1\right)=4(p^{2j+1}-1).\]
Defining $q:=p^{2j}$, we see that
$(p-2)^2q-1$ divides $4(pq-1)$. We now apply this with $p=3$, $5$ and $7$ in turn.
If $p=3$ then $q-1$ divides $12q-4=12(q-1)+8$, so $q-1$ divides $8$, giving $q=1$ or $9$.
If $q=1$ then $j=0$, giving $r=3$, $a=0$ and $n=0$, whereas we need $n>1$.
If $q=9$ then $j=1$, giving $r=8$, $a=1$ and $n=1$, again too small.
Thus $p\ne 3$.
If $p=5$ then $9q-1$ divides $20q-4=2(9q-1)+2(q-1)$ and hence $9q-1$ divides $2(q-1)$ giving $q=1$.
Then $j=0$, so $r=4$, $a=1$ and $n=1$, whereas we need $n>1$. Thus $p\ne 5$.
If $p=7$ then $25q-1$ divides $28q-4=25q-1+3(q-1)$ and hence $25q-1$ divides $3(q-1)$ giving $q=1$. Then $j=0$, so $r=5$, $a=2$ and $n=3$, with $r<4n$ and $r(r-1)/2-n-1=6\equiv 0$ mod~$(2n)$; this gives the polynomial
\[f(t)=f_{3,5}(t)=72t^2+54t+7=(12t+7)(6t+1)\]
in conclusion (b), with $f(0)=7$.
\hfill$\Box$
\section{Conclusions}
We have found large numbers of prime values of those polynomials $f_{n,r}$ appearing in~\cite{ADP} for which $n$ is not a triangular number. The numbers found agree closely with the estimates for them provided by Li's recent version of the Bateman--Horn Conjecture. While this does not prove the conjecture in~\cite{ADP} that these polynomials take infinitely many prime values, and thus give infinite families of block designs, it provides strong evidence for this, and it also adds extra support for the validity of the Bunyakovsky and Bateman--Horn Conjectures.
\section{Acknowledgements}
We are grateful to Yuri Bilu, to Cheryl Praeger and to Weixiong Li for many useful comments.
Alexander Zvonkin was partially supported by the ANR project {\sc Combin\'e} (ANR-19-CE48-0011).
\section{Introduction}\label{sec:introduction}
Economists are often tasked with predicting agent outcomes under some counterfactual policy or treatment assignment. In many cases, the counterfactual depends on how the agents are linked in a social or economic network. A diverse literature on treatment spillovers, social interactions, social learning, information diffusion, social capital formation, and more has approached this problem from a variety of specialized, often highly-parametric frameworks \citep[see for instance reviews by][]{athey2017state,jackson2017economic}. In this paper, we propose a unified framework for causal inference that accommodates many such examples of network interference.
Our main innovation is a new nonparametric modeling approach for sparse network data based on local configurations. Informally, a local configuration refers to the features of the network (the agents, their characteristics, treatment statuses, and how they are connected) nearby a focal agent as measured by path distance. The idea is that these local configurations index the different ways in which a policy or treatment assignment can impact a focal agent's outcome under network interference.
This modeling approach generalizes a developed literature on spillovers and social interactions in which the researcher specifies reference groups or an exposure map that details exactly how agents influence each other \citep[see for instance][]{hudgens2008toward,manski2013identification,aronow2017estimating}. One potential limitation of this literature is that the results are often sensitive to exactly how the researcher models this dependence. For example, in the spillovers literature it is often assumed that agents respond to the average treatment of their peers, in the diffusion literature agents may be informed or infected by any peer, and in the social learning literature agents may be influenced more by peers who are central in the network. When the researcher is uncertain as to exactly how agents influence each other, misspecification can lead to inaccurate estimates and invalid inferences.
Another potential limitation of this literature is that it does not generally consider policies that change the structure of the network. Network-altering policies are becoming increasingly relevant to economic research. Examples include policies that add or remove agents, or connections between agents \cite[see for instance, broadly,][]{ballester2006s,azoulay2010superstar,donaldson2016railroads}. Such policies may be difficult to evaluate using standard frameworks, which focus mostly on the reassignment of treatment statuses to agents while keeping the network structure fixed.
Our methodology addresses these limitations by using local configurations to model network interference. Intuitively, we use the space of local configurations as a ``network sieve'' that indexes the ways in which an agent may be affected by a given policy or treatment assignment. A key feature of local configurations is that they also naturally describe policies that alter network structure. A contribution of our work is to formalize this local approach and apply it to causal inference under network interference.
The use of local configurations in economics was pioneered by \cite{de2018identifying} \citep[see also][]{anderson2019collaborative}. In their work, local configurations (which they call network types) index moment conditions that partially identify the parameters of a strategic network formation model. The researcher has the flexibility to choose the configurations used in this task and can restrict attention to those that occur frequently in the data. In our setting, local configurations correspond to fixed counterfactual policies. It is usually the case that no exact instances of a given policy appear in the data and so we substitute outcomes associated with similar but not exactly the same configurations. Characterizing the resulting bias-variance trade-off requires additional machinery, which we introduce and formalize below by extending ideas from \cite{benjamini2001recurrence}.
We consider two causal inference problems. In both problems the researcher starts with a status-quo policy as described by one local configuration and is tasked with evaluating an alternative policy as described by a different local configuration. The researcher also has access to data from a many-networks experiment. Many-networks experimental designs are common in education, industrial organization, labor, and development economics where the researcher may collect network data on multiple independent schools, markets, firms, or villages. We argue that our framework can also be applied to other settings (for example, data on one large network), but leave formal extensions of our results to future work.
The first problem is a test of policy irrelevance (e.g. no treatment effects). For instance, the status-quo policy may be given by a particular social network structure where no agents are treated, and the new policy may keep the same connections between agents but have every agent treated. The hypothesis to be tested is that both policies are associated with the same distribution of outcomes for one or more agents. We propose an asymptotically valid permutation test for this hypothesis, building on work by \cite{canay2018approximate}.
The second problem is estimating policy effects (e.g. average treatment response). For instance, the status-quo policy may be given by a particular social network structure and the new policy may be one in which a key agent is removed. The policy effect to be estimated is the expected change in outcomes for one or more agents. We propose a $k$-nearest-neighbors estimator for the policy effect and provide non-asymptotic bounds on its mean-squared error, building on work by \cite{doring2017rate}.
The remainder of this paper is organized as follows. Section \ref{sec:motivation} specifies a general model of network interference. We use this model to motivate our local approach, which is formally introduced in Section \ref{sec:local}. Section \ref{sec:policy} applies the local approach to test policy irrelevance and estimate policy effects. Section \ref{sec:simulations} contains simulation results and Section \ref{sec:conclusion} concludes. Proofs of claims and other details can be found in an appendix.
\section{Setup and motivation}\label{sec:motivation}
We introduce a general model of treatment response under network interference. This model is used to motivate our local approach in Section \ref{sec:local}.
\subsection{Terminology and notation}\label{sec:motivation_term}
A countable population of agents is indexed by $\mathcal{I} \subseteq \mathbb{N}$. The population of agents is linked in a weighted and directed network. The weight of a link from agent $i$ to $j$ is given by $D_{ij} \in \mathbb{Z}_{+}\cup \{\infty\}$. The (potentially infinite-dimensional) matrix $D$ indexed by $\mathcal{I}\times\mathcal{I}$ with $D_{ij}$ in the $ij$th entry is called the adjacency matrix. We take the convention that larger values of $D_{ij}$ correspond to weaker relationships between $i$ and $j$. For instance, $D_{ij}$ might measure the physical distance between agents $i$ and $j$. We suppose that $D_{ij} = 0$ if and only if $i = j$. When the network is unweighted (agent pairs are either linked or not), $D_{ij} = 1$ denotes a link and $D_{ij} = \infty$ denotes no link from agent $i$ to $j$ with $i \neq j$. We denote the set of all such matrices $D$ by $\mathcal{D}$.
A path from agent $i$ to $j$ is a finite ordered subset of $\mathbb{N}$ denoted $\{t_{1},...,t_{L}\}$ with $t_{1} = i$, $t_{L} = j$, and $L \in \mathbb{N}$. The length of the path $\{t_{1},...,t_{L}\}$ is given by $\sum_{s = 1}^{L-1}D_{t_{s}t_{s+1}}$. The path distance from agent $i$ to $j$, $\rho(i,j)$, is the length of the shortest path from $i$ to $j$. That is,
\begin{align*}
\rho(i,j) := \inf_{\{t_{1},...,t_{L}\} \subset \mathbb{N}}\left\{\sum_{s = 1}^{L-1}D_{t_{s}t_{s+1}} : t_{1} = i, t_{L} = j\right\}.
\end{align*}
For any $i \in \mathcal{I}$ and $r \in \mathbb{Z}_{+}$, agent $i$'s $r$-neighborhood $\mathcal{N}_{i}(r) := \{j \in \mathcal{I}: \rho(i,j) \leq r\}$ is the collection of agents within path distance $r$ of $i$. $N_{i}(r) := |\mathcal{N}_{i}(r)|$ is the size of agent $i$'s $r$-neighborhood (i.e. the number of agents in $\mathcal{N}_{i}(r)$). For any agent-specific variable $\mathbf{W} := \{W_{i}\}_{i \in \mathcal{I}}$ (see Section \ref{sec:motivation_model} below), $W_{i}(r) = \sum_{j \in \mathcal{I}}W_{j}\mathbbm{1}\{\rho(i,j) \leq r\}$ is the $r$-neighborhood count of $W$ for agent $i$. It describes the partial sum of $W$ for the agents in $\mathcal{N}_{i}(r)$. Since $D_{ij} = 0$ if and only if $i = j$, $\mathcal{N}_{i}(0) = \{i\}$, $N_{i}(0) = 1$, and $W_{i}(0) = W_{i}$.
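To fix ideas, the following Python sketch (ours, not part of the formal development) computes these objects for a finite network stored as a dictionary of link weights, obtaining $\rho(i,\cdot)$ by Dijkstra's algorithm and from it $\mathcal{N}_{i}(r)$, $N_{i}(r)$, and $W_{i}(r)$.
\begin{verbatim}
# Sketch: rho(i, .), N_i(r) and W_i(r) for a finite weighted directed
# network, with D[i][j] the weight of the link from i to j (absent
# links are omitted, i.e. have weight infinity; for an unweighted
# network store D[i][j] = 1 for each link).
import heapq

def path_distances(D, i):
    """Dijkstra: rho(i, j) for every j reachable from i."""
    dist = {i: 0}
    frontier = [(0, i)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in D.get(u, {}).items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(frontier, (d + w, v))
    return dist

def neighborhood(D, i, r):
    """N_i(r): the agents within path distance r of i (including i)."""
    return {j for j, d in path_distances(D, i).items() if d <= r}

def neighborhood_count(D, i, r, W):
    """W_i(r): sum of the agent-specific variable W over N_i(r)."""
    return sum(W[j] for j in neighborhood(D, i, r))
\end{verbatim}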
We assume that the network is \emph{locally finite}. That is, for every $i \in \mathcal{I}$ and $r \in \mathbb{Z}_{+}$, $N_{i}(r) < \infty$ \cite[see also][]{de2018identifying}. In words, the assumption is that every $r$-neighborhood of every agent contains only a finite number of agents. The assumption is implicit in much of the literature on network interference (including the examples below) where the researcher observes all of the relevant dependencies between agents in finite data.
\subsection{General outcome model}\label{sec:motivation_model}
Each agent $i \in \mathcal{I}$ has an outcome $Y_{i} \in \mathbb{R}$ and treatment assignment $T_{i} \in \mathbb{R}$. We call these quantities agent-specific variables and denote the corresponding population vectors $\mathbf{Y} = \{Y_{i}\}_{i \in \mathcal{I}}$, $\mathbf{T} = \{T_{i}\}_{i \in \mathcal{I}}$, etc. A general outcome model under network interference is
\begin{align*}
Y_{i} = f_i(\mathbf{T}, D, U_i)~,
\end{align*}
where $f_i$ is a real-valued function and $U_i \in \mathcal{U}$ represents unobserved agent-specific policy-invariant heterogeneity. We model $Y_i$ as a function of $\mathbf{T}$ and $D$, so that individual $i$'s outcome may vary with any of the treatments or network connections in the population. Let $\mathcal{T}$ denote the set of all population treatment vectors $\mathbf{T}$. The counterfactual outcome for agent $i$ at a fixed policy choice $(\mathbf{t}, d) \in \mathcal{T} \times \mathcal{D}$ is $f_i(\mathbf{t}, d, U_i)$.
A limitation of this general model is that without further assumptions it is not informative about policy effects that rely on counterfactual outcomes \citep[see][Section 1.2]{manski2013identification}. This is because the model does not specify how data from one policy can be used to learn about the outcomes associated with a counterfactual policy. A common solution to this problem is to impose what \cite{manski2013identification} calls a \emph{constant treatment response} (CTR) assumption \citep[see also][and others]{aronow2017estimating,basse2019randomization, hudgens2008toward,vazquez2017identification,leung2019causal,savje2021causal}. Let $\lambda_i: \mathcal{T} \times \mathcal{D} \rightarrow \mathcal{G}$ be an \emph{exposure map} that maps treatment and network information into an \emph{effective treatment} for agent $i$, where $\mathcal{G}$ denotes the set of all effective treatments. The CTR assumption is that for $(\mathbf{t}, d)$ and $(\mathbf{t}', d')$ such that $\lambda_i(\mathbf{t}, d) = \lambda_i(\mathbf{t}', d')$
\[f_i(\mathbf{t}, d, u) = f_i(\mathbf{t}', d', u)~,\]
for all $u \in \mathcal{U}$. In words, the CTR assumption states that for a fixed $u \in \mathcal{U}$, all policies $(\mathbf{t}, d)$ associated with the same effective treatment $\lambda_i(\mathbf{t}, d)$ generate the same outcome $f_{i}(\mathbf{t}, d, u)$.
Under the CTR assumption, we define the function $h: \mathcal{G} \times \mathcal{U} \rightarrow \mathbb{R}$ such that $h(\lambda_i(\mathbf{t},d), u) = f_i(\mathbf{t}, d, u)$, and rewrite our outcome model as
\begin{align*}\label{eq:outcome_model}
Y_i = h(G_i, U_i)~,
\end{align*}
where $G_i = \lambda_i(\mathbf{T}, D)$. Agent $i$'s outcome now only depends on the policy $(\mathbf{T}, D)$ through their effective treatment $G_i$.
The CTR assumption may help identify the policy effect of interest. A fixed policy implies a collection of effective treatments, one for each individual. If the effective treatments associated with a counterfactual policy are also observed in the data, then the researcher can potentially use those outcomes to characterize the counterfactual policy. For example, if $\lambda_i(\mathbf{T}, D) = T_i$, then agent $i$'s outcome depends on only their own treatment status. The counterfactual outcome associated with treating an untreated agent might then be learned by looking at the outcomes of treated agents in the data.
Some common choices of $\lambda_i$ and $\mathcal{G}$ are illustrated in Examples \ref{ex:spillovers}, \ref{ex:capital}, and \ref{ex:social_interactions} below. These choices are however based on strong modeling assumptions that when wrong may lead to inaccurate estimates and invalid inferences. Our local approach instead fixes a specific but flexible choice of $\mathcal{G}$ called the space of \emph{rooted networks} which generalizes the notion of a local configuration and can approximate a large class of effective treatments. Under appropriate continuity assumptions on $h$, we propose estimating and inferring causal effects by pooling outcome data associated with effective treatments that are close to but not exactly the effective treatment of interest. Such extrapolation plays a key role in Example \ref{ex:social_interactions}, where the effective treatment is not low-dimensional and so finding instances of a given effective treatment in the data is rare.
\subsection{Examples}\label{sec:motivation_examples}
\begin{example}\label{ex:spillovers}
(Neighborhood spillovers): Agents are assigned to either treatment or control status with $T_{i} = 1$ if $i$ is treated and $T_{i} = 0$ if $i$ is not. Agent $i$'s potential outcome depends on their treatment status, and the number of treated agents proximate to $i$. A common neighborhood spillovers model is
\begin{align*}
Y_{i} = Y\left(T_{i}, S_{i}(r), U_i\right)
\end{align*}
where $S_{i}(r) = \sum_{j \in \mathcal{I}}T_{j}\mathbbm{1}\{\rho(i,j) \leq r\}$ is the number of treated agents within path distance $r$ of $i$ (that is, $S_{i}(r) = T_{i}(r)$ in the notation of Section \ref{sec:motivation_term}). See for instance \cite{cai2015social,leung2016treatment, viviano2019policy}. A potential policy of interest is the effect of treating every agent in the community versus treating no one, holding the network connections fixed. In this example an effective treatment for $i$ is given by $\lambda_i(\mathbf{T}, D) = (T_i, S_i(r))$. It only depends on the network connections and treatment statuses within path distance $r$ of $i$.
\end{example}
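In terms of the helpers sketched in Section \ref{sec:motivation_term}, this exposure map is immediate (again a sketch of ours):
\begin{verbatim}
# Exposure map of the neighborhood spillovers example:
# lambda_i(T, D) = (T_i, S_i(r)), where S_i(r) counts treated agents
# within path distance r of i (including i itself).
def spillover_exposure(D, T, i, r):
    return (T[i], sum(T[j] for j in neighborhood(D, i, r)))
\end{verbatim}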
\begin{example}\label{ex:capital}
(Social capital formation): Agents leverage their position in the network to garner favors, loans, etc. \cite{jackson2012social} specify a model in which agents are linked in an unweighted and undirected network. Two agents exchange favors if there exists a third agent linked to both agents that can monitor the exchange. The number of favors exchanged by agent $i$ is given by
\begin{align*}
Y_{i}= \left(\sum_{j \in \mathcal{I}} \mathbbm{1}\{\mathcal{N}_{i}(1)\cap \mathcal{N}_{j}(1) \neq \emptyset\}\right)\cdot U_{i}~.
\end{align*}
A potential policy of interest in this literature is the effect of expanding agent $i$'s $1$-neighborhood to include an additional agent. In this example an effective treatment for $i$ is given by $\lambda_i(\mathbf{T}, D) = \sum_{j \in \mathcal{I}} \mathbbm{1}\{\mathcal{N}_{i}(1)\cap \mathcal{N}_{j}(1) \neq \emptyset\}$. It only depends on the network connections within path distance $2$ of agent $i$. \cite{karlan2009trust,cruz2017politician} consider related models of social capital.
\end{example}
\begin{example}\label{ex:social_interactions}
(Social interactions): In the linear-in-means model, agent $i$'s outcome depends on the total or average outcomes and treatment statuses of their peers. For instance
\begin{align*}
Y_{i} = T_{i}\beta + T_{i}^{*}(1)\gamma + \delta Y^{*}_{i}(1) + V_{i}
\end{align*}
where $T^*_{i}(1) = T_{i}(1)/N_{i}(1)$ and $Y^{*}_{i}(1) = [A^{*}(D)\mathbf{Y}]_{i}$ is the corresponding average of the outcomes of $i$'s $1$-neighbors, with $A^{*}(D)$ as defined below. Outcomes are observed in equilibrium: agents first draw $(T,V)$ and then coordinate on $Y$. Examples include \cite{manski1993identification,lee2007identification, bramoulle2009identification, blume2010identification,de2010identification, lee2010specification, goldsmith2013social}. A potential policy effect of interest is the average effect of removing a particular agent from the community.
In this example we derive the effective treatment following \cite{bramoulle2009identification}. They show that under certain conditions the linear-in-means model admits a unique equilibrium
\begin{align*}
Y_{i} &= \left[\left(I - \delta A^*(D)\right)^{-1}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma + \mathbf{V}\right)\right]_i \\
&= \lim_{S \to \infty}\sum_{s=0}^{S}\left[\delta^{s}A^*(D)^{s}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma + \mathbf{V}\right)\right]_i~,
\end{align*}
where $[\cdot]_i$ is the $i$th entry of a vector, $\delta \rho^{*} < 1$ with $\rho^{*}$ the spectral radius of $A^{*}(D)$ (not to be confused with the path distance $\rho$), and the $ij$th entry of $A^{*}(D)$ is $A_{ij}^*(D) = \mathbbm{1}\{0 < D_{ij} \leq 1\}/N_{i}(1)$. It follows that an effective treatment for $i$ is
\begin{align*}
\lambda_i(\mathbf{T}, D) = \{[\delta^{s}A^{*}(D)^{s}]_{i\cdot},\left[\delta^{s}A^*(D)^{s}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma\right)\right]_i\}_{s=0}^{\infty}
\end{align*}
and $U_{i} = \mathbf{V}$, where $[\cdot]_{i\cdot}$ denotes the $i$th row of a matrix. The second element of the effective treatment is closely related to common measures of network centrality such as eigenvector, Katz-Bonacich, or diffusion centrality \citep[see for instance][]{ballester2006s,calvo2009peer,banerjee2013diffusion}.
\end{example}
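For concreteness, here is a numerical sketch (ours) of this equilibrium with \texttt{numpy}; the matrix \texttt{D} stores link weights with \texttt{np.inf} for absent links, following the convention of Section \ref{sec:motivation_term}, and \texttt{T} and \texttt{V} are arrays over the population.
\begin{verbatim}
# Equilibrium of the linear-in-means model:
# Y = (I - delta A*)^{-1} (T beta + T*(1) gamma + V).
import numpy as np

def linear_in_means(D, T, V, beta, gamma, delta):
    n = len(T)
    nbr = ((D > 0) & (D <= 1)).astype(float)   # 1{0 < D_ij <= 1}
    N1 = nbr.sum(axis=1) + 1                   # N_i(1), counting i itself
    A_star = nbr / N1[:, None]                 # A*_ij
    T_star = (nbr @ T + T) / N1                # T*_i(1) = T_i(1)/N_i(1)
    rho_star = max(abs(np.linalg.eigvals(A_star)))
    assert delta * rho_star < 1                # uniqueness condition
    return np.linalg.solve(np.eye(n) - delta * A_star,
                           beta * T + gamma * T_star + V)
\end{verbatim}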
One key difference between Example \ref{ex:social_interactions} and Examples \ref{ex:spillovers} and \ref{ex:capital} is that the effective treatment for $i$ depends on all of the treatment statuses and network connections of the agents that are path connected to $i$. \cite{leung2019causal} shows that this is a common feature of many economic models of network interference. Our local approach will also be able to accommodate such dependence.
\section{The local approach}\label{sec:local}
We start with an informal description of the local configurations that form the basis of our approach, see also \cite{de2018identifying}, Section 4. We then specify the model, extending ideas of \cite{benjamini2001recurrence}, and show how it generalizes the Section \ref{sec:motivation} examples.
\subsection{Informal description}\label{sec:local_informal}
Intuitively, agent $i$'s local configuration refers to the agents within path distance $r$ of $i$, their characteristics, and how they are connected. Larger values of $r$ are associated with more complicated configurations, which give a more precise picture of how the agent is connected in the network. This idea is illustrated in Figure 1.
Panel (a) depicts twelve agents connected in an unweighted and undirected network with binary treatment. Agents are either assigned to treatment (red square nodes) or control (blue circle nodes). Panel (b) depicts the local configurations of radius 1 for agents 1 and 2. They are both equivalent to a wheel with the focal untreated agent in the center and three other agents on the periphery, one of which is treated. Panel (c) depicts the local configurations of radius 2 for agents 1 and 2. They are both equivalent to a ring between five untreated agents (one of which is the focal agent). The focal agent is also connected to a treated agent who is connected to an untreated agent. Another agent in the ring adjacent to the focal agent is connected to a treated agent. Panel (d) depicts the local configurations of radius 3 for agents 1 and 2. They are not equivalent because the local configuration for agent 1 contains four treated agents while the local configuration for agent 2 contains only three treated agents.
In this way, one can describe the local configurations for any choice of agent and radius. Since the diameter of the network (the maximum path distance between any two agents) is 7, any local configuration of radius greater than 7 will be equal to the local configuration of radius 7. However, for networks defined on large connected populations, increasing the radius of the local configuration typically reveals a more complicated network structure.
\begin{figure}
\centering
\caption{Illustration of local configurations.}
\begin{subfigure}[b]{1\textwidth}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance= 1.5cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.5cm,inner sep=0pt}]
\node[main node] (2) {$2$};
\node[main node] (6) [ left of =2] {$6$};
\node[main node] (5) [above right of=2, fill = red, opacity = .5, shape = rectangle] {$5$};
\node[main node] (8) [below right of =5] {$8$};
\node[main node] (1) [ below of =2] {$1$};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {$3$};
\node[main node] (4) [ below left of =1] {$4$};
\node[main node] (7) [ above right of =3] {$7$};
\node[main node] (9) [ above left of =4] {$9$};
\node[main node] (10) [ right of =7, fill = red, opacity = .5, shape = rectangle] {$10$};
\node[main node] (11) [above right of =10] {$11$};
\node[main node] (12) [above left of =9, fill = red, opacity = .5, shape = rectangle] {$12$};
\path[-]
(6) edge node {} (2)
edge node {} (9)
(2) edge node {} (5)
edge node {} (1)
(5) edge node {} (8)
(1) edge node {} (3)
(1) edge node {} (4)
(3) edge node {} (7)
(4) edge node {} (9)
(7) edge node {} (10)
(11) edge node {} (10)
(9) edge node {} (12);
\end{tikzpicture}
\caption{A network connecting a dozen agents. }
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (1) [ below of =2] {$1$};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (4) [ below left of =1] {};
\path[-]
(2) edge node {} (1)
(1) edge node {} (3)
(1) edge node {} (4);
\end{tikzpicture}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$2$};
\node[main node] (1) [ below of =2] {};
\node[main node] (5) [ above right of =2, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (6) [ left of =2] {};
\path[-]
(2) edge node {} (1)
(2) edge node {} (5)
(2) edge node {} (6);
\end{tikzpicture}
\caption{The local configurations for agents $1$ and $2$ associated with radius $1$. }
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above right of=2, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (1) [ below of =2] {$1$};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (4) [ below left of =1] {};
\node[main node] (7) [ above right of =3] {};
\node[main node] (9) [ above left of =4] {};
\path[-]
(6) edge node {} (2)
edge node {} (9)
(2) edge node {} (5)
edge node {} (1)
(1) edge node {} (3)
(1) edge node {} (4)
(3) edge node {} (7)
(4) edge node {} (9);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$2$};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above right of=2, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (1) [ below of =2] {};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (4) [ below left of =1] {};
\node[main node] (8) [ below right of =5] {};
\node[main node] (9) [ above left of =4] {};
\path[-]
(6) edge node {} (2)
edge node {} (9)
(2) edge node {} (5)
edge node {} (1)
(1) edge node {} (3)
(1) edge node {} (4)
(5) edge node {} (8)
(4) edge node {} (9);
\end{tikzpicture}
\caption{The local configurations for agents $1$ and $2$ associated with radius 2.}
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance= 1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above right of=2, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (8) [below right of =5] {};
\node[main node] (1) [ below of =2] {$1$};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (4) [ below left of =1] {};
\node[main node] (7) [ above right of =3] {};
\node[main node] (9) [ above left of =4] {};
\node[main node] (10) [ right of =7, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (12) [above left of =9, fill = red, opacity = .5, shape = rectangle] {};
\path[-]
(6) edge node {} (2)
edge node {} (9)
(2) edge node {} (5)
edge node {} (1)
(5) edge node {} (8)
(1) edge node {} (3)
(1) edge node {} (4)
(3) edge node {} (7)
(4) edge node {} (9)
(7) edge node {} (10)
(9) edge node {} (12);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance= 1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$2$};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above right of=2, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (8) [below right of =5] {};
\node[main node] (1) [ below of =2] {};
\node[main node] (3) [ below right of =1, fill = red, opacity = .5, shape = rectangle] {};
\node[main node] (4) [ below left of =1] {};
\node[main node] (7) [ above right of =3] {};
\node[main node] (9) [ above left of =4] {};
\node[main node] (12) [above left of =9, fill = red, opacity = .5, shape = rectangle] {};
\path[-]
(6) edge node {} (2)
edge node {} (9)
(2) edge node {} (5)
edge node {} (1)
(5) edge node {} (8)
(1) edge node {} (3)
(1) edge node {} (4)
(3) edge node {} (7)
(4) edge node {} (9)
(9) edge node {} (12);
\end{tikzpicture}
\caption{The local configurations for agents $1$ and $2$ associated with radius $3$.}
\end{subfigure}
\end{figure}
\cite{benjamini2001recurrence} call the infinite-radius local configuration a rooted network (graph). Our local approach is based on the observation that, for many models with network interference, an agent's effective treatment is determined by their rooted network.
For example, in the spillovers model of Example \ref{ex:spillovers}, an agent is influenced by their treatment status and the treatment statuses of their $r$-neighbors. As a result, the agent's effective treatment is determined by their local configuration of radius $r$. In the linear-in-means model of Example \ref{ex:social_interactions}, an agent's effective treatment is not necessarily determined by any local configuration of finite radius. This is because an agent is influenced by all of the other agents to which they are path connected. However, the agent's effective treatment can be arbitrarily well-approximated by a local configuration of finite radius, and so the agent's effective treatment is determined by their rooted network. We show this in Example \ref{ex:social_interactions2} below.
This observation motivates the main idea of this paper, which is to model network interference using rooted networks. We now formalize this idea.
\subsection{Model specification}
We first define the space of rooted networks which builds on \cite{benjamini2001recurrence} \citep[see also][Section 2]{aldous2004objective} and generalizes the idea of a local configuration from Section \ref{sec:local}. We then show how rooted networks characterize the effective treatments from the Section \ref{sec:motivation} examples.
\subsubsection{Rooted networks}\label{sec:local_formal}
We maintain the assumptions of Section \ref{sec:motivation}, but introduce some new notation. For an adjacency matrix $D \in \mathcal{D}$ and treatment assignment vector $\mathbf{T} \in \mathcal{T}$, a network is the triple $(V, E, \mathbf{T})$ where $V$ is the vertex set (the population of agents represented by $\mathcal{I}$) and $E$ is the weighted edge set (represented by the adjacency matrix $D$).
A rooted network $G_{i} = ((V,E,\mathbf{T}), i)$ is the triple $(V,E,\mathbf{T})$ with a focal agent $i \in V$ called the root. Informally, $G_i$ is the network $(V,E,\mathbf{T})$ ``from the point of view'' of $i$.
For any $r \in \mathbb{Z}_{+}$, $G_{i}^{r}$ is the subnetwork of $(V,E,\mathbf{T})$ rooted at $i$ and induced by the agents within distance $r$ of $i$, as measured by the path distance $\rho$. Formally, $G_{i}^{r} := ((V_{i}^{r},E_{i}^{r},T_{i}^{r}), i)$, where $V_{i}^{r} := \mathcal{N}_{i}(r) := \{j \in V:\rho(i,j) \leq r\}$, $E_{i}^{r} := \{e_{jk} \in E: j,k \in V^{r}_{i}\}$, and $T_{i}^{r} := \{T_{j} \in \mathbf{T} : j \in V_{i}^{r}\}$. We say $G_{i}^{r}$ is the rooted network $G_{i}$ truncated at radius $r$. In Section \ref{sec:local_informal} we called this object agent $i$'s local configuration of radius $r$.
For any $\varepsilon \geq 0$, two rooted networks $G_{i_{1}}$ and $G_{i_{2}}$ are $\varepsilon$-isomorphic (denoted $G_{i_{1}} \simeq_{\varepsilon} G_{i_{2}}$) if all of their $r$-neighborhoods are equivalent up to a relabeling of the non-rooted agents, but where treatments are allowed to disagree up to a tolerance of $\varepsilon$. Formally, $G_{i_1} \simeq_{\varepsilon} G_{i_{2}}$ if for any $r \in \mathbb{Z}_{+}$ there exists a bijection $f: V^{r}_{i_{1}} \leftrightarrow V^{r}_{i_{2}}$ such that $f(i_{1}) = i_{2}$, $e_{jk} = e_{f(j)f(k)}$, for any $j, k \in V^{r}_{i_{1}}$, and $|T_{j} - T_{f(j)}| \leq \varepsilon$ for any $j \in V^{r}_{i_{1}}$.
Two rooted networks that are not $0$-isomorphic are assigned a strictly positive distance inversely related to the largest $r$ and smallest $\varepsilon$ such that they have $\varepsilon$-isomorphic $r$-neighborhoods. Formally, let $\zeta: \mathbb{R}_{+} \to \mathbb{R}_{++}$ be strictly decreasing with $\lim_{x \to \infty} \zeta(x) = 0$ and $\zeta(0) = 1$ (for example, $\zeta(x) = (1+x)^{-1}$). Then the following distance defines a pseudo-metric on the set of rooted networks:
\begin{align*}
d(G_{i_{1}},G_{i_{2}}) := \min\left\{\inf_{r \in \mathbb{Z}_+,\varepsilon \in \mathbb{R}_{++}} \{\zeta(r) + \varepsilon: G_{i_{1}}^{r} \simeq_{\varepsilon} G_{i_{2}}^{r}\}, \hspace{2mm} 2\right\}
\end{align*}
We demonstrate that $d(\cdot, \cdot)$ is in fact a pseudo-metric in Appendix \ref{sec:G_props}. The outer minimization is not essential to our analysis, but taking $d$ to be bounded simplifies the exposition of Section \ref{sec:policy}.
Let $\mathcal{G}$ denote the set of equivalence classes of all possible rooted networks under $d$. We demonstrate that $(\mathcal{G}, d)$ is a separable and complete metric space in Appendix \ref{sec:G_props}. Following \cite{aldous2004objective}, we call the topology on $\mathcal{G}$ induced by $d$ the local topology, and more broadly call modeling on $\mathcal{G}$ the local approach.
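To fix ideas, the following Python sketch computes a version of $d$ under simplifying assumptions: unweighted networks, binary treatments (so that the tolerance $\varepsilon$ can be taken to be zero), and $\zeta(r) = (1+r)^{-1}$. It relies on the \texttt{networkx} library; the helper names are ours and purely illustrative.
\begin{verbatim}
import networkx as nx

def truncate(G, root, r):
    # G_i^r: subgraph induced by agents within path distance r of the root
    nodes = nx.single_source_shortest_path_length(G, root, cutoff=r)
    return G.subgraph(nodes).copy()

def rooted_isomorphic(G1, root1, G2, root2, r):
    # 0-isomorphism of the radius-r truncations: a bijection mapping root
    # to root that preserves edges and (binary) node treatments "T"
    H1, H2 = truncate(G1, root1, r), truncate(G2, root2, r)
    H1.nodes[root1]["root"] = True
    H2.nodes[root2]["root"] = True
    match = nx.algorithms.isomorphism.categorical_node_match(
        ["T", "root"], [None, False])
    return nx.is_isomorphic(H1, H2, node_match=match)

def rooted_distance(G1, root1, G2, root2, r_max=10):
    # d(G_1, G_2) = zeta(r*) with r* the largest radius at which the
    # truncations are 0-isomorphic; 2 is the outer bound in the definition
    d = 2.0
    for r in range(r_max + 1):
        if not rooted_isomorphic(G1, root1, G2, root2, r):
            break
        d = 1.0 / (1.0 + r)
    return d
\end{verbatim}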
\subsubsection{Model of outcomes}\label{sec:local_model}
Recall the model of outcomes derived in Section \ref{sec:motivation}
\begin{align*}
Y_{i} = h\left(G_{i},U_{i}\right)~,
\end{align*}
where $G_i \in \mathcal{G}$ is a collection of effective treatments. We propose taking $\mathcal{G}$ to be the space of rooted networks, so that $G_i \in \mathcal{G}$ is the rooted network of agent $i$. In Section \ref{sec:examples2} below we revisit the examples of Section \ref{sec:motivation} and show that in each case $G_{i}$ is an effective treatment for $i$.
We endow $\mathcal{G} \times \mathcal{U}$ with the usual product topology and define probability measures on the corresponding Borel sigma-algebra. We associate a stochastic rooted network and error pair $(G_i, U_i)$ with a probability measure $\mu$. For now we take $\mu$ as arbitrary and fixed by the researcher. We fix a specific choice of $\mu$ in the context of a many-networks experimental design in Section 4.2 below.
Our main objects of interest are the average structural function (ASF) and the distributional structural function (DSF), which describe the outcome for $i$ associated with a policy that sets the rooted network to some $g \in \mathcal{G}$. That is,
\[h(g) = E\left[h(g,U_i)\right] \quad\text{and}\quad h_y(g) = E\left[\mathbbm{1}\{h(g,U_i)\le y\}\right]~,\]
where the expectation is taken with respect to the marginal distribution of $U_i$ under $\mu$ \cite[see generally][]{blundell2003endogeneity}. These functions can be used to estimate and conduct inference about many causal effects of interest under network interference. For instance, the average treatment effect (ATE) associated with a policy that alters agent $i$'s rooted network from $g$ to $g'$ is described by $h(g') - h(g)$. This object is analogous to the usual ATE recovered from an experiment that describes the impact of assigning $i$ to treatment or control status \citep[see for instance][]{heckman1985alternative,manski1990nonparametric,imbens1994identification}. Other causal effects can be defined analogously. A key message of our paper is that under appropriate continuity assumptions, $h(g)$ and $h_y(g)$ can be approximated by studying effective treatments which are close to, but not exactly equal to, $g$ under $d$. We demonstrate this idea in two concrete applications involving a many-networks experimental design in Sections 4.3 and 4.4 below.
\subsection{Examples Revisited}\label{sec:examples2}
We revisit the examples of Section \ref{sec:motivation} and show that for each model rooted networks serve the role of an effective treatment.
\begin{example}\label{ex:spillovers2}
In the treatment spillovers model of Example \ref{ex:spillovers}
\begin{align*}
Y_{i} = Y\left(T_{i}, S_{i}(r), U_i\right)
\end{align*}
with $S_{i}(r) = \sum_{j \in \mathcal{I}}T_{j}\mathbbm{1}\{\rho(i,j) \leq r\}$. An effective treatment is $\lambda_i(\mathbf{T}, D) = (T_i, S_i(r))$. Let $G_{i}$ be agent $i$'s rooted network. $T_{i}$ is associated with agent $i$ and so is determined by $G_{i}^{0}$. $S_{i}(r)$ counts the number of treated agents within distance $r$ of $i$ and so is determined by $G_{i}^{r}$. As a result, $\lambda_i(\mathbf{T}, D)$ is determined by $G_{i}$. It follows that the outcome for agent $i$ can be written as $Y_{i} = h(G_{i}, U_i)$ for some $h$.
\end{example}
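As a concrete illustration, the effective treatment $(T_{i}, S_{i}(r))$ can be read directly off the truncated rooted network. The following is a minimal Python sketch, assuming \texttt{networkx} graphs with a binary node attribute \texttt{T}; the function name is ours.
\begin{verbatim}
import networkx as nx

def effective_treatment(G, i, r):
    # (T_i, S_i(r)) computed from G_i^r; note rho(i, i) = 0, so the
    # root's own treatment is included in the count S_i(r)
    nbhd = nx.single_source_shortest_path_length(G, i, cutoff=r)
    T_i = G.nodes[i]["T"]
    S_ir = sum(G.nodes[j]["T"] for j in nbhd)
    return T_i, S_ir
\end{verbatim}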
\begin{example}\label{ex:capital2}
In the social capital formation model of Example \ref{ex:capital}
\begin{align*}
Y_{i}= \left(\sum_{j \in \mathcal{I}} \mathbbm{1}\{\mathcal{N}_{i}(1)\cap \mathcal{N}_{j}(1) \neq \emptyset\}\right)\cdot U_{i}~.
\end{align*}
An effective treatment is $\lambda_i(\mathbf{T}, D) = \sum_{j \in \mathcal{I}} \mathbbm{1}\{\mathcal{N}_{i}(1)\cap \mathcal{N}_{j}(1) \neq \emptyset\}$.
Let $G_i$ be $i$'s rooted network. Here agent $i$'s effective treatment is a function of the number of agents that are of path distance $2$ from $i$, which is determined by $G_i^2$. It follows that the outcome of agent $i$ can be written as $Y_i = h(G_i ,U_i)$ for some $h$.
\end{example}
\begin{example}\label{ex:social_interactions2}
In the social interactions model of Example \ref{ex:social_interactions}, equilibrium outcomes are described by the model
\begin{align*}
Y_{i} = \lim_{S \to \infty}\sum_{s=0}^{S}\left[\delta^{s}A^*(D)^{s}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma + \mathbf{V}\right)\right]_i~.
\end{align*}
An effective treatment is $\lambda_i(\mathbf{T}, D) = \{[\delta^{s}A^{*}(D)^{s}]_{i\cdot},\left[\delta^{s}A^*(D)^{s}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma\right)\right]_i\}_{s=0}^{\infty}$ and the structural error is $U_{i} = \textbf{V}$. Let $G_{i}$ be agent $i$'s rooted network. The $i$th row of $\delta^{s}A^*(D)^{s}$ and the $i$th entry of $\delta^{s}A^*(D)^{s}\left(\mathbf{T}\beta + \mathbf{T}^*(1)\gamma \right)$ depend only on the treatment statuses and connections of agents within path distance $s$ of $i$, and so are determined by $G_{i}^{s}$. It follows that
\begin{align*}
Y_{i} = \lim_{S\to\infty}\sum_{s=0}^{S}h_{s}(G_{i}^{s},U_{i}) = h(G_{i}, U_{i})
\end{align*}
for some functions $h_{s}$ and $h$.
\end{example}
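The approximation by finite-radius truncations can be made concrete via the partial sums above. The following is a minimal Python sketch, assuming a row-normalized $A^{*}(D)$ supplied as a matrix and scalar $\delta$, $\beta$, $\gamma$; the function name is ours.
\begin{verbatim}
import numpy as np

def equilibrium_outcomes(A_star, T, T_star1, V, delta, beta, gamma, S=50):
    # partial sum sum_{s=0}^{S} delta^s A*(D)^s (T beta + T*(1) gamma + V);
    # the first s terms for agent i are determined by G_i^s
    x = T * beta + T_star1 * gamma + V
    out = np.zeros_like(x)
    term = x.copy()
    for _ in range(S + 1):
        out += term
        term = delta * (A_star @ term)
    return out
\end{verbatim}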
\section{Inference and estimation of causal effects}\label{sec:policy}
We apply the local approach framework of Section 3 to two causal inference problems. In both problems, the researcher begins with a status-quo policy (as described by an adjacency matrix and treatment assignment pair) and is tasked with evaluating the impact of an alternative. We assume that the researcher has access to data on outcomes and policies from a randomized experiment on many independent communities or clusters such as schools, villages, firms, or markets. This data structure, described in Section \ref{sec:exp_setup}, is not crucial to our methodology, but simplifies the analysis. The study of alternative experimental designs (for example, with data from one large network) or endogenous policies is left to future work.
Our first application, described in Section \ref{sec:dist_test}, is testing policy irrelevance. Specifically, we test the hypothesis that the two policies are associated with the same distribution of outcomes. Our second application, described in Section \ref{sec:est_asf}, is estimating policy effects. Specifically, we construct a $k$-nearest-neighbors estimator for the policy function of Section 3.2.2 and provide non-asymptotic bounds on its estimation error.
These applications contrast with a developed literature that studies the magnitude of any potential spillover effects or tests the hypothesis of no spillovers \cite[see for instance][]{aronow2012general,athey2018exact,hu2021average,savje2021average}. A recent literature also considers the estimation and inference of exposure effects where the estimand depends on both a true model of dependence and a potentially misspecified exposure map \citep[see][]{leung2019causal, savje2021causal}.
\subsection{Many-networks setting}\label{sec:exp_setup}
A random sample of communities or clusters is indexed by $t \in [T] := \{1, \ldots, T\}$. Each $t \in [T]$ is associated with a finite collection of $m_{t}$ observations $\{W_{it}\}_{i \in [m_{t}]}$, where $m_t$ is a positive integer-valued random variable, $W_{it} := (Y_{it},G_{it})$, and $Y_{it} := h(G_{it}, U_{it})$ for some unobserved error $U_{it}$. Intuitively, each community $t$ is represented by an initial network connecting $m_t$ agents, and the $m_{t}$ rooted networks refer to this initial network rooted at each of the agents in the community. Let $W_{t} := \{W_{it}\}_{i \in [m_t]}$. We impose the following assumptions on $\{(G_{it}, U_{it})\}_{i \in [m_{t}], t \in [T]}$. \pagebreak
\begin{assumption}\label{ass:sampling}
\begin{enumerate}[(i)]
\item[]
\item $\{W_{t}\}_{t \in [T]}$ is independent and identically distributed (across communities).
\item $\{U_{it}\}_{i \in [m_{t}], t \in [T]}$ is identically distributed (within and across communities). $U$ is an independent copy of $U_{it}$ (that is, $U$ has the same marginal distribution as $U_{it}$ but is independent of $\{W_{t}\}_{t \in [T]}$).
\item For any measurable $f$, $i \in [m_{t}]$, and $t \in [T]$, \[E\left[f(G_{it},U_{it}) | G_{1t},...,G_{m_{t}t}\right] = E\left[f(G_{it},U)|G_{it}\right]~,\] assuming that the conditional expectations exist.
\end{enumerate}
\end{assumption}
Assumption \ref{ass:sampling} (i) is what makes our analysis ``many-networks.'' It states that the networks and errors are independent and identically distributed across communities. We exploit independence across communities to characterize the statistical properties of our test procedure and estimator below. We do not make any restrictions on the dependence structure between observations within a community. Weakening the independence assumption (for instance, considering dependent data from one large community) would require additional assumptions characterizing the intra-community dependence structure, which we leave to future work.
Assumption \ref{ass:sampling} (ii) fixes the marginal distribution of the errors. It is used to define the policy function and effects of interest. Specifically, an average policy effect is defined as the average outcome over the homogeneous marginal distribution of $U_{it}$ for a fixed rooted network. This assumption can be dropped by defining the expectation to be with respect to the (mixture) distribution of $U_{\iota_{t} t}$ generated by drawing $\iota_{t}$ uniformly at random from $[m_{t}]$.
Assumption \ref{ass:sampling} (iii) states that the rooted networks are exogenous (i.e. the errors are policy-irrelevant). Specifically, we require that the conditional distribution of $(G_{it}, U_{it})$ given $G_{1t},...,G_{m_{t}t}$ is equal to the conditional distribution of $(G_{it}, U)$ given $G_{it}$, where $U$ is an independent copy of $U_{it}$. Exogeneity is a strong assumption, but allows us to approximate the unknown policy functions using sample averages (see below). It is also often assumed in the literature cited in Section \ref{sec:motivation_examples}. The study of endogenous rooted networks where the policy $(\mathbf{T},D)$ potentially depends on the errors $\textbf{U}$ is left to future work.
\subsection{Testing policy irrelevance}\label{sec:dist_test}
The policy maker begins with a status-quo community policy described by some treatment and network pair $(\mathbf{t}, d)$, and proposes an alternative $(\mathbf{t}', d')$. The researcher is tasked with testing whether the two policies are associated with the same distribution of outcomes for agents whose effective treatment under the status-quo is described by the rooted network $g \in \mathcal{G}$ and whose effective treatment under the alternative is described by the rooted network $g' \in \mathcal{G}$. Following Section 3.2, the potential outcomes under treatments $g$ and $g'$ are given by $ h(g,U)$ and $ h(g',U)$ respectively for some error $U$. The null hypothesis of policy irrelevance is
\begin{equation}\label{eq:null}
H_0: h_{y}(g) = h_{y}(g') \text{ for every } y \in \mathbb{R},
\end{equation}
where $h_{y}(g) := E\left[\mathbbm{1}\{h(g,U)\leq y\}\right]$. Under Assumption \ref{ass:sampling} (iii), $h_{y}(g)$ describes the conditional distribution of $Y_{i}$ given $G_{i} \simeq g$.
The proposed test is described in Section \ref{sec:algorithm}. Intuitively, it compares the empirical distribution of outcomes for the agents in the data whose rooted networks are most similar to $g$ or $g'$. Asymptotic validity of the test relies on the following assumptions.
\subsubsection{Assumptions}
We impose smoothness conditions on the model parameters and suppose that the rooted networks $g$ and $g'$ are ``supported'' in the data. We do not believe these conditions to be restrictive in practice. Let $\psi_{\tilde{g}}(\ell) := P\left(\min_{i \in [m_t]} d(G_{it}, \tilde{g}) \le \ell\right)$.
\begin{assumption}\label{ass:pos_mass}
For every $\ell > 0$ and $g_{0} \in \{g,g'\}$, $\psi_{g_{0}}(\ell) > 0$.
\end{assumption}
The function $\psi_g(\ell)$ measures the probability that there exists an agent from a randomly drawn community whose rooted network is within distance $\ell$ of $g$. It partly determines the statistical properties of our test in this section and the estimator in Section \ref{sec:est_asf}.
Assumption \ref{ass:pos_mass} states that the nearest neighbor of $g$ or $g'$ from a randomly drawn community has a positive probability of being within distance $\ell$ of $g$ or $g'$. The condition implies that as the number of communities $T$ grows, the researcher will eventually observe agents whose rooted networks are arbitrarily close to $g$ or $g'$.
\begin{assumption}\label{ass:dist_cont}
For every $\tilde{g} \in \mathcal{G}$, the distribution of $h(\tilde{g},U)$ is continuous, and for every $y \in \mathbb{R}$, $h_{y}(\tilde{g})$ is continuous at $\tilde{g}$.
\end{assumption}
Assumption \ref{ass:dist_cont} states that the distribution of outcomes associated with agents whose rooted networks are similar to $\tilde{g}$ approximates the distribution of outcomes at $\tilde{g}$. Recall that this continuity assumption is satisfied by the three examples of Section \ref{sec:motivation_examples}.
\subsubsection{Test procedure}\label{sec:algorithm}
We propose an approximate permutation test for $H_{0}$ building on \cite{canay2017randomization, canay2018approximate}. The test procedure is described in Algorithm \ref{algo:test}. We assume that $T$ is even to simplify notation. When determining the closest agent in Step 2 or reordering the vectors in Step 3, ties are broken uniformly at random.
\begin{algorithm}\label{algo:test}{Input: data $\{W_{t}\}_{t \in [T]}$ and parameters $q \in [T/2]$, $\alpha \in [0,1]$. Output: a rejection decision.}
\begin{itemize}
\item[1.] Randomly partition the communities $[T]$ into two sets of size $T/2$, labelled $\mathcal{D}_1$ and $\mathcal{D}_2$.
\item[2.] For every $t\in \mathcal{D}_{1}$, let $i_{t}(g) := \text{argmin}_{i \in [m_t]}\{d(G_{it},g)\}$ be the agent whose value of $G_{it}$ is closest to $g$ and $W_{t}(g) := (Y_{t}(g),G_{t}(g)) := (Y_{i_{t}(g)t},G_{i_{t}(g)t})$ be the $i_{t}(g)$th outcome and rooted network. Similarly define $W_{t}(g') := (Y_{t}(g'),G_{t}(g'))$ for every $t \in \mathcal{D}_{2}$.
\item[3.] Reorder $\{W_{t}(g)\}_{t \in \mathcal{D}_{1}}$ and $\{W_{t}(g')\}_{t \in \mathcal{D}_{2}}$ so that the entries are increasing in $d(G_t(g),g)$ and $d(G_t(g'), g')$ respectively. Denote the first $q$ elements of the reordered $\{W_{t}(g)\}_{t \in \mathcal{D}_{1}}$
\begin{align*}
W^{*}(g) &:= W^{*}_{1}(g), W^{*}_{2}(g), ..., W^{*}_{q}(g) \\
&:= (Y^{*}_{1}(g),G^{*}_{1}(g)), (Y^{*}_{2}(g),G^{*}_{2}(g)),...,(Y^{*}_{q}(g),G^{*}_{q}(g)).
\end{align*}
Similarly define $W^{*}(g').$ Collect the $2q$ outcomes of $W^{*}(g)$ and $W^{*}(g')$ into the vector
\[S_T := (S_{T,1}, ..., S_{T,2q}) := \left(Y^{*}_{1}(g), \ldots, Y^{*}_{q}(g), Y^{*}_{1}(g'), \ldots, Y^{*}_{q}(g')\right)~.\]
\item[4.] Define the Cram\'er-von Mises test statistic
\[R(S_T) = \frac{1}{2q}\sum_{j=1}^{2q}\left(\hat{F}_{1}(S_{T,j}; S_T) - \hat{F}_{2}(S_{T,j}; S_T)\right)^2~,\]
where \[\hat{F}_{1}(y; S_T) = \frac{1}{q}\sum_{j = 1}^q \mathbbm{1}\{S_{T,j} \le y\} \hspace{2mm} \text{and} \hspace{2mm} \hat{F}_{2}(y; S_T) = \frac{1}{q}\sum_{j = q + 1}^{2q} \mathbbm{1}\{S_{T,j} \le y\}~.\]
\item[5.] Let $\mathbf{G}$ be the set of all permutations $\pi = (\pi(1), \ldots, \pi(2q))$ of $\{1, ..., 2q\}$ and
\[S_T^{\pi} = (S_{T,\pi(1)}, ..., S_{T,\pi(2q)})~.\]
Reject $H_{0}$ if the $p$-value $p \le \alpha$ where $p := \frac{1}{|\mathbf{G}|}\sum_{\pi \in \mathbf{G}} \mathbbm{1}\{R(S_T^{\pi}) \ge R(S_T)\}$.
\end{itemize}
\end{algorithm}
In some cases the $p$-value in Step 5 may be difficult to compute exactly because the set $\mathbf{G}$ is intractably large. It can be shown that the result below continues to hold if $\mathbf{G}$ is replaced by $\mathbf{\hat{G}}$, where $\mathbf{\hat{G}} = \{\pi_1, ..., \pi_B\}$, $\pi_1$ is the identity permutation, and $\pi_2, ..., \pi_B$ are drawn independently and uniformly at random from $\mathbf{G}$. This sampling is standard in the literature; see also \cite{canay2018approximate}, Remark 3.2.
The test presented in Algorithm \ref{algo:test} is a non-randomized version of the permutation test, in the sense that the decision to reject the null hypothesis is a deterministic function of the data. This leads to a test which is potentially conservative. We could alternatively consider a non-conservative version of this test which is randomized \citep[see for example][ Section 15.2]{lehmann2006testing}.
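For concreteness, the following Python sketch implements Steps 3--5 of Algorithm \ref{algo:test} with $B$ sampled permutations, assuming the nearest-neighbor outcome vectors from Step 2 have already been collected as NumPy arrays; the function names are ours.
\begin{verbatim}
import numpy as np

def cvm_statistic(S, q):
    # Cramer-von Mises statistic comparing the first q and last q entries
    F1 = np.mean(S[:q, None] <= S[None, :], axis=0)  # ECDF of sample 1 at each S_j
    F2 = np.mean(S[q:, None] <= S[None, :], axis=0)  # ECDF of sample 2 at each S_j
    return np.mean((F1 - F2) ** 2)

def permutation_test(Y_g, Y_gp, alpha=0.05, B=999, seed=0):
    # approximate permutation p-value for H0: same distribution of outcomes
    rng = np.random.default_rng(seed)
    q = len(Y_g)
    S = np.concatenate([Y_g, Y_gp])  # the vector S_T of Step 3
    R_obs = cvm_statistic(S, q)
    count = 1  # the identity permutation
    for _ in range(B - 1):
        count += cvm_statistic(rng.permutation(S), q) >= R_obs
    p = count / B
    return p, p <= alpha
\end{verbatim}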
\subsubsection{Asymptotic validity}
If the entries of $Y^{*}(g)$ and $Y^{*}(g')$ were identically distributed to $h(g,U)$ and $h(g',U)$ respectively, then the test described in Algorithm \ref{algo:test} would control size in finite samples following standard arguments \citep[see for instance][Theorem 15.2.1]{lehmann2006testing}. However, since the entries of $G^{*}(g)$ and $G^{*}(g')$ are not exactly $g$ and $g'$, this condition is generally false.
Our assumptions instead imply that in an asymptotic regime where $q$ is fixed and $T\to\infty$, the entries of $Y^{*}(g)$ and $Y^{*}(g')$ are approximately equal in distribution to $h(g,U)$ and $h(g',U)$. A fixed-$q$ rule is appropriate in our setting because the quality of the nearest neighbors $G^{*}(g)$ and $G^{*}(g')$, in terms of similarity to $g$ or $g'$, may degrade rapidly with $q$. This is shown via simulation in Section \ref{sec:psi_measure} below.
We demonstrate that the test procedure in Algorithm \ref{algo:test} is asymptotically valid using the framework of \cite{canay2017randomization,canay2018approximate}. Finite-sample behavior is examined via simulation in Section \ref{sec:simulations}.
\begin{theorem}\label{thm:perm_test}
Under Assumptions \ref{ass:sampling}, \ref{ass:pos_mass} and \ref{ass:dist_cont}, the test described in Algorithm \ref{algo:test} is asymptotically ($T\to\infty$, $q$ fixed) level $\alpha$.
\end{theorem}
\subsection{Estimating Policy Effects}\label{sec:est_asf}
The policy maker begins with a status-quo community policy described by some treatment and network pair $(\mathbf{t}, d)$, and proposes an alternative $(\mathbf{t}', d')$. The researcher is tasked with estimating the average effect of the policy change for agents whose effective treatment under the status-quo is described by the rooted network $g \in \mathcal{G}$ and whose effective treatment under the alternative is described by the rooted network $g' \in \mathcal{G}$. Following Section 3.2, the potential outcomes under policies $g$ and $g'$ are described by $h(g,U)$ and $ h(g',U)$ respectively for some error $U$. The average policy effect of interest is
\[h(g') - h(g)\]
where $h(g) = E\left[h(g,U)\right]$. Alternative effects (e.g. distributional effects) based on other features of the conditional distribution of outcomes can be estimated analogously.
The proposed estimator for the policy effect is described in Section \ref{sec:estimator}. Intuitively, the estimator averages the outcomes of agents whose rooted networks are most similar to $g$ or $g'$. Our bounds on estimation error rely on the following assumptions.
\subsubsection{Assumptions}
We impose smoothness conditions on the model parameters and bound the variance of the outcome. We do not believe these assumptions to be restrictive in practice. Let $\psi_{\tilde{g}}(\ell) := P\left(\min_{i \in [m_t]} d(G_{it}, \tilde{g}) \le \ell\right)$.
\begin{assumption}\label{ass:no_ties}
For every $g_{0} \in \{g,g'\}$, the function $\ell \mapsto \psi_{g_{0}}(\ell)$ is continuous.
\end{assumption}
Recall from Section \ref{sec:dist_test} that the function $\psi_{\tilde{g}}(\ell)$ measures the probability that there exists an agent from a randomly drawn community whose rooted network is within distance $\ell$ of $\tilde{g}$. Intuitively, $\psi_{\tilde{g}}$ measures the amount of regularity in the community. It plays a key role in determining the statistical properties of our estimator.
Assumption \ref{ass:no_ties} states that $\psi_{g_{0}}(\ell)$ is continuous in $\ell$ for $g_{0} \in \{g,g'\}$, so that the distance between $g_{0}$ and its nearest neighbor has a continuous distribution. It justifies a probability integral transform used to characterize the bias of the estimator. If the data are such that Assumption \ref{ass:no_ties} does not hold, then we can recover the assumption by simply adding a randomizing component to the metric \citep[see the discussion following equation (19) in][for details]{gyorfi2020universal}.
\begin{assumption}\label{ass:m_smooth}
For each $g_{0} \in \{g,g'\}$ there exists an increasing $\phi_{g_{0}}: \mathbb{R}_{+} \to \mathbb{R}_{+}$ such that $\phi_{g_{0}}(x) \rightarrow \phi_{g_{0}}(0) = 0$ as $x \rightarrow 0$ and for every $\tilde{g} \in \mathcal{G}$
\[\left|h(g_{0}) - h(\tilde{g})\right| \le \phi_{g_{0}}\left(d(g_{0},\tilde{g})\right)~.\]
\end{assumption}
Assumption \ref{ass:m_smooth} states that the regression function $h$ has a modulus of continuity $\phi_{g_{0}}$ at $g_{0}$. This restriction is analogous to the smoothness assumptions on the conditional mean function that are common in the nonparametric estimation literature. A model of network interference often implies a specific choice of $\phi_{g_{0}}$. See Section 4.3.4 below.
\begin{assumption}\label{ass:var}
For every $\tilde{g} \in \mathcal{G}$, $\sigma^{2}(\tilde{g}) := E\left[(h(\tilde{g},U_{it}) - h(\tilde{g}))^2\right] \leq \sigma^2$.
\end{assumption}
Assumption \ref{ass:var} bounds the variance of the outcome variable.
\subsubsection{Estimator}\label{sec:estimator}
Let $W_t(g)$ be the observation from $\{W_{it}\}_{i \in [m_t]}$ whose value of $G_{it}$ is closest to $g$ (see also Algorithm \ref{algo:test}, Step 2). Order the elements of $\{W_t(g)\}_{t \in [T]}$ so that they are increasing in $d(G_t(g), g)$, and denote the $k$ closest values
\[W^{*}_{1}(g), W^{*}_{2}(g), \ldots, W^{*}_{k}(g).\]
The proposed estimator for $h(g)$ is the sample mean of the outcomes $\{Y^{*}_{j}(g)\}_{j=1}^k$ associated with $\{W^{*}_{j}(g)\}_{j=1}^k$
\[\hat{h}(g) := \frac{1}{k}\sum_{j = 1}^k Y^{*}_{j}(g)\]
and the estimator for the average policy effect $h(g') - h(g)$ is the difference
\[\hat{h}(g') - \hat{h}(g) := \frac{1}{k}\sum_{j = 1}^k \left(Y^{*}_{j}(g')-Y^{*}_{j}(g)\right).\]
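A minimal Python sketch of the estimator follows, assuming the researcher has precomputed \texttt{dist[t][i]} $= d(G_{it}, g)$ and \texttt{Y[t][i]} $= Y_{it}$ for every agent; the names are ours.
\begin{verbatim}
import numpy as np

def knn_policy_estimate(Y, dist, k):
    # Step 1: within each community t, keep the agent closest to g (W_t(g))
    best = [int(np.argmin(d_t)) for d_t in dist]
    d_best = np.array([d_t[b] for d_t, b in zip(dist, best)])
    y_best = np.array([y_t[b] for y_t, b in zip(Y, best)])
    # Step 2: average the outcomes of the k communities with the best matches
    keep = np.argsort(d_best)[:k]
    return y_best[keep].mean()

# estimated policy effect: difference of the two averages
# effect = knn_policy_estimate(Y, dist_gprime, k) - knn_policy_estimate(Y, dist_g, k)
\end{verbatim}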
\subsubsection{Bound on estimation error}
We derive a finite-sample bound on the mean-squared error of $\hat{h}(g)$ using the framework of \cite{biau2015lectures,doring2017rate,gyorfi2020universal}.
\begin{theorem}\label{thm:mse_bound}
Under Assumptions \ref{ass:sampling}, \ref{ass:no_ties}, \ref{ass:m_smooth}, and \ref{ass:var},
\[E\left[\left(\hat{h}(g) - h(g)\right)^2\right] \le \frac{\sigma^2}{k} + E\left[\varphi_g(U_{(k,T)})^2\right]~,\]
where $\varphi_g(x) = \phi_g \circ \psi_g^{\dagger}(x)$, $\psi_g^{\dagger}:[0,1] \rightarrow \mathbb{R}_{+}$ refers to the upper generalized inverse
\[\psi_g^{\dagger}(x) = \sup\{\ell \in \mathbb{R}_{+}: \psi_g(\ell) \le x\}~,\] and $U_{(k,T)}$ is distributed $Beta(k,T-k+1)$.
\end{theorem}
The bound in Theorem \ref{thm:mse_bound} features a familiar bias-variance decomposition. The variance component $\sigma^2/k$ is standard and decreases as $k$ grows large. In contrast, the bias component $E[\varphi_g(U_{(k,T)})^2]$ and its relationship with $k$ are difficult to characterize without further information about $\phi_{g}$ and $\psi_{g}$. We discuss how the bound depends on features of these parameters in Section 4.3.4 below.
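When $\phi_{g}$ and $\psi_{g}$ are known or posited, the bias term can be evaluated by simulating the Beta order statistic. Below is a minimal Python sketch, where \texttt{phi} and \texttt{psi\_dagger} are hypothetical vectorized callables supplied by the user.
\begin{verbatim}
import numpy as np

def bias_term(phi, psi_dagger, k, T, n_draws=100_000, seed=0):
    # Monte Carlo evaluation of E[(phi_g o psi_g^dagger)(U_(k,T))^2],
    # using U_(k,T) ~ Beta(k, T - k + 1)
    rng = np.random.default_rng(seed)
    U = rng.beta(k, T - k + 1, size=n_draws)
    return np.mean(phi(psi_dagger(U)) ** 2)

# mse_bound = sigma2 / k + bias_term(phi, psi_dagger, k, T)
\end{verbatim}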
Theorem \ref{thm:mse_bound} has the immediate corollary
\begin{corollary}\label{cor:mse_bound}
Suppose the hypotheses of Theorem \ref{thm:mse_bound} hold. Then
\[E\left[\left(\hat{h}(g')-\hat{h}(g) - (h(g')-h(g))\right)^2\right] \le \frac{4\sigma^2}{k} + 4E\left[\varphi_{g\vee g'}(U_{(k,T)})^2\right]\]
where $\varphi_{g\vee g'} := \varphi_{g}\vee \varphi_{g'}$.
\end{corollary}
One can potentially improve this bound via sample splitting. This is left to future work.
\subsubsection{Interpreting the bias}
Theorem \ref{thm:mse_bound} establishes consistency for our estimator of the policy function in well-behaved settings. For example, if (for fixed $g$) $\varphi_g(\cdot)$ is bounded and continuous at zero with $\varphi_g(0) = 0$, and if $k \rightarrow \infty$ with $k/T \rightarrow 0$ as $T \rightarrow \infty$, then
\[E[\varphi_g(U_{(k,T)})^2] = E\left[\left(\phi_g \circ \psi_g^{\dagger}(U_{(k,T)})\right)^2\right] \rightarrow 0~.\]
Further characterization of the bias requires additional information about the components $\phi_{g}$ and $\psi_{g}$. Intuitively, the first controls the smoothness of the policy function $h$ at $g$ and the second controls the quality of the nearest neighbors that make up $\hat{h}(g)$ in terms of proximity to $g$. This subsection provides an analytical discussion. Supporting simulation evidence can be found in Section \ref{sec:simulations}.
The continuity parameter $\phi_{g}$ is often relatively easy to characterize because many models of network interference give an explicit bound. In particular, for our three examples in Section 2.3, $\phi_{g}(x)$ converges quickly to $0$ as $x \to 0$ for any $g\in\mathcal{G}$. In the neighborhood spillovers model of Example \ref{ex:spillovers} with binary treatments and uniformly bounded expected outcomes, $\phi_{g}(x) \leq C\mathbbm{1}\{x > \zeta(r)\}$ where $C = \sup_{g \in \mathcal{G}}2h(g)$, because if $d(g,\tilde{g}) \leq \zeta(r)$ then $h(g) = h(\tilde{g})$. Similarly, in the social capital model of Example \ref{ex:capital} with uniformly bounded $2$-neighborhoods, $\phi_{g}(x) \leq C\mathbbm{1}\{x > \zeta(2)\}$ where $C$ bounds the number of agents within path distance $2$ of the root agent.
In the linear-in-means peer effects model of Example \ref{ex:social_interactions} with binary treatments and uniformly bounded $1$-neighborhood treatment counts, $\phi_{g}(x) \leq \frac{C(\delta \rho)^{\zeta^{\dagger}(x)}}{1 - \delta\rho}$ where $\rho$ bounds the spectral radius of $A^{*}(D)$, $|\delta\rho| < 1$ by assumption, and $C = \sup_{i \in V(\tilde{g}), \tilde{g} \in \mathcal{G}}2\left|T_{i}\beta + T_{i}^{*}(1)\gamma\right|$. This is because if $d(g,\tilde{g}) \leq \zeta(r)$ then $h_{s}(g) = h_{s}(\tilde{g})$ for every $s \leq r$, and the remainder term in the policy function satisfies $\left|\sum_{s = r+1}^{\infty}[\delta^{s}A^{*}(D)^{s}(\mathbf{T}\beta + \mathbf{T}^{*}(1)\gamma)]_{i}\right| \leq C\sum_{s = r+1}^{\infty}\delta^{s}\rho^{s} \leq \frac{C(\delta \rho)^{r}}{1 - \delta\rho}$ for any $g \in \mathcal{G}$. See also \cite{leung2019causal}.
In contrast to these explicit bounds on $\phi_{g}$, we do not know of any convenient analytical way to characterize the regularity parameter $\psi_{g}$ even for relatively simple models of link formation. If the network is sparse or has a predictable structure (agents interact in small groups or on a regular lattice), then the rooted network variable may essentially act like a discrete random variable, and so $\psi_{g}(\ell)$ may be uniformly bounded away from $0$ (for $\ell > 0$ fixed). Unfortunately, such regularity rarely describes social or economic network data.
Irregular network formation models are more common. For example, a large literature considers models of network formation in which connections between agents are conditionally independent across agent-pairs. Examples include the Erd\H{o}s-R\'enyi model, the latent space model, and the stochastic blockmodel \cite[see generally][]{graham2019network}. For such models, the implicit $\psi_{g}$ function can change dramatically with the model parameters. We demonstrate some example $\psi_{g}$ functions for the special case of the Erd\H{o}s-R\'enyi model in Section 5 below.
\section{Simulation evidence}\label{sec:simulations}
Section 5.1 describes the simulation design. Section 5.2 gives results for the first application, testing policy irrelevance. Section 5.3 gives results for the second application, estimating policy effects.
\subsection{Simulation design}
We simulate data from $T$ communities, where $T$ is specified below. Each community contains $20$ agents. Links between agents are drawn from an Erd\H{o}s-R\'enyi model with parameter $0.1$. That is, links between agent-pairs are independent and identically distributed $\text{Bernoulli}(0.1)$ random variables. The Erd\H{o}s-R\'enyi model is chosen not because it generates realistic-looking network data \cite[see][]{jackson2007meeting} but because a large class of rooted networks occurs with non-trivial probability. This design choice is unfavorable to our method, which prefers models that reliably generate a small number of rooted network motifs \citep[see for instance][]{de2018identifying}. One can instead think of the Erd\H{o}s-R\'enyi model as the policy maker assigning network policies to communities at random. We discuss this model in more detail in Section 5.4.
In this relatively simple design the metric presented in Section \ref{sec:local} also simplifies. The distance between two rooted networks is simply $\zeta(r)$ where $r$ is the largest radius such that the networks truncated at radius $r$ are $0$-isomorphic. In everything that follows we take $\zeta(r) = (1 + r)^{-1}$.
Outcomes are generated for each agent $i \in [20]$ in each community $t \in [T]$ according to the model
\[Y_{it} = \alpha_1f(G_{it}^1) + \alpha_2f(G_{it}^2) + U_{it}~,\]
where $\alpha = (\alpha_1, \alpha_2) \in \mathbb{R}^2$ is specified below,
\[f(g) := \operatorname{deg}(g) + 2\operatorname{clust}(g)~,\]
\[\operatorname{deg}(g) := \frac{1}{|V(g)|}\sum_{i \in V(g)}\sum_{j \in V(g)}\mathbbm{1}\{ij \in E(g)\}\]
measures the average degree of the network $(V(g), E(g))$ and
\[\operatorname{clust}(g) := \frac{1}{|V(g)|}\sum_{i \in V(g)}\frac{\sum_{j \in V(g)}\sum_{k\in V(g)}\mathbbm{1}\{ij, ik, jk \in E(g)\}}{\sum_{j \in V(g)}\sum_{k\in V(g)}\mathbbm{1}\{ik, jk \in E(g)\}}\]
measures the average clustering of the network $(V(g), E(g))$, and $U_{it} \sim U[-5, 5]$ independent of $G_{it}$. Our focus on degree and clustering statistics is meant to mimic the first two examples of Section 2.3. That is, these network statistics are determined by the rooted network truncated at the first or second neighborhood.
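The following Python sketch illustrates this design, again relying on \texttt{networkx}. Note that \texttt{nx.average\_clustering} is a close but not exact analogue of $\operatorname{clust}(g)$ above, and the helper names are ours.
\begin{verbatim}
import numpy as np
import networkx as nx

def f_stat(H):
    # f(g) = deg(g) + 2 clust(g) for a truncated rooted network H
    avg_deg = 2.0 * H.number_of_edges() / H.number_of_nodes()
    avg_clust = nx.average_clustering(H)  # close analogue of clust(g)
    return avg_deg + 2.0 * avg_clust

def simulate_community(alpha, n=20, p=0.1, seed=0):
    rng = np.random.default_rng(seed)
    G = nx.erdos_renyi_graph(n, p, seed=int(rng.integers(2**31)))
    Y = np.empty(n)
    for i in G.nodes:
        dists = nx.single_source_shortest_path_length(G, i)
        G1 = G.subgraph([j for j, d in dists.items() if d <= 1])
        G2 = G.subgraph([j for j, d in dists.items() if d <= 2])
        Y[i] = (alpha[0] * f_stat(G1) + alpha[1] * f_stat(G2)
                + rng.uniform(-5.0, 5.0))
    return G, Y
\end{verbatim}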
\subsection{Testing policy irrelevance}
We first evaluate how the test procedure outlined in Algorithm \ref{algo:test} of Section 4.2.2 controls size when the null hypothesis of policy irrelevance is true. We set the outcome coefficients to $\alpha = (0,2)$. The choice of rooted networks (policies) we consider is represented by $g_{5}$ and $g_{6}$ in Figure \ref{fig:rooted_networks_testing}. These networks are chosen because under the model in Section 5.1 the conditional distributions of outcomes associated with $g_{5}$ and $g_{6}$ are the same, but the conditional distributions of outcomes associated with $g^{1}_{5}$ and $g^{1}_{6}$ are very different. The idea of this design is to illustrate the potential size distortion due to the fact that the permutation test is approximate. This can be seen in Table \ref{tab:rej_prob}.
Columns 2--4 of Table \ref{tab:rej_prob} depict the results of $1000$ simulations for $T \in \{20,50,100,200\}$ communities, test parameter $q \in \{5,8,10\}$, and nominal level $\alpha = 0.05$. The results show that the test rejects the null hypothesis with probability approximately equal to $\alpha$ when $q$ is small ($q = 5$) or $T$ is large ($T = 200$). Size distortion occurs when $q$ is large and $T$ is small ($q \geq 8$ and $T \leq 50$).
To evaluate the power properties of the test procedure, we consider two rooted networks $g_{1}$ and $g_{4}$ that are associated with two different conditional distributions of outcomes under the model in Section 5.1. These two networks are shown in Figure \ref{fig:rooted_networks_testing}.
Columns 5--7 of Table \ref{tab:rej_prob} depict the results for the same simulations as Columns 2--4, but for the test based on $g_{1}$ and $g_{4}$ instead of $g_{5}$ and $g_{6}$. The results show that the test correctly rejects the null hypothesis with probability greater than $\alpha$. The probability of rejection generally increases with $q$ (except for $T = 20$) at the cost of potential size distortions. Overall, our results suggest that unless a researcher has additional information about the structure of network interference relative to the quality of matches, they should not choose $q$ to be too large.
\begin{figure}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$1$};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above left of=6] {};
\node[main node] (1) [below left of =6] {};
\path[-]
(6) edge node {} (2)
(1) edge node {} (6)
(5) edge node {} (6);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ above left of =2] {$4$};
\node[main node] (5) [right of=2] {};
\node[main node] (1) [ below left of =2] {};
\path[-]
(6) edge node {} (2)
edge node {} (1)
(2) edge node {} (5)
edge node {} (1);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ above left of =2] {$5$};
\node[main node] (5) [above right of=2] {};
\node[main node] (1) [ below left of =2] {};
\node[main node] (3) [ below right of =2] {};
\path[-]
(6) edge node {} (2)
edge node {} (1)
(2) edge node {} (5)
edge node {} (1)
edge node {} (3);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ above left of =2] {};
\node[main node] (5) [above right of=2] {$6$};
\node[main node] (1) [ below left of =2] {};
\node[main node] (3) [ below right of =2] {};
\path[-]
(6) edge node {} (2)
edge node {} (1)
(2) edge node {} (5)
edge node {} (1)
edge node {} (3);
\end{tikzpicture}
\caption{Four rooted networks truncated at radius $2$, labeled $g_1$, $g_4$, $g_5$, and $g_6$.}\label{fig:rooted_networks_testing}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Rejection probabilities: $\alpha = 0.05$ ($1,000$ Monte Carlo iterations)}
\begin{tabular}{cccc|ccc}
\toprule
& \multicolumn{3}{c}{$H_0: g_5 =_d g_6$} & \multicolumn{3}{c}{$H_0: g_1 =_d g_4$} \\
& \multicolumn{3}{c}{$q$} & \multicolumn{3}{c}{$q$} \\
\cmidrule{2-7} $T$ & 5 & 8 & \multicolumn{1}{c}{10} & 5 & 8 & 10 \\
\midrule
20 & 6.6 & 8.0 & 7.0 & 13.8 & 10.9 & 9.9 \\
50 & 4.9 & 8.5 & 10.9 & 20.4 & 29.9 & 33.8 \\
100 & 4.8 & 6.3 & 7.5 & 20.8 & 32.3 & 41.1 \\
200 & 3.3 & 4.5 & 4.9 & 21.1 & 34.3 & 44.5 \\
\bottomrule
\end{tabular}%
\label{tab:rej_prob}%
\end{table}%
\subsection{Estimating policy effects}\label{sec:sim_MSE}
We study the mean-squared error of the $k$-nearest-neighbor estimator for the policy function $h$ given in Section 4.3.2 for four rooted networks and $\alpha \in \{ (1,0),(1,1/2) \}$. Under $\alpha = (1,0)$ the distribution of outcomes depends on the features of the network within radius $1$ of the root. Under $\alpha = (1,1/2)$ the distribution of outcomes also depends on the features of the network within radius $2$ of the root.
The choice of rooted networks we consider is represented by $g_{1}$ to $g_{4}$ in Figure \ref{fig:rooted_networks}. Networks $g_{1}$ and $g_{2}$ depict two wheels with the rooted agent on the periphery. These networks have moderate average degree and no average clustering: $(3/2,0)$ and $(5/3,0)$ respectively. Network $g_{3}$ depicts a closed triangle connected to a wheel, with the rooted agent both on the periphery of the wheel and part of the triangle. This network has moderate average degree and average clustering: $(2,1/3)$. Finally, $g_{4}$ depicts a closed triangle connected to a single agent. This network has moderate average degree and high average clustering: $(2,7/12)$.
The results of the simulation are shown in Table \ref{tab:MSE_results}. As suggested by Theorem \ref{thm:mse_bound}, mean-squared error is generally decreasing in $T$ for a fixed choice of $k$. In addition, mean-squared error is generally smaller for the $\alpha = (1, 0)$ experiment than for the $\alpha = (1, 0.5)$ experiment for a fixed choice of $T$ and $k$. The effect is more pronounced for the networks $g_{3}$ and $g_{4}$, for which we typically observe fewer good matches in the data compared to $g_1$ and $g_2$ (we quantify this observation by estimating $\psi_{g}$ for each of the four networks in Section \ref{sec:psi_measure}). This is also consistent with Theorem \ref{thm:mse_bound}.
Fixing $T$ and comparing across $k$, we expect a potential bias-variance trade-off. For networks $g_1$ and $g_2$, there is no meaningful bias in the estimated policy function because $f(\tilde{g}^{1})$ and $f(\tilde{g}^{2})$ are similar and $\psi_{\tilde{g}}(1) \approx 1$ for $\tilde{g} = g_{1},g_{2}$ (see Section 5.4 below). As a result, it is optimal to use the nearest neighbor from every community in $[T]$ (i.e. choose $k = T$). In contrast, for $g_3$ and $g_4$ the rooted networks of the nearest neighbors in each community may be very different from the relevant policies, and so setting $k = T$ can lead to an inflated mean-squared error.
We conclude that unless the researcher has additional information about the structure of network interference or the density of the policies of interest, $k$ should not be large relative to the sample size. This is also consistent with our findings for the test of policy irrelevance in Section 5.2.
\begin{figure}
\centering
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$1$};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above left of=6] {};
\node[main node] (1) [below left of =6] {};
\path[-]
(6) edge node {} (2)
(1) edge node {} (6)
(5) edge node {} (6);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$2$};
\node[main node] (6) [ left of =2] {};
\node[main node] (5) [above left of=6] {};
\node[main node] (4) [above of=6] {};
\node[main node] (3) [below of=6] {};
\node[main node] (1) [below left of =6] {};
\path[-]
(6) edge node {} (2)
(1) edge node {} (6)
(5) edge node {} (6)
(4) edge node {} (6)
(3) edge node{} (6);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {$3$};
\node[main node] (6) [ above left of =2] {};
\node[main node] (5) [right of=2] {};
\node[main node] (1) [ below left of =2] {};
\node[main node] (3) [ below right of =5] {};
\node[main node] (4) [ above right of =5] {};
\node[main node] (8) [ right of =5] {};
\path[-]
(6) edge node {} (2)
edge node {} (1)
(2) edge node {} (5)
(1) edge node {} (2)
(4) edge node {} (5)
(8) edge node {} (5)
(3) edge node {} (5);
\end{tikzpicture}
\hspace{5mm}
\begin{tikzpicture}[->,>= stealth,shorten >=1pt,auto,node distance=1cm,
thick,main node/.style={circle,fill=blue!20,draw,minimum size=.3cm,inner sep=0pt}]
\node[main node] (2) {};
\node[main node] (6) [ above left of =2] {$4$};
\node[main node] (5) [right of=2] {};
\node[main node] (1) [ below left of =2] {};
\path[-]
(6) edge node {} (2)
edge node {} (1)
(2) edge node {} (5)
edge node {} (1);
\end{tikzpicture}
\caption{Four rooted networks truncated at radius $2$, labeled $g_1$ to $g_4$.}\label{fig:rooted_networks}
\end{figure}
\begin{table}[htbp]
\scriptsize
\centering
\caption{Estimated MSEs ($1,000$ Monte Carlo iterations)}
\begin{tabular}{rcccc|cccc|cccccc}
\toprule
& \multicolumn{1}{r}{} & & $T = 20$ & \multicolumn{1}{r}{} & & $T = 50$ & & \multicolumn{1}{r}{} & & & $T = 100$ & & & \\
& \multicolumn{1}{r}{} & & $k$ & \multicolumn{1}{c}{} & & $k$ & & \multicolumn{1}{r}{} & & & $k$ & & & \\
\cmidrule{3-15} \multicolumn{1}{c}{$\alpha$} & \multicolumn{1}{c}{$g_{\iota}$} & 5 & 10 & \multicolumn{1}{c}{20} & 5 & 10 & 20 & \multicolumn{1}{c}{50} & 5 & 10 & 20 & 50 & 75 & 100 \\
\midrule
& $g_1$ & 1.71 & 0.84 & 0.41 & 1.69 & 0.86 & 0.43 & 0.18 & 1.63 & 0.81 & 0.41 & 0.17 & 0.11 & 0.08 \\
\multicolumn{1}{c}{$(1, 0)$} & $g_2$ & 1.7 & 0.84 & 0.42 & 1.75 & 0.84 & 0.44 & 0.17 & 1.66 & 0.85 & 0.42 & 0.17 & 0.12 & 0.09 \\
& $g_3$ & 1.77 & 0.98 & 1.41 & 1.6 & 0.81 & 0.42 & 1.17 & 1.74 & 0.84 & 0.44 & 0.2 & 0.59 & 1.09 \\
& $g_4$ & 1.77 & 1.53 & 3.27 & 1.65 & 0.89 & 0.55 & 3.02 & 1.64 & 0.84 & 0.42 & 0.64 & 1.92 & 2.97 \\
\midrule
& $g_1$ & 1.71 & 0.84 & 0.41 & 1.69 & 0.86 & 0.43 & 0.18 & 1.63 & 0.81 & 0.41 & 0.17 & 0.11 & 0.09 \\
\multicolumn{1}{c}{$(1, 0.5)$} & $g_2$ & 1.7 & 0.84 & 0.42 & 1.75 & 0.84 & 0.44 & 0.17 & 1.66 & 0.86 & 0.42 & 0.17 & 0.12 & 0.09 \\
& $g_3$ & 1.79 & 1.05 & 2.07 & 1.6 & 0.81 & 0.42 & 1.81 & 1.75 & 0.84 & 0.44 & 0.21 & 0.87 & 1.73 \\
& $g_4$ & 1.85 & 2.15 & 5.37 & 1.66 & 0.9 & 0.72 & 5.09 & 1.65 & 0.86 & 0.44 & 1.1 & 3.32 & 5.04 \\
\bottomrule
\end{tabular}%
\label{tab:MSE_results}%
\end{table}%
\subsection{Measuring network regularity}\label{sec:psi_measure}
In Section 4.3 we identified the function $\psi_{g}(\ell)$ as a key determinant of the estimation bias in Theorem \ref{thm:mse_bound}. This function measures the probability that the nearest-neighbor of $g$ from a randomly drawn network is within distance $\ell$ of $g$. Recall that in our simulations we used $\zeta(r) = (1+r)^{-1}$ in our definition of the metric.
Figure \ref{fig:psi} displays estimates of the $\psi_{g}$ function for the four rooted networks considered in the simulation design of Section \ref{sec:sim_MSE}. The figures were constructed by generating $3000$ Erd\H{o}s-R\'enyi$(0.1)$ random graphs with $20$ nodes each and recording the distance of the nearest neighbor to $g$ in each graph.
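A minimal Python sketch of this exercise follows, reusing the illustrative \texttt{rooted\_distance} helper from Section \ref{sec:local_formal} (with no treatment attributes set, the matching criterion reduces to rooted graph isomorphism); the function name is ours.
\begin{verbatim}
import numpy as np
import networkx as nx

def estimate_psi(G_target, root, ells, n_graphs=3000, n=20, p=0.1, seed=0):
    # empirical psi_g(ell): share of random graphs whose best-matching
    # rooted network is within distance ell of g
    rng = np.random.default_rng(seed)
    nn = np.empty(n_graphs)
    for b in range(n_graphs):
        H = nx.erdos_renyi_graph(n, p, seed=int(rng.integers(2**31)))
        nn[b] = min(rooted_distance(G_target, root, H, j) for j in H.nodes)
    return [float(np.mean(nn <= ell)) for ell in ells]
\end{verbatim}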
The results indicate that the nearest neighbor of $g_1$ always matches at least at a radius of $1$, and often matches at a radius of $2$. In contrast, $g_4$ is matched at a radius of $1$ in only $40\%$ of networks, and is almost never matched at a radius of $2$. Intuitively, networks $g_{3}$ and $g_{4}$ are rare because triadic closure is uncommon under the random graph model. The network $g_{2}$ is also relatively rare because the coincidence of five agents linked to a common agent is uncommon in such a sparse random network.
We remark that these results represent the extreme case of a model of pure statistical noise. Strategic interaction between agents may discipline the regularity of the network, particularly if only a small number of configurations are stable. Characterizing $\psi_{g}$ for such strategic network formation models is an important area for future work.
\begin{figure}
\includegraphics[width=.5\textwidth]{psi_G1_size20.png}\hfill
\includegraphics[width=.5\textwidth]{psi_G2_size20.png}\hfill
\includegraphics[width=.5\textwidth]{psi_G3_size20.png}\hfill
\includegraphics[width=.5\textwidth]{psi_G4_size20.png}\hfill
\caption{Estimated $\psi_{g}(\cdot)$ for $g_1$ to $g_4$ (from left to right, top to bottom).}\label{fig:psi}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
This paper proposes a new unified framework for causal inference under network interference. We propose a model in which the impact of the policy on individual outcomes is indexed by rooted networks (also called local configurations or network types). The model generalizes several popular specifications from the literature on treatment spillovers, social interactions, social learning, information diffusion, and social capital formation. We use the model to construct a test for policy irrelevance and estimators of policy effects. Some finite-sample properties of the test and estimator are illustrated by simulation.
Much work remains to be done. A particularly interesting direction for future work would be to apply our methodology to the problem of policy learning under network interference \citep[see for example][and Ananth 2020]{viviano2019policy, kitagawa2020should}.
\bibliographystyle{aer}
\section{Omitted data}
Subject eleven reported two jobs, his PhD and his corporate activity. As we interviewed him in his function at the company, we dropped the information about the PhD from the table.
We also initially asked participants to rate their familiarity with theoretical and practical ML topics and algorithms, but we did not use this data, as it did not yield enough new insights.
\section{Subjects prior knowledge on AML}\label{sec::sanityCheck}
We also investigated the familiarity of our subjects with AML attacks. To avoid priming, we asked subjects to rate their familiarity after the interview.
As sanity checks, we added two largely unknown terms, adversarial initialization~\cite{grosse2019adversarial} and neural trojans~\cite{liu2017neural} (similar to backdoors).
The results are depicted in Figure~\ref{fig:FamAttacks}.
Only one subject reported being familiar with one attack (evasion). In general, most subjects reported having heard of the most common attacks (evasion, poisoning, membership inference, and model stealing).
As expected for the sanity check, adversarial initialization and neural trojans were largely unknown.
\begin{figure}[b]
\centering
\includegraphics[width=0.47\textwidth]{figures/AML.pdf}
\vspace{-10pt}
\caption{Self-reported familiarity of interviewed subjects with different attacks on ML. The total number of subjects is 14, as one subject did not hand in the questionnaire.}
\label{fig:FamAttacks}
\vspace{-10pt}
\end{figure}
\section{Interview protocol}\label{app:interview}
Thank you so much for taking the time to give us your perspective on security in machine learning. This study consists of three parts. Part I explores your role in ML projects. Part II addresses the underlying machine learning pipeline. In part III, we want to know how you perceive the security of machine learning. In parts II and III, please visualize the topics (and the relationships between them) that we ask you about. There are no rules, no wrong way to do it, and don’t worry about spelling things perfectly. Nothing is off limits and you can use any feature of the digital whiteboard. After this last part, we will ask you about your knowledge of machine learning security prior to this study.
\vspace{2mm}
\textbf{Part I: Machine Learning Project}
\begin{itemize}
\item Can you briefly describe what AI- or machine learning-based project you are currently involved in?
\item Can you tell us a bit more about the goal of this project?
\item Who else is involved in this project?
\item What is your collaborators' role in the project?
\end{itemize}
\vspace{2mm}
\textbf{Part II: Machine Learning Pipeline}
\begin{itemize}
\item What kind of pipeline do you currently apply within this machine learning based project?
\item Which part of this pipeline is crucial for your business, or identical to your product?
\end{itemize}
\vspace{2mm}
\textbf{Part III: Security within Project and Pipeline}
\begin{itemize}
\item Is security something you regularly incorporate into your workflow?
\item Have you encountered any issues relating to security in the projects you described?
\item Where in the pipeline did these security-related issues originate?
\item Can you specify the cause of these security-related issues?
\item Can you specify how these security-related issues evolve in your pipeline?
\item Which goal does an adversary pursue with such a threat?
\item What is the security violation of the threat?
\item How specific is the depicted threat?
\item Are you aware of any further possible security threats in the scope of your project or pipeline?
\item Which countermeasures do you implement against any of the aforementioned threats?
\end{itemize}
Thank you so much for taking the time to give us your perspective on security in machine learning.
\vspace{1mm}
\section{Questionnaires of our study}\label{app:questionnaires}
\subsection{Demographics Questionnaire}\label{app:demogr}
Thank you for participating in our research study about security in machine learning. Please take a couple of minutes to respond to the following questions.
\vspace{2mm}
-- \makebox[0.3\textwidth]{How old are you? \enspace\hrulefill}
-- What gender do you identify with?
\begin{itemize}
\item[$\square$]male \hspace{3em} $\square$ female \hspace{3em} $\square$ $\rule{2.5cm}{0.15mm}$
\end{itemize}%
-- What is your level of education? (please specify highest)
\begin{itemize}
\item[$\square$] Highschool
\item[$\square$] \makebox[0.4\textwidth]{Bachelor in \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Master / Diploma in \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Training / Apprenticeship in \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{PhD, area: \enspace\hrulefill}
\end{itemize}%
-- \makebox[0.4\textwidth]{What is your profession? \enspace\hrulefill}
-- \makebox[0.4\textwidth]{What is your role in your team? \enspace\hrulefill}
-- How long have you been working in your
\makebox[0.4\textwidth]{current profession? \enspace\hrulefill}
-- What is the number of employees at your
\makebox[0.4\textwidth]{company/organization? \enspace\hrulefill}
-- \makebox[0.4\textwidth]{What is the application domain of your product? \enspace\hrulefill}
-- Which of these goals are part of your organization's
AI/ML-model checklist?
$\square$ Explainability \hspace{3em}
$\square$ Fairness \hspace{3em}
$\square$ Privacy
$\square$ Security \hspace{5.3em}
$\square$ Performance
-- In which of these areas have you taken a lecture or intense
course? Please add the title of the course if applicable.
\begin{itemize}
\item[$\square$] \makebox[0.4\textwidth]{Machine Learning \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Security \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Adversarial Machine Learning \enspace\hrulefill}
\end{itemize}%
-- In which of these areas have you taken a seminar, or read
up on? Please add the title of the seminar/book if applicable.
\begin{itemize}
\item[$\square$] \makebox[0.4\textwidth]{Machine Learning \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Security \enspace\hrulefill}
\item[$\square$] \makebox[0.4\textwidth]{Adversarial Machine Learning \enspace\hrulefill}
\end{itemize}
\vspace{2mm}
\subsection{Selected Attack Vectors}\label{app:SelAVs}
Please read through the following selection of attack vectors and machine learning and explain whether you consider them relevant in your specific project. If yes, please add them to your sketch in a different color.
\vspace{2mm}
\textbf{Evasion/ Adversarial Examples.} This attack targets a model during deployment. The goal of the attacker is to fool the model: changing its output significantly by altering the input only slightly. An example is to change a picture containing a dog, present it to a cat-dog-classifier, and the model’s output changes from dog to cat.
\vspace{2mm}
\textbf{Poisoning.} This attack targets the training or optimization phase of the model. The goal of the attacker is to either decrease accuracy significantly, or to install a backdoor. An example is a cat-dog classifier that always classifies images containing a smiley as cat.
\vspace{2mm}
\textbf{Privacy/ Membership Inference.} This attack targets a model at test time. The attacker’s goal is to identify individual samples from the training set, or even the whole training set. An example is to measure the confidence on an input, as some algorithms tend to be more confident on data they have seen during training. Overfitting also makes it easier to determine what a classifier was trained on.
\vspace{2mm}
\subsection{ML quiz}\label{app:quesAfter}
Please answer the following questions about ML.
For each question, please tick \textbf{at least} one box.
\vspace{2mm}
\textbf{Question 1.} Which loss is used to train DNNs?
\begin{itemize}
\item[$\square$] $0$/$1$-loss.
\item[$\square$] Cross-entropy loss.
\item[$\square$] Hinge-loss.
\end{itemize}%
\textbf{Question 2.} What is the difference between classification and regression?
\begin{itemize}
\item[$\square$] The kind of labels we fit: reals vs discrete classes.
\item[$\square$] Regression is the name of classification in psychology / medical science.
\item[$\square$] Regression is for discrete labels, classification for real valued ones.
\end{itemize}%
\textbf{Question 3.} What is the difference between $L_1$ and $L_2$ regularization?
\begin{itemize}
\item[$\square$] $L_1$ yields sparser solutions.
\item[$\square$] $L_2$ yields sparser solutions.
\item[$\square$] none - they differ only in few practical applications.
\end{itemize}%
\textbf{Question 4.} In the bias-variance trade-off, what does high variance imply?
\begin{itemize}
\item[$\square$] The analyzed data shows high variance.
\item[$\square$] The classifier is overly complex and potentially overfits.
\item[$\square$] The data is likely to be classified fair (e.g., with low bias).
\end{itemize}%
\textbf{Question 5.} Why is Naive Bayes naive?
\begin{itemize}
\item[$\square$] Due to historic reasons.
\item[$\square$] Due to the assumption that all features are independent.
\item[$\square$] Because the application is simple and straight-forward.
\end{itemize}%
\textbf{Question 6.} What is cross-validation?
\begin{itemize}
\item[$\square$] Training on one task and then transferring the model to another task.
\item[$\square$] Splitting the dataset and training/evaluating on different subsets.
\item[$\square$] A method to reduce overfitting or choosing hyper-parameters.
\end{itemize}%
\textbf{Question 7.} What are kernels in machine learning?
\begin{itemize}
\item[$\square$] Essentially similarity functions.
\item[$\square$] A part of SVM, potentially yielding non-linear SVM.
\item[$\square$] A specific instance of a similarity function used in SVM.
\end{itemize}%
\textbf{Question 8.} What is pruning?
\begin{itemize}
\item[$\square$] Deletion of for example weights in a model.
\item[$\square$] Deletion of specific points of the data.
\item[$\square$] A technique to obtain a smaller model from a large one \\ with similar performance.
\end{itemize}%
To conclude the study, we will ask you to rate your background knowledge on attacks \emph{before} this study according to the following four classes:
\begin{itemize}
\item{Familiar.} You are familiar with this concept, and can write down the mathematical formulation.
\item{Dabbled in.} You could explain in a five minute talk what the concept is about.
\item{Heard of.} You have heard of the concept and you could put it into context if necessary.
\item{Never heard.} You did not know about this concept before this survey.
\end{itemize}
For each concept, please tick \textbf{one} box. \emph{The original questionnaire was formatted as table. To ease readability, we list them as questions here.}
\vspace{2mm}
\textbf{Evasion / adversarial examples}. \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\textbf{Poisoning / backdooring} \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\textbf{Model stealing } \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\textbf{Model reverse engineering } \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\textbf{Neural trojans } \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\textbf{Adversarial initialization} \\
$\square$ familiar \hspace{1em} $\square$ dabbled in \hspace{1em} $\square$ heard of \hspace{1em} $\square$ never heard
\vspace{2mm}
\section{Final set of codes}\label{app:codes}
The final set of codes for the interviews is depicted in Table~\ref{table:interviewcodes}. Analogously, the codes for the drawings can be found in Table~\ref{table:drawingcodes}.
\begin{table*}[t]
\footnotesize
\caption{Final set of codes for the interviews.}\label{table:interviewcodes}
\centering
\begin{tabular}{ l | l | l | l }
\toprule
\textbf{A. AML attacks} & \textbf{D. security defenses} & \textbf{G. organization} & \textbf{L. perception} \\
A.1 poisoning & D.1 sandboxing & G.1 ML role in project & L.1 security externalized\\
A.2 evasion & D.2 access control & G.2 security role in project & L.2 AML feature not bug\\
A.3 model stealing & D.3 development policy & G.3 other role on project & L.3 doubting attacker \\
A.4 reverse engineering & D.4 server register & G.4 legal constraints & L.4 believing defense is effective \\
A.5 membership inference & D.5 security testing & G.5 technical debt of ML & L.5 has not encountered threat \\
A.6 availability & D.6 data anonymization & \textbf{H. customer} & L.6 attacks too specific \\
\textbf{B. AML defenses} & D.7 input data format restrictions & H.1 requirements & L.7 insecurity about AML \\
B.1 retraining & \textbf{E. pipeline elements} & H.2 privacy relevant data & L.8 unspecific attack \\
B.2 interpretability & E.1 training & \textbf{I. cloud} & L.9 holistic attacker specificity \\
B.3 basic models & E.2 design & I.1 used for security & L.10 pipeline specific defense \\
B.4 ensemble & E.3 model & I.2 used but potential security risk & L.11 importance of data \\
B.5 human in the loop & E.4 data & I.3 not used because of security & L.12 high level perspective \\
B.6 regularization & E.5 data labelling & I.4 neutral & L.13 coding perspective \\
B.7 own implementation & E.6 data collection & \textbf{J. relevance} & \\
B.8 on purpose & E.7 data preprocessing & J.1 mentioning AML & \\
\textbf{C. security threats} & E.8 feature extraction & J.2 security low priority & \\
C.1 data capturing & E.9 testing & J.3 AML low priority & \\
C.2 access & E.10 deployment & J.4 encountered security issue & \\
C.3 data breach & E.11 API & \textbf{K. confusion} & \\
C.4 code breach & E.12 database & K.1 across ML attacks & \\
C.5 libraries & \textbf{F. pipeline properties} & K.2 security and AML & \\
C.6 denial of service & F.1 iterative & K.3 vagueness of concepts & \\
C.7 SDK & F.2 several within project & K.4 what security means & \\
C.8 customer & & & \\ \bottomrule
\end{tabular}
\end{table*}
\begin{table*}[t]
\caption{Final set of codes for the drawings.}\label{table:drawingcodes}
\centering
\begin{tabular}{ l | l | l | l }
\toprule
\textbf{A. pipeline elements} & \textbf{B. pipeline properties} & \textbf{D. attacks} & \textbf{E. drawing} \\
A.1 training & B.1 iterative & D.1 no attacks & E.1 boxes\\
A.2 design & B.2 linear & D.2 poisoning & E.2 symbols\\
A.3 model & B.3 abstracted & D.3 evasion & E.3 inner/outer \\
A.4 data & B.4 several & D.4 membership inference & E.4 flow within pipeline \\
A.5 data labelling & B.5 explainable & D.5 libraries & E.5 workflow embedding \\
A.6 data collection & B.6 MLaaS & D.6 data collection & E.6 attacks graphical \\
A.7 data preprocessing & \textbf{C. named explicitly} & D.7 input/output & E.7 attacks words \\
A.8 feature extraction & C.1 hardware & D.8 unspecific attack & E.8 attacks causal \\
A.9 testing & C.2 software & D.9 defenses & E.9 attacks pointwise \\
A.10 deployment & C.3 human & D.10 exit points & \\
A.11 deployment environment & C.4 privacy sanitization & D.11 input points & \\
& C.5 output & & \\
& C.6 classification & & \\
& C.7 server & & \\ \bottomrule
\end{tabular}
\end{table*}
\section{Conclusion}\label{sec:conclusion}
Based on our semi-structured interviews with practitioners, we take a first step towards a theory of mental models in AML.
We identified four characteristic ranges of practitioners' mental models.
The first range describes the relationship between AML and classical IT security. These two topics were often mingled, yet not used interchangeably by our subjects.
The second range confirmed the existence of structural and functional components within mental models of (A)ML. For example, some subjects marked a structural component as a starting point for an attack, whereas other subjects explained the causal steps of the attack.
The third range concerns the general variability in our subjects' mental models, which we found to be independent of the application domain and reported background knowledge.
This included the priority AML has for subjects: whereas some uttered clear concern, other subjects were not worried at all.
Finally, the fourth characteristic range describes that industrial practitioners perceive ML-specific threats and defenses at a varying level of technical depth. Whereas some subjects explained pipeline elements and attacks almost at the code level, other subjects made only high level references.
A clear understanding of the elicited mental models makes it possible to improve the information provided to practitioners and to adjust corporate workflows.
Furthermore, our results help to develop tools for practitioners that assess the security of ML. These tools should be incorporated into the ML pipeline to ease security evaluation and minimize risks. Finally, regulatory frameworks might reduce uncertainty about AML and increase the awareness for possible security threats.
However, a wide range of subsequent research towards an encompassing theory of mental models in AML is still required. Finally, we are convinced that the AML community will benefit from further practical assessment of attacks occurring in the wild, as our subjects only reported semi-automated fraud.
\section{Practical Implications}\label{sec:accConc}
We found, similar to Kumar et al.~\cite{kumar2020adversarial}, that most of our subjects lack an adequate and differentiated understanding of how to secure ML systems in production. In addition, the perception of AML varies strongly across individuals. The goal of corporate guidelines, tools and policies
should therefore be twofold.
First, they should raise the perceived relevance of AML. Second, and if necessary within a certain application domain, they should enable practitioners to actively develop specific mental models for the attacks relevant in their domain.
\textbf{Embedding AML into corporate workflows.} Our findings provide an intuition to ease the integration of AML into corporate workflows. Developing and deploying ML applications along the different steps of the ML pipeline (Figure~\ref{fig:pipeline}) usually involves the collaboration of individuals with different skills and roles within an organization~\cite{zhang2020data, arrieta2020explainable}. Our findings suggest that, despite their diverse backgrounds, all these actors should be able to identify relevant structural components of possible attacks and implementable defenses. Information provided to them should also entail explanations of the functional interconnection of these structural components. Practitioners should be able to understand AML through minimum viable mental models with the lowest possible number of cognitive chunks~\cite{lage2019human, suresh2021beyond}. If necessary, the provided information should be sufficient to develop these initial mental models into more accurate internal representations. These internal representations then contain the potential security threats within the corresponding application.
\textbf{Enhancing AML libraries and tools.} In addition, practitioners should be equipped with appropriate tools that incorporate ML-specific security measures.
Whereas several subjects reported which infrastructure or service provider they use, none mentioned
a specific tool for assessing security risks.
Our four characteristic ranges of mental models
define the cognitive frame for the development of such tools.
It is thus promising that several recent initiatives aim at providing better access to AML.
This includes libraries\footnote{For example the Adversarial Robustness Toolbox, CleverHans, RobustBench,
or the SecML library, just to name a few.},
but also overviews like the Adversarial ML Threat Matrix\footnote{https://github.com/mitre/advmlthreatmatrix}. These tools give practitioners the opportunity to navigate through an ever-increasing threat landscape. The latter even differentiates classical security threats from ML-specific attacks. This resonates with our findings from Section~\ref{sec:secvsAML} and might help practitioners to gain a more accurate understanding of the attacks that are relevant within a specific application.
\textbf{Creating appropriate regulatory frameworks for AML.} Lastly, our study has implications for regulatory approaches that enable appropriate security assessments.
To develop and refine adequate mental models, practitioners need to be knowledgeable in AML.
Future regulation could incorporate this requirement by providing adequate information at multiple mental abstraction levels~\cite{rutjes2019considerations, broniatowski2021psychological}. For example, current regulatory drafts like the NIST Taxonomy and Terminology of AML\footnote{https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8269-draft.pdf} ease a functional understanding of attacks and defenses. This draft explicitly lists references that might help practitioners develop more complex mental models.\footnote{Other regulations are on their way, for example ITU-T F.AI-DLFE, ETSI DTR INT 008, DIN SPEC 92001-2, and ISO 26262, just to name a few.} A similar regulation for privacy, the European general data protection regulation, was often mentioned by our subjects. With the regulation serving as a scaffold for their privacy perception, they reported complying with it, even though we did not explicitly ask about privacy beyond membership inference.
\section{Future Work}\label{sec:TheoConc}
Our findings underline the need for additional research at the intersection of AML and cognitive science. Given the evidence of semi-automated, ML-related fraud, a more detailed assessment of which attacks are conducted in the wild would be greatly beneficial. Future work could investigate this with focus on different groups of ML practitioners, including for example ML engineers, auditors, and researchers.
\textbf{Temporal evolvement of mental models in AML.} Furthermore, a better understanding about the development of individual mental models could help to assess necessary steps to make practitioners take into account AML. Research on how mental models are shared between various AI practitioners might help to implement adequate defenses within and across corporate workflows. Corresponding starting points can be found in cognitive science~\cite{mohammed2010metaphor}, where the convergence of mental models has been studied as a three-phase process of orientation, differentiation and integration~\cite{kennedy2010merging}.
\textbf{User-centric threat taxonomy.} More work is also needed to understand why and how industrial practitioners relate classical security and AML.
Here, it could be interesting to consider the taxonomies proposed by Biggio et al.~\cite{biggio2014pattern} and Barreno et al.~\cite{barreno2006can}. These taxonomies seem promising for investigating which specific structural elements practitioners consider relevant for specific attack vectors and how they perceive the causal evolvement of these attacks. In line with recent work by Wang et al.~\cite{10.1145/3290605.3300831}, such user-centric attack taxonomies might help to understand practitioners' reasoning on AML.
\textbf{Utility and usability of AML tools and libraries.} Finally, we found that practitioners’ mental models depend on available and provided information. Future research should therefore elaborate on the needed specificity of the available information. Furthermore, an evaluation of the available AML tools and libraries with regards to capabilities and needs of industrial practitioners might ease their usage across application domains. In line with recent work on fairness~\cite{lee2020landscape} and ethics~\cite{chivukula2021surveying}, we consider this crucial for designing tools, corporate guidelines and regulations.
\section{Introduction}
Adversarial machine learning (AML) studies the poor reliability of learning-based systems in the context of an adversary~\cite{barreno2006can,biggio2018wild,papernot2018marauder}. For example, tampering with some features often suffices to change the classifier's outputs to a class chosen by the adversary~\cite{Dalvi:2004:AC:1014052.1014066,DBLP:conf/pkdd/BiggioCMNSLGR13,DBLP:journals/corr/SzegedyZSBEGF13}. Analogously, slightly altering the training data enables the attacker to decrease the performance of the classifier~\cite{rubinstein2009antidote,biggio2011support}. Another change in the training data allows the attacker
to enforce a particular output class when a specified stimulus is present~\cite{ji2017backdoor,chen2017targeted}.
Most attacks and mitigations studied in AML are
in an ongoing arms race~\cite{carlini2017adversarial,athalye2018obfuscated,tan2019bypassing,ling2019deepsec,DBLP:conf/ijcai/SharmaDB19}.
Although machine learning (ML) is increasingly used in industry, very little is known about ML security in practice.
To tackle this question, we conduct a first study to explore mental models of AML.
Mental models are relatively enduring, internal conceptual representations of external systems; the concept originated in cognitive science~\cite{10.5555/7909,7d020ff455a54632b4f80d0728ac54e6}.
In other security related areas, correct mental models have been found to ease the communication of security warnings~\cite{bravo2010bridging} or enable users to implement security best-practices~\cite{tabassum2019don}. Mental models also serve to
enable better interactions with a given system~\cite{wash2011influencing}, or to design better user interfaces~\cite{gallagher2017new}.
Our methodology builds upon these previous works by using qualitative methods to investigate the perception of vulnerabilities in ML applications. Our findings shed light on four characteristic ranges of practitioners' mental models of AML.
The first concerns the separation of AML and standard security.
In many cases, the borders between these two fields are blurry: a subject may start talking about evasion and finish the sentence with a reference to cryptographic keys.
On the other hand, security threats are often taken for granted, whereas practitioners are less aware of AML attack scenarios.
Secondly, we identified functional and structural components with respect to the perception of AML.
More concretely, structural components are cognitively put into functional relation within the mental models.
Furthermore, our subjects show large variation across their perception of attacks and defenses. These variations are unrelated to security background or other educational factors, and are only partially influenced by different applications of ML.
Last but not least, the degree of technical depth in our subjects' mental models differs: whereas some subjects explained their applications almost at the code level, others took a rather high-level perspective in which mental models of attacks and defenses seemed more abstract and ambiguous.
\begin{figure*}
\centering
\vspace{-1em}
\input{figures/pipeline.tex}
\vspace{-0.5em}
\caption{AML threats within the ML pipeline. Each attack is visualized as an arrow pointing from the step controlled to the point where the attack affects the pipeline.}\label{fig:pipeline}
\end{figure*}
During our interviews, we found evidence that semi-automated fraud on ML systems takes place in the wild.
Our findings on mental models allow to tackle these threats by
(\textbf{I}) aligning corporate workflows that enable all actors to understand AML threats with minimal effort, (\textbf{II}) developing tools that help practitioners to assess and evaluate security of ML applications, and (\textbf{III}) drafting regulations that contain adequate security assessments and reduce insecurity about AML.
However, more work is needed to understand the individual and shared mental models of practitioners.
\section{Limitations}\label{sec:limitations}
We followed an inductive approach to investigate mental models through qualitative analysis. Hence, the data collected is self-reported and subject to a coding process. We continued coding and refining codes until a good level of inter-coder agreement was reached.
With 15 participants, our sample size is rather small and limits the generalizability of our findings. However, given the applied methods and that we reached saturation, the size is indeed acceptable~\cite{wu2018tree, gallagher2017new}.
All participants were employed at European organizations with $<$200 employees. This is due to the fact that while several multinational companies stated great interest in our research, they denied participation after internal risk assessments. As mental models of ML systems are always embedded in organizational practices~\cite{zhang2020data}, we strongly encourage future research to assess our findings within larger samples including more variety, for example academics, small and large companies, etc.
Finally, despite our efforts, we only managed to recruit one female participant and it is possible that our findings are biased.
Last but not least, AML itself is a subject of study for which the problem perception evolves continuously. With an increasing awareness for security within applied machine learning, the findings presented
can only be valid temporarily.
ML is applied in a wide range of settings.
Consequently, not all attacks are relevant within each application domain.
For example, a healthcare setting is subject to different threats than a cybersecurity setting.
For the sake of studying abstract ranges of mental models, we did not consider the application in the present work. Yet, we would like to point out the necessity to study this aspect of mental models in AML.
\section*{Acknowledgements}
The authors would like to thank Battista Biggio, Antoine Gautier and Michael Schilling for their helpful feedback.
This work was supported by the German Federal Ministry of Education and
Research (BMBF) through funding for the Center for IT-Security,
Privacy and Accountability (CISPA) (FKZ: 16KIS0753).
\footnotesize
\bibliographystyle{acm}
\section{Methodology}\label{sec::methodology}
This section describes the design of our semi-structured interview study, the drawing task, our recruiting strategy, the participants, and how we analyzed the data. Our methodology was designed to investigate the perception of attacks and defenses in ML. To the best of our knowledge, this is the first study of mental models of AML.
\input{subject_table}
\subsection{Study design and procedure}
To assess participants' perceptions, we conducted semi-structured interviews enriched with drawing tasks.
We draw inspiration for our study from recent work in usable security which also investigated mental models~\cite{wu2018tree, krombholz2019if}.
The threefold structure of our interviews covered 1) a specification of a given ML project a subject was involved in, 2) the underlying ML pipeline of this project and 3) possible security threats within the project. We chose this approach as the different attack vectors form part of the ML-pipeline as shown in Section~\ref{sec:aml}.
The detailed interview guideline can be found in Appendix~\ref{app:interview}. As a last step of our interviews, we confronted the subjects with exemplary attacker models for some of the threats considered
relevant in industrial application of ML~\cite{kumar2020adversarial}. To assess practitioners' understandings of these threats, study participants had to elaborate on these attack vectors within their specific setup (Appendix~\ref{app:SelAVs}).
We conducted one pilot interview to evaluate the quality of our questionnaire. This first subject met all criteria of our target population in terms of employment, education and prior knowledge.
His explanations and drawings matched our expectations.
We therefore only added a specific question regarding the collaborators within a given ML-based project.
At the beginning of the interview, participants were informed about the general purpose of our study and the applied privacy measures during data collection.
Prior to each interview, participants were instructed to complete a questionnaire on demographics, organizational background and a self-reflected familiarity with field-related concepts (Appendix~\ref{app:questionnaires}). The answers to this questionnaire have later been used to put participants' perceptions in context to their organizational and individual background.
Each interview lasted approximately 40 minutes and was jointly conducted by the first two authors of this paper. To minimize interviewer biases, we equally distributed the interviews between the two authors: one was the lead interviewer and the second interviewer took additional notes and screenshots of the drawing task. Due to the COVID-19 pandemic, all interviews were conducted remotely and relied on a freely available digital whiteboard\footnote{https://awwapp.com/}. To assess their knowledge about (A)ML in general, but avoid priming for specific security-related concepts before the interview, participants had to fill in an additional questionnaire after the interview (Appendix~\ref{app:quesAfter}). In this questionnaire, we addressed general knowledge in ML and asked for a self-reflected familiarity rating with some of the attacks we discussed in Section~\ref{sec:aml}.
\subsection{Recruitment}
Recruitment for a study on applied ML in corporate environments presents a challenge, as only a small proportion of the overall population works with ML.
Further, the topic touches compliance and intellectual property of participating organizations. Hence, many companies are skeptical about the exchange with third parties. Therefore, many current contributions with industrial practitioners as study subjects are conducted by corporate research groups (e.g.~\cite{kumar2020adversarial,holstein2019improving}).
We tried to initiate interviews with two multinational companies with more than 140,000 employees each. Unfortunately, both denied our request after internal risk assessments. Therefore, we focused on smaller companies where we could present our research project directly to decision-makers and convince them to participate in our study. We relied on the individual networks of the authors and public databases\footnote{For example https://www.crunchbase.com/}, and used direct-messaging on LinkedIn and emails to get in contact with potential subjects.
Recruitment of study participants happened in parallel with the interviews. Some subjects forwarded our interview request to internal colleagues, so that we talked to multiple employees of some participating companies (see Table~\ref{table:subjects}). We aimed to recruit experienced and knowledgeable participants and hence our requirements were a background in ML or computer science and positions such as data scientist, software engineer, product manager, or tech lead. We did not require any prior knowledge in security. After recruiting 15 subjects, the research team agreed that the interviews had saturated, and we stopped recruiting. The subjects were randomly assigned an ID (a number between 1 and 20) which was used throughout our analysis. All participants were offered a 20 euro voucher as compensation for their time.
\subsection{Participants}
We summarize demographic information in Table~\ref{table:subjects}. One subject, \Sub{10}, did not hand in the questionnaire and is consequently not included in the following statistics.
14 participants identified as male, one identified as female, with an average age of 34 years (standard deviation (STD) 4.27).
As intended for a first exploration of practitioners' perception of AML, our sample covered various application domains and organizational roles which we now describe in detail.
\textbf{Education and prior knowledge.} The majority of subjects (9 of 14) has a PhD, with all subjects holding some academic degree.
Most participants (12 of 14) reported that they had attended lectures or seminars on ML. Roughly half (6 of 14) reported to have a similar background in security.
To measure our participants' knowledge in the area of ML, we constructed a questionnaire based on job interview questions\footnote{For example https://www.springboard.com/blog/machine-learning-interview-questions/}
for ML (Appendix~\ref{app:quesAfter}). Given that participants had not been informed beforehand that they would take a test, we aimed to select a broad range of topics that could be queried with multiple-choice answers and were not too hard. The questionnaire had 8 questions, with the subjects correctly answering on average 6.64 questions (STD 1.14). Guessing would yield an average of 2.66 correct questions. Thus, while we do not know how reliably our questionnaire estimates ML knowledge, we conclude that all our subjects are indeed knowledgeable in ML. We also sanity checked the knowledge of our subjects in AML (see Appendix~\ref{sec::sanityCheck}). Few subjects reported high familiarity, and very recent or lesser-known attacks were rated as unfamiliar.
\textbf{Employment.} Regarding the size of the companies, four subjects worked in companies with less than ten employees, five in companies with less than 50 and the remaining six subjects in companies with less than 200 employees.
The companies' application areas were as diverse as healthcare, security, human resources, and others. On average, subjects had been working in their current positions for 6 years (STD 4.9). Their roles were diverse:
Most subjects (8 of 15) were in managing positions. Three were software or ML engineers, three more were researchers. One of the subjects stated to be both a researcher and a founder. One subject did not report his role.
Finally, we asked subjects to report which goals were part of their companies' AI/ML checklist. Almost all subjects (13 of 14) reported that performance mattered in their company. Half (7 of 14) stated that privacy was important. Slightly less than half (6 of 14) focused on explainability and security. The fewest subjects (4 of 14) listed fairness as a goal in their products. To conclude, when interpreting these numbers, one should keep in mind that not all five goals apply equally to all application domains. Yet, our sample is too small to derive per-area or per-company insights, and we leave a detailed analysis for future work.
\subsection{Data analysis}
Our analysis adopted an inductive approach,
where we followed recent work in social sciences and usable security that constructed theories based on qualitative data~\cite{naiakshina2017developers, krombholz2019if}. To distill observable patterns in interview transcripts and drawings, we applied two rounds of open coding. We then performed Strauss and Corbin’s descriptive axial coding to group our data into categories and selective coding to relate these categories to our research questions~\cite{strauss1990basics}. Throughout the coding process, we used analytic memos to keep track of thoughts about emerging themes. The final set of codes for interview transcripts and drawings is listed in Appendix~\ref{app:codes}.
As a first step, the first two authors independently conducted open coding sentence by sentence and sketch by sketch. This allowed for the generation of new codes without predefined hypotheses. Afterwards, the resulting codes were discussed and the research team agreed on adding specific codes for text snippets relating to the confusion of standard security and AML.
As a second step, two coders independently coded the data again. After all iterations of coding, conflicts were resolved and the codebook was adapted accordingly.
During axial coding, the obtained codes were grouped into categories. The first two authors independently came up with proposed categories which have then been discussed within an in-person meeting. While the grouping was undisputed for some of the categories presented in Appendix~\ref{app:codes} (e.g. AML attacks, pipeline elements), for others the research team decided for (e.g. confusion, relevance) or against (e.g. type of ML model applied) the inclusion of a corresponding category only after detailed discussion. In addition, dedicated codes for the perception of participants (e.g. perceives AML as a feature, not a bug or security issue) were added to the codebook. Once the research team agreed on a final codebook, all transcripts and drawings were coded again using corresponding software.\footnote{Available at https://www.taguette.org/ and https://www.maxqda.com/.} In doing so, we aimed for inferring contextual statements instead of singular entities.
The codes and categories served as a baseline for selective coding. Independently, the researchers came up with observations and proposals for specific mental models. Every proposal included a definition of the observation, related codes, exemplary quotes and drawings. The first two authors then met multiple times to discuss the observations and the corresponding relations of codes and categories. During these discussions, the four characteristic ranges of participants’ AML perception, described in detail in Section~\ref{sec:results}, were distilled.
We calculated Cohen's kappa~\cite{cohen1960coefficient} to measure the level of agreement among the coders. For drawings, we reached $\kappa = 0.85$, and for interview transcripts $\kappa = 0.71$. These values indicate a good level of coding agreement since both values are greater than 0.61~\cite{landis1977measurement}. Given the semi-technical nature of our codebook, we consider these values as substantial inter-coder agreement. Irrespective of this and in line with best practices in qualitative research, we believe that it is important to elaborate on how and why disagreements in coding arose and to disclose the insights gained from discussions about them. Each coder brought a unique perspective on the topic that contributed to a more complete picture. Due to the diverse background of our research team in AML, usable security and economic geography, most conflicts arose regarding the relevance of technical and organizational elements of transcripts and drawings. These were resolved during conceptual and on-the-spot discussions within the research team.
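To illustrate the computation itself, the following minimal Python sketch derives the statistic with \texttt{cohen\_kappa\_score} from scikit-learn; the two label sequences are hypothetical stand-ins for our actual codes.
\begin{verbatim}
# Minimal sketch: inter-coder agreement via Cohen's kappa.
# The label sequences are hypothetical stand-ins for the codes the
# two coders assigned to the same set of text snippets.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["evasion", "poisoning", "data breach", "evasion", "access"]
coder_2 = ["evasion", "poisoning", "code breach", "evasion", "access"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.61: substantial agreement
\end{verbatim}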
\begin{figure*}
\centering
\includegraphics[width=0.98\textwidth]{figures/Fig_2.pdf}
\caption{The four characteristic ranges of mental models we identified. Pipeline elements are denoted in black; AML threats (with origin and effect) in red. Ranges are visualized using extremes: the perception of subjects was not binary but continuous.}
\label{fig:MentalModelDims}
\vspace{-10pt}
\end{figure*}
\subsection{Expectations on subjects' mental models}\label{sec:expectations}
Given previous work on mental models~\cite{wu2018tree}, we expected to find structural and functional properties in our subjects’ mental models.
Concerning ML, we designed our study in a way that subjects would first visualize their perception of the pipeline and then later add corresponding attacks and defenses. For the pipeline, we expected that participants would name basic steps or components, such as data (collection), training, and testing. In general, we assumed subjects’ descriptions would vary in technical depth. Regarding AML, one of our motivations to conduct this study was to learn which knowledge our subjects had. As a recent phenomenon, AML might not be known at all in practice, although practitioners might be aware of attacks relevant to their specific application. In particular, we did not expect practitioners to visualize attacks using a starting and target point, as done in Figure~\ref{fig:pipeline}.
\subsection{Ethical considerations}
The ethical review board of our university reviewed and approved our study design.
We limited the collection of person-related data as much as possible by assigning IDs to participants that were used throughout the analysis. Since all participants were employed at existing companies and partially shared business-critical information, we aimed to avoid company-specific disclosures in this paper. We complied with both local privacy regulations and the general data protection regulation (GDPR).
\section{Background and related work}\label{sec:background}
In this section, we review related work on AML and recall different attacks that have recently been discussed. We also review literature on mental models with regard to human-computer interaction, usable security and ML.
\subsection{Adversarial machine learning}\label{sec:aml}
AML studies the security of ML algorithms~\cite{barreno2006can,biggio2018wild,papernot2018marauder}.
We attempt to give an informal overview of all attacks in AML, and additionally illustrate them in Figure~\ref{fig:pipeline}.
\textbf{Poisoning/backdooring.} Early works in poisoning altered the training data~\cite{rubinstein2009antidote} or labels~\cite{biggio2011support} to decrease accuracy of the resulting classifier, for example SVM. For deep learning, due to the flexibility of the models, introducing backdoors is more common~\cite{ji2017backdoor,chen2017targeted}. Backdoors are chosen input patterns that reliably trigger a specified classification output.
Defending against such backdoors has led to an arms race~\cite{tan2019bypassing}.
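To make the mechanics concrete, the following minimal sketch (purely illustrative; the function name, array shapes, patch size, and poisoning rate are assumptions) stamps a fixed trigger patch onto a small fraction of the training images and flips their labels to an attacker-chosen target class:
\begin{verbatim}
import numpy as np

def poison_with_backdoor(X, y, target_class, rate=0.05, seed=0):
    """Stamp a 3x3 white patch onto a fraction `rate` of the training
    images and relabel them as `target_class`.
    X: (n, h, w) float array in [0, 1]; y: (n,) integer labels."""
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X[idx, -3:, -3:] = 1.0   # trigger patch in the bottom-right corner
    y[idx] = target_class    # flipped labels tie the patch to the target
    return X, y
\end{verbatim}
A model trained on the poisoned set behaves normally on clean inputs but predicts the target class whenever the patch is present.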
\textbf{Evasion/adversarial examples.}
Early work in evasion decreased the test-time accuracy of spam classification~\cite{Dalvi:2004:AC:1014052.1014066}. It was later shown that more complex models also change their output for small, malicious input perturbations~\cite{DBLP:conf/pkdd/BiggioCMNSLGR13,DBLP:journals/corr/SzegedyZSBEGF13}.
Although all classifiers are in principle vulnerable to evasion, recent work focuses on the
arms race in deep learning~\cite{carlini2017adversarial,athalye2018obfuscated,ling2019deepsec,DBLP:conf/ijcai/SharmaDB19}.
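For intuition, consider the fast gradient sign method applied to a logistic-regression model, where the input gradient of the cross-entropy loss has a closed form. The sketch below is a minimal, hypothetical example, not a description of any deployed attack; the step size is illustrative:
\begin{verbatim}
import numpy as np

def fgsm_linear(x, y, w, b, eps=0.1):
    """One-step evasion of a logistic-regression model
    p(y=1|x) = sigmoid(w.x + b) via the fast gradient sign method.
    x, w: (d,) arrays; y in {0, 1}; eps bounds the perturbation."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w          # d(cross-entropy)/dx for a linear model
    return x + eps * np.sign(grad)
\end{verbatim}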
\textbf{Membership inference.}
After first inferring attributes~\cite{ateniese2015hacking}, research later showed that entire data points can be leaked from a model~\cite{2016arXiv161005820S}. More concretely, the attacker deduces, given the output of a trained ML model, whether a data record was part of the training data or not.
As for other attacks, numerous defenses are being proposed~\cite{nasr2018machine,jia2018attriguard,jia2019memguard}.
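A common baseline heuristic exploits the observation that models tend to be more confident on training members. The toy sketch below reflects only this heuristic (the threshold is illustrative); practical attacks such as~\cite{2016arXiv161005820S} instead train shadow models:
\begin{verbatim}
import numpy as np

def membership_guess(confidences, threshold=0.9):
    """Toy membership-inference heuristic: flag a record as a training
    member if the model's confidence in its predicted class (max
    softmax output) exceeds a threshold."""
    return np.asarray(confidences) > threshold
\end{verbatim}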
\textbf{Model stealing.} Tram{\`{e}}r et al.~\cite{DBLP:conf/uss/TramerZJRR16} recently introduced model stealing. Here, the attacker copies the ML model functionality without consent of the model's owner.
The attacker generally has black box access to the model and tries to reproduce a model with similar performance.
As for the previous attacks, mitigations have been proposed~\cite{juuti2019prada,orekondyprediction}.
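In its simplest form, such an extraction attack queries the victim on attacker-chosen inputs and fits a surrogate on the returned labels. The following toy sketch with hypothetical names illustrates the principle and is not the attack of~\cite{DBLP:conf/uss/TramerZJRR16}:
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def steal_model(victim_predict, n_queries=1000, dim=10, seed=0):
    """Toy extraction attack: query the black box `victim_predict`
    on random inputs and fit a surrogate on the returned labels."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_queries, dim))  # attacker queries
    y = victim_predict(X)                              # black-box oracle
    return DecisionTreeClassifier().fit(X, y)          # the stolen copy
\end{verbatim}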
\textbf{Weight perturbations.}
Fault tolerance of neural networks has long been studied in the ML community~\cite{neti1992maximally,breier2018practical}. Recently, maliciously altered weights have been used to introduce a specific backdoor~\cite{dumford2018backdooring,ji2018model}.
Few works exist that defend against malicious changes to the weights in general, beyond backdoor introduction~\cite{stutz2020mitigating,weng2020towards}.
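As a crude illustration of how little needs to change, the hypothetical sketch below nudges a single final-layer bias so that the network favours an attacker-chosen class on ambiguous inputs; real attacks such as~\cite{dumford2018backdooring} edit weights far more selectively:
\begin{verbatim}
import numpy as np

def nudge_final_bias(biases, target_class, delta=2.0):
    """Crude weight-perturbation sketch: increasing the final-layer
    bias of `target_class` biases the network towards that class on
    ambiguous inputs, without any retraining. `delta` is illustrative."""
    biases = np.asarray(biases, dtype=float).copy()
    biases[target_class] += delta
    return biases
\end{verbatim}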
For the sake of completeness, we conclude with a description of additional, recent attacks,
some of which are part of our questionnaires (Appendix~\ref{app:quesAfter}).
In \textbf{adversarial initialization}, the initial weights of a neural network\footnote{Classifiers with convex optimization problems (for example SVM) cannot be targeted, as the mathematical solution to the learning problem does not depend on the initial weights.} are targeted to harm convergence or accuracy during training~\cite{grosse2019adversarial,liu2019bad}.
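One crude way to see why initialization matters is scale: the purely illustrative sketch below draws weights with an overly large standard deviation, which saturates sigmoid or tanh units and thereby slows or stalls training. The attacks in~\cite{grosse2019adversarial,liu2019bad} are considerably more subtle:
\begin{verbatim}
import numpy as np

def adversarial_init(shape, scale=50.0, seed=0):
    """Toy adversarial initialization: overly large initial weights
    saturate sigmoid/tanh units, so gradients vanish and training
    converges poorly. `scale` is illustrative."""
    rng = np.random.default_rng(seed)
    return scale * rng.standard_normal(shape)
\end{verbatim}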
In \textbf{adversarial reprogramming}, an input perturbation mask forces the classifier at test time to perform another classification task than originally intended~\cite{2018arXiv180611146E}.
For example, a cat/dog classifier is reprogrammed to classify digits.
In \textbf{model reverse engineering}, crafted inputs allow an attacker to deduce from a trained model whether dropout was used, among other architectural choices~\cite{joon2018towards}. Finally,
\textbf{sponge attacks} aim to increase energy consumption of the classifier at test time~\cite{shumailov2020sponge}.
In general, AML research has been criticized for the limited practical relevance of its threat models~\cite{gilmer2018motivating,evtimov2020security}.
There is also limited knowledge about which threats are relevant in practice. To the best of our knowledge, only Kumar et al.~\cite{kumar2020adversarial} have studied this question and found that practitioners are most concerned about poisoning and model theft.
Yet, in academia, most work has focused on evasion so far.
To shed more light on AML in practice, we interview industrial practitioners and take a first step towards a theory of mental models of AML. To this end, we now introduce and review mental models.
\subsection{Mental models}
Mental models are relatively enduring and accessible, but limited, internal conceptual representations of external systems~\cite{doyle1998mental} that enable people to interact with given systems. Hence, the field of human computer interaction (HCI) studied this concept quite early~\cite{sasse1991t}. More recently, mental models have gained increasing relevance in usable security. We now recall prior application scenarios and highlight relevant conceptual contributions in the context of security and ML.
\textbf{Mental models in HCI and usable security.}
The relevance of mental models has been subject to a lengthy debate in HCI research~\cite{volkamer2013mental, staggers1993mental}. In many cases, the focus was to capture, depict and analyze mental models of specific objects of investigation. Examples of topics include, but are not limited to, the design of online search applications~\cite{bates1989design}, interface design~\cite{khaslavsky1998integrating}, and interfaces for blind people~\cite{donker2002design}. Research in usable security has recently focused on mental models of security in general~\cite{asgharpour2007mental, wash2011influencing, anellend}, privacy in general~\cite{renaud2014doesn}, security warnings~\cite{bravo2010bridging}, the internet \cite{kang2015my}, the design of security dashboards~\cite{maier2017influence}, the Tor anonymity network~\cite{gallagher2017new}, privacy and security in smart homes~\cite{zeng2017end, tabassum2019don}, encryption~\cite{wu2018tree, abu2018exploring}, HTTPS~\cite{krombholz2019if}, and cryptocurrency systems~\cite{255652}.
With regard to the respective object of investigation, these contributions paved the way for improvements of user interface designs~\cite{gallagher2017new}, adequate security communication~\cite{bravo2010bridging}, as well as the development of security policies and implementation of best-practices~\cite{tabassum2019don}. It has been argued that security mental models contain structural and functional properties~\cite{wu2018tree}. For each application, users develop a cognitive representation of its inherent components, their interconnection and correspondingly possible security threats. This representation helps them to understand where threats could emerge and how they could take effect. Mental models evolve dynamically upon individual interaction with a given application~\cite{binns2018s}.
\textbf{Mental models in ML.}
In order to interact with an ML application, humans need a mental model of how it combines evidence for prediction~\cite{nguyen2018believe}. This is all the more important for ML-based applications, which are often inherently opaque. As Lage et al.~\cite{lage2019human} pointed out, the number of necessary cognitive chunks is the most important type of complexity in order to understand applications. During interaction with black-box processes, humans strive for reduced complexity, which may lead to the development of inaccurate or oversimplified mental models~\cite{kaur2019interpreting, hitron2019can}.
A dedicated line of research therefore elaborates on the relevance and nature of mental models in the context of explainable artificial intelligence.
Mental models have been found to serve as scaffolds not only for a given ML application~\cite{nourani2021anchoring, villareale2021understanding}, but also for its embedding in organizational practices~\cite{zhang2020data}. For data science teams, these workflows usually consist of predefined steps (Figure~\ref{fig:pipeline}) and necessitate interpersonal collaboration. Following Arrieta et al.~\cite{arrieta2020explainable}, we argue that individual collaborators within these teams (e.g. ML engineers, software engineers) develop separate internal representations of a given workflow or application. The need for appropriate mental models thereby increases with the enlarged scope of ML applications~\cite{lakkaraju2016interpretable} and involved stakeholders ~\cite{suresh2021beyond, Langer_2021}.
Recent work in this line of research called for qualitative studies at the intersection of the HCI and ML communities, to better understand the cognitive expectations practitioners have on ML systems~\cite{kaur2019interpreting, 10.1145/3351095.3375624}. Suchlike studies seem all the more relevant as various industry initiatives propagate a human-centric approach to AI, explicitly referring to mental models.\footnote{e.g. https://pair.withgoogle.com/chapter/mental-models/} However, the current scientific discourse lacks a dedicated consideration of cognition in AML. In order to fill this gap, we present the first qualitative study to elicit mental models of adversarial aspects in ML.
\section{Empirical results}\label{sec:results}
We identified four characteristic ranges that describe practitioners’ mental models in AML. Our data indicates that the individual perception varies along these ranges; they are not binary features. Figure~\ref{fig:MentalModelDims} visualizes the two extremes of each of the four characteristic ranges.
As first range, we describe to which degree our subjects mixed standard security and AML concepts.
An example is given in Figure~\ref{fig:MentalModelDims} for model stealing. One extreme is a subject who distinguished between \quoteCode{model stealing} and for example a \quoteCode{code breach}, whereas on the other hand, some subjects were concerned about the model being \emph{somehow copied}.
We provide a detailed description of our findings in Section~\ref{sec:secvsAML}.
The second characteristic range concerns structural components and functional relations between them.
Figure~\ref{fig:MentalModelDims} shows both extremes for model reverse engineering of a neural network. By crafting inputs, an attacker might deduce architectural choices within the functional structure, whereas on the other hand a hyperparameter from the model could be accessed illicitly.
We present our detailed findings on how structural and functional components are relevant in Section~\ref{sec:strucfunc}.
The third range concerns variations in the pipelines, attacks, and defenses described.
An example is shown in Figure~\ref{fig:MentalModelDims} for poisoning attacks. Here, the attacker either injects specific inputs to the application (triangles and squares in our example), or a general, malicious input. The detailed findings on these individual variations are presented in Section~\ref{sec:cornercases}.
The last and fourth characteristic range describes the level of technical depth.
Figure~\ref{fig:MentalModelDims} depicts the two extremes of technical depth for membership inference.
Some participants explained their setting almost on the code level, whereas others would just utter the high level concern of \emph{their data being illicitly accessed}.
More detailed findings on the corresponding variances are presented in Section~\ref{sec:techdepth}. We will now detail each of the four characteristic ranges and give examples of both interviews and drawings where they occurred.
\subsection{Classical security and AML}\label{sec:secvsAML}
We found that our subjects generally did not distinguish between classical security and AML.
Although there is a clear distinction in research, it might not matter in practice whether an attacker obtained a company's data via a social engineering attack, by exploiting a security vulnerability, or via a prediction API.
On the one hand, the boundary between security and AML often appeared blurry or unclear, with the corresponding concepts intertwined. On the other hand,
there were crucial differences in the perception between classical security and AML threats. One difference is that whereas security defenses were often clearly stated as such, AML mitigations\footnote{We are aware that AML is far from being solved, and communicated this to our subjects if required. In this study, we define defenses as techniques which increase the difficulty for an attacker, like retraining or explainability.} were often applied without security incentives. Finally, we find a tendency to not believe in AML threats. Many subjects denied responsibility, doubted an attacker would benefit, or stated the attack does not exist in the wild. There was no such tendency in standard security.
\subsubsection{Mingling AML and security}
We first provide examples to clarify our observation that security and AML were not distinguished by our participants. Afterwards, we examine whether security and AML are used interchangeably by investigating the co-occurrence of codes.
\textbf{Vagueness of the boundary between security and AML.} There are plenty of examples on vagueness about the boundary between classical security and AML. For example \Sub{20} reasoned about evasion: \quoteSub{this would require someone to exactly know how we deploy, right? and, where we deploy to, and which keys we use}. At the beginning, the scenario seems unclear, but the reference to (cryptographic) keys shows that the subject has moved to classical security.
Analogously, \Sub{18} reasoned about membership inference: \quoteSub{but that could be only if you break in [...] if you login in to our computer and then do some data manipulation}. Again, this subject was reasoning about physical access control as opposed to an AML attack via an API.
Sometimes, ambiguity in naming confused our subjects. For example, \Sub{11} thought aloud: \quoteSub{poisoning [...] the only way to install a backdoor into our models would be that we use python modules that are somewhat wicked or have a backdoor}. In this case, the term `backdoor' in our questionnaire triggered a standard security mindset involving libraries in contrary to our original intention to query subjects about neural network backdoors. The same reasoning can also be seen in \Sub{11}'s drawing (compare Figure~\ref{fig:SecLEvel11}), where `backdoor' points to python modules.
Finally, \Sub{12} stated: \quoteSub{maybe the poisoning will be for the neural network. From our point of view you would have to get through the Google cloud infrastructure}. From an AML perspective, the infrastructure is irrelevant, as the model is independent. Yet, the infrastructure is perceived as an obstacle for the attack.
\textbf{Correlations between security and AML attacks.} In the previous paragraph, we showed that the boundaries between AML and classical security are blurred in our interviews.
Another example is \Sub{6} reasoning about IP loss: \quoteSub{we are very much concerned I’d say the models themselves and the training data we have that is a concern if people steal that would be bad}. In this case, it is left out how the attack is performed. Analogously, \Sub{9} remarked: \quoteSub{We could of course deploy our models on the Android phones but we don't want anybody to steal our models}.
To investigate whether our subjects are more concerned about some property or feature (data, IP, the model functionality) than about how it is stolen or harmed,
we examined the co-occurrence of AML and security codes that refer to similar properties in our interviews. For example, the codes \quoteCode{model stealing} and \quoteCode{code breach} both describe a potential loss of the model (albeit the security version is broader). Both codes
occur together six times, with \quoteCode{code breach} being tagged one additional time. Furthermore, the code \quoteCode{model reverse engineering}, listed only two times, occurs both times with both \quoteCode{model stealing} and \quoteCode{code breach}.
However, not all cases are that clear.
For example \quoteCode{membership inference} and \quoteCode{data breach} only occur together two times. The individual codes are more frequent, and were mentioned by three (\quoteCode{membership inference}) and eleven (\quoteCode{data breach}) participants.
Analogously, attacks on availability (such as DDoS) in ML and classical security were only mentioned once together.
Such attacks were brought up in an ML context twice, in standard security four times.
Codes like \quoteCode{evasion} and \quoteCode{poisoning}, in contrast, are not particularly related to any standard security concern. We conclude that AML and security are not interchangeable in our subjects' mental models to refer to attacks with a shared goal.
\subsubsection{Differences between AML and security}
In the previous subsection, we found that our subjects did not distinguish classical security and AML. To show that this is not true in general, we now focus on the differences between the two topics. To this end, we start with the perception of defenses and then consider the overall perception of threats in AML and security. We conclude with a brief remark on the practical relevance of AML.
\textbf{Defenses.} Out of fifteen interviews, in thirteen some kind of defense or mitigation was mentioned; all corresponding interviewees mentioned a security defense (encryption, passwords, sand-boxing, etc). An AML mitigation appeared in eight. In contrast to security defenses, however, AML defenses were often implemented as part of the pipeline, and not seen in relation to security or AML. As an example, \Sub{9}, \Sub{15}, and \Sub{18} reported to have humans in the loop, however not for defensive purposes. \Sub{10} and \Sub{16} were aware that this makes an attack more difficult. For example, \Sub{16} stated: \quoteSub{maybe this poisoning of the data [...] is potentially more possible.
There, we would have to manually check the data itself. We don’t [...] blindly trust feedback from the user}.
Analogous observations hold for techniques like explainable models (3 subjects apply them, 1 on purpose) or retraining (2 apply it, an additional 2 as mitigation).
For example, \Sub{14} said: \quoteSub{when we find high entropy in the confidences of the data [...] for those kind of specific ranges we send them back to the data sets to train a second version of the algorithm}.
In this case, retraining was used to improve the algorithm, not as a mitigation.
We conclude that albeit no definite solution to vulnerability exists, many techniques that increase the difficulty for an attacker are implemented by our subjects. At the same time, many practitioners are unaware which techniques potentially make an attack harder.
\begin{figure*}
\includegraphics[width=0.98\textwidth]{figures/drawings/sub11.png}
\caption{Drawing of \Sub{11}. Red markings were added by the subject before, blue after being confronted with selected attacks.}\label{fig:SecLEvel11}
\end{figure*}
\textbf{Perception of threats.} There is also a huge difference in the perception of threats in security and AML.
In security, threats were somewhat taken for granted. For example, \Sub{9} was concerned about security of the server's passwords \quoteSub{because anybody can reverse-engineer or sniff it or something}. Analogously, \Sub{6} said to pay attention to \quoteSub{the infrastructure so that means that the network the machines but also the application layer we need to look at libraries}.
On the other hand,
almost a third of our subjects (4 of 15) externalized responsibility for AML threats. For example, \Sub{3} said their \quoteSub{main vulnerability from that perspective would probably be more the client would be compromised}.
Analogously, \Sub{1} remarked that ML security was a \quoteSub{concern of the other teams}. In both cases, the subjects referred to another entity, and reasoned that they were not in charge to alleviate risks.
Other reasons not to act include that subjects had not encountered an AML threat yet, concluding AML was not relevant. More concretely, \Sub{9} remarked: \quoteSub{we also have a community feature where people can upload images. And there could be some issues where people could try to upload not safe or try to get around something. But we have not observed that much yet. So it's not really a concern, poisoning}.
Roughly half of the subjects (7 of 15) reported to doubt attackers' motivation or capabilities in the real world. For example, \Sub{1} said: \quoteSub{I have a hard time imagining right now in our use-cases what an attacker might gain from deploying such attacks}.
\Sub{20}, who worked in the medical domain stated: \quoteSub{I’m left thinking, like, why, what could you, achieve from that, by fooling our model. I’m not sure what the benefit is for whoever is trying to do that}.
Finally, many subjects (9 of 15) believed that they have techniques in place which function as defenses. As an in-depth evaluation of which mitigations are effective in which setting is beyond the scope of this paper, we leave it for future work.
\textbf{Practical relevance of AML.} The fact that most subjects did not consider AML threats relevant might simply be an expression of these threats being academic and not occurring in practice. Yet,
our interviews showed that there are already variants of AML attacks in the wild. More concretely, \Sub{10} stated: \quoteSub{What we found is [...] common criminals doing semi-automated fraud using gaps in the AI or the processes, but they probably don’t know what AML, like adversarial machine learning is and that they are doing that. So we have seen plenty of cases are intentional circumventions, we haven’t quite seen like systematic scientific approaches to crime}. The fact that many of our subjects seemed unconcerned about AML could then be an indicator that harmful AML attacks are (still) rare in practice.
\subsubsection{Summary}
On the one hand, classical IT security and AML were mingled in our subjects' mental models: the boundaries between the corresponding threats were often unclear. Yet, security and AML were not interchangeable in our subjects' mental models when referring to attacks with a shared goal.
Furthermore, security threats were treated differently than AML threats: the latter were often considered less relevant. Finally, as our interviews show, there are already variants of AML attacks in practice. We now turn to more general properties of mental models in AML which we discovered during the interviews.
\subsection{Structural and functional components} \label{sec:strucfunc}
We found structural and functional components in our subjects' mental models. Structural components cover single, constituting entities that an individual perceives as relevant within a given application. Functional components describe an individual's perception of the relations between the structural elements. As intended, the structure of our interview and drawing task (Appendix~\ref{app:interview}) allowed us to investigate these properties on the level of the ML pipeline, of the attack vectors as well as of the defenses.
\begin{figure}
\includegraphics[width=0.98\columnwidth]{figures/drawings/sub01.png}
\caption{\Sub{1} inserted red arrows to indicate attacks.}\label{fig:SpecAttacks1}
\end{figure}
\subsubsection{ML pipeline} All subjects distinguish clearly separable elements within their ML workflow. The specific composition of these steps defines the structure of a certain ML pipeline. For two participants, this structure reflects the ML pipeline that we introduced in Figure~\ref{fig:pipeline}. When asked to sketch the kind of pipeline applied, \Sub{4} talked about \quoteSub{data}, \quoteSub{training}, \quoteSub{testing}, and \quoteSub{visualization}. We argue that these structural components serve as a scaffold for an individual's mental model. Interestingly, the mental models of 12 out of 15 subjects covered additional components that we did not expect prior to the study. The sketches of \Sub{3}, \Sub{7}, and \Sub{11} (Figure~\ref{fig:SecLEvel11}), for example, contain explicit elements for data capturing. \Sub{1} (Figure~\ref{fig:SpecAttacks1}), \Sub{9}, \Sub{12}, as well as \Sub{20} included dedicated elements representing a specific database in their drawings. Five subjects also highlighted structural elements within the deployment environment during the interviews. \Sub{14}, for example, elaborated on an API for deployment \quoteSub{on several kinds of hardware architectures}. Analogously, \Sub{1} described an API that \quoteSub{can be used to allow the user to interact with the models} (Figure~\ref{fig:SpecAttacks1}). Hence, these structural elements concerning data and deployment seem to be of importance for the corresponding mental models. However, the perception of industrial practitioners does not only focus on these structural components but also covers functional aspects. \Sub{6} for instance stated that his ML pipeline \quoteSub{forks into a number of different directions and there are also interactions between the different components}. In the corresponding sketch, multiple arrows within and across specific ML models indicate this interconnection of single components. Other drawings include this functional perspective through straight lines connecting the structural components (see Figure~\ref{fig:SpecAttacks1}, \Sub{1}), arrows connecting some of the structural components in a subsequent manner (e.g. \Sub{14}), and arrows connecting all structural components in a subsequent manner (\Sub{18} in Figure~\ref{fig:SecLEvel18}).
\begin{figure}
\includegraphics[width=0.98\columnwidth]{figures/drawings/sub16.png}
\caption{Drawing of \Sub{16}. Colors were added after the subject had been shown selected attacks. Red refers to evasion, purple to reverse engineering, blue to membership inference.}\label{fig:SpecAttacks16}
\end{figure}
\subsubsection{Attack vectors} The identified structural and functional components seem to be similarly relevant for mental models on attack vectors. For any kind of ML-specific threat, participants were able to precisely locate the corresponding structural starting point. These have been specifically named during the interview and sketched via labelled arrows (e.g. Figure~\ref{fig:SecLEvel11}, \Sub{11}), additional annotations (\Sub{11}, \Sub{15}), highlighted parts of potentially vulnerable pipeline components (e.g. Figure~\ref{fig:SpecAttacks10}, \Sub{10}) or as entire steps within a given ML workflow that have been marked as vulnerable (\Sub{9}, \Sub{20}). Strikingly, we saw a wide overlap in the perception of potential focal starting points for attack vectors. Study participants considered the model itself, the input of their ML pipeline, or the deployment environment to be particularly vulnerable. Figure~\ref{fig:SpecAttacks16} (\Sub{16}) shows this for the latter. When confronted with poisoning and reverse engineering attacks, \Sub{16} marked the input and output of his pipeline as possible starting points for threats (purple rectangles) and talked about how a competitor could \quoteSub{screw our labeled dataset} or a customer might \quoteSub{ask a lot of questions to the API}. However, the perception of attack vectors also covered functional components. \Sub{1}, for example, depicted the causal sequence of a \quoteSub{data injection attack} as three consecutive red arrows connecting different components of his ML pipeline (Figure~\ref{fig:SpecAttacks1}). This is all the more relevant, as \Sub{1} provided such a functional explanation and drawing for each of the attack vectors we presented to him. His mental models, hence, clearly seem to contain functional components. This is also the case for \Sub{16}, who similarly provided explanations on the functional evolution of certain attacks within his workflow and even added corresponding functional elements to his sketch (blue and red arrows in Figure~\ref{fig:SpecAttacks16}).
\subsubsection{Defenses} Although we found participants' explanations and sketches for defenses to be rather sparse, structural and functional properties are also relevant for the corresponding mental models. As can be seen in the sketch of \Sub{18}, defenses are often thought of as structurally bound to specific components of a workflow/pipeline (Figure~\ref{fig:SecLEvel18}, \Sub{18}). Data (\Sub{14}), training (\Sub{6}) and the models themselves (\Sub{10}) have been specifically named as focal points for implementing defenses. In the case of defenses implemented at the model component, \Sub{14} stated that they \quoteSub{regularize in a way that makes it less sensitive to an adversary}.
Hence, these implemented defenses are cognitively attached to the classifier as a focal pipeline component. However, security mental models also contain functional properties. In the case of human-in-the-loop-defenses, for example, \Sub{14} stated to send certain classifications \quoteSub{back to the data sets to train a second version of the algorithm} if the output confidence for certain data exhibited high entropy.
This is depicted in the corresponding sketch by an arrow pointing from a rectangle with the caption \quoteSub{CPU} at the end of the pipeline to \quoteSub{raw data} (initial step of the pipeline). Similarly, \Sub{7}, whose company operates in video surveillance, explained the defense they had implemented to secure the transfer of input data (from cameras and on-site computers) into their pipeline: \quoteSub{This can only go out, never go in. [...] Nothing from the internet can connect to that server}. Industrial practitioners, hence, perceive defenses as containing functional components in order to unfold their full effect.
\subsubsection{Summary} Mental models in AML are thus composed of structural components which are cognitively put into (internal) relation. The specific unfolding of these internal conceptual representations, however, seems to depend on the corresponding application and its underlying ML pipeline. Beyond this, there are further sources of variation in our subjects' mental models. We now investigate these variations.
\subsection{Variations at the individual level}\label{sec:cornercases}
Our interviews showed great variation regarding which threats were reported and in what detail, if at all. We investigate possible underlying causes that might influence these differences across subjects, including prior knowledge and education as well as the subjects' application domain.
\begin{figure}
\includegraphics[width=0.98\columnwidth]{figures/drawings/sub18.png}
\caption{Drawing of \Sub{18}. Red star indicates the most important component of the pipeline, not an attack.}\label{fig:SecLEvel18}
\end{figure}
\subsubsection{Variation across subjects} We start with the variation of mental models across subjects.
\textbf{Perceived relevance of AML.}
The practitioners differed in the importance they attributed to AML. A third of them (5 of 15) did not mention AML at all before we explicitly asked. Another third reported that they were not very concerned about AML. For example, \Sub{1} stated that evasion, i.e. \quoteSub{injecting malicious data to basically make the model [...] predict the wrong things}, was \quoteSub{a concern that is not as high on my priority list}.
\Sub{15}, analogously, said: \quoteSub{mainly the machine learning pipeline this is the less critical security problem}, reasoning that \quoteSub{simply a performance would be unexpected}. Yet, over a third (6 of 15) of the participants reported feeling insecure about AML when confronted with the topic. Of these six subjects, two had previously assigned low priority to AML, and three did not mention AML at all.
An example of insecurity is \Sub{4}, who stated she needed \quoteSub{some more research on it}. Some subjects, like \Sub{19}, were concerned about specific attacks: \quoteSub{I maybe need to learn more about this membership}. We summarize that some practitioners consider AML threats important, whereas some subjects did not know AML well, and yet others did not consider it an important threat. From each of these three groups, there was at least one subject who did not feel well informed. After the interviews (e.g., off the record), some participants stated that their awareness of AML had increased due to the interview.
\textbf{Specificity of attacks.}
Not only the overall opinion, but also the specificity with which attacks were described varied greatly.
On the one hand,
\Sub{1} (Figure~\ref{fig:SpecAttacks1}) added text to the starting point of the attack in his drawing. He also depicted how it propagated through the individual steps using red arrows. On the other hand, \Sub{10} (Figure~\ref{fig:SpecAttacks10}) only added blue color to denote that an attack is possible at the input or output of the system. Yet, a vague representation in the drawing does not imply a vague description of the attacks. During the interview, \Sub{10} stated: \quoteSub{we have to work with the assumption that the data we have [...] may ... contain ... basically unlimited number of modified samples or input data and that we don't know which ones are they and whether they would come in next day or so}. This paraphrases, in contrast to the drawing, poisoning fairly accurately. By comparison, \Sub{6} described a possible threat more vaguely as \quoteSub{the models themselves and the training data we have, that is a concern if people steal that would be bad}.
\subsubsection{Features influencing AML perception} Having shown the differences in our participants' perception of AML, we now focus on two major factors that could explain these differences. We first investigate the application setting of the ML projects, and then examine the educational background of our subjects as a possible explanation.
\textbf{Application setting.}
We first study the influence of the application domain of each subject. As we expect practitioners in security-related tasks to show different behavior, we explore both cases separately, starting with subjects working in security-related fields.
\Sub{10}, who worked in a cybersecurity setting, reported: \quoteSub{there is some standard AML attacks on ML you can use, but we design our system knowing that very well; on the other hand, we know that there is no perfect security, so, again defense is in monitoring and vigilance, but it’s not something that can be fully automated in our opinion}. \Sub{10} was in general very sensitive towards AML.
\Sub{4}, also from a cybersecurity setting, was less concerned about evasion: \quoteSub{I can’t imagine yet how it can be applied for real life, for example [...] since we are pretty close on our development}.
Yet, \Sub{4} also stated the need to gather more information about AML.
Hence, even participants who worked in security-related areas had diverse mental models with respect to concrete attacks.
Subjects from non-security fields have similarly diverse mental models. This diversity is also reflected in the drawings. \Sub{11} (Figure~\ref{fig:SecLEvel11}) added some attacks (in red) before we provided explanations of evasion, backdooring and membership inference (added in blue). \Sub{18} (Figure~\ref{fig:SecLEvel18}), on the other hand, did not add any threats in his drawing. Analogously, opinions also differ in the interviews; e.g.,
\Sub{15}, who worked in a non-security setting, was aware of security issues: \quoteSub{one interesting thing of course is that the solution is in some ways constraint by adversarial security considerations so for example you cannot use natural language generation very much because of potential adversarial behavior}.
On the other hand, and confirming the drawing, \Sub{18} reported that \quoteSub{we do not really protect the machine learning part}.
\textbf{Prior knowledge.} We found no relation between education and capability or knowledge regarding AML in our sample. One subject self-reported high knowledge in AML, but also stated:
\quoteSub{maybe the poisoning will be for the neural network from our point of view you would have to get through the Google cloud infrastructure this is one part of why we giving some or providing some models to third party so this could be a risk}.
Here, a general attack, poisoning, is tied to one individual model class (neural networks). Furthermore, the cloud infrastructure is attributed a defense status, although it is independent of the attack.
On the other side of the spectrum, \Sub{9} did not self-report any knowledge about security or AML, but correctly remarked: \quoteSub{Somebody could send us 100.000 images and collect all the results and try to build a model from that}. We conclude that none of these properties directly explain the diversity in our subjects' mental models of AML.
\subsubsection{Summary} In this section, we considered the individual variation within the mental models of the practitioners that we interviewed. We showed two such examples: one was the concern expressed about AML, the other the specificity of the attacks described. We investigated two possible influences on mental models, the task at hand and prior education as reported by our subjects. Neither property had a strong influence on mental models in our sample.
To conclude the section, we have a more general look at the variance of the technical depth of our participants' perceptions.
\begin{figure}
\includegraphics[width=0.98\columnwidth]{figures/drawings/sub10.png}
\caption{Drawing of \Sub{10}. Important components of the workflow added in blue, possible
starting points for attacks in red.}\label{fig:SpecAttacks10}
\end{figure}
\subsection{Sophisticated and sparse technical depth} \label{sec:techdepth}
The degree of detail in explanations and drawings defines the technical depth of participants' mental models of AML. We found these to contain in-depth technological descriptions as well as more abstract and ambiguous facets.
\subsubsection{Sparse technical depth} Some participants showed a stronger focus on higher-level concepts of their ML-based project. Concerning the general perception of the ML pipeline (Figure~\ref{fig:pipeline}), this seems to affect mainly the relevance of ML models as such within the pipeline. Although 10 out of 15 subjects talked about models as pipeline components, their explanations remained rather superficial from a technical perspective. The coded text snippets cover terms such as \quoteSub{model} (\Sub{4}), \quoteSub{classification} (\Sub{19}), \quoteSub{classifier} (\Sub{15}), or, at most, a specification of the model type (e.g. \quoteSub{neural network}, \Sub{15}). 6 out of 15 subjects did not even include the model in their drawings. Instead, for example, \Sub{10} just sketched a rectangle with the caption \quoteSub{detection} and possible inferences about input data indicated by an arrow (Figure~\ref{fig:SpecAttacks10}). This level of abstraction can also be observed for attack vectors on the ML pipeline. Asked to specify a certain threat model, \Sub{19}, for example, stated: \quoteSub{It's like everywhere. Internal threats, external threats. Trying to mess with the communication, trying to mess if we model something}. However, it remains unclear how such an attack would actually take effect. In a similar manner, \Sub{14} explained that an adversary could \quoteSub{try to put some pythons in non conforming ways to trigger networks}. From a technical perspective, this is a rather unspecific description of potential security issues within the given project. This also seems to apply to the defenses that our participants' organizations use to counter AML-specific security threats. \Sub{18}, for example, first explained that \quoteSub{the countermeasures are all in the API}. Only after he opened the corresponding security documentation was this subject able to provide further details on the implemented defenses. Hence, mental models in AML can entail a certain level of abstraction in terms of technical depth.
\subsubsection{Sophisticated technical depth} However, four participants revealed a clear orientation towards actual technological implementation. \Sub{9}, for example, described the underlying pipeline of his project in great detail and in chronological order. This subject also included hardware/software components in his explanation and precisely defined data pre-processing: \quoteSub{From these 20 million images we have about 500.000 images that are labelled by experts. From those we take labelled images of certain quality and build a data set. (...) We do a lot of stuff where we make sure that, for example, the images from one user don't end up in both training and test. We remove duplicates and lots of other steps. And then we create TensorFlow files from that}. Remarkably, such a degree of detail mostly concerned more fine-grained perceptions of the ML pipeline depicted in Figure~\ref{fig:pipeline}. \Sub{4}, for example, clearly differentiated between \quoteSub{model study}, \quoteSub{model training}, \quoteSub{model testing} and \quoteSub{model validation} in her sketch. In a similar manner, \Sub{14} walked us through the logical sequence of his whole workflow. This subject even presented multiple options for internal components (e.g. various kinds of losses) which could be replaced according to the given use case: \quoteSub{Everything is composable, so you can really pick the ones that work best for you}. Such an in-depth perception eases the procedural understanding of attack vectors. Doubting the relevance of poisoning in real-world scenarios, \Sub{7} stated that they \quoteSub{control training (...) so although you decide one day to go one station and feed us with a lot of wrong data, at the end of the year is \nicefrac{1}{365} percent of our training data. So like that would mean nothing to the neural network}. Hence, mental models in AML can come with a sophisticated technical depth.
\subsubsection{Summary} To conclude, industrial practitioners perceive the components of an ML pipeline at a varying level of technical depth. This includes the interconnection of these components and corresponding possible security threats.
\section{ Introduction: Coercive Inequalities Problem on Nilpotent Lie Groups}
\label{Sec.1}
\noindent Let $\mathbb{G}$ be a Carnot group
with horizontal dimension $n\in\mathbb{N}$, and let $\nabla\equiv (X_i)_{i=1,..,n}$ and $\Delta \equiv \sum_{i=1,..,n} X_i^2 $
denote the corresponding sub-gradient and sub-Laplacian, respectively.
In this section we describe possible natural approaches to the coercive bounds of the following form,
\[ \mu\left( |f|^q \mathcal{V}\right)\leq
C\mu |\nabla f|^q +D \mu|f|^q
\tag{*}
\]
with some $q\in[1,\infty)$, constants $C,D\in(0,\infty)$ and some function $\mathcal{V}$ independent of the function $f$. \\
When $\mathcal{V}(x)$ diverges to infinity
as the distance $d(x,0)\to \infty$, such bounds have a multitude of applications in proving coercive inequalities, see e.g. \cite{R}, \cite{A}, \cite{HZ},
\cite{I}, \cite{IKZ}, \cite{WZ}, \cite{ChFZ}.
In particular, what we are after is the interesting and very challenging problem of how to obtain coercive inequalities for probability measures on Carnot groups. Besides the direct interest in analysis on groups, additional motivation comes from the fact that such coercive inequalities play an important role in studying ergodicity properties of Markov semigroups, spectral theory, isoperimetry, etc., in finite and infinite dimensional spaces
(see e.g. \cite{BoZ1}, \cite{BoZ2}, \cite{LZ}, \cite{KOZ}, \cite{RZ},\cite{RZ2} and references therein).
One approach to show (*), described in \cite{HZ}, uses the Leibniz rule together with integration by parts as follows. For a probability measure of the form $d\mu=e^{-U} d\lambda$, we note
\[
(\nabla f)e^{-\frac12 U} = (\nabla f e^{-\frac12 U})+
\frac12 f (\nabla U) e^{-\frac12 U}.
\]
Hence squaring and integrating both sides with respect to $\lambda$, we have
\[ \begin{split}
&\int |\nabla f|^2 e^{-U}d\lambda = \\
& \int f^2 \frac14 |\nabla U|^2e^{-U} d\lambda +\int (\nabla f e^{-\frac12 U})\cdot (\nabla U )f e^{-\frac12 U} d\lambda + \int (\nabla f e^{-\frac12 U})^2 d\lambda
\end{split}\]
and after integration by parts in the middle term and dropping the (nonnegative) third term on the right hand side, we arrive at
\[
\int f^2 \left(\frac14 |\nabla U|^2 -\frac12 \Delta U\right) d\mu \leq \int |\nabla f|^2 d\mu
\tag{$\diamond$}
\]
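Here the middle term has been computed, for sufficiently regular $f$ and $U$, via the standard integration by parts for the sub-gradient: writing $f^2 e^{-U} = (f e^{-\frac12 U})^2$,
\[
\int \nabla\left(f e^{-\frac12 U}\right)\cdot (\nabla U)\, f e^{-\frac12 U}\, d\lambda
= \frac12 \int \nabla\left(f^2 e^{-U}\right)\cdot \nabla U \, d\lambda
= -\frac12 \int f^2 \left(\Delta U\right) e^{-U}\, d\lambda .
\]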
If one can show
that the following potential
\[\mathcal{V}_2 \equiv \frac14 |\nabla U|^2 -\frac12 \Delta U \]
is locally bounded and diverges to infinity in all directions, then, by
\cite{HZ},
one gets the Poincar{\'e} inequality for $\mu$ with the sub-gradient $\nabla$.
In fact, assuming the Poincar{\'e} inequality on balls for the Lebesgue measure $\lambda$ and local boundedness of $U$, to obtain the Poincar{\'e} inequality for $\mu$ with the sub-gradient it is sufficient to show the following bound
\[
\int f^2 \eta d\mu \leq A \int |\nabla f|^2 d\mu
+B \int f^2 d\mu
\tag{$\diamond\diamond$}
\]
with a function $\eta$ diverging to infinity with the distance from the origin, and some constants $A,B\in(0,\infty)$ independent of $f$, for all $f$ for which the integrals on the right hand side are well defined, \cite{HZ}.\\
If additionally one has
\[ |\nabla U|^2 + U \leq a (1+ \mathcal{V}_2) \]
then one gets Log-Sobolev type inequalities,
\cite{HZ} (and possibly further non-tight coercive inequalities, \cite{R}, \cite{A}, which were recently tightened in \cite{WZ}). \\
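As a toy illustration of these conditions, consider the abelian (step-one) case $\mathbb{G}=\mathbb{R}^n$ with the full gradient and $U(x)=|x|^2$, for which everything is explicit:
\[
\nabla U = 2x, \qquad \Delta U = 2n, \qquad
\mathcal{V}_2 = \frac14 |\nabla U|^2 - \frac12 \Delta U = |x|^2 - n ,
\]
so $\mathcal{V}_2$ is locally bounded and diverges in all directions, while $|\nabla U|^2 + U = 5|x|^2 \leq a\left(1+\mathcal{V}_2\right)$ holds outside a ball for any $a\geq 6$; this recovers the classical Poincar{\'e} and Log-Sobolev inequalities for the Gaussian measure.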
Another point of view is to start from quasi-invariant measures and the corresponding unitary groups,
{\`a} la \cite{AHK}, as follows.
Assuming the measure $\mu(\cdot\circ g)$, shifted by an action of the group element $g$, is absolutely continuous with respect to $\mu$, we define one parameter unitary groups
\[
S_t f(\omega)\equiv f(\omega\circ g_t)
\left( \frac{d\mu(\omega\circ g_t)}{d\mu(\omega)} \right)^\frac12
\]
These groups have generators of the form
\[
\mathbb{X}f \equiv \nabla f -f\frac12 \nabla U
\]
which satisfy the integration by parts formula with respect to the measure $\mu$. Hence we can consider the Dirichlet operator
\[
\int f (-\mathcal{L} f) d\mu \equiv \int|\mathbb{X}f |^2 d\mu \geq 0 \tag{$\star\star$}
\]
defined on a dense domain.
A simple computation shows that
\[
\mathcal{L} f = \mathbb{X}^2 f = \Delta f - \nabla U\cdot \nabla f
+\left(\frac14 |\nabla U|^2- \frac12 \Delta U \right)f \equiv L f + \mathcal{V}_2 f
\]
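For the reader's convenience, this is just a term-by-term expansion:
\[
\mathbb{X}^2 f = \sum_{i=1}^{n}\left(X_i - \tfrac12 (X_i U)\right)\left(X_i f - \tfrac12 f\, (X_i U)\right)
= \Delta f - \nabla U\cdot\nabla f + \left(\tfrac14|\nabla U|^2 - \tfrac12 \Delta U\right) f .
\]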
Since the operator $L$ on the right hand side is the Dirichlet operator associated with the sub-gradient
\[
\int f (-L f)d\mu = \int |\nabla f|^2 d\mu ,
\]
the relation ($\star\star$) implies the coercive bound ($\diamond$). \\
Finally, if one is interested in the following inequality on $\mathbb{G}$ (outside some ball)
\[ - \frac12\Delta U +\frac14 |\nabla U|^2 \geq \eta - B .\]
with $\eta(x)\underset{d(x,0)\to\infty}{\longrightarrow}\infty$ and a constant $B\in(0,\infty)$,
one should notice that
%
\[ \Delta e^{-\frac12U}=\frac12 \nabla\cdot\left(-(\nabla U)e^{-\frac12U}\right)=
\left(-\frac12 \Delta U +\frac14 |\nabla U|^2\right) e^{-\frac12 U}
\]
Hence our problem is equivalent to the following
\[
\Delta e^{-\frac12 U} \geq (\eta -B) e^{-\frac12U}
\]
and one could consider it as the following differential inequality.
\[ \label{bigstar}
\begin{split}
\exists ? \Phi>0 \qquad & \Delta \Phi
\geq (\eta-B) \Phi \\ & \Phi(x)\underset{d(x,0)\to\infty}{\longrightarrow}0
\end{split}
\tag{{$\bigstar$}}
\]
In more general form, for example to obtain Log-Sobolev inequalities we may also like to have
\[\label{2bigstars}
\Delta \Phi
\geq \eta\; \Phi \log(\frac1{\Phi})
\tag{{$\bigstar\bigstar$}}
\]
Focusing for a moment on (\ref{bigstar}), we rewrite it as follows: for $\omega\in\Omega^c$, with some ball $\Omega\subset\mathbb{G}$ and a constant $\gamma\in(0,\infty)$,
\[
(-\Delta +\gamma)\Phi \leq - \tilde\eta \Phi ,
\qquad \tilde\eta \equiv \eta - B - \gamma,
\]
which can be represented as
\[
0\leq \Phi \leq \Psi_0 + \int_0^\infty dt e^{-\gamma t} e^{t\Delta} (\tilde \eta \Phi)
\tag{$\bigstar.1$}\]
with some $\Psi_0$ satisfying $(-\Delta +\gamma)\Psi_0(\omega)=0$ for $\omega\in\Omega^c$ and $\Psi_0(\omega)\underset{d(\omega,0)\to\infty}{\longrightarrow}0$. At this point we recall that the heat kernel $p_t(w,v)$ on $\mathbb{G}$ satisfies the following estimate, e.g. \cite{H},
\[
p_t(w,v)\leq C\; t^{-Q/2}\; e^{-\frac{d^2(w,v)}{2\alpha t} }
\]
with some positive constants $C,\alpha\in(0,\infty)$ and $Q$ denoting homogeneous dimension of the group.
Hence ($\bigstar.1$) takes the following form
\[
0\leq \Phi(w) \leq \Psi_0(w) + C\int_0^\infty dt\, e^{-\gamma t}\, t^{-Q/2} \int e^{-\frac{d^2(w,v)}{2\alpha t}}\, (\tilde \eta \Phi)(v)\, dv
\tag{$\bigstar.2$}\]
\vspace{0.25cm}
Some problems of particular interest involve
$\Phi$ which is a function of a homogeneous norm on the Carnot group. In particular, choosing dependence on the control distance $d$, we can ask for which Carnot groups we can find a solution of the following problem
\[
\Phi'(d)\, \Delta d
+ \Phi''(d)\, |\nabla d|^2
\geq \eta(d)\; \Phi (d) .
\]
Taking into account that for the control distance by definition $|\nabla d|^2=1$, we obtain the following relation
\[
\Delta d
\leq -\frac{\Phi'' (d) }{\Phi'(d)} + \eta(d)\; \frac{\Phi (d) }{\Phi'(d)} \tag{$\star\star\star\star$}
\]
at points where $\Phi'(d)< 0$, as is the case e.g. in the special but already interesting situation of a monotone $\Phi$ with maximum only at zero. \\
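To illustrate what kind of bound on $\Delta d$ is being asked for, consider (as a sketch on our part, not used later) the ansatz $\Phi(s)=e^{-\beta s^p}$ with $\beta, p\in(0,\infty)$, which is monotone with maximum only at zero. Then
\[
-\frac{\Phi''(d)}{\Phi'(d)} = \beta p\, d^{\,p-1} - \frac{p-1}{d},
\qquad
\eta(d)\,\frac{\Phi(d)}{\Phi'(d)} = -\frac{\eta(d)}{\beta p\, d^{\,p-1}},
\]
so ($\star\star\star\star$) reads
\[
\Delta d \;\leq\; \beta p\, d^{\,p-1} - \frac{p-1}{d} - \frac{\eta(d)}{\beta p\, d^{\,p-1}} ;
\]
for instance, for $\eta(d)\leq \frac12 \beta^2 p^2 d^{\,2p-2}$ outside a ball, the right hand side grows like $\frac{\beta p}{2}\, d^{\,p-1}$, so an upper bound on $\Delta d$ of lower order would suffice.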
We notice that Laplacian bounds for the distance are well known and play an important role in certain problems in Riemannian geometry, \cite{D}, \cite{ChBPL},
but to the best of our knowledge there is no literature on this problem in the context of analysis on nilpotent Lie groups.
We remark that one could consider a similar problem with a function $\Phi$ of some other homogeneous norm on the group. We know that, if we switch to smooth homogeneous norms, generally we cannot get the Log-Sobolev inequality, \cite{HZ}. But there are examples in the literature of Poincar{\'e} and weaker log$^\beta$ inequalities in such a setup, see \cite{I}, \cite{IKZ} for the Heisenberg group, \cite{BDZ1}, \cite{BDZ2} for type-2, and \cite{ChFZ} for filiform type groups. Later in this paper we provide some general constructions which work for more extensive classes of Carnot groups.
Thus we are led to considering the following problem
\[
\Delta \Phi(K)
\geq \eta(K)\; \Phi(K)
\]
involving some other homogeneous norm $K$, where, using the equivalence of homogeneous norms, we choose $\eta \equiv \eta(K)$.
In particular for the Kaplan type norm $N$, we have
the following general relation, \cite{BLU}
\[
\Delta N=
\frac{(Q-1)|\nabla N|^{2}}{N},\label{eq:1.2}
\]
where $Q$ is the homogeneous dimension of the group.
Taking this into account, our problem reads as follows
\[
\left( \Phi''(N) + \Phi'(N)\frac{(Q-1)}{N}
\right) |\nabla N|^{2}
\geq \eta(N)
\Phi(N).
\]
Unfortunately, for smooth homogeneous norms, $\nabla N$ vanishes in some directions, so we cannot have $\eta$ growing to infinity with the distance, and the pointwise strategy doesn't work.
However, in some cases one can overcome this problem by other arguments and get the Poincar{\'e} inequality (but not Log-Sobolev), \cite{I}, \cite{BDZ1}, \cite{BDZ2}, \cite{ChFZ}.
\\
In the situation when very little is known about coercive inequalities for probability measures on Carnot groups, in this paper we explore yet another direction. We propose an approach of taming via singularities; that is, we develop a technique of introducing natural singularities in the energy function $U$ in order to force one of the coercivity conditions. In particular, we explore explicit constructions of measures on type-2 Carnot groups which secure the Poincar{\'e} and even the Log-Sobolev inequality.
In Sections 2--4, we consider additive and multiplicative ways of introducing a singularity (by an additive term or a multiplicative factor given by a positive singular homogeneous function of the horizontal coordinates). It is interesting that by our technique we can provide new examples of probability measures for which the Poincar{\'e} or even the Log-Sobolev inequality is satisfied. Moreover, we are able to provide classes of examples going far beyond the existing results of
\cite{HZ}, \cite{I}, \cite{BDZ1}, \cite{BDZ2}, \cite{ChFZ}.
In particular, we have results for all type-2 Carnot groups as well as some other general cases (possibly satisfying certain technical conditions for homogeneous norms). \\
In Section 5, we explore a universality hypothesis which suggests a classification framework for possible coercivity theory on Carnot groups.
\newpage
\section{Additive Taming}
\label{Additive Taming}
\subsection{Poincar{\'e} and Log Sobolev inequalities on Carnot Groups of Type 2}$\;$\\
\label{Sec.2}
\vspace{0.25cm}
\noindent Let $\mathbb{G}$ be a step-2 group, i.e. a group isomorphic to ${ \mathbb{R}^{n+m}}$
with the group law
\[
\left(w,z\right)\circ\left(w',z'\right)=\left(w_{i}+w'_{i},~z_{j}+z'_{j}+\frac{1}{2}<\Lambda^{\left(j\right)}w,w'>\right)_{i=1,..,n; j=1,..,m}
\]
for $w,w'\in\mathbb{R}^{n},z,z'\in\mathbb{R}^{m}$, where the matrices
$\Lambda^{\left(j\right)}$ are $n \times n$ skew-symmetric and linearly independent.
For $i\in\left\{ 1,\ldots,n\right\}$ and $j\in\left\{ 1,\ldots,m\right\} $, let
\[ X_{i}=\frac{\partial}{\partial x_{i}}+\frac{1}{2}\sum_{k=1}^{m}\sum_{l=1}^{n}\Lambda_{il}^{\left(k\right)}x_{l}\frac{\partial}{\partial z_{k}}\qquad \textrm{ and } \qquad Z_{j}=\frac{\partial}{\partial z_{j}}.
\]
Later on, $\nabla \equiv (X_i)_{i=1,..,n}$ and $\Delta \equiv \sum_{i=1,..,n} X_i^2 $ will denote the associated sub-gradient
and sub-Laplacian, respectively.
We consider the following smooth homogeneous norm on $\mathbb{G}$
\begin{equation}\label{2.N} N\equiv \left( |x|^4+ a|z|^2\right)^{\frac{1}{4}} .
\end{equation}
with $a\in(0,\infty)$.
Define
\[ \label{4D.U.1}
{\color{blue}
U\equiv V(\beta N+\xi(x)) }
\]
with a real twice differentiable function $V$ and $\xi(x)\equiv \xi(|x|)$ to be specified later, and some constant $\beta\in(0,\infty)$.
We will assume that
\[ 0< Z\equiv \int e^{-U}d\lambda <\infty \]
and consider the following probability measure on $\mathbb{G}$
\[d\mu_U \equiv \frac1{Z}e^{-U}d\lambda. \]
For $\sigma\in(0,\infty)$, consider
\begin{equation} \label{eq:4.D.xi.1}
\xi\equiv \xi(|x|)\equiv\frac{1}{|x|^\sigma},
\end{equation}
Then
\begin{equation} \label{eq:4.D.xi.2}
\nabla\xi(x) = -\frac{\sigma}{|x|^{1+\sigma}}\cdot\frac{x}{|x|},\qquad
%
\Delta\xi(x) = - \sigma\frac{n-2-\sigma}{|x|^{2+\sigma}}
\end{equation}
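Here $\nabla\xi$ and $\Delta\xi$ can be computed in the Euclidean way: since $\xi$ depends only on the horizontal variables, the vertical part of each field $X_i$ does not act on it,
\[
X_i\,\xi(|x|) = \frac{\partial}{\partial x_i}\,\xi(|x|) + \frac12\sum_{k=1}^{m}\sum_{l=1}^{n}\Lambda_{il}^{(k)} x_l\, \frac{\partial}{\partial z_k}\,\xi(|x|) = \frac{\partial}{\partial x_i}\,\xi(|x|),
\]
so that $\nabla\xi$ and $\Delta\xi$ in \eqref{eq:4.D.xi.2} are simply the Euclidean gradient and Laplacian of $|x|^{-\sigma}$ in the $x$-variables.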
We have
\begin{equation} \label{4D.U.2}
\begin{split}
|\nabla U|^2 &= |\beta \nabla N + \nabla \xi|^2 (V')^2 \\
\Delta U &=
(\beta \Delta N +\Delta \xi)V' +|\beta \nabla N + \nabla \xi|^2 V''
\end{split}
\end{equation}
and hence
\begin{equation} \label{4D.U.3}
\begin{split}
\mathcal{V}_2 \equiv \frac14|\nabla U|^2 -\frac12 \Delta U =
|\beta\nabla N + \nabla \xi|^2 \left(\frac14(V')^2 -\frac12 V''\right)\\
-\frac12
(\beta\Delta N +\Delta \xi)V'
\end{split}
\end{equation}
From our choice of $\xi$ with \eqref{eq:4.D.xi.2} we get
\begin{equation} \label{4D.U.4}
\begin{split}
\mathcal{V}_2
=
\left|\beta\nabla N -\frac{\sigma}{|x|^{1+\sigma}}\cdot\frac{x}{|x|}\right|^2 \left(\frac14(V')^2 - \frac12 V''\right)\\
+\frac12
\left(\sigma\frac{n-2-\sigma}{|x|^{2+\sigma}} - \beta \Delta N \right)V'
\end{split}
\end{equation}
\subsection{Heisenberg Group Case}\label{2.Case.1H}
\begin{theorem} $\;$\\
Let $N$ be the Kaplan norm on the Heisenberg group and let $
\xi\equiv \xi(|x|)\equiv\frac{1}{|x|^\sigma}$,
with $ \sigma\in(0,\infty)$.
Suppose there exists $R\in(0,\infty)$ such that for all $s\geq R$ we have
\begin{equation}
\begin{split}
&V'(s)s^{-7} \geq D \\
& sV'(s) \geq BV(s) \\
&(V'(s))^2/V''(s)\underset{s\to\infty}{\longrightarrow} \infty ,
\end{split}
\end{equation}
with some constants $B,D >0$.
Then the probability measure
\[d\mu_U\equiv \frac1Z e^{- V(\beta N+\xi(x)) } d\lambda\]
satisfies Logarithmic-Sobolev Inequality
\[
\mu_U \left(f^2\log\frac{f^2}{\mu_U f^2}\right)\leq c\; \mu_U |\nabla f|^2
\]
with a constant $c\in(0,\infty)$ independent of a function $f$.
\end{theorem}
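Before the proof let us note, as a simple illustration, that the polynomial choice $V(s)=s^p$ with $p\geq 8$ satisfies all three hypotheses: for $s\geq 1$,
\[
V'(s)\,s^{-7} = p\, s^{\,p-8} \geq p, \qquad
s\,V'(s) = p\,V(s), \qquad
\frac{(V'(s))^2}{V''(s)} = \frac{p}{p-1}\, s^{\,p} \underset{s\to\infty}{\longrightarrow} \infty .
\]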
\textit{Proof} :
For the Heisenberg group, with orthogonal matrices $\Lambda^{(k)}$ and the constant $a=16$, we have $|\nabla N| = \frac{|x|}{N}$ and
$\frac{x}{|x|}\cdot\nabla N=\frac{|x|^3}{N^3}$.
Since then $N$ is the Kaplan norm we get
$\Delta N= \frac{Q-1}{N}|\nabla N|^2$.
Hence
\[ \begin{split}
\left|-\frac{\sigma}{|x|^{1+\sigma}}\cdot\frac{x}{|x|}+ \beta\nabla N\right|^{2}= \frac{\sigma^2}{|x|^{2(1+\sigma)}} +
\beta^2 \frac{|x|^2}{N^2}
- \frac{2\sigma\beta}{|x|^{1+\sigma}}\frac{|x|^3}{N^3}\\
= \left(\frac{\sigma }{|x|^{(1+\sigma)}}
- \beta \frac{|x|^3}{N^3}\right)^2
+
\beta^2 \frac{|x|^2}{N^2}\left(1
- \frac{|x|^4}{N^4} \right) \\
= \left(\frac{\sigma }{|x|^{(1+\sigma)}}
- \beta \frac{|x|^3}{N^3}\right)^2
+
\beta^2 a \frac{|x|^2 z^2}{N^6}
\end{split}
\]
The first term vanishes only if
\[
N^4
=\left( \frac{ \beta}{\sigma}\right)^{4/3} |x|^{\frac{4}{3}(4+\sigma) }
\]
i.e. we have
\[
az^2
=\left( \frac{\beta}{\sigma}\right)^{4/3} |x|^{\frac{4}{3}(4+\sigma)} - |x|^4
\]
but the solution $|x|=0, z=0$ has to be discarded.
The only other solution is given by
\[
|x|^{\frac{4}{3}(4+\sigma) -4}= \left( \frac{\sigma}{\beta}\right)^{4/3}, z=0.
\]
Thus $|\beta \nabla N + \nabla \xi|^2$ does not vanish outside a compact ball.
For $|x| \leq r_0 \equiv \left(\frac{\sigma }{1+\beta }\right)^{\frac1{1+\sigma}}$,
as well as for $|x|\geq r_0 $ with $|z|^2\geq \frac{1}{a\beta^2r_0^2}$, we can now see that
\[ |\beta \nabla N + \nabla \xi|^2 \geq \frac{1}{N^6}.\]
Thus, assuming there exist $R\in(0,\infty)$ and $B,D\in(0,\infty)$ such that for all $s\geq R$ we have
\begin{equation}
\begin{split}
&V'(s)s^{-7} \geq D \\
&sV'(s) \geq BV(s) \\
&(V'(s))^2/V''(s)\underset{s\to\infty}{\longrightarrow} \infty ,
\end{split}
\end{equation}
then there exists a ball and a constant $C\in(0,\infty)$ such that outside this ball we have
\[
|\nabla U|^2, U\leq C(\mathcal{V}_2+1).
\]
Hence by the arguments of \cite{HZ} the corresponding measure satisfies the Log-Sobolev inequality.
\qed
\bigskip
\subsection{Type 2 Carnot Groups}\label{2.Case.2}
In this section we prove the following result.
\begin{theorem} $\;$\\
Let \[ N\equiv \left( |x|^4+ a|z|^2\right)^{\frac{1}{4}} .\]
with $a\in(0,\infty)$ be a smooth homogeneous norm on a type 2 Carnot Group.
Let $
\xi\equiv \xi(|x|)\equiv\frac{1}{|x|^\sigma}$,
with $ \sigma\in(0,n-2)$.
Suppose there exist $R\in(0,\infty)$ and constants $B,D\in(0,\infty)$ such that for all $s\geq R$ we have
\begin{equation}
\begin{split}
&V'(s)s^{-7} \geq D \\
&sV'(s) \geq BV(s) \\
&(V'(s))^2/V''(s)\underset{s\to\infty}{\longrightarrow} \infty ,
\end{split}
\end{equation}
Then, if $a>0$ is sufficiently large, the probability measure $\mu_U$ satisfies the Logarithmic-Sobolev Inequality
\[
\mu_U \left(f^2\log\frac{f^2}{\mu_U f^2}\right)\leq c\; \mu_U |\nabla f|^2
\]
with a constant $c\in(0,\infty)$ independent of a function $f$.
For any $a>0$ the measure $\mu_U$ satisfies the Poincar{\'e} inequality provided that
\[
\frac{1}{N^{2(1+\sigma)}} \left(\frac14(V')^2 - \frac12 V''\right)\underset{N\to\infty}{\longrightarrow}\infty .
\]
\end{theorem}
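For orientation, the Poincar{\'e} condition above holds, e.g., for $V(s)=s^p$ with $p>\sigma+2$: writing the argument of $V$ simply as $N$ for large $N$,
\[
\frac{1}{N^{2(1+\sigma)}}\left(\frac14 (V'(N))^2 - \frac12 V''(N)\right)
= \frac{p^2}{4}\, N^{\,2p-4-2\sigma} - \frac{p(p-1)}{2}\, N^{\,p-4-2\sigma}
\underset{N\to\infty}{\longrightarrow} \infty .
\]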
The following lemma concerning certain analytic properties of the homogeneous norm $N$ was proven in \cite{BDZ1}.
\begin{lemma} \label{Lem.2.1N}
There exist constants $A,C\in(0,\infty)$ such that
\begin{equation}\label{S.2 grad N}
{\color{blue}
A\frac{|x|^2}{N^2}\leq\left|\nabla N\right|^{2}\leq C\frac{|x|^2}{N^2} .
}
\end{equation}
and
there exists a constant $B\in(0,\infty)$ such that
\begin{equation}\label{S.2laplacian N}
{\color{blue}
|\Delta N |\leq B \frac{|x|^2}{N^3}.
}
\end{equation}
and
\begin{equation}\label{S.2n.grad N}
{\color{blue}
\frac{x}{|x|}\cdot \nabla N = \frac{|x|^3}{N^3}
}
\end{equation}
and
\[
%
{\color{blue} |X_{i}X_{j}N |
\leq \frac{C}{N} (1+|\nabla N|^2 ) }
\]
\end{lemma}
Note that in general, using considerations of homogeneity alone (since $N$ is homogeneous of degree $1$ and hence
$|\nabla N|$ is homogeneous of degree $0$), one can only get a crude bound from above of the following form
\begin{equation} \label{eq:2.2}
|\nabla N|\leq C.
\end{equation}
We note that, by choosing $a$ in the definition of the norm $N$ sufficiently large, the lower bound \eqref{S.2 grad N} of Lemma \ref{Lem.2.1N} can be arranged with $A\geq 1$.
Hence
\[ \begin{split}
|\beta \nabla N + \nabla \xi|^2 &= \\
\left|-\frac{\sigma}{|x|^{1+\sigma}}\cdot\frac{x}{|x|}+ \beta\nabla N\right|^{2}&= \frac{\sigma^2}{|x|^{2(1+\sigma)}} +
\beta^2 |\nabla N |^{2}
- \frac{2\sigma\beta}{|x|^{1+\sigma}}\frac{|x|^3}{N^3}\\
&\geq \frac{\sigma^2}{|x|^{2(1+\sigma)}} +
A\beta^2 \frac{|x|^2}{N^2}
- \frac{2\sigma\beta}{|x|^{1+\sigma}}\frac{|x|^3}{N^3}\\
&= \left(\frac{\sigma }{|x|^{(1+\sigma)}}
- \beta \frac{|x|^3}{N^3}\right)^2
+
\beta^2 \frac{|x|^2}{N^2}\left(A
- \frac{|x|^4}{N^4} \right)
\end{split}
\]
If $A\geq 1$, arguments similar to those in the Heisenberg group case apply and the corresponding measure $\mu_U$ satisfies the Log-Sobolev inequality.\\
On the other hand, if $0<A < 1$, then for $|x|\geq R$, with $R$ such that
$A\beta > \frac{2\sigma }{R^{1+\sigma}}$,
we have
\begin{equation} \label{4D.c2.0}
|\beta \nabla N + \nabla \xi|^2 \geq
\frac{\sigma^2}{|x|^{2(1+\sigma)}} +
A\beta^2 \frac{|x|^2}{N^2}
-\frac{2\sigma\beta}{|x|^{1+\sigma}}\frac{|x|^3}{N^3}\geq \frac{\sigma^2}{N^{2(1+\sigma)}}
\end{equation}
At the same time, using
\eqref{eq:4.D.xi.2} and Lemma \ref{Lem.2.1N}, for $|x|\leq R$, we have
\begin{equation} \label{4D.c2.1}
\begin{split}
\sigma\frac{n-2-\sigma}{|x|^{2+\sigma}} - \beta B \frac{|x|^2}{N^3} \geq
\sigma\frac{n-2-\sigma}{R^{2+\sigma}} - \beta B \frac{R^2}{N^3} > \frac{\sigma}2
\frac{n-2-\sigma}{N^{2+\sigma}}
\end{split}
\end{equation}
provided that $N\geq \max(L,R)$, where $L\in(0,\infty)$ satisfies
\begin{equation} \label{4D.c2.2}
N^3\geq L^3 \geq 2\beta B \frac{R^{4+\sigma}}{ \sigma(n-2-\sigma)} .
\end{equation}
Hence we get that
\begin{equation} \label{4D.U.c2.3}
\begin{split}
\mathcal{V}_2
\geq
\left|\beta\nabla N -\frac{\sigma}{|x|^{1+\sigma}}\cdot\frac{x}{|x|}\right|^2 \left(\frac14(V')^2 - \frac12 V''\right)\\
+\frac12
\left(\sigma\frac{n-2-\sigma}{|x|^{2+\sigma}} - \beta B \frac{|x|^2}{N^3} \right)V'
\end{split}
\end{equation}
goes to infinity in all directions as $N\to \infty$, provided that
\[
\frac{1}{N^{2(1+\sigma)}} \left(\frac14(V')^2 - \frac12 V''\right)\underset{N\to\infty}{\longrightarrow}\infty
\]
By the arguments of \cite{HZ} this implies the Poincar\'e inequality. \\
\bigskip
\qed
\newpage
\subsection{Case 3: Carnot Groups}\label{2.Case.3}
Recall that if $N$ is the Kaplan Norm, we have
\begin{equation}\label{KN}
\Delta N=\frac{Q-1}{N} |\nabla N|^2.
\end{equation}
Then
\begin{equation} \label{4D.U.5}
\begin{split}
\mathcal{V}_2
=
|\beta \nabla N + \nabla \xi|^2 \left(\frac14(V')^2 -\frac12 V''\right)\\
-\frac12
\left(\frac{(Q-1)\beta |\nabla N|^{2}}{N} +\Delta \xi\right)V'
\end{split}
\end{equation}
Let
\begin{equation} \label{eq:4.D.c3xi.1}
\xi\equiv \xi(|x|)\equiv\log\left(\frac{1}{|x|}\right),
\end{equation}
Then
\begin{equation} \label{eq:4.D.c3xi.2}
\nabla\xi(x) = -\frac{1}{|x|}\cdot\frac{x}{|x|},\qquad
%
\Delta\xi(x) = - \frac{n-2}{|x|^2}
\end{equation}
\label{EndXi4.D}
Suppose $N$ is the Kaplan norm and consider
\[ U\equiv V(\beta N+\xi).\]
Using \eqref{KN} for Kaplan norm, we have
\begin{equation} \label{4D.c.3.1}
\begin{split}
\mathcal{V}_2
=
\left|\beta\nabla N -\frac{1}{|x|}\cdot\frac{x}{|x|}\right|^2 \left(\frac14(V')^2 - \frac12 V''\right)\\
+\frac12
\left( \frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \right)V'
\end{split}
\end{equation}
First we observe that, since $\nabla \xi= -\frac{1}{|x|}\cdot\frac{x}{|x|}$, we have
\[
|\beta \nabla N + \nabla \xi|^2 \geq \left(\beta |\nabla N| -\frac{1}{|x|}
\right)^2
\]
We note that, if for some $\varepsilon\in(0,\sqrt{2}\,]$
\[
|\nabla N| \leq \frac{\varepsilon}{|x|} ,
\]
then we have
\[ \frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \geq \frac{1}{|x|^2} \left( (n-2) -2\beta\frac{Q-1}{N}\right)
\]
and one can prevent the right hand side from being negative for sufficiently large $N$. In particular, for $|\nabla N| \leq \frac{\varepsilon}{|x|} $ and $N> 4\beta \frac{Q-1}{n-2} $, we have
\[ \frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \geq \frac{1}{|x|^2} \left( \frac{n-2}{2}\right) \geq \frac{1}{N^2} \left( \frac{n-2}{2}\right),
\]
and in this case, by our assumption about $V(s)$, we get
\[
\frac12
\left( \frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \right)V'(\beta N+\xi) \geq
\frac{n-2}{4N^2}\, V'(\beta N+\xi) \underset{N\to\infty}{\longrightarrow} \infty.
\]
Since by homogeneity $\sup|\nabla N|\equiv C<\infty$, we have
\begin{equation} \label{4D.c.3.2}
|\beta \nabla N + \nabla \xi|^2 \geq
\frac{1}{|x|^2}
-\frac{2\beta}{|x|}C
\end{equation}
Hence for $|x|\leq R$, with $R$ such that
$R < \frac{1}{1+2\beta C }$,
we have
\begin{equation} \label{4D.c.3.3}
|\beta \nabla N + \nabla \xi|^2 \geq
\frac{1}{|x|} \left( \frac{1}{|x|}
- 2 \beta C
\right) \geq \frac{1}{N}
\end{equation}
On the other hand for $|x|\geq R$, we have
\[
\frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \geq
\frac{n-2}{|x|^2} - \beta \frac{(Q-1) C^{2}}{N} \geq \frac{n-2}{R^2} - \beta \frac{(Q-1) C^{2}}{N}
\]
Thus choosing
\[ N \geq 2\beta \frac{(Q-1) C^{2}}{(n-2)}R^2, \]
we have in this region
\[
\frac{n-2}{|x|^2} - \beta \frac{(Q-1) |\nabla N|^{2}}{N} \geq \frac{n-2}{2 R^2} .
\]
Summarising,
choosing $V$ so that
\[ \frac{1}{s^2}\, V'(s) \underset{s\to\infty}{\longrightarrow} \infty\]
we obtain
\[
\mathcal{V}_2 \underset{N\to\infty}{\longrightarrow} \infty.
\]
Hence by the arguments of \cite{HZ} our measure $\mu_{U}$, with $U=V(\beta N+\xi)$, satisfies the Poincar{\'e} inequality. \qed
\label{EndSec.2}
\newpage
\section{Multiplicative Taming}
\label{Multiplicative Taming}
\subsection{Setup}
Define
\[
U\equiv V( \xi(|x|)N),
\]
with a smooth homogeneous norm $N$, a real function $V$, twice differentiable outside zero, and a function $\xi\equiv \xi(|x|)\geq 1$, twice differentiable outside zero and
decreasing on $\mathbb{R}^+$.
Note that in our case the following holds
\begin{equation} \label{eq:3.1.1}
\begin{split}\nabla\xi(|x|) & = \frac{x}{|x|}\xi'\\
\\
\Delta\xi(|x|) & =\nabla\cdot\left(\frac{x}{|x|}\xi'\right)=
\frac{n-1}{|x|}\xi' + \xi''
\end{split}
\end{equation}
We will assume that
\[ 0< Z\equiv \int e^{-U}d\lambda <\infty \]
and consider the following probability measure on $\mathbb{G}$
\[d\mu \equiv \frac1{Z}e^{-U}d\lambda. \]
In the current case we have
\begin{equation} \label{eq:3.1.2}
\begin{split}
\mathcal{V}_2
&=
| \nabla \log N + \nabla \log\xi|^2 \; (\xi N)^2 \left(\frac14(V')^2 -\frac12 V''\right) \\
&\\
&-\frac12
\left( \frac{\Delta N}{N} + 2 \xi' \frac{x}{|x|} \cdot\frac {\nabla N}{\xi N} + \frac{n-1}{|x|}\frac{\xi'}{\xi} +\frac{\xi''}{\xi} \right) \xi N\;V'
\end{split}
\end{equation}
\subsection{Type 2 Case} \label{3.2:Type 2 Case}
In this section we prove the following result.
\begin{theorem}
Let $G$ be a Carnot group of type $2$ with the horizontal dimension $n$. For $a\in(0,\infty)$, let \[N=(|x|^4+a|z|^2)^\frac14\]
be a smooth homogeneous norm on $G$ for which we have
\[ A\frac{|x|^2}{N^2} \leq |\nabla N|^2.\]
Suppose $\xi(s)=s^{-\sigma}$ with $\sigma\in(0,\infty) $. \\
The following measure
\[
d\mu\equiv \frac1Z e^{-V(\xi N)} d\lambda
\]
satisfies Logarithmic Sobolev inequality provided that one of the following conditions holds:\\
\vspace{0.25cm}
(i) $A\geq 1$ and $\sigma\neq 1$, or $A > 1$ and $\sigma=1$, and
\[ V'(s) /s^2 \underset{s\to\infty}{\longrightarrow} \infty ;\]
\vspace{0.25cm}
(ii) $0<A \leq 1$ and $2\leq \sigma\leq n-2$, and
\[ V'(s) /s^2 \underset{s\to\infty}{\longrightarrow} \infty. \]
\end{theorem}
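For instance, the polynomial choice $V(s)=s^p$ with $p>3$ meets the growth condition appearing in both cases, since
\[
\frac{V'(s)}{s^2} = p\, s^{\,p-3} \underset{s\to\infty}{\longrightarrow} \infty .
\]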
\textit{Proof}:
In case of type 2 Carnot groups with the smooth norm $N=(|x|^4+a|z|^2)^\frac14$, using Lemma \ref{Lem.2.1N},
we have
\begin{equation} \label{eq:3.2.1}
\begin{split}
%
| \nabla \log N + \nabla \log \xi|^2 = \left(\frac{\xi'}{\xi}\right)^2 + 2\frac{\xi'}{\xi} \frac{x\cdot \nabla N}{|x| N}
+ \left(\frac{\nabla N}{ N}\right)^2 \\
\geq \left(\frac{\xi'}{\xi}\right)^2 +2\frac{\xi'}{\xi} \frac{|x|^3}{N^4} +A \frac{|x|^2}{N^4}
\end{split}
\end{equation}
with the constant $A$ given in Lemma \ref{Lem.2.1N}.
In particular for $\xi(s)=s^{-\sigma}$, we get
\begin{equation} \label{eq:3.2.2}
\begin{split}
%
|\nabla \log N + \nabla \log \xi |^2
= \frac{\sigma^2}{|x|^2} - 2 \frac{\sigma}{|x|} \frac{|x|^3}{N^4}
+\frac{|\nabla N|^2}{N^2}
%
\geq \frac{1 }{|x|^2 }\left( \sigma - \frac{|x|^4}{N^4}\right)^2 +\frac{|x|^2}{N^4}(A - \frac{|x|^4}{N^4})
\end{split}
\end{equation}
If {\color{blue}$A>1$}, which can be achieved by taking $a\in(0,\infty)$ sufficiently large,
we have
\[
|\nabla \log N + \nabla \log \xi |^2
\geq \frac{1 }{|x|^2} \left( \sigma - \frac{|x|^4}{N^4} \right)^2 +\frac{|x|^2}{N^4}(A - 1)
\]
Then for any $R\in[0,\infty)$ and all $|x|\leq R$, if $N$ is such that $\frac{|x|^4}{N^4}\leq \frac1{2}\sigma$, we have
\[
|\nabla \log N + \nabla \log \xi |^2
\geq \frac{\sigma^2}{4 R^2} .
\]
On the other hand for all $|x|\geq R$, we have
\begin{equation} \label{eq:3.2.3}
|\nabla \log N + \nabla \log\xi|^2\geq \frac{1}{N^2} \left( \sigma - 1 \right)^2 +\frac{R^2}{N^4}(A - 1)
\end{equation}
In this case we also have
\begin{equation} \label{eq:3.2.4}
\begin{split}
- \frac12\left( \frac{\Delta N}{N} + 2 \xi' \frac{x}{|x|} \cdot\frac{\nabla N}{\xi N} + \frac{n-1}{|x|}\frac{\xi'}{\xi} +\frac{\xi''}{\xi} \right) &=
- \frac12\left( \frac{\Delta N}{N} - \frac{2\sigma}{|x|} \cdot \frac{|x|^3}{N^4} -\sigma\frac{(n-2-\sigma)}{|x|^2} \right)\\
&\geq \frac{\sigma}2\frac{(n-2-\sigma)}{|x|^2} + \left( \sigma -\frac{B}2\right) \frac{|x|^2}{N^4}
\end{split}
\end{equation}
which diverges to $\infty$ as $|x|\to 0$, provided $\sigma<n-2$, and otherwise remains bounded.\\
Then choosing
\begin{equation} \label{eq:3.2.5}{\color{blue}
\left( \frac{(\sigma-1)^2}{N^2} +\frac{R^2(A -1)}{N^4}\right) V'(\xi N)\underset{N\to\infty}{\longrightarrow}\infty
}
\end{equation}
we secure the conditions for {\color{blue}Logarithmic Sobolev} inequality.
\vspace{0.25cm}
\label{2nd Case: A less than 1}
\noindent If {\color{blue}$0<A \leq 1$} and
{\color{blue} $2\leq \sigma\leq n-2$}, we have
\begin{equation} \label{eq:3.2.6}
\begin{split}
%
|\nabla \log N + \nabla \log \xi |^2
& = \frac{\sigma^2}{|x|^2} - 2 \frac{\sigma}{|x|} \frac{|x|^3}{N^4}
+\frac{|\nabla N|^2}{N^2}
\geq \frac{1 }{|x|^2 }\left( \sigma - \frac{|x|^4}{N^4}\right)^2 +\frac{|x|^2}{N^4}\left(A - \frac{|x|^4}{N^4}\right)\\
&\geq
\frac{1 }{|x|^2 }\left( \sigma - 1\right)^2 -\frac{1}{N^2}(1-A ) \geq \frac{A}{N^2}
\end{split}
\end{equation}
In this case we also have
\begin{equation} \label{eq:3.2.7}
\begin{split}
- \frac12\left( \frac{\Delta N }{N} + 2 \xi' \frac{x}{|x|} \cdot\frac{\nabla N}{\xi N} + \frac{n-1}{|x|}\frac{\xi'}{\xi} +\frac{\xi''}{\xi} \right) &=
- \frac12\left( \frac{\Delta N }{N} - 2\sigma \cdot \frac{|x|^2}{N^4} -\sigma\frac{(n-2-\sigma)}{|x|^2} \right)\\
&\geq \frac{\sigma}2\frac{(n-2-\sigma)}{|x|^2} + \left(\sigma - \frac{B}2\right) \frac{|x|^2}{N^4}
\end{split}
\end{equation}
which diverges to $\infty$ if $|x|\to 0$ and otherwise remains bounded.\\
Then choosing
\begin{equation} \label{eq:3.2.8}
{\color{blue}
\frac{A^2}{N^2} V'(\xi N)
\underset{N\to\infty}{\longrightarrow}\infty
}
\end{equation}
we secure the conditions for {\color{blue}Logarithmic Sobolev} inequality.
\qed
\newpage
\subsection{Case: Kaplan Norm on a Carnot Group} \label{Type 2 Case: Kaplan Norm}
In this section we prove the following general result.
\begin{theorem}
Let $N$ be the Kaplan norm of a Carnot group $G$ of homogeneous dimension $Q$ and horizontal dimension $n$. Let $C\equiv \sup |\nabla N|$.
Suppose $\xi(s)=s^{-\sigma}$ with $\sigma\leq 1$.
Suppose $C < \sigma$ and $ n-3\geq 2C$, and that
%
\[ (V'(s))^2/V''_+(s) \underset{s\to\infty}{\longrightarrow} \infty\]
and for some $\varepsilon\in(0,\infty)$
\[
\varepsilon (V'(s))^2\geq \max (V(s),sV'(s)).
\]
Then the probability measure
\[
d\mu\equiv \frac1Z e^{-V(\xi N)} d\lambda
\]
satisfies Logarithmic Sobolev inequality.
\end{theorem}
Remark: In particular $V(s)=s^p$ with $p\geq 2$
satisfies the conditions of the above theorem.
\vspace{0.25cm}
\textit{Proof}:
First we note that in the special case of $N$ being the Kaplan norm we have
\begin{equation} \label{eq:3.3.1}
\begin{split}
\mathcal{V}_2
&=
\left| \nabla \log N + \frac{\xi'}{\xi} \frac{x}{|x|}\right|^2 (\xi N )^2 \left(\frac14(V')^2 -\frac12 V''\right)\\
&\qquad -\frac12
\left( \frac{(Q-1) |\nabla N|^{2}}{N^2} + 2 \xi' \frac{x}{|x|} \cdot\frac{\nabla N }{\xi N}+ \frac{n-1}{|x|}\frac{\xi'}{\xi} +\frac{\xi''}{\xi} \right) \xi N\;V'
\end{split}
\end{equation}
For $\xi(s)=s^{-\sigma}$, with $C\equiv \sup|\nabla N|$, we have
\begin{equation} \label{eq:3.3.2}
\begin{split}
| \nabla \log N + \nabla \log \xi|^2 = \left(\frac{\xi'}{\xi}\right)^2 + 2\frac{\xi'}{\xi} \frac{x\cdot \nabla N}{|x| N}
+ \left(\frac{\nabla N}{ N}\right)^2 \\
= \left(\frac{\sigma}{|x|}\right)^2 - 2\frac{\sigma}{|x|} \frac{x\cdot \nabla N}{|x| N}
+ \left(\frac{\nabla N}{ N}\right)^2\\
\geq \left(\frac{\sigma}{|x|}\right)^2 - 2\frac{C \sigma}{|x|N}
+ \left(\frac{\nabla N}{ N}\right)^2
\end{split}
\end{equation}
Hence for any $R\in(0,\infty)$ and all $|x|\leq R$ and $N\geq 4R C/\sigma$ we have
\begin{equation} \label{eq:3.3.3}
| \nabla \log N + \nabla \log \xi|^2
\geq \frac{\sigma^2}{2R^2}
\end{equation}
Without further information about $|\nabla N|$ it is difficult to get a useful lower bound.
However we notice that in general, if we have
$\sigma > C \equiv \sup |\nabla N|$, using $N \geq |x|$, we get
\begin{equation} \label{eq:3.3.4}
\begin{split}
%
|\nabla \log N + \nabla \log \xi |^2
\geq
\left(\frac{\sigma}{|x|}\right)^2 - 2\frac{\sigma}{|x|} \frac{x\cdot \nabla N}{|x| N}
+ \left(\frac{\nabla N}{ N}\right)^2 \geq \left(\frac{\sigma}{|x|} - \frac{ |\nabla N|}{N} \right)^2 \\
\geq
\left(\frac{\sigma}{|x|} - \frac{C}{N} \right)^2 \geq \frac{(\sigma - C)^2}{N^2}
\end{split}
\end{equation}
For the same $\xi(s)=s^{-\sigma}$,
we have
\begin{equation} \label{eq:3.3.5}
\begin{split}
-\left( \frac{(Q-1) |\nabla N|^{2}}{N^2} + 2 \xi' \frac{x}{|x|} \cdot\frac{\nabla N}{\xi N} + \frac{n-1}{|x|}\frac{\xi'}{\xi} +\frac{\xi''}{\xi} \right)\qquad\qquad\qquad \\
\qquad\qquad=
\sigma\frac{n-2-\sigma }{|x|^2} +2\sigma \frac{1}{|x|} \frac{x}{|x|}\cdot \frac{\nabla N }{N}
- \frac{(Q-1) |\nabla N|^{2}}{N^2} \\
\geq \sigma\frac{n-2-\sigma }{|x|^2} - 2\sigma \frac{C}{|x| N}
- \frac{(Q-1) C^{2}}{N^2} .
\end{split}
\end{equation}
Thus the corresponding term in $\mathcal{V}_2$ remains under control for small $|x|$.
In this way we obtain a coercive bound with the potential $\mathcal{V}_2$ and hence, using the arguments of \cite{HZ}, we arrive at the Logarithmic Sobolev inequality.
\qed
\label{EndSec.3}
\newpage
\label{Sec.4A}
\section{Multiplicative Taming II}
\label{Multiplicative Taming 2}
Let $\xi(s)=\log\left(e+ \frac{1}{s}\right)$, and for $0<L<1$ define
\[
\tilde{\xi}(|x|)=\begin{cases}
\xi(|x|) & \textrm{if}\;\;|x|<1\\
\frac{(|x|-L)^{2}}{(1-L)^{2}}\,\xi(|x|) & \textrm{if}\;\;|x|\geq1
\end{cases}.
\]
Let
\[
U_{\xi,V}\equiv (1+\tilde{\xi}(|x|))V(N)
\]
and consider the following measure
\[
{ d\mu_{\xi,V}=\frac{e^{- U_{\xi,V} }}{Z}d\lambda }
\]
defined with a homogeneous norm $N$.
\begin{theorem}
Suppose $N$ is a Kaplan norm satisfying
\[ x\cdot\nabla N\geq0,\]
and such that
we have
\[
V'(N)/V(N),\; V''(N)/V(N)\to 0 \qquad \textrm{as}\quad N\to \infty .
\]
Then the measure $\mu_{\xi,V}$
satisfies the Logarithmic Sobolev inequality.
\end{theorem}
Remark: As we show in Appendix 1, the condition $x\cdot\nabla N\geq0$ holds, for example, in the class of generalised Heisenberg groups.
\begin{proof}
\noindent To show that the measure $ \mu_{U_{\xi,V}}$
satisfies the Logarithmic-Sobolev inequality, it is sufficient to show that under our conditions, for some $\alpha \in(0,1)$, there exists a constant $C\in(0,\infty)$ such that for all $N$ sufficiently large, we have
\[
{U_{\xi,V}}\leq C\left((1-\alpha)|\nabla {U_{\xi,V}}|^{2}-\Delta {U_{\xi,V}} +1\right).
\]
\\
First we note two basic facts about $\xi(s)=\log\left(e+ \frac{1}{s}\right)$.\\
Namely, we have
\[
\xi'(s) =
-\frac{1}{(1+es)s} ,\qquad
\xi''(s) =\frac{1+2es }{(1+es)^{2}s^{2}}.
\]
For the case $|x|\geq1 $, we have
\[
\nabla\tilde{\xi}(|x|)={ \frac{2(|x|-L)}{(1-L)^{2}}\xi\frac{x}{|x|}-\frac{(|x|-L)^{2}}{(1-L)^{2}}\frac{x}{(1+e|x|)|x|^{2}},}
\]
and so
\[
\left|\nabla\tilde{\xi}(|x|)\right|^{2}={ \frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2}+\frac{(|x|-L)^{4}}{(1-L)^{4}}\frac{1}{(1+e|x|)^{2}|x|^{2}}-4\frac{(|x|-L)^{3}}{(1-L)^{4}}\frac{\xi}{(1+e|x|)|x|}.}
\]
and
\[\begin{split}
\Delta \tilde{\xi}(|x|)=\frac{2\xi}{(1-L)^{2}}-\frac{4(|x|-L)}{(1-L)^{2}}\frac{1}{(1+e|x|)|x|}+\frac{2(|x|-L)}{(1-L)^{2}}\xi\frac{(n-1)}{|x|} \\
-\frac{(|x|-L)^{2}}{(1-L)^{2}}\left(\frac{(n-2)+(n-3)e|x|}{(1+e|x|)^{2}|x|^{2}}\right).
\end{split}
\]
First, consider the case $|x| \geq 1$.
For Kaplan norm $N$, using the relation ${ \Delta N=\frac{(Q-1)|\nabla N|^{2}}{N}}$, we have
\begin{equation}
\begin{array}{ll}
& {(1-\alpha)|\nabla {U_{\xi,V}}|^{2}-\Delta {U_{\xi,V}}} = \\
& = { (1-\alpha)\left(|\nabla\tilde{\xi}|^{2}V(N)^{2}+(1+\tilde{\xi})^{2}V'(N)^{2}|\nabla N|^{2}+2(1+\tilde{\xi})V(N)V'(N)\nabla\tilde{\xi}\cdot\nabla N\right)}
\\
& { -\Delta\tilde{\xi}V(N)-2V'(N)\nabla\tilde{\xi}\cdot\nabla N-(1+\tilde{\xi})V''(N)|\nabla N|^{2}-(1+\tilde{\xi})V'(N)\frac{(Q-1)|\nabla N|^{2}}{N}}
\end{array}
\end{equation}
and hence
\begin{equation}
\begin{array}{ll}
& {(1-\alpha)|\nabla {U_{\xi,V}}|^{2}-\Delta {U_{\xi,V}}} =
\\
& ={ (1-\alpha)V(N)^{2}\left({\color{blue}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2}}+\frac{(|x|-L)^{4}}{(1-L)^{4}}\frac{1}{(1+e|x|)^{2}|x|^{2}}{\color{brown}{\color{red}-4\frac{(|x|-L)^{3}}{(1-L)^{4}}\frac{\xi}{(1+e|x|)|x|}}}\right)}\\
\\
& +V(N)\left({ {\color{red}-\frac{2\xi}{(1-L)^{2}}}+\frac{4(|x|-L)}{(1-L)^{2}}\frac{1}{(1+e|x|)|x|}{\color{red}-\frac{2(|x|-L)}{(1-L)^{2}}\xi\frac{(n-1)}{|x|}}+\frac{(|x|-L)^{2}}{(1-L)^{2}}\left(\frac{(n-2)+(n-3)e|x|}{(1+e|x|)^{2}|x|^{2}}\right)}\right)\\
\\
& +2V'(N)\left((1-\alpha)(1+\tilde{\xi})V(N)-1\right)\left({ {\color{green}\frac{2(|x|-L)}{(1-L)^{2}}\xi}{\color{red}-\frac{(|x|-L)^{2}}{(1-L)^{2}}\frac{1}{(1+e|x|)|x|}}}\right){ \frac{x}{|x|}\cdot\nabla N}\\
\\
& +(1+\tilde{\xi})|\nabla N|^{2}\left((1-\alpha)(1+\tilde{\xi})V'(N)^{2}-V''(N)-{ V'(N)\frac{(Q-1)}{N}}\right).
\end{array}\label{eq:1}
\end{equation}
If ${ \frac{x}{|x|}\cdot\nabla N\geq0}$,
then the red terms in (\ref{eq:1}) are negative for $|x|\geq1$ and the next goal
is to bound them by the blue term. First we show that
\[
4\frac{(|x|-L)^{3}}{(1-L)^{4}}\frac{\xi}{(1+e|x|)|x|} {\leq} \frac{1}{1+e}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2},
\]
i.e.
\[
\frac{(|x|-L)}{(1+e|x|)|x|} {\leq}\frac{1}{1+e} \log\left(e+{ \frac{1}{|x|}}\right),
\]
which is true since $0<L $ and $|x|\geq1.$ Secondly, we have
\[
\frac{2\xi V(N)}{(1-L)^{2}} {\leq}\frac14(1-\alpha)V(N)^{2}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2}
\]
i.e.
\[
1 {\leq} \frac12(1-\alpha)V(N)\frac{(|x|-L)^{2}}{(1-L)^{2}}\log\left(e+{ \frac{1}{|x|}}\right),
\]
which is true for $N$ sufficiently large as $V(N)$ is unbounded increasing. Next, we prove that for $N$ sufficiently large
\[
\frac{2(|x|-L)}{(1-L)^{2}}\xi\frac{(n-1)}{|x|}V(N) {\leq}\frac14 (1-\alpha)V(N)^{2}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2},
\]
i.e.
\[
\frac{(n-1)}{2(1-\alpha)} {\leq}V(N)\frac{|x|(|x|-L)}{(1-L)^{2}}\log\left(e+{ \frac{1}{|x|}}\right),
\]
which is true. Finally, using the fact that $|\nabla N|\leq C,$
we look at
\[
\frac{C(|x|-L)^{2}}{(1-L)^{2}}\frac{V'(N)\left((1-\alpha)(1+\tilde{\xi})V(N)\right)}{(1+e|x|)|x|} {\leq}(1-\alpha)V(N)^{2}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2},
\]
i.e.
\[
C\frac{(|x|-L)^{2}}{ ( 1+e |x|)|x|} {\leq}\frac{V(N)}{V'(N)}\log\left(e+{ \frac{1}{|x|}}\right).
\]
Since $(|x|-L)^{2}\leq|x|(|x|-L)\leq|x|(1+e|x|)$, the left hand side stays bounded, while $\frac{V(N)}{V'(N)}\to \infty$ as $N\to \infty$; hence the above inequality holds for $N$ sufficiently large. \\
\textbf{Remark}: \textit{In the case
where ${ \frac{x}{|x|}\cdot\nabla N<0}$, we would
require the green term of (\ref{eq:1}) to be bounded by the blue
term:
\[
\frac{4(|x|-L)}{(1-L)^{2}}\xi V'(N)\left((1-\alpha)(1+\tilde{\xi})V(N)\right) {\leq}(1-\alpha)V(N)^{2}\frac{4(|x|-L)^{2}}{(1-L)^{4}}\xi^{2},
\]
i.e.
\[
V'(N)(|x|-L) {\leq}V(N);
\]
that is, we would need the condition $NV'(N)\leq V(N)$, which generally does not hold for polynomial $V(N)$.}\\
\vspace{0.25cm}
\label{leq 1}To study the case $|x|<1,$ we note that
\[
\nabla\tilde{\xi}=-\frac{x}{(1+e|x|)|x|^{2}},
\qquad \textrm{so}\quad
|\nabla\tilde{\xi}|^{2}=\frac{1}{(1+e|x|)^{2}|x|^{2}},
\]
and
\[
\Delta\tilde{\xi}=-\frac{(n-2)+(n-3)e|x|}{(1+e|x|)^{2}|x|^{2}}.
\]
Thus
\[
\begin{array}{ll}
& { (1-\alpha)|\nabla {U_{\xi,V}}|^{2}-\Delta {U_{\xi,V}}} =\\
&{ (1-\alpha)\left(|\nabla\tilde{\xi}|^{2}V(N)^{2}+(1+\tilde{\xi})^{2}V'(N)^{2}|\nabla N|^{2}+2(1+\tilde{\xi})V(N)V'(N)\nabla\tilde{\xi}\cdot\nabla N\right)}\\
\\
& { -\Delta\tilde{\xi}V(N)-2V'(N)\nabla\tilde{\xi}\cdot\nabla N-(1+\tilde{\xi})V''(N)|\nabla N|^{2}-(1+\tilde{\xi})V'(N)\frac{(Q-1)|\nabla N|^{2}}{N}}\\
\\
& ={ (1-\alpha)\left(\frac{V(N)^{2}}{(1+e|x|)^{2}|x|^{2}}\right)}+{ 2\left(1-(1-\alpha)(1+\tilde{\xi})V(N)\right)V'(N)\frac{x}{(1+e|x|)|x|^{2}}\cdot\nabla N}\\
\\
& { +\frac{(n-2)+(n-3)e|x|}{(1+e|x|)^{2}|x|^{2}}V(N)+(1+\tilde{\xi})|\nabla N|^{2}\left((1-\alpha)(1+\tilde{\xi})V'(N)^{2}-V''(N)-V'(N)\frac{(Q-1)}{N}\right)}\\
\\
& \geq c {U_{\xi,V}}.
\end{array}
\]
with some positive constant $c$.
The last inequality is true by the conditions $V'(N)/V(N),\; V''(N)/V(N)\to 0 $ as
$N\to \infty$, together with the fact that
\[
2{ \left((1-\alpha)(1+\tilde{\xi})V(N)\right)V'(N)\frac{x}{(1+e|x|)|x|^{2}}\cdot\nabla N} {\leq}{ (1-\alpha)\left(\frac{V(N)^{2}}{(1+e|x|)^{2}|x|^{2}}\right)},
\]
i.e., since $|\nabla N|\leq C$, it suffices to show
\[
{ C\log\left(e+{ \frac{1}{|x|}}\right)(1+e|x|)|x|V'(N)} {\leq}{ V(N)}.
\]
Since $|x|<1$, for sufficiently large $N$ we have
\[
{ C\log\left(e+{ \frac{1}{|x|}}\right)(1+e|x|)|x|V'(N)}\leq C(1+e|x|)^{2}V'(N){ \leq C(1+e)^{2}V'(N)\leq \frac12 V(N)}
\]
which ends the proof.
\end{proof}
\label{EndSec.4}
\newpage
\section{Universality Hypothesis on Carnot Groups}$\;$\\
\label{Sec.5}
It is natural to expect that if a probability measure on a Carnot group has a density $e^{-U}$ with respect to the Haar measure which depends on a smooth homogeneous norm,
and it satisfies certain coercive inequalities, then a similar property holds for any other smooth homogeneous norm on the same group.
On the basis of our experience, \cite{HZ}, \cite{I}, \cite{BDZ1}, \cite{BDZ2}, \cite{ChFZ},
one could conjecture that for any Carnot group and any smooth homogeneous norm $N$ there exists $U=U(N)$ such that the corresponding measure $\mu_U$ satisfies the Poincar\'e inequality.
On the other hand, we know that the stronger, Log-Sobolev type inequalities cannot be satisfied in the smooth norm case, \cite{HZ}, Theorem 6.3, and \cite{BDZ2}, Theorem 10. One may conjecture that such inequalities are satisfied provided we restrict ourselves to probability measures $e^{-U}$ with $U=U(K)$, where $K$ is a non-smooth homogeneous norm, and that once this holds for one such norm it holds for any other non-smooth norm, possibly with some adjustment of the function $U$.
Below we provide an illustration in the indicated direction.\\
Recall the following definition.\\
\textbf{Definition}:\\
We call a homogeneous norm on (the homogeneous Carnot group) $G$ a continuous function $K :G \to [0,\infty)$ such that:\\
(i) $K(\delta_{\lambda} (x)) = \lambda K(x)$ for all $\lambda>0$ and $x\in G$; \\
(ii) $K(x) > 0$ iff $x \neq 0$.\\
$K$ is called symmetric iff \\
(iii) $K(x^{-1}) = K(x)$ for every $x\in G$.
\\
Let $G$ be a stratified group and $x = (x^{(1)},...,x^{(r)})\in G$.
In this context we have the following family of homogeneous norms
\[
|x|_G := \left(\sum_{j=1}^r |x^{(j)}|^{\frac{\alpha}{j}}\right)^{\frac1{\alpha}}
\]
with $\alpha\geq 1$. In particular we have the following homogeneous norm on $G$, which is smooth away from the origin
\[
|x|_G := \left(\sum_{j=1}^r |x^{(j)}|^{\frac{2r!}{j}}\right)^{\frac1{2r!}} .
\]
On the other hand, we have the following example of a non-smooth homogeneous norm
\[
\rho(x) := \sum_{j=1}^N |x_j|^{\frac1{\sigma_j}},
\]
where here $N$ denotes the topological dimension of $G$ and $\sigma_j$ is the dilation weight of the coordinate $x_j$.
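As a purely illustrative aside (ours, not part of the argument), the following minimal Python sketch evaluates the two kinds of norms on the first Heisenberg group, viewed as $\mathbb{R}^{2}\oplus\mathbb{R}$ with dilation weights $\sigma=(1,1,2)$, and numerically confirms their homogeneity under the dilations $\delta_\lambda$.
\begin{verbatim}
import math

def smooth_norm(strata, alpha):
    # |x|_G = ( sum_j |x^(j)|^(alpha/j) )^(1/alpha);
    # alpha = 2*r! gives the member that is smooth away from the origin
    s = sum(math.hypot(*xj) ** (alpha / j)
            for j, xj in enumerate(strata, start=1))
    return s ** (1.0 / alpha)

def rho(x, sigma):
    # non-smooth homogeneous norm: sum_j |x_j|^(1/sigma_j)
    return sum(abs(xj) ** (1.0 / sj) for xj, sj in zip(x, sigma))

strata = [(1.0, -2.0), (0.5,)]  # first stratum R^2, second stratum R (r = 2)
lam = 2.0                       # the dilation scales stratum j by lam^j
scaled = [tuple(lam ** j * c for c in xj)
          for j, xj in enumerate(strata, start=1)]
assert abs(smooth_norm(scaled, 4) - lam * smooth_norm(strata, 4)) < 1e-9
assert abs(rho((2.0, -4.0, 2.0), (1, 1, 2))
           - lam * rho((1.0, -2.0, 0.5), (1, 1, 2))) < 1e-9
\end{verbatim}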
We remark that given two homogeneous norms $N, \tilde N$
and a strictly positive (smooth) function $\zeta$, one can define a new homogeneous norm as follows \cite{Acz}
\[ \label{K}
K\equiv K(N,\tilde N) = N \zeta\left( \frac{\tilde N}{N} \right).
\]
Moreover, if the function $\zeta$ is concave, then the expression on the right hand side is the perspective function in the sense of \cite{M}, which is jointly concave. Then, if both norms $N, \tilde N$ satisfy the triangle inequality, so does $K$.
In particular this is the case for the pair consisting of the control distance $d$ and the Kaplan norm on the Heisenberg group \cite{C}. Choosing $\zeta$ to be a root function, we get a new distance function in the form of a geometric mean of $d$ and $N$.
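For instance, for the square root one gets
\[
K = N\left( \frac{\tilde N}{N} \right)^{\frac12} = \sqrt{N\tilde N},
\]
and, since $t\mapsto t^{\frac12}$ is concave, this $K$ satisfies the triangle inequality whenever both $N$ and $\tilde N$ do.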
The following property of equivalence of the homogeneous norms is well established, see e.g. \cite{BLU}. Let $K$ be a homogeneous norm on $G$. Then there exists a constant $c > 0$ such that
\begin{equation} \label{eq.5.0}
\forall x\in G\qquad \qquad |\delta_{c^{-1}}(x)|_G \leq K(x), N(x), d(x) \leq |\delta_{c}(x)|_G .
\end{equation}
\noindent Using a homogeneous norm $K$ we introduce the following probability
measure \[ d\mu_{U,K}= \frac1Z e^{-U(K)} d\lambda,\]
with the normalization constant $Z\in(0,\infty)$.\\
\begin{theorem} \label{Thm5.6}
Let $K_0$ be a homogeneous norm on a Carnot group $\mathbb{G}$.
Suppose a probability measure $\mu\equiv \mu_{U,K_0}$
satisfies a coercive bound
\[
\int |f|^q \mathcal{V}_q(K_0) d\mu \leq C\int |\nabla f|^q d\mu +D \int | f|^q d\mu
\]
with some constants $C,D\in(0,\infty)$ independent of $f$, and
\[ \mathcal{V}_q(K_0) \geq \alpha_q |U'(K_0)|^q\]
for some $\alpha_q\in(0,\infty)$ and large $K_0$.\\
Let $K$ be a different homogeneous norm on $\mathbb{G}$.
Suppose $U'$ satisfies the doubling property $|U'(c^2t)|\leq A_c |U'(t)|$ with a constant $A_c\in(0,\infty)$.
There exists $\varepsilon\in(0,\infty)$ such that
\[ \|\nabla (K-K_0)\|<\varepsilon \]
implies
\[
\int |f|^q \left( \tilde {\mathcal{V}}_q(K_0) \right) d\mu_{U,K} \leq 2^{q-1}C\int |\nabla f|^q d\mu_{U,K} + D \int | f|^q d\mu_{U,K} .
\]
with
\[\tilde {\mathcal{V}}_q(K_0) \geq \left(\alpha_q - \frac12\left(\frac{2A_c\varepsilon}{q}\right)^q \right)|U'(K_0) |^q \]
for large $K_0$.
\end{theorem}
\begin{proof} We consider the case $q=1$. For other $q\in(0,\infty)$ the arguments are similar.
Replacing $f\geq 0$ by $f e^{(U(K_0)-U(K))}$,
we get
\[
\int f \mathcal{V}_1(K_0)\, d\mu_{U,K} \leq C \int |\nabla f| d\mu_{U,K} + C\int f |\nabla(U(K_0)-U(K))| d\mu_{U,K} + D \int f d\mu_{U,K}.
\]
Since $K_s \equiv sK_0+ (1-s)K\leq c^2K_0$ and, by our assumption, $|U'(c^2t)|\leq A_c |U'(t)|$ with some $A_c\in(0,\infty)$ independent of $t$, we have
\[ |\nabla(U(K)-U(K_0))| \leq \int_0^1 |U'(K_s)|\, |\nabla (K-K_0)|\, ds \leq
A_c \varepsilon |U'(K_0) |.
\]
Hence
\[
\int f\left( \mathcal{V}_1(K_0) - A_c \varepsilon |U'(K_0) |\right) d\mu_{U,K} \leq C \int |\nabla f| d\mu_{U,K} + D \int f d\mu_{U,K},
\]
from which the desired property follows.
\end{proof}
The merit of the above theorem is that the perturbation theory does not require second order derivatives of the homogeneous norm $K$.
Thus, if we have a coercive bound for $K_0$, we also have one for $K$,
as long as $K$ is a small perturbation of the original homogeneous norm.\\
We recall \cite{HZ}
that by arguments involving Leibniz rule and integration by parts, for $d\mu\equiv \frac1Ze^{-U(d)}d\lambda$ with
\[
\mathcal{V}_1(d) \equiv U'(d) |\nabla d|^2-\Delta d,
\]
we have
\[
\int f \mathcal{V}_1(d) d\mu \leq \int |\nabla f| d\mu .
\]
For the control distance $d$ we have $|\nabla d|=1$. Hence, if the Laplacian of the distance is well behaved (e.g. locally unbounded only from below and with moderate growth dominated by $U'(d)$ at large distances), we have a theory admitting coercive inequalities which allows for small perturbations, in which we only need smallness of the sub-gradient of the difference of the homogeneous norms.
For an example of this type see e.g. the case of Heisenberg group
in \cite{HZ}.
\\
We remark that, naturally, along the lines indicated above one can develop a perturbation theory for the theory discussed in Sections 1--4 for measures in which we were taming the singularities, as well as for those considered in \cite{BDZ1}, \cite{BDZ2}, \cite{ChFZ}.\\
Although our description was focused on the Poincar\'e and Log-Sobolev inequalities, one can provide a similar development
for other inequalities, as e.g. in \cite{BDZ1} or in \cite{RZ} and \cite{RZ2}, including in particular a necessary and sufficient condition for exponential decay to equilibrium in Orlicz spaces.
\subsection{Examples} \label{Sec.5.1 Examples}
\begin{example} \label{Example.5.1}
Let $K=(1-\alpha)d+\alpha N$, for any homogeneous norm $N$.
Then, using the fact that for a homogeneous norm $|\nabla N|\leq C$ for some $C\in(0,\infty)$, if $C\geq |\nabla d|\geq \kappa >0$ for some $\kappa\in(0,\infty)$, we have
\[
|\nabla K|^2 = (1-\alpha )^2|\nabla d|^2+ 2(1-\alpha )\alpha \nabla d\cdot\nabla N +\alpha^2 | \nabla N|^2 \geq
(1-\alpha) \left((1-\alpha)\kappa^2 - 2\alpha C^2 \right)
\]
which is positive for $\alpha >0$ sufficiently small.
Moreover we have
\[
|\nabla K - \nabla d|= \alpha |\nabla N-\nabla d|\leq 2 \alpha C.
\]
Hence the assumptions of the perturbation Theorem \ref{Thm5.6} can be satisfied by taking $\alpha>0$ sufficiently small.\\
\end{example} \label{EndeExample.5.1}
\begin{example} \label{Example.5.2} In this example we discuss a case of a seminorm obtained as mixture of control and Kaplan norms.\\
For our purposes we are interested in properties of sub-gradient and sub-Laplacian of the homogeneous norms.
If $d$ is the control distance and $N$ is the Kaplan
norm associated to a sub-gradient $\nabla\equiv (X_1,..,X_n)$, then
the following relations are satisfied, \cite{BLU}
\begin{equation} \label{eq.5.1}
|\nabla d|=1 \qquad \qquad \qquad\qquad \Delta N = \frac{Q-1}{N}
|\nabla N|^{2}
\end{equation}
where $|\cdot|$ denotes Euclidean norm in $\mathbb{R}^n$ and $\Delta$ denotes the sub-Laplacian.
We define a new homogeneous norm by
\[
K\equiv K(d,N) = d \zeta\left( \frac{N}{d} \right),
\]
with a smooth non-negative function $\zeta$. This includes more general means than just the weighted arithmetic mean.
Using this definition and the equivalence relation of norms \eqref{eq.5.0}, one gets the following.
\begin{lemma} \label{Lem.2}
\begin{equation} \label{eq.5.2a}
(\inf \zeta ) d\leq K \leq (\sup \zeta) d
\end{equation}
\begin{equation} \label{eq.5.2b}
|\nabla K| = \left(( \zeta - \frac{N}{d}\zeta')^2 |\nabla d|^2 +2 (\zeta - \frac{N}{d}\zeta') \zeta' \nabla d\cdot \nabla N + (\zeta')^2 |\nabla N |^2\right)^{\frac12}
\end{equation}
\begin{equation} \label{eq.5.2c}
\begin{split}
\Delta K = \left( \zeta - \frac{N}{d}\zeta' \right) \Delta d +
\zeta'' \frac{1}{d}\left( \nabla N - \frac{N}{d} \nabla d \right)^2 +\zeta' \Delta N
\end{split}
\end{equation}
\end{lemma}
\hfill $\circ$
If $N$ is the Kaplan norm, then \eqref{eq.5.2c}
reads
\begin{equation} \label{eq.5.2d}
\Delta K = \left( \zeta - \frac{N}{d}\zeta' \right) \Delta d +
\zeta'' \frac{1}{d}\left( \nabla N - \frac{N}{d} \nabla d \right)^2 +\zeta' \frac{Q-1}{N}
|\nabla N|^{2}
\end{equation}
and, since $\zeta$ is a smooth function on an interval
$[c^{-2},c^2]$ and both $|\nabla d|$ and $|\nabla N|$ are bounded, the leading term on the right hand side is the one containing $\Delta d$.\\
Suppose $K_s\equiv d \zeta_s(\frac{N}{d})$ is a one parameter differentiable interpolation between $d$ and $K$ with bounded derivative $\frac{d}{ds}\zeta_s\equiv\dot \zeta_s$, then we have
\[\begin{split}
K_s &\leq d\max_s \|\zeta_s\|,\qquad |\dot K_s| \leq
d\max_s \|\dot\zeta_s\| \\
|\nabla K_s| &\leq |\nabla d| \; \left( \max_s \|\zeta_s\| + \frac{N}{d} \max_s \|\zeta_s'\| \right)+ |\nabla N| \max_s \|\zeta_s'\| ,
\\
|\nabla \dot K_s| & \leq
|\nabla d| \; \left(\max_s \|\dot\zeta_s\| + \frac{N}{d} \max_s \|\dot\zeta_s'\| \right) + |\nabla N| \max_s \|\dot \zeta_s'\|
\end{split}
\]
\[\begin{split}
|\nabla(U(d)-U(K))| &= |\nabla(\int_0^1 U'(K_s) \dot K_s ds)| \\
&\leq \int_0^1 |U''(K_s) \dot K_s | \; |\nabla K_s| ds + \int_0^1 |U'(K_s)|\; |\nabla \dot K_s | ds
\end{split}
\]
Hence the assumptions of the perturbation Theorem \ref{Thm5.6} can be satisfied by taking $\zeta$ sufficiently close to one.\\
\end{example} \label{EndExample.5.2}
In the following example we illustrate the above in the case of homogeneous norm which is created using the geometric mean.
\begin{example} \label{Example.5.3}
Consider $K=d^{1-\alpha}N^{\alpha},$ where $0<\alpha<1.$
Then we have
\begin{equation} \label{eq.Ex5.E3.1}
|\nabla K|^2 = |(1-\alpha) \left(\frac{N}{d}\right)^\alpha \nabla d + \alpha \left(\frac{d}{N}\right)^{1-\alpha}\nabla N |^2
\end{equation}
If
\[
\frac1c d \leq N \leq c d ,
\]
then
\[ \frac1c \leq \frac{N }{d}, \frac{d }{N} \leq c \]
and hence, for $C\geq |\nabla d|\geq \kappa >0$ and $|\nabla N| \leq C$ , we get
\begin{equation} \label{eq.Ex5.E3.2}
|\nabla K|^2 \geq (1-\alpha)^2 \frac{ \kappa^2}{c^{2\alpha}} -2 (1-\alpha) \alpha c |\nabla d|\; | \nabla N | \geq
(1-\alpha)^2 \frac{ \kappa^2}{c^{2\alpha}} -2 (1-\alpha) \alpha c C^2
\end{equation}
which can be made strictly positive for $\alpha>0$ sufficiently small.
We also have
\[\begin{split}
|\nabla K -\nabla d| \leq |(1-\alpha) \left(\frac{N}{d}\right)^\alpha -1| |\nabla d| + \alpha \left(\frac{d}{N}\right)^{1-\alpha}|\nabla N |\\
\leq \left( |(1-\alpha) c^\alpha -1| + \alpha c^{1-\alpha}\right)C
\end{split}
\]
which can be made sufficiently small for $\alpha>0$ sufficiently small.
Hence the assumptions of the perturbation Theorem \ref{Thm5.6} can be satisfied by taking $\alpha>0$ sufficiently small.\\
\vspace{0.25cm}
For the sub-Laplacian, writing $\zeta\equiv\left(\frac{N}{d}\right)^{\alpha}$, we have
\begin{equation} \label{eq.Ex5.2c}
\begin{split}
\Delta K = \left( 1-\alpha\right) \zeta \Delta d -
\alpha\left( 1-\alpha\right) \zeta d \left( \frac1N\nabla N - \frac{1}{d} \nabla d \right)^2 +\alpha\zeta \frac{d}{N} \frac{Q-1}{N} |\nabla N|^2
\end{split}
\end{equation}
Thus for large $N$ the possible singular behaviour is determined by
the first term on the right hand side.\\
\noindent Hence, in particular
for the Heisenberg group, we have the following conclusion (which follows from the corresponding $U$-bound via arguments of \cite{HZ}).
\begin{theorem}
If a probability measure
\[d\mu =e^{-d^p} d\lambda\]
satisfies the Log-Sobolev inequality for $p>p_0>2$, then so does the measure
\[d\nu =e^{-K^{p/\alpha}} d\lambda.\]
\end{theorem}
\end{example}
\newpage
\paragraph*{Appendix.1 : Generalised Heisenberg Group } $\;$\\
\label{Appx.1.TypeGHeisenberg}
\noindent For $1\leq j\leq n$, let $L_j \in\mathbb{R}\setminus\{0\}$. In $\mathbb{R}^{2n+1}$, consider the algebraic group law given by
\[(x, t) \ast (y, s)=\Big(x+y,\; t+s+\sum_{j=1}^n L_j(x_jy_{j+n}-y_jx_{j+n})\Big). \]
We have
\[
X_j=\begin{cases}
\partial_{x_j}-L_j x_{j+n}\partial_t,\qquad j=1,..,n\\
\partial_{x_j} + L_{j-n} x_{j-n}\partial_t,\qquad j=n+1,..,2n .
\end{cases}
\]
Then
\[
[X_j,X_k]=\begin{cases}
2 L_j \partial_t,\qquad k=n+j\\
0,\qquad\qquad \text{otherwise}
\end{cases}
\]
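As a side check (ours, purely illustrative), the group law above is easily verified to be associative; the following minimal Python sketch tests this numerically for a random triple of points.
\begin{verbatim}
import random

def star(a, b, L):
    # (x,t)*(y,s) = (x+y, t+s+sum_j L_j(x_j y_{j+n} - y_j x_{j+n}))
    n = len(L)
    x, t = a[:-1], a[-1]
    y, s = b[:-1], b[-1]
    twist = sum(L[j] * (x[j] * y[j + n] - y[j] * x[j + n]) for j in range(n))
    return tuple(xi + yi for xi, yi in zip(x, y)) + (t + s + twist,)

L = [1.0, -0.5]                                    # n = 2, points in R^5
p, q, r = (tuple(random.uniform(-1, 1) for _ in range(5)) for _ in range(3))
lhs = star(star(p, q, L), r, L)
rhs = star(p, star(q, r, L), L)
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))  # associativity
\end{verbatim}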
Kaplan norm:
\[ \label{KNGH}
{\color{blue}
N=
\left( \left( \sum_{j=1,..,n} 2|L_j|(x_j^2 + x_{j+n}^2) \right)^2 +16 z^2 \right)^\frac14
}
\]
Sub-gradient of Kaplan norm:
\[
X_j N = \begin{cases}
\left( |L_j| x_j ( \sum_{k=1,..,n} 2|L_k|( x_k^2 + x_{k+n}^2)) - 4L_j x_{j+n}z \right)\frac1{ N^3} ,\quad j=1,..,n\\
\left( |L_{j-n}| x_{j} ( \sum_{k=1,..,n} 2 |L_k|( x_k^2 + x_{k+n}^2)) + 4 L_{j-n} x_{j-n}z \right)\frac1{ N^3} ,\quad j=n+1,..,2n
\end{cases}
\]
Using this we have
\[
\begin{split}
|\nabla N|^2&=\sum_{j=1}^{2n} |X_j N|^2 =\\
& \frac{1}{N^6} \sum_{j=1}^{n} \left( |L_j|\; x_j ( \sum_{k=1,..,n} 2|L_k|( x_k^2 + x_{k+n}^2)) - 4L_j x_{j+n}z \right)^2 \\
& +\frac{1}{N^6}\sum_{j=n+1}^{2n} \left( |L_{j-n}| \; x_{j} ( \sum_{k=1,..,n} 2|L_k|( x_k^2 + x_{k+n}^2)) + 4 L_{j-n} x_{j-n}z \right)^2 \\
& =\frac{ \sum_{k=1,..,n} |L_k|( x_k^2 + x_{k+n}^2) }{N^2}
%
\end{split}
\]
and
\[
\mathbf{x}\cdot \nabla N = \frac{2 ( \sum_{k=1,..,n} |L_k|( x_k^2 + x_{k+n}^2))^2}{N^3} .
\]
Summarising we have
\begin{lemma}
For the Kaplan norm of Generalised Heisenberg Group we have
{\color{blue}
\begin{equation} \label{Apx.1.Lem}
\begin{split}
\min_k|L_k| \; \frac{| \mathbf{x}|^2}{N^2} \leq |\nabla N|^2&=\frac{ \sum_{k=1,..,n} |L_k|( x_k^2 + x_{k+n}^2) }{N^2} \leq \max_k|L_k| \; \frac{| \mathbf{x}|^2}{N^2}\\
\mathbf{x}\cdot \nabla N &= \frac{2 ( \sum_{k=1,..,n} |L_k|
( x_k^2 + x_{k+n}^2))^2}{N^3} \geq 0.
\end{split}
\end{equation}
}
\end{lemma}
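The closed-form expressions above are straightforward to test numerically. The following Python sketch (ours; the helper name is hypothetical) implements the displayed sub-gradient and checks the $|\nabla N|^{2}$ identity in the classical case $|L_j|=1$.
\begin{verbatim}
import random

def subgradient(xs, z, L):
    # X_j N as displayed above, j = 1,..,2n, for the Kaplan norm
    # N = ( (sum_k 2|L_k|(x_k^2 + x_{k+n}^2))^2 + 16 z^2 )^(1/4)
    n = len(L)
    A = sum(2 * abs(L[k]) * (xs[k] ** 2 + xs[k + n] ** 2) for k in range(n))
    N = (A * A + 16 * z * z) ** 0.25
    g = [(abs(L[j]) * xs[j] * A - 4 * L[j] * xs[j + n] * z) / N ** 3
         for j in range(n)]
    g += [(abs(L[j]) * xs[j + n] * A + 4 * L[j] * xs[j] * z) / N ** 3
          for j in range(n)]
    return g, N

L = [1.0, -1.0]                 # |L_j| = 1: classical Heisenberg-type case
xs = [random.uniform(-1, 1) for _ in range(4)]
z = 0.3
g, N = subgradient(xs, z, L)
lhs = sum(gi * gi for gi in g)
rhs = sum(abs(Lk) * (xs[k] ** 2 + xs[k + len(L)] ** 2)
          for k, Lk in enumerate(L)) / N ** 2
assert abs(lhs - rhs) < 1e-9
\end{verbatim}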
\paragraph*{Appendix.2 : Smooth Norms for Type 2 Nilpotent Lie Group } $\;$\\
\label{Appx.2.Type2G}
Let $\mathbb{G}$ be a step-2 group, i.e. a group isomorphic to ${ \mathbb{R}^{n+m}}$
with the group law
\[
\left(w,z\right)\circ\left(w',z'\right)=\left(w_{i}+w'_{i},~z_{j}+z'_{j}+\frac{1}{2}\langle\Lambda^{\left(j\right)}w,w'\rangle\right)_{i=1,..,n;\; j=1,..,m}
\]
for $w,w'\in\mathbb{R}^{n},z,z'\in\mathbb{R}^{m}$, where the matrices
$\Lambda^{\left(j\right)}$ are $n \times n$ skew-symmetric and linearly independent.
For $i\in\left\{ 1,\ldots,n\right\}$ and $j\in\left\{ 1,\ldots,m\right\} $, let
\[ X_{i}=\frac{\partial}{\partial w_{i}}+\frac{1}{2}\sum_{k=1}^{m}\sum_{l=1}^{n}\Lambda_{il}^{\left(k\right)}w_{l}\frac{\partial}{\partial z_{k}}\qquad \textrm{ and } \qquad Z_{j}=\frac{\partial}{\partial z_{j}}.
\]
Later on, $\nabla \equiv (X_i)_{i=1,..,n}$ and $\Delta \equiv \sum_{i=1,..,n} X_i^2 $ will denote the associated sub-gradient
and sub-Laplacian, respectively.
We consider the following smooth homogeneous norm on $\mathbb{G}$
\[ N\equiv \left( |w|^4+ a|z|^2\right)^{\frac{1}{4}} \]
with $a\in(0,\infty)$.
\\
Another smooth homogeneous norm on $\mathbb{G}$ is
\[ \tilde N\equiv \left( \left( |w|^4+ a|z|^2\right)^\frac12 + |w|^2 \right)^{\frac{1}{2}} \]
with $a\in(0,\infty)$.
\paragraph*{Appendix.3 : Sub-gradient and Sub-Laplacian of $K$} \label{Appx.3}$\;$\\
Define
\[
K\equiv K(d,N) = d \zeta\left( \frac{N}{d} \right).
\]
Due to the equivalence of homogeneous norms, the function $\zeta$ is a smooth function supported in a bounded interval $[c^{-1},c]$.
We have
\begin{equation} \label{nabla K}
|\nabla K| = |(\zeta - \frac{N}{d } \zeta') \nabla d + \zeta'\nabla N | =
\left( (\zeta - \frac{N}{d } \zeta')^2 |\nabla d|^2 +2 \nabla d\cdot \nabla N\,(\zeta - \frac{N}{d } \zeta')\zeta' + (\zeta')^2|\nabla N |^2 \right)^\frac12
\end{equation}
and
\begin{equation} \label{Delta K}
\begin{split}
\Delta K &=\nabla\cdot\left(( \zeta - \frac{N}{d}\zeta') \nabla d + \zeta' \nabla N\right)\\
&=\left( \zeta - \frac{N}{d}\zeta' \right) \Delta d \\
& +
\zeta' \left( \frac{\nabla N}{d}- \frac{ N}{d}\frac{\nabla d}{d} + \frac{N}{d}\frac{\nabla d}{d} - \frac{\nabla N}{d}\right) \cdot \nabla d
\\
&- \frac{N}{d}\zeta'' \left( \frac{\nabla N}{d} - \frac{N}{d}\frac{\nabla d}{d} \right)\cdot \nabla d\\
& +\zeta' \Delta N +
\zeta'' \left(\frac{|\nabla N|^2}{d} - \frac{N}{d} \frac{\nabla N\cdot\nabla d}{d} \right)\\
&\\
&=\left( \zeta - \frac{N}{d}\zeta' \right) \Delta d \\
&+\zeta'' \left(\frac{|\nabla N|^2}{d} - 2 \frac{N}{d} \frac{\nabla N\cdot \nabla d}{d} + \frac{N}{d} \frac{N}{d}\frac{\nabla d\cdot \nabla d}{d} \right)\\
& +\zeta' \Delta N \\
&\\
&=\left( \zeta - \frac{N}{d}\zeta' \right) \Delta d + \zeta'' \frac{1}{d}\left( \nabla N - \frac{N}{d} \nabla d \right)^2 +\zeta' \Delta N\\
& = \left( \zeta - \frac{N}{d}\zeta' \right) \Delta d
+\zeta'' \frac{1}{d}\left( \nabla N - \frac{N}{d} \nabla d \right)^2
+\zeta' \frac{Q-1}{N}
|\nabla N|^{2}
\end{split}
\end{equation}
Since $\zeta$ is a smooth function on a bounded interval $[c^{-1},c]$, for some positive constant $c$, the leading term in the last formula outside a large ball is provided by
$\zeta\Delta d$.
\section{Introduction}
Generalizing the classical notion of (conditional) entropy from ergodic theory, Connes and St\o rmer in \cite{CS} defined a relative entropy
$H(B_1|B_2)$ between a pair of finite dimensional $C^*$-subalgebras of a finite von Neumann algebra $M$ with a fixed faithful normal trace $\mathrm{tr}$. In an impactful paper \cite{PP}, Pimsner and Popa
observed that the definition of the relative entropy does not depend on $B_1,B_2$ being finite dimensional, so that one may also consider the relative entropy
$H(B_1|B_2)$ for arbitrary von Neumann subalgebras $B_1,B_2\subset M$. Quite surprisingly, Pimsner and Popa discovered that if $N$ and $M$ are type $II_1$ factors and $N\subset M$ then $H(M|N)$ depends on both the Jones index and the relative commutant. More precisely,
they proved (among other things) that if the relative commutant is trivial, that is $N^{\prime}\cap M=\mathbb{C}$, then
\begin{equation}
\label{pimsnerpopa1}
H(M|N)=\log [M:N].
\end{equation}
Indeed, Pimsner and Popa showed that equality holds in \Cref{pimsnerpopa1} if and only if the subfactor $N\subset M$ is extremal.
In this paper we consider a pair of intermediate subfactors
$N\subset P,Q\subset M$ of a finite index subfactor $N\subset M$ of type $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$, and obtain a formula for $H(P|Q)$ in terms of a probabilistic number $\lambda(P,Q)$.
Pimsner and Popa in \cite{PP} defined for von Neumann subalgebras $B_2\subset B_1\subset M$ of a finite von Neumann algebra $M$ the probabilistic constant
\begin{equation}\label{probabilistic index}\lambda(B_1,B_2)=\max\{\lambda>0\,|\,E_{B_2}(x)\geq \lambda x,\; x\in (B_{1})_{+}\}.
\end{equation}This serves as a replacement of Jones index when $B_1$ and $B_2$ are not necessarily factors. Quite remarkably, they also prove that if
$M$ is a type $II_1$ factor and $N\subset M$ is a subfactor then
\begin{equation}
\label{pimsnerpopa2}
{\lambda(M,N)}={[M:N]}^{-1}.
\end{equation}
Interestingly, the definition of $\lambda(B_1,B_2)$, as in \Cref{probabilistic index}, works for general subalgebras $B_1$ and $B_2$ (not necessarily $B_2\subset B_1$)
of $M$ as well; and furthermore, one may also consider the number $\lambda(B_2,B_1).$
As a generalization of \Cref{pimsnerpopa2} we prove the following result:
\smallskip
\noindent{\bf \Cref{commuting}.} {Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $[M:N]<\infty$.
\begin{enumerate}
\item If $(N,P,Q,M)$ is a commuting square then $\lambda(P,Q)={[P:N]}^{-1}$ and $\lambda(Q,P)={[Q:N]}^{-1}.$
\item If $(N,P,Q,M)$ is a co-commuting square then $\lambda(P,Q)={[M:Q]}^{-1}$ and $\lambda(Q,P)={[M:P]}^{-1}.$
\end{enumerate}}
\smallskip
Moreover, in \Cref{whenequalityholds} and \Cref{whenequalityholds2} we prove that the converse of \Cref{commuting} also holds true for an irreducible quadruple (i.e., $N^{\prime}\cap M=\mathbb{C}$).
In this article, we also provide some calculable formulae for the probabilistic numbers in the case of an irreducible quadruple.\smallskip
\noindent{\bf \Cref{imp1}.}{
Suppose $(N,P,Q,M)$ is a quadruple of $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$ and $e_P$ (resp. $e_Q$) is the biprojection corresponding to the intermediate subfactor $P$ (resp. $Q$), then $\lambda(Q,P)= \frac{\text{tr}(e_Pe_Q)}{\text{tr}(e_Q)}$ and
$\lambda(P,Q)= \frac{\text{tr}(e_Pe_Q)}{\text{tr}(e_P)}.$}
\smallskip
Finally, we prove the main result of this article by relating Connes-St\o rmer relative entropy $H(P|Q)$ with $\lambda(P,Q)$.
\smallskip
\noindent{\bf \Cref{main}.}{Let $(N,P,Q,M)$ be an irreducible quadruple such that $[M:N]<\infty$. Then,
$H(Q|P)=-\log\big(\lambda(Q,P)\big).$}
The above formula generalizes \Cref{pimsnerpopa1}.\smallskip
Combining \Cref{main} with \Cref{imp1} we deduce the following formula for the relative entropy (in \Cref{imp2}).
\begin{equation*}
H(Q|P)= \log\big(\mathrm{tr}(e_Q)\big)-\log\big(\mathrm{tr}(e_Pe_Q)\big).\end{equation*}
We conclude the paper with a cute application of \Cref{main}, where we deduce (in \Cref{prop2}) that if $(N,P,Q,M)$ is an irreducible quadruple with $[M:N]<\infty$ and $[P:N]=2$ then $H(P|Q)$ is either $0$ or $\log 2.$
The paper is organized as follows. After some Preliminaries in Section 2, we discuss the probabilistic constants and present some useful formulas for them in Section 3 and finally,
in Section 4 we prove our main result involving the Connes-St\o rmer relative entropy.
\section{Preliminaries}
In this section we fix the notations, and recall some of the results in \cite{BDLR}, which we will use frequently in the sequel.
\begin{notation}
Consider a subfactor $N\subset M$ with $[M:N]<\infty$ and $N^{\prime}\cap M=\mathbb{C}$. Throughout we will be dealing only with subfactors of type
$II_1$ with finite Jones' index. Thus, $M$ has the unique faithful, normal and tracial state $\mathrm{tr}$, $\mathrm{tr}(1)=1.$
\begin{enumerate}
\item
A quadruple $$\begin{matrix}
Q &\subset & M \cr \cup &\ &\cup\cr N &\subset & P,
\end{matrix}$$ denoted by $(N,P,Q,M)$, is called irreducible if $N^{\prime}\cap M=\mathbb{C}$. Consider the basic
constructions $N\subset M \subset M_1$, $P \subset M \subset P_1$ and
$Q \subset M \subset Q_1$. As is standard, we denote by $e_1$ the
Jones projection $e^M_N$. It is easily seen that, as $II_1$-factors
acting on $L^2(M)$, both $ P_1$ and $Q_1$ are contained in $ M_1$. In
particular, if $e_P: L^2(M) \rightarrow L^2(P)$ denotes the orthogonal
projection, then $e_P \in M_1$. Likewise, $e_Q \in M_1$. Note that, $\mathrm{tr}(e_P)={[M:P]}^{-1}.$ Thus, we
naturally obtain a dual quadruple $$\begin{matrix} P_1 &\subset & M_1
\cr \cup &\ &\cup\cr M &\subset & Q_1.
\end{matrix}$$ We call $(M,Q_1,P_1,M_1)$ the basic construction of $(N,P,Q,M)$.
\item A quadruple $(N,P,Q,M)$ is called a commuting square if
$E^M_P E^M_Q= E^M_Q E^M_P = E^M_N$.
A quadruple $(N,P,Q,M)$ is called a co-commuting square if the quadruple $(M,Q_1,P_1,M_1)$ is a commuting square.
\item Suppose $N_{-1}\subset N\subset M$ is a downward basic construction. Also, denote by $P_{-1}$ (resp. $Q_{-1}$) a downward basic construction of $N\subset P$ (resp. $N\subset Q$) with
the corresponding Jones projection $e^N_{P_{-1}}$ (resp. $e^N_{Q_{-1}}$).
We obtain a new quadruple
$$\begin{matrix} P_{-1} &\subset & N
\cr \cup &\ &\cup\cr N_{-1} &\subset & Q_{-1}.
\end{matrix}$$
We call this new quadruple $(N_{-1},Q_{-1},P_{-1},N)$ as a downward basic construction of the quadruple $(N,P,Q,M)$.
\end{enumerate}
\end{notation}
We recall without proofs the following elementary facts.
\begin{fact}\label{fact}
Consider a quadruple $(N,P,Q,M)$ of type $II_1$ factors with $[M:N]<\infty.$
\begin{enumerate}
\item Suppose $(M,Q_1,P_1,M_1)$ is the basic construction of the quadruple and let $e_{P_1} (\text{resp.}~e_{Q_1})$
be the Jones projection for the inclusion $P_1\subset M_1 (\text{resp.}~ Q_1\subset M_1).$ Then,
$$\mathrm{tr}(e_{P_1}e_{Q_1})=\frac{[M:P]}{[Q:N]}\mathrm{tr}(e_Pe_Q)=\frac{[M:Q]}{[P:N]} \mathrm{tr}(e_Pe_Q).$$
\item Suppose $(N_{-1},Q_{-1},P_{-1},N)$ is a downward basic construction of $(N,P,Q,M)$. Then,
$$\mathrm{tr}(e^N_{P_{-1}}e^N_{Q_{-1}})=\frac{[M:Q]}{[P:N]}\mathrm{tr}(e_Pe_Q)=\frac{[M:P]}{[Q:N]} \mathrm{tr}(e_Pe_Q).$$
\item $[P_{-1}:N_{-1}]=[M:P]$ and $[Q_{-1}:N_{-1}]=[M:Q].$
\end{enumerate}
\end{fact}
Consider the quadruple $(N,P,Q,M)$ as above. Let $\{\lambda_i:i\in I\}$ and
$\{\mu_j:j\in J\}$ be (right) Pimsner-Popa bases for $P/N$ and $Q/N$,
respectively. Consider two auxiliary operators $p(P,Q)$ and $p(Q,P)$
(as in \cite{BDLR}) given by
\[
p(P,Q)= \sum_{i,j}{\lambda_i}\mu_j e_1 {\mu}^*_j{\lambda}^*_i\quad \text{and}\quad
p(Q,P)= \sum_{i,j}\mu_j \lambda_i e_1 {\lambda}^*_i {\mu}^*_j.
\]
By \cite[Lemma 2.18]{BDLR}, $p(P,Q)$ and $p(Q,P)$ are both
independent of choice of bases and, by \cite[Proposition
2.22]{BDLR}, $Jp(P,Q)J = p(Q,P)$, where $J$ is the usual modular
conjugation operator on $L^2(M)$; so that, $\|p(P,Q)\| =
\|p(Q,P)\|$. Let us denote this common value by $\lambda$. We also note that $\lambda=[M:N] \mathrm{tr}(e_Pe_Q)$.
Furthermore, $$p(P,Q)= [P:N] E^{N^{\prime}}_{P^{\prime}}(e_Q)=[Q:N]E^{M_1}_{Q_1}(e_P).$$
We also recall the following result which will be used heavily in this note.
\begin{lemma}\cite{BDLR}\label{bdlr}
\label{crucial}
If $N^{\prime}\cap M=\mathbb{C}$ then $\frac{1}{\lambda} p(P,Q)$ is a projection and $\frac{1}{\lambda}p(P,Q)\geq e_P\vee e_Q.$ Similar statement holds if we interchange $P$ and $Q$.
\end{lemma}
\begin{remark}\label{downward}
The auxiliary operators corresponding to the downward basic construction $(N_{-1},Q_{-1},P_{-1},N)$ of the quadruple $(N,P,Q,M)$ have the following formulae.
$$p(Q_{-1},P_{-1})=[Q_{-1}:N_{-1}]E^{N^{\prime}_{-1}}_{Q^{\prime}_{-1}}(e^N_{P_{-1}})=[P_{-1}:N_{-1}]E^M_P(e^N_{Q_{-1}})$$ and also,
$$p(P_{-1},Q_{-1})=[P_{-1}:N_{-1}]E^{N^{\prime}_{-1}}_{P^{\prime}_{-1}}(e^N_{Q_{-1}})=[Q_{-1}:N_{-1}]E^M_Q(e^N_{P_{-1}}).$$\end{remark}
\section{Pimsner-Popa probabilistic constant}
Generalizing the Jones index, Pimsner-Popa in \cite{PP} had introduced the probabilistic constant $\lambda(B_1,B_2)$ for von Neumann subalgebras $B_2\subset B_1\subset M$ of a finite von Neumann algebra $M$
and this proved to be a powerful analytical tool in subfactor theory. However, as observed in \cite{O}, this definition works for general subalgebras $B_1$ and $B_2$ (not necessarily, $B_2\subset B_1)$ as well.
In this section we shall obtain a formula for $\lambda(P,Q)$, when $N\subset M$ has a pair of intermediate subfactors $P$ and $Q$.
\begin{definition}(Pimsner-Popa)\label{pimsner-popa}
Consider a pair of von Neumann subalgebras $P$ and $Q$ of a finite von Neumann algebra $M$. Define the Pimsner-Popa probabilistic constant of the ordered pair $(P,Q)$ as follows:
$$\lambda(P,Q)=\max\{t>0\,|\, E_Q(x)\geq t x ~~\forall x\in P_{+}\}.$$
\end{definition}
\begin{remark}\label{remarkone}
If $N\subset M$ is a subfactor of type $II_1$ factors then $\lambda(N,M)=1$ and $\lambda(M,N)={[M:N]}^{-1}.$ The latter follows from \cite{PP}.\end{remark}
Below we prove the formulas for the probabilistic numbers.
\begin{theorem}\label{imp1}
Suppose $(N,P,Q,M)$ is a quadruple of $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. Then, $\lambda(Q,P)= \frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_Q)}$ and
$\lambda(P,Q)= \frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_P)}.$
\end{theorem}
\begin{proof}
Fix $x\in P_{+}$. Since $x$ is positive, we must have
$$x=\big(\sum_j a_je^N_{P_{-1}}b_j\big)^*\big(\sum_i a_ie^N_{P_{-1}}b_i\big)=\sum_{i,j}b^*_jE^N_{P_{-1}}(a^*_ja_i)e^N_{P_{-1}}b_i,$$ where $a_i,b_j\in N$.
By \Cref{crucial} we have,
$\frac{1}{[N:N_{-1}]\mathrm{tr}(e^N_{P_{-1}}e^N_{Q_{-1}})}p(P_{-1},Q_{-1})\geq e^N_{P_{-1}}$ and thanks to \Cref{fact} we have $\frac{1}{[M:P][M:Q]\mathrm{tr}(e_Pe_Q)}[Q_{-1}:N_{-1}]E^{M}_{Q}(e^N_{P_{-1}})\geq e^N_{P_{-1}}.$
Again by \Cref{fact}, $[Q_{-1}:N_{-1}]=[M:Q]$ so that
$$\frac{\mathrm{tr}(e_P)}{\mathrm{tr}(e_Pe_Q)}E^M_Q(e^N_{P_{-1}})\geq e^N_{P_{-1}}.$$
Now, for $x\in P_{+}$ we have: \begin{align*}
x = & \sum_{i,j} b^*_jE^N_{P_{-1}}(a^*_ja_i)e^N_{P_{-1}}b_i\\
\leq & \frac{\mathrm{tr}(e_P)}{\mathrm{tr}(e_Pe_Q)}\sum_{i,j} b^*_jE^N_{P_{-1}}(a^*_ja_i)E^M_Q(e^N_{P_{-1}})b_i\\
\leq & \frac{\mathrm{tr}(e_P)}{\mathrm{tr}(e_Pe_Q)}E^M_Q\bigg(\sum_{i,j} b^*_jE^N_{P_{-1}}(a^*_ja_i)e^N_{P_{-1}}b_i\bigg)\\
\leq & \frac{\mathrm{tr}(e_P)}{\mathrm{tr}(e_Pe_Q)}E^M_Q(x)
\end{align*}
In other words, $E_Q(x)\geq \frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_P)}x$ for any $x\in P_{+}.$
Now, suppose $E^M_Q(x)\geq sx$ for some $s>0$. Then, $E^M_Q(e^N_{P_{-1}})\geq s e^N_{P_{-1}}$. Taking norms on both sides we get $\lVert E^M_Q(e^N_{P_{-1}})\rVert\geq s$.
By Remark 3.3 of \cite{BDLR} we see that $\lVert E_Q(e^N_{P_{-1}})\rVert =\frac{[M:N]\mathrm{tr}(e^N_{P_{-1}}e^N_{Q_{-1}})}{[M:Q]}$. Hence, by \Cref{fact},
$$\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_P)}\geq s.$$ Therefore, $\lambda(P,Q)=\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_P)},$ as desired.
The other formula follows by interchanging $P$ and $Q$. This completes the proof. \end{proof}
\begin{remark}
Suppose $(N,P,Q,M)$ is as in \Cref{imp1}. In general, $\lambda(P,Q) \neq \lambda(Q,P)$, as can be easily seen from \Cref{remarkone}. Using \Cref{imp1} it follows that $\lambda(P,Q)=\lambda(Q,P)$ if and only if $[P:N]=[Q:N].$
\end{remark}
\begin{example}\label{ex-1}
Let $G$ be a finite group acting outerly on a $II_1$-factor $S$. Let
$H, K $ and $L$ be subgroups of $G$ such that $L \subseteq H \cap K$
and $H$ and $K$ are non-trivial. Consider the quadruple $(N = S
\rtimes L, P = S \rtimes H, Q = S \rtimes K, M= S \rtimes G )$. Then, a simple calculation shows that (see \cite{BG}, for instance) \begin{equation*}
\mathrm{tr}(e_P e_Q) = \frac{ |H \cap K|}{|G|}, \quad \mathrm{tr}(e_P)=\frac{|H|}{|G|} ~~~\text{and}~~~\mathrm{tr}(e_Q)=\frac{|K|}{|G|}.
\end{equation*}
Therefore, $$\lambda(P,Q)=\frac{|H\cap K|}{|H|}~~~\text{and}~~~\lambda(Q,P)=\frac{|H\cap K|}{|K|}.$$
\end{example}
\begin{example}
Let $H, K, G$ and $S$ be as in \Cref{ex-1}.
Consider the quadruple $(N= S^G, P = S^H, Q = S^K, M = S)$. Then,
$$\lambda(P,Q)=\frac{|H\cap K|}{|K|}~~~~\text{and}~~~\lambda(Q,P)=\frac{|H\cap K|}{|H|}.$$ The proof is simple and omitted.
\end{example}
\begin{corollary}\label{dual}
Let $(N,P,Q,M)$ be a quadruple of $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. Then, $\lambda(P_1,Q_1)=\lambda(Q,P)$ and $\lambda(Q_1,P_1)=\lambda(P,Q).$
\end{corollary}
\begin{proof}
It is easy to check that $\mathrm{tr}(e_{Q_1})=\frac{1}{[Q:N]}.$ Thus, using \Cref{fact}, we see that
$$\lambda(Q_1,P_1)=\frac{\mathrm{tr}(e_{P_1}e_{Q_1})}{\mathrm{tr}(e_{Q_1})}=[M:P] \mathrm{tr}(e_Pe_Q)=\lambda(P,Q).$$
Similarly, $\lambda(P_1,Q_1)=\lambda(Q,P).$ This completes the proof.
\end{proof}
\begin{corollary}\label{downwarddual}
Let $(N,P,Q,M)$ be a quadruple of $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. Then, $\lambda(P_{-1},Q_{-1})=\lambda(Q,P)$ and $\lambda(Q_{-1},P_{-1})=\lambda(P,Q).$
\end{corollary}
\begin{proof}
First, it is easy to check that $\mathrm{tr}(e^N_{P_{-1}})={[P:N]}^{-1}.$ Thus by \Cref{imp1} and \Cref{fact},
\begin{equation*}\lambda(P_{-1},Q_{-1})=\frac{\mathrm{tr}(e^N_{P_{-1}}e^N_{Q_{-1}})}{\mathrm{tr}(e^N_{P_{-1}})}=[M:Q]\mathrm{tr}(e_Pe_Q)=\lambda(Q,P).
\end{equation*}
Similarly, $\lambda(Q_{-1},P_{-1})=\lambda(P,Q).$ This completes the proof.
\end{proof}
\begin{corollary}\label{whenequalityholds}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $[M:N]<\infty$. Then, $1\geq \lambda(P,Q)\geq {[P:N]}^{-1}.$ Furthermore, if $N^{\prime}\cap M=\mathbb{C}$,
$\lambda(P,Q)= {[P:N]}^{-1}$ if and only if
$(N,P,Q,M)$ is a commuting square. Also, $\lambda(P,Q)=1$ if and only if $P\subset Q$. A similar statement holds if we interchange $P$ and $Q$.
\end{corollary}
\begin{proof}
The inequalities $1\geq \lambda(P,Q)\geq {[P:N]}^{-1}$ follow immediately from \Cref{pimsner-popa}.
By \Cref{bdlr}, $\lVert p(P,Q)\rVert=\lambda.$ Thus, $\lambda=[P:N]\lVert E^{N^{\prime}}_{P^{\prime}}(e_Q)\rVert\leq [P:N].$
Since $\lambda=[M:N] \mathrm{tr}(e_Pe_Q)$, by \Cref{imp1}, $\lambda(P,Q)=\frac{\lambda}{[P:N]}$. By \cite{BDLR}, $(N,P,Q,M)$ is a commuting square if and only if $\lambda=1$, if and only if $\lambda(P,Q)={[P:N]}^{-1}.$
If $P\subset Q$ then $e_P\leq e_Q$ and hence $\lambda(P,Q)=1.$ Conversely, if $\lambda(P,Q)=1$ then we get $e_P=e_Pe_Q=e_Qe_P$ and hence $P=P\cap Q.$ Therefore,
we get $P\subset Q$.
This completes the proof.
\end{proof}
\begin{corollary}
\label{whenequalityholds2}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $[M:N]<\infty$. Then, $1\geq \lambda(P,Q)\geq {[M:Q]}^{-1}.$ Furthermore, if $N^{\prime}\cap M=\mathbb{C}$, then $(N,P,Q,M)$ is a co-commuting square if and only if $\lambda(P,Q)={[M:Q]}^{-1}.$
A similar statement holds if we interchange $P$ and $Q$.
\end{corollary}
\begin{proof}
That $\lambda(P,Q)\geq {[M:Q]}^{-1}$ follows from \Cref{pimsner-popa}. Recall, $(N,P,Q,M)$ is a co-commuting square if and only if $(M,P_1,Q_1,M_1)$ is a commuting square. Therefore, by \Cref{whenequalityholds}, $(N,P,Q,M)$ is a co-commuting square if and only if
$\lambda(Q_1,P_1)={[Q_1:M]}^{-1}={[M:Q]}^{-1}$ if and only if $\lambda(P,Q)={[M:Q]}^{-1}$ (thanks to \Cref{dual}). This proves the assertion.
\end{proof}
\begin{proposition}\label{prop}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. If $[P:N]=2$ then $\lambda(P,Q)$ is either $1$ or $1/2.$
\end{proposition}
\begin{proof}
Let $\{1,u\}$ be a unitary orthonormal Pimsner-Popa basis for $P/N$. Thus, $p(P,Q)=e_Q+ue_Qu^*$. By \Cref{bdlr}, $p(P,Q)e_Q=\lambda e_Q$ and hence $\big(1+ uE_Q(u^*)\big)e_Q=\lambda e_Q.$
Thus, \begin{equation}\label{final}\lambda-1= uE_Q(u^*).\end{equation} Now two cases arise.
If $\lambda=1$ then $(N,P,Q,M)$ is a commuting square and hence, by \Cref{whenequalityholds}, $\lambda(P,Q)=1/2.$ When $\lambda\neq 1$, by \Cref{final}, $u\in Q$ and therefore $P\subset Q$.
So, by \Cref{whenequalityholds} again, we conclude that $\lambda(P,Q)=1.$ This finishes the proof.
\end{proof}
In the general case (i.e., without assuming irreducibility) we have the following result.
\begin{theorem}\label{commuting}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $[M:N]<\infty$.
\begin{enumerate}
\item If $(N,P,Q,M)$ is a commuting square then $\lambda(P,Q)={[P:N]}^{-1}$ and $\lambda(Q,P)={[Q:N]}^{-1}.$
\item If $(N,P,Q,M)$ is a co-commuting square then $\lambda(P,Q)={[M:Q]}^{-1}$ and $\lambda(Q,P)={[M:P]}^{-1}.$
\end{enumerate}
\end{theorem}
\begin{proof}
If $x\in P_{+}$ we have, by the commuting square condition, $E^M_Q(x)=E^M_Q\circ E^M_P(x)=E^P_N(x)$. Thanks to \cite{PP}, $E^M_Q(x)\geq {[P:N]}^{-1} x$.
Now, if $E^M_Q(x)\geq tx$ for all $x\in P_{+}$ and for some scalar $t$, we see that $E^M_Q(e^N_{P_{-1}})= E^P_N(e^N_{P_{-1}})={[P:N]}^{-1}$ and so ${[P:N]}^{-1}\geq te^N_{P_{-1}}$.
Taking norms on both sides of the last inequality we get ${[P:N]}^{-1}\geq t.$ Therefore, $\lambda(P,Q)= {[P:N]}^{-1}.$
Similarly, $\lambda(Q,P)={[Q:N]}^{-1}.$ This completes the proof of item (1).
We now prove item (2). Since $(N,P,Q,M)$ is a co-commuting square, it is easy to see that $(N_{-1},Q_{-1},P_{-1},N)$ is a commuting square. According to \cite{BDLR},
$p(P_{-1},Q_{-1})$ is a projection such that $p(P_{-1},Q_{-1})\geq e^N_{P_{-1}}$. By \Cref{downward} and \Cref{fact} we see that $[M:Q]E^M_Q(e^N_{P_{-1}})\geq e^N_{P_{-1}}$.
For any $x\in P_{+}$, as in the proof of \Cref{imp1}, we see that $x= \sum_{i,j} b^*_jE^N_{P_{-1}}(a^*_ja_i)e^N_{P_{-1}}b_i$ for some $a_i,b_j\in N$. Therefore,
\begin{align*}
x \leq & [M:Q]\sum_{i,j} b^*_jE^N_{P_{-1}}(a^*_ja_i)E^M_Q(e^N_{P_{-1}})b_i\\
=& [M:Q] E^M_Q\big(\sum_{i,j}b^*_jE^N_{P_{-1}}(a^*_ja_i)e^N_{P_{-1}}b_i\big)\\
=& [M:Q]E^M_Q(x).
\end{align*}
Thus, we have proved that for any $x\in P_{+}$ we have $E^M_Q(x)\geq {[M:Q]}^{-1} x$. Also, if $E^M_Q(x)\geq sx$ for $x\in P_{+}$ and for some scalar $s$ then,
in particular, $E^M_Q(e^N_{P_{-1}})\geq s e^N_{P_{-1}}$ and hence $p(P_{-1},Q_{-1})\geq [M:Q] s e^N_{P_{-1}}$. Thus, $\lVert p(P_{-1},Q_{-1})\rVert =1\geq s[M:Q].$
So, ${[M:Q]}^{-1}\geq s.$ We conclude that $\lambda(P,Q)={[M:Q]}^{-1}$ and similarly $\lambda(Q,P)={[M:P]}^{-1}.$ This completes the proof.
\end{proof}
\begin{remark}
\Cref{commuting} can be thought of as a generalization of \cite[Theorem 2.2]{PP}.
\end{remark}
\begin{example}
Let $K\subset L$ be a subfactor of finite index and $G$ be a finite group acting outerly on $L$, so that we obtain a quadruple $(N=K, P=L,Q=K\rtimes G,M=L\rtimes G).$ Then
the quadruple is a commuting square. Thus, $\lambda(L,K\rtimes G)={[L:K]}^{-1}$ and $\lambda(K\rtimes G,L)={|G|}^{-1}.$
\end{example}
\begin{remark}
We feel that $\lambda(P,Q)$ (and $\lambda(Q,P)$) is a powerful invariant in determining the relative position between the intermediate subfactors $N\subset P,Q\subset M$, and demands a deeper investigation.
\end{remark}
\section{Relative entropy and intermediate subfactors}
Connes and St\o rmer in \cite{CS} introduced a notion of entropy of a finite dimensional subalgebra and more generally, the relative
entropy between two finite dimensional subalgebras of a finite von Neumann algebra as an extension of the notion of entropy from classical ergodic theory.
Pimsner and Popa in \cite{PP} had observed that this notion does not depend on the fact that the subalgebras are finite-dimensional and they further extended it to a notion of relative entropy between intermediate von Neumann subalgebras of a finite von Neumann algebra.
We briefly recall the definition below.
\begin{definition}(Connes-St\o rmer)
Let $\eta:[0,\infty)\rightarrow (-\infty,\infty)$ be defined by $\eta(t)=-t\log(t)$ for $t>0$ and $\eta(0)=0.$ Let $M$ be a finite von Neumann algebra with a fixed normalized trace $\mathrm{tr}$ and $P$ and $Q$ be von Neumann subalgebras of $M$.
The entropy of $P$ relative to $Q$ with respect to $\mathrm{tr}$ is
$$H_{\mathrm{tr}}(P|Q)=\text{sup}\sum_i\bigg(\mathrm{tr}\big(\eta(E_Q(x_i))\big)-\mathrm{tr}\big(\eta(E_P(x_i))\big)\bigg),$$ where the supremum is taken over all finite partitions of unity $1=\sum_ix_i$ in $M$, and $E_P$ and $E_Q$ are the $\mathrm{tr}$-preserving conditional expectations on $P$ and $Q$, respectively.
If $M$ is a type $II_1$ factor, we often suppress $\mathrm{tr}$ in the notation for the relative entropy, as the trace is uniquely determined in this case.
\end{definition}
Below we see that the Pimsner-Popa probabilistic constant is closely related with the relative entropy. This can be thought of as a generalization of \Cref{pimsnerpopa1} due to Pimsner and Popa.
First we need a lemma.
\begin{lemma}
\label{liu}
Let $(N,P,Q,M)$ be an irreducible quadruple such that $[M:N]<\infty$. Then,
$$\mathrm{tr}\bigg(\eta\big(E^M_P(e^N_{Q_{-1}})\big)\bigg)=-\frac{1}{[Q:N]}\log\big(\lambda(Q,P)\big).$$
\end{lemma}
\begin{proof}
By \Cref{downward},
\begin{equation}\label{e1}
\mathrm{tr}\bigg(\eta\big(E^M_P(e^N_{Q_{-1}})\big)\bigg)=\mathrm{tr}\bigg(\eta\big(\frac{1}{[P_{-1}:N_{-1}]}p(Q_{-1},P_{-1})\big)\bigg).
\end{equation}
Now, using \Cref{bdlr} we see that $\frac{1}{[M:N]\mathrm{tr}(e^N_{P_{-1}}e^N_{Q_{-1}})}p(Q_{-1},P_{-1})$ is a projection. Denote this projection by $f$. By \Cref{fact},
$$f=\frac{1}{[M:P][M:Q]\mathrm{tr}(e_Pe_Q)}p(Q_{-1},P_{-1}).$$
Since $\mathrm{tr}\big(p(Q_{-1},P_{-1})\big)=\frac{[Q_{-1}:N_{-1}]}{[N:P_{-1}]}=\frac{[M:Q]}{[P:N]}$, it follows that $\mathrm{tr}(f)= \frac{1}{[M:N]\mathrm{tr}(e_Pe_Q)}.$
Since, $[P_{-1}:N_{-1}]=[M:P]$, \Cref{e1} implies that
$$\mathrm{tr}\bigg(\eta\big(E^M_P(e^N_{Q_{-1}})\big)\bigg)= \mathrm{tr}\bigg(\eta\big(\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_Q)}f\big)\bigg).$$
It follows (see \cite{JLW}) that if $\alpha$ is a scalar then $\eta(\alpha f)=\eta(\alpha) f.$ Therefore,
$$\mathrm{tr}\bigg(\eta\big(E^M_P(e^N_{Q_{-1}})\big)\bigg)=\mathrm{tr}\bigg(\eta\big(\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_Q)}\big)f\bigg)=\frac{1}{[M:N]\mathrm{tr}(e_Pe_Q)}\eta\big(\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_Q)}\big).$$
In other words, $\mathrm{tr}\bigg(\eta\big(E^M_P(e^N_{Q_{-1}})\big)\bigg)=-\frac{1}{[Q:N]}\log\big(\frac{\mathrm{tr}(e_Pe_Q)}{\mathrm{tr}(e_Q)}\big).$ The proof is now complete once we apply \Cref{imp1}.
\end{proof}
\begin{theorem}\label{main}
Let $(N,P,Q,M)$ be an irreducible quadruple such that $[M:N]<\infty$. Then,
$H(Q|P)=-\log\big(\lambda(Q,P)\big).$
\end{theorem}
\begin{proof}
As observed in \cite{O}, we easily obtain $H(Q|P)\leq -\log\big(\lambda(Q,P)\big)$; indeed, exactly the same proof as in \cite{PP} works.
We prove the non-trivial inequality. The proof is inspired by \cite{PP} (see also \cite{NS}).
Note, $e^N_{Q_{-1}}$ is a projection in $Q$ with $\mathrm{tr}(e^N_{Q_{-1}})=\frac{1}{[Q:N]}$. Put $x=e^N_{Q_{-1}}-\frac{1}{[Q:N]}1.$ Then,
$\mathrm{tr}(x)=0.$ Following \cite{PP}, let
$$K_x=\overline{\text{conv}\{vxv^*:v\in \mathcal{U}(N)\}}.$$ A similar calculation as in \cite{PP} shows that $0\in K_x$ and hence for any fixed $\epsilon>0$ there are unitaries
$v_1,\cdots, v_n \in N$ such that
$$\big\lVert \sum_i \frac{1}{n} v_ixv^*_i\big\rVert_2<{\epsilon}^2\frac{1}{[Q:N]}.$$ Therefore, we see that
$$\big\lVert \sum_i \frac{[Q:N]}{n} v_ie^N_{Q_{-1}}v^*_i-1\big\rVert_2<{\epsilon}^2.$$
Put, $y=\frac{[Q:N]}{n}\sum_iv_ie^N_{Q_{-1}}v^*_i$. Then $$\lVert y-1\rVert_2<\epsilon^2.$$
Let $p$ be the spectral projection of $y$ corresponding to the interval $[0,1+\epsilon].$ Put
$$x_i= \frac{[Q:N]}{(1+\epsilon)n} v_ie^N_{Q_{-1}}v^*_i\wedge p.$$
Then $\sum_i x_i\leq \frac{[Q:N]}{(1+\epsilon)n}\sum_i pv_ie^N_{Q_{-1}}v^*_ip\leq \frac{1}{(1+\epsilon)}yp\leq 1.$
It follows immediately from the definition that
$$H(Q|P)\geq \sum_i \mathrm{tr}\big(\eta(E_P(x_i))-\eta(E_Q(x_i))\big)\geq \frac{[Q:N]}{(1+\epsilon)n}\sum_i \mathrm{tr}\big(\eta(E_P(v_ie^N_{Q_{-1}}v^*_i\wedge p))\big)$$
Now,
$$\mathrm{tr}\bigg(\eta\big(E_P(v_ie^N_{Q_{-1}}v^*_i\wedge p)\big)\bigg)\geq \mathrm{tr}\bigg(\eta\big(E_P(v_ie^N_{Q_{-1}}v^*_i)\big)\bigg)-\mathrm{tr}\bigg(\eta\big(E_P(v_ie^N_{Q_{-1}}v^*_i)-E_P(v_ie^N_{Q_{-1}}v^*_i\wedge p)\big)\bigg).$$
By concavity of $\eta$, it follows that $$\mathrm{tr}\bigg(\eta\big(E_P(v_ie^N_{Q_{-1}}v^*_i)\big)\bigg)=\mathrm{tr}\bigg(\eta\big(v_iE_P(e^N_{Q_{-1}})v^*_i\big)\bigg) \geq \mathrm{tr}\bigg(v_i\eta\big(E_P(e^N_{Q_{-1}})\big)v^*_i\bigg)=\mathrm{tr}\bigg(\eta\big(E_P(e^N_{Q_{-1}})\big)\bigg).$$
Similar calculations as in the proof of \cite[Theorem 10.2.1]{NS} yield
$$\mathrm{tr}\bigg(\eta\big(E_P(v_ie^N_{Q_{-1}}v^*_i)-E_P(v_ie^N_{Q_{-1}}v^*_i\wedge p)\big)\bigg)\leq \eta(\epsilon^2).$$
Hence the following inequalities hold true:
\begin{align*}
H(Q|P)\geq & \frac{[Q:N]}{(1+\epsilon)n}\sum_i \mathrm{tr}\bigg(\eta\big(E_P(e^N_{Q_{-1}})\big)\bigg)-\frac{[Q:N]}{(1+\epsilon)} \eta(\epsilon^2)\\
\geq & -\frac{[Q:N]}{(1+\epsilon)}\frac{1}{[Q:N]}\log(\lambda(Q,P))-\frac{[Q:N]}{(1+\epsilon)} \eta(\epsilon^2)\qquad [\text{using \Cref{liu}}]\\
\geq & -\frac{1}{(1+\epsilon)}\log(\lambda(Q,P))-\frac{[Q:N]}{(1+\epsilon)} \eta(\epsilon^2).
\end{align*}
Therefore, letting $\epsilon\rightarrow 0$ we conclude that
$$H(Q|P)\geq -\log\big(\lambda(Q,P)\big).$$
This completes the proof.
\end{proof}
We immediately obtain the following (possibly useful) formula for the relative entropy.
\begin{corollary}\label{imp2}
Let $(N,P,Q,M)$ be an irreducible quadruple such that $[M:N]<\infty$. Then,
$$H(Q|P)= \log\big(\mathrm{tr}(e_Q)\big)-\log\big(\mathrm{tr}(e_Pe_Q)\big).$$
\end{corollary}
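For instance, in the crossed-product quadruple of \Cref{ex-1} this formula evaluates to a purely group-theoretic quantity:
\begin{equation*}
H(Q|P)= \log\frac{|K|}{|G|}-\log\frac{|H\cap K|}{|G|}=\log\frac{|K|}{|H\cap K|}.
\end{equation*}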
\begin{corollary}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. Then, $H(Q_1|P_1)= H(P|Q)$ and
$H(P_1|Q_1)=H(Q|P).$
\end{corollary}
\begin{proof}
By \Cref{main} we obtain $H(P_1|Q_1)=-\log\big(\lambda(P_1,Q_1)\big).$ Using \Cref{dual} we get
$$H(P_1|Q_1)=-\log\big(\lambda(Q,P)\big).$$ Interchanging $P$ and $Q$ in the above equation we prove the other implication. This completes the proof.
\end{proof}
Similarly, applying \Cref{downwarddual} we get the following.
\begin{corollary}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. Then, $H(Q_{-1}|P_{-1})= H(P|Q)$ and
$H(P_{-1}|Q_{-1})=H(Q|P).$
\end{corollary}
The following consequence is obvious once we apply \Cref{main} and \Cref{prop}.
\begin{corollary}\label{prop2}
Let $(N,P,Q,M)$ be a quadruple of type $II_1$ factors with $N^{\prime}\cap M=\mathbb{C}$ and $[M:N]<\infty$. If $[P:N]=2$ then $H(P|Q)$ is either $0$ or $\log 2.$
\end{corollary}
\section{Acknowledgement}
I would like to thank Zhengwei Liu for various useful discussions and pointing out an error in an earlier version.
\section{Introduction}
In recent years, error correction techniques~\citep{anantaram2018repairing,d2016automatic,liao2020improving,mani2020asr,shivakumar2019learning} have been widely adopted to refine the output sentences from an ASR model for further WER reduction. Error correction, a typical sequence-to-sequence task, takes the sentence generated by an ASR model as the source sequence and the ground-truth sentence as the target sequence, and aims to correct the errors in the source sequence. Previous works on ASR error correction~\citep{liao2020improving,mani2020asr} usually adopt an encoder-decoder based autoregressive generation model. While achieving good WER reduction, autoregressive models suffer from slow inference speed, and do not satisfy the latency requirements of online ASR services. For example, the latency of our internal product ASR system is about 500ms for an utterance on a single CPU, but the latency of the autoregressive correction model alone is about 660ms, which is even larger than that of the original ASR system and unaffordable for online deployment.
Non-autoregressive (NAR) models can speed up sequence generation by generating a target sequence in parallel, and have attracted much research attention, especially in neural machine translation (NMT)~\citep{ghazvininejad2019mask,gu2018non,gu2019levenshtein,wang2018semi}. Unfortunately, direct application of NAR models designed for NMT to ASR error correction leads to poor performance. According to our experiments, using a popular NAR model from NMT~\citep{gu2019levenshtein} for error correction even increases WER, i.e., the correction output is worse than the original ASR output. Different from NMT, where almost all input tokens need to be modified (i.e., translated to another language), the modifications in ASR correction are much fewer but more difficult. For example, if an ASR model achieves 10\% WER, only about 10\% of the input tokens of the correction model need to be modified, and these tokens are usually difficult to identify and correct since they have already been mistaken by the ASR model. Thus, we need to take the characteristics of ASR outputs into consideration and carefully design NAR models for ASR error correction.
In ASR error correction, the source and target tokens are aligned monotonically (unlike in neural machine translation, where shuffle errors occur), and ASR accuracy is usually measured by WER, which is based on the edit distance.
Edit distance provides the edit and alignment information such as insertion, deletion and substitution on the source sentence (the output of an ASR model) in order to match the target (ground-truth) sentence, which can serve as precise guidance for the NAR correction model. Based on these observations, in this paper, we propose FastCorrect, a novel NAR error correction model that leverages and benefits from edit alignment:
\begin{itemize}[leftmargin=*]
\item In training, FastCorrect first obtains the operation path (including insertion, deletion and substitution) through which the source sentence can be modified to target sentence by calculating the edit distance, and then extracts the token-level alignment that indicates how many target tokens correspond to each source token after the insertion, deletion and substitution operations (i.e., 0 means deletion, 1 means unchanged or substitution, $\geq$2 means insertion). The token-level alignments (token numbers corresponding to each source token) are used to train a length predictor and to adjust the source tokens to match the length of the target sentence for parallel generation.
\item In inference, we cannot get token alignments since the ground-truth sentence is not available. We use the length predictor to predict the target token number for each source token and use the predicted number to adjust the source tokens, which are then fed to the decoder for target sequence generation. With this precise edit alignment, FastCorrect can correct the ASR errors more effectively, using the length predictor to locate which source tokens need to be edited/corrected and how many tokens they will be corrected to, and then using the decoder to correct the tokens correspondingly. A minimal sketch of this length-based adjustment is given after this list.
\end{itemize}
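To make the length-based adjustment mentioned above concrete, the following minimal Python sketch (ours, not necessarily the exact operator used in FastCorrect) expands each source token by its aligned or predicted target token number, so that the adjusted sequence already has the target length.
\begin{verbatim}
def adjust_source(tokens, durations):
    # durations: aligned/predicted target token number per source token;
    # 0 = delete, 1 = keep (identity or substitution), k >= 2 = expand
    out = []
    for tok, k in zip(tokens, durations):
        out.extend([tok] * k)   # duplicate the token into k decoder slots
    return out

# source "B B D E F" with token-level alignment (1, 2, 1, 0, 1)
print(adjust_source(list("BBDEF"), [1, 2, 1, 0, 1]))
# ['B', 'B', 'B', 'D', 'F'] -> decoder input of target length 5
\end{verbatim}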
Since current ASR models have already achieved high accuracy, there might not be many errors in the ASR outputs to train a correction model, even if we have large-scale datasets for ASR model training. To overcome this limitation, we use crawled text data to construct a pseudo correction dataset by randomly deleting, inserting and substituting words in the text data. When substituting a word, we use a homophone dictionary, considering that ASR substitution errors mostly come from homophones. These randomly edited sentences and their original sentences compose the pseudo sentence pairs for correction model training. In this way, we first pre-train FastCorrect on the large-scale pseudo correction dataset and then fine-tune the pre-trained model on the limited ASR correction dataset.
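A minimal sketch of this pseudo-pair construction is given below (illustrative only: the corruption probabilities and the homophone table are placeholders, not the configuration used in our experiments).
\begin{verbatim}
import random

HOMOPHONES = {"there": ["their"], "to": ["two", "too"], "see": ["sea"]}

def corrupt(tokens, p_del=0.05, p_ins=0.05, p_sub=0.05):
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_del:
            continue                                    # deletion error
        if r < p_del + p_sub and tok in HOMOPHONES:
            out.append(random.choice(HOMOPHONES[tok]))  # homophone substitution
        else:
            out.append(tok)
        if random.random() < p_ins:
            out.append(random.choice(tokens))           # spurious insertion
    return out

target = "we will go to the sea side".split()
source = corrupt(target)    # (source, target) forms one pseudo training pair
\end{verbatim}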
The contributions of this work are as follows:
\begin{itemize}[leftmargin=*]
\item To our knowledge, we are the first to propose NAR error correction for ASR, which greatly reduces the inference latency (up to 9$\times$) compared with its autoregressive counterpart while achieving nearly comparable accuracy. Our method also outperforms the popular NAR models adopted in machine translation and text editing by a large margin.
\item Inspired by the distinctive error patterns and correction operations (i.e., insertion, deletion and substitution) in ASR, we leverage edit alignments between the output text from ASR models and the ground-truth text to guide the training of NAR error correction, which is critical to FastCorrect.
\end{itemize}
\section{Background}
\label{gen_inst}
\paragraph{Error Correction}
In the field of natural language processing, error correction aims to correct the errors in the sentences generated by another system, such as automatic speech recognition~\citep{mani2020asr, ringger2001error,shivakumar2019learning}, neural machine translation~\citep{song2020neural} and optical character recognition~\cite{mokhtar2018ocr}. The error correction models for ASR can be divided into two categories based on whether the model can be trained in an end-to-end manner. Based on the method of statistical machine translation, \citet{cucu2013stat} performed error correction for an ASR system. \citet{d2016automatic} proposed to use a phrase-based machine translation system to serve as a correction model for ASR. \citet{anantaram2018repairing} used a four-step method to repair the ASR model output by ontology learning. With increasing amounts of training data, end-to-end correction models have become more accurate and popular. A language model was trained for ASR correction in \citet{tanaka2018neural}, which could exploit the long-term context and choose better results among different ASR output candidates. \citet{mani2020asr} utilized a Transformer-based model to train an ASR correction model in an autoregressive manner. \citet{liao2020improving} further incorporated the MASS \cite{song2019mass} pre-training into ASR correction. However, all these end-to-end correction models are autoregressive and unsuitable for online deployment due to large latency, hindering the industrial application of ASR correction.
Considering that shuffle errors do not exist in ASR correction, we propose a novel method to align the source sentences and target sentences towards a global optimum based on edit distance, which can not only keep as many matched tokens in the alignment as possible, but also detect the substitution, deletion and insertion errors during alignment.
\paragraph{Non-autoregressive Models}
NAR generation, which aims to speed up the inference of autoregressive models while achieving minimal accuracy drop, has been a popular research topic in recent years \citep{guo2020fine,liu2020task,ren2020study}. \citet{gu2018non, ma2019flowseq, shu2020latent} approached this problem with a set of latent variables. \citet{shao2020minimizing,ghazvininejad2020aligned,li2019hint} developed other alternative loss functions to help the model capture target-side sequential dependencies. \citet{wang2018semi,stern2018blockwise} proposed partially parallel decoding to output multiple tokens at each decoding step. \citet{stern2019insertion,gu2019levenshtein} proposed to generate target tokens in a tree-based manner, and used dynamic insertion/deletion to iteratively refine the generated sequences based on previous predictions. FELIX \cite{mallinson-etal-2020-felix} performed NAR text editing by aligning the source sentence with the target sentence greedily.
However, as shown in Section \ref{subsec:acc_lan_result}, directly using existing non-autoregressive models such as \citet{gu2019levenshtein} and \citet{mallinson-etal-2020-felix} cannot achieve satisfactory results on ASR error correction. In this paper, based on the characteristics of ASR-recognized text, we develop FastCorrect, which
builds edit alignment between the source sentences and target sentences to guide the error correction.
\section{FastCorrect}
FastCorrect leverages NAR generation with edit alignment to speed up the inference of the autoregressive correction model. In FastCorrect, we first calculate the edit distance between the recognized text (source sentence) and the ground-truth text (target sentence). By analyzing the insertion, deletion and substitution operation in the edit distance, we can obtain the number of target tokens that correspond to each source token after edition (i.e., 0 means deletion, 1 means unchanged or substitution, $\geq$2 means insertion). FastCorrect adopts an NAR encoder-decoder structure with a length predictor to bridge the length mismatch between the encoder (source sentence) and decoder (target sentence).
The obtained number of target tokens is used to train the length predictor to predict the length of each source token after correction, and to adjust each source token, where the adjusted source tokens are fed into the decoder for parallel generation.
In the following subsections, we introduce the edit alignment, model structure and pre-training strategy in FastCorrect.
\begin{figure*}
\centering
\includegraphics[width=1.\textwidth]{edit_distance_full_detail_color_final.PNG}
\caption{Illustration of the edit alignment between a source sentence ``B B D E F'' and a target sentence ``A B C D F''. The left part is the edit distance matrix calculated in a recursive manner. For example, the distance 2 in row D and column B means that the edit distance between source prefix sentence ``B B D'' and target prefix sentence ``A B'' is 2. The middle part shows all the possible edit paths with the minimum edit distance. Ø stands for an empty token. After filtering these paths with lower match scores, we can get all the possible token-level edit alignments, as shown in the right part. The alignment with the highest frequency score is selected as the final edit alignment to guide the error correction.
}
\label{fig:edit_dis}
\vspace{-5mm}
\end{figure*}
\subsection{Edit Alignment}
As shown in Figure~\ref{fig:edit_dis}, the edit alignment between each token in source and target sentences can be obtained through two steps: calculating the edit paths with minimum edit distance (the left and middle sub-figures), and choosing edit alignment with the highest n-gram frequency (the right sub-figure). In the next, we introduce each step in detail.
\paragraph{Calculating Edit Path }
Edit distance measures the dissimilarity of two sentences by counting the minimum number of edit operations required to transform the source sentence into the target sentence\footnote{Standard edit distance usually transforms the target to the source. We change the order to transform the source to target to align with our scenario.}. The valid edit operations include token insertion, token deletion and token substitution.
Given a source sentence $S = (s_1, s_2, ..., s_M)$ and a target sentence $T = (t_1, t_2, ...,t_N)$, where $M$ and $N$ are the lengths of the source and target sentences, the edit distance between $S$ and $T$ can be obtained by recursively calculating the edit distances of their prefixes:
\begin{equation}
D(i, j) =\ \min(D(i-1, j) + 1, D(i, j-1) + 1, D(i-1, j-1)+\mathbbm{1}(s_i \neq t_j) ).
\end{equation}
In the above equation, $D(i, j)$ is the edit distance between the source prefix $(s_1, s_2, ...,s_i)$ and the target prefix $(t_1, t_2, ...,t_j)$, and $\mathbbm{1}(\cdot)$ is the indicator function whose output is $1$ when the condition is true and $0$ otherwise.
The boundary conditions are $D(i, 0) = i$ and $D(0, j) = j$.
The leftmost part of Figure \ref{fig:edit_dis} shows an example of the alignment between the source sentence ``B B D E F'' and the target sentence ``A B C D F''. By recursively calculating the edit distance, we can obtain all the possible edit paths that achieve the minimum edit distance. In this case, there are 3 such paths with minimum edit distance 3, shown as paths $a$, $b$ and $c$ in the middle part of the figure.
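To make the recursion concrete, the following is a minimal Python sketch of the edit-distance matrix and of the enumeration of all minimum-distance edit paths; it is an illustration under our own naming, not the exact implementation used in FastCorrect.
\begin{verbatim}
def edit_distance_matrix(src, tgt):
    # D[i][j]: edit distance between src[:i] and tgt[:j], as in Eq. (1).
    M, N = len(src), len(tgt)
    D = [[0] * (N + 1) for _ in range(M + 1)]
    for i in range(M + 1):
        D[i][0] = i                      # boundary: delete all source tokens
    for j in range(N + 1):
        D[0][j] = j                      # boundary: insert all target tokens
    for i in range(1, M + 1):
        for j in range(1, N + 1):
            sub = 0 if src[i - 1] == tgt[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # substitution / identity
    return D

def all_edit_paths(D, src, tgt, i=None, j=None):
    # Backtrack every operation sequence that attains the minimum distance.
    if i is None:
        i, j = len(src), len(tgt)
    if i == 0 and j == 0:
        return [[]]
    paths = []
    if i > 0 and D[i][j] == D[i - 1][j] + 1:
        paths += [p + [("del", src[i - 1])]
                  for p in all_edit_paths(D, src, tgt, i - 1, j)]
    if j > 0 and D[i][j] == D[i][j - 1] + 1:
        paths += [p + [("ins", tgt[j - 1])]
                  for p in all_edit_paths(D, src, tgt, i, j - 1)]
    if i > 0 and j > 0:
        sub = 0 if src[i - 1] == tgt[j - 1] else 1
        if D[i][j] == D[i - 1][j - 1] + sub:
            op = "keep" if sub == 0 else "sub"
            paths += [p + [(op, src[i - 1], tgt[j - 1])]
                      for p in all_edit_paths(D, src, tgt, i - 1, j - 1)]
    return paths

src, tgt = "B B D E F".split(), "A B C D F".split()
D = edit_distance_matrix(src, tgt)
print(D[-1][-1], len(all_edit_paths(D, src, tgt)))  # 3 3, as in Figure 1
\end{verbatim}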
\paragraph{Choosing Edit Alignment}
We now describe how to choose the edit alignment between the tokens in the source and target sentences based on the edit paths.
First, for the edit paths obtained from the above procedure, we calculate the match score of each path and keep only the paths with the highest match score. We define the match score of an edit path as the number of tokens that are not changed in this path. A path with a higher match score is preferred, since more tokens in the source sentence are kept. As shown in the middle part of Figure \ref{fig:edit_dis}, the match scores of paths $a$ and $b$ are both 3 because they each have 3 unchanged tokens (``B'', ``D'' and ``F''), while the match score of path $c$ is only 2, so it is filtered out\footnote{Intuitively, path $c$ does not make sense because it first changes token ``D'' to ``C'' and then changes ``E'' to ``D''; clearly ``D'' should not be changed.}.
Second, we derive the edit alignment set $E$ (where $e \in E$ represents a possible edit alignment for all tokens between the source and target sentences) from the remaining edit paths by the following rules: 1) For a deletion, the source token is aligned with an empty target token Ø. 2) For a substitution or identity, we align the source token with the substituted or unchanged token in the target sentence, respectively. 3) For an insertion, the inserted target token has no source token to align with (e.g., token ``C'' in path $b$), so it is aligned with its left or right neighboring source token, resulting in different edit alignments (e.g., path $b$ generates two alignments, $b1$ and $b2$, by aligning target token ``C'' to source token ``B'' (left) or ``D'' (right), respectively).
Third, we select the final edit alignment $e$ from the edit alignment set $E$ obtained in the second step, according to the frequency of the n-grams formed by the aligned target tokens. We first build an n-gram frequency table $G$ that records the number of occurrences of every n-gram in the training corpus, and then calculate the frequency score $Freq_{score}(e)$ of each edit alignment $e \in E$ for a source sentence $S$:
\begin{equation}
Freq_{score}(e) = \sum_{i=1}^M Freq(e[s_i]); \ \ \ \ Freq(x) = \begin{cases}
G[x], & len(x) > 1 \\
0, & len(x) \leq 1
\end{cases},
\end{equation}
where $e[s_i]$ denotes the target tokens aligned with source token $s_i$ under alignment $e$, $M$ is the number of tokens in the source sentence, $len(x)$ is the number of words in $x$, and $G[x]$ returns the frequency of $x$ in the n-gram table\footnote{In the rightmost part of Figure \ref{fig:edit_dis}, $E$ = $\{a, b1, b2\}$, $b2[\text{D}]$ = CD and $G[\text{CD}]$=20.}. The frequency of every 1-gram is set to 0 because we are only interested in differentiating token combinations. We choose the alignment $e \in E$ with the largest frequency score as the final edit alignment; in this way, we favor alignments whose source tokens are aligned with more frequent n-grams of target tokens. Take alignment $a$ in the rightmost part of Figure \ref{fig:edit_dis} as an example: only the first source token ``B'' is aligned with more than one target token (i.e., ``AB''), so the frequency score of alignment $a$ equals the frequency of ``AB'' in the n-gram table $G$ (i.e., 90), which is larger than that of ``BC'' in alignment $b1$ and of ``CD'' in alignment $b2$. As a result, alignment $a$ is selected as the final edit alignment, and the numbers of target tokens aligned with the source tokens are ``2 1 1 0 1'', respectively.
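The selection rule of Eq.~(2) can be sketched in a few lines of Python. The n-gram counts below are toy values consistent with Figure~\ref{fig:edit_dis}: only $G[\text{AB}]=90$ and $G[\text{CD}]=20$ are given there, so the count for ``BC'' is made up for illustration.
\begin{verbatim}
ngram_table = {"A B": 90, "B C": 50, "C D": 20}   # "B C" count is illustrative

def freq_score(alignment, ngram_table):
    # alignment: per source token, the tuple of aligned target tokens.
    score = 0
    for tgt_tokens in alignment:
        if len(tgt_tokens) > 1:          # 1-grams and empty tokens count as 0
            score += ngram_table.get(" ".join(tgt_tokens), 0)
    return score

# Candidate alignments from the right part of Figure 1 (source "B B D E F"):
candidates = {
    "a":  [("A", "B"), ("C",), ("D",), (), ("F",)],   # lengths 2 1 1 0 1
    "b1": [("A",), ("B", "C"), ("D",), (), ("F",)],   # lengths 1 2 1 0 1
    "b2": [("A",), ("B",), ("C", "D"), (), ("F",)],   # lengths 1 1 2 0 1
}
best = max(candidates, key=lambda k: freq_score(candidates[k], ngram_table))
print(best, [len(t) for t in candidates[best]])       # a [2, 1, 1, 0, 1]
\end{verbatim}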
\subsection{Model Structure}
We use the Transformer \cite{vaswani2017attention} as the basic model architecture of FastCorrect, as shown in Figure~\ref{fig:nat_model}. The encoder takes the source sentence as input and outputs a hidden sequence, which is 1) fed into a length predictor to predict the number of target tokens corresponding to each source token (i.e., the edit alignment obtained in the previous subsection), and 2) used by the decoder through encoder-decoder attention. The detailed architecture of the length predictor is shown in the right sub-figure of Figure~\ref{fig:nat_model}.
Thanks to the edit alignment and length predictor designs in FastCorrect, deletion and insertion errors are detected when the length predictor outputs 0 or more than 1 for the corresponding source token. For substitution errors, the predicted length is 1, the same as for unchanged/correct source tokens.
In this case, the decoder can still differentiate a substitution error from an unchanged token, because the source token differs from the target token. These designs reduce the difficulty of error correction: the length predictor precisely detects the error patterns, and the decoder focuses on the modification.
\begin{figure}
\centering
\includegraphics[width=.90\textwidth]{NAT_model_detail_final.PNG}
\caption{Model structure of FastCorrect.}
\label{fig:nat_model}
\vspace{-5mm}
\end{figure}
\subsection{Pre-training}
Since the ASR model is highly accurate and most words in a recognized sentence are correct, the effective training cases for correction models are limited.
To overcome this problem, we construct large-scale pseudo paired data to pre-train the FastCorrect model, and then fine-tune it on the original limited paired data. We crawl text data and construct a pseudo correction dataset by randomly deleting, inserting and substituting words. To make the simulated errors as close as possible to real ASR errors, we take two measures: 1) When substituting, a word is replaced with another word of similar pronunciation drawn from a homophone dictionary, since substitution errors in ASR usually come from homophones. 2) The probability of modifying a word is set to the word error rate of the ASR model, and the probability distribution over deletion, insertion and substitution is set to the error distribution of the ASR model.
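A possible sketch of this noising procedure is given below; the homophone dictionary and the operation probabilities are placeholders to be replaced by the dictionary and the measured error statistics of a concrete ASR model.
\begin{verbatim}
import random

def add_noise(words, homophones, p_error=0.05, p_ops=(0.7, 0.15, 0.15)):
    # p_error: word error rate of the ASR model (placeholder value).
    # p_ops: probabilities of substitution/deletion/insertion, estimated
    # from the ASR model's error distribution (placeholder values).
    noisy = []
    for w in words:
        if random.random() >= p_error:
            noisy.append(w)              # most words stay unchanged
            continue
        op = random.choices(("sub", "del", "ins"), weights=p_ops)[0]
        if op == "sub":                  # replace with a similar-sounding word
            noisy.append(random.choice(homophones.get(w, [w])))
        elif op == "ins":                # keep the word and insert another one
            noisy.extend([w, random.choice(homophones.get(w, [w]))])
        # op == "del": drop the word entirely
    return noisy
\end{verbatim}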
\section{Experimental Setup}
\label{sec:exp_setup}
\subsection{Datasets and ASR Models}
\label{sub:datasets}
We conduct experiments on two datasets, the public AISHELL-1 dataset and an internal product dataset.
\paragraph{AISHELL-1} AISHELL-1 \cite{bu2017aishell} is an open-source Mandarin speech corpus with 178 hours of training data\footnote{https://openslr.org/33}. We use the ESPnet \cite{watanabe2018espnet} toolkit to train an ASR model on the AISHELL-1 dataset. Several techniques, such as the Conformer architecture \cite{gulati2020conformer}, SpecAugment \cite{park2019specaugment} and speed perturbation, are utilized to improve the performance of this ASR model, resulting in state-of-the-art WERs of 4.46 and 4.83 on the validation and test sets of AISHELL-1. The trained ASR model transcribes AISHELL-1 to generate the paired data for error correction.
\paragraph{Internal Dataset} Our internal dataset is a Mandarin speech corpus with 92K hours of training data. Our ASR model on the internal dataset is a hybrid model, where the acoustic model is a latency-controlled BLSTM \cite{zhang2016highway} with 6 layers and 1024 hidden units per layer, and the language model is a 5-gram model with 414 million n-grams trained on 436 billion tokens. The trained ASR model transcribes the internal dataset to generate the paired data for error correction.
\paragraph{Pseudo Data for Pre-training} We use 400M crawled sentences to construct the pseudo paired data for pre-training. Each word in an original sentence is noised with probability $p$, set to the word error rate of the ASR model. For a word to be noised, the probabilities of substitution, deletion and insertion are estimated from the transcription results of the corresponding ASR model described above.
For all the text data in the above three datasets, we learn a subword vocabulary of size 40K using SentencePiece \cite{kudo2018sentencepiece}.
\subsection{Model Configurations of FastCorrect and Baseline Systems}
\label{sub:model_config}
We use the default Transformer architecture in FastCorrect, which consists of a 6-layer encoder and a 6-layer decoder with hidden size $d_{model}=512$ and feed-forward hidden size $d_{ff}=1024$. Our length predictor consists of 5 layers of 1-D convolutional networks with ReLU activation and 2 linear layers that output a scalar, all with a hidden size of 512. Each convolutional layer is followed by layer normalization \cite{ba2016layer} and dropout; the kernel size of the convolutions is 3. FastCorrect is implemented on Fairseq \cite{ott2019fairseq}.
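For concreteness, the following PyTorch sketch matches the length-predictor configuration stated above; the padding scheme and the dropout rate are our assumptions, not taken from the configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class LengthPredictor(nn.Module):
    def __init__(self, hidden=512, kernel=3, n_conv=5, dropout=0.1):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2)
            for _ in range(n_conv))
        self.norms = nn.ModuleList(
            nn.LayerNorm(hidden) for _ in range(n_conv))
        self.dropout = nn.Dropout(dropout)
        self.fc1 = nn.Linear(hidden, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, enc_out):          # enc_out: (batch, src_len, hidden)
        x = enc_out
        for conv, norm in zip(self.convs, self.norms):
            x = conv(x.transpose(1, 2)).transpose(1, 2)  # convolve over time
            x = self.dropout(norm(torch.relu(x)))
        # One scalar per source token: its number of aligned target tokens.
        return self.fc2(torch.relu(self.fc1(x))).squeeze(-1)
\end{verbatim}
We now describe the baseline systems used in our experiments for comparison: two NAR models, LevT~\cite{gu2019levenshtein} and FELIX~\cite{mallinson-etal-2020-felix}, and a Transformer based autoregressive (AR) model.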
\paragraph{LevT} We compare FastCorrect with another NAR model, the Levenshtein Transformer (LevT) \cite{gu2019levenshtein}, which also predicts insertions and deletions of tokens in the decoder over multiple iterations. Different from LevT, which implicitly learns insertion and deletion to refine the target sentence, FastCorrect explicitly leverages the edit distance to extract the edit alignment (insertion, deletion and substitution) between the tokens in the source and target sentences, which can be more efficient and accurate. We train LevT for error correction with the default hyper-parameters in Fairseq\footnote{https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive\_translation}.
\paragraph{FELIX} FELIX is an NAR model for text editing~\cite{mallinson-etal-2020-felix}. Different from FastCorrect, which utilizes edit distance for accurate alignment, its alignment algorithm greedily matches tokens between the source and target sentences, which 1) can be incorrect when the same token appears many times, and 2) cannot perform substitutions directly when editing text (i.e., a substitution requires a deletion followed by an insertion). We implement FELIX based on the official code\footnote{https://github.com/google-research/google-research/tree/master/felix}.
\paragraph{AR Model} For the autoregressive model, we follow the standard Transformer encoder-decoder model adopted in machine translation while keeping the number of parameters comparable to FastCorrect. We use the standard settings and hyper-parameters for AR model training in Fairseq\footnote{https://github.com/pytorch/fairseq/tree/master/examples/translation}.
\subsection{Training and Inference}
\label{sub:training_details}
We train all correction models on 4 NVIDIA Tesla V100 GPUs with a batch size of 12000 tokens. We follow the default parameters of the Adam optimizer \cite{kingma2014adam} and the learning rate schedule in \citet{vaswani2017attention}. All the correction models are first pre-trained on the pseudo data corpus for 30 epochs and then fine-tuned on AISHELL-1 or the internal dataset for 20 epochs. To simulate the industrial scenario, we test the inference speed of the correction models in three conditions: 1) NVIDIA P40 GPU, 2) 4-core CPU and 3) single-core CPU, where the CPU is ``Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz''. The test batch size is set to 1 sentence to match the online serving environment.
\begin{table*}[t]
\small
\caption{The correction accuracy and inference latency of different correction models. We report the word error rate (WER), word error rate reduction (WERR) and latency of the autoregressive (AR) and non-autoregressive (NAR) models (FastCorrect, LevT and FELIX). ``MIter'' is a hyper-parameter of LevT controlling the maximum number of decoding iterations; the actual number of iterations can be smaller due to early stopping.}
\label{tab:main_result}
\begin{center}
\begin{tabular}{l|l|l|l|l|ccc}
\toprule
\multirow{2}{*}{AISHELL-1} & \multicolumn{2}{c|}{Test Set} & \multicolumn{2}{c|}{Dev Set} & \multicolumn{3}{c}{Latency (ms/sent) on Test Set}
\tabularnewline\cmidrule{2-8}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{WER} & \multicolumn{1}{c|}{WERR} & \multicolumn{1}{c|}{WER} & \multicolumn{1}{c|}{WERR} & GPU & CPU*4 & CPU \\\midrule
No correction & 4.83 & - & 4.46 & - & - & - & - \\
AR model & 4.08 & 15.53 & 3.80 & 14.80 & 149.5 \small{(1$\times$)} & 248.9 \small{(1$\times$)} & 531.3 \small{(1$\times$)} \\\midrule
LevT (MIter=1) \cite{gu2019levenshtein} & 4.73 & 2.07 & 4.37 & 2.02 & 54.0 \small{(2.8$\times$)} & 82.7 \small{(3.0$\times$)} & 158.1 \small{(3.4$\times$)} \\
LevT (MIter=3) \cite{gu2019levenshtein} & 4.74 & 1.86 & 4.38 & 1.79 & 60.5 \small{(2.5$\times$)} & 83.9 \small{(3.0$\times$)} & 161.6 \small{(3.3$\times$)} \\
FELIX \cite{mallinson-etal-2020-felix} & 4.63 & 4.14 & 4.26 & 4.48 & 23.8 \small{(6.3$\times$)} & 41.7 \small{(6.0$\times$)} & 85.7 \small{(6.2$\times$)} \\\midrule
FastCorrect & \textbf{4.16} & \textbf{13.87} & \textbf{3.89} & \textbf{13.3} & \textbf{21.2} \small{(7.1$\times$)} & \textbf{40.8} \small{(6.1$\times$)} & \textbf{82.3} \small{(6.5$\times$)} \\
\midrule
\midrule
\multirow{2}{*}{Internal Dataset} & \multicolumn{2}{c|}{Test Set} & \multicolumn{2}{c|}{Dev Set} & \multicolumn{3}{c}{Latency (ms/sent) on Test Set}
\tabularnewline\cmidrule{2-8}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{WER} & \multicolumn{1}{c|}{WERR} & \multicolumn{1}{c|}{WER} & \multicolumn{1}{c|}{WERR} & GPU & CPU*4 & CPU \\\midrule
No correction & 11.17 & - & 11.24 & - & - & - & - \\
AR model & 10.22 & 8.50 & 10.31 & 8.27 & 191.5 \small{(1$\times$)} & 336 \small{(1$\times$)} & 657.7 \small{(1$\times$)} \\\midrule
LevT (MIter=1) \cite{gu2019levenshtein} & 11.26 & -0.80 & 11.35 & -0.98 & 60.5 \small{(3.2$\times$)} & 102.6 \small{(3.3$\times$)} & 196.5 \small{(3.3$\times$)} \\
LevT (MIter=3) \cite{gu2019levenshtein} & 11.45 & -2.50 & 11.56 & -2.85 & 75.6 \small{(2.5$\times$)} & 118.9 \small{(2.8$\times$)} & 248.0 \small{(2.7$\times$)} \\
FELIX \cite{mallinson-etal-2020-felix} & 11.14 & 0.27 & 11.21 & 0.27 & 25.9 \small{(7.4$\times$)} & 43.0 \small{(7.8$\times$)} & 90.9 \small{(7.2$\times$)} \\
\midrule
FastCorrect & \textbf{10.27} & \textbf{8.06} & \textbf{10.35} & \textbf{7.92} & \textbf{21.5} \small{(8.9$\times$)} & \textbf{42.4} \small{(7.9$\times$)} & \textbf{88.6} \small{(7.4$\times$)} \\\bottomrule
\end{tabular}
\end{center}
\vspace{-3mm}
\end{table*}
\section{Results}
In this section, we first report the accuracy and latency of FastCorrect and compare it with the Transformer-based autoregressive model, LevT and FELIX. We then conduct ablation studies to verify the effectiveness of the designs in FastCorrect, including the edit alignment (length predictor) and pre-training. Finally, we provide further analyses comparing FastCorrect with the other methods.
\subsection{Accuracy and Latency of FastCorrect} \label{subsec:acc_lan_result}
We first report the accuracy and latency of the different error correction models on AISHELL-1 and the internal dataset in Table \ref{tab:main_result}. We have several observations: 1) The autoregressive (AR) correction model reduces the WER (measured by WERR) of the ASR model by 15.53\% and 8.50\% on the test sets of AISHELL-1 and the internal dataset, respectively. 2) LevT, a typical NAR model from NMT, achieves only minor WERR on AISHELL-1 and even increases the WER on the internal dataset. Meanwhile, LevT only speeds up the inference of the AR model by 2-3 times under GPU/CPU conditions. 3) FELIX achieves only 4.14\% WERR on AISHELL-1 and 0.27\% WERR on the internal dataset, much worse than FastCorrect, although the inference speedup is similar. 4) Our proposed FastCorrect speeds up the inference of the AR model by 6-9 times on the two datasets under GPU/CPU conditions and achieves 8-14\% WERR, nearly comparable with the AR correction model in accuracy. We further analyze the differences between FastCorrect, LevT and FELIX in Section \ref{subsec:ana}. These results demonstrate the effectiveness of FastCorrect in speeding up the inference of error correction while maintaining the correction accuracy.
\begin{wraptable}{r}{7.8cm}
\vspace{-6mm}
\caption{Ablation study of each design in FastCorrect.}
\label{tab:abla_study}
\begin{center}
\begin{tabular}{l|l|c}
\toprule
\multirow{2}{*}{Model} & \multicolumn{1}{c|}{Internal} & AISHELL-1
\tabularnewline
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Dataset} & Dataset \\\midrule
No correction & 11.17 & 4.83 \\ \midrule
AR model & 10.22 & 4.08 \\
- Pre-training & 10.26 & 16.01 \\
- Fine-tuning & 11.70 & 5.28 \\\midrule
FastCorrect & 10.27 & 4.16 \\
- Pre-training & 10.33 & 4.83 \\
- Fine-tuning & 11.74 & 5.19 \\
- Edit Alignment & 12.27 & 4.67 \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-4mm}
\end{wraptable}
\begin{table}[t]
\caption{Comparison of FastCorrect and the AR model with a deep encoder and a shallow decoder. ``AR 8-4'' denotes the AR model with an 8-layer encoder and a 4-layer decoder.}
\label{tab:comp_shallow}
\vspace{-2mm}
\begin{center}
\begin{tabular}{l|l|cc|l|cc}
\toprule
\multirow{3}{*}{Model} & \multicolumn{3}{c}{AISHELL-1} & \multicolumn{3}{|c}{Internal Dataset}
\tabularnewline\cmidrule{2-7}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{WER} & \multicolumn{2}{c}{Latency (ms/sent)} & \multicolumn{1}{|c|}{WER} & \multicolumn{2}{c}{Latency (ms/sent)}
\tabularnewline\cmidrule{3-4} \cmidrule{6-7}
\multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\%} & GPU & CPU & \multicolumn{1}{c|}{\%} & GPU & CPU \\\midrule
No Correction & 4.83 & - & - & 11.17 & - & - \\\midrule
AR 6-6 & 4.08 & 149.5 \small{(1$\times$)} & 531.3 \small{(1$\times$)} & 10.26 & 190.6 \small{(1$\times$)} & 648.3 \small{(1$\times$)} \\
AR 8-4 & 4.14 & 120.5 \small{(1.2$\times$)} & 427.6 \small{(1.2$\times$)} & 10.28 & 144.1 \small{(1.3$\times$)} & 542.0 \small{(1.2$\times$)} \\
AR 10-2 & 4.23 & 84.0 \small{(1.8$\times$)} & 317.6 \small{(1.5$\times$)} & 10.33 & 100.8 \small{(1.9$\times$)} & 431.2 \small{(1.5$\times$)} \\
AR 11-1 & 4.30 & 66.5 \small{(2.2$\times$)} & 281.0 \small{(1.7$\times$)} & 10.44 & 79.1 \small{(2.4$\times$)} & 372.3 \small{(1.7$\times$)} \\
\midrule
FastCorrect & 4.16 & \textbf{21.2} \small{(7.1$\times$)} & \textbf{82.3} \small{(6.5$\times$)} & 10.33 & \textbf{21.4} \small{(8.9$\times$)} & \textbf{86.8} \small{(7.5$\times$)} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-5mm}
\end{table}
\subsection{Ablation Study}
We conduct an ablation study to verify the importance of the designs in FastCorrect, including the edit alignment (length predictor) and pre-training. For the setting without edit alignment, we train a predictor to predict the length difference between the input and the target, according to which the input is adjusted softly and fed into the decoder \cite{guo2019non}. We show the results of each setting on the test sets of both datasets in Table \ref{tab:abla_study}. We have several observations: 1) Removing edit alignment causes a large WER increase, which shows that the edit alignment is critical for guiding the model to correct the errors in the recognized text. 2) Pre-training is effective when the data size in the fine-tuning stage is small: removing pre-training causes a large WER increase on the AISHELL-1 dataset, which is much smaller than the internal dataset, especially in the AR setting. 3) Fine-tuning is necessary to ensure WER reduction; otherwise, pre-training alone can result in a worse WER than the original ASR outputs. These observations verify the effectiveness of each design in FastCorrect.
\subsection{Comparison to AR Model with Shallow Decoder}
As proposed in \cite{he2019hard,kasai2020deep}, deep encoder and shallow decoder can reduce the latency of the AR model while maintaining the accuracy. We compare the accuracy and latency of FastCorrect to AR models with different combinations of encoder and decoder layers on the test set of both datasets in Table \ref{tab:comp_shallow}.
FastCorrect achieves a much larger speedup than the AR models with a deep encoder and shallow decoder while maintaining similar accuracy. More specifically, FastCorrect performs comparably to or better than the AR model with 10 encoder layers and 2 decoder layers, while its speedup is about 5 times higher.
\subsection{Analysis of FastCorrect, LevT and FELIX} \label{subsec:ana}
We analyze the error-detection and error-correction abilities of FastCorrect, LevT and FELIX to understand why FastCorrect outperforms the other two.
If a source sentence is edited (via insertion, deletion or substitution) by a correction model, the edit result can be correct (the same as the target sentence) or incorrect (different from the target sentence)\footnote{For example, for a source sentence ``ADC'' and a target sentence ``ABC'', the edit result is correct if the source sentence is edited to ``ABC'', and incorrect if it is edited to ``AEC''.}. Therefore, we calculate several metrics to characterize a correction model: 1) $P_{\text{edit}}$: among all the edited sentences, how many actually need to be edited. 2) $R_{\text{edit}}$: among all the erroneous sentences, how many are edited. 3) $P_{\text{right}}$: among all the edited sentences, how many are edited to the target (i.e., all errors removed). $P_{\text{edit}}$ and $R_{\text{edit}}$ measure the error-detection ability of a correction model, while $P_{\text{right}}$ measures its error-correction ability. The comparison of the AR model, LevT, FELIX and FastCorrect on these metrics is shown in Table \ref{tab:deep_ana}.
From Table \ref{tab:deep_ana}, we observe that, compared with FastCorrect, the error-correction ability of both LevT and FELIX is inferior, since $P_{right}$, the ratio of sentences whose errors are all corrected, is higher for FastCorrect by a large margin. Moreover, the error-detection ability of LevT is weaker than that of FastCorrect, especially on the internal dataset, where the $P_{edit}$ of LevT is only 74.0\%, meaning that 26\% of the sentences edited by LevT were already accurate, so these modifications only increase the WER. Thanks to the accurate edit alignment, FastCorrect splits error detection and error correction into different modules, using the length predictor to detect errors and the decoder to correct them. Thus, FastCorrect attains error-detection ability comparable to the AR model together with high error-correction ability, enabling it to nearly match the accuracy of the AR model and greatly outperform the two baselines.
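These sentence-level metrics are straightforward to compute from (source, model output, target) triples; a minimal sketch:
\begin{verbatim}
def detection_correction_metrics(triples):
    # triples: list of (source, hypothesis, target) sentence strings.
    edited = [(s, h, t) for s, h, t in triples if h != s]
    erroneous = [(s, h, t) for s, h, t in triples if s != t]
    p_edit = sum(s != t for s, _, t in edited) / max(len(edited), 1)
    r_edit = sum(h != s for s, h, _ in erroneous) / max(len(erroneous), 1)
    p_right = sum(h == t for _, h, t in edited) / max(len(edited), 1)
    return p_edit, r_edit, p_right
\end{verbatim}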
\begin{table*}[t]
\caption{Comparison of models on several metrics.
We report: $P_{edit}$, among all the edited sentences, how many sentences actually need to be edited; $R_{edit}$, among all the error sentences, how many sentences are edited; $P_{right}$, among all the edited sentences, how many sentences are edited to target (i.e., correct all errors). $P_{edit}$ and $R_{edit}$ reflect the error-detection ability and $P_{right}$ reflects the error-correction ability.
The word error rate reduction (WERR) of each model is also shown.}
\label{tab:deep_ana}
\begin{center}
\begin{tabular}{l|cccc|cccc}
\toprule
\multirow{2}{*}{Model} & \multicolumn{4}{c|}{Internal Dataset} & \multicolumn{4}{c}{AISHELL-1}
\tabularnewline\cmidrule{2-5}\cmidrule{6-9}
\multicolumn{1}{c|}{} & $P_{edit}$ & $R_{edit}$ & $P_{right}$ & WERR & $P_{edit}$ & $R_{edit}$ & $P_{right}$ & WERR \\\midrule
AR model & 94.3 & 31.0 & 18.9 & 8.50 & 97.2 & 47.4& 35.1 & 15.53 \\\midrule
LevT & 74.0 & \textbf{41.3} & 11.4 & -0.80 & 91.6 & 26.1 & 20.3 & 2.07 \\
FELIX & 93.6 & 19.9 & 10.1 & 0.27 & 96.5 & 33.8 & 22.8 & 4.14 \\
FastCorrect & \textbf{95.0} & 27.6 & \textbf{16.2} &\textbf{8.06} & \textbf{96.8} & \textbf{48.1} & \textbf{26.4} & \textbf{13.87} \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-4mm}
\end{table*}
\section{Conclusions}
\label{sec:conclusion}
In this paper, to reduce the inference latency while maintaining the accuracy of conventional autoregressive error correction models for ASR, we proposed FastCorrect, a non-autoregressive model that leverages the edit alignments (insertion, deletion and substitution) between the tokens in the source and target sentences to guide error correction. FastCorrect utilizes a length predictor for error detection (predicting the number of target tokens aligned to each source token, thereby detecting insertion, deletion and substitution/identity) and a decoder for error correction (modifying the erroneous tokens into the correct ones). Experiments on the public AISHELL-1 dataset and a large-scale internal dataset show that FastCorrect reduces the inference latency of an autoregressive model by 6-9 times while maintaining its accuracy, and outperforms previous NAR models proposed for machine translation and text editing, which verifies the effectiveness of FastCorrect. Although FastCorrect is designed for ASR, its principle of first detecting and then correcting errors should benefit general error correction. We will explore extensions of FastCorrect to other tasks such as grammatical error correction and text editing.
\bibliographystyle{plainnat}
\section{The proof of Stochastic Value Policy Gradient Theorem}
\label{app:derivation-of-svg}
Here we give the proof of the stochastic value policy gradient theorem.
\begin{proof}
\begin{equation*}
\begin{aligned}
&\frac{\mathrm{d}}{\mathrm{d}\theta_\pi} \rho(\theta_\pi)\\
=& (1 - \gamma)\int_{s} p_0(s) \int_{z} p(z) \frac{\mathrm{d}}{\mathrm{d}\theta_\pi} Q(s, G(s, z; \theta_\pi); \theta_\pi) \mathrm{d} z \mathrm{d}s \\
=& (1 - \gamma)\int_{s} p_0(s) \int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)\cdot \nabla_{a = G(s, z; \theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d}z \mathrm{d}s\\
&+ (1 - \gamma)\int_{s} p_0(s) \int_{z} p(z) \frac{\mathrm{d}}{\mathrm{d}\theta_\pi} Q(s, a; \theta_\pi) \big\vert_{a = G(s, z;\theta_\pi)} \mathrm{d} z \mathrm{d} s \\
=& (1 - \gamma)\int_{s} p_0(s) \int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)\cdot \nabla_{a = G(s, z; \theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d}z \mathrm{d}s\\
&+ (1 - \gamma) \int_{s} p_0(s) \int_{z} p(z) \frac{\mathrm{d}}{\mathrm{d}\theta_\pi}
\bigg[r(s, a) \\
& + \gamma \int_{s'} p(s' \vert s, a)\int_{z'}p(z') Q(s',G(s', z';\theta_\pi); \theta_\pi)\bigg]
\bigg\vert_{a=G(s, z;\theta_\pi)} \mathrm{d}z' \mathrm{d} s' \mathrm{d} z \mathrm{d} s\\
=& (1 - \gamma)\int_{s} p_0(s) \int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)\cdot \nabla_{a=G(s, z;\theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d}z \mathrm{d}s\\
&+ (1 - \gamma)\gamma\int_{s} p_0(s) \int_{z} p(z) \int_{s'}p(s'\vert s, a) \int_{z'} p(z') \\
&\cdot \frac{\mathrm{d}}{\mathrm{d}\theta_\pi}
Q(s',G(s', z';\theta_\pi); \theta_\pi)
\bigg\vert_{a=G(s, z;\theta_\pi)}\mathrm{d}z' \mathrm{d} s' \mathrm{d} z \mathrm{d} s\\
=& (1 - \gamma) \int_{s} p_0(s) \int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)\cdot \nabla_{a=G(s,z;\theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d} z \mathrm{d} s\\
&+ (1 - \gamma)\int_{s} \gamma p_1(s;\theta_\pi)\int_{z}p(z)\frac{\mathrm{d}}{\mathrm{d}\theta_\pi}
Q(s,G(s, z;\theta_\pi); \theta_\pi)\mathrm{d}z \mathrm{d} s\\
=& \quad \vdots \\
=& (1 - \gamma) \int_{s} \sum^{\infty}_{t=0}\gamma^t p_t(s;\theta_\pi)
\int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)
\cdot \nabla_{a = G(s, z;\theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d} z \mathrm{d} s\\
=& \int_{s} p_{\gamma}(s;\theta_\pi)
\int_{z} p(z) \nabla_{\theta_\pi} G(s, z; \theta_\pi)
\cdot \nabla_{a=G(s, z;\theta_\pi)} Q(s, a; \theta_\pi)\mathrm{d}z \mathrm{d}s\\
=& \mathbb{E}_{s \sim p_{\gamma}(\cdot;\theta_\pi), z \sim p(z)}
\bigg[\nabla_{\theta_\pi} G(s, z; \theta_\pi) \cdot \nabla_{a = G(s, z; \theta_\pi)} Q(s, a; \theta_\pi)\bigg].
\end{aligned}
\end{equation*}
\end{proof}
\newpage
\section{Pseudocode}
In this section, we provide the full details of GAC. GAC uses two techniques that are widely used in reinforcement learning: the target q-network~\cite{mnih2013playing}, which avoids the double sampling problem, and the double q-network~\cite{fujimoto2018addressing}, which avoids the overestimation problem.
Here is a brief explanation of the target q-network technique. For the critic loss,
\begin{equation}
\mathcal{L}(\theta_Q) = \mathbb{E}_{(s, a, r)\sim \mathcal{D}} \bigg\{Q(s, a; \theta_Q) - r - \gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q} [Q(s', G(s', z'; \theta_\pi);\theta_Q)]\bigg\}^2,
\end{equation}
its gradient suffers from the double sampling problem and is quite complex:
\begin{small}
\begin{equation}
\begin{aligned}
\nabla_{\theta_Q}\mathcal{L}(\theta_Q) = \mathbb{E}_{(s, a, r)\sim \mathcal{D}} \bigg\{&\bigg[Q(s, a; \theta_Q) - r -\gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q} [Q(s', G(s', z'; \theta_\pi);\theta_Q)]\bigg] \\
&\bigg[\nabla_{\theta_Q} Q(s, a;\theta_Q) - \gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q} [\nabla_{\theta_Q} Q(s', G(s', z'; \theta_\pi);\theta_Q)]\bigg]\bigg\}.
\end{aligned}
\end{equation}
\end{small}
To solve this problem, the target q-network technique turns the critic problem into a two-player game:
\begin{small}
\begin{subequations}
\begin{numcases}{}
\min_{\theta_Q}\mathcal{L}(\theta_Q) = \mathbb{E}_{(s, a, r)\sim \mathcal{D}} \bigg\{Q(s, a; \theta_Q) - r - \gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q} [Q(s', G(s', z'; \theta_\pi);\theta_{Qtarg})]\bigg\}^2, \\
\min_{\theta_{Qtarg}} \mathcal{L}(\theta_{Qtarg}) = \frac{1}{2} \Vert \theta_{Qtarg} - \theta_{Q} \Vert^2_2.
\end{numcases}
\end{subequations}
\end{small}
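A minimal PyTorch sketch of this game is given below, assuming $Q$, $Q_{targ}$ and $G$ are \texttt{nn.Module}s and that the critic optimizer holds only the parameters of $Q$. Note that a gradient-descent step of size $\eta$ on the second player's quadratic loss reduces to the familiar soft update $\theta_{Qtarg} \leftarrow (1-\eta)\,\theta_{Qtarg} + \eta\,\theta_Q$.
\begin{verbatim}
import torch

def critic_step(Q, Q_targ, G, batch, z_dim, gamma, opt, eta=5e-3):
    s, a, r, s2 = batch                  # tensors from the replay buffer
    with torch.no_grad():
        z2 = torch.randn(s2.size(0), z_dim)        # z' ~ q(z)
        y = r + gamma * Q_targ(s2, G(s2, z2))      # target with frozen params
    loss = ((Q(s, a) - y) ** 2).mean()   # first player's loss
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                # second player's GD step (soft update)
        for p, p_t in zip(Q.parameters(), Q_targ.parameters()):
            p_t.mul_(1 - eta).add_(eta * p)
\end{verbatim}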
\begin{algorithm}[htbp]
\LinesNumbered
\KwIn{An environment $env$.}
\KwOut{The optimal policy $G(s, z; \theta^*_\pi)$, where $z \sim q(z)$.}
Create an empty replay buffer $\mathcal{D}$, a push-forward policy $G(s, z; \theta_\pi)$, and four q-functions $Q(s, a; \theta_{Q1A})$, $Q(s, a; \theta_{Q1B})$, $Q(s, a; \theta_{Q2A})$, $Q(s, a; \theta_{Q2B})$ \;
\SetKwFunction{Update}{Update}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\Update{$\mathcal{D} = \{(s_i, a_i, r_i, s'_i)\}^{m'}_{i=1}$}}{
Sample actions $\{a'_i \sim G(s'_i, z'_i; \theta_\pi)\}^{m'}_{i=1}$, where $z'_i \sim q(z)$\;
$Q_1(s_i, a_i) = \min(Q(s_i, a_i;\theta_{Q1A}), Q(s_i, a_i;\theta_{Q1B}))$\;
$Q_2(s'_i, a'_i) = \min(Q(s'_i, a'_i;\theta_{Q2A}), Q(s'_i, a'_i;\theta_{Q2B}))$\;
$\mathcal{L}(\theta_{Q1A}, \theta_{Q1B}) = \frac{1}{2m'}\sum^{m'}_{i=1} \{r_i + \gamma Q_2(s'_i, a'_i) - Q_1(s_i, a_i) \}^2$\;
$\mathcal{L}(\theta_{Q2A}, \theta_{Q2B}) = \frac{1}{2} \Vert \theta_{Q2A} - \theta_{Q1A} \Vert^2_2 + \frac{1}{2} \Vert \theta_{Q2B} - \theta_{Q1B} \Vert^2_2$\;
Sample latent variables $\{\{z_{ij}\}^{m''}_{j=1}\}^{m'}_{i=1}$ from $q(z)$\;
Sample actions $\{\{a_{ik}\}^{m''}_{k=1}\}^{m'}_{i=1}$ from the uniform distribution over the action space\;
${MMD}_{\mathcal{D}}(\theta_\pi) = \frac{1}{m' m''}\sum^{m'}_{i=1} [\sum^{m''}_{j,k=1} k(G(s_i, z_{ij};\theta_\pi),G(s_i, z_{ik};\theta_\pi)) + \sum^{m''}_{j,k=1} k(a_{ij}, a_{ik}) - 2 \sum^{m''}_{j,k=1} k(G(s_i, z_{ij};\theta_\pi), a_{ik})]^{1/2}$\;
$\mathcal{L}(\theta_\pi) = -\frac{1}{m' m''} \sum^{m'}_{i=1}\sum^{m''}_{j=1} \min(Q(s_i, G(s_i, z_{ij};\theta_\pi);\theta_{Q1A}),$ $Q(s_i, G(s_i, z_{ij};\theta_\pi); \theta_{Q1B})) + \alpha MMD_{\mathcal{D}}(\theta_\pi)$\;
$\mathcal{L}(\alpha) = \ln \alpha \cdot (\beta - MMD_{\mathcal{D}}(\theta_\pi))$\;
Use Adam optimizer to update $\theta_{\pi}$, $\theta_{Q1A}$, $\theta_{Q1B}$ and $\alpha$ according to $\mathcal{L}(\theta_\pi)$, $\mathcal{L}(\theta_{Q1A}, \theta_{Q1B})$ and $\mathcal{L}(\alpha)$\;
Use Gradient Descent optimizer to update $\theta_{Q2A}$ and $\theta_{Q2B}$ according to $\mathcal{L}(\theta_{Q2A}, \theta_{Q2B})$\;
}
\For(){$n = 1,2, \ldots, N$}{
Sample from $env$ by executing $G(s, z; \theta_\pi)$, and get new samples $\mathcal{D}_n = \{(s_i, a_i, r_i, s'_i)\}^{m}_{i=1}$\;
$\mathcal{D} = \mathcal{D} \cup \mathcal{D}_n$\;
\For(){$n' = 1, 2, \ldots, N'$}{
Sample from $\mathcal{D}$ and get samples $\mathcal{D}_{n'} = \{(s_i, a_i, r_i, s'_i)\}^{m'}_{i=1}$\;
\Update{$\mathcal{D}_{n'}$}\;
}
$\mathcal{L}(\beta) = \frac{1}{2}\beta[sign(\alpha_{\max} - \alpha) + sign(\alpha_{\min}-\alpha)]$\;
$\beta = \beta - \Delta_{\beta} \cdot \nabla_{\beta} \mathcal{L}(\beta)$\;
}
\KwRet{$G(s, z; \theta_\pi)$}
\caption{Generative Actor-Critic}
\end{algorithm}
\newpage
\section{Experiment Settings}
We use three techniques in the MuJoCo environments for training stability (they are also used in the baseline methods); a minimal sketch of the normalizations is given after the list below.
\begin{itemize}
\item \textbf{Observation Normalization}: in MuJoCo environments, observations are unbounded. We normalize them by the following formula,
\begin{equation}
clip\bigg(\frac{s - \hat\mu_{s}}{\max(\hat \sigma_{s})}, -5, 5\bigg),
\end{equation}
where $\hat\mu_{s}$ is the mean of observations and $\hat\sigma_{s}$ is the standard deviation of observations.
\item \textbf{Action Normalization}: the action ranges from $-0.4$ to $0.4$ in Humanoid-v3 and HumanoidStandup-v2, and from $-1$ to $1$ in the other environments. We normalize the actions by the following formula,
\begin{equation}
\frac{2a - a_{\max} - a_{\min}}{a_{\max} - a_{\min}}.
\end{equation}
\item \textbf{Reward Scaling}: the reward signal in HumanoidStandup-v2 is too large, so we scale it down for numerical stability (by the factor 0.05 listed in the environment-specific hyperparameter table).
\end{itemize}
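The sketch below illustrates the first two normalizations under our own assumptions: running statistics maintained Welford-style, and a small epsilon in the denominator in place of the max in the formula above.
\begin{verbatim}
import numpy as np

class ObsNormalizer:
    def __init__(self, obs_dim, clip=5.0, eps=1e-8):
        self.mean, self.var = np.zeros(obs_dim), np.ones(obs_dim)
        self.count, self.clip, self.eps = 0, clip, eps

    def update(self, obs_batch):         # merge batch moments (Welford-style)
        n = obs_batch.shape[0]
        batch_mean = obs_batch.mean(axis=0)
        delta = batch_mean - self.mean
        tot = self.count + n
        self.mean += delta * n / tot
        self.var = (self.var * self.count + obs_batch.var(axis=0) * n
                    + delta ** 2 * self.count * n / tot) / tot
        self.count = tot

    def __call__(self, s):               # clip((s - mean) / std, -5, 5)
        std = np.sqrt(self.var) + self.eps
        return np.clip((s - self.mean) / std, -self.clip, self.clip)

def normalize_action(a, a_min, a_max):   # maps [a_min, a_max] onto [-1, 1]
    return (2 * a - a_max - a_min) / (a_max - a_min)
\end{verbatim}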
\begin{table*}[ht]
\centering
\begin{tabular}{c|c}
\toprule
\textbf{Hyperparameters} & \textbf{Value}\\
\midrule
$q_{train}(z)$ & $\mathcal{N}(0, 1)$ \\
$q_{test}(z)$ & $\mathcal{N}(0, 0.5)$ \\
MMD Kernel & Energy Kernel: $k(x, y) = - \Vert x - y \Vert^2_2$ \\
Replay Buffer Size & $10^6$\\
$N$ & $2 \times 10^4$ \\
$N'$ & $50$ \\
$m$ & $100$ \\
$m'$ & $100$ \\
$m''$ & $100$ \\
$\lambda$ & $0.99$ \\
The step of Adam & $10^{-3}$ \\
The step of GD & $5 \times 10^{-3}$ \\
$\Delta_{\beta}$ & $0.01$ \\
\bottomrule
\end{tabular}
\caption{Hyperparameters}
\end{table*}
\begin{table*}[ht]
\centering
\begin{tabular}{c|c}
\toprule
\textbf{Actor} & \textbf{Critic}\\
\midrule
(state dim + epsilon dim, 400) & (state dim + act dim, 400) \\
Relu & Relu \\
(400, 300) & (400, 300) \\
Relu & Relu \\
(300, action dim) & (300, 1) \\
Tanh & \\
\bottomrule
\end{tabular}
\caption{Network Architecture}
\end{table*}
\begin{table*}[ht]
\centering
\begin{tabular}{c|c|c|c|c}
\toprule
& $a_{\min}$ & $a_{\max}$ & reward scale & terminate\_when\_unhealthy\\
\midrule
HalfCheetah-v3 & 1.0 & 1.8 & 1.0 & false\\
Ant-v3 & 0.6 & 1.2 & 1.0 & false\\
Hopper-v3 & 0.3 & 0.8 & 1.0 & false\\
Walker2d-v3 & 0.7 & 1.4 & 1.0 & false\\
Humanoid-v3 & 1.0 & 1.8 & 1.0 & true\\
HumanoidStandup-v2 & 1.0 & 1.8 & 0.05 & false\\
\bottomrule
\end{tabular}
\caption{The Specific Hyperparameters of Environments}
\end{table*}
\newpage
\section{Additional Experiments Results}
\begin{figure*}[htbp]
\centering
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Humanoid.pdf}
\caption{Humanoid-v3}
\label{fig:app-Humanoid-performance}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Ant.pdf}
\caption{Ant-v3}
\label{fig:app-Ant-performance}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Hopper.pdf}
\caption{Hopper-v3}
\label{fig:app-Hopper-performance}
\end{subfigure}
\quad
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\label{fig:app-HumanoidStandup-performance}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Walker2d.pdf}
\caption{Walker2d-v3}
\label{fig:app-Walker2d-performance}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/HalfCheetah.pdf}
\caption{HalfCheetah-v3}
\label{fig:app-HalfCheetah-performance}
\end{subfigure}
\caption{Performance Curves of Algorithms}
\label{fig:app-performance}
\end{figure*}
\begin{figure*}[ht]
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/Humanoid.pdf}
\caption{Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/Ant.pdf}
\caption{Ant-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/Hopper.pdf}
\caption{Hopper-v3}
\end{subfigure}
\quad
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/Walker2d.pdf}
\caption{Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha/HalfCheetah.pdf}
\caption{HalfCheetah-v3}
\end{subfigure}
\caption{The Performance of The Hyperparameter Experiments}
\end{figure*}
\begin{figure*}[ht]
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/Humanoid.pdf}
\caption{Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/Ant.pdf}
\caption{Ant-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/Hopper.pdf}
\caption{Hopper-v3}
\end{subfigure}
\quad
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/Walker2d.pdf}
\caption{Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/mmd/HalfCheetah.pdf}
\caption{HalfCheetah-v3}
\end{subfigure}
\caption{The MMD Entropy of The Hyperparameter Experiments}
\end{figure*}
\begin{figure*}[ht]
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/Humanoid.pdf}
\caption{Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/Ant.pdf}
\caption{Ant-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/Hopper.pdf}
\caption{Hopper-v3}
\end{subfigure}
\quad
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/Walker2d.pdf}
\caption{Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/alpha-curves/HalfCheetah.pdf}
\caption{HalfCheetah-v3}
\end{subfigure}
\caption{The $\alpha$ of The Hyperparameter Experiments}
\end{figure*}
\begin{figure*}[ht]
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/Humanoid.pdf}
\caption{Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/Ant.pdf}
\caption{Ant-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/Hopper.pdf}
\caption{Hopper-v3}
\end{subfigure}
\quad
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/Walker2d.pdf}
\caption{Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/z-test/HalfCheetah.pdf}
\caption{HalfCheetah-v3}
\end{subfigure}
\caption{The Effect of $q_{test}(z)$}
\end{figure*}
\begin{figure*}[ht]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Humanoid-gac-original.png}
\caption{GAC-Original in Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Humanoid-gac-auto.png}
\caption{GAC-Auto in Humanoid-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Ant-gac-original.png}
\caption{GAC-Original in Ant-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Ant-gac-auto.png}
\caption{GAC-Auto in Ant-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Hopper-gac-original.png}
\caption{GAC-Original in Hopper-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Hopper-gac-auto.png}
\caption{GAC-Auto in Hopper-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HumanoidStandup-gac-original.png}
\caption{GAC-Original in HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HumanoidStandup-gac-auto.png}
\caption{GAC-Auto in HumanoidStandup-v2}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Walker2d-gac-original.png}
\caption{GAC-Original in Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/Walker2d-gac-auto.png}
\caption{GAC-Auto in Walker2d-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HalfCheetah-gac-original.png}
\caption{GAC-Original in HalfCheetah-v3}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HalfCheetah-gac-auto.png}
\caption{GAC-Auto in HalfCheetah-v3}
\end{subfigure}
\caption{The Action Distributions of the Final Policy}
\end{figure*}
\begin{table*}[ht]
\centering
\begin{tabular}{c|c|c|c|c|c}
\toprule
& DDPG & TD3 & SAC & GAC-original & GAC-auto\\
\midrule
HalfCheetah-v3 & 16115.02 & 15179.39 & \textbf{16120.46} & 10185.41 & 13077.92\\
Ant-v3 & 968.08 & 4541.29 & 6555.17 & 6470.44 & \textbf{7344.37}\\
Hopper-v3 & 2879.09 & 4417.35 & 3483.96 & 4091.49 & \textbf{4483.57}\\
Walker2d-v3 & 3621.32 & 4954.50 & 6186.70 & 3870.19 & \textbf{6348.30} \\
Humanoid-v3 & 173.49 & 369.63 & 5953.10 & 6762.82 & \textbf{9928.80}\\
HumanoidStandup-v2 & 159678.55 & 155970.90 & 163716.07 & 111411.60 & \textbf{396108.54}\\
\bottomrule
\end{tabular}
\caption{The Performance Of Algorithms}
\end{table*}
\section{Conclusion}
In this paper, we presented an off-policy algorithm, called generative actor-critic (GAC), which applies a push-forward generative model to the actor-critic framework with an MMD-entropy regularizer to balance exploration and exploitation.
We also devised an adaptive mechanism to automatically scale this regularizer, which further improves the stability and robustness of GAC.
The results show that GAC outperforms state-of-the-art off-policy algorithms, especially on hard tasks, and demonstrate that the push-forward policy can improve the efficiency of exploration and the asymptotic performance of the algorithm.
However, our work is limited to single-agent and off-policy settings. We look forward to further work in more complex settings.
\section{Experiments}
\label{sec:experiment}
We compare generative actor-critic~(GAC) with three state-of-the-art off-policy algorithms: deep deterministic policy gradient~(DDPG)~\cite{lillicrap2015continuous}, one of the most famous off-policy algorithms, known for its efficiency and simplicity; twin delayed deep deterministic policy gradient~(TD3)~\cite{fujimoto2018addressing}, an improved variant of DDPG; and soft actor-critic~(SAC)~\cite{haarnoja2018soft}, a robust and stable algorithm based on the maximum entropy framework. All methods use the same $[400 \times 300]$ fully connected network for their policies and q-functions; more settings are given in the appendix.
\subsection{The Results of Performance Experiments}
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/HumanoidStandup.pdf}
\caption{HumanoidStandup-v2}
\label{fig:ch6-HumanoidStandup-performance}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Humanoid.pdf}
\caption{Humanoid-v3}
\label{fig:ch6-Humanoid-performance}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Ant.pdf}
\caption{Ant-v3}
\label{fig:ch6-Ant-performance}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/ch6-performance/Hopper.pdf}
\caption{Hopper-v3}
\label{fig:ch6-Hopper-performance}
\end{subfigure}
\caption{Performance Curves of Algorithms}
\label{fig:ch6-performance}
\end{figure}
In this subsection, we show the performance of our algorithm, GAC with the adaptive mechanism for $\alpha$.
The policy is tested by exploiting in the environment 10 times, each until the actor is done or the maximum number of steps is reached. We evaluate these test trajectories by summing their step rewards without discounting. As suggested in the original papers~\cite{lillicrap2015continuous,fujimoto2018addressing,haarnoja2018soft}, DDPG and TD3 are tested with their deterministic policies. As for SAC, we use the mean action of its stochastic Gaussian policy at test time, since it performs better than the stochastic one~\cite{haarnoja2018soft}. The hyperparameters of these algorithms are set according to their original papers~\cite{haarnoja2018soft,fujimoto2018addressing}. We choose six tasks, repeat each 5 times with different seeds, and plot the results on four tasks in Figure \ref{fig:ch6-performance} (the full results are in the appendix).
Figure \ref{fig:ch6-performance} shows that GAC outperforms the baseline methods on all tasks, especially on the complex ones: Ant-v3, Humanoid-v3 and HumanoidStandup-v2. Among the baselines, SAC performs best in all environments, while DDPG fails to learn adequately on the complex tasks. The performance curves indicate that GAC has lower sample complexity and better asymptotic performance than the state-of-the-art methods.
\subsection{The Results of Hyperparameter Experiments}
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/humanoidstandup/HumanoidStandup-alpha.pdf}
\caption{Performance}
\label{fig:ch6-hyperparameters-performance}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/humanoidstandup/HumanoidStandup-mmd.pdf}
\caption{MMD Entropy}
\label{fig:ch6-hyperparameters-mmdentropy}
\end{subfigure}
\begin{subfigure}{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/humanoidstandup/HumanoidStandup-alpha-curve.pdf}
\caption{The Value of $\alpha$}
\label{fig:ch6-hyperparameters-alpha}
\end{subfigure}
\caption{The Experiments of $\alpha$ in HumanoidStandup-v2}
\label{fig:ch6-hyperparameters}
\end{figure}
In this subsection, we study the influence of the MMD-entropy regularizer in GAC.
We compare GAC with different fixed values of the regularizer weight $\alpha$ (i.e., updating $\theta_\pi$ according to objective~\eqref{equ:gacpolicyloss-mmd}) against GAC with the adaptive mechanism. We run six tasks with 5 different seeds and report the results on HumanoidStandup-v2 (refer to the appendix for the full results). In this experiment, GAC with the adaptive mechanism is labeled ``auto'' and GAC with fixed $\alpha$ is labeled by the value of $\alpha$.
We set $\alpha_{\min}=1.0$, the minimum $\alpha$ of the fixed-$\alpha$ experiments, and $\alpha_{\max}=1.8$, the maximum $\alpha$ of the fixed-$\alpha$ experiments.
We plot the performance curves, the MMD entropy curves and the $\alpha$ value curves in Figure \ref{fig:ch6-hyperparameters}.
The results in Figure~\ref{fig:ch6-hyperparameters-performance} show that the MMD regularizer clearly improves the performance of GAC given a proper $\alpha$. Figure~\ref{fig:ch6-hyperparameters-mmdentropy} shows that the policy learned by GAC without the regularizer has a high MMD value, which means the policy lacks exploration, causing its worse performance. Furthermore, according to Figure~\ref{fig:ch6-hyperparameters}, the performance is quite sensitive to a fixed $\alpha$, and GAC with the adaptive mechanism achieves the best performance.
The results also show that our adaptive mechanism for $\alpha$ improves the stability and robustness of GAC across different seeds compared with the fixed-$\alpha$ experiments.
To show the influence of the MMD-entropy regularizer on the stochasticity of policies, we compare the action distributions of the policies learned by GAC without the regularizer and by GAC with the adaptive mechanism. In Figure \ref{fig:ch6-action-distribution}, we plot the first three dimensions of the action distributions in HumanoidStandup-v2. The actions are the blue points, emphasized with black circles. The results show that the policy of GAC without the regularizer is almost deterministic (Figure \ref{fig:ch6-action-distribution-original}), whereas its counterpart in GAC with the adaptive mechanism is a multimodal stochastic policy (Figure \ref{fig:ch6-action-distribution-auto}). We conclude that the MMD-entropy regularizer avoids the mode-collapse problem, and that the ability of the push-forward function to express complex distributions, such as multimodal ones, helps the algorithm achieve better performance.
\begin{figure}[tbp]
\centering
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HumanoidStandup-gac-original.png}
\caption{GAC:~$\alpha=0$}
\label{fig:ch6-action-distribution-original}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{./pics/action-distribution/HumanoidStandup-gac-auto.png}
\caption{GAC:~Auto}
\label{fig:ch6-action-distribution-auto}
\end{subfigure}
\caption{The Action Distributions of The Final Policy in HumanoidStandup-v2}
\label{fig:ch6-action-distribution}
\end{figure}
\subsection{The Effect of Latent Distributions}
We also study the effect of the latent distribution $q(z)$ in GAC. Figure~\ref{fig:ch6-hyperparameters-qz} shows the results on Humanoid-v3. In this experiment, the distribution $\mathcal{N}(0, 1)$ is used for training and two different distributions are used for testing. The results show that the test distribution $\mathcal{N}(0, 0.5)$ performs better than $\mathcal{N}(0, 1.0)$ on Humanoid-v3, so a more concentrated latent distribution is preferred at test time.
\begin{figure}[tbp]
\centering
\includegraphics[width=0.25\textwidth]{./pics/z-test/Humanoid.pdf}
\caption{The effect of $q_{test}(z)$ in Humanoid-v3}
\label{fig:ch6-hyperparameters-qz}
\end{figure}
\section{Generative Actor-Critic Algorithm}
In this section, we introduce our off-policy algorithm, the generative actor-critic algorithm~(GAC), based on the stochastic value policy gradient theorem~\cite{heess2015learning} and the actor-critic framework. GAC uses an entropy-like regularizer based on the maximum mean discrepancy to balance exploration and exploitation. Besides, we devise an adaptive mechanism to automatically scale this regularizer.
\subsection{The Basic Objectives of GAC}
As discussed in Section \ref{ch:introduction}, we aim to apply the push-forward operator, which is expressive and widely used in GANs, in DRL.
To distinguish the push-forward policy from other parameterized policies, we denote it as $G(s, z; \theta_\pi)$, where $z$ is sampled from a simple distribution $q$ such as the standard normal distribution. According to Lemma \ref{lem:push-forward}, the discounted accumulative reward function $\rho(\theta_\pi)$ can be reformulated as
\begin{small}
\begin{equation}
\label{equ:valueofpolicywithpushforward}
\rho(\theta_\pi) = \mathbb{E}_{s \sim p_0, z \sim q}\bigg[(1-\gamma)
Q(s, G(s, z; \theta_\pi); \theta_\pi) \bigg].
\end{equation}
\end{small}
The following theorem gives us an empirical estimator of the gradient of $\rho(\theta_\pi)$ in equation \eqref{equ:valueofpolicywithpushforward}.
\begin{theorem}{\cite{heess2015learning}\textbf{(Stochastic Value Policy Gradient Theorem).}}
\label{thm:stochastic-value-policy-gradient-theorem}
\begin{small}
\begin{equation}
\begin{aligned}
\nabla_{\theta_\pi} \rho(\theta_\pi) = \mathbb{E}&_{s \sim p_{\gamma}(\cdot; \theta_\pi), z \sim q} \bigg[ \nabla_{\theta_\pi} G(s, z;\theta_\pi) \\
&\cdot \nabla_{a = G(s, z;\theta_\pi)} Q(s, a; \theta_\pi) \bigg],
\end{aligned}
\end{equation}
\end{small}
where $p_{\gamma} (s;\theta_\pi) = (1 - \gamma)\sum_{t=0}^{\infty} \gamma^t p_t(s;\theta_\pi)$ and $p_t(s;\theta_\pi)$ is the probability density of the event that Markov chain's $t$th state is $s$.
\end{theorem}
This theorem tells us that if we can sample $\{s_i\}^m_{i=1}$ from $p_{\gamma}(s; \theta_\pi)$ and $\{z_i\}^m_{i=1}$ from $q(z)$, we can construct the following unbiased estimator of the gradient of $\rho(\theta_\pi)$:
\begin{small}
\begin{equation}
\begin{aligned}
\hat\nabla \rho(\theta_\pi) = \frac{1}{m} \sum_{i=1}^{m}& \nabla_{\theta_\pi} G(s_i, z_i; \theta_\pi) \\
& \cdot \nabla_{a_i = G(s_i,z_i; \theta_\pi)}\hat Q(s_i, a_i;\theta_\pi).
\end{aligned}
\end{equation}
\end{small}
However, this estimator requires on-policy samples from $p_{\gamma}(\cdot\,; \theta_\pi)$ and an approximator $\hat Q(s, a;\theta_\pi)$ of the q-function, which causes high sample and computational complexity.
To solve these problems, we adopt the actor-critic framework, which turns the original problem into a two-player game:
\begin{small}
\begin{subequations}
\begin{numcases}{}
\label{equ:gacpolicygradient}
\max_{\theta_\pi} \mathcal{L}(\theta_\pi) = \mathbb{E}_{s \sim \mathcal{D}, z \sim q} [Q(s, G(s, z; \theta_\pi); \theta_Q)],\\
\label{equ:gacbellmanequation}
\min_{\theta_Q} \mathcal{L}(\theta_Q) = \mathbb{E}_{(s, a, r)\sim \mathcal{D}} \bigg\{Q(s, a; \theta_Q) - r \notag \\ \quad -\gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q} [Q(s', G(s', z'; \theta_\pi);\theta_Q)]\bigg\}^2,
\end{numcases}
\end{subequations}
\end{small}
where $\mathcal{D}$ denotes an off-policy replay buffer, which contains samples that were not necessarily drawn by the current policy.
The actor loss \eqref{equ:gacpolicygradient} is obtained from the stochastic value policy gradient theorem.
The actor is updated by applying the chain rule to $\mathcal{L}(\theta_\pi)$, with states drawn from $\mathcal{D}$, with respect to the policy parameters:
\begin{small}
\begin{equation}
\begin{aligned}
\nabla_{\theta_\pi} \mathcal{L}(\theta_\pi) = \mathbb{E}_{s \sim \mathcal{D}, z \sim q}&[\nabla_{\theta_\pi} G(s, z; \theta_\pi) \\
&\cdot \nabla_{a = G(s, z; \theta_\pi)} Q(s, a; \theta_Q)].
\end{aligned}
\end{equation}
\end{small}
If $\mathcal{D}(s) = p_{\gamma}(s;\theta_\pi)$, Theorem~\ref{thm:stochastic-value-policy-gradient-theorem} shows that this is exactly the gradient of the policy's performance.
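In code, this chain rule is handled by automatic differentiation. Below is a minimal PyTorch sketch of the actor step in \eqref{equ:gacpolicygradient}, assuming $G$ and $Q$ are \texttt{nn.Module}s and that \texttt{opt} holds only the parameters of $G$.
\begin{verbatim}
import torch

def actor_step(G, Q, states, z_dim, opt):
    z = torch.randn(states.size(0), z_dim)   # z ~ q(z), standard normal
    actions = G(states, z)                   # a = G(s, z; theta_pi)
    loss = -Q(states, actions).mean()        # maximize E[Q(s, G(s, z))]
    opt.zero_grad()
    loss.backward()                          # chain rule through a = G(s, z)
    opt.step()                               # updates theta_pi only
\end{verbatim}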
The critic loss \eqref{equ:gacbellmanequation} is obtained from the Bellman equation theorem~(Theorem \ref{thm:bellmanequation}), according to which the optimal q-function $Q(s, a; \theta_Q^*)$ for a fixed $\theta_\pi$ satisfies, for all $(s, a) \in \mathcal{S} \times \mathcal{A}$,
\begin{small}
\begin{equation}
\label{equ:bellman-equation}
\begin{aligned}
Q(s, a;\theta_Q^*) &= r(s, a) + \\
\gamma & \mathbb{E}_{s' \sim p(\cdot \vert s, a), z' \sim q}[Q(s',
G(s', z';\theta_\pi);\theta_Q^*)].
\end{aligned}
\end{equation}
\end{small}
Equation~\eqref{equ:gacbellmanequation} minimizes the mean square difference of the two sides of equation~\eqref{equ:bellman-equation}.
This two-player game works for the following reasons. First, the off-policy replay buffer $\mathcal{D}$ still guarantees policy improvement. If we fix $\theta_Q$, then the optimal solution $\theta^*_\pi$ of $\mathcal{L}(\theta_{\pi})$ satisfies: $\forall s \in \mathcal{D}$ and for all $z$ in the support of $q$,
\begin{small}
\begin{equation}
\label{equ:greedy}
G(s, z; \theta^*_\pi) \in \arg\max_a Q(s, a;\theta_Q).
\end{equation}
\end{small}
This optimal solution $\theta^*_\pi$ corresponds to the optimal Bellman operator~\cite{puterman2014markov}, which guarantees policy improvement if $\mathcal{D}$ contains all of the elements in $\mathcal{S}$. Second, if we can obtain the globally optimal solutions $\theta^*_{\pi}$ of $\mathcal{L}(\theta_\pi)$ and $\theta^*_{Q}$ of $\mathcal{L}(\theta_Q)$ at the same time, $\theta^*_{\pi}$ is a solution of the optimal Bellman equation~(theorem~\ref{thm:optimalbellmanequation}), which guarantees that the final policy $G(s, z;\theta^*_\pi)$ is the optimal policy.
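The following sketch shows how one joint update of this game can be implemented; the module names, the stored transition format, and the shape compatibility of the tensors are illustrative assumptions.
\begin{verbatim}
# One actor-critic update of the two-player game (sketch).
import torch
import torch.nn.functional as F

def update(G, Q, batch, opt_pi, opt_Q, gamma=0.99):
    s, a, r, s_next = batch     # sampled from the buffer D
    # Critic: minimize the squared Bellman residual.
    with torch.no_grad():
        z_next = torch.randn(s_next.shape[0], G.noise_dim)
        # r assumed shaped like Q's output
        target = r + gamma * Q(s_next, G(s_next, z_next))
    critic_loss = F.mse_loss(Q(s, a), target)
    opt_Q.zero_grad(); critic_loss.backward(); opt_Q.step()
    # Actor: maximize Q(s, G(s, z)) over theta_pi.
    z = torch.randn(s.shape[0], G.noise_dim)
    actor_loss = -Q(s, G(s, z)).mean()
    opt_pi.zero_grad(); actor_loss.backward(); opt_pi.step()
\end{verbatim}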
\subsection{The MMD-Entropy Regularizer}
\label{subsec:entropy}
As equation~\eqref{equ:greedy} shows, the loss $\mathcal{L}(\theta_\pi)$~\eqref{equ:gacpolicygradient} prefers policies that concentrate on the optimal actions of $\max_{a} Q(s, a; \theta_Q)$, which causes a lack of exploration and limits performance. We solve this problem by constructing an entropy-like technique, the MMD-entropy regularizer, which improves performance significantly.
When the action density can be explicitly accessed, the entropy maximization technique is adopted to achieve a much more effective exploration strategy~\cite{haarnoja2017reinforcement,haarnoja2018soft}. It adds an entropy regularizer in the policy improvement procedure to get the energy-based policy by solving the problem $ \max_{\pi(\cdot \vert s)} \mathbb{E}_{a \sim \pi(\cdot \vert s)} [ Q(s, a) ] - \alpha \int_a \pi(a \vert s) \ln \pi(a \vert s) \mathrm{d} a$. The optimal solution of this problem is $\pi_{Q}^*(a \vert s) = \frac{\exp[Q(s, a)/\alpha]}{\int_a \exp[Q(s, a)/\alpha] \mathrm{d} a}$. Instead of only focusing on the optimal actions, $\pi_{Q}^*$ executes the action $a$ with a probability positively associated with $Q(s, a)$ and is thus able to explore all regions with high expected cumulative reward $Q(s, a)$.
As it is difficult to explicitly calculate the probability density of a general push-forward policy,
the entropy maximization technique cannot be adopted in our method. Hence, we construct a similar entropy regularizer for GAC. It can be verified that the entropy of a distribution $p(x)$, $H(p) = -\int_x p(x) \ln p(x) \mathrm{d} x$, is directly related to the Kullback--Leibler divergence between $p(x)$ and the uniform distribution $u(x) = b$, as follows,
\begin{small}
\begin{equation}
\begin{aligned}
KL(p \Vert u) =& \int p(x) \ln (p(x) / u(x)) \mathrm{d} x \\
=& -\ln b - H(p).
\end{aligned}
\end{equation}
\end{small}
Thus, we construct an entropy-like regularizer for policies without an explicit probability density formula by replacing the Kullback--Leibler divergence with the maximum mean discrepancy, which can be estimated in a density-free manner~(see equation~\eqref{equ:mmd-estimator}).
\begin{definition}{\textbf{(The MMD Entropy of Policies).}}
We denote the MMD entropy of policies as follows,
\begin{small}
\begin{equation}
MMD_{\mathcal{D}} (\pi)
= \mathbb{E}_{s \sim \mathcal{D}}
[MMD(\pi(\cdot \vert s) \Vert u(\cdot \vert s))],
\end{equation}
\end{small}
where $\mathcal{D}$ denotes any distribution of states and
$u(\cdot \vert s)$ denotes a uniform distribution of actions.
\end{definition}
The MMD-entropy regularizer evaluates the distance between the current policy and the uniform policy, so it behaves inversely to the original entropy: a smaller MMD corresponds to a higher-entropy, more exploratory policy. The objective of the policy \eqref{equ:gacpolicygradient} then becomes
\begin{small}
\begin{equation}
\label{equ:gacpolicyloss-mmd}
\begin{aligned}
\max_{\theta_\pi} \mathcal{L}(\theta_\pi) =& \mathbb{E}_{s \sim \mathcal{D}, z \sim q} [Q(s, G(s, z; \theta_\pi); \theta_Q)] \\
& - \alpha MMD_{\mathcal{D}}(\theta_\pi),
\end{aligned}
\end{equation}
\end{small}
where $\alpha$ ($\ge 0$) is a hyperparameter. It influences the distance to the uniform policy in an indirect way: the distance will be small if $\alpha$ is large, and large if $\alpha$ is small.
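A density-free sketch of this regularized actor objective is given below; the helper \texttt{mmd} stands for any sample-based estimator of equation~\eqref{equ:mmd-estimator}, and the box action bounds and the number of samples per state are illustrative assumptions.
\begin{verbatim}
# MMD-regularized actor loss (sketch): draw n actions per
# state from the policy and n from a uniform policy, then
# penalize their estimated MMD.
import torch

def actor_loss_mmd(G, Q, s, alpha, a_low, a_high, n=8):
    m = s.shape[0]
    z = torch.randn(m * n, G.noise_dim)
    s_rep = s.repeat_interleave(n, dim=0)
    a_pi = G(s_rep, z)                     # policy actions
    a_u = a_low + torch.rand_like(a_pi) * (a_high - a_low)
    q_term = Q(s_rep, a_pi).mean()         # E[Q(s, G(s,z))]
    reg = mmd(a_pi, a_u)   # assumed MMD estimator helper
    return -(q_term - alpha * reg)         # to be minimized
\end{verbatim}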
\begin{algorithm}[tbp]
\LinesNumbered
\KwIn{An environment $env$.}
\KwOut{The optimal policy $G(s, z; \theta^*_\pi)$.}
Create an empty $\mathcal{D}$, a push-forward policy $G(s, z; \theta_\pi)$, and four q-functions $Q(s, a; \theta_{Q1A})$, $Q(s, a; \theta_{Q1B})$, $Q(s, a; \theta_{Q2A})$, $Q(s, a; \theta_{Q2B})$ \;
\SetKwFunction{Update}{Update}
\SetKwProg{Fn}{Function}{:}{end}
\Fn{\Update{$\mathcal{D} = \{(s_i, a_i, r_i, s'_i)\}^{m'}_{i=1}$}}{
Use Adam to optimize ${\mathcal{L}}(\theta_\pi)$~\eqref{equ:auto-policy-loss}, ${\mathcal{L}}(\theta_{Q})$~\eqref{equ:gacbellmanequation} and ${\mathcal{L}}(\alpha)$~\eqref{equ:auto-alpha-loss} with $\mathcal{D}$\;
}
\For(){$n = 1,2, \ldots, N$}{
Sample from $env$ by executing $G(s, z; \theta_\pi)$, and get new samples $\mathcal{D}_n = \{(s_i, a_i, r_i, s'_i)\}^{m}_{i=1}$\;
$\mathcal{D} = \mathcal{D} \cup \mathcal{D}_n$\;
\For(){$n' = 1, 2, \ldots, N'$}{
Sample from $\mathcal{D}$ and get samples $\mathcal{D}_{n'} = \{(s_i, a_i, r_i, s'_i)\}^{m'}_{i=1}$\;
\Update{$\mathcal{D}_{n'}$}\;
}
Calculate $\mathcal{L}(\beta)$~\eqref{equ:beta-loss}\;
$\beta = \beta - \Delta_{\beta} \cdot \nabla_{\beta} \mathcal{L}(\beta)$\;
}
\KwRet{$G(s, z; \theta_\pi)$}
\caption{Generative Actor-Critic with Adaptive Mechanism}
\label{alg:gac}
\end{algorithm}
\subsection{The Adaptive Mechanism for MMD-entropy Regularizer}
In this subsection, we introduce an adaptive mechanism to automatically scale the MMD-entropy regularizer. Instead of fixing $\alpha$, this mechanism automatically scales $\alpha$ within a range to adjust to different situations during training. According to our experiments, it improves the stability and robustness of GAC.
Let us construct a mechanism that automatically scales $\alpha$ according to the MMD entropy of policies. The original optimization problem~\eqref{equ:gacpolicyloss-mmd} can be changed into the following constrained problem,
\begin{small}
\begin{equation}
\label{equ:constrainedMMD}
\begin{aligned}
\max_{\theta_\pi}&\ \mathbb{E}_{s \sim \mathcal{D}, z \sim q} [Q(s, G(s, z;
\theta_\pi); \theta_Q)], \\
s.t.\ & MMD_{\mathcal{D}}(\theta_\pi) \le \beta.
\end{aligned}
\end{equation}
\end{small}
Using the method of Lagrange multipliers, the optimization problem can be converted into an unconstrained form,
\begin{small}
\begin{equation}
\label{equ:dualConstrainedMMD}
\begin{aligned}
\max_{\theta_\pi}\min_{\alpha \ge 0} &\mathbb{E}_{s \sim \mathcal{D},z \sim q} [Q(s, G(s, z; \theta_\pi); \theta_Q)]\\
& + \alpha(\beta - MMD_{\mathcal{D}}
(\theta_\pi)),
\end{aligned}
\end{equation}
\end{small}
which is similar to the problem \eqref{equ:gacpolicyloss-mmd}.
The problem~\eqref{equ:dualConstrainedMMD} can be solved by iteratively optimizing the following two problems:
\begin{small}
\begin{subequations}
\begin{numcases}{}
\label{equ:auto-policy-loss}
\max_{\theta_\pi} \mathcal{L}(\theta_\pi) = \mathbb{E}_{s \sim \mathcal{D}, z \sim q} [Q(s, G(s, z; \theta_\pi); \theta_Q)] \notag \\
\quad\quad - \alpha MMD_{\mathcal{D}}(\theta_\pi),\\
\label{equ:auto-alpha-loss}
\min_{\alpha \ge 0} \mathcal{L}(\alpha) = \alpha(\beta - MMD_{\mathcal{D}}(\theta_\pi)).
\end{numcases}
\end{subequations}
\end{small}
However, this mechanism with a fixed $\beta$ does not work in practice because it can dramatically limit the expressiveness of policies. In practice, the MMD entropy of the policy quickly reaches the fixed $\beta$ in the early stages of training because of the greedy preference of $\mathcal{L}(\theta_\pi)$. For the rest of training, the policy always stays near the boundary of this constraint set. This boundary restriction dramatically reduces the expressiveness of the push-forward policy and decreases the performance of GAC.
To overcome this problem, we devise a new mechanism with a tradeoff between $\alpha$ and $\beta$. Generally, the policy should gradually acquire knowledge during training, so its MMD entropy should increase gradually. Note that when $\beta$ is fixed, $\alpha$ will increase to counteract the rising trend of the MMD entropy in problem~\eqref{equ:constrainedMMD} during training. A large $\alpha$ thus means that $\beta$ is too small for the current training period, and the policy should increase its exploitation by increasing $\beta$. Hence, the mechanism needs a rule that $\beta$ increases when $\alpha$ is larger than an upper threshold. On the contrary, a small $\alpha$ means that $\beta$ is too large for the current training period, and the policy should increase its exploration by decreasing $\beta$. Therefore, the mechanism needs a rule that $\beta$ decreases when $\alpha$ is smaller than a lower threshold. To achieve these rules, we construct an objective for $\beta$ as follows\footnote{If $x > 0$, $\mathrm{sign}(x) = 1$; if $x = 0$, $\mathrm{sign}(x) = 0$; and if $x < 0$, $\mathrm{sign}(x) = -1$.}
\begin{equation}
\label{equ:beta-loss}
\min_{\beta > 0} \mathcal{L}(\beta) = \beta[\mathrm{sign}(\alpha_{\max} - \alpha) + \mathrm{sign}(\alpha_{\min} - \alpha)].
\end{equation}
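A sketch of the resulting $\beta$ update, matching the gradient step in algorithm~\ref{alg:gac}, is given below; the thresholds and step size are assumed hyperparameters.
\begin{verbatim}
# Gradient step on L(beta): beta grows when alpha exceeds
# alpha_max and shrinks when alpha falls below alpha_min.
import numpy as np

def update_beta(beta, alpha, alpha_min, alpha_max, step):
    grad = (np.sign(alpha_max - alpha)
            + np.sign(alpha_min - alpha))
    beta = beta - step * grad
    return max(beta, 1e-6)   # keep beta strictly positive
\end{verbatim}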
In our experience, the appropriate scale of the MMD-entropy regularizer differs across experimental settings~(such as random seeds) and training periods, so the results of $\alpha$-fixed experiments can be unstable. In contrast, the mechanism of this subsection introduces a robust adaptation into GAC by making a proper tradeoff between $\alpha$ and the MMD entropy of policies. As the experimental results show, this mechanism increases the stability and robustness of GAC.
Brief pseudocode of GAC is shown in algorithm~\ref{alg:gac}; the full pseudocode is given in the appendix.
\section{Introduction}
\label{ch:introduction}
Model-free deep reinforcement learning (DRL) has achieved great success in many domains, such as video games~\cite{mnih2013playing}, recommendation systems~\cite{chen2019top} and robotic control tasks~\cite{schulman2015trust}.
In this paper, we focus on tasks with continuous state and action spaces. Usually, to gain the exploration bonus, these tasks require a \emph{stochastic} policy, which is either a deterministic policy with additive Gaussian noise~\cite{silver2014deterministic,fujimoto2018addressing} or a parameterized Gaussian distribution~\cite{haarnoja2018soft}. However, in many complex environments, these kinds of policies are insufficient to provide effective exploration; for instance, uni-modal Gaussians cannot do well on tasks that require sampling from a multi-modal distribution~\cite{peyre2018computational,plappert2018multi,korenkevych2019autoregressive}. Thus, DRL needs more expressive policy functions that can model more complex action distributions.
Prior works have proposed a number of approaches to construct more expressive policies, such as \emph{autoregressive} policies~\cite{korenkevych2019autoregressive} and \emph{normalizing flow} policies~\cite{haarnoja2018latent}. The former is essentially a Gaussian distribution whose parameters depend on the current state and the previous actions it has taken. This history-dependent policy was originally proposed to produce consistent motion for safety concerns~\cite{korenkevych2019autoregressive}, and its expressiveness is still restricted to Gaussian distributions. A normalizing flow~\cite{dinh2016density} policy is based on bijective transformations that map a simple prior distribution to a complex posterior distribution~\cite{haarnoja2018latent}. The major concern is how to compromise between expressiveness and computational feasibility: although it has been very effective in some domains, estimating the target probability density is generally still expensive due to the Jacobian calculation of the inverse transformation~\cite{kobyzev2019normalizing}.
Another plausible alternative is to use a \emph{push-forward operator}~\cite{peyre2018computational}, concretely an injective transformation, as the basis of our policy function. The push-forward operator can be realized by a general-purpose neural network and can model essentially any distribution over the observed variables provided the network is large enough~\cite{peyre2018computational,haarnoja2018latent}. Note that push-forward operators have been widely used in Generative Adversarial Networks~(GANs)~\cite{goodfellow2014generative} and demonstrate superior expressiveness in modeling complex real-world distributions of images~\cite{gulrajani2017improved} and natural language~\cite{zhang2016generating}. Moreover, an injective transformation requires less computation than its bijective counterpart on a problem of comparable size, which is why we believe that a push-forward policy can strike a better balance between expressiveness and computation and thus substantially improve policies in large-scale settings.
The central challenge in equipping DRL with push-forward policies is that the probability density of actions generated by push-forward operators cannot be explicitly computed, which makes it impossible to integrate push-forward policies into numerous state-of-the-art methods, including stochastic-policy-gradient-based methods~\cite{sutton2000policy,schulman2015trust,schulman2017proximal} and energy-based methods~\cite{haarnoja2018soft,haarnoja2017reinforcement}, since they all require an explicit mathematical formulation of the action density.
Another barrier is the so-called \emph{mode-collapse} problem, which is widely observed in the research on GANs~\cite{mathieu2015deep,isola2017image} and also demonstrated in our empirical study (see section \ref{subsec:entropy}).
Note that we also call it \emph{random-exploration-disappearing} in this paper since, when mode collapse happens, the outputs of the transformation tend to concentrate on a limited set of actions, so exploration needs to be re-triggered.
To encourage active exploration, prior works in DRL adopt entropy regularization techniques~\cite{haarnoja2018soft,schulman2017equivalence,haarnoja2017reinforcement,nachum2017bridging},
which aim to capture not only the single deterministic behavior with the largest reward but the entire range of high-reward behaviors by explicitly maximizing the entropy of the corresponding policies~\cite{haarnoja2017reinforcement}. However, it is hard to leverage the entropy regularizer technique for push-forward policies, since it also requires computing the probability density of actions explicitly.
\paragraph{Contribution.} To tackle the above challenges, we construct a new density-free off-policy algorithm, generative actor-critic~(GAC). Based on the stochastic value gradient theorem~\cite{heess2015learning}, GAC successfully integrates push-forward policies into the actor-critic framework with a novel entropy-like regularizer based on maximum mean discrepancy~(MMD) to balance exploration and exploitation. Additionally, we devise an adaptive mechanism to automatically scale this regularizer, which further improves the stability and robustness of GAC.
We compare our algorithm with state-of-the-art off-policy algorithms by evaluating them on several continuous environments from the OpenAI Gym benchmark suite~\cite{brockman2016openai}. The empirical results show that our method outperforms these algorithms by a wide margin in \emph{sample efficiency} and \emph{asymptotic performance}, especially in high-dimensional environments such as Ant-v3, Humanoid-v3 and HumanoidStandup-v2.
\section{Preliminaries}
In this section, we introduce standard Markov decision processes, the basics of the push-forward operator, and the maximum mean discrepancy.
\subsection{Markov Decision Processes}
In this paper, we consider discounted, infinite-horizon \emph{Markov decision processes}~(MDPs) with continuous state and action spaces. An MDP is denoted by a tuple $MDP = (\mathcal{S}, \mathcal{A}, p_0, p, r, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $p_0(s)$ is the start state distribution of the environment, $p(s' \vert s, a)$ is the transition probability from the current state and action to the next state, $r(s, a)$ is the bounded reward function, and $\gamma$ is the discount factor. We use $\pi(a \vert s)$ to denote a \emph{Markov policy}, whose decision depends on the current state.
An MDP can be seen as a map that projects a \emph{Markov policy} to a \emph{Markov chain} whose state set is $\mathcal{S} \times \mathcal{A}$, with start state distribution $p_0(s, a) = p_0(s) \pi(a \vert s)$ and state transition probability $p(s', a' \vert s, a) = p(s' \vert s, a)\pi(a' \vert s')$. A trajectory of the Markov chain is $\tau = ((s_0, a_0), (s_1, a_1), \ldots, (s_t, a_t), \ldots)$, and the reward function projects a trajectory into a scalar sequence $(r(s_0, a_0), r(s_1, a_1), \ldots, r(s_t, a_t), \ldots)$. The $\gamma$-discounted cumulative function projects this sequence into a scalar, $(1 - \gamma)\sum_{t=0}^{\infty} \gamma^t r_t$. We can then evaluate a Markov policy $\pi$ by the function
\begin{small}
\begin{equation}
\rho(\pi) = \mathbb{E}_{\tau \sim MDP(\pi)} \left[(1 - \gamma)\sum^{\infty}_{t=0} \gamma^t r(s_t, a_t) \right].
\end{equation}
\end{small}
The core goal in MDPs is to find \emph{the optimal policy}
\begin{small}
\begin{equation}
\pi^* \in {\arg\max}_\pi \rho(\pi).
\end{equation}
\end{small}
We denote the \emph{q-function} as
\begin{small}
\begin{equation}
Q_{\pi}(s, a) = \mathbb{E}_{\tau \sim MDP(\pi)} \left[\sum^{\infty}_{t=0} \gamma^t r(s_t, a_t) \bigg\vert s_0 = s, a_0 = a\right],
\end{equation}
\end{small}
which is the expected cumulative reward of the policy's Markov chain conditioned on its initial state-action pair. The expected cumulative reward of policy $\pi$ can then be rewritten as
\begin{small}
\begin{equation}
\label{equ:valueofpolicy}
\rho(\pi) = \mathbb{E}_{s \sim p_0, a \sim \pi(\cdot \vert s)}\bigg[(1 - \gamma)Q_{\pi}(s, a)\bigg].
\end{equation}
\end{small}
The \emph{stochastic policy gradient theorem}~\cite{sutton2000policy} gives us a practical gradient of a parameterized policy's value in expectation form, by which we can directly optimize $\rho(\pi)$, no matter how complex the function approximator used as the policy is. Many on-policy policy gradient algorithms derive from the stochastic policy gradient theorem, such as Trust Region Policy Optimization~(TRPO)~\cite{schulman2015trust} and Proximal Policy Optimization~(PPO)~\cite{schulman2017proximal}. If we parameterize $\pi$ by $\theta_\pi$, the stochastic policy gradient is as follows.
\begin{theorem}{\cite{sutton2000policy}\textbf{(Stochastic Policy Gradient Theorem).}}
\label{thm:stochasticpolicygradient}
\begin{small}
\begin{equation}
\begin{aligned}
\nabla_{\theta_\pi} \rho(\theta_\pi) =& \mathbb{E}_{s \sim p_{\gamma}(\cdot; \theta_\pi), a \sim \pi(\cdot \vert s; \theta_\pi)}\\
&\bigg[\nabla_{\theta_\pi} \log\pi(a \vert s; \theta_\pi) \cdot Q(s, a; \theta_\pi)\bigg],
\end{aligned}
\end{equation}
\end{small}
where $p_{\gamma} (s;\theta_\pi) = (1 - \gamma)\sum_{t=0}^{\infty} \gamma^t
p_t(s;\theta_\pi)$ and $p_t(s;\theta_\pi)$ is the probability density of the
event that Markov chain's $t$th state is $s$.
\end{theorem}
The \emph{Bellman equation theorem}~\cite{puterman2014markov} establishes an important property of $Q_{\pi}(s, a)$, and the \emph{optimal Bellman equation theorem}~\cite{puterman2014markov} establishes the corresponding property of $Q_{\pi^*}(s, a)$.
\begin{theorem}{\textbf{(Bellman Equation Theorem).}}
\label{thm:bellmanequation}
Given a Markov decision process $MDP = (\mathcal{S}, \mathcal{A}, p_0, p, r, \gamma)$ and a Markov policy $\pi(a \vert s)$, the q-function $Q_{\pi}(s, a)$ is the only function that satisfies the Bellman equation: $\forall (s, a) \in \mathcal{S} \times \mathcal{A}$,
\begin{small}
\begin{equation}
Q(s, a) = r(s, a) + \gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a), a' \sim \pi(\cdot \vert s')}[Q(s', a')].
\end{equation}
\end{small}
\end{theorem}
\begin{theorem}{\textbf{(Optimal Bellman Equation Theorem).}}
\label{thm:optimalbellmanequation}
Given a Markov decision process $MDP = (\mathcal{S}, \mathcal{A}, p_0, p, r, \gamma)$ and the optimal policy $\pi^*$, the q-function $Q_{\pi^*}(s, a)$ is the only function that satisfies the optimal Bellman equation: $\forall (s, a) \in \mathcal{S} \times \mathcal{A}$,
\begin{small}
\begin{equation}
Q(s, a) = r(s, a) + \gamma \mathbb{E}_{s' \sim p(\cdot \vert s, a)}[\max_{a'} Q(s', a')].
\end{equation}
\end{small}
\end{theorem}
\subsection{Push-forward Operator}
In this subsection, we introduce the \emph{push-forward operator}~\cite{peyre2018computational} which is used to construct our policy. Here is the definition of the push-forward operator.
\begin{definition}{\textbf{(Push-forward Operator).}}
\label{push-forward-operator}
For a continuous map $T: \mathcal{X} \rightarrow \mathcal{Y}$, we define its corresponding push-forward operator $T_{\sharp}: \mathcal{M}({\mathcal{X}}) \rightarrow \mathcal{M}({\mathcal{Y}})$ by $(T_{\sharp}\mu)(B) = \mu(T^{-1}(B))$ for every measurable set $B \subseteq \mathcal{Y}$, where $\mathcal{M}(\mathcal{X})$ denotes the set of probability measures on $\mathcal{X}$.
\end{definition}
As the definition shows, the push-forward operator transforms samples from a distribution $\Delta(\mathcal{X}) \in \mathcal{M}(\mathcal{X})$ into samples from a possibly more complex distribution $\Delta({\mathcal{Y}}) \in \mathcal{M}(\mathcal{Y})$. For any continuous function $h(y)$, the push-forward operator guarantees the following change-of-variables identity.
\begin{lemma}
\label{lem:push-forward}
For a continuous map $T:\mathcal{X} \rightarrow \mathcal{Y}$ and a distribution $\Delta(\mathcal{X})$,
\begin{small}
\begin{equation}
\int_x h(T(x)) \mathrm{d} \Delta(x) = \int_y h(y) \mathrm{d} \Delta(y),
\end{equation}
\end{small}
where $\Delta(\mathcal{Y}) = T_{\sharp} (\Delta(\mathcal{X}))$.
\end{lemma}
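A minimal numerical illustration of this identity, with an assumed map $T(x) = \tanh(2x)$ and test function $h(y) = y^2$: expectations under the push-forward distribution can be estimated simply by transforming samples of the base distribution, without ever knowing the density of $T_{\sharp}\Delta$.
\begin{verbatim}
# Monte Carlo check of the change-of-variables identity.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)   # x ~ Delta(X) = N(0,1)
y = np.tanh(2 * x)                   # y ~ T_sharp Delta(X)
lhs = np.mean(np.tanh(2 * x) ** 2)   # int h(T(x)) dDelta(x)
rhs = np.mean(y ** 2)                # int h(y)    dDelta(y)
# lhs == rhs: both Monte Carlo sums are term-by-term equal,
# which is exactly what the lemma states.
\end{verbatim}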
This property is the foundation of the \emph{stochastic value policy gradient theorem}~\cite{heess2015learning}, which is used to construct our algorithm, GAC.
\subsection{Maximum Mean Discrepancy}
The \emph{maximum mean discrepancy}~(MMD)~\cite{gretton2007kernel} distinguishes two distributions using samples drawn from them.
\begin{definition}{\cite{gretton2007kernel}\textbf{(Maximum Mean Discrepancy).}}
\label{def:mmd}
Let $\mathcal{F}$ be a class of functions $f:\mathcal{X} \rightarrow \mathbb{R}$, where $\mathcal{X}$ is some measure space. Then the maximum mean discrepancy between two distributions $p$ and $q$ is
\begin{small}
\begin{equation}
MMD(p, q) = \sup_{f \in \mathcal{F}} (\mathbb{E}_{x \sim p}[f(x)] -
\mathbb{E}_{y \sim q}[f(y)]).
\end{equation}
\end{small}
\end{definition}
If $\mathcal{F}$ is the unit ball in a \emph{reproducing kernel Hilbert space}~(RKHS) $\mathcal{H}$ defined on a compact metric space $\mathcal{X}$, then the MMD has a more easily computable form that yields a practical estimator. Let $X$ denote a tuple of $m$ samples from $p$, $Y$ a tuple of $n$ samples from $q$, and $k(x, y)$ the kernel of $\mathcal{H}$. Then $MMD(p, q)$ can be approximated by the following estimator.
\begin{small}
\begin{equation}
\begin{aligned}
MMD(X, Y) =&
\bigg[
\frac{1}{m^2} \sum^{m}_{i,j=1} k(x_i, x_j)
+ \frac{1}{n^2} \sum^{n}_{i, j=1} k(y_i, y_j)\\
&- \frac{2}{mn} \sum^{m, n}_{i, j=1} k(x_i, y_j)
\bigg]^{\frac{1}{2}}.
\end{aligned}
\label{equ:mmd-estimator}
\end{equation}
\end{small}
Two kinds of kernels are commonly used in the MMD: one is the Gaussian kernel, defined as $k(x, y) = \exp({-{\Vert x - y \Vert^2}/{2\sigma^2}})$; the other is the energy kernel, defined as $k(x, y) = - \Vert x - y \Vert^2$. In GAC, the energy kernel is preferred, as it is more stable than the Gaussian kernel in our experiments.
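A direct sketch of the estimator in equation~\eqref{equ:mmd-estimator} with both kernels is given below (NumPy is assumed; the clamp guards against small negative values from sampling noise).
\begin{verbatim}
# Empirical MMD between two sample sets X (m x d) and
# Y (n x d), following equation (mmd-estimator).
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, -1)
    return np.exp(-d2 / (2 * sigma ** 2))

def energy_kernel(x, y):
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, -1)
    return -d2                 # as defined in the text

def mmd(X, Y, kernel=energy_kernel):
    m, n = len(X), len(Y)
    val = (kernel(X, X).sum() / m ** 2
           + kernel(Y, Y).sum() / n ** 2
           - 2 * kernel(X, Y).sum() / (m * n))
    return np.sqrt(max(val, 0.0))
\end{verbatim}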
\end{document}
|
{
"timestamp": "2021-05-11T02:13:18",
"yymm": "2105",
"arxiv_id": "2105.03733",
"language": "en",
"url": "https://arxiv.org/abs/2105.03733"
}
|
\section{Introduction}
\label{sec_intro}
Most solar flares emanate from active regions (ARs) -- sites where strong magnetic fields penetrate from the convection zone into the solar atmosphere. It is widely accepted that the energy released during solar flares is stored in the form of free magnetic energy caused by the non-potentiality of magnetic fields \citep[e.g.][]{Melrose1991, Schrijver2005, Fisher2015}.
Statistical studies reveal that the strongest flares mostly occur in ARs with a complex configuration of the magnetic field \citep[][to mention a few]{Mayfield1985, Sammis2000, Toriumi2017a}. For example, \citet{Guo2014} analysed 3334 ARs observed between 1983 and 2011 and found that 88\% of X-class flares were produced by ARs with a $\beta \gamma \delta$ configuration according to the Mount Wilson classification \citep{Hale1919, Kunzel1960, Kunzel1965}. Recall that in a $\delta$-sunspot two umbrae of opposite magnetic polarities are located within a common penumbra and separated by less than 2 degrees \citep{Kunzel1960}. Note, however, that \citet{Zirin1987} argued that the presence of a $\delta$-configuration alone is not sufficient to produce a strong flare and that both polarities within the $\delta$-spot must be ``substantial''.
Flare-productive ARs also frequently display patterns of strong, high-gradient magnetic fields along the highly-sheared polarity inversion line \citep[e.g.][]{Zirin1987, Schrijver2007}. A comprehensive description of morphology and other properties of flare-productive ARs can be found in the recent review by \citet[][]{Toriumi2019} and references therein.
Since ARs with $\delta$-spots are the most plausible candidates to produce strong flares, a lot of attention has been paid to the formation of such magnetic structures from both observational and theoretical points of view. Thus, \citet{Zirin1987} considered the formation and evolution of 21 flare-productive $\delta$-spot groups observed at the Big Bear Solar Observatory. They concluded that there exist three types of $\delta$-spot formation. First, $\delta$-spots may be formed as a result of the simultaneous emergence of one or several magnetic dipoles at one place. Such structures are very compact and exhibit a large umbra. The second type is the emergence of a new dipole in the penumbra of a large pre-existing spot. The size of the newly-emerged dipole must be comparable to that of the pre-existing spot in order to produce strong flares. The third type of $\delta$-spot formation is the collision of the leading part of one dipole with the following part of another dipole during the growth of both dipoles. The $\delta$-spots of this type are less flare-prolific than the aforementioned ones. \citet{Zirin1987} also stated that, once formed, the polarities of a $\delta$-spot never separate. In addition, the orientation of the opposite polarities in $\delta$-spots often disobeys Hale's polarity law \citep{Hale1925}.
\citet{Toriumi2017a} analysed the morphology and other properties of 29 ARs observed between 2010 and 2016. Each AR produced at least one flare stronger than the M5.0 level while located within 45 degrees of the central meridian. More than 80\% of the ARs exhibited a $\delta$-structure and at least 3 ARs disobeyed Hale's polarity law. The authors proposed four scenarios for the formation of flare-productive ARs:
\begin{enumerate}
\item A complex compact AR with a $\delta$-spot is formed via the emergence of a single magnetic dipole or of several dipoles. The distinguishing feature of these ARs is a long polarity inversion line that extends through the entire AR. The formation of these ARs might be associated with the emergence of a highly-twisted magnetic flux bundle subject to kink instability \citep[e.g.][]{Linton1996, Tanaka1991, Knizhnik2018}. The possible anti-Hale orientation of such ARs was explained in a simulation of twisted flux tube emergence by \citet{Fan1999}. This scenario resembles the first type of $\delta$-spot formation according to \citet{Zirin1987}. In total, 11 of the 29 ARs were attributed by \citet{Toriumi2017a} to this category.
\item A complex magnetic structure forms as a result of a new magnetic dipole emerging in the close vicinity of a pre-existing spot. The newly-emerged dipole, labelled ``satellite'' by \citet{Toriumi2017a}, is usually smaller than the pre-existing spot. This scenario refers to the second type of $\delta$-spot formation proposed by \citet{Zirin1987}, and 15 out of 29 ARs belonged to this category. \citet{Toriumi2017a} suggested that the minor magnetic flux bundle might be connected to the main spot below the surface or might be trapped by the main tube during its rise through the convection zone. This case, as well as all other proposed scenarios, was successfully modelled by \citet{Toriumi2017b}, see also simulations by \citet{Jouve2018, Cheung2019}.
\item The so-called ``quadrupole'' scenario \citep[type 3 in][]{Zirin1987}, when two magnetic dipoles emerge close to each other simultaneously and the following spot of the western dipole approaches the leading spot of the eastern dipole, forming a common penumbra.
An excellent example of such a structure is the well-studied NOAA AR 11158 \citep[e.g.][]{Sun2012, Vemareddy2012}. The dipoles forming the AR might be two emerging parts of the same magnetic flux tube: the rising $\Omega$-tube might split into two segments below the surface \citep{Toriumi2014}, or we observe two rising $\Omega$-segments of a coherent emerging tube \citep{Fang2015, Toriumi2017b, Syntelis2019}.
\item A strong flare occurs between two close ARs \citep[an ``inter-AR'' event as labelled by][]{Toriumi2017a}. This case is similar to the ``quadrupole'' scenario except that the interacting ARs are supposed to be disconnected below the photosphere. There were only two inter-AR events (NOAA ARs 11944 and 12173) in the sample analysed by \citet{Toriumi2017a}.
\end{enumerate}
A reasonable inference from the above review is that magnetic flux emergence plays a key role some time prior to a strong flare. After the thorough study published by \citet{Schrijver2005}, this hypothesis became a widely accepted concept. The authors analysed 95 ARs and concluded that non-potentiality in the corona (i.e. the electric currents feeding a flare) is driven by complex and large-scale emergence of magnetic flux. The electric currents emerging with the magnetic flux presumably decay within $\sim$10--30 hours; therefore, a solar flare is expected to occur within this time interval.
Although the emergence of magnetic flux has been studied extensively in conjunction with the appearance of strong flares and eruptions \citep[see, e.g., a review by][]{Schrijver2009}, much less attention has been paid to the flux emergence rate of flare-productive ARs. Numerical simulations predict relatively high values of the flux emergence rate when considering the emergence of a highly-twisted magnetic tube presumably forming a $\delta$-spot due to kink instability \citep[e.g.][]{Toriumi2017b, Knizhnik2018}. As for observations, \citet{Giovanelli1939} was probably the first to show that emerging ARs with a higher area growth rate (equivalent to a higher flux emergence rate) exhibit a higher probability of flare occurrence.
In the present study, we aim to re-examine the statement that prior to a strong flare there is always a large emergence of new magnetic flux.
Note that in our previous paper \citep{Kutsenko2021}, we moved from emergence to flaring. Namely, we detected all ARs that emerged on the solar disc between 2010 and 2017. Out of the 243 detected ARs, there were only 34 ARs with noticeable flaring and only one of them produced an X-class flare (NOAA 11158), so that the comparison between emergence and flaring was possible only for a scanty data sample. We found that the faster an AR emerges, the higher its flare index is (the Pearson correlation coefficient reaches 0.74).
In contrast to that study, in the present work we proceed in the opposite direction: from flaring to emergence. We selected ARs (observed between 1996 and 2017) that produced at least one flare stronger than the M5.0 level and investigated the pre-flare behaviour of the ARs' magnetic flux.
Our goal was to reveal (i) whether strong solar flares are always preceded by magnetic flux emergence and (ii) whether flare-productive ARs exhibit higher flux emergence rates as compared to flare-quiet ARs. We hope that this study will contribute to the problem of solar flare predictions.
\section{Data}
\label{sec_method}
Since our goal is to analyse AR magnetic flux variations associated with solar flares, we need uninterrupted data on AR magnetic fields available at a relatively high cadence. The first data source we utilized (1996-2010) was the {\it Michelson Doppler Imager} on board the {\it Solar and Heliospheric Observatory} \citep[SOHO/MDI,][]{Scherrer1995}. SOHO/MDI is a full-disc filtergraph that observes the photospheric Ni~\textsc{i} 6768 \AA\ spectral line at four wavelength positions. The magnetic field is proportional to the difference between the spectral line centres in right- and left-circular polarizations. The derived magnetic field maps are 1024$\times$1024-pixel line-of-sight magnetograms with a pixel size of 2$\times$2 arcsec\textsuperscript{2}. The data are available at a 96-minute cadence.
The second source of magnetic field data (2010-2017) was the {\it Helioseismic and Magnetic Imager} \citep{Schou2012} on board the {\it Solar Dynamics Observatory} \citep[SDO/HMI,][]{Pesnell2012}. SDO/HMI observes the Fe~\textsc{i} 6173 \AA\ photospheric spectral line at six wavelength positions to derive solar magnetic field maps. The size of the full-disc maps is 4096$\times$4096 pixels with a pixel size of 0.5$\times$0.5 arcsec$^{2}$. Although SDO/HMI data on the full magnetic field vector are available, we decided to use SDO/HMI line-of-sight magnetograms at 12-minute cadence for the following reasons. First, to treat SOHO/MDI and SDO/HMI data with the same procedures, ensuring the homogeneity of the derived magnetic flux values. Second, to ensure the consistency of the data reduction techniques applied in this work and in our previous works on AR emergence \citep{Kutsenko2019, Kutsenko2021}, enabling a comparison of the results. The SDO/HMI algorithm for the line-of-sight magnetic field calculations is similar to that for SOHO/MDI data.
For the analysis we selected ARs that produced M5.0 or stronger flares while the AR was located inside the longitudinal interval (-35, +65)$\degr$. We suppose that the AR magnetic flux can be reliably estimated when the AR is no farther than 60$\degr$ from the central meridian. To ensure 2-3 days of reliable flux measurements before the flare onset, we have to shift the Eastern longitudinal limit to the West by approximately 25$\degr$.
For each AR we compiled the data cube of magnetograms spanning the time interval of the AR's passage across the solar disc.
For emerging ARs we also tracked the quiet-Sun area where the AR was to appear. Prior to the calculation of the total unsigned magnetic flux, each magnetogram was $\mu$-corrected for projection \citep{Leka2017}: the magnetic flux density in each magnetogram pixel was divided by the cosine of the angle $\mu$ between the line-of-sight and the vector pointing from the centre of the Sun to the pixel. According to \citet{Leka2017}, the per-pixel $\mu$-correction improves the estimation of the total unsigned magnetic flux from line-of-sight magnetograms.
The total unsigned magnetic flux was calculated as $\Phi = \sum |B_{i}|\Delta s_i$, where $B_{i}$ and $\Delta s_i$ are the $\mu$-corrected magnetic flux density and area of the $i^{th}$ pixel, respectively. The summation was performed over pixels where the magnetic flux density exceeded the triple noise level $B_{noise}$. The noise level is $B_{noise}=6$ Mx~cm\textsuperscript{-2} for SDO/HMI and $B_{noise}\approx18$ Mx~cm\textsuperscript{-2} for SOHO/MDI magnetograms \citep{Liu2012}. Consequently, the threshold $B_{th}$ was set to 18 Mx~cm\textsuperscript{-2} for SDO/HMI and to 50 Mx~cm\textsuperscript{-2} for SOHO/MDI data. The uncertainty of the calculated unsigned magnetic flux was estimated as $\Phi_{error} = nB_{th}\Delta s$, where $n$ is the number of pixels over which the calculation of the magnetic flux was performed.
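For illustration, a minimal NumPy sketch of this flux estimate is given below; the array names, the map of $\cos\mu$ values, and a uniform pixel area are assumptions for the sketch.
\begin{verbatim}
# Total unsigned flux from a LOS magnetogram (sketch).
import numpy as np

def total_unsigned_flux(blos, cos_mu, pixel_area, b_th):
    """blos: LOS flux density map [Mx cm^-2]; cos_mu: cosine
    of the heliocentric angle per pixel; pixel_area [cm^2];
    b_th: threshold (~triple noise level, e.g. 18 for HMI)."""
    b = blos / cos_mu                 # mu-correction
    mask = np.abs(b) > b_th           # discard noisy pixels
    phi = np.sum(np.abs(b[mask])) * pixel_area
    phi_err = mask.sum() * b_th * pixel_area   # n*B_th*ds
    return phi, phi_err
\end{verbatim}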
The use of SOHO/MDI and SDO/HMI data allowed us to span Solar Cycles 23 and 24, covering the time interval between 1996 and 2017, and to obtain a large sample of flare-productive ARs. However, the instruments use different spectral lines and have different spatial resolutions. Therefore, in order to obtain consistent, homogeneous estimations of the magnetic flux and flux emergence rate, we performed a cross-calibration between SOHO/MDI and SDO/HMI.
Fig.~\ref{figure_method_cc} shows SOHO/MDI versus SDO/HMI total magnetic fluxes for ten randomly selected ARs observed between 2010 May and 2011 February. Each point in the plot corresponds to the magnetic flux of a particular AR (coded with different colours) at moments sampled at a 96-minute cadence. SOHO/MDI and almost co-temporary SDO/HMI magnetograms of the same region of the solar surface were used to calculate the data points. A linear fit to the distribution yields the relationship between SOHO/MDI and SDO/HMI magnetic fluxes $\Phi_{MDI} = 1.42~\Phi_{HMI}$. The slope of the fitting line, 1.42, is consistent with the value of 1.40 found by \citet{Liu2012}, who compared magnetic flux densities in SOHO/MDI and processed SDO/HMI magnetograms. In the rest of this work, SOHO/MDI magnetic fluxes are divided by a factor of 1.42 for consistency with the SDO/HMI data.
\begin{figure}
\includegraphics[width=\columnwidth]{figure_method_cc.eps}
\caption{Comparison of the total unsigned magnetic flux calculated using SOHO/MDI and SDO/HMI line-of-sight magnetograms. The magnetic fluxes were calculated from almost co-temporary SOHO/MDI and SDO/HMI magnetograms for ten ARs observed between 2010 June and 2011 February. NOAA numbers of ARs are shown in the plot. Black thick line shows the best linear fit to the data points.}
\label{figure_method_cc}
\end{figure}
The total unsigned magnetic flux variations of eight ARs are shown in Fig.~\ref{figure_method_fluxes}. Diurnal oscillations of the magnetic flux in panels (c), (f), and (h) are caused by well-known artefacts of the SDO/HMI instrument \citep[e.g.][]{Liu2012, Kutsenko2016}. We did not take any precautions against this artefact since its influence on the derived values is minor. Error bars in each panel show typical errors of the magnetic flux measurements.
For ARs with a well-pronounced emergence stage, we fitted the increasing and peak portions of the magnetic flux curve with a continuous piecewise linear function \citep{Kutsenko2017} (green lines in panels (a)--(d) and (g) of Fig.~\ref{figure_method_fluxes}). The fitting function had two segments, one of which (the left-hand one in Fig.~\ref{figure_method_fluxes}(a-d, g)) can have an arbitrary slope, while the second segment had to be horizontal. The slope of the left-hand segment yields the flux emergence rate, $R_{av}$, of an AR. The error of the flux emergence rate was evaluated as the uncertainty of the slope of the linear fit. The level of the second segment corresponds to the peak magnetic flux, $\Phi_{max}$, of the AR. Note that this fitting was performed only for those magnetic flux curves where (i) a clear emergence was observed in both magnetograms and continuum images, and (ii) the amount of newly emerged flux exceeded the uncertainty of the magnetic flux estimation.
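A minimal sketch of such a ramp-plateau fit with SciPy is given below; the time and flux arrays and the initial guesses are assumptions for the sketch.
\begin{verbatim}
# Two-segment continuous piecewise-linear fit: a rising
# segment with slope R_av reaching a horizontal segment
# at Phi_max at time t_break.
import numpy as np
from scipy.optimize import curve_fit

def ramp_plateau(t, r_av, t_break, phi_max):
    # phi(t) = phi_max + r_av*(t - t_break) for t < t_break,
    #          phi_max                      otherwise
    return phi_max + r_av * np.minimum(t - t_break, 0.0)

# t_hours, flux: assumed 1-D arrays of the flux curve
popt, pcov = curve_fit(ramp_plateau, t_hours, flux,
                       p0=[1e20, np.median(t_hours),
                           flux.max()])
r_av, t_break, phi_max = popt
r_av_err = np.sqrt(pcov[0, 0])   # slope uncertainty
\end{verbatim}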
Vertical lines in Fig.~\ref{figure_method_fluxes} denote the moments when M (yellow) and/or X (red) class flares occurred in the AR. The data on soft X-ray flare magnitudes taken by {\it Geostationary Operational Environmental Satellites} (GOES) were retrieved from National Centers For Environmental Information of NOAA\footnote{available at https://www.ngdc.noaa.gov/stp/space-weather/solar-data/}.
To quantify the flaring productivity of an AR, we calculated a flare index (FI) introduced in \citet{Abramenko2005}:
\begin{equation}
\mathrm{FI} = (100S^{(X)} + 10S^{(M)} + 1.0S^{(C)} + 0.1S^{(B)})/\tau,
\label{eq_fi}
\end{equation}
where $S^{(j)} = \sum_{i=1}^{N_{j}} I_{i}^{j}$ is the sum of \textit{GOES} magnitudes of flares of class $j$, $N_{j}$ is the number of flares of that class, and $\tau$ is the total time interval of AR observation in days. For ARs that passed across the solar disc from the Eastern to the Western limb, $\tau$ was set to half a period of solar rotation, $\tau = 13.5$ days. For emerging ARs, $\tau$ was set to the actual interval of the AR's presence on the disc.
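For clarity, a short sketch of this computation follows; the list of flares and its format are illustrative.
\begin{verbatim}
# Flare index (equation 1): GOES-class-weighted sum of
# flare magnitudes per day of observation.
def flare_index(flares, tau_days):
    # flares: list of (goes_class, magnitude) pairs,
    # e.g. ("M", 5.6) for an M5.6 flare
    weight = {"X": 100.0, "M": 10.0, "C": 1.0, "B": 0.1}
    s = sum(weight[c] * mag for c, mag in flares)
    return s / tau_days

# Example: one X1.2 and one M7.2 flare over 13.5 days
fi = flare_index([("X", 1.2), ("M", 7.2)], 13.5)  # ~14.2
\end{verbatim}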
\begin{figure*}
\includegraphics[width=\linewidth]{figure_method_fluxes.eps}
\caption{
Variations of the total unsigned magnetic flux in NOAA ARs 10720 (a), 10314 (b), 11166 (c), 09236 (d), 12017 (e), 11877 (f), 10696 (g), 11520 (h). Vertical lines mark M-class (yellow) or X-class (red) flares that occurred in the AR. Where applicable, the green line shows the best continuous piecewise linear fit to the emergence and peak portions of the flux curve (see text). Typical errors of the flux measurements are shown in each panel.
}
\label{figure_method_fluxes}
\end{figure*}
Besides magnetograms and continuum images, we also utilized UV images acquired by the Extreme-Ultraviolet Imaging Telescope \citep{Delaboudiniere1995} on board SOHO and by the Atmospheric Imaging Assembly \citep{Lemen2012} on board SDO to analyse magnetic connections and to locate flares within ARs.
\section{Results}
\label{sec_results}
In total, in this work we analysed 100 ARs that produced strong flares (stronger than M5.0) and were observed between 1996 July and 2017 September. We found that the total number of ARs that produced M5.0 or stronger flares during this time interval exceeds 150; however, we discarded about one third of them owing to either the selection rules described above or gaps in the magnetic data.
The list of ARs and their parameters is presented in Table~\ref{table_ar_list} in the Appendix. The common property of all ARs was the presence of opposite magnetic polarities located in close vicinity to each other, although for some of them these polarities were too weak to form pores or spots. Only 11 of the 100 ARs formally exhibited no $\delta$-spots. In three cases a complex magnetic structure was formed as a result of the interaction between two distinct ARs, namely in NOAA ARs 08647, 08674, and 09893 (see the last column in Table~\ref{table_ar_list}).
\subsection{Relationship between flaring and flux emergence}
\label{sec_types}
A thorough analysis of the magnetic flux variations, magnetic field maps, and continuum images of the flare-productive ARs in our set allowed us to conclude that these ARs can be separated into four subsets. These subsets are not directly related to the AR morphology as in \citet{Zirin1987} or \citet{Toriumi2017a}; rather, ARs in different subsets exhibit different behaviour of the magnetic flux emergence prior to or during the strongest flares. We will refer to this behaviour as the type of emergence, and the type assigned to each AR is shown in column 10 of Table~\ref{table_ar_list}. These types are as follows:
\begin{enumerate}
\item Type I: A complex magnetic structure forming a flare-productive AR emerges amidst a quiet-Sun area. The total magnetic flux increases monotonically without significant interruptions in the growth. Usually the magnetic structure with $\delta$-spots is formed as a result of the emergence of multiple interacting magnetic dipoles. Examples of the total magnetic flux variations of such ARs are shown in panels (a), (b), and (g) of Fig.~\ref{figure_method_fluxes}. Strong flaring (M-class flares) may start during the initial phases of emergence, as in Fig.~\ref{figure_method_fluxes}g. However, most commonly M- and X-class flares occur as the AR reaches its peak magnetic flux (Fig.~\ref{figure_method_fluxes}a, b). NOAA ARs 11429 and 11158 can be attributed to this type. The number of ARs assigned to this type of emergence behaviour was 29 out of 100.
\item Type II: A complex magnetic structure is formed as a result of the emergence of new magnetic flux within the area of a pre-existing AR. Moreover, the unsigned magnetic flux of the newly injected magnetic structure exceeds the uncertainty of the total magnetic flux measurements, i.e. a considerable amount of new magnetic flux (compared to the magnetic flux of the pre-existing structure) appears on the surface. In most cases, the AR exhibits no significant flaring prior to the emergence of the new flux.
Examples of the magnetic flux evolution of this type of AR are shown in panels (c) and (d) of Fig.~\ref{figure_method_fluxes}. The magnetic flux of the AR remains nearly constant at a certain non-zero level during the initial phase of observations. Then, the total unsigned flux increases as a result of a new emergence, and the flaring activity of the AR increases significantly as well. Probably the best-known ARs of this type are NOAA ARs 10930 and 12673. NOAA AR 10930, observed in 2006 December, exhibited strong activity near the Eastern limb. It produced two X-class flares and started to decay as a unipolar sunspot surrounded by small magnetic features of both polarities. An emergence of a fast-rotating satellite occurred near the main polarity spot \citep[e.g.][]{Zhang2007}. The formation of a highly-sheared $\delta$-sunspot led to the production of two more X-class flares \citep[e.g.][]{Inoue2012}. NOAA AR 12673 appeared at the Eastern limb in 2017 August as a decaying flare-quiet unipolar sunspot. An intense emergence of new magnetic flux around the sunspot started on 2017 September 03 and lasted for several days. The AR produced a series of M- and X-class flares including the strongest X9.3-class flare of Solar Cycle 24.
The evolution of continuum intensity and longitudinal magnetic field of one more AR of this type, namely NOAA AR 09236, is shown in Fig.~\ref{figure_09236}. The variations of the magnetic flux are shown in Fig.~\ref{figure_method_fluxes}d. The AR was a flare-quiet bipolar magnetic structure prior to 2000 November 22 (Fig.~\ref{figure_09236}a). A quite intense emergence began around the leading polarity on this date (Fig.~\ref{figure_09236}b). An interesting feature of the emergence is the formation of a circular symmetrical rim of moving magnetic features around the leading sunspot (Fig.~\ref{figure_09236}c). Three X-class flares occurred in the AR during the emergence, which can be seen in Fig.~\ref{figure_method_fluxes}d. Two more X-class flares were produced by the formed $\delta$-sunspot \citep[Fig.~\ref{figure_09236}d, see also][]{Park2013}.
For ARs of type II, in the sixth column of Table~\ref{table_ar_list} (in addition to the peak magnetic flux, $\Phi_{max}$), we also list the AR magnetic flux, $\Phi_{min}$, measured prior to the observed emergence onset. There are 24 ARs of type II in our sample. In several cases, when an emerging AR appeared at the Eastern limb, we were unable to determine whether the AR emerged amidst a quiet-Sun area or whether an additional emergence occurred in an ``old'' AR. These ARs were therefore assigned to a mixed type I/II. However, the number of such events is only six, and we suppose that this uncertainty did not affect the results significantly.
\item Type III: Although emergence is observed within a pre-existing magnetic structure, the injected flux is negligible compared to the total magnetic flux of the structure. This type resembles the ``spot-satellite'' scenario in \citet{Toriumi2017a}. Typical variations of the total magnetic flux of type III ARs are shown in panels (e) and (f) of Fig.~\ref{figure_method_fluxes}. An injection of new magnetic flux prior to flares that is insignificant compared to the uncertainty can be seen in the figures.
We assume that the emergence of the satellite may play various roles in the flaring. First, the emergence may result in the formation of a small or moderate $\delta$-sunspot that will produce a strong flare. A large amount of newly-injected flux is not necessary to produce an X-class flare. For instance, NOAA AR 12017 exhibited one of the lowest peak total magnetic fluxes in our sample, about 8$\times$10$^{21}$ Mx (Fig.~\ref{figure_method_fluxes}e). The emergence of 1$\times$10$^{21}$ Mx of additional magnetic flux led to the appearance of a compact small $\delta$-sunspot that produced the X1.0 flare on 2014 March 29 at 17:48 UT (Fig.~\ref{figure_12017}).
Second, we suppose that the emergence of a small magnetic dipole within a pre-existing AR may trigger a flare. The third possibility is that such an emergence is irrelevant to the oncoming flare. Unfortunately, modern data and numerical resources are not sufficient to decide between these possibilities for a given small flux emergence event. As an example, let us consider the case of NOAA AR 11944, where the emergence of a new magnetic structure is shown in Fig.~\ref{figure_11944}. The AR exhibited a complex mixed-polarity structure in the following part. Two strong flares (M7.2 and X1.2) were produced by the AR on 2014 January 07. Our focus is the inter-AR X1.2 flare between the strong coherent leading part and the dispersed decayed following part of the neighbouring AR 11943. A small magnetic dipole started to emerge between the leading and following parts of NOAA AR 11944 on 2014 January 06 (shown by the red arrow in the upper panels of Fig.~\ref{figure_11944}). The Eastern footpoint of the X1.2-flare ribbon was located 10-20 Mm away from the newly-emerged dipole (white arrow in the lower panel (c) of Fig.~\ref{figure_11944}). The dipole could have played some role in triggering the flare; however, without additional information from numerical simulations and magnetographic measurements at higher levels, it seems impossible to decide.
A similar emergence of a new magnetic dipole was observed in NOAA AR 12192, the largest and one of the most flare-productive ARs of Solar Cycle 24. Again, it is impossible to determine the exact influence of the emerging structures on the triggering of flares. In our opinion, in a certain number of cases these emerging structures do trigger strong flares. In total, 30 out of 100 ARs were identified as type III ARs.
\item Type IV: No clear signatures of emergence were observed during the entire interval of observations (i.e., for at least several days prior to the strongest flare occurrence). An example of the magnetic flux variations of the type IV NOAA AR 11520 is shown in Fig.~\ref{figure_method_fluxes}h. The AR exhibited an almost constant magnetic flux. A slight increase of the magnetic flux is evident at the beginning of the interval. However, we attribute this increase to the projection effect: no signatures of emergence were observed in the magnetograms or continuum images at that time. The X-class flare occurred three days after that increase. Moreover, flares may occur during the decaying phase of the AR evolution, see the second (right-hand) X-class flare in Fig.~\ref{figure_method_fluxes}g. Magnetograms of several type IV ARs are shown in Fig.~\ref{figure_type_iv}. Each magnetogram shows the presence of interacting opposite magnetic polarities within the AR. Mutual motions of sunspots, resulting in shearing of the magnetic field and the formation of $\delta$-sunspots, were observed in these ARs. There are 11 type IV ARs in our sample.
We must conclude that, once formed by any suitable mechanism, an AR with a substantial $\delta$-structure retains its high flaring potential as long as the $\delta$-structure exists: no additional emergence is required to maintain the high flare activity.
\end{enumerate}
\begin{figure*}
\includegraphics[width=\linewidth]{figure_09236.eps}
\caption
{
An example of Type II emergence: Evolution of continuum intensity (upper panels) and of longitudinal magnetic field (lower panels) of NOAA AR 09236. The time stamps are shown in each panel. An intense emergence started in the AR on 2000 November 22 (panels b-d) resulting in the formation of a flaring $\delta$-structure (panel d). The field-of-view of the maps is 125~arcsec$\times$75~arcsec. Magnetograms are scaled from -1000 Mx~cm$^{-2}$ (black) to 1000 Mx~cm$^{-2}$ (white).
}
\label{figure_09236}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{figure_12017.eps}
\caption{
An example of Type III emergence: Evolution of continuum intensity (upper panels), longitudinal magnetic field (middle panels), and SDO/AIA 1600 \AA\ intensity (lower panels) of NOAA AR 12017. The time stamps are shown in each panel. White contours in the lower panels show -1000 Mx~cm$^{-2}$ and 1000 Mx~cm$^{-2}$ isolines of the magnetic field. The emergence of a small magnetic dipole in the vicinity of the leading polarity (panels b-c) led to the formation of a complex $\delta$-structure (panel d). Lower panel (d) shows the X1.0 flare on 2014 March 29. The field-of-view of the maps is 100~arcsec$\times$50~arcsec. Magnetograms are scaled from -1000 Mx~cm$^{-2}$ (black) to 1000 Mx~cm$^{-2}$ (white).
}
\label{figure_12017}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{figure_11944.eps}
\caption{
An example of Type III emergence: Evolution of longitudinal magnetic field (upper panels) and of SDO/AIA 1600 \AA\ intensity (lower panels) of NOAA AR 11944. The time stamps are shown in each panel. White contours in the lower panels show -1000 Mx~cm$^{-2}$ and 1000 Mx~cm$^{-2}$ isolines of the magnetic field. The emergence of a small magnetic dipole in the vicinity of the leading polarity is shown by red arrows in panels (b)-(d). Lower panels (c) and (d) show the X1.2 flare on 2014 January 07. The white arrow in the lower panel (c) points to the footpoint of the flare ribbon located near the newly-emerged magnetic dipole. The field-of-view of the maps is 250~arcsec$\times$125~arcsec. Magnetograms are scaled from -1000 Mx~cm$^{-2}$ (black) to 1000 Mx~cm$^{-2}$ (white).
}
\label{figure_11944}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{figure_type_iv.eps}
\caption{
Examples of type IV ARs: Longitudinal magnetic field maps of NOAA ARs 09415 (a), 10484 (b), 10486 (c), 10501 (d), 10715 (e), 11520 (f), 11890 (g), and 12371 (h). The ARs showed no signatures of magnetic flux emergence during the entire interval of observation and were assigned to type IV. Interacting opposite magnetic polarities are seen in each panel. The field-of-view of the magnetograms is 150~arcsec$\times$100~arcsec. Magnetograms are scaled from -1000 Mx~cm$^{-2}$ (black) to 1000 Mx~cm$^{-2}$ (white).
}
\label{figure_type_iv}
\end{figure*}
\subsection{The flux emergence rate of flare-productive ARs}
\label{sec_fer}
As mentioned above, numerical simulations suggest that the highly twisted magnetic flux bundles forming $\delta$-sunspots exhibit a high flux emergence rate as they rise through the convection zone \citep[e.g.][and references therein]{Toriumi2017b, Knizhnik2018}. Most of the ARs in our sample contain $\delta$-sunspots. For type I and type II ARs, the data allow us to derive the flux emergence rate $R_{av}$ using the technique presented in \citet{Kutsenko2019}. The relationship between the peak magnetic flux, $\Phi_{max}$, and the flux emergence rate for these ARs is shown in Fig.~\ref{figure_rav_vs_flux}. For type II ARs (blue circles), the background flux $\Phi_{min}$ was subtracted before plotting.
Data points from \citet{Kutsenko2019} for $\sim$400 emerging dipoles are overplotted with grey circles in Fig.~\ref{figure_rav_vs_flux}. These dipoles are ephemeral and active regions observed between 2010 and 2017, and the majority of them were flare-quiet: only 34 of those ARs exhibited a flare index exceeding unity. ARs present in both samples (that of \citet{Kutsenko2019} and the one used in this work) were counted only once.
In Solar Cycle 24 there was one peculiar active region, NOAA AR 12674, with a very high and stable flux and very low flaring activity. Data for this AR allow us to estimate the upper level of the flux emergence rate for flare-quiet ARs. EUV images taken by the {\it Extreme Ultraviolet Imager} on board the {\it Solar Terrestrial Relations Observatory} \citep[STEREO,][]{Kaiser2008, Howard2008} allowed us to reveal that this AR started to emerge on the far side of the Sun on 2017 August 26. By 2017 August 30, the AR was visible near the Eastern limb ($\sim$E85) and exhibited a peak magnetic flux of approximately 4.1$\times 10^{22}$ Mx. We estimated the flux emergence rate to be about 3.4$\times 10^{20}$ Mx~h$^{-1}$. We overplotted this data point in Fig.~\ref{figure_rav_vs_flux} with a green star. The horizontal dashed line in the figure marks the upper level of the flux emergence rate for all low-flaring ARs, including the largest (in the sense of flux) one, AR 12674. Above this level only strong-flaring ARs are observed (although not all strong-flaring ARs lie there).
In general, the diagram in Fig.~\ref{figure_rav_vs_flux} demonstrates that the new data (colour circles) are consistent with the published data (grey circles): the diagram is now extended toward higher magnitudes, and the peak total unsigned flux tends to be in direct proportion to the flux emergence rate. Pearson's correlation coefficient for the combined data set of N=471 events is 0.80$^{+0.03}_{-0.04}$. The slope of the linear regression is the same as before, 0.48$\pm$0.02. This result indicates that the relationship between $\Phi_{max}$ and $R_{av}$ found here does not depend on the sample selection.
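For readers who wish to reproduce this kind of estimate, a minimal Python sketch of the correlation coefficient with bootstrap confidence bounds and of the log-log regression slope is given below. This is not the authors' code: the input arrays are synthetic stand-ins for the combined data set, and the 95\% bootstrap level is an assumption of the sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the combined sample of N=471 events:
# log peak flux (Mx) and log emergence rate (Mx/h) with a power-law relation.
log_phi = rng.uniform(21.0, 22.7, 471)
log_rav = 0.48 * log_phi + rng.normal(0.0, 0.25, 471) + 9.5

def pearson_bootstrap(x, y, n_boot=1000, ci=95):
    """Pearson r with asymmetric bootstrap bounds (assumed 95% level)."""
    r_best = np.corrcoef(x, y)[0, 1]
    draws = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, len(x), len(x))  # resample with replacement
        draws[i] = np.corrcoef(x[idx], y[idx])[0, 1]
    lo, hi = np.percentile(draws, [(100 - ci) / 2, (100 + ci) / 2])
    return r_best, r_best - lo, hi - r_best

r, err_lo, err_hi = pearson_bootstrap(log_phi, log_rav)
slope, intercept = np.polyfit(log_phi, log_rav, 1)  # OLS in log-log space
print(f"r = {r:.2f} (+{err_hi:.2f}/-{err_lo:.2f}), slope = {slope:.2f}")
\end{verbatim}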
\begin{figure}
\includegraphics[width=\columnwidth]{figure_rav_vs_flux.eps}
\caption{
The flux emergence rate, $R_{av}$, versus peak magnetic flux, $\Phi_{max}$, distribution for emerging ARs from \citet{Kutsenko2019} (grey circles) and for ARs of type I (red circles) and of type II (blue circles). The green star shows the data point for the flare-quiet NOAA AR 12674. The dashed horizontal grey line shows the upper $R_{av}$ limit of the flare-quiet ARs, 3.4$\times 10^{20}$ Mx~h$^{-1}$.
}
\label{figure_rav_vs_flux}
\end{figure}
Fig.~\ref{figure_fi_vs_all} shows the relationship between flare index and peak magnetic flux for all ARs in our sample (left panel) and the relationship between flare index and flux emergence rate for ARs of types I and II (right panel). In both panels we also overplotted the data points (black circles) for the ARs analysed in \citet{Kutsenko2021}: 34 emerging ARs with a flare index exceeding unity.
The left panel in Fig.~\ref{figure_fi_vs_all} confirms a well-known relationship between the peak magnetic flux and the flare productivity of ARs: larger ARs are capable of producing stronger flares. Pearson's linear correlation coefficient of the distribution is $r=0.70 ^{+0.06}_{-0.08}$, where the confidence interval is calculated for the 95\% confidence level. Although larger ARs tend to produce stronger flares, this relationship is hardly appropriate for reliable flare forecasting. Taking into account the logarithmic scale on both axes of the plot in the left panel of Fig.~\ref{figure_fi_vs_all}, we may conclude that ARs of similar peak magnetic flux may exhibit flare indices varying by an order of magnitude. A high peak magnetic flux is a favourable but not a necessary attribute of a flare-productive AR. For instance, NOAA AR 09087 exhibited a relatively high peak magnetic flux of about 5.5$\times 10^{22}$ Mx. The strongest flare that occurred in this AR was an M6.4 event on 2000 July 19 (see column 7 in Table~\ref{table_ar_list}). Meanwhile, the relatively weak NOAA ARs 09511 and 12017 with $\Phi_{max} \approx 9 \times 10^{21}$ Mx produced X1.2 and X1.0 flares on 2001 June 23 and 2014 March 29, respectively.
The relationship between flare index and flux emergence rate shown in the right panel of Fig.~\ref{figure_fi_vs_all} supports our results reported in \citet{Kutsenko2021}: fast-emerging ARs exhibit, in general, higher flare productivity; the correlation coefficient for the combined data set (N=84) is 0.61$^{+0.12}_{-0.15}$. A similar relationship (with a shallower slope) seems to hold for the separate subset of type II ARs (blue circles).
The vertical dashed line marks $R_{av} = 3.4\times 10^{20}$ Mx~h$^{-1}$, the upper level of the flux emergence rate for flare-quiet ARs. To the right of this line we observe a well-pronounced subset of strong-flaring, fast-emerging ARs with a clear proportionality between FI and $R_{av}$. However, such ARs constitute less than 10\% of our sample.
The data points in the right panel of Fig.~\ref{figure_fi_vs_all} are located predominantly above the main diagonal of the plot, suggesting that fast-emerging ARs must exhibit high flare productivity. At the same time, ARs emerging at gradual rates may remain flare-quiet or may become flare-productive. Thus, for the entire data set, the relationship between the flux emergence rate and the flare index is more complicated than a simple linear regression. Non-linear processes evidently contribute to the formation of flaring capabilities.
\begin{figure*}
\includegraphics[]{figure_fi_vs_all.eps}
\caption{
Left -- The relationship between flare index, FI, and the peak magnetic flux, $\Phi_{max}$, for ARs analysed in \citet{Kutsenko2021} (black circles) and in this work (red circles). Measurement uncertainties are shown in the plot. Pearson's correlation coefficient is $r=0.70 ^{+0.06}_{-0.08}$. Right -- The relationship between flare index, FI, and the flux emergence rate, $R_{av}$, for ARs analysed in \citet{Kutsenko2021} (black circles) and for ARs of type I (red circles) and of type II (blue circles). Measurement uncertainties are shown in the plot. The vertical dashed grey line shows the maximum flux emergence rate $3.4\times 10^{20}$ Mx~h$^{-1}$ measured in flare-quiet ARs.
}
\label{figure_fi_vs_all}
\end{figure*}
\section{Conclusions and Discussion}
We used SOHO/MDI and SDO/HMI data to analyse 100 ARs that produced M5.0 or stronger flares during Solar Cycles 23 and 24. Our focus was an investigation of the observable magnetic flux emergence events during an interval of approximately 2-3 days before the flaring onset in the AR. We studied both qualitative and quantitative aspects of the magnetic flux emergence in these flare-productive ARs.
We found that ARs may be sorted into four types with respect to the emergence of magnetic flux prior to or during their strongest flares. Type I consists of the ARs that emerged amidst a quiet-Sun area and started to launch strong flares immediately after (or even before) the magnetic flux reached its peak magnitude.
Monotonic emergence most often results in the formation of a complex magnetic structure with a $\delta$-spot. Approximately one third (29\%) of all ARs in the sample were assigned to type I.
Emergence of new magnetic flux within a pre-existing AR was denoted as type II. In most cases, a complex magnetic configuration is formed as a result of this new emergence. The amount of new magnetic flux is significant compared to the pre-existing magnetic flux. For instance, the most flare-productive AR of Solar Cycle 24, NOAA AR 12673, was assigned to type II. Another quarter of all ARs (24\%) were identified as type II.
For one third of all analysed ARs (30\%), the emergence of only a small amount of new flux was observed during the 2-3 days before a strong flare; these were assigned to type III. Even insignificant emergence of new magnetic flux may result in the formation of a complex, flare-prone structure. Interestingly, the emergence of a very weak magnetic satellite of about 1$\times 10^{21}$ Mx in the vicinity of the main sunspots is enough to form a small $\delta$-sunspot that can provoke an X-class flare. In our opinion, newly-emerged dipoles may also trigger flares in complex pre-existing ARs. In other cases, however, emerging dipoles may play no role in the occurrence of a flare.
The large number of type III ARs in the sample is discouraging for solar flare forecasting. In general, the duration of emergence is proportional to the amount of emerging magnetic flux. Therefore, the emergence of a relatively weak satellite in a pre-existing flare-quiet AR may take place within hours. This emergence may complicate the magnetic configuration and initiate strong flaring, which reduces the time interval available for a reliable prediction down to hours. New methods for the early detection of emerging magnetic flux, for example by means of helioseismology \citep[e.g.][]{Birch2019, Dhuri2020}, could possibly extend the forecast interval. The distortion of the electric current system of a pre-existing AR by an emerging satellite, discussed in \citet{Kutsenko2018}, could also be used in forecasting.
Finally, type IV ARs are strong-flaring ARs with no signatures of new flux emergence during the entire interval of observations, which means that strong flares can occur at least three days after any emergence. All type IV ARs were characterized by interacting opposite magnetic polarities (see Fig.~\ref{figure_type_iv}). Two of the strongest flares of Solar Cycle 23 belong to this type: NOAA ARs 09415 (X14.4, Fig.~\ref{figure_type_iv}a) and 10486 (X17.0, Fig.~\ref{figure_type_iv}c). Although type IV ARs are not numerous (only 11 out of 100 ARs), their existence implies the following conclusion: emergence is not a necessary ingredient for an AR to produce a powerful flare; once formed by any scenario, an AR may keep its potential for flaring as long as favourable conditions are met.
In summary, we conclude that
\begin{enumerate}
\item In 29\% of cases the emergence was observed from a zero background (quiet Sun);
\item In 24\% of cases the major emergence was observed in the pre-existing AR;
\item In 30\% of cases the minor emergence was observed in the pre-existing AR;
\item In 11\% of cases no emergence was detected;
\item For 6\% of cases the data did not allow us to judge.
\end{enumerate}
Our results also support the well-known dependence between the peak magnetic flux and flare productivity: stronger flares tend to occur in larger ARs. However, this dependence is a tendency rather than a strict rule. We suppose that the capability of an AR to produce a strong flare is determined to a great extent by its morphology rather than by its size. The examples of the relatively weak NOAA ARs 09511 and 12017 discussed in Section~\ref{sec_results} support this suggestion.
In this work we confirmed our previous findings \citep{Kutsenko2021} regarding the flare productivity and flux emergence rate of ARs: ARs emerging at a higher rate tend to produce stronger flares. Although flare-productive ARs exhibit a higher flux emergence rate, the difference from the $R_{av}$ of flare-quiet ARs is not well pronounced. Flare-productive and flare-quiet ARs rather form a continuous $R_{av}$ versus $\Phi_{max}$ distribution, with flare-productive ARs being mostly located at the higher peak magnetic flux end of the distribution.
The FI versus $R_{av}$ scatter plot (the right panel of Fig.~\ref{figure_fi_vs_all}) shows that most points lie above the main diagonal, i.e. the flux emergence rate defines a lower limit on the flare productivity. In other words, ARs emerging at a very high rate must be flare-productive, whereas gradually emerging ARs may exhibit either high or low flaring productivity. This distribution resembles the relationship between the twist and the flux emergence rate of ARs found in \citet[][see fig.~3]{Kutsenko2019}. Fast-emerging ARs were found to exhibit strong twist, whereas gradually-emerging ARs could be either weakly or strongly twisted. Perhaps both dependencies on the flux emergence rate have the same physical reason: flare-productive ARs with $\delta$-structures are presumably formed as a result of the emergence of highly-twisted magnetic flux bundles \citep{Toriumi2017b, Knizhnik2018}.
A very high flux emergence rate can be used as a precursor of strong flare activity of an AR. Our estimates suggest that flare-quiet ARs do not exhibit flux emergence rates higher than $3.4\times 10^{20}$ Mx~h$^{-1}$.
However, the number of such ARs is less than 10\% in our sample. The most flare-productive AR of Solar Cycle 24, NOAA 12673, exhibited an extremely high flux emergence rate. \citet{Sun2017} estimated the averaged flux emergence rate to be $4.93^{+0.11}_{-0.13} \times 10^{20}$ Mx~h$^{-1}$, which is comparable to our value of $(5.89 \pm 0.19) \times 10^{20}$ Mx~h$^{-1}$ (see Table~\ref{table_ar_list}).
We found that a similarly high flux emergence rate of about $(6.09 \pm 0.67) \times 10^{20}$ Mx~h$^{-1}$ was observed in NOAA AR 09393. This AR produced the X20.0 event, one of the strongest flares of Solar Cycle 23. Interestingly, both NOAA ARs 09393 and 12673 were classified as type II ARs: in both cases, fast emergence of a new magnetic structure was observed in the close vicinity of a pre-existing AR.
We hope that this study will contribute to the problem of reliable flare forecast.
\section*{Acknowledgements}
SDO is a mission for NASA’s Living With a Star (LWS) programme. SOHO is a project of international cooperation between ESA and NASA. The SOHO and SDO data were provided by the
Joint Science Operation Center (JSOC). The analysis of the flux emergence rate and flare-productivity of active regions was supported by the Russian Science Foundation (Project 19-72-00027).
\section*{Data Availability}
The HMI, MDI, and AIA data that support the findings of this study are available at the JSOC (\url{http://jsoc.stanford.edu/}) and can be accessed under an open-for-all data policy. The GOES data on solar flares are available at \url{https://www.ngdc.noaa.gov/stp/space-weather/solar-data/}. Derived data products supporting the findings of this study are available in the article and from the corresponding author (ASK) on request.
\bibliographystyle{mnras}
\section{Introduction}
The way in which massive galaxies build their stellar populations, and do so earlier than lower mass populations, remains an important question in the study of galaxy evolution. Theoretical models (e.g., \citealt{Somerville1999}, \citealt{Cole2000}, \citealt{Bower2006}, \citealt{Croton2006}, \citealt{Somerville2008}, \citealt{Benson2012}, \citealt{Somerville2015} and references therein, \citealt{Croton2016}, \citealt{Naab2017} and references therein, \citealt{Weinberger2017}, \citealt{Cora2018}, \citealt{Knebe2018}, \citealt{Behroozi2019}, \citealt{Cora2019}, \citealt{Dave2019}) struggle to implement physical processes that can simultaneously reproduce the observed properties of the massive and low mass galaxy populations at both high and low redshifts (e.g., \citealt{Conselice2007}, \citealt{Asquith2018}, \citealt{Sherman2020}, \citealt{Sherman2020b}). Observations of large samples of massive galaxies at cosmic noon ($1.5 < z < 3.0$), a time when the massive galaxy population transitions from star-forming to quiescent (e.g., \citealt{Conselice2011}, \citealt{vanderWel2011}, \citealt{Weinzirl2011}, \citealt{Muzzin2013}, \citealt{vanDokkum2015}, \citealt{Martis2016}, \citealt{Tomczak2016}, \citealt{Sherman2020b}), can provide important constraints on the physical processes driving the early assembly of massive galaxies.
The stellar masses and star-formation rates of galaxies at cosmic noon ($1.5 < z < 3.0$) are fundamental quantities that provide insights into this dynamic period in the history of the universe. At this epoch, proto-clusters began to collapse into the rich clusters seen at present day (e.g., \citealt{Gobat2011},
\citealt{Lotz2013}, \citealt{Overzier2016}, \citealt{Wang2016}, \citealt{Chiang2017}), star-formation and black hole accretion peaked (e.g., \citealt{MadauDickinson2014}), and the massive ($M_\star \ge 10^{11}$M$_\odot$) galaxy population transitioned from being predominantly star-forming to predominantly quiescent (e.g., \citealt{Conselice2011}, \citealt{vanderWel2011}, \citealt{Weinzirl2011}, \citealt{Muzzin2013}, \citealt{vanDokkum2015}, \citealt{Martis2016}, \citealt{Tomczak2016}, \citealt{Sherman2020b}). The relationship between star-formation rate and stellar mass, coined the ``main sequence" by \cite{Noeske2007}, provides key insights into the formation history of the massive galaxy population.
Although a significant number of studies (e.g., \citealt{Daddi2007}, \citealt{Elbaz2007}, \citealt{Noeske2007}, \citealt{Karim2011}, \citealt{Rodighiero2011}, \citealt{KGuo2013}, \citealt{Speagle2014}, \citealt{Whitaker2014}, \citealt{Lee2015}, \citealt{Renzini2015}, \citealt{Salmon2015}, \citealt{Schreiber2015}, \citealt{Tasca2015}, \citealt{Tomczak2016}, \citealt{Santini2017}, \citealt{Popesso2019}, among others) have investigated the nature of galaxies in the SFR-$M_\star$ plane, a consensus has not yet been reached for a single definition of the ``main sequence", specifically as it pertains to the star-forming galaxy main sequence. Some studies choose to pre-select for star-forming galaxies (e.g., \citealt{Noeske2007}, \citealt{Daddi2007}, \citealt{Whitaker2014}, \citealt{Tomczak2016}), typically via emission at 24$\mu$m. Others select a sample of star-forming galaxies from a sample containing all galaxies (e.g., \citealt{Whitaker2014} and \citealt{Tomczak2016} at intermediate redshifts), with techniques such as color-color selection. In this work, we investigate both the main sequence for all galaxies and for star-forming galaxies using a sample of massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies that spans a wide range of specific star-formation rates in the star-forming, green valley (e.g., \citealt{Martin2007}, \citealt{Salim2007}, \citealt{Wyder2007}), and quiescent populations.
Previous studies have focused on three key aspects of the main sequence: the slope, normalization, and scatter around the main sequence. The slope provides information about when galaxies of different masses begin to quench (the so-called ``downsizing" scenario; \citealt{Cowie1996}). Out to $z\sim6$ the power-law slope is measured to be between $\sim0-1$ (e.g., \citealt{Daddi2007}, \citealt{Elbaz2007}, \citealt{Noeske2007}, \citealt{Rodighiero2011}, \citealt{KGuo2013}, \citealt{Speagle2014}, \citealt{Whitaker2014}, \citealt{Renzini2015}, \citealt{Tomczak2016}, \citealt{Santini2017}, among others), with evidence that higher mass star-forming galaxies exhibit a shallower slope than lower mass star-forming galaxies (e.g., \citealt{Whitaker2014}, \citealt{Lee2015}, \citealt{Tasca2015}). The normalization of the main sequence has been shown to increase with increasing redshift, indicating that the specific star-formation rates of extreme galaxies found at late times were more typical specific star-formation rates at earlier times (e.g., \citealt{Karim2011}, \citealt{Speagle2014}, \citealt{Whitaker2014}, \citealt{Tomczak2016}, \citealt{Santini2017}). Finally, the star-forming galaxy main sequence relation has been found to be quite tight with a rather constant scatter (typical scatter is measured to be $0.2 - 0.4$ dex; e.g., \citealt{Rodighiero2011}, \citealt{Speagle2014}, \citealt{Schreiber2015}, \citealt{Popesso2019}) with the level of scatter often attributed to the level of stochasticity in the star-formation history of the population (e.g., \citealt{Caplar2019}, \citealt{Matthee2019}).
Typically, previous studies have focused on deep observations taken over small areas, often pushing constraints on the main sequence to fairly low masses ($\sim10^8 - 10^9$M$_\odot$; e.g., \citealt{Whitaker2014}, \citealt{Tomczak2016}). The different methods adopted by previous studies for measuring the main sequence (e.g., extrapolation from low to high masses, fitting single and double power laws, stacking analyses, etc.), as well as inconsistent (and often biased) methods of separating star-forming galaxies from the total population (e.g., color-color indicators, specific star-formation rate thresholds, distance below the main sequence, detection in particular filters, etc.), have led to measures of the main sequence that are not free from bias \citep{Renzini2015}. Furthermore, the biased selection of star-forming galaxies has often forced previous works to make assumptions about the distribution of star-forming galaxies in the SFR-M$_{\star}$ plane, which makes robust measures of the scatter around the star-forming galaxy main sequence, a strong tracer of the stochasticity of star-formation histories, quite difficult.
In this work, we present the massive end of the galaxy main sequence for all galaxies and star-forming galaxies at cosmic noon ($1.5 < z < 3.0$) using a sample of 28,469 massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies. Notably, we do not make any assumptions about the functional form of the galaxy main sequence nor do we make assumptions about the distribution of massive galaxies in the SFR-M$_{\star}$ plane. This novel, unbiased approach is made possible by our large sample which is uniformly selected from a 17.5 deg$^2$ area ($\sim0.33$ Gpc$^3$ comoving volume over $1.5 < z < 3.0$), significantly reducing Poisson errors and rendering the effects of cosmic variance negligible. With this large sample, we are uniquely suited to separate star-forming galaxies from the collective green valley and quiescent galaxy populations by locating the transition between the star-forming and green valley populations in the SFR-M$_{\star}$ plane in small mass bins, rather than using fixed cutoffs to define these populations. Finally, due to our meaningful separation of star-forming galaxies from the total population, we are able to perform an unbiased study of the scatter around the star-forming galaxy main sequence as a function of stellar mass.
We also compare our empirical results with those from hydrodynamical models SIMBA \citep{Dave2019} and IllustrisTNG (\citealt{Pillepich2018b}, \citealt{Springel2018}, \citealt{Nelson2018}, \citealt{Naiman2018}, \citealt{Marinacci2018}), as well as the semi-analytic model SAG \citep{Cora2018}. \cite{Sherman2020b} showed that these models face significant challenges in reproducing the observed quiescent fraction of massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies at $1.5 < z < 3.0$, indicating that the implementation of the physical processes underlying massive galaxy evolution at these epochs may need to be revised.
This paper is organized as follows. In Section \ref{sec:data_and_analysis} we detail the data used in this work, the SED fitting procedure, and sample selection. Section \ref{sec:MainSequence} presents our measurement of the main sequence for all galaxies, Section \ref{sec:SF_MainSequence} presents the main sequence for star-forming galaxies, and in Section \ref{sec:MSAll_vs_MSSF} we compare the resulting main sequences for all and star-forming galaxies. In Section \ref{sec:MS_scatter} we measure the scatter around the star-forming galaxy main sequence. In Section \ref{sec:CompWObs} we present comparisons with previous observational studies, and in Section \ref{sec:CompWTheory} we compare our empirical result with those from theoretical models. Finally, we discuss the implications of our results in Section \ref{sec:Discussion} and summarize our results in Section \ref{sec:Summary}. Throughout this work we adopt a flat $\Lambda$CDM cosmology with $h = 0.7$, $\Omega_m = 0.3$, and $\Omega_{\Lambda} = 0.7$.
\section{Data and Analysis}
\label{sec:data_and_analysis}
The data, SED fitting, sample selection, and stellar mass completeness estimates used in this work are the same as those used in \cite{Sherman2020b} and will briefly be described here. Our catalog is NEWFIRM $K_s$-selected (depth 22.4 AB mag at 5$\sigma$; PI Finkelstein, \citealt{Stevans2021}) and covers 17.5 deg$^2$ in the SDSS Stripe 82 equatorial field. In addition to the NEWFIRM $K_s$ data, we also utilize \textit{u, g, r, i, z} photometry from the Dark Energy Camera (DECam) (\citealt{Wold2019}, \citealt{Kawinwanichakij2020}, \citealt{Stevans2021}; r-band 5$\sigma$ depth is r = 24.5 AB mag), VICS82 $J$ and $K_s$ data (\citealt{Geach2017}; 5$\sigma$ depth for J-band is 21.5 AB mag and for K-band is 20.9 AB mag), and \textit{Spitzer}-IRAC 3.6 and 4.5$\mu$m photometry (PI Papovich; \citealt{Papovich2016}, \citealt{Kawinwanichakij2020}; 5$\sigma$ depth is 22 AB mag in both filters). Combined, these data provide up to 10 photometric data points with which we can use SED fitting techniques to estimate redshift, stellar mass, SFR, and other galaxy properties. Additional photometric data in this footprint (which are not used in SED fitting) come from \textit{Herschel}-SPIRE (HerS, \citealt{Viero2014}) far-IR/submillimeter imaging, and XMM-Newton and Chandra X-ray Observatory X-ray data from the Stripe 82X survey (\citealt{LaMassa2013a}, \citealt{LaMassa2013b}, \citealt{Ananna2017}; the X-ray data cover $\sim$11.2 deg$^2$). In this region, optical ($3500-5500$\AA) spectroscopy is being obtained by the Hobby Eberly Telescope Dark Energy Experiment (HETDEX, \citealt{Hill2008}), and these data are only used to estimate the accuracy of photometric redshifts, when available (see below).
SED fitting is performed using EAZY-py\footnote{The version of EAZY-py used in this work was downloaded in May 2018 from \url{https://github.com/gbrammer/eazy-py} and was later modified by \cite{Sherman2020} (also described in \citealt{Sherman2020b}) to add functions that provide uncertainties on measured galaxy parameters.}, a python-based version of EAZY \citep{Brammer2008}, which simultaneously fits for photometric redshift, stellar mass, SFR, and other galaxy properties, with an implementation from \cite{Sherman2020} and \cite{Sherman2020b} that also gives error estimates for these parameters (finding typical stellar mass and SFR errors of $\pm0.08$ dex and $\pm0.18$ dex, respectively for $1.5 < z < 3.0$ galaxies above our estimated mass completeness limits, detailed below). EAZY-py performs SED fitting using twelve Flexible Stellar Population Synthesis (FSPS; \citealt{Conroy2009}, \citealt{Conroy2010}) templates in non-negative linear combination. Our SED fitting is performed using the default EAZY-py FSPS templates which are built with a \cite{Chabrier2003} initial mass function (IMF), the \cite{KriekConroy2013} dust law, solar metallicity, and star-formation histories including bursty and slowly rising models.
We note that recent studies (e.g., \citealt{Carnall2019}, \citealt{Leja2019}) have shown the strong influence that the chosen star-formation history has on the resultant SFR given by SED fitting. In our study, the EAZY-py fitting method constructs a best-fit SED from the non-negative linear combination of twelve templates, each with a different star-formation history. Because of this, the resultant best-fit SED is not restricted to a single underlying star-formation history. Additionally, \cite{Sherman2020} used a diverse set of mock galaxies (V. Acquaviva, private communication) constructed with \cite{BruzualCharlot2003} templates and spanning stellar masses up to $M_{\star}=10^{12}$M$_{\odot}$ to validate the SED fitting procedure described above. These models were constructed from various underlying dust laws, IMFs, and star-formation histories (including exponentially declining, delayed exponential, constant, and linearly increasing). \cite{Sherman2020} found that for galaxies at $1.5 < z < 3.0$, EAZY-py is able to adequately recover the redshift, stellar mass, and SFR of the mock galaxies.
Photometric redshift accuracy is estimated using spectroscopic redshifts from SDSS \citep{Eisenstein2011} at $z < 1$ and the second internal data release of the HETDEX survey \citep{Hill2008} at $1.9 < z < 3.5$. For both samples, \cite{Sherman2020b} quantified the photometric redshift recovery using the normalized median absolute deviation ($\sigma_{\rm NMAD}$; \citealt{Brammer2008}). For the low-redshift sample from SDSS, $\sigma_{\rm NMAD}$ = 0.053, and for the intermediate-redshift galaxies from HETDEX, $\sigma_{\rm NMAD}$ = 0.102. The intermediate-redshift sample has only 56 galaxies, all of which were visually inspected to confirm the spectroscopic redshift from the HETDEX pipeline, and this sample is expected to grow with future data releases. Three of the 56 intermediate-redshift galaxies are catastrophic outliers (5.3\%), for which the HETDEX spectrum places them at $z < 0.5$ while the best-fit photometric redshift is $z > 2$. We note that catastrophic outliers are not removed from the low or high redshift samples before computing $\sigma_{\rm NMAD}$.
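As a concrete illustration, $\sigma_{\rm NMAD}$ follows directly from its standard definition (following \citealt{Brammer2008}); a minimal Python sketch, with hypothetical input arrays, is:
\begin{verbatim}
import numpy as np

def sigma_nmad(z_phot, z_spec):
    """Normalized median absolute deviation of (z_phot - z_spec)."""
    dz = z_phot - z_spec
    return 1.48 * np.median(np.abs(dz - np.median(dz)) / (1.0 + z_spec))

# Hypothetical matched photometric/spectroscopic redshift arrays:
z_spec = np.array([2.10, 2.35, 1.95, 2.80, 3.10])
z_phot = np.array([2.00, 2.50, 2.05, 2.65, 3.30])
print(sigma_nmad(z_phot, z_spec))
\end{verbatim}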
Our science sample is the same as that from \cite{Sherman2020b}, comprised of 54,001 galaxies at $1.5 < z < 3.0$, of which 28,469 are fit to have $M_\star \ge 10^{11}$M$_\odot$. The 95\% stellar mass completeness limits for this sample are log($M_{\star}$/$\rm M_{\odot}$) = 10.69, 10.86, and 11.13 in our $1.5 < z < 2.0$, $2.0 < z < 2.5$, and $2.5 < z < 3.0$ bins, respectively. We refer the reader to \cite{Sherman2020b} for details regarding sample selection and mass completeness estimates.
For every galaxy in our $K_s$-selected sample, we obtain a measure of the dust-corrected SFR, with an associated uncertainty, from our SED fitting procedure. The SED fitting procedure uses all available filters to find the best-fitting SED. Unlike the rather straightforward connection between a galaxy's $K_s$-band magnitude and that galaxy's stellar mass, there is not a straightforward connection between the measured dust-corrected SFR and a particular band. To estimate an SFR completeness, however, we can use the g-band as a proxy for FUV flux (see \citealt{Sherman2020b} and \citealt{Florez2020}) and obtain a g-band SFR completeness estimate. To achieve this, we take the $5\sigma$ g-band limiting magnitude for our survey ($\rm m_{\rm g,lim} = 24.8$ mag AB; computed by \citealt{Wold2019}) and, following \cite{Sherman2020b} and \cite{Florez2020}, we apply the conversion factor from \cite{Hao2011} to convert the $5\sigma$ g-band limiting magnitude into an estimate of SFR$_{\rm FUV}$. The \cite{Hao2011} conversion assumes a \cite{Kroupa2001} IMF, and we reduce the estimated SFR$_{\rm FUV}$ by 0.046 dex to align the results with the \cite{Chabrier2003} IMF used throughout this work. We estimate the g-band based SFR completeness limits to be SFR = 2.36, 4.39, and 7.16 M$_{\odot}$ yr$^{-1}$ in our $1.5 < z < 2.0$, $2.0 < z < 2.5$, and $2.5 < z < 3.0$ bins, respectively. If we further apply a dust correction based on the median extinction measured by our SED fitting procedure for galaxies within $\pm0.1$ mag of the $5\sigma$ g-band completeness limit (this value is $A_{\rm V}\sim0.8-1.0$ across our three redshift bins), we find that the dust-corrected g-band SFR completeness limits are SFR = 4.80, 8.89, and 17.56 M$_{\odot}$ yr$^{-1}$ in our $1.5 < z < 2.0$, $2.0 < z < 2.5$, and $2.5 < z < 3.0$ bins, respectively. Again, we emphasize that because our sample is $K_s$ selected, not g-band selected, for every object in our science sample we have a measurement, from SED fitting, of the dust-corrected SFR with an associated uncertainty. This holds true even for those with SFR measured by our SED fitting procedure to be below the $5\sigma$ g-band based SFR completeness limit estimates.
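To make the arithmetic behind these completeness limits explicit, the following Python sketch converts the limiting AB magnitude into an SFR estimate. We stress that the specific calibration constant ($\log C_{\rm FUV} = 43.35$, Kroupa IMF) and the choice of 1550~\AA\ as the rest-frame FUV wavelength are assumptions of this sketch drawn from the broader literature on the \cite{Hao2011} calibration, not values quoted above:
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy import constants as const
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # cosmology adopted in this work

def sfr_limit_gband(m_ab, z):
    """SFR for an observed AB magnitude, treating observed g as rest FUV."""
    # AB magnitude -> observed flux density (erg/s/cm^2/Hz)
    f_nu = 10 ** (-0.4 * (m_ab + 48.6)) * u.erg / u.s / u.cm**2 / u.Hz
    d_l = cosmo.luminosity_distance(z).to(u.cm)
    # Rest-frame luminosity density; the (1+z) factor is the K-correction
    l_nu = 4 * np.pi * d_l**2 * f_nu / (1 + z)
    nu_fuv = (const.c / (1550 * u.AA)).to(u.Hz)      # assumed rest FUV
    log_nu_l_nu = np.log10((nu_fuv * l_nu).to(u.erg / u.s).value)
    log_sfr = log_nu_l_nu - 43.35                    # assumed Hao+11 constant
    return 10 ** (log_sfr - 0.046)                   # Kroupa -> Chabrier

# Rough check at the midpoint of the 1.5 < z < 2.0 bin:
print(sfr_limit_gband(24.8, 1.75))   # of order a few Msun/yr
\end{verbatim}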
\section{Galaxy Main Sequence}
\label{sec:MainSequence_Opening}
In this Section we present the main sequence for all galaxies, which is computed in individual small mass bins at the high mass end, thereby eliminating the need for extrapolation or assumed functional forms. We also detail a novel method for isolating the star-forming galaxy population in an unbiased way, and we use this sample to explore the main sequence for star-forming galaxies. Finally we compare the main sequence for all galaxies with the main sequence for star-forming galaxies, and detail how the buildup of the collective green valley and quiescent galaxy populations influences the time evolution of the slope of the main sequences for all galaxies and star-forming galaxies.
\subsection{Measuring the Main Sequence for All Galaxies}
\label{sec:MainSequence}
The main sequence in each of our three redshift bins spanning $1.5 < z < 3.0$ is defined to be the average SFR in small mass bins in the SFR-M$_{\star}$ plane. To compute the error on the main sequence, we employ a bootstrap resampling procedure (\citealt{Sherman2020b}, \citealt{Florez2020}) that is repeated 1000 times. During each bootstrap draw we select a random sample of galaxies from each mass bin, with replacement, where the sample size is equal to the number of galaxies in the bin. By taking the average SFR in each of the 1000 draws, we generate a distribution of average SFR (main sequence) values. The lower and upper error bars on the main sequence are the 16th and 84th percentiles of this distribution, respectively.
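A minimal sketch of this bootstrap, assuming a NumPy environment (array names are hypothetical, not our production code), is given below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def main_sequence_point(sfr_bin, n_boot=1000):
    """Mean SFR in one mass bin with 16th/84th-percentile bootstrap errors."""
    n = len(sfr_bin)
    means = np.array([rng.choice(sfr_bin, size=n, replace=True).mean()
                      for _ in range(n_boot)])
    best = sfr_bin.mean()
    lo, hi = np.percentile(means, [16, 84])
    return best, best - lo, hi - best

# Hypothetical SFRs (Msun/yr) of galaxies in one 0.25 dex mass bin:
sfr_bin = rng.lognormal(mean=4.0, sigma=0.8, size=500)
print(main_sequence_point(sfr_bin))
\end{verbatim}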
We find that at the high mass end ($M_\star = 10^{11}$ to $10^{12}$M$_\odot$), the main sequence for all galaxies is flattened (Figure \ref{ms_all_plot}; compared to the often assumed slope of unity; e.g., \citealt{Wuyts2011}), and this flattening becomes more pronounced as redshift decreases toward $z=1.5$. Although we do not assume any functional form of the main sequence, using an ordinary least squares regression (fit to main sequence values between $M_\star = 10^{11}$ to $10^{12}$M$_\odot$), we can determine that the power law slope of the main sequence evolves from $0.30\pm0.0005$ at $2.5 < z < 3.0$, to $0.24\pm0.0008$ at $2.0 < z < 2.5$, and finally to $-0.02\pm0.0004$ at $1.5 < z < 2.0$. Further exploration of the implication of the shape of the main sequence for all galaxies will be discussed in Sections \ref{sec:MSAll_vs_MSSF} and \ref{sec:Discussion}.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{mainSequence_All.pdf}
\caption{The SFR-M$_{\star}$ relation (2D histogram) and main sequence (pink circles) for all galaxies in our sample. The main sequence is the average SFR in individual mass bins, while errors on the main sequence are computed using the bootstrap resampling procedure described in Section \ref{sec:MainSequence}. The main sequence for all galaxies shows a flattening at the highest masses ($M_\star = 10^{11}$ to $10^{12}$M$_\odot$), and this flattening becomes more prominent as time progresses towards $z=1.5$. Colorbars show the number of galaxies in each cell of the 2D histogram, and gray shaded regions represent masses below our 95\% completeness limit. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust. Insets on the upper right of each panel show the total number ($N_{11}$) of galaxies in our sample with $M_\star \ge 10^{11}$M$_\odot$.}
\label{ms_all_plot}
\end{center}
\end{figure*}
\subsection{Isolating Star-Forming Galaxies and Measuring the Main Sequence for Star-Forming Galaxies}
\label{sec:SF_MainSequence}
To compute the star-forming galaxy main sequence, we first need to isolate the star-forming galaxy population from the collective green valley and quiescent population (see Figure \ref{ms_all_plot}). To do this, we require a method that both utilizes the quantities of interest in this study (stellar mass and star-formation rate) and does not place artificial limits on the width or scatter around the star-forming galaxy main sequence, as that would limit our ability to study the scatter around this relation later in this work (see Section \ref{sec:MS_scatter}).
\cite{Sherman2020b} used three methods to separate the star-forming and quiescent galaxy populations: a fixed specific star-formation rate (sSFR; $\rm sSFR = \rm SFR/\rm M_\star$) threshold, a fixed distance below the main sequence, and UVJ color-color selection. All three methods give quiescent fractions as a function of mass that are consistent within a factor of two. The fixed sSFR and distance below the main sequence methods both place artificial limits on the scatter around the main sequence by using a fixed threshold separating star-forming and quiescent galaxies. In \cite{Sherman2020b} the sSFR threshold was set to be sSFR = 10$^{-11} \rm yr^{-1}$ for all mass bins in our three redshift bins spanning $1.5 < z < 3.0$. Using the distance from the main sequence method, \cite{Sherman2020b} computed the main sequence in the same way as described here and considered all galaxies lying 1 dex or more below the main sequence to be quiescent. This method was an improvement over the fixed sSFR threshold because the threshold varied with stellar mass and redshift bin; however, it still set an artificial limit of 1 dex on the scatter around the star-forming galaxy main sequence. Alternatively, the UVJ color-color method seeks to separate galaxies into star-forming and quiescent populations by using their position in color-color parameter space. These populations were initially interpreted using evolutionary tracks (e.g., \citealt{Labbe2005}, \citealt{Wuyts2007}), and a boundary was later placed between them using the empirically-based locations of the two populations (e.g., \citealt{Williams2009}, \citealt{Muzzin2013}). Although this method is a common way to separate star-forming and quiescent galaxies (e.g., \citealt{Whitaker2014}, \citealt{Tomczak2016}), it relies heavily on where the boundary between star-forming and quiescent galaxies is drawn and how rest-frame U, V, and J fluxes are estimated during the SED fitting procedure.
An unbiased, meaningful way of isolating the star-forming galaxy population would be to employ the information provided by the SFR-M$_{\star}$ plane itself. Our large sample size allows us to make this separation by locating the transition between star-forming galaxies and galaxies in the green valley (e.g., \citealt{Martin2007}, \citealt{Salim2007}, \citealt{Wyder2007}) in individual small mass bins. In this work, we consider green valley galaxies to be those lying in the region of the SFR-M$_{\star}$ plane below the star-forming galaxy population and above the quenched galaxy population. We note that while some works select green valley galaxies in color space, we exclusively refer to this population as it relates to their location in the SFR-M$_{\star}$ plane.
Previous works with significantly smaller samples than ours have studied the green valley population by separating transitional green valley galaxies from star-forming and quiescent populations in the SFR-M$_{\star}$ plane. \cite{Pandya2017} made this separation at $z=0-3$ by first finding the main sequence (where the normalization is determined using the $M_\star = 10^{9} - 10^{9.5} $M$_\odot$ population and the slope is assumed to be unity), then defining a region from $0.6 -1.4$ dex below the main sequence which contained the green valley population. \cite{Jian2020} first found the median relationships for all star-forming and quiescent galaxies (where the former is simply the star-forming main sequence and the latter is a linear fit to the quiescent galaxy sample, where these populations are found using an iterative approach) and defined the center of the green valley to be the average of these linear fits for a sample of galaxies at $z=0.2-1.1$. They then adopted a fixed width for the green valley to define their transition galaxy population, and the upper limit of this region served as a fixed lower limit for the star-forming population. Both the methods from \cite{Pandya2017} and \cite{Jian2020} place artificial limits on the width of the star-forming galaxy population in the SFR-M$_{\star}$ plane, the same limitation encountered in \cite{Sherman2020b}.
A different, yet similarly limiting approach, is taken by \cite{Janowiecki2020} who define the star-forming population at $z=0.01-0.05$ by fitting un-constrained Gaussians to the sSFR distributions of galaxies in small mass bins. This is first done at low masses where galaxies are predominantly star-forming, then the modes of these Gaussians are extrapolated to higher masses to define the main sequence around which one-sided Gaussians with fixed modes are then fit to galaxies with sSFR greater than the mode. The star-forming population is defined to be the Gaussian distribution of galaxies around the star-forming main sequence (extrapolated modes), and they define green valley galaxies to be those $1\sigma$ below the ridge of the main sequence. Because the width of the best-fitting Gaussian in a given mass bin is determined solely from fitting a one-sided Gaussian to galaxies lying above the extrapolated mean sSFR, the lower bound of the star-forming population is reliant on the distribution of highly star-forming galaxies and the underlying assumption that star-forming galaxies adhere to a Gaussian distribution in the SFR-M$_{\star}$ plane.
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{findGreenValley_schematic_insetMS.pdf}
\caption{An example schematic of our method used to locate the transition between the star-forming and green valley galaxy populations. The labeled steps are as follows and they correspond to the same numbered steps in Section \ref{sec:SF_MainSequence}. Step 1: For all galaxies in a given mass bin (in this example, the $M_\star = 10^{11}$M$_\odot$ bin for $1.5 < z < 2.0$ galaxies) construct a histogram of specific star-formation rate values. Step 2: Interpolate the shape of this histogram using a univariate spline. Step 3: Find the local maximum at log(sSFR)$>-10.2$ as a rough estimate of the ridge of the main sequence. Step 4: Step bin-by-bin from higher to lower sSFR. Step 5: Stop bin-by-bin stepping when the interpolated spline goes from decreasing to increasing, and define this local minimum as the transition between the star-forming and green valley populations. We remind the reader that for every galaxy in our sample, we obtain a measure of dust-corrected SFR from our SED fitting procedure. The inset figure shows the SFR-$M_\star$ plane in the $1.5 < z < 2.0$ bin with the main sequence for all galaxies shown in pink and the dividing line (green with black outline) between the star-forming and green valley populations determined using the procedure described here and in Section \ref{sec:SF_MainSequence}. The results of implementing this procedure to isolate the star-forming population in all three redshift bins can be seen in Figure \ref{ms_sf_plot}. In the inset figure, the gray shaded region represents masses below our 95\% completeness limit, and the vertical dashed gray line represents $M_\star = 10^{12}M_\odot$, above which our results are unlikely to be robust.}
\label{gv_annotated_plot}
\end{center}
\end{figure}
In this work, we avoid biasing the scatter around the star-forming galaxy main sequence and use the values of interest (stellar mass and star-formation rate) to isolate the star-forming galaxy population, by employing a method that locates the transition between the star-forming galaxy population and galaxies lying in the green valley. We locate this transition in each of our small mass bins (mass bins have 0.25 dex width) within our three redshift bins spanning $1.5 < z < 3.0$ in order to isolate the star-forming galaxy population without using fixed cutoffs. We are uniquely suited to take this approach because each of our small mass bins contains enough high mass galaxies to robustly locate the transition between the star-forming and green valley populations.
We note that the local minima seen in the SFR-$M_\star$ plane (see Figure \ref{ms_all_plot}) are physically motivated, and they are not products of our SED fitting procedure. Our SED fitting method determines the best-fitting SED for each galaxy by combining a set of twelve SED templates in non-negative linear combination. There are no constraints placed on the contribution of each template, aside from requiring that the templates either provide a positive contribution or zero contribution to the final best-fitting SED. Therefore, since galaxies are fit to be in the transition regions between populations, these regions of parameter space are accessible to these template combinations. If galaxies are not fit to be in the transition regions between populations it is because those regions did not provide the best-fitting SED, not because those regions are inaccessible to the template SEDs.
Locating the transition between the star-forming and green valley populations is a five step process (see Figure \ref{gv_annotated_plot} for a schematic). First, in each of our small mass bins we construct a histogram of the sSFR of all galaxies in that bin. These histograms are binned using the Freedman-Diaconis Estimator \citep{Freedman1981}, which optimizes bin size based on sample size while being robust to outliers. This allows the bin size for each sSFR histogram in small mass bins to vary based on the number of galaxies in that small mass bin. Second, we interpolate over this histogram using a univariate spline with degree three (a cubic spline). This smoothed interpolation is robust to small amounts of bin-to-bin noise, and allows us to define the shape of the sSFR histogram in each small stellar mass bin. We note that the spline interpolation is based on the left edges of the sSFR histogram bins because we want all galaxies placed in an sSFR bin to have the same designation as star forming or green valley. If we were to place the separation between the star-forming and green valley populations at an sSFR in the center of an sSFR bin, then the galaxies in that bin would be placed into two separate categories. Third, we estimate the location of the ridge of the main sequence by finding the local maximum at log(sSFR)$>-10.2$, and fourth we step along our interpolated distribution from high to low sSFR values until the number of galaxies switches from decreasing to increasing. This switch occurs at the local minimum between the star-forming population and the green valley population, and finally (step five), we define this local minimum to be the transition between star-forming galaxies and the collective green valley and quiescent galaxy population in each of our small mass bins.
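A minimal Python sketch of this five-step procedure (assuming NumPy and SciPy; the function and its defaults are illustrative, not our production code) is given below:
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def sf_gv_transition(log_ssfr, ridge_floor=-10.2):
    """Return the log(sSFR) cut between star-forming and green-valley
    galaxies in one mass bin, or None if no clear minimum is found."""
    # Steps 1-2: Freedman-Diaconis histogram, cubic smoothing spline
    edges = np.histogram_bin_edges(log_ssfr, bins="fd")
    counts, _ = np.histogram(log_ssfr, bins=edges)
    spline = UnivariateSpline(edges[:-1], counts, k=3)  # left bin edges
    grid = np.linspace(edges[0], edges[-2], 2000)
    y = spline(grid)
    # Step 3: rough main-sequence ridge = local max at log(sSFR) > floor
    high = grid > ridge_floor
    ridge = grid[high][np.argmax(y[high])]
    # Steps 4-5: walk from the ridge toward lower sSFR until the spline
    # turns from decreasing to increasing; that local minimum is the cut
    below = grid < ridge
    gx, gy = grid[below][::-1], y[below][::-1]
    for i in range(1, len(gy) - 1):
        if gy[i] < gy[i - 1] and gy[i] < gy[i + 1]:
            return gx[i]
    return None  # no clear transition (e.g. sparsely populated bin)

# Synthetic bimodal bin: a star-forming peak plus a GV/quiescent tail
rng = np.random.default_rng(3)
log_ssfr = np.concatenate([rng.normal(-8.8, 0.4, 800),
                           rng.normal(-11.0, 0.6, 300)])
print(sf_gv_transition(log_ssfr))
\end{verbatim}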
We note that the procedure described above is only implemented when there are more than 100 galaxies in a small mass bin and the transition between the star-forming and green valley populations can be clearly defined. This required number of galaxies was determined through trial and error. We found that when there were fewer than 100 galaxies in a given mass bin, the sSFR histogram was too sparsely populated to reliably locate the transition between star-forming and green valley galaxies, if any exists. In that case, the threshold between star-forming and quiescent galaxies was set to sSFR = 10$^{-11} \rm yr^{-1}$. This only impacts mass bins well below our completeness limit or at the extreme high-mass end ($M_\star > 10^{12}$M$_\odot$), and, because this work focuses on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, this requirement of 100 galaxies in a small mass bin does not impact our results.
A potential source of uncertainty in isolating star-forming galaxies with this method arises from measurement uncertainty. Our method relies on accurately identifying the first inflection point leftward of the main sequence in the sSFR histogram in a given mass bin. If the true sSFR value for a galaxy is slightly different than the sSFR measured from our SED fitting procedure, the true inflection point in the sSFR histogram may be different than the one we measure. To investigate the impact of this type of uncertainty, we implement a procedure in which we draw a new sSFR (and associated stellar mass) for every galaxy in our science catalog using its parameter measurement errors given by our SED fitting procedure (see \citealt{Sherman2020} for details of this error measurement). We then repeat the above procedure to re-compute the location of the inflection point in the sSFR histogram in each mass bin. This procedure is performed 1000 times, thereby giving 1000 values, in each mass bin, of the local minimum between the star-forming and green valley populations. We are then able to investigate how different inflection point locations impact our measurements of the main sequence for star-forming galaxies and the scatter around that relation. Through this procedure we find that the typical draw gives an sSFR inflection point within a factor of $\sim2$ of our best-fit measurement for $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. Because there are relatively few galaxies around the local minimum between the star-forming and green valley populations, we find that our measured main sequence for star-forming galaxies and the scatter around that relation are robust (within factors of $\sim1.3$ and $\sim2$, respectively) to small changes in the value for the local minimum in the sSFR histogram. Although we allow galaxies to move between mass bins during this test, we note that our best-fit measurements of the (star-forming) galaxy main sequence and scatter around the star-forming galaxy main sequence, which are presented throughout this work, do not account for scatter between mass bins (we remind the reader that typical stellar mass errors are $\pm0.08$ dex for $1.5 < z < 3.0$ galaxies above our estimated mass completeness limits, which is significantly smaller than our 0.25 dex bin size).
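A sketch of this perturbation test, reusing the sf_gv_transition helper from the previous sketch and assuming Gaussian measurement errors (an assumption of this sketch), could look as follows:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def perturbed_transitions(log_ssfr, err_ssfr, log_mass, err_mass,
                          bin_edges, n_draws=1000):
    """Re-locate the SF/GV cut in each mass bin after perturbing every
    galaxy's sSFR and mass by its SED-fitting uncertainty."""
    cuts = {i: [] for i in range(len(bin_edges) - 1)}
    for _ in range(n_draws):
        s = rng.normal(log_ssfr, err_ssfr)    # perturbed sSFR
        m = rng.normal(log_mass, err_mass)    # galaxies may change bins
        for i in range(len(bin_edges) - 1):
            in_bin = (m >= bin_edges[i]) & (m < bin_edges[i + 1])
            if in_bin.sum() > 100:            # sparse-bin guard (see text)
                cut = sf_gv_transition(s[in_bin])
                if cut is not None:
                    cuts[i].append(cut)
    return cuts
\end{verbatim}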
To confirm that our method of separating star-forming galaxies from the collective population of green valley and quiescent galaxies is consistent with other methods of isolating star-forming galaxies, we compare our collective fraction of green valley and quiescent galaxies to the fractions determined by \cite{Sherman2020b}, who used three methods (sSFR-selected, main sequence - 1 dex selected, and UVJ-selected quiescent fractions; Figure \ref{qf_comp}). The agreement is strongest with the quiescent fraction computed using the main sequence - 1 dex method. This is expected, as this method was most effective at separating star-forming galaxies from the collective green valley and quiescent galaxy populations in \cite{Sherman2020b}. The method implemented in this work is an improvement over the main sequence - 1 dex method, as it does not impose an arbitrary distance below the main sequence as a criterion for isolating star-forming galaxies.
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{QF_zsplit_sSFR_QVJ_MSdex_gvTop.pdf}
\caption{The quiescent fraction as a function of stellar mass determined using the transition between star-forming and green valley galaxies to separate star-forming systems from the collective green valley and quiescent populations (green triangles). Also plotted are the results from \protect\cite{Sherman2020b} who determined the quiescent fraction in three ways: sSFR-selected (pink circles), main sequence - 1 dex selected (gold pentagons), and UVJ-selected (purple squares). The four measurements of the quiescent fraction give consistent results across our three redshift bins spanning $1.5 < z < 3.0$. Gray shaded regions represent masses below our 95\% completeness limit. Error bars represent Poisson errors. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust. Insets on the upper left of each panel show the total number ($N_{11}$) of galaxies in our sample with $M_\star \ge 10^{11}$M$_\odot$.}
\label{qf_comp}
\end{center}
\end{figure}
With a population of star-forming galaxies identified, we are able to compute the star-forming galaxy main sequence (Figure \ref{ms_sf_plot}), which is the average SFR in each mass bin, with error bars computed using the bootstrap resampling procedure described in Section \ref{sec:MainSequence}. The star-forming galaxy main sequence does not show a significant flattening at the high mass end ($M_\star = 10^{11}$ to $10^{12}$M$_\odot$). Its power law slope, computed using an ordinary least squares fit to the star-forming galaxy main sequence values over the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, evolves mildly from $0.47\pm0.0011$ at $2.5 < z < 3.0$, to $0.46\pm0.0001$ at $2.0 < z < 2.5$, and finally to $0.35\pm0.0013$ at $1.5 < z < 2.0$.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{mainSequence_SF_gvTop.pdf}
\caption{The SFR-M$_{\star}$ relation (2D histogram) and main sequence (pink circles) for star-forming galaxies in our sample. Star-forming galaxies are selected by locating the transition between star-forming and green valley populations, then removing galaxies below this transition, as is described in Section \ref{sec:SF_MainSequence}. The star-forming main sequence is the average SFR in individual mass bins, while errors on the star-forming main sequence are computed using the bootstrap resampling procedure described in Section \ref{sec:MainSequence}. Unlike the main sequence for all galaxies, the star-forming galaxy main sequence does not show a strong evolution in the high mass end slope from $z=3.0$ to $z=1.5$. Colorbars show the number of galaxies in each cell of the 2D histogram, and gray shaded regions represent masses below our 95\% completeness limit. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust. Insets on the upper right of each panel show the number ($N_{11}$) of star-forming galaxies in our sample with $M_\star \ge 10^{11}$M$_\odot$.}
\label{ms_sf_plot}
\end{center}
\end{figure*}
\subsection{Implications of the Growing Green Valley and Quiescent Populations}
\label{sec:MSAll_vs_MSSF}
As is seen in Figure \ref{ms_all_plot}, our large sample of galaxies in the SFR-$M_\star$ plane shows three distinct populations of galaxies: star-forming, green valley, and quiescent. In Section \ref{sec:SF_MainSequence} we described a novel method for using the transitions between these populations to isolate the star-forming galaxy population. This procedure can also be used to find the local minimum in sSFR space between the green valley and quiescent populations. To locate this transition, we employ a version of the five step procedure described in Section \ref{sec:SF_MainSequence}, with a small modification to step three. Here, we (1) construct an sSFR histogram in each mass bin, (2) interpolate using a smoothed cubic spline, (3) find the local maximum of the green valley population (local maximum between log(sSFR)$>-12.0$ and the sSFR at which the local minimum occurs between the green valley and star-forming populations, as determined in Section \ref{sec:SF_MainSequence}), (4) step bin-by bin from high to low sSFR, and finally (5) stop stepping when a local minimum in the spline is found. This local minimum is the transition between the green valley and quiescent populations (Figure \ref{gvTrans_moreBins_plot}).
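A sketch of this variant (same assumptions and caveats as the earlier sf_gv_transition sketch; only the modified step three differs) is:
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def gv_q_transition(log_ssfr, sf_gv_cut, floor=-12.0):
    """Return the log(sSFR) cut between green-valley and quiescent
    galaxies, given the SF/GV cut found previously for this mass bin."""
    edges = np.histogram_bin_edges(log_ssfr, bins="fd")
    counts, _ = np.histogram(log_ssfr, bins=edges)
    spline = UnivariateSpline(edges[:-1], counts, k=3)
    grid = np.linspace(edges[0], edges[-2], 2000)
    y = spline(grid)
    # Modified step 3: green-valley peak between the floor and the SF/GV cut
    window = (grid > floor) & (grid < sf_gv_cut)
    peak = grid[window][np.argmax(y[window])]
    # Steps 4-5: walk down from the peak to the next local minimum
    below = grid < peak
    gx, gy = grid[below][::-1], y[below][::-1]
    for i in range(1, len(gy) - 1):
        if gy[i] < gy[i - 1] and gy[i] < gy[i + 1]:
            return gx[i]
    return None
\end{verbatim}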
In Figure \ref{gvTrans_moreBins_plot}, we show that the sSFR distributions in individual mass bins can provide more information about the buildup of the collective green valley and quiescent populations as time progresses and that higher mass bins have larger collective populations of quiescent and green valley galaxies than star-forming galaxies. This result is consistent with measures of the quiescent fraction from \cite{Sherman2020b}, who showed that at these redshifts and stellar masses, the quiescent fraction increases from $z=3.0$ to $z=1.5$ at the highest masses and that higher mass galaxies ($M_\star = 10^{12}$M$_\odot$) at a given redshift have a larger quiescent fraction than lower mass systems ($M_\star = 10^{11}$M$_\odot$). The method used in this work to isolate star-forming galaxies by locating the transition between star-forming and green valley galaxies is an improvement over the main sequence - 1 dex technique used by \cite{Sherman2020b} as it more meaningfully isolates star-forming galaxies from the collective green valley and quiescent population without employing an ad hoc threshold below the main sequence.
\begin{figure*}
\begin{center}
\includegraphics[width=6in]{sfrMstarPlane_gv_q_transitions_wSpline.pdf}
\caption{Specific star-formation rate distributions for individual mass bins in the SFR-$M_\star$ plane (purple histograms), with the splines used to interpolate these distributions (solid pink lines). The three vertical columns of panels are for each of our three redshift bins spanning $z=1.5$ to $z=3.0$. The top row shows the log($M_{\star}$/$\rm M_{\odot}$) = 11.2 bin, and the bottom row shows the log($M_{\star}$/$\rm M_{\odot}$) = 11.7 bin. In each panel, star-forming galaxies fall to the right of the vertical dashed green line, green valley galaxies are between the vertical dashed green line and the vertical dash-dot pink line, and quiescent galaxies lie to the left of the vertical dash-dot pink line. The procedure used to identify the location of the transition between star-forming and green valley galaxies and transition between green valley and quiescent galaxies are described in Sections \ref{sec:SF_MainSequence} and \ref{sec:MSAll_vs_MSSF}, respectively. As we move from higher to lower redshift, the buildup of the populations of green valley and quiescent galaxies becomes prominent. We again note that our SED fitting procedure provides a measure of dust-corrected SFR for every galaxy in our $K_s$-selected sample. }
\label{gvTrans_moreBins_plot}
\end{center}
\end{figure*}
Our empirical main sequences measured for all galaxies (see Section \ref{sec:MainSequence}) and star-forming galaxies (see Section \ref{sec:SF_MainSequence}) are compared in Figure \ref{ms_all_v_sf_plot}. In our two highest redshift bins ($2.0 < z < 2.5$ and $2.5 < z < 3.0$), where only $\sim20-40\%$ of massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies are members of the collective green valley and quiescent population (Figure \ref{qf_comp}), the total galaxy main sequence is higher than the star-forming galaxy main sequence by up to a factor of 1.5. At lower redshifts ($1.5 < z < 2.0$) where the collective green valley and quiescent population are $\sim40-70\%$ of the total massive galaxy population (Figure \ref{qf_comp}), the star-forming galaxy main sequence is a factor of $1.5-3$ higher than the main sequence for the total galaxy population. The significant buildup of the collective green valley and quiescent galaxy populations as a function of redshift and stellar mass leads to the flattening of the massive end slope of the main sequence for all galaxies as time progresses from $z=3.0$ to $z=1.5$.
\begin{figure}
\begin{center}
\includegraphics[width=3.3in]{mainSequence_all_SF_bootstrapFitsOnly.pdf}
\caption{The main sequence for all galaxies (pink circles) and star-forming galaxies (purple squares) in our sample. Star-forming galaxies are selected by locating the transition between star-forming and green valley populations, then removing galaxies below this transition, as is described in Section \ref{sec:SF_MainSequence}. The (star-forming) main sequence is the average SFR in individual mass bins, while errors on the (star-forming) main sequence are computed using the bootstrap resampling procedure described in Section \ref{sec:MainSequence}. We note that error bars are included, however they are often smaller than the symbol. At early epochs ($z > 2$) the star-forming galaxy main sequence is up to a factor of 1.5 higher than the main sequence for all galaxies, and at later epochs ($1.5 < z < 2.0$), the star-forming galaxy main sequence is a factor of $1.5 - 3$ higher than the main sequence for all galaxies. Gray shaded regions represent masses below our 95\% completeness limit. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust.}
\label{ms_all_v_sf_plot}
\end{center}
\end{figure}
Our sample, which is used to study the main sequence for both all galaxies and star-forming galaxies, contains galaxies with $M_\star > 10^{12}$M$_\odot$, particularly in the $2.0 < z < 2.5$ and $2.5 < z < 3.0$ bins where the comoving volume observed by our study is larger. For this extreme high-mass population, we see main sequence relations with steeper slopes at $M_\star > 10^{12}$M$_\odot$ than are seen at stellar masses $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. Individual mass bins above $M_\star = 10^{12}$M$_\odot$ have fewer than 100 galaxies, making robust studies of this population challenging. \cite{Sherman2020b} also showed that the impact of uncertainties in photometric redshifts and Eddington bias on results for this extreme population is likely to be large (see \citealt{Sherman2020b} and their Appendix Figure A1) and that some of this population may be low-redshift interlopers. In this work, we focus on galaxies in the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and note that high-resolution imaging and spectroscopic follow-up of these extreme high-mass objects is necessary to better understand their properties and behavior in the SFR-$M_\star$ plane.
\section{Scatter Around the Star-Forming Galaxy Main Sequence}
\label{sec:MS_scatter}
In the absence of stochastic processes (e.g., mergers, gas accretion from the cosmic web, stellar and AGN feedback), the relationship between stellar mass and star-formation rate for star-forming galaxies should be relatively tight, with scatter around that relationship due only to measurement uncertainty (e.g., \citealt{Caplar2019}, \citealt{Matthee2019}). Therefore, measures of the scatter around the star-forming galaxy main sequence provide insights into the importance of stochastic processes in driving galaxy evolution. \cite{Sherman2020b} outlined how different stochastic processes could play a key role in driving the evolution of the massive galaxy population, where mergers are likely drivers of early mass buildup and environmental processes (e.g., ram pressure stripping, tidal stripping, harassment) are likely to suppress star-formation at $z < 2$, when emerging clusters develop their intracluster medium (ICM).
We measure the total scatter around the star-forming galaxy main sequence (Figure \ref{ms_scatter}) without assuming either a functional form of the main sequence or a fixed criterion for isolating star-forming galaxies. This is a significant improvement over previous studies where the selection of the star-forming galaxy population was biased and measures of the scatter often assumed an underlying distribution of galaxies in the SFR-$M_\star$ plane (such as a Gaussian; see Section \ref{subsec:Scatter_CompWObs} for further comparison with previous empirical results). As is described in Section \ref{sec:SF_MainSequence}, our star-forming galaxy population is selected by locating the transition between the star-forming and green valley populations in small mass bins, and the star-forming galaxy main sequence is the average SFR of the star-forming galaxy population in each mass bin. This approach is made possible by our large sample of 28,469 massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies spanning $1.5 < z < 3.0$.
The total scatter around the star-forming galaxy main sequence measured in each of our small mass bins is simply the difference between the 84$^{\rm th}$ and 16$^{\rm th}$ percentile of the distribution of SFR values for star-forming galaxies in each mass bin. We also compute the upper scatter (difference between 84$^{\rm th}$ percentile of SFR and the star-forming galaxy main sequence value in a given mass bin) and lower scatter (difference between the star-forming galaxy main sequence value and 16$^{\rm th}$ percentile of SFR in a given mass bin) to provide a closer comparison with previous works.
Additionally, we can approximate the intrinsic scatter around the star-forming galaxy main sequence by accounting for the $\pm0.18$ dex measurement uncertainty in SFR from our SED fitting procedure. Our SFR error estimates are determined by drawing 100 SEDs from the best-fit SED's template error distribution (see \cite{Sherman2020} for a detailed description of this procedure), and therefore, this error estimate takes into account uncertainties in other fundamental measurements, such as extinction. We do not find that the measurement uncertainty in SFR varies as a function of stellar mass, indicating that removing the scatter due to measurement uncertainty will not change the trends (or lack thereof) observed in the total, upper, and lower scatter as a function of mass and redshift.
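For concreteness, these scatter measurements reduce to percentile arithmetic in each mass bin. The sketch below (Python; illustrative names) computes the total, upper, and lower observed scatter, and approximates the intrinsic scatter by removing the 0.18 dex measurement uncertainty in quadrature, which reproduces the observed-to-intrinsic conversions quoted in the next paragraph (e.g., 0.5 dex observed corresponds to 0.47 dex intrinsic):
\begin{verbatim}
import numpy as np

SIGMA_MEAS = 0.18  # dex; SFR measurement uncertainty from SED fitting

def scatter_in_bin(log_sfr, ms_value):
    # total observed scatter: 84th minus 16th percentile of log(SFR)
    p16, p84 = np.percentile(log_sfr, [16, 84])
    total = p84 - p16
    upper = p84 - ms_value   # scatter above the main sequence value
    lower = ms_value - p16   # scatter below the main sequence value
    # approximate intrinsic scatter: remove the measurement
    # uncertainty in quadrature
    intrinsic = np.sqrt(total**2 - SIGMA_MEAS**2)
    return total, upper, lower, intrinsic
\end{verbatim}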
We measure the total observed scatter to be $\sim0.5-1.0$ dex (corresponding to $\sim0.47-0.98$ dex intrinsic scatter) and we find that the total observed scatter increases from low to high masses ($M_\star = 10^{11}$ to $10^{12}$M$_\odot$) by less than a factor of three in each of our three redshift bins. The scatter does not show significant evolution as a function of redshift across our three redshift bins spanning $1.5 < z < 3.0$.
In each of our redshift bins, the observed upper scatter is fairly constant as a function of mass and redshift, with a value of $\sim0.3$ dex (corresponding to $\sim0.24$ dex intrinsic scatter), consistent with values for the observed scatter found by previous studies (see Section \ref{subsec:Scatter_CompWObs}). The lower scatter around the main sequence is larger than the upper scatter in all redshift bins. Our result shows that the often assumed symmetrical Gaussian distribution of star-forming galaxies around the star-forming galaxy main sequence does not hold true at these redshifts ($1.5 < z < 3.0$) for massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{mainSequence_SF_gvTop_scatterPanel.pdf}
\caption{Top Row: The total scatter (blue shaded region) around the star-forming galaxy main sequence (pink) overlaid on the distribution of star-forming galaxies (2D histogram; colorbar indicates the number of galaxies in each 2D bin) in the SFR-$M_\star$ plane. The upper (lower) bound of the blue shaded region is the 84th (16th) percentile of the SFR distribution in a given mass bin. Insets on the upper right of each panel in the top row show the number ($N_{11}$) of galaxies in the star-forming population in our sample with $M_\star \ge 10^{11}$M$_\odot$. Bottom Row: The total (squares), upper (circles), and lower (pentagons) observed scatter around the star-forming galaxy main sequence. The total scatter shows a modest increase with increasing stellar mass (less than a factor of three from $M_\star = 10^{11}$ to $10^{12}$M$_\odot$ in each redshift bin), and the total scatter is fairly constant across our three redshift bins from $z=1.5$ to $z=3.0$. In every redshift bin, the lower scatter is larger than the upper scatter by up to a factor of 3. Gray shaded regions represent masses below our 95\% completeness limit. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust. }
\label{ms_scatter}
\end{center}
\end{figure*}
\section{Comparison With Previous Observations}
\label{sec:CompWObs}
\subsection{Comparison of the Main Sequence for All Galaxies and Star-Forming Galaxies with Previous Observations}
\label{subsec:MS_CompWObs}
Since the first work referencing the galaxy main sequence by \cite{Noeske2007}, many works have implemented different methods of measuring the main sequence for all galaxies and isolating star-forming galaxies to measure the star-forming galaxy main sequence. Rapid innovation in galaxy surveys over the past decade has produced a number of new methods; however, this makes true one-to-one comparisons with previous works difficult.
In this work we have leveraged our sample of massive galaxies, the largest uniformly selected set compiled to date, to measure the galaxy main sequence for the total population and star-forming galaxies in small mass bins at the high mass end. A significant benefit of our large sample is that we do not need to adopt a functional form of the main sequence, and we can isolate star-forming galaxies in a meaningful way without prior assumptions. Moreover, the large area probed by our study also renders errors due to cosmic variance negligible.
The two works we compare with (\citealt{Whitaker2014} and \citealt{Tomczak2016}) present the main sequence for both the total galaxy population and star-forming galaxy population. The study from \cite{Whitaker2014} focused on the low-mass end of the main sequence using galaxies in the CANDELS/3D-HST fields. Their main sequence values in individual mass bins were computed using stacked UV + IR luminosities (with $L_{\rm IR}$ from \textit{Spitzer}-MIPS 24$\mu m$ photometry and assuming a \cite{Chabrier2003} IMF), and they separated star-forming and quiescent galaxies using UVJ colors. Their work finds that the main sequence is best characterized by a broken power law fit, however for comparison with our empirical result (Figure \ref{ms_vObs_plot}), we utilize their average stacked SFR values in each stellar mass bin rather than the functional fit to those data. We find that the main sequence for all galaxies and star-forming galaxies from \cite{Whitaker2014} are factors of $1.5-4.5$ and $1.7-3$ higher than our empirical main sequences for all galaxies and star-forming galaxies, respectively, at $1.5 < z < 2.5$ for $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. The study from \cite{Whitaker2014} does not investigate the main sequence in our highest redshift bin ($2.5 < z < 3.0$).
\cite{Tomczak2016} performed a similar study to that from \cite{Whitaker2014}, using a stacking analysis of UV + IR luminosities (also with $L_{\rm IR}$ from \textit{Spitzer}-MIPS 24$\mu m$ photometry and using a \cite{Chabrier2003} IMF) to derive the average SFR (main sequence values) in small mass bins for galaxies in ZFOURGE. Similar to \cite{Whitaker2014}, \cite{Tomczak2016} separated star-forming and quiescent galaxies using UVJ color. When comparing with our empirical result, we find that the results from \cite{Tomczak2016} are in general agreement, within a factor of $\sim1.5$, with our main sequence for all galaxies and star-forming galaxies in our three redshift bins spanning $1.5 < z < 3.0$ for $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. We note that in the $2.5 < z < 3.0$ bin, the highest masses probed by \cite{Tomczak2016} only reach our mass completeness limit. Therefore, comparisons between our empirical result and that from \cite{Tomczak2016} in the $2.5 < z < 3.0$ bin are not informative.
There are several important caveats to the comparisons presented above that must be noted. First, these studies both utilize data from similar legacy fields, often the same fields with updated photometry, spectroscopy, or different modeling techniques. The CANDELS/3D-HST fields ($\sim900$ arcmin$^2$) used by \cite{Whitaker2014} include AEGIS, COSMOS, GOODS-N, GOODS-S, and UDS. The ZFOURGE fields ($\sim400$ arcmin$^2$) used by \cite{Tomczak2016} include CDF-S, COSMOS, and UDS. Second, these legacy fields, while rich in spectroscopy and multi-wavelength photometry allowing for strongly constrained SEDs, are small-area studies with small samples of galaxies at the highest masses. Across the three redshift bins spanning $1.5 < z < 3.0$, the study from \cite{Tomczak2016} has 81 $M_\star \ge 10^{11}$M$_\odot$ galaxies in their total galaxy population. The publicly available CANDELS/3D-HST catalog (\citealt{Brammer2012}, \citealt{Skelton2014}) used by \cite{Whitaker2014} has 533 $M_\star \ge 10^{11}$M$_\odot$ total galaxies spanning $1.5 < z < 2.5$, however \cite{Whitaker2014} may have only used a subsample of these objects. Third, the small areas probed by these studies may be strongly impacted by the effects of cosmic variance. For $M_\star \ge 10^{11}$M$_\odot$ the cosmic variance is $\sim50-70\%$ for studies of this size \citep{Moster2011}. For comparison, our 17.5 deg$^2$ study has 28,469 $M_\star \ge 10^{11}$M$_\odot$ galaxies between $1.5 < z < 3.0$. This effectively eliminates errors due to cosmic variance. Finally, the stacked 24$\mu$m-based SFR values used by \cite{Whitaker2014} and \cite{Tomczak2016} may be systematically different from our rest-frame UV-based SFR values determined for individual galaxies through SED fitting.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{mainSequence_AllSF_vObs.pdf}
\caption{Our empirical main sequence for all galaxies (top row) and star-forming galaxies (bottom row) compared with results from previous observations. We find similar results to those from \protect\cite{Tomczak2016} for both the total (top row) and star-forming (bottom row) galaxy populations. The results from \protect\cite{Whitaker2014} for both the total (top row) and star-forming (bottom row) galaxy populations are higher than our empirical results by a factor of $\sim1.5-6.5$. Gray shaded regions represent masses below our 95\% completeness limit. Insets on the upper right of each panel show the number ($N_{11}$) of galaxies for the total population (top row) and star-forming population (bottom row) in our sample with $M_\star \ge 10^{11}$M$_\odot$. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust.}
\label{ms_vObs_plot}
\end{center}
\end{figure*}
\subsection{Comparison of the Scatter Around the Star-Forming Galaxy Main Sequence with Previous Observations}
\label{subsec:Scatter_CompWObs}
Using our method of isolating the star-forming galaxy population by locating the transition between the star-forming and green valley populations in the SFR-$M_\star$ plane, we investigated the scatter around the star-forming galaxy main sequence in Section \ref{sec:MS_scatter}. Our result shows that the scatter around the star-forming galaxy main sequence does not evolve significantly either as a function of stellar mass or redshift over the stellar mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$ and redshifts $1.5 < z < 3.0$. Our method of isolating star-forming galaxies does not place artificial constraints on the lower boundary of this population, or on the underlying distribution of star-forming galaxies in the SFR-$M_\star$ plane, and we find that star-forming galaxies are not normally distributed around the star-forming galaxy main sequence. This is a significant finding as the distribution of star-forming galaxies in the SFR-$M_\star$ plane is often assumed to be a Gaussian by previous works.
Comparisons with previous results for the scatter around the star-forming galaxy main sequence are challenging as a consensus has not been reached by previous works when it comes to measuring the scatter. These measurements are further complicated by the different approaches to separating star-forming galaxies from the total population (e.g., different color indicators or fixed thresholds) and different ways of measuring the main sequence (e.g., stacking analyses, average SFR, median SFR, extrapolation from low to high masses, assumed functional forms).
\cite{Schreiber2015} investigated the scatter of galaxies above the main sequence using individual \textit{Herschel}-detected galaxies in the CANDELS-\textit{Herschel} fields out to $z=4$. They found the scatter above the main sequence to be 0.32 dex with little evolution as a function of mass or redshift. \cite{Rodighiero2011} used a sample of $1.5 < z < 2.5$ BzK color-selected star-forming galaxies in COSMOS and found the scatter around the main sequence to be 0.24 dex, assuming a Gaussian distribution of galaxies around the star-forming galaxy main sequence. \cite{Popesso2019} took yet another approach, whereby they used an IR-selected sample of star-forming galaxies in the CANDELS+GOODS fields out to $z=2.5$ and only fit for the normalization of the main sequence, adopting the slope from the local relation. They found that the scatter around the star-forming galaxy main sequence increases from $\sim0.3$ dex to $\sim0.4$ dex as a function of mass for $1.5 < z < 2.5$ galaxies.
Results from these previous studies are broadly consistent with our result, however detailed comparisons are difficult due to the very different methods used. Additionally, as described above, our approach to measuring the scatter around the star-forming galaxy main sequence is a significant improvement over previous works as it does not rely on assumed functional forms of the star-forming galaxy main sequence, ad hoc cutoffs for selecting the star-forming galaxy population, or assuming a Gaussian distribution of star-forming galaxies in the SFR-$M_\star$ plane.
\section{Comparison With Theoretical Models}
\label{sec:CompWTheory}
In this work, we have explored the main sequence for all galaxies (Section \ref{sec:MainSequence}), used a novel approach to identify the star-forming galaxy population in the SFR-M$_\star$ plane (Section \ref{sec:SF_MainSequence}), and investigated the scatter around the star-forming galaxy main sequence (Section \ref{sec:MS_scatter}) in an unbiased way, with our focus placed on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. Theoretical models, such as hydrodynamical simulations and semi-analytic models (SAMs), seek to implement physical processes that drive galaxy evolution and, therefore, insights from theoretical models may allow for interpretation of the physical processes driving observed trends. Large volume empirical studies, such as the study presented in this work, can likewise provide benchmarks for these models.
Our comparison will focus on the hydrodynamical models SIMBA \citep{Dave2019} and IllustrisTNG (\citealt{Pillepich2018b}, \citealt{Springel2018}, \citealt{Nelson2018}, \citealt{Naiman2018}, \citealt{Marinacci2018}), as well as the semi-analytic model SAG \citep{Cora2018}. Details about each model can be found in their respective publications, as well as in \cite{Sherman2020b}, and the key points will briefly be described here. SIMBA has a 100 Mpc/h box with mass resolution $m_{\rm gas} = 1.82\times10^7~M_{\odot}$ and we utilize the total stellar mass and SFR for each galaxy in their group catalog. IllustrisTNG offers several volumes, and we use the largest box that is $\sim$300$^3$ Mpc$^3$ (TNG300) with mass resolution $m_{\rm baryon}=1.1\times10^7~M_{\odot}$ and masses and SFR measured within twice the stellar half-mass radius (the $2 \times R_{1/2}$ aperture; see \citealt{Sherman2020b} for a detailed study of aperture types in IllustrisTNG). SAG populates halos in the MultiDark-Planck2 (MDPL2) dark matter-only simulation, and we utilize an updated version of the model (S. Cora, private communication) which has been run on 9.4\% of the 1.0 $h^{-1}$Gpc box available in MDPL2. For SAG, we use total masses and SFR for galaxies in their group catalog. The group catalogs for all three models hard-code SFR $=0$ when the SFR for an object falls below the resolution limit. To account for this, following \cite{Donnari2019} and \cite{Sherman2020b}, we assign these objects a random SFR between $10^{-5}$ and $10^{-4}$\,M$_{\odot}$\,yr$^{-1}$ before performing our analysis.
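As a minimal sketch, the reassignment of these unresolved SFR values can be written as follows (Python; whether the draw is uniform or log-uniform in SFR is not specified above, and a log-uniform draw is assumed here for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def assign_sfr_floor(sfr):
    # replace hard-coded SFR = 0 entries with a random value
    # between 1e-5 and 1e-4 Msun/yr (log-uniform draw assumed)
    sfr = np.asarray(sfr, dtype=float).copy()
    zero = (sfr == 0.0)
    sfr[zero] = 10.0 ** rng.uniform(-5.0, -4.0, size=zero.sum())
    return sfr
\end{verbatim}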
The above models have significantly smaller volumes than our empirical study, leading to significantly smaller numbers of galaxies with stellar masses $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. Because of this, we will compare our empirical main sequence for all galaxies with the corresponding relation from the theoretical models, with a focus on galaxies with masses spanning the range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$. At this time, a fair and informative comparison cannot be done between our empirical star-forming galaxy main sequence and results from theoretical models, as the theoretical models do not have enough galaxies in small mass bins spanning $M_\star = 10^{11}$ to $10^{12}$M$_\odot$ across our redshift range of interest ($1.5 < z < 3.0$) to define the relation or study the scatter around it.
The main sequence for all galaxies for each of the three theoretical models is computed in the same way as our empirical main sequence (see Section \ref{sec:MainSequence}). We find that in all three of our redshift bins spanning $1.5 < z < 3.0$, the hydrodynamical model SIMBA is in fair agreement, within a factor of $\sim1.5$, with our empirical main sequence for all galaxies, but does not show a flattening at the highest masses in the lowest redshift bin ($1.5 < z < 2.0$). The SAM SAG is up to a factor of $\sim3$ higher than our empirical result and starts to show a flattening slope at the highest masses in the $1.5 < z < 2.0$ bin. The hydrodynamical model IllustrisTNG is below our empirical result by up to a factor of $\sim10$ and shows a strong turnover at the highest masses in the $2.0 < z < 3.0$ bins, where our empirical result does not show this trend. These results are consistent with those from \cite{Sherman2020b} who showed that SIMBA and SAG under-estimate the fraction of the collective green valley and quiescent galaxy population compared with the star-forming galaxy population at the high-mass end, while the IllustrisTNG model was shown to over-predict the fraction of massive galaxies lying in the collective green valley and quiescent galaxy population.
Further exploration of the implication of comparisons with results from theoretical models will be discussed in Section \ref{sec:Discussion}. We also recognize that a single line (in this case, the main sequence for all galaxies) is not an adequate representation of the full distribution of galaxies in the SFR-$M_\star$ plane, and we refer the reader to Appendix \ref{app:TheoryComp} for a comparison of our empirical data in the SFR-$M_\star$ plane and those from the models.
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{mainSequence_All_vTheory.pdf}
\caption{Our empirical main sequence for all galaxies compared with results from hydrodynamical models SIMBA and IllustrisTNG and SAM SAG. The main sequence for all galaxies from SIMBA is within a factor of $\sim1.5$ of our empirical result and that from SAG is higher than our empirical result by up to a factor of $\sim3$. SIMBA does not show a flattening at the highest masses by $z=1.5$, while SAG begins to show a flattening high-mass slope towards $z=1.5$. The main sequence for all galaxies from IllustrisTNG is lower than our empirical result by up to a factor of $\sim10$ and shows a strong turnover at the highest masses at $2.0 < z < 3.0$ that is not seen in our empirical result. Gray shaded regions represent masses below our 95\% completeness limit. We emphasize that the results presented in this work focus on the mass range $M_\star = 10^{11}$ to $10^{12}$M$_\odot$, and that results above $M_\star = 10^{12}M_\odot$ (vertical dashed gray line) are unlikely to be robust.}
\label{ms_vTheory_plot}
\end{center}
\end{figure*}
\section{Discussion}
\label{sec:Discussion}
In this work, we presented the main sequence for all galaxies (Section \ref{sec:MainSequence}), the main sequence for star-forming galaxies (Section \ref{sec:SF_MainSequence}), and an unbiased measurement of the scatter around the star-forming galaxy main sequence (Section \ref{sec:MS_scatter}).
Our large sample of massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies has allowed us to separate star-forming galaxies from the green valley and quiescent populations in a natural way without any prior assumptions about the data. With our approach, we are able, for the first time, to present the star-forming galaxy main sequence and measure the scatter about the mean relation without making assumptions about the functional form of the main sequence or the distribution of star-forming galaxies in the SFR-$M_\star$ plane.
The slope of the main sequence for all galaxies at the high-mass end provides important information about the downsizing \citep{Cowie1996} of the massive galaxy population. \cite{Sherman2020b} showed that the massive ($M_\star \ge 10^{11}$M$_\odot$) galaxy population becomes increasingly quiescent from $z=3.0$ to $z=1.5$, and that more massive galaxies ($M_\star \sim 10^{12}$M$_\odot$) have a higher quiescent fraction than less massive ($M_\star \sim 10^{11}$M$_\odot$) systems. The increased flattening of the high-mass end slope of the main sequence for all galaxies as time progresses from $z=3.0$ to $z=1.5$ traces the downsizing of the massive galaxy population (Figures \ref{ms_all_plot} and \ref{ms_all_v_sf_plot}). A further investigation of the buildup of the collective green valley and quiescent galaxy populations as a function of stellar mass and redshift can be seen in Figure \ref{gvTrans_moreBins_plot}, further supporting the downsizing scenario.
In contrast to the main sequence for all galaxies, the massive end of the star-forming galaxy main sequence does not demonstrate a strong flattening as time progresses (Figure \ref{ms_all_v_sf_plot}). We have measured that the slope of the massive star-forming galaxy main sequence is rather constant across $1.5 < z < 3.0$, with the slope (and normalization) only beginning to decrease at $1.5 < z < 2.0$. This suggests that, although there is a decrease in the fraction of massive galaxies that are star-forming, those that remain highly star-forming at $1.5 < z < 2.0$ have fairly similar specific star-formation rates as massive star-forming galaxies at earlier epochs ($2.0 < z < 3.0$).
Our finding that the total scatter around the star-forming galaxy main sequence remains relatively constant from $z=3.0$ to $z=1.5$ and as a function of stellar mass at the high-mass end ($M_\star = 10^{11}$ to $10^{12}$M$_\odot$) is in alignment with our result showing that the slope of the star-forming galaxy main sequence does not significantly flatten as time progresses towards $z=1.5$. The total scatter around the star-forming galaxy main sequence is thought to trace the stochasticity of processes driving star-forming galaxy evolution (e.g., \citealt{Caplar2019}, \citealt{Matthee2019}). Additionally, with our unbiased approach to identifying the star-forming galaxy population, we find that the distribution of massive star-forming galaxies in the SFR-$M_\star$ plane does not follow the often assumed Gaussian distribution. This non-Gaussian distribution around the star-forming galaxy main sequence, which is skewed towards galaxies with lower SFR, suggests that galaxies spend less time in the high SFR phase (above the star-forming galaxy main sequence) than they do in the more moderate SFR phase (below the main sequence). This aligns with studies of the molecular gas content of massive galaxies out to $z\sim4$ using ALMA (e.g., \citealt{Tacconi2018}, \citealt{Franco2020}), which showed that massive galaxies lying above the main sequence have shorter gas depletion timescales than those lying below the main sequence. They report that galaxies above the main sequence may deplete their gas supply in $\sim10^2$ Myr, while star-forming galaxies below the main sequence have depletion times closer to $\sim10^3$ Myr.
A natural inquiry following the results presented in this work is the question of why the local minima appear in the SFR-$M_\star$ plane between the three populations of interest (star-forming, green valley, and quiescent), and specifically why a peak appears in the green valley. Potential scenarios leading to this green valley peak are complex. Previous works (e.g., \citealt{Pandya2017}, \citealt{Janowiecki2020}) have shown that galaxies do not necessarily take a simple one-way trip from the star-forming sequence to the quiescent population, and the way in which galaxies move through the SFR-$M_\star$ plane, and specifically how they arrive in the green valley population, is dependent on several factors such as environment, available cold gas reservoir, and evolutionary history, among others. This is particularly true for massive galaxies, which are likely to live in rich environments where environmental effects are common. Just as quenching mechanisms can remove galaxies from the star-forming population (e.g., AGN and stellar feedback, hot-mode accretion, ram-pressure stripping, tidal stripping, harassment; \citealt{Man2018}, \citealt{Sherman2020b}), events such as mergers and gas accretion can rejuvenate a previously quenched (or partially quenched) galaxy. The peak seen in the green valley region of the SFR-$M_\star$ plane may arise from the superposition of these massive galaxy populations with diverse evolutionary histories.
Additionally, the peak in the green valley suggests that galaxies may spend a non-trivial amount of time in this regime. \cite{Pandya2017} estimate that the upper limit for the time galaxies spend in the green valley is $\sim1.5-2$ Gyr for $1.5 < z < 3$. We note, however, that their model makes the simplifying, and unlikely, assumption that galaxies move uni-directionally from the star-forming to quiescent population through the green valley. Interestingly, they find that the population of galaxies in the green valley is rather stable, with more galaxies remaining in the population than moving in or out of the population between timesteps. Again, we emphasize that movement of massive galaxies through and within the green valley is complex and likely to be multi-directional.
Although our comparisons with theoretical models (hydrodynamical models SIMBA and IllustrisTNG and SAM SAG) are limited to the main sequence for all galaxies due to the small volumes of these models (Section \ref{sec:CompWTheory}), we can use comparisons with these models to interpret the trends seen in our empirical result. The shape and slope of the main sequence for all galaxies, computed for the three theoretical models in the same way as is done for our empirical main sequence for all galaxies, provides information about the transition of the massive galaxy population from being predominantly star-forming to predominantly quiescent. We find in our comparison that the models are unable to simultaneously recover the average specific star-formation rates of massive galaxies found in our observed sample and the flattening of the high-mass end slope of the main sequence for all galaxies as time progresses from $z=3.0$ to $z=1.5$. This result indicates that the models do not adequately represent the observed trends, specifically the buildup of the collective green valley and quiescent populations, seen in our empirical results at these redshifts. Our findings support those from \cite{Sherman2020b} who showed that the SAG and SIMBA models under-estimate the fraction of massive galaxies in the collective green valley and quiescent population, while the IllustrisTNG model over-estimates the fraction of massive galaxies in the collective green valley and quiescent population.
\section{Summary}
\label{sec:Summary}
Using a large sample of 28,469 massive ($M_\star \ge 10^{11}$M$_\odot$) galaxies that are uniformly selected from data spanning 17.5 deg$^2$, we investigate the nature of the main sequence for all galaxies and for star-forming galaxies. With our large sample, we are uniquely suited to conduct this study without assuming the functional shape of the main sequence or placing prior constraints on the distribution of galaxies in the SFR-M$_{\star}$ plane. A summary of our key results is presented below.
\begin{enumerate}
\item Our large sample size allows us to compute the main sequence in small stellar mass bins and isolate star-forming galaxies using the quantities of interest (SFR and stellar mass) by finding the local minimum between the star-forming and green valley populations in each mass bin (Fig. \ref{gv_annotated_plot}). A key advantage of this method is that it does not place artificial constraints on the distribution of galaxies around the star-forming galaxy main sequence. Following this approach, we show that the main sequence for all galaxies (Fig. \ref{ms_all_plot}) has a distinct flattening at the high-mass end, which becomes increasingly flat as time progresses from $z=3.0$ to $z=1.5$. We show that this flattening is due to the increasing fraction of the green valley and quiescent galaxy population from $z=3.0$ to $z=1.5$ (Fig. \ref{gvTrans_moreBins_plot}). The star-forming galaxy main sequence (Fig. \ref{ms_sf_plot}) does not show this flattening (see also Fig. \ref{ms_all_v_sf_plot}). This indicates that the average specific star-formation rate of the massive star-forming galaxy population does not evolve significantly over that epoch.
\\
\item We measure the total scatter around the star-forming galaxy main sequence to be $\sim0.5-1.0$ dex and find that there is little evolution in the scatter as a function of stellar mass or redshift (Fig. \ref{ms_scatter}). With our meaningful isolation of star-forming galaxies, we avoid biasing our result by assuming an underlying distribution around the star-forming galaxy main sequence. We also quantify the scatter above (upper) and below (lower) the star-forming galaxy main sequence and find the lower scatter is larger than the upper scatter by up to a factor of 3 in all three redshift bins spanning $1.5 < z < 3.0$, indicating that the underlying distribution of galaxies around the star-forming galaxy main sequence is not the often assumed symmetrical Gaussian.
\\
\item Additionally, we compare our empirical main sequence for all galaxies with results from theoretical models SIMBA, IllustrisTNG, and SAG. Results from SIMBA are within a factor of $\sim1.5$ of our empirical result but do not show a flattening at the highest masses in our lowest redshift bin ($1.5 < z < 2.0$); those from SAG lie above our results but do show the flattening at the high-mass end in the $1.5 < z < 2.0$ bin. The main sequence for all galaxies from IllustrisTNG is below our empirical result and shows a strong turnover at the highest masses at $2.0 < z < 3.0$, which is not seen in our empirical result. Interpretation of comparisons with theoretical models is not straightforward as the physical processes driving stellar mass buildup and star-formation rates are highly inter-dependent in the models.
\end{enumerate}
\vspace{5mm}
SS, SJ, and JF gratefully acknowledge support from the University of Texas at Austin, as well as NSF grant AST 1413652. SS, SJ, JF, and SLF acknowledge support from NSF grant AST 1614798. SS, SJ, JF, MLS, and SLF acknowledge generous support from The University of Texas at Austin McDonald Observatory and Department of Astronomy Board of Visitors. SS, SJ, JF, MLS, and SLF also acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. SS is supported by the University of Texas at Austin Dissertation Writing Fellowship. MLS and SLF acknowledge support from the NASA Astrophysics and Data Analysis Program through grants NNX16AN46G and 80NSSC18K09. LK and CP acknowledge support from the National Science Foundation through grant AST 1614668. The Institute for Gravitation and the Cosmos is supported by the Eberly College of Science and the Office of the Senior Vice President for Research at the Pennsylvania State University. This publication uses data generated via the Zooniverse.org platform, development of which is funded by generous support, including a Global Impact Award from Google, and by a grant from the Alfred P. Sloan Foundation.
\section*{Data Availability}
No new data were generated in support of this research. We refer the reader to the publications of individual catalogs used in this work for information regarding data access.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{I}{mage} anomaly localization is a technique that
identifies the anomalous region of input images at the pixel level. It
finds real-world applications such as manufacturing process monitoring
\cite{scime2018anomaly}, medical image diagnosis
\cite{schlegl2017unsupervised, schlegl2019f} and video surveillance
analysis \cite{napoletano2018anomaly, saligrama2012video}. It is often
assumed that only normal (i.e., anomaly-free) images are available in
the training stage, since anomalous samples are too few to be modeled
effectively and are rare and/or expensive to collect.
There is a growing interest in image anomaly localization due to the
availability of a new dataset called the MVTec AD
\cite{bergmann2019mvtec} (see Fig. \ref{fig:1_images}).
State-of-the-art image anomaly localization methods adopt deep learning.
Many of them employ complicated pretrained neural networks to achieve
high performance, yet without a good understanding of the underlying
problem. To gain marginal performance improvements, fine-tuning and
other minor modifications are made on a trial-and-error basis. Related
work will be
reviewed in Sec. \ref{sec:review}.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figure1.png}
\caption{Image anomaly localization examples taken from the MVTec AD
dataset (from left to right): normal images, anomalous images, the
ground truth and the predicted anomalous region by AnomalyHop, where the
red region indicates the detected anomalous region.}\label{fig:1_images}
\end{figure}
A new image anomaly localization method, called AnomalyHop, based on the
successive subspace learning (SSL) framework is proposed in this work.
This is the first work that applies SSL to the anomaly localization
problem. AnomalyHop consists of three modules: 1) feature extraction
via SSL, 2) modeling of normality feature distributions via Gaussian
models, and 3) anomaly map
generation and fusion. They will be elaborated in Sec.
\ref{sec:method}. As compared with deep-learning-based image anomaly
localization methods, AnomalyHop is mathematically transparent, easy to
train, and fast in its inference speed. Besides that, as reported in Sec.
\ref{sec:experiments}, its area under the ROC curve (AUC-ROC)
performance on the MVTec AD dataset is 95.9\%, which is
state-of-the-art. Finally, concluding remarks and possible
future extensions will be given in Sec. \ref{sec:conclusion}.
\begin{figure*}[t]
\centering
\includegraphics[width=18cm]{figure2.png}
\caption{The system diagram of the proposed AnomalyHop method.}\label{fig:2_framework}
\end{figure*}
\section{Related Work}\label{sec:review}
If the number of images in an image anomaly training set is limited,
learning normal image features in local regions is challenging. We
classify image anomaly localization methods into two major categories
based on whether a method relies on external training data (say, the
ImageNet) or not.
\noindent
{\bf With External Training Data.} Methods in the first category rely on
pretrained deep learning models by leveraging external data. Examples
include PaDiM \cite{defard2020padim}, SPADE \cite{cohen2020sub}, DFR
\cite{yang2020dfr} and CNN-FD \cite{napoletano2018anomaly}. They employ
a pretrained deep neural network (DNN) to extract local image features
and, then, use various models to fit the distribution of features in
normal regions. Although some offer impressive performance, they do
rely on large pretrained networks such as the ResNet \cite{he2016deep}
and the Wide-ResNet \cite{zagoruyko2016wide}. Since these pretrained
DNNs are not optimized for the image anomaly detection task, the
associated image anomaly localization methods usually have large model
sizes, high computational complexity, and high memory requirements.
\noindent
{\bf Without External Training Data.} Methods in the second category
exploit neither pretrained DNNs nor external training data. They learn
local image features based on normal images in the training set. For
example, Bergmann {\em et al.} developed the MVTec AD dataset in
\cite{bergmann2019mvtec} and used an autoencoder-like network to learn
the representation of normal images. The network can reconstruct
anomaly-free regions with high fidelity but not anomalous regions. As
a result, the pixel-wise difference between the input abnormal image and
its reconstructed image reveals the region of abnormality. A similar
idea was developed using the image inpainting technique
\cite{li2020superpixel, zavrtanik2021reconstruction}. Traditional
machine learning models such as support vector data description (SVDD)
\cite{tax2004support} can also be integrated with neural networks, where
novel loss terms are derived to learn local image features from scratch
\cite{yi2020patch, liznerski2020explainable}. Generally speaking,
methods without external training data either fail to provide
satisfactory performance or suffer from a slow inference speed
\cite{yi2020patch}. This is attributed to diversified contents of
normal images. For example, the 10 object classes and the 5 texture
classes in the MVTec AD dataset are quite different. Their
capability in representing features of local regions of different images
is somehow limited. On the other hand, over-parameterized DNN models
pretrained by external data may overfit some datasets but may not be
generalizable to other unseen contents such as new texture patterns. It
is desired to find an effective and mathematically transparent learning
method to address this challenging problem.
\noindent
{\bf SSL and Its Applications.} SSL is an emerging machine learning
technique developed by Kuo {\em et al.} in recent years
\cite{kuo2016understanding, kuo2019interpretable, chen2020pixelhop,
chen2020pixelhop++, rouhsedaghat2021successive}. It has been applied to
quite a few applications with impressive performance. Examples include
image classification \cite{chen2020pixelhop, chen2020pixelhop++}, image
enhancement \cite{azizi2020noise}, image compression
\cite{tseng2020interpretable}, deepfake image/video detection
\cite{chen2021defakehop}, point cloud classification, segmentation,
registration \cite{zhang2020pointhop, zhang2020pointhop++,
zhang2020unsupervised, kadam2021r}, face biometrics
\cite{rouhsedaghat2020low, rouhsedaghat2020facehop}, texture analysis
and synthesis \cite{zhang2019texture, lei2020nites}, 3D medical image
analysis \cite{liu2021voxelhop}, etc.
\section{AnomalyHop Method}\label{sec:method}
AnomalyHop belongs to the second category of image anomaly localization
methods. Its system diagram is illustrated in Fig. \ref{fig:2_framework}.
It contains three modules: 1) feature extraction, 2) modeling
of normality feature distributions, and 3) anomaly map generation. They
will be elaborated below.
\subsection{SSL-based Feature Extraction}\label{subsec:feature}
Deep-learning methods learn image features indirectly. Given a network
architecture, the network learns the filter parameters first by
minimizing a cost function end-to-end. Then, the network can be used to
generate filter responses, and patch features are extracted as the
filter responses at a certain layer. In contrast, the SSL framework
extracts features of image patches directly using a data-driven
approach. The basic idea is to study pixel correlations in a
neighborhood (say, a patch) and use the principal component analysis
(PCA) to define an orthogonal transform, also known as the Karhunen
Lo\`{e}ve transform (KLT). However, a single-stage PCA transform is not
sufficient to obtain powerful features. A sequence of modifications has
been proposed in \cite{kuo2016understanding, kuo2019interpretable,
chen2020pixelhop, chen2020pixelhop++} to make the SSL framework complete.
The first modification is to build a sequence of PCA transforms in
cascade with the max pooling inserted between two consecutive stages.
The output of the previous stage serves as the input to the current
stage. The cascaded transforms are used to capture short-, mid- and
long-range correlations of pixels in an image. Since the neighborhood of
a graph is called a hop (e.g., 1-hop neighbors, 2-hop neighbors, etc.),
each transform stage is called a hop \cite{chen2020pixelhop}. However, a
straightforward cascade of multi-hop PCAs does not work properly due to
the sign confusion problem, which was first pointed out in
\cite{kuo2016understanding}. The second modification is to replace the
linear PCA with an affine transform that adds a constant-element bias vector to
the PCA response vector \cite{kuo2019interpretable}. The bias vector is
added to ensure all input elements to the next hop are positive to avoid
sign confusion. This modified transform is called the Saab (Subspace
approximation with adjusted bias) transform. The input and the output of
the Saab transform are 3D tensors (including 2D spatial components and
1D spectral components). By recognizing that the 1D spectral components
are uncorrelated, the third modification was proposed in
\cite{chen2020pixelhop++} to replace one 3D tensor input with multiple
2D tensor inputs. This is named the channel-wise Saab (c/w Saab)
transform, and it greatly reduces the model size of the
standard Saab transform.
Here we employ the c/w Saab transform as our feature extractor; its output directly provides pixel-wise local image features.
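For intuition, a single, simplified Saab-like hop can be sketched as follows (Python; this toy version omits the DC/AC channel separation and the channel-wise decomposition of the full c/w Saab transform, and the exact bias construction shown is an assumption). In the full pipeline, max pooling is inserted between hops and later hops operate channel-wise:
\begin{verbatim}
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def saab_hop(images, b=5, k=4):
    # images: (N, H, W) single-channel inputs
    patches = sliding_window_view(images, (b, b), axis=(1, 2))
    X = patches.reshape(-1, b * b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD: keep the top-k principal directions as filters
    # (for large images, subsample patches before the SVD)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    kernels = Vt[:k]
    resp = Xc @ kernels.T
    # add a constant bias so all outputs fed to the next hop are
    # non-negative, avoiding the sign-confusion problem
    bias = np.abs(resp).max()
    N, H, W = images.shape
    resp = (resp + bias).reshape(N, H - b + 1, W - b + 1, k)
    return resp, (mean, kernels, bias)
\end{verbatim}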
\subsection{Modeling of Normality Feature Distributions}\label{subsec:Gaussian}
We propose three Gaussian models to describe the distributions of
features of normal images, which are extracted in Sec. \ref{subsec:feature}.
\subsubsection{Location-aware Gaussian Model}
If the input images of an image class are well aligned in the spatial
domain, we expect that features at the same location are close to each
other. We use $X_{ij}^n$ to denote the feature vector extracted from a
patch centered at location $(i, j)$ of a certain hop in the $n$th
training image. By following \cite{defard2020padim}, we model the
feature vectors of patches centered at the same location $(i,j)$ by a
multivariate Gaussian distribution, $\mathcal{N}(\mu_{ij},\Sigma_{ij})$.
Its sample mean is $\mu_{ij} = N^{-1} \sum^N_{n=1} X_{ij}^n$ and its
sample covariance matrix is
$$
\Sigma_{ij} = (N-1)^{-1} \sum^N_{n=1} (X_{ij}^n-\mu_{ij})(X_{ij}^n-\mu_{ij})^T
+ \epsilon I,
$$
where $N$ is the number of training images of an image class,
$\epsilon$ is a small positive number, and $I$ denotes the identity
matrix. The term $\epsilon I$ is added to ensure that the sample
covariance matrix is full-rank and thus invertible.
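A minimal sketch of fitting this model is given below (Python; \texttt{feats} is an assumed $(N, H, W, D)$ array of hop features from $N$ normal training images, and the value of \texttt{eps} is illustrative). The location-independent and self-reference variants of the next two subsections follow by pooling the same statistics over all locations of all training images, or over all locations of a single test image, respectively:
\begin{verbatim}
import numpy as np

def fit_location_aware(feats, eps=0.01):
    # feats: (N, H, W, D) features of N normal training images
    N, H, W, D = feats.shape
    mu = feats.mean(axis=0)             # (H, W, D) sample means
    diff = feats - mu
    # per-location sample covariance plus eps*I for invertibility
    cov = np.einsum('nhwd,nhwe->hwde', diff, diff) / (N - 1)
    cov += eps * np.eye(D)
    return mu, cov
\end{verbatim}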
\subsubsection{Location-Independent Gaussian Model}
Images of the same texture class have strong self-similarity.
Besides, they are often shift-invariant. These properties can be
exploited for texture-related tasks \cite{zhang2019texture,
zhang2019data,zhang2021dynamic}. For homogeneous fine-granular textures, we can use a
single Gaussian model for all local image features at each hop and call
it the location-independent Gaussian model. The model has its mean $\mu
= (NHW)^{-1} \sum_{i,j,n} X_{ij}^n$ and its covariance matrix
$$
\Sigma = (NHW-1)^{-1} \sum_{i,j,n} (X_{ij}^n- \mu)(X_{ij}^n
-\mu)^T + \epsilon I,
$$
where $N$ is the number of training images in one texture class, and $H$
and $W$ are pixel numbers along the height and the width of texture
images.
\subsubsection{Self-reference Gaussian Model}
Both location-aware and location-independent Gaussian models utilize
all training images to capture the normality feature distributions.
However, images of the same class may have intra-class variations, which location-aware and location-independent Gaussian models cannot capture well. One example is the grid class in the MVTec AD dataset.
Different images may have different grid orientations and lighting
conditions. To address this problem, we train a Gaussian model with the
distribution of features from a single normal image and call it the
self-reference Gaussian. Again, we compute the sample mean as
$\mu=(HW)^{-1} \sum_{i,j} X_{ij} $ and the sample covariance matrix as
$$
\Sigma = (HW-1)^{-1} \sum_{i,j} (X_{ij}-\mu)(X_{ij} -\mu)^T + \epsilon I.
$$
For this setting, we only use normal images in the training set to
determine the c/w Saab transform filters. The self-reference Gaussian
model is learned from the test image at testing time. For more
discussion, we refer to Sec. \ref{sec:experiments}.
\subsection{Anomaly Map Generation and Fusion}
With learned Gaussian models, we use the Mahalanobis distance,
$$
M(X_{ij})= \sqrt{(X_{ij}-\mu_{ij})^{T}\Sigma_{ij}^{-1}(X_{ij} -\mu_{ij})},
$$
as the anomaly score of the corresponding patch. Higher scores
indicate a higher likelihood of being anomalous. By
calculating the scores over all locations of a hop, we form an anomaly
map at each hop for an input test image. Finally, we re-scale all anomaly
maps to the same spatial size and fuse them by weighted averaging to
yield the final anomaly map.
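A minimal sketch of this step is given below (Python, for the location-aware model; the per-hop fusion weights and the output map size are illustrative assumptions, not the released configuration):
\begin{verbatim}
import numpy as np
from scipy.ndimage import zoom

def anomaly_map(feats, mu, cov_inv):
    # feats, mu: (H, W, D); cov_inv: (H, W, D, D)
    diff = feats - mu
    m2 = np.einsum('hwd,hwde,hwe->hw', diff, cov_inv, diff)
    return np.sqrt(m2)   # Mahalanobis distance at every location

def fuse_maps(maps, weights, out_hw=(224, 224)):
    # rescale each hop's map to a common size, then take a
    # weighted average as the final anomaly map
    resized = [zoom(m, (out_hw[0] / m.shape[0],
                        out_hw[1] / m.shape[1]), order=1)
               for m in maps]
    w = np.asarray(weights, dtype=float)
    return np.tensordot(w / w.sum(), np.stack(resized), axes=1)
\end{verbatim}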
\section{Experiments}\label{sec:experiments}
\noindent
{\bf Dataset and Evaluation Metric.} We evaluate our model on the MVTec
AD dataset \cite{bergmann2019mvtec}. It has 5,354 images from 15
classes, including 5 texture classes and 10 object classes, collected
from real-world applications. The resolution of input images ranges from
700$\times$700 to 1024$\times$1024. The training set consists of normal
images only while the test set contains both normal and abnormal images.
The ground truth of anomaly regions is provided for the evaluation
purpose. The area under the receiver operating characteristic curve
(AUC-ROC) \cite{dehaene2020iterative,bergmann2019mvtec} is chosen to be
the performance evaluation metric.
\noindent
{\bf Experimental Setup and Benchmarking Methods.} First, we resize
images of different resolutions to the same resolution of
$224\times224$. Next, we apply the 5-stage PixelHop++ to all classes for
feature extraction as shown in Fig. \ref{fig:2_framework}. The spatial
sizes, $b \times b$, and the number, $k$, of filters at each hop are
searched in the range of $2 \le b \le 7$ and $2 \le k \le 5$,
respectively. The $2 \times 2$ max-pooling is used between hops. The
optimal hyper-parameters at each hop are class dependent. A
representative case for the leather class is given in Table
\ref{tab:hyper-parameters}. The optimal hyper-parameters of all 15
classes can be found in our
\href{https://github.com/BinWang28/AnomalyHop}{Github codes.}
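The hyper-parameter selection amounts to a small grid search. A sketch is given below (Python; \texttt{build\_and\_score} is a hypothetical helper that trains the pipeline for a given configuration and returns pixel-wise anomaly scores on held-out validation images, and for brevity the sketch ties a single $(b, k)$ across hops, whereas the optimal values are per-hop and class-dependent):
\begin{verbatim}
import itertools
import numpy as np
from sklearn.metrics import roc_auc_score

def search_hop_params(train_imgs, val_imgs, val_masks,
                      build_and_score):
    best_auc, best_cfg = -np.inf, None
    # spatial size b in [2, 7], filter number k in [2, 5]
    for b, k in itertools.product(range(2, 8), range(2, 6)):
        scores = build_and_score(train_imgs, val_imgs, b=b, k=k)
        auc = roc_auc_score(val_masks.ravel(), scores.ravel())
        if auc > best_auc:
            best_auc, best_cfg = auc, (b, k)
    return best_cfg, best_auc
\end{verbatim}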
We compare AnomalyHop against seven benchmarking methods. Four
of them belong to the first category that leverages external
datasets. They are PaDiM \cite{defard2020padim}, SPADE
\cite{cohen2020sub}, DFR \cite{yang2020dfr} and CNN-FD
\cite{napoletano2018anomaly}. Three of them belong to the second category
that solely relies on images in the MVTec AD dataset. They are
AnoGAN \cite{schlegl2017unsupervised}, VAE-grad
\cite{dehaene2020iterative} and Patch-SVDD \cite{yi2020patch}.
\begin{table}[htb]
\begin{center}
\caption{The hyper-parameters of spatial sizes and numbers of filters at
each hop for the leather class.} \label{tab:hyper-parameters}
\begin{tabular}{cccccc} \hline
Hop Index & 1 & 2 & 3 & 4 & 5 \\
b & 5 & 5 & 3 & 2 & 2 \\
k & 4 & 4 & 4 & 4 & 4 \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table*}[htb]
\begin{center}
\caption{Performance comparison of image anomaly localization methods in
terms of AUC-ROC scores for the MVTec AD dataset, where the best results
in each category are marked in bold.} \label{exp:result}
\resizebox{\textwidth}{!}{
\begin{tabular}{c|c|c|c|c||c|c|c|c}
\hline \hline
& \multicolumn{4}{c||}{\textbf{Pretrained w/ External Data}} & \multicolumn{4}{c}{\textbf{w/o Pretraining}}\\ \hline
& PaDiM \cite{defard2020padim} & SPADE \cite{cohen2020sub} & DFR \cite{yang2020dfr} & CNN-FD \cite{napoletano2018anomaly} & AnoGAN \cite{schlegl2017unsupervised} & VAE-grad \cite{dehaene2020iterative} & Patch-SVDD \cite{yi2020patch} & AnomalyHop \\ \hline
Carpet & \textbf{0.991} & 0.975 & 0.970 & 0.720 & 0.540 & 0.735 & 0.926 & \textbf{0.942}$^\ast$ \\
Grid & 0.973 & 0.937 & \textbf{0.980} & 0.590 & 0.580 & 0.961 & 0.962 & \textbf{0.984}$^\star$ \\
Leather & \textbf{0.992} & 0.976 & 0.980 & 0.870 & 0.640 & 0.925 & 0.974 & \textbf{0.991}$^\ast$ \\
Tile & \textbf{0.941} & 0.874 & 0.870 & 0.930 & 0.500 & 0.654 & 0.914 & \textbf{0.932}$^\ast$ \\
Wood & \textbf{0.949} & 0.885 & 0.930 & 0.910 & 0.620 & 0.838 & \textbf{0.908} & 0.903$^\ast$ \\ \hline
\textbf{Avg. of Texture Classes} & \textbf{0.969} & 0.929 & 0.946 & 0.804 & 0.576 & 0.823 & 0.937 & \textbf{0.950}$^{\ }$ \\ \hline
\hline
Bottle & 0.983 & \textbf{0.984} & 0.970 & 0.780 & 0.860 & 0.922 & \textbf{0.981} & 0.975$^\diamond$ \\
Cable & 0.967 & \textbf{0.972} & 0.920 & 0.790 & 0.780 & 0.910 & \textbf{0.968} & 0.904$^\diamond$ \\
Capsule & 0.985 & \textbf{0.990} & \textbf{0.990} & 0.840 & 0.840 & 0.917 & 0.958 & \textbf{0.965}$^\diamond$ \\
Hazelnut & 0.982 & \textbf{0.991} & 0.990 & 0.720 & 0.870 & 0.976 & \textbf{0.975} & 0.971$^\diamond$ \\
Metal Nut & 0.972 & \textbf{0.981} & 0.930 & 0.820 & 0.760 & 0.907 & \textbf{0.980} & 0.956$^\diamond$ \\
Pill & 0.957 & 0.965 & \textbf{0.970} & 0.680 & 0.870 & 0.930 & 0.951 & \textbf{0.970}$^\diamond$ \\
Screw & 0.985 & 0.989 & \textbf{0.990} & 0.870 & 0.800 & 0.945 & 0.957 & \textbf{0.960}$^\star$ \\
Toothbrush & 0.988 & 0.979 & \textbf{0.990} & 0.770 & 0.900 & \textbf{0.985} & 0.981 & 0.982$^\diamond$ \\
Transistor & \textbf{0.975} & 0.941 & 0.800 & 0.660 & 0.800 & 0.919 & 0.970 & \textbf{0.981}$^\diamond$ \\
Zipper & \textbf{0.985} & 0.965 & 0.960 & 0.760 & 0.780 & 0.869 & 0.951 & \textbf{0.966}$^\diamond$ \\ \hline
\textbf{Avg. of Object Classes} & \textbf{0.978} & 0.976 & 0.951 & 0.769 & 0.826 & 0.928 & \textbf{0.967} & 0.963$^{\ }$ \\ \hline \hline
\textbf{Avg. of All Classes} & \textbf{0.975} & 0.960 & 0.949 & 0.781 & 0.743 & 0.893 & 0.957 & \textbf{0.959}$^{\ }$ \\\hline\hline
\end{tabular}
}
\end{center}
\end{table*}
\noindent {\bf AUC-ROC Performance.} We compare the AUC-ROC scores of
AnomalyHop and seven benchmarking methods in Table \ref{exp:result}. As
shown in the table, AnomalyHop performs the best among all methods with
no external training data. Although Patch-SVDD has close performance,
especially for the object classes, its inference speed is significantly
slower as shown in Table \ref{exp:inference}. The best performance in
Table \ref{exp:result} is achieved by PaDiM \cite{defard2020padim},
which uses the pretrained Wide-ResNet-50-2 as the feature extractor
backbone. Its superior performance largely depends on the
generalizability of the pretrained network. In practical applications,
we often encounter domain-specific images, which may not be covered by
external training data. In contrast, AnomalyHop exploits the statistical
correlations of pixels in short-, mid- and long-range neighborhoods and
obtains the c/w Saab filters based on PCA. It can be tailored to a specific
application domain using a small number of normal images. Furthermore,
the Wide-ResNet-50-2 model has more than 60M parameters while AnomalyHop
has only 100K parameters in PixelHop++, which is used for image feature
extraction.
Three Gaussian models are adopted by AnomalyHop to handle different
classes in Table \ref{exp:result}. Results obtained using
location-aware, location-independent and self-reference Gaussian models
are marked with $^\diamond$, $^\ast$, $^\star$, respectively. The object
classes are well-aligned in the dataset so that the location-aware
Gaussian model is more suitable. For texture classes (e.g., carpet and
wood), the location-independent Gaussian model is the most
favorable since textures are usually homogeneous across the
whole image, making location information less relevant. The grid class
is a special one. On one hand, the grid image is homogeneous across the
whole image. On the other hand, different grid images have different
rotations, lighting conditions and viewing angles as shown in Fig.
\ref{fig:3_grid_example}. As a result, the self-reference Gaussian model
offers the best result.
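To make the location-aware model concrete, the following is a minimal sketch of per-position Gaussian modeling with Mahalanobis-distance scoring, in the spirit of the description above; the function names and the regularization constant are illustrative assumptions, not our exact implementation.
\begin{verbatim}
import numpy as np

def fit_location_aware_gaussian(feats):
    """feats: (N, H, W, C) features of N normal training images.
    Returns per-position mean (H, W, C) and inverse covariance (H, W, C, C)."""
    N, H, W, C = feats.shape
    mean = feats.mean(axis=0)
    inv_cov = np.empty((H, W, C, C))
    for i in range(H):
        for j in range(W):
            x = feats[:, i, j, :] - mean[i, j]
            cov = x.T @ x / max(N - 1, 1) + 0.01 * np.eye(C)  # regularized
            inv_cov[i, j] = np.linalg.inv(cov)
    return mean, inv_cov

def anomaly_map(feat, mean, inv_cov):
    """feat: (H, W, C) test features -> (H, W) Mahalanobis distances."""
    d = feat - mean
    return np.sqrt(np.einsum('ijc,ijcd,ijd->ij', d, inv_cov, d))
\end{verbatim}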
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\linewidth]{1.pdf}
\caption{Two anomaly grid images (from left to right): input images,
ground truth labels, predicted heatmap, predicted and segmented anomaly
regions.}\label{fig:3_grid_example}
\end{figure}
\noindent
{\bf Inference Speed.} The inference speed is another important
performance metric in real-world image anomaly localization
applications. We compare the inference speed of AnomalyHop and the other
three high-performance methods in Table \ref{exp:inference}, where all
experiments are conducted with Intel I7-5930K@3.5GHz CPU. We see that
AnomalyHop has the fastest inference speed. It has a speed-up factor of
4x, 22x and 28x with respect to PaDiM, Patch-SVDD and SPADE,
respectively. SPADE and Patch-SVDD are significantly slower because of the
expensive nearest neighbor search. For DNN-based methods, their feature
extraction can be accelerated using GPU hardware, which applies to
AnomalyHop, too. On the other hand, image anomaly localization is often
conducted by edge computing devices in manufacturing lines. GPU could be
too expensive for this environment. Although training complexity is
often ignored since training has to be done only once, it is worthwhile
to mention that the training of AnomalyHop is very efficient. It takes only 2
minutes to train an AnomalyHop model for each class with the
above-mentioned CPU.
\begin{table}[htb]
\begin{center}
\caption{Average inference time (in sec.) per image with
Intel i7-5930K @ 3.5 GHz CPU.}
\label{exp:inference}
\resizebox{0.45\textwidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
\textbf{Methods} & \textbf{Inference Time} & \textbf{Speed Up} \\\hline
SPADE \cite{cohen2020sub} & 6.80 & 1$\times$ \\ \hline
Patch-SVDD \cite{yi2020patch} & 5.23 & 1.3$\times$ \\ \hline
PaDiM \cite{defard2020padim} & 0.91 & 7.5$\times$ \\ \hline
AnomalyHop & 0.24 & 28.3$\times$ \\ \hline
\end{tabular}
}
\end{center}
\end{table}
\section{Conclusion and Future Work}\label{sec:conclusion}
An SSL-based image anomaly localization method, called AnomalyHop, was
proposed in this work. It is interpretable and effective, and it is fast
in both training and inference. Besides, it offers state-of-the-art
anomaly localization performance. AnomalyHop has great potential to be
used in real-world environments due to its high performance as well as
its low implementation cost.
Although SSL-based feature extraction in AnomalyHop is powerful, its
feature distribution modeling (module 2) and anomaly localization
decision (module 3) are still primitive. These two modules can be
improved further. For example, it is interesting to leverage
effective one-class classification methods such as SVDD
\cite{tax2004support}, subspace SVDD \cite{sohrab2018subspace} and
multimodal subspace SVDD \cite{sohrab2021multimodal}. This is a new
topic under our current investigation.
\bibliographystyle{IEEEtran}
\section{Introduction}
The non-homogeneity of faults, their complex formation mechanisms, and their important role in oil and gas development make the prediction of faulted reservoirs a long-standing subject in oil and gas exploration, with fault detection as its key step. The main approaches to fault detection are well-log methods and seismic methods; our study focuses on the use of seismic data for fault detection.
Before deep learning was widely used, researchers relied on traditional geological methods for fault detection. The first to be applied was the theory of anisotropy. Crampin put forward many new insights into fault anisotropy \cite{1}; Rüger proposed the Rüger approximation formula and verified its good adaptability in weakly anisotropic media \cite{2}, and proposed AVO (Amplitude Variation with Offset) gradient inversion to calculate fault parameters \cite{3}. However, anisotropy, as a basic property of faults, is prone to noise interference in seismic data, and its detection accuracy is very low. Bahorich proposed the use of coherence technology for the interpretation and detection of seismic faults \cite{4}, computing cross-correlation coefficients between seismic traces to highlight fault discontinuities, but for seismic data with relatively strong coherent noise, especially for small faults, the detection performance is poor. Subsequently, Marfurt et al. proposed the second-generation coherence technology, which improved noise robustness at the cost of lower resolution \cite{5}. The third-generation coherence technology provided high-resolution detection results in noisy data by computing the eigenvalues of a covariance matrix, but performed poorly in some special geological environments, such as the flanks of salt domes \cite{6}.
Pedersen et al. applied the ant colony algorithm to fault detection \cite{7}, using ant tracking to highlight fault lines and filter out irrelevant noise and non-fault responses; D. C. Sun et al. combined spectral decomposition with the ant colony algorithm \cite{8}; A. Aqrawi used an improved 3D Sobel filter together with the ant colony algorithm to detect small faults \cite{9}. However, the results of ant-colony-based methods on 3D seismic data are often disorganized, whether viewed on horizon slices or on sections, and the delineated faults do not correspond to the distribution of seismic event axes, resulting in poor practical performance. There are also other fault detection algorithms. Saito and Hayashi used frequency-domain Stoneley waves to detect faults \cite{10}; F. Admasu et al. proposed an active contour algorithm for semi-automatic fault tracing \cite{11}; Priezzhev and Scollard proposed detecting faults through orthogonal decomposition of seismic data \cite{12}; Hale detected faults in three steps: computing 3D fault images, extracting fault surfaces, and estimating fault throws \cite{13}; W. Zhen et al. proposed an interactive fault detection algorithm based on the Hough transform and vector tracking \cite{14}; Wu and Fomel proposed extracting optimal surfaces based on maximal fault attributes and using optimal surface voting to detect faults \cite{15}. However, traditional geological methods, even when augmented with digital image processing algorithms, cannot cope with the high noise levels and severe interference in seismic data.
As early as 2005, K. M. Tingdahl and M. de Rooij realized an algorithm that uses multiple seismic attributes and a BP neural network to detect faults \cite{16}, but its performance was limited by the neural network theory and hardware of that time. With the development of deep learning in recent years, some studies have introduced convolutional neural networks into seismic fault detection \cite{17,18,19,20}. These methods regard fault detection as an image segmentation task from the field of computer vision, classifying seismic image voxels into two categories (fault and non-fault), but doing so loses the 3D spatial morphology of faults, causing the segmented faults to be discontinuous. Guitton proposed a fault segmentation method using 3D convolutions \cite{21}, but its stacked network structure cannot effectively extract the spatial information of seismic data. The workload of 3D data annotation is huge and requires expert experience; therefore, Guitton used the results of the algorithm proposed by Hale \cite{13} as training labels. This approach may cause the model to learn only the detection pattern of the algorithm in \cite{13}, and its performance is limited by the quality of those labels. Wu et al. used synthetic seismic data to train a 3D U-Net model \cite{22,23}. Synthetic data avoids the problems caused by manual labeling; however, in many cases it does not generalize to real seismic data. We verified the work of Wu et al.: the predictions of the model trained on synthetic data still contain a lot of noise on the real data we provide (see Figures \ref{fig5}, \ref{fig7}), which makes it difficult to apply in petroleum exploration with its variable geological structures.
In summary, the segmentation of 3D faults still faces two major problems. First, complex geological conditions and the influence of acquisition equipment lead to a low signal-to-noise ratio in the raw seismic data, so the detection results of traditional geological methods or machine learning methods contain a large amount of noise. Second, 3D seismic data cannot be labeled directly; labeling 2D slices and then assembling 3D labels involves a huge workload and requires expert experience, and wrong or missing labels degrade the segmentation performance of the model.
The method proposed in this paper can train a model that accurately segments 3D seismic data from a small number of labeled 2D seismic slices.
In general, we improve the standard U-Net by adding an actively trainable attention module and propose two new loss functions, so that a 3D fault segmentation network can be trained from few 2D images.
We draw on ideas from references \cite{24,25}. Reference \cite{24} uses an attention gate for medical image segmentation, allowing the model's attention coefficients to focus on the local areas that need attention during training and thereby filter noise effectively; the work in \cite{22} trains a 3D medical segmentation model from sparse data. However, seismic data differ from medical data. In medical images, the pixels of the target area cluster into a 2D region, while fault pixels in a seismic image are arranged along a line; locally, a fault is one-dimensional, which makes it difficult for an attention mechanism to capture. Moreover, seismic data are more complex and contain more noise, the ratio of fault to non-fault area is severely imbalanced, and fault voxels make up only a tiny fraction of all voxels, which makes it harder to propagate effective gradients (those contributed by fault voxels) when training on sparse data.
The contribution of our work can be summarized as follows:
(1) We propose an attention module that can be actively supervised and trained (Active Attention Module, AAM), based on the characteristics of seismic faults (without requiring additional annotations), which makes the model pay more attention to fault areas and thereby effectively suppresses noise. In addition, this module can be treated as intermediate supervision that provides more effective gradients for training.
(2) We propose new binary cross-entropy and smooth $L_1$ loss functions for seismic fault segmentation ($\lambda$-BCE and $\lambda$-smooth $L_1$), which allow training a 3D convolutional neural network with only a small amount of labeled real 2D data.
(3) Our method allows geologists and oil and gas prospectors to label only a small fraction of the 2D slices (as little as 3.3\% of the original) in the seismic data to obtain accurate 3D fault segmentation models for all seismic data of similar geological types.
\section{Approach}
\subsection{Active Attention Module}
The AAM is embedded in the 3D U-Net model to suppress the large amount of noise in seismic data, make the model focus on fault areas, and provide more effective gradients for model training.
This module obtains the linear projections $\omega_lF_l$ and $\omega_hF_h$ from the low-level detail feature $F_l$ and the high-level semantic feature $F_h$ through $1\times1$ convolutions, respectively, then combines them into a single channel and normalizes it with a sigmoid. The whole process is expressed in Eq. \ref{attetion},
\begin{equation}
\Theta=\mbox{Sigmoid}(\omega_s^T\,\mbox{ReLU}(\omega_lF_l+\omega_hF_h))\label{attetion}
\end{equation}
where $\omega_l$ and $\omega_h$ are differentiable, so in Eq. \ref{attetion}, $\omega_lF_l+\omega_hF_h$ (denoted $D$) can be interpreted as the difference between the low-level and high-level features. As the network deepens, the features extracted in deeper layers tend increasingly toward the ground truth \cite{25}. Therefore, $D$ reflects the signal response of the ground truth. We believe this is the main mechanism of the Attention Gate in reference \cite{24}. The map $\Theta$, obtained by weighting and normalizing $D$, is the Attention Map we need.
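For concreteness, a minimal PyTorch sketch of such a gate is shown below; the channel counts are illustrative assumptions, and $F_h$ is assumed to have been upsampled to the spatial size of $F_l$.
\begin{verbatim}
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Theta = Sigmoid(w_s^T ReLU(w_l*F_l + w_h*F_h))."""
    def __init__(self, ch_l, ch_h, ch_mid):
        super().__init__()
        self.w_l = nn.Conv3d(ch_l, ch_mid, kernel_size=1)  # project F_l
        self.w_h = nn.Conv3d(ch_h, ch_mid, kernel_size=1)  # project F_h
        self.w_s = nn.Conv3d(ch_mid, 1, kernel_size=1)     # single channel

    def forward(self, f_l, f_h):
        d = torch.relu(self.w_l(f_l) + self.w_h(f_h))  # D
        theta = torch.sigmoid(self.w_s(d))             # attention map
        return theta * f_l, theta  # gated low-level features and Theta
\end{verbatim}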
In U-Net, one reason to concatenate $F_l$ and $F_h$ is to let the segmentation result incorporate more detailed cues from $F_l$, but $F_l$ also includes a lot of noise. Multiplying $\Theta$ with $F_l$ before concatenation can introduce the details around suspected ground-truth areas while suppressing noise, so that the model pays more attention to fault areas. However, in seismic fault segmentation it is very difficult for an Attention Map to emerge automatically during training, because the ground-truth fault pixels are arranged along lines: after repeated convolutions, $D$ can hardly capture the difference between high- and low-level fault features. Therefore, we propose generating an Attention Map from the label data to supervise $\Theta$.
We hope that the Attention Map extracted by the attention mechanism can effectively suppress the non-fault-area features $F_l^r$ in $F_l$ and retain the fault-area features $F_l^t$; the idealized Attention Map (denoted $\Theta$) is then expressed as,
\begin{equation}
\begin{split}
\lim\limits_{\mbox{pos}(f_{l,i}) \to \mbox{pos}(F_l^r)}\theta_i=0\\
\lim\limits_{\mbox{pos}(f_{l,i}) \to \mbox{pos}(F_l^t)}\theta_i=1
\end{split}
\end{equation}
where $f_{l,i}$ denotes a single feature vector in the feature region, $\mbox{pos}(x)$ denotes the coordinates or coordinate clusters of the feature in Euclidean space, and $\theta_i\in\Theta$; that is, the weight response rises from 0 toward 1 as the Euclidean distance to the fault region decreases. We use a Gaussian function to model this trend, as expressed in Eq. \ref{att_fumula}; assuming $\mbox{pos}(\theta^t)\in\{\mbox{pos}(F_l^t )\}$, then
\begin{equation}
\Theta(\theta^t)=\mbox{exp}\left(-\frac{\Arrowvert\mbox{pos}(\theta^t)- \textbf{\mbox{x}}_{w,h,d}\Arrowvert^2_2}{\sigma^2}\right)\label{att_fumula}
\end{equation}
In the labeled data, all the variables in Eq. \ref{att_fumula} are known; the generated heatmap is shown in Figure \ref{fig1}.
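A minimal sketch of generating such a heatmap from a binary fault label follows. It uses a Euclidean distance transform, which yields the response of the nearest fault pixel directly; $\sigma$ is an assumed hyper-parameter.
\begin{verbatim}
import numpy as np
from scipy.ndimage import distance_transform_edt

def attention_heatmap(label, sigma=3.0):
    """label: binary fault mask (2D or 3D), 1 on fault pixels.
    Returns a map equal to 1 on faults that decays as exp(-d^2/sigma^2)
    with the Euclidean distance d to the nearest fault pixel."""
    # distance_transform_edt measures the distance to the nearest zero,
    # so invert the mask to measure the distance to the nearest fault pixel.
    d = distance_transform_edt(1 - label.astype(np.uint8))
    return np.exp(-(d ** 2) / sigma ** 2)
\end{verbatim}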
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{1.png}
\centering\caption{(a) shows the trend of $\theta_i$ around $\mbox{pos}(\theta^t)$; (b) shows the heatmap generated when each pixel in the ground truth is replaced with the response profile shown in (a).}
\label{fig1}
\end{figure}
The generated heatmap is used to supervise $\Theta$ through the smooth $L_1$ loss \cite{26},
\begin{equation}
\mathcal{L}_{sL_1}(\theta_i,\theta_i^{gt})=\sum_{i\in\{0,1,2...whd\}}\mbox{smooth}_{L_1}(\theta_i-\theta_i^{gt})
\end{equation}
where
\begin{equation}
\mbox{smooth}_{L_1}(x)=\begin{cases}
0.5x^2 \ &\mbox{if } |x|<1\\
|x|-0.5 \ &\mbox{otherwise}
\end{cases}
\end{equation}
We use the smooth $L_1$ loss because, at the initial stage of training, the model cannot yet extract sufficient features and the difference between the predicted value and the ground truth is large; a purely quadratic penalty would then produce large gradients and destabilize training. Later in training, when the difference between prediction and ground truth is very small, the quadratic branch still provides a stable gradient.
The overall structure of the model is shown in Figure \ref{fig2}.
\begin{figure*}[ht]
\includegraphics[scale=0.25]{2.png}
\centering\caption{The model adds an attention module to the basic U-Net framework to suppress the noise introduced when fusing features. Active supervision is used in the attention module to ensure that it extracts the effective area near faults. The labels in the figure are sparse; the detailed process is described in the next section.}
\label{fig2}
\end{figure*}
In subsequent experiments, we found that this module not only suppresses noise but can also be regarded as an intermediate supervision mechanism that provides more effective gradients for the model.
It effectively prevents large numbers of holes in the segmentation result when labels are very sparse (Figure \ref{fig7}), and it significantly improves the quantitative metrics of the model on data with little noise (synthetic data, Table \ref{t1}).
\subsection{Learning 3D segmentation from few 2D labeled seismic data slices}
3D seismic data require a large amount of accurate labeling. Labeling seismic data is difficult, requires expert experience, and is therefore very costly; moreover, the process is subjective, and wrong or missing labels can mislead the backward pass. In this paper, a method of learning 3D segmentation from a small amount of 2D data is proposed, and its effectiveness is demonstrated both theoretically and experimentally.
U-Net can be divided into two parts: a backbone and a prediction layer. The backbone extracts features; the prediction layer is a single convolutional layer. Let $\Gamma$ denote the convolution kernel of this layer; its shape is $(C_1,k,k,k,C_0)$, where $C_1$ is the number of channels of the previous layer, $k=1$ is the kernel size, and $C_0=1$ is the number of kernels, so $C_1\times k\times k\times k\times C_0=C_1$ and $\Gamma$ is a vector of length $C_1$. As shown in Figure \ref{fig3} (drawn in 2D for convenience of presentation), the last feature map shares the single set of convolution weights $\Gamma$, which slides over the feature map to produce the final prediction, expressed as Eq. \ref{f6}.
\begin{equation}
prediction = \mbox{sigmoid}(\{\sum_{l=1}^{C_1}\gamma_la^1_l,\sum_{l=1}^{C_1}\gamma_la^2_l,\dots,\sum_{l=1}^{C_1}\gamma_la^{whd}_l\})\label{f6}
\end{equation}
where $a_l^i$ is the value of each element of the last feature map and $\gamma_l\in\Gamma$. The label in Figure \ref{fig3} is sparse, i.e., only the red part is labeled. Our method computes only the gradient caused by the labeled areas in the backward pass. Since the last feature map shares the convolution weights $\Gamma$, even when some voxels are unlabeled, the labeled ones still provide an effective gradient. The main process is as follows.
Let the number of positive voxel samples in the ground truth be $S_p$ and the number of negative samples be $S_f$. Denote $\mbox{sigmoid}(\sum_{l=1}^{C_1}\gamma_la_l^i)$ as $x_i$ and the ground truth as $y_i$, and use the binary cross-entropy loss to compute the cost.
\begin{equation}
\mathcal{L}_{\mbox{bce}}(x_i,y_i)=-\sum_{i\in\{0,1,2,...whd\}}\left[y_i\mbox{log}\,x_i+(1-y_i)\mbox{log}(1-x_i)\right]
\end{equation}
Then the gradient generated by each voxel is $\eta\frac{\partial\mathcal{L}_{\mbox{bce}}}{\partial x_i}$, where $\eta$ is the learning rate. We now weight each voxel according to its label state in the ground truth; the gradient propagated to the next layer is expressed as Eq. \ref{f8}.
\begin{equation}
grad = \frac{\eta}{S_p+S_f}\sum_{i\in\{0,1,2,...whd\}}\lambda_i\frac{\partial\mathcal{L}_{\mbox{bce}}}{\partial x_i}\label{f8}
\end{equation}
where,
\begin{equation}
\lambda_i=\begin{cases}
\frac{S_f}{S_p}\ &\mbox{if} \ Positive\\
\ 1 \ &\mbox{if} \ Negative\\
\ 0 \ &\mbox{if} \ Nonlabelled
\end{cases}
\end{equation}
$\lambda_i$ is the backward gradient coefficient; applying it to the loss function yields the $\lambda$-BCE loss function.
\begin{equation}
\mathcal{L}_{\lambda-\mbox{bce}}(x_i,y_i)=-\sum_{i\in\{0,1,2,...whd\}}\lambda_i\left[y_i\mbox{log}\,x_i+(1-y_i)\mbox{log}(1-x_i)\right]
\end{equation}
In the same way, we get $\lambda$-smooth $L_1$ loss function.
\begin{equation}
\mathcal{L}_{\lambda-sL_1}(\theta_i,\theta_i^{gt})=\sum_{i\in\{0,1,2...whd\}}\lambda_i\,\mbox{smooth}_{L_1}(\theta_i-\theta_i^{gt})
\end{equation}
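A minimal PyTorch sketch of the $\lambda$-BCE idea follows; the mask convention (1 = fault, 0 = non-fault, -1 = unlabeled) is an assumption made for illustration.
\begin{verbatim}
import torch

def lambda_bce(pred, label):
    """pred: sigmoid outputs in (0, 1); label: 1 = fault, 0 = non-fault,
    -1 = unlabeled voxel. Unlabeled voxels contribute no gradient."""
    pred = pred.clamp(1e-7, 1 - 1e-7)
    pos = (label == 1).float()
    neg = (label == 0).float()
    s_p = pos.sum().clamp(min=1.0)
    s_f = neg.sum().clamp(min=1.0)
    lam = pos * (s_f / s_p) + neg     # lambda_i from the case equation
    bce = -(pos * torch.log(pred) + neg * torch.log(1 - pred))
    return (lam * bce).sum() / (s_p + s_f)
\end{verbatim}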
\begin{figure}[ht]
\centering \includegraphics[scale=0.2]{3.png}
\caption{The elements on the last feature map share a set of weights $\Gamma$, which allows us to obtain effective gradients by training on only the labeled voxels.}
\label{fig3}
\end{figure}
In practice, we sample slices at equal intervals along the iline and xline directions of the seismic volume and label the sampled 2D slices, then assemble the labeled 2D slices into a grid. Finally, the seismic volume is divided into $64\times64\times64$ tensors, and Adam is used for training \cite{27}.
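As an illustration of this data preparation, a sketch of assembling a sparse 3D label volume from equally spaced labeled ilines and cutting it into training cubes follows; the value $-1$ marking unlabeled voxels and the 30-frame interval are assumptions consistent with the text, and labeled xlines could be inserted analogously along the second axis.
\begin{verbatim}
import numpy as np

def build_sparse_labels(volume_shape, iline_labels, step=30):
    """volume_shape: (n_iline, n_xline, n_time).
    iline_labels: dict mapping a labeled iline index -> 2D binary mask.
    Returns a volume with 0/1 on labeled slices and -1 elsewhere."""
    labels = -np.ones(volume_shape, dtype=np.int8)  # -1 = unlabeled
    for idx in range(0, volume_shape[0], step):
        if idx in iline_labels:
            labels[idx] = iline_labels[idx]
    return labels

def split_into_cubes(volume, size=64):
    """Divide a volume into non-overlapping size^3 training tensors."""
    nx, ny, nz = (s // size for s in volume.shape)
    return [volume[i*size:(i+1)*size, j*size:(j+1)*size, k*size:(k+1)*size]
            for i in range(nx) for j in range(ny) for k in range(nz)]
\end{verbatim}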
\section{Experiment}
\subsection{Illustration of the experiment}
Our real data come from the Shengli Oilfield Branch of Sinopec; these data are mainly used for the qualitative experiments. In addition, we use the synthetic data released by Wu \cite{22}, mainly for quantitative analysis: in real seismic data it is almost impossible to make accurate labels, which hinders quantitative performance analysis, whereas the label information of the synthetic data is exactly correct.
According to the $\lambda$-BCE and $\lambda$-smooth $L_1$ loss functions, a sample can participate in training as long as a single 2D cross-section is labeled.
To identify the most efficient labeling scheme, and thus save labeling costs and help geological professionals work as efficiently as possible, we evaluated six labeling schemes, as shown in Figure \ref{fig4}.
\begin{figure}[htb]
\includegraphics[scale=0.23]{4.png}
\centering\caption{Modes A, B, C and D label only ilines; Modes E and F label both crosslines and ilines. In the qualitative and ablation experiments we used these six labeling schemes and compared them to discover the most efficient one.}
\label{fig4}
\end{figure}
\subsection{Qualitative experiment}
We have data from two work areas; one is used as the training set and the other as the test set. We annotated the training set as shown in Figure \ref{fig4} and divided it into 5000 samples of size 64$\times$64$\times$64. The experiment used two NVIDIA Tesla P100 16GB GPUs (2$\times$16 GB of memory); training ran for 35 epochs with a batch size of 32.
The segmentation effect of the model on the test set is shown in Figure \ref{fig5}.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.332]{5.png}
\centering\caption{Qualitative experimental segmentation results. To facilitate visualization, the original data uses pseudo colors. Our method achieves high performance via a few labels, and can segment the fault very clearly and accurately.}
\label{fig5}
\end{figure*}
\subsubsection{The most efficient model training way}
Figure \ref{fig5} shows the strong performance of our method on real data: it suppresses most of the noise in the seismic data. The experiments show that training the model with volume samples in which only one slice is labeled is enough to segment 3D seismic data, which validates the theory of the previous section.
We observe that in Modes B--F, adding labeled slices does not significantly improve segmentation performance. On the contrary, in Modes E and F, which label both directions, the segmentation is not ideal. Although Modes D and E both label 6 slices, Mode D performs better than E; indeed, D even outperforms Mode F, which labels 12 slices (Figures \ref{fig5}, \ref{fig6}). The difference between E and D is that E annotates both ilines and crosslines, while D annotates only ilines.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.5]{6.png}
\centering\caption{Among them, (a) is an iline, (b) a crossline, and (c) a tline. The segmentation of Mode D in all three directions is significantly better than that of Mode E, which shows that labeling only ilines is the most efficient approach for segmenting real seismic data with our method.}
\label{fig6}
\end{figure*}
The main reason for this phenomenon is that the strike of faults is often perpendicular to the iline direction, so faults observed and marked on iline sections appear as straight lines and are easier to find when labeling the data.
When observed from the crossline direction, the arrangement of faults is often chaotic and difficult to discern, which leads to many missed and wrong labels that in turn mislead backpropagation.
In addition, since the weights of a 3D convolution kernel are not symmetric, convolving seismic data from different directions yields different results, and labeling slices in only one direction causes the model to overfit severely to that direction.
Therefore, we randomly rotate the data during training, which also lets the convolution kernels fully learn the spatial characteristics of the data; subsequent ablation experiments verify that random rotation improves model performance. This is also why models trained on real data labeled in only one direction perform better.
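A minimal sketch of this rotation augmentation (assuming cubes stored as NumPy arrays with the time axis last, and rotations by multiples of 90 degrees in the iline--xline plane) is:
\begin{verbatim}
import numpy as np

def random_rotate(cube, label):
    """Randomly rotate a (iline, xline, time) cube and its label volume
    by 0/90/180/270 degrees around the time axis."""
    k = np.random.randint(4)
    return (np.rot90(cube, k, axes=(0, 1)).copy(),
            np.rot90(label, k, axes=(0, 1)).copy())
\end{verbatim}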
The method of Wu et al. (training only on synthetic data) achieves strong performance on some datasets \cite{22}, but it did not transfer effectively to our real data, where it produces a lot of scattered noise. This confirms that models trained only on synthetic data have difficulty detecting faults accurately in real data.
Our preliminary findings on the most efficient scheme are as follows: (1) label only ilines in the training data; (2) label at least one slice every 30 frames; (3) randomly rotate the data during training.
\subsubsection{Active Attention Module}
AAM has a powerful ability to deal with noise: it uses the generated heatmap to suppress the low-level noise introduced during feature fusion. In addition, AAM serves as an intermediate supervision mechanism that provides more effective gradients.
\begin{figure*}[htb]
\centering\includegraphics[scale=0.4]{7.png}
\centering\caption{Among them, (a) is a tline, (b) a crossline, and (c) an iline. When training with Mode B (only 2 slices), the model without AAM produces noisier inference results, which shows AAM's excellent ability to suppress noise.
In addition, the segmentation obtained without AAM is very rough, and the segmented faults contain small holes; we attribute this to the fewer effective gradients available when fewer slices are used for training, whereas AAM provides more gradients.}
\label{fig7}
\end{figure*}
In Figure \ref{fig7}, the inference results of the model trained only on synthetic data contain a lot of noise, even classifying geological texture as faults, which reflects how difficult it is for a model trained on synthetic data to migrate to real data.
Inference without AAM is noisier. Figure \ref{fig7} also shows that when training with Mode B (only 2 slices), omitting AAM can leave many small holes in the inference results.
Increasing the number of labeled slices to 3, or further increasing the weight of faults during training, may remove this artifact, but not reliably; with AAM, this phenomenon never appeared.
This is because AAM provides an intermediate supervision mechanism, which allows the model to obtain more effective gradients during training; Wei et al. discuss the mechanism of intermediate supervision in reference \cite{25}.
\subsection{Quantitative experiment}
\subsubsection{Ablation experiment}
Because the experiment requires numerically accurate labels, it uses the synthetic dataset exclusively.
We split each $128\times128\times128$ volume released by Wu into 8 cubes of size $64\times64\times64$. In addition, to preserve the continuity of the data, we also downsample (resize) each original volume to $64\times64\times64$, for a total of $220\times(8+1)=1980$ samples, of which 300 are randomly sampled as the test set. We process labels with the six schemes shown in Figure \ref{fig4}, plus the original full-label mode (All Label), for a total of seven label modes in training.
In our experiments we found that randomly rotating samples is a very effective data augmentation for training, yet it is often neglected in 3D tasks, so we added it to the ablation. Our ablation experiment therefore contains two variables: whether AAM is added and whether the sample cube is rotated during training.
We use IOU (Intersection over Union) as the performance evaluation metric; it is expressed in Eq. \ref{iou}.
\begin{equation}
IOU=\frac{TP}{FP+TP+FN}\label{iou}
\end{equation}
Here, TP (true positive) denotes voxels classified as positive that are indeed positive; FP (false positive) denotes voxels classified as positive that are actually negative; and FN (false negative) denotes voxels classified as negative that are actually positive.
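A minimal sketch of this metric on binary volumes is:
\begin{verbatim}
import numpy as np

def iou(pred, gt):
    """pred, gt: binary arrays of the same shape; IOU = TP/(TP+FP+FN)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return tp / (tp + fp + fn)
\end{verbatim}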
\begin{table*}[htb]
\caption{Ablation Experiment}
\label{t1}
\centering
\begin{tabular}{ccccccccc}
\hline
\textbf{\textit{AAM}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Random\\ Rotation\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}All Label\\ 64,64 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode F\\ 6,6 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode E\\ 3,3 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode D\\ 6 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode C\\ 3 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode B\\ 2 slices\end{tabular}}} & \textbf{\textit{\begin{tabular}[c]{@{}c@{}}Mode A\\ 1 Slice\end{tabular}}} \\ \hline
& & 69.72 & 63.04 & 65.59 & 60.48 & 55.10 & 37.44 & 33.60 \\
\textbf{$\surd$} & & 70.10 & 69.15 & 67.66 & 61.34 & 58.64 & 39.48 & 36.56 \\
& \textbf{$\surd$} & 71.43 & 70.22 & 68.19 & 66.88 & 64.67 & 64.17 & 59.59 \\
\textbf{$\surd$} & \textbf{$\surd$} & \textbf{72.18} & \textbf{71.69} & \textbf{70.01} & \textbf{70.92} & \textbf{69.65} & \textbf{68.83} & \textbf{64.41} \\ \hline
\end{tabular}%
\end{table*}
Figure \ref{fig8} shows the loss curves obtained by evaluating the model on the validation set every 200 steps during training.
Table \ref{t1} shows that AAM not only removes noise visibly in the qualitative experiments but also clearly improves the quantitative metrics; although the synthetic data contain little noise, using AAM still improves model performance.
Notably, whether samples are rotated during training plays a decisive role in model performance: especially when samples are labeled in only one direction, omitting rotation greatly degrades the model.
In addition, there is no significant difference between the quantitative metrics of All Label and Modes B--F.
This phenomenon is also reflected in Figure \ref{fig5}: greatly increasing the labeling workload does not significantly improve model performance, and in real scenarios may even have the opposite effect (through wrong and missing labels).
\begin{figure}[htb]
\centering\includegraphics[scale=0.42]{8.png}
\centering\caption{The curves are smoothed for readability. (a) shows the convergence of the model with AAM and random sample rotation; when more than one slice is labeled, the models converge similarly.
Table \ref{t1} likewise shows no significant difference between the quantitative results (IOU) of Modes B--F and All Label.
(b) shows the ablation experiment in Mode D (6 slices, ilines only): the performance of the non-rotating model quickly saturates and then gradually declines (overfitting), while using AAM makes the model converge faster with a higher upper bound.}
\label{fig8}
\end{figure}
In the quantitative experiments, labeling both directions of the sample also performed well. This is because the labels in the synthetic data are perfectly objective and accurate, so the large number of wrong labels that arises when annotating the crosslines of real data does not occur.
We verified that, for seismic data from a given work area or instrument, labeling one iline every 30 frames is enough for the model to achieve very good segmentation performance.
Next, we use cross-validation to further confirm this conclusion.
\subsubsection{Cross validation}
This experiment uses K-fold cross-validation with K=5: the 1980 samples are divided evenly and randomly into 5 subsets; a single subset is kept as the validation set, and the other 4 subsets are used for training. The cross-validation is repeated 5 times so that each subset is validated once. The advantage of this method is that every sample is used for both training and validation. All training data use Mode B (2 labeled slices, i.e., one labeled slice every 30 frames).
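A sketch of this split with scikit-learn (the random seed is an assumption) is:
\begin{verbatim}
import numpy as np
from sklearn.model_selection import KFold

samples = np.arange(1980)  # indices of the 64^3 cubes
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(samples)):
    # train on samples[train_idx], validate on samples[val_idx]
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")
\end{verbatim}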
The cross-validation experiment uses five metrics: Precision, Recall, IOU, Dice and Hausdorff Distance.
Precision and Recall are common metrics in machine learning; IOU and Dice are sensitive to the segmentation area, while the Hausdorff distance is sensitive to the segmentation boundary. For further analysis of these five metrics, see reference \cite{28}.
\begin{table}[htb]
\caption{Cross Validation for Two Slices Labeled}
\label{t2}
\centering
\begin{tabular}{ccccccc}
\hline
& \textbf{\textit{Precision}} & \textbf{\textit{\ Recall\ }} & \textbf{\textit{\ IOU\ }} & \textbf{\textit{\ Dice\ }} & \textbf{\textit{Hausdorff}} \\ \hline
\textbf{\textit{set 1}} & 76.58 & 89.30 & 66.86 & 80.14 & 62.84 \\
\textbf{\textit{set 2}} & 76.07 & 88.47 & 65.97 & 79.50 & 66.43 \\
\textbf{\textit{set 3}} & 75.88 & 91.40 & 66.51 & 79.88 & 69.29 \\
\textbf{\textit{set 4}} & 75.74 & 89.17 & 65.74 & 79.33 & 65.80 \\
\textbf{\textit{set 5}} & 77.06 & 90.25 & 67.78 & 80.80 & 68.16 \\
\textbf{\textit{Mean}} & 76.27 & 89.72 & 66.57 & 79.93 & 64.30 \\ \hline
\end{tabular}%
\end{table}
The cross-validation results in Table \ref{t2} show that training with only two labeled slices per sample yields a very stable and usable model.
The high recall and IOU indicate that the model detects almost all faults.
Precision is slightly lower than recall because the fault annotations are narrower than the detected faults, which slightly inflates FP.
The width of the detected faults can be controlled by adjusting the $\lambda$ coefficient of the positive samples in the loss function.
The stability of the Hausdorff distance above 60 indicates that the model handles boundaries and noise very well.
\section{Conclusion}
Using our method, we obtained the most effective labeling scheme, which requires labeling only one slice every 30 frames. The experiments show that redundant labels do not significantly improve segmentation performance: although we used only 3.3\% of the total labels, we still achieved state-of-the-art segmentation performance. This is a leap forward for the fault detection of seismic data. This work enables deep learning models to be quickly migrated to seismic data acquired from different work areas, which greatly improves the efficiency of geologists and petroleum exploration workers. We will explore more efficient models and methods in future studies.
\section*{Acknowledgments}
The authors are very indebted to the anonymous referees for their critical comments and suggestions for the improvement of this paper. This work was supported by grants from the National Natural Science Foundation of China (Major Program, No. 51991365).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
In robot-assisted minimally-invasive surgery (RAMIS), surgeons use robotic manipulators to control the movements of instruments, which are inserted into patients' bodies via small incisions \cite{maesoEfficacyVinciSurgical2010}. RAMIS offers many advantages over open surgery \cite{moorthyDexterityEnhancementRobotic2004}. However, to reap the benefits of RAMIS, surgeons must be well trained to use the robotic systems \cite{crawfordEvolutionLiteratureReview2018}. Currently, there are training guidelines for open and laparoscopic surgery, but not for RAMIS \cite{ahmedDevelopmentStandardisedTraining2015}.
An important step to mitigate this gap is to study how surgeons acquire RAMIS skills, and to discover what affects their learning. Previous studies attempted to influence the acquisition of RAMIS skills, e.g. by choosing which exercises the trainees perform. For example, in \cite{gurungAcceleratedSkillsAcquisition2020} learning was sped up by focusing the training on high difficulty exercises. In \cite{marianiSkillOrientedPerformanceDrivenAdaptive2021}, an adaptive training protocol was proposed in which the exercises are chosen based on the trainee's performance, leading to improved performance of participants compared to participants who chose their exercises on their own.
Another way to affect RAMIS skill acquisition is to combine task execution examples of an experienced user during training. For example, in \cite{yangEffectivenessIntegratedVideo2017} participants who watched their own performances along with a video of an expert performing the same exercise learned better than participants who did not have access to such feedback. In addition, many research groups developed training platforms that combine haptic guidance that enables the trainee to follow the movement of an experienced user who performs the exercise \cite{jacobsImpactHapticLearning2007,shahbaziMultimodalSensorimotorIntegration2018,abdelaalPlayMeBack2019}.
Important sources of knowledge about how to train RAMIS surgeons are motor learning theories \cite{jarcRobotassistedSurgeryEmerging2015}. Motor learning studies investigate the different processes that enable learning. These studies define skill acquisition as an improvement in performance beyond previous levels or the acquisition of completely novel abilities \cite{krakauerHumanSensorimotorLearning2011}. Adaptation is defined as when participants improve their performances in response to altered conditions such as a perturbing force field \cite{shadmehrAdaptiveRepresentationDynamics1994} or visuomotor transformations \cite{krakauerLearningVisuomotorTransformations2000}. It is important to note that in contrast to skill acquisition, in adaptation, the participants can restore their baseline performance but will not improve
beyond it \cite{krakauerHumanSensorimotorLearning2011}.
Recent findings in motor learning propose ways to affect learning. For example, in reach movements, manipulating the consistency
of the perturbations \cite{castroEnvironmentalConsistencyDetermines2014,wuTemporalStructureMotor2014,duarteEffectsRoboticallyModulating2015}, the error that is presented to the trainee \cite{herzfeldMemoryErrorsSensorimotor2014,vanderkooijVisuomotorAdaptationHow2015,diederenScalingPredictionErrors2015}, and the balance between punishment and reward \cite{galeaDissociableEffectsPunishment2015}, can affect the rate of adaptation. In error-based learning, the sensorimotor system is hypothesized to estimate the error between the desired or predicted outcome of a movement and the actual outcome, and update the motor commands in the following movement \cite{wolpertPrinciplesSensorimotorLearning2011}. These ideas may be used to optimize surgical
skill learning \cite{jarcRobotassistedSurgeryEmerging2015}, but there is a gap between the current knowledge that is based on simple movements and the needed knowledge to train RAMIS surgeons, who perform complex motor tasks. There are a few studies that begin to fill the gap \cite{m.m.coadTrainingDivergentConvergent2017,oquendoRobotAssistedSurgicalTraining2019}, but more work is needed to develop efficient training protocols.
In this study, for the first time, we examine the effect of time-dependent perturbations on the learning of a surgical task. In our experiment, the participants cut circles drawn on gauze while they were exposed to perturbations that alternatingly pushed their hands inwards and outwards in the radial direction. We chose a time-dependent perturbation because while acting on the human body, surgeons encounter various time-dependent perturbations; for example, as a result of the periodic movement of the heart and blood vessels. To develop RAMIS training protocols, we need to understand how surgeons learn to deal with such perturbations, and their effects on performance.
More specifically, it is important to understand whether surgeons can improve their performance under time-dependent perturbations. Our task has a clear desired path, and the perturbations increase the error between the desired movement and the actual movement. Hence, we hypothesized that the motor system would adjust motor commands and reduce error with training. If surgeons do manage to improve performances during exposure to the perturbations, it is important to test whether this learning impairs their ability to cope with other conditions (such as an environment without perturbations), and whether it gives them resistance to other perturbations as well.
We designed our experiment to answer four specific research questions:
\begin{itemize}
\item \textbf{Q1} -- Can participants who are exposed to force perturbations learn and improve their performance under these perturbations?
\item \textbf{Q2} -- Does training with force perturbations impair performance when the perturbations are removed?
\item \textbf{Q3} -- Does training with force perturbations give an advantage when later encountering these perturbations, compared to training without perturbations?
\item \textbf{Q4} -- Does training with force perturbations give an advantage when encountering different types of perturbations, compared to training without perturbations?
\end{itemize}
A preliminary version of this study which included an initial analysis of the pilot experiment was presented in an extended abstract form \cite{sharonPreliminaryAnalysisLearning2020}.
\section{METHODS}
\subsection{Experimental Setup}
\subsubsection{the da Vinci Research Kit (dVRK)}
The dVRK is a development platform for researchers in the field of RAMIS \cite{kazanzidesfOpensourceResearchKit2014}, provided by Intuitive Surgical. Its hardware consists of components from the first-generation da Vinci Surgical System \cite{guthartIntuitiveTextsuperscriptTMTelesurgery2000}. Our dVRK (Fig. \ref{fig:dVRK}) consists of a pair of Master Tool Manipulators (MTMs), a pair of Patient Side Manipulators (PSMs), a foot-pedal tray, a high-resolution stereo viewer, and four manipulator interface boards. In this study, the participants sat at the master side (Fig. \ref{fig:dVRK}.b) and used the MTMs to remotely teleoperate curved scissors and a large needle driver at the patient side, where the task board was placed (Fig. \ref{fig:dVRK}.c). The MTM and PSM electronics were connected via FireWire to a single Ubuntu (Linux) computer with an Intel Xeon E5-2630 v3 processor. The vision system consisted of a pair of Blackfly S cameras (FLIR Integrated Imaging Solutions Inc.) that acquired the visual scene at the patient side. The cameras were fixed such that the task board was in the center of the field of view; the participants could not control the cameras or the zoom. The video was broadcast to the stereo viewer, which presented a 3D view at a 35 Hz refresh rate and a 1080$\times$810 resolution per eye. The visual information was transmitted using custom-developed software on a dedicated computer. The movement scaling was set to 0.4, such that each movement of the PSM was 0.4 times the corresponding movement of the MTM, and the participants could reach the entire workspace without using the clutch.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/Fig1.pdf}
\caption{The dVRK. (a) The participant sits in the master-side and uses the MTMs to remotely teleoperate the PSMs in the patient-side. (b) The master-side. (c) The patient-side.}
\label{fig:dVRK}
\end{figure}
\subsubsection{The Pattern-Cutting Task}
The task in this experiment was based on the FLS pattern-cutting task \cite{FLSManualSkills2014}, modified to our needs. In this task, participants used their right hands to control curved scissors and cut a 5 cm diameter circle drawn on a two-layered 10$\times$10 cm non-woven gauze. The width of the black circle line was 2 mm. To complete the task, participants could cut one layer or both layers. The task sequence consisted of (Fig. \ref{fig:TheTask}.a): (1) cutting the gauze toward point A; (2) cutting along the left half of the circle until point B; (3) moving the scissors back to point A; (4) cutting the right side of the circle. The participants used their left hands to control a large needle driver and maintain the tension of the gauze. Participants were instructed to cut as quickly and as accurately as possible, meaning within the black line.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/Fig2.pdf}
\caption{The pattern-cutting task and the force perturbations. (a) The task sequence. (b) The task board and the two PSM tools (right – curved scissors, left – large needle driver). (c) The MTMs. (d) The circle; the position of the right PSM's tip $\vec{\bm{x}}_P$ (blue); the force applied on the participant's hand $\vec{\bm{f}}_M$ (red), and the reference base frame (yellow).}
\label{fig:TheTask}
\end{figure}
\subsubsection{Force Perturbations}
In some of the trials during the experiment, planar radial force perturbations were applied on each participant’s hand using the right MTM -- away from the center of the circle and toward the center, alternatingly (Fig. \ref{fig:TheTask}(c)-(d)):
\begin{equation}
\label{eq:perturbations}
\overrightarrow{\boldsymbol{f}}_{M}=A\left[\frac{x_{P}}{\sqrt{x_{P}^{2}+y_{P}^{2}}}, \frac{y_{P}}{\sqrt{x_{P}^{2}+y_{P}^{2}}}, 0\right]^T,
\end{equation}
where $x_P$ and $y_P$ are the x and y position coordinates of the scissors' tip relative to the center of the circle (see Fig. \ref{fig:TheTask}.d), and $A$ defines the type of the trial. There were three types of trials during the experiment (see Video 1):
\begin{itemize}
\item No perturbations: $A =0$.
\item 1Hz perturbations: $A = \sin(2\pi t)$.
We chose the 1 N maximal force applied in this type of trial such that it was noticeable but still allowed task completion.
\item Unpredictable perturbations: $A=\frac{\sum_{1}^{5} \sin (2 \pi f t)}{\sqrt{5}} ; f \sim U[0.3\,\mbox{Hz}, 1\,\mbox{Hz}]$. The force was a combination of five sine waves with different frequencies, known to be unpredictable to human users \cite{avrahamPerceivingRobotsHumans2012}. We normalized the amplitude by dividing the sum by $\sqrt{5}$, leading to a power equal to that of the 1Hz perturbations (an illustrative signal sketch follows this list).
\end{itemize}
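As referenced above, a minimal sketch of generating the two perturbation amplitude profiles $A(t)$ follows; the sampling rate, time window, and random seed are our own assumptions.
\begin{verbatim}
import numpy as np

fs, T = 100, 30                  # assumed 100 Hz sampling, 30 s window
t = np.arange(0, T, 1 / fs)

a_1hz = np.sin(2 * np.pi * t)    # 1Hz perturbation amplitude (max 1 N)

rng = np.random.default_rng(0)
freqs = rng.uniform(0.3, 1.0, 5) # five frequencies from U[0.3 Hz, 1 Hz]
a_unpred = sum(np.sin(2 * np.pi * f * t) for f in freqs) / np.sqrt(5)
\end{verbatim}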
To make sure the perturbations acted in the radial direction, the gauze was placed so that the center of the circle coincided with the origin of the reference base frame (Fig. \ref{fig:TheTask}.d). Before each trial, the PSM tool pointed to the origin of the reference base frame and the experimenter placed the marked center of the circle in the correct position.
\subsection{Experimental Protocol}
Thirty right-handed volunteers without surgical background (aged 21-30; 15 females) participated in the experiment after signing an informed consent form approved by the Human Participants Research Committee of Ben-Gurion University of the Negev. The participants were instructed on how to use the dVRK and how to perform the pattern-cutting task. Since the perturbations depend on the position of the circle, the experimenter emphasized the importance of keeping the gauze in place, i.e., not pulling it in a way that distorts the marked circle. The instructions were accompanied by videos of the pattern-cutting task -- one performed according to the instructions, and one in which the participant stretches the gauze in a prohibited way and also cuts with many errors (not within the black line). Before the experiment, each participant practiced the following actions with the robot: pressing the right pedal to start teleoperation, moving the right tool through the task sequence (without cutting), catching the gauze with the left tool, and cutting a straight line with the right tool.
The participants were randomly assigned into two groups (15 participants per group): a 1Hz group, and a control group. During the experiment each participant performed 24 consecutive trials (Fig. \ref{fig:Protocol}):
\begin{itemize}
\item \textbf{Baseline (B)} -- five trials without perturbations. These trials were the same for both groups, and their purpose was to assess baseline performances of each participant.
\item \textbf{Training (T)} -- 10 trials that were used to answer Q1: the control group trained without perturbations, and the 1Hz group trained with 1Hz perturbations.
\item \textbf{Post training (P)} -- nine trials for testing the effect of the training on performance post training, i.e., answering Q2-Q4; these trials were the same for both groups. The first three trials \textbf{(P1)} were without perturbations, and were included to assess the performance of both groups after the different training protocols (Q2). The next three trials \textbf{(P2)} were with 1Hz perturbations, and were included to assess the resistance to 1Hz perturbations (Q3). The last three trials \textbf{(P3)} were with unpredictable perturbations, and were included to assess the resistance to new perturbations, which participants had not practiced before (Q4). The five frequencies of the unpredictable trials were different between the three trials, but the same for all the participants.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/Fig3.pdf}
\caption{The experimental protocol.}
\label{fig:Protocol}
\end{figure}
\subsection{Data Acquisition and Segmentation}
All the kinematic data of the MTMs and PSMs were recorded at 100Hz. The video data of the left camera were recorded at 35Hz. In addition, all the cut circles were scanned. In 20 trials out of the 720 trials there were technical issues that resulted in the trial stopping before the task completion. In all those trials, participants completed the trial after fixing the issue, and these trials were not included in the analysis. We used the recorded videos and the scissors' opening angle to manually segment each trial into its stages.
\subsection{Metrics}
We used four metrics to assess the progress of the participants: \textit{combined error-time}, \textit{path length}, \textit{number of cuts}, and \textit{perturbation MSE}. The first two metrics quantified the task performance, and the other two metrics allowed us to follow different approaches of the participants.
\subsubsection{Combined error-time}
participants were instructed to cut as quickly and as accurately as possible. According to the speed-accuracy trade-off \cite{fittsInformationCapacityHuman1954}, we expected that shorter task times might be accompanied by larger errors and vice versa. Hence, we combined these two measures of performance.
The task time was calculated as:
\begin{equation}
TT = t_{end}-t_{start},
\end{equation}
where $t_{start}$ is the time when the participant first closes one of the tools on the gauze, and $t_{end}$ is the time of the last cut, when the circle was completely removed from the gauze.
To quantify the amount of errors, we used a custom-written image processing algorithm (MATLAB) that detected error areas in the scanned circles. The error areas were defined as areas in which the cutting was not on the line -- outside or inside the circle (Fig. \ref{fig:TotalError}). The total error was calculated as:
\begin{equation}
TE = E_{outside}+E_{inside},
\end{equation}
where $E_{outside}$ and $E_{inside}$ are the numbers of pixels in the error areas outside and inside the circle, respectively.
The combined error-time of trial $j$ was calculated as \cite{liesefeldCombiningSpeedAccuracy2019}:
\begin{equation}
CET_j = \frac{TT_j-\overline{TT}}{S_{TT}}+\frac{TE_j-\overline{TE}}{S_{TE}},
\end{equation}
where $\overline{TT}$ and $\overline{TE}$ are the averages of the task times and total errors across all the trials of all the participants in the experiment, and $S_{TT}$ and $S_{TE}$ are the corresponding standard deviations. Note that the value of this metric can be positive or negative, and that a lower value means better performance.
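A minimal sketch of this combined score, assuming the task times and total errors of all trials are collected in NumPy arrays, is:
\begin{verbatim}
import numpy as np

def combined_error_time(tt, te):
    """tt, te: 1D arrays of task times and total errors over all trials
    of all participants; returns the per-trial combined error-time."""
    return ((tt - tt.mean()) / tt.std(ddof=1)
            + (te - te.mean()) / te.std(ddof=1))
\end{verbatim}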
\begin{figure}[b]
\centering
\includegraphics[width=\columnwidth]{Figures/Fig4.pdf}
\caption{Example of total error calculation. (a) The cut circle. (b) The circle with marked error areas.}
\label{fig:TotalError}
\end{figure}
\subsubsection{Path Length}
this metric is a common measure of surgical skill that quantifies the economy of motion, and it was calculated as:
\begin{equation}
PL = \sum_{i=1}^{N-1} ||\vec{\bm{x}}_P[i+1]-\vec{\bm{x}}_P[i]||_2,
\end{equation}
where $\vec{\bm{x}}_P[i]$ is the position of the scissors' tip at the $i$\textsuperscript{th} sample, and $N$ is the number of samples.
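A minimal sketch of this computation, assuming the tip positions are stored as an $N \times 3$ array (names are illustrative):
\begin{verbatim}
import numpy as np

def path_length(tip_positions):
    # Sum of Euclidean distances between consecutive tip positions;
    # tip_positions is an (N, 3) array of scissors'-tip samples.
    steps = np.diff(np.asarray(tip_positions, dtype=float), axis=0)
    return np.linalg.norm(steps, axis=1).sum()
\end{verbatim}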
\subsubsection{Number of cuts}
one approach to dealing with the perturbations may be to time the cuts according to the perturbations. Such an approach would lead to a change in the number of cuts during training. Hence, we counted the number of times the scissors were closed during the task. To find these closing events, we used the MATLAB function \texttt{findpeaks()} to find the local minima of the recorded scissors' opening angle. We included in the analysis only cuts that were made for cutting the circle itself (i.e., steps 2 and 4, Fig. \ref{fig:TheTask}.a).
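The original analysis used MATLAB's \texttt{findpeaks()}; an equivalent sketch in Python (the prominence threshold here is an assumed tuning parameter, not a value from the study):
\begin{verbatim}
import numpy as np
from scipy.signal import find_peaks

def count_cuts(opening_angle, prominence=0.05):
    # Closing events are local minima of the opening angle,
    # i.e. peaks of the negated signal.
    minima, _ = find_peaks(-np.asarray(opening_angle, dtype=float),
                           prominence=prominence)
    return len(minima)
\end{verbatim}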
\subsubsection{Perturbation MSE}
this metric quantifies the effect of the perturbations on the participant's movement. The higher the value of this metric, the greater the 1Hz radial fluctuations in the hand movement. The size of the fluctuations is a measure of coping with the perturbations, either by adaptation, i.e., actively cancelling the perturbation by applying opposing forces, or by increasing arm stiffness.
To calculate this metric, we first extracted the planar radial component of the scissors' path (Fig. \ref{fig:Perturbation}, upper panel):
\begin{equation}
Rxy_p[i] = \sqrt{{x_P[i]}^2+{y_P[i]}^2}.
\end{equation}
where $x_P$ and $y_P$ are the x and y position coordinates of the scissors' tip.
We then filtered $Rxy_p$ using a moving average of 100 samples:
\begin{equation}
\overline{Rxy_p}_{(100)}[i] =
\frac{1}{100} \sum_{k=-50}^{49} Rxy_p[i+k].
\end{equation}
At the edges of the signal, the average of the available samples was calculated. The 1Hz perturbation was periodic with a period duration of 1 sec, and there were 100 samples per second. Therefore, $\overline{Rxy_p}_{(100)}$ is the radial component without the contribution of the perturbation (Fig. \ref{fig:Perturbation}, magenta line). To isolate the contribution of the perturbation (Fig. \ref{fig:Perturbation}, lower panel) we used:
\begin{equation}
Per[i] = Rxy_p[i]-\overline{Rxy_p}_{(100)}[i].
\end{equation}
The \textit{perturbation MSE} was calculated as:
\begin{equation}
PerMSE = \frac{1}{N}\sum_{i=1}^{N} (Per[i])^2.
\end{equation}
We calculated $PerMSE$ for steps 2 and 4 -- cutting the circle (Fig. \ref{fig:TheTask}.a), and summed them to get one metric value per trial. Since the unpredictable perturbations in P3 (Fig. \ref{fig:Protocol}) were a combination of five sine waves with different frequencies, we did not calculate this metric for the trials in P3.
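The whole pipeline, from radial component to \textit{perturbation MSE}, can be sketched as follows. We assume 100 samples per second, as in the recordings; the edge handling that averages only the available samples is approximated with a normalized convolution:
\begin{verbatim}
import numpy as np

def perturbation_mse(x_p, y_p, window=100):
    # Planar radial component of the scissors'-tip path.
    r = np.sqrt(np.asarray(x_p)**2 + np.asarray(y_p)**2)
    # 100-sample moving average; at the edges, only the
    # available samples are averaged.
    kernel = np.ones(window)
    smooth = (np.convolve(r, kernel, mode='same')
              / np.convolve(np.ones_like(r), kernel, mode='same'))
    per = r - smooth              # contribution of the perturbation
    return np.mean(per**2)
\end{verbatim}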
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{Figures/Fig5.pdf}
\caption{Isolating the perturbation from the recorded path.}
\label{fig:Perturbation}
\end{figure}
\subsection{Statistical analysis}
Our metrics were not normally distributed, and therefore we tested our hypotheses using permutation tests \cite{goodPermutationTestsPractical2013}. For each participant we calculated the average metric values of: the last three trials of the baseline ($B$); the first three trials of the training ($T_{start}$); the last three trials of the training ($T_{end}$); the post training trials without perturbations ($P1$); the post training trials with 1Hz perturbations ($P2$); the post training trials with unpredictable perturbations ($P3$).
We first tested Q1 -- whether participants who are exposed to force perturbations can learn and improve their performance under the perturbations. For each metric, we calculated the difference between $T_{start}$ and $T_{end}$ of each participant. We then used a matched-pair permutation test for each group to test whether the mean of these differences was significantly different from zero. Additionally, we used a permutation test to test whether the means of the individual differences between $T_{start}$ and $T_{end}$ were significantly different between the two groups (control and 1Hz).
We then tested Q2 -- whether training with force perturbations can impair the performance when the perturbations are removed. We performed a permutation test to test whether there was a significant difference between the mean $P1$ values of the two groups. In addition, we calculated the improvement between the baseline trials and P1 trials ($P1-B$) and used them for another permutation test between the two groups.
Next, we tested Q3 -- whether training with force perturbations can give an advantage when encountering these perturbations, compared to those who trained without perturbations. We performed two permutation tests between the groups: (1) $P2$ values, (2) $P2-P1$ values.
Lastly, we tested Q4 -- whether training with force perturbations can give an advantage when encountering different perturbations, compared to those who trained without perturbations. We performed two permutation tests between the groups: (1) $P3$ values, (2) $P3-P1$ values.
To control for multiple comparisons, we used the Bonferroni correction: for each research question separately, we multiplied the p-values by the number of tests (two or three). Statistical significance was determined at the 0.05 threshold.
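A minimal sketch of the matched-pair variant, assuming the individual differences are collected in an array (the number of permutations is an assumption; the paper does not state it):
\begin{verbatim}
import numpy as np

def matched_pair_permutation_test(diffs, n_perm=10000, seed=0):
    # Two-sided test: randomly flip the sign of each individual
    # difference and compare permuted means with the observed mean.
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    observed = abs(diffs.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    perm_means = np.abs((signs * diffs).mean(axis=1))
    return float((perm_means >= observed).mean())
\end{verbatim}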
\section{Results}
Fig. \ref{fig:CutExamples} shows the scissors’ path when the participant cut the circle. The deviations from the circle in Fig. \ref{fig:CutExamples}(b) are more prominent than those in Fig. \ref{fig:CutExamples}(a), showing that the perturbations affect the scissors’ path. Fig. \ref{fig:Results1} depicts the values of the four metrics during the experiment trials. Fig. \ref{fig:Results2} presents the average values for the different experiment stages (left panel), and the average values of the individual differences between the stages (right panel).
Videos 2-5 present the videos of the trials with the lowest and highest scores for each metric.
For all the four metrics, there was no significant difference between the two groups at the baseline stage (p\textsubscript{combined error-time} = 0.2165,
p\textsubscript{path length} = 0.9985,
p\textsubscript{number of cuts} = 0.1810,
p\textsubscript{perturbation MSE} = 0.7355). In the following subsections we will describe the results of the tests we conducted to examine our research questions. Table \ref{tab:Results} summarizes the statistical analysis.
\begin{figure}[t]
\centering
\includegraphics[width=\the\columnwidth]{Figures/Fig6.pdf}
\caption{Examples of the recorded path of the scissors. (a) The last baseline trial -- without perturbations, and (b) the same participant’s first training trial with 1Hz perturbations.}
\label{fig:CutExamples}
\end{figure}
\subsection{Q1 -- The effect of the training protocols on the learning curves and approaches}
Participants from both groups improved their \textit{combined error-time} and \textit{path length} scores during training (Fig. \ref{fig:Results1}.a-b, and Fig. \ref{fig:Results2}.a-d). These improvements were statistically significant. In addition, the average \textit{path length} improvement of the 1Hz group was significantly higher than that of the control group. Because the perturbations directly increase the path length, the improvement of the 1Hz group probably consists of an improvement from learning the task (similar to the control group) and an additional improvement from coping with the perturbations.
The approach metrics showed that the participants in the 1Hz group significantly reduced the number of cuts during training, whereas in the control group, no such change was observed (Fig. \ref{fig:Results2}.e-f). A possible explanation is that participants in the 1Hz group had to time their cuts according to the perturbations, forcing participants who cut quickly to lower the pace. In addition, the participants in the 1Hz group significantly reduced their \textit{perturbation MSE} scores during training. This means that at the end of the training, the 1Hz fluctuations in their movement were smaller than at the beginning of the training.
\begin{figure}[!t]
\centering
\includegraphics[height=15.5cm]{Figures/Fig7.pdf}
\caption{The values of the four metrics as a function of trial number. Pale color lines are individual scores, dark color lines are means, and shaded areas are 95\% bootstrap confidence intervals.}
\label{fig:Results1}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[height=15.5cm]{Figures/Fig8.pdf}
\caption{The left panel presents the average values of the four metrics in the different stages of the experiment: last three baseline trials (B), first three training trials (T\textsubscript{start}), last three training trials (T\textsubscript{end}), post training trials without perturbations (P1), post training trials with 1Hz perturbations (P2), and post training trials with unpredictable perturbations (P3). The right panel presents the average values of the individual differences between the stages: T\textsubscript{start}-T\textsubscript{end}, B-P1, P2-P1, P3-P2. Markers are means, error bars are 95\% bootstrap confidence intervals.}
\label{fig:Results2}
\end{figure}
\subsection{Q2 -- The effect of the training protocols on post-training trials without perturbations (P1)}
After the training (P1), the groups reached a similar level of performance in \textit{combined error-time} and \textit{path length} (Fig. \ref{fig:Results2}.a,c). There was a small, non-significant difference between the average improvement in \textit{combined error-time} of the two groups (Fig. \ref{fig:Results2}.b). This small difference may stem from the fact that the control group had a slightly higher score in the baseline trials, and thereby more room for improvement during training.
The number of cuts of the 1Hz group was significantly smaller than that of the control group, suggesting that after getting used to the lower number of cuts imposed by the perturbations, participants did not raise it even after the perturbations were removed.
\subsection{Q3 -- The effect of the training protocols on post-training trials with 1Hz perturbations (P2)}
In all four metrics, the average value of the 1Hz group was slightly lower than that of the control group, but these differences were not statistically significant. For the number of cuts, the difference ($P2-P1$) of the control group was significantly smaller than that of the 1Hz group (Fig. \ref{fig:Results2}.f). In Fig. \ref{fig:Results2}.e, we can see that while participants in the 1Hz group kept the same number of cuts for P1 and P2, participants in the control group reduced this number. Apparently, participants who chose an approach with a high number of cuts were forced to change it due to the perturbations.
\begin{table*}[h]
\caption{Statistical Analysis summary.}
\label{tab:Results}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{} &
\multicolumn{3}{c|}{\textbf{Q1}} &
\multicolumn{2}{c|}{\textbf{Q2}} &
\multicolumn{2}{c|}{\textbf{Q3}} &
\multicolumn{2}{c|}{\textbf{Q4}} \\ \cline{2-10}
&
\multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Training start – training end\\ (T\textsubscript{start}-T\textsubscript{end})\end{tabular}} &
\begin{tabular}[c]{@{}c@{}}Post\\ no pert. \\ (P1)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Baseline \\ –post no pert. \\ (B-P1)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Post 1Hz \\ (P2)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Post 1Hz \\ – post no pert. \\ (P2-P1)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Post unpred. \\ (P3)\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Post unpred.\\ – post no pert.\\ (P3-P1)\end{tabular} \\ \cline{2-10}
&
Control &
1Hz &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} &
\begin{tabular}[c]{@{}c@{}}1Hz -\\ Control\end{tabular} \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Combined \\ error-time\end{tabular}}} &
\textbf{$\Delta$=0.516} &
\textbf{$\Delta$=0.962} &
$\Delta$=0.446 &
$\Delta$=-0.078 &
$\Delta$=-0.327 &
$\Delta$=-0.835 &
$\Delta$=-0.757 &
$\Delta$=-0.19 &
$\Delta$=-0.112 \\
&
\textbf{p=0.002} &
\textbf{p=0.003} &
p=0.211 &
p=1 &
p=0.331 &
p=0.233 &
p=0.231 &
p=1 &
p=1 \\ \hline
\multirow{2}{*}{\textbf{Path length}} &
\textbf{$\Delta$=5.65} &
\textbf{$\Delta$=32.774} &
\textbf{$\Delta$=27.124} &
$\Delta$=3.529 &
$\Delta$=-3.516 &
$\Delta$=-21.824 &
$\Delta$=-25.353 &
$\Delta$=7.85 &
$\Delta$=4.321 \\
&
\textbf{p=0.035} &
\textbf{p\textless{}0.001} &
\textbf{p\textless{}0.001} &
p=0.932 &
p=0.897 &
p=0.266 &
p=0.115 &
p=1 &
p=1 \\ \hline
\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Number \\ of cuts\end{tabular}}} &
$\Delta$=-0.633 &
\textbf{$\Delta$=6.133} &
$\Delta$=6.767 &
\textbf{$\Delta$=-15.822} &
$\Delta$=9.711 &
$\Delta$=-4.644 &
\textbf{$\Delta$=11.178} &
$\Delta$=-5.111 &
\textbf{$\Delta$=10.711} \\
&
p=1 &
\textbf{p\textless{}0.001} &
p=0.057 &
\textbf{p=0.015} &
p=0.23 &
p=0.478 &
\textbf{p=0.026} &
p=0.493 &
\textbf{p=0.006} \\ \hline
\multirow{2}{*}{\textbf{Pert. MSE}} &
$\Delta$\textless{}0.001 &
\textbf{$\Delta$=0.024} &
\textbf{$\Delta$=0.024} &
$\Delta$\textless{}0.001 &
$\Delta$\textless{}0.001 &
$\Delta$=-0.022 &
$\Delta$=-0.022 &
– &
– \\
&
p=0.223 &
\textbf{p=0.002} &
\textbf{p\textless{}0.001} &
p=1 &
p=0.994 &
p=0.218 &
p=0.19 &
– &
– \\ \hline
\end{tabular}
$\Delta$ denotes the mean of the individual differences for matched-pair tests, and the difference between the means of the two groups for the other tests. Bold font indicates statistically significant effects ($p < 0.05$).
\end{table*}
\subsection{Q4 -- The effect of the training protocols on post-training trials with unpredictable perturbations (P3)}
The small differences in \textit{combined error-time} and \textit{path length} values between the groups in the P2 stage were further reduced in P3. Looking at Fig. \ref{fig:Results2}.a,c, it seems that the control group improved their performance in P3 relative to P2, while the 1Hz group had no improvement in \textit{combined error-time} and a smaller improvement in \textit{path length}. There was a significant difference between the $P3-P1$ values of the two groups because the control group continued to use fewer cuts than in P1, while the 1Hz group did not change the number of cuts.
\section{DISCUSSION}
In this letter we presented a new experimental protocol for testing the effect of time-dependent force perturbations on the learning of a RAMIS task. Our aim was to harness motor learning theories to improve the training of RAMIS surgeons. Our analysis revealed several approaches that participants used to overcome the perturbations without impairing their performance when the perturbations were removed. Our results form a basis for further research that could improve the way surgeons acquire RAMIS skills.
So far, the effect of assistive and resistive forces on RAMIS training has been investigated \cite{m.m.coadTrainingDivergentConvergent2017,enayatiRoboticAssistanceasNeededEnhanced2018,oquendoRobotAssistedSurgicalTraining2019}; however, to the best of our knowledge, the effect of time-dependent perturbations has not been explored. We found that participants improved their performance during training with 1Hz periodic perturbations. By the end of the training, the perturbations had less impact on their movement than at the beginning. Such a result can be caused by adaptation, in which the motor system learns the force perturbations and applies force in the opposite direction, reducing the impact of the perturbations on the movement. When adaptation occurs, an after-effect can be seen after the perturbations are removed. In our experiment, such an after-effect would have been expressed as 1Hz fluctuations in the P1 trials. Because the \textit{perturbation MSE} of the 1Hz group returned to the baseline level as soon as the perturbation was removed, we conclude that adaptation did not occur. This result is consistent with previous studies showing that participants do not adapt to time-varying perturbations \cite{karnielSequenceTimeState2003}.
Although adaptation did not occur, the participants did develop approaches to improve. Previous studies showed that participants stabilize unstable conditions by co-contraction of the muscles and an increase in the impedance of the arm \cite{burdetCentralNervousSystem2001}. Such an increase can be the cause of the decrease in \textit{perturbation MSE} during training in our experiment. Another approach revealed by our analysis is a change in the number of cuts during the training. This result is consistent with \cite{patelUsingIntermittentSynchronization2018}, where a surgeon who performed a pattern-cutting task with a moving platform timed the cuts according to the movement of the platform.
Importantly, we showed that when the perturbations were removed, both groups reached a similar performance level. This means that learning how to deal with the perturbations was not at the expense of learning how to perform the task better. This result suggests that trainees can be exposed to challenging conditions during training without impairing their learning. When the participants encountered the 1Hz perturbations after training, the advantage of the 1Hz group was not significant. This small advantage was further reduced when participants encountered the unpredictable perturbations. It is possible that, after long training on the task under simple conditions, the control group was able to learn quickly how to deal with the perturbations, and that therefore the gap between the groups was small. Additional research is required to examine protocols that incorporate perturbations in a way that may increase the advantages over learning without perturbations.
This is the first study of time-dependent perturbations in RAMIS training, and hence we chose simple and not fully realistic perturbations. Now that we have found that training with these perturbations did not impair the learning processes, we plan to examine more realistic perturbations.
\section{CONCLUSIONS}
We developed a novel experimental protocol and new analysis tools to examine how time-dependent perturbations affect the learning of a surgical task. The data collected for this study are available (on request from the corresponding author), and can be used to advance surgical robotics and motor learning research. We found that participants learned how to overcome these perturbations and that this learning was not at the expense of learning the task. Our results lead the way toward developing training protocols that incorporate time-dependent perturbations, which could improve the way surgeons acquire RAMIS skills. From the perspective of the motor learning field, this study is an important step toward understanding learning in real-life tasks.
\section*{ACKNOWLEDGMENT}
The authors would like to thank Anton Deguet, Simon DiMaio, Eli Peretz, and Gilat Malka for their help with the dVRK integration in our lab, and Alon Lempert and Noa Yamin for running the experiments.
\bibliographystyle{IEEEtran}
\section{Introduction}
Transformer-based models \cite{vaswani2017attention} are ubiquitously state-of-the-art across many natural language processing (NLP) tasks, including summarization. To achieve the best results, the community has trained ever larger transformer models on larger amounts of data, and/or with more task-specific optimization objectives \cite{devlin2018bert, raffel2020exploring, lewis-etal-2020-bart, brown2020language}. In long document summarization, the input sequences can be more than an order of magnitude longer than the limits of these transformer models. Although the limits can be extended, training large transformer models on long sequences is expensive and may not be possible on a standard GPU card because of the self-attention mechanism, whose cost grows quadratically with sequence length.
To tackle this quadratic cost, recent works have modified the self-attention mechanism and proposed variants of the transformer in which the quadratic complexity is reduced \cite{tay2020efficient, kitaev2020reformer, child2019generating, beltagy2020longformer,ainslie-etal-2020-etc,zaheer2020big}. However, pre-trained weights of the modified models are not readily available. In contrast, standard models such as BERT \cite{devlin2018bert} or BART \cite{lewis-etal-2020-bart} have been trained on various target tasks, including text summarization \cite{liu-lapata-2019-text}. This allows practitioners to achieve good performance with less training time. Thus, we are interested in exploiting pre-trained models for long-span summarization tasks.
We study a range of design configurations empirically and theoretically in regards to memory and compute requirements as well as their performance. We propose that long-span dependencies can be handled by two complementary methods. Firstly, inspired by modified self-attention transformers, we exploit standard transformer models by constraining attention mechanism to be local, allowing longer input spans during training. Secondly, because abstractive summarization systems perform content selection implicitly \cite{nallapati-etal-2016-abstractive, lebanoff-etal-2020-cascade}, to reduce memory and compute requirements an alternative method is to perform content selection explicitly before the abstractive stage. We study content selection during two phases: training time and test time. At training time, we investigate methods to select data for training fixed-span abstractive models. At test time, we extend existing model-based selection methods, and we propose a multitask content selection method that ranks sentences through extractive labelling based module \cite{cheng-lapata-2016-neural} and attention based module \cite{see-etal-2017-get}. Ultimately, we explore the combined approach, consisting of local self-attention transformer and content selection for long-document summarization.
We conduct our experiments using a number of design configurations on the Spotify open-domain Podcast summarization dataset \cite{clifton-etal-2020-100000}. This dataset is challenging not only because of its long-span nature, but also because transcribed spoken utterances typically have lower information density \cite{li-etal-2019-keep, manakul2020_interspeech}. Furthermore, we carry out experiments on the arXiv and PubMed datasets \cite{cohan-etal-2018-discourse} to further demonstrate and verify the effectiveness of our approach, as well as to compare with existing approaches. We highlight the strengths and weaknesses of our approach across different resources and tasks. The main contributions of this paper are:
\begin{itemize}
\item On local self-attention, we show how to exploit a standard transformer model for long-span summarization, and we show good design considerations based on empirical results.
\item On content selection, we demonstrate the best selection method at training time, and we propose a multitask content selection (MCS) method outperforming baselines at test time.
\item Our work has set new state-of-the-art results on Spotify Podcast, arXiv and PubMed datasets in the ROUGE scores. Furthermore, with a small-scale GPU card, our approach achieves comparable or superior performance to previous state-of-the-art systems.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.99\textwidth,keepaspectratio]{fig/architecture.pdf}
\caption{Overview of the combined architecture where we highlight different aspects of this work. $N_0$ is the original document length, $N$ is the input length to the generation system, and $M$ is the summary length.}
\label{fig:architecture}
\end{figure*}
\vspace{-0.28cm}
\section{Related Work}
\vspace{-0.04cm}
\textbf{Efficient Transformers.}
Pre-trained transformer models have shown success and become the starting point for various NLP problems such as BERT \cite{devlin2018bert} in contextual representation, GPT2 in text generation \cite{radford2019language}, or BART in seq2seq tasks \cite{lewis-etal-2020-bart}. However, the memory and time requirements of transformer models grow quadratically with the sequence length, and for long-span tasks this quickly leads to the GPU running out of memory during training. To mitigate the quadratic nature, a wide range of modified architectures have recently been proposed \cite{tay2021long}. They reduce the quadratic complexity of the full self-attention mechanism by using fixed attention patterns \cite{ parmar2018image, dai-etal-2019-transformer, child2019generating, qiu-etal-2020-blockwise, ainslie-etal-2020-etc, zaheer2020big, beltagy2020longformer}, learnable patterns \cite{kitaev2020reformer, tay2020sparse}, low-rank matrix approximation \cite{wang2020linformer}, or kernel methods \cite{choromanski2021rethinking}. Alternatively, it has been shown that some attention heads are redundant and can be pruned to reduce model size \cite{voita-etal-2019-analyzing, michel_sixteen_heads}. Knowledge distillation reduces memory and compute by compressing a large model into a smaller one \cite{hinton2015distilling, sanh2019distilbert}. In contrast, we focus on the dependencies of long input and target sequences in encoder-decoder architectures, and we apply publicly available transformer models with summarization weights to long-span summarization tasks.
\vspace{8pt}
\noindent \textbf{Long-span Summarization.} Efficient transformer architectures have been applied to summarize long documents, such as BigBird \cite{zaheer2020big} and Longformer-Encoder-Decoder (LED) \cite{beltagy2020longformer}, which has recently been revised in parallel with this work.\footnote{On the self-attention aspect, we believe this system is the most comparable to ours (see comparisons in Sec. \ref{section:arxiv_results}).} Hierarchical transformer architectures have been applied to multi-document summarization \cite{liu-lapata-2019-hierarchical}, and to extractive news and table-to-text summarization \cite{zhang-etal-2019-hibert, narayan-etal-2020-stepwise}. A hierarchical attention RNN system has been applied to summarize long articles \cite{cohan-etal-2018-discourse}.
Alternatively, earlier methods show that good content selection helps abstractive news summarization systems \cite{chen-bansal-2018-fast, gehrmann-etal-2018-bottom, hsu-etal-2018-unified}. Hybrid systems that select sentences and generate an abstractive summary have been proposed such as extractive system + TLM for scientific articles \cite{pilault-etal-2020-extractive}, simple selection + BART for podcasts \cite{manakul2020cued_speech, kaiqiang_trec2020}, and guided summarization by BERT-based keyword/sentence extraction + BART for news and scientific articles \cite{he2020ctrlsum, dou-etal-2021-gsum}.
Other work includes dividing the source and target into multiple smaller pairs to train abstractive summarizers \cite{gidiotis2020divide}. Extractive methods with and without redundancy reduction techniques for long-span summarization have been studied \cite{xiao-carenini-2019-extractive, xiao-carenini-2020-systematically}.
\section{Experimental Setup}
\subsection{Dataset}
\textbf{Spotify Podcast.}\footnote{\hspace{-1pt}\url{https://podcastsdataset.byspotify.com}} The dataset consists of ASR transcripts with human descriptions as summaries \cite{clifton-etal-2020-100000}. We follow the data processing at TREC2020 \cite{jones_trec2020} in removing bad transcript-summary pairs from a total of 105,360+1,027 episodes, resulting in train/valid/test splits of 60,415/2,189/1,027 episodes, the same as \citet{manakul2020cued_speech}.
\vspace{4pt}
\noindent \textbf{arXiv and PubMed.} Popular long document summarization datasets consist of academic articles with abstracts as summaries \cite{cohan-etal-2018-discourse} and train/valid/test splits of 203,037/6,436/6,440 for arXiv and 119,924/6,633/6,658 for PubMed.
\begin{table}[ht]
\centering
\scalebox{0.9}{
\begin{tabular}{rcccc}
\toprule
Dataset &\#Doc &Input &90$^{\text{th}}$\% &Target \\
\midrule
Podcast &106k &5,727 &11,677 &61.1 \\
arXiv &216k &8,584 &16,108 &367 \\
PubMed &133k &3,865 &7,234 &260 \\
\bottomrule
\end{tabular}
}
\caption{Length Statistics (mean \& 90$^{\text{th}}$\%-ile).}
\label{tab:data_statistics}
\end{table}
\vspace{-2pt}
\subsection{Models}
\label{section:models}
\noindent \textbf{BART and LoBART.} We use the publicly released BART model \cite{lewis-etal-2020-bart} fine-tuned on CNNDM \cite{hermann2015teaching}.\footnote{\hspace{-1pt}\url{https://huggingface.co/facebook/bart-large-cnn}} Following the local window attention in Sparse Transformer \cite{child2019generating} and Longformer \cite{beltagy2020longformer}, we modify the self-attention mechanism in the encoder to local self-attention (see Figure \ref{fig:attn_mech}), and we refer to this local self-attention BART as LoBART. It has the same architecture as BART, e.g. the same number of parameters, except that we extend the positional embedding beyond 1,024 by copying BART's positional embedding with flipping to allow a smoother transition. See details in Appendix \ref{appendix:model_parameters}.
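A sketch of one way to implement this extension, assuming \texttt{pe} is BART's $(1024, d)$ positional embedding matrix; the alternating-flip scheme below is our reading of the description, not the released code:
\begin{verbatim}
import torch

def extend_positional_embedding(pe, target_len):
    # Append alternately flipped / unflipped copies of the original
    # embedding so positions change smoothly at each seam.
    chunks, flip = [pe], True
    while sum(c.shape[0] for c in chunks) < target_len:
        chunks.append(torch.flip(pe, dims=[0]) if flip else pe)
        flip = not flip
    return torch.cat(chunks, dim=0)[:target_len]
\end{verbatim}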
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.50\linewidth}
\centering
\includegraphics[width=\linewidth,height=2.2cm,keepaspectratio]{fig/fullattn.png}
\caption{Full}
\label{fig:full_attn}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.50\linewidth}
\centering
\includegraphics[width=\linewidth,height=2.2cm,keepaspectratio]{fig/localattn.png}
\caption{Local ($W$=9)}
\label{fig:local_attn}
\end{subfigure}%
\caption{Self-Attention Pattern.}
\label{fig:attn_mech}
\end{figure}
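To make the local pattern in Figure \ref{fig:attn_mech} concrete, a minimal sketch of a banded attention mask follows. Note that an efficient implementation computes attention in chunks so memory grows as $O(NW)$; materializing the full $N \times N$ mask, as done here for clarity, does not save memory:
\begin{verbatim}
import torch

def local_attention_mask(n, w):
    # Token i may attend to token j only if |i - j| <= w // 2.
    idx = torch.arange(n)
    return (idx[None, :] - idx[:, None]).abs() <= w // 2

scores = torch.randn(16, 1024, 1024)      # (heads, N, N) logits
mask = local_attention_mask(1024, 512)
attn = scores.masked_fill(~mask, float('-inf')).softmax(dim=-1)
\end{verbatim}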
\noindent \textbf{Hierarchical RNN.} The content selection model is based on a hierarchical encoder-decoder architecture that has been shown effective on meeting and long document summarization \cite{cohan-etal-2018-discourse, hier_rnn_2019, li-etal-2019-keep}. The model consists of word-level and sentence-level GRUs \cite{cho-etal-2014-learning}. We add a linear layer on top of the sentence-level GRU to perform extractive labelling. The sentence-level attention mechanism and extractive labelling modules form our multitask content selection (MCS). More details in Section \ref{section:mcs}.
We provide the full details about our implementation, model parameters, hyperparameters, optimizer, and training configurations in Appendix \ref{appendix:implementation_details}.
\section{Longer Span via Local Self-Attention}
\label{section:local_attention}
It is well known that the memory and compute complexity of transformers is \textit{quadratic} in the sequence length. However, in encoder-decoder architectures, the exact dependencies on input length $N$, target length $M$, and batch size $B$ are less well understood. This is particularly important in long-span seq2seq tasks because large memory or compute requirements could make training impractical. Thus, this work studies these dependencies, and shows the trade-off between the size of the input span and the size of the attention span in local self-attention.
\subsection{Memory Analysis and LoBART Design}
Firstly, through a regression analysis for an encoder-decoder architecture such as BART, the memory required in training is:
\begin{equation*}
c^b_1 + B(c^b_2 M + c^b_3 N + c^b_4 MN + c^b_5 M^2 + c^b_6 N^2)
\end{equation*}
The term $c^b_1$ depends only on the model size and optimizer, and it is \textit{constant} (a theoretical calculation is provided in Appendix \ref{appendix:complexity}). The remaining terms are activation memory associated with the activation outputs cached for backpropagation, and they grow with $N$, $M$, and $B$. Table \ref{tab:memory_profile} shows system-independent\footnote{system-independent across hardware and machines; albeit implementation-dependent. This analysis is based on the widely used PyTorch and Huggingface implementations.} regression results for the memory in training BART. It is apparent that as $N$ grows the dominant term is $c^b_6N^2$, which is associated with the encoder self-attention. This motivates us to modify self-attention only on the encoder side.
\begin{table}[ht]
\tabcolsep=0.17cm
\centering
\scalebox{0.9}{
\begin{tabular}{c|cccccc}
\toprule
Term &$c^b_1$ &$c^b_2M$ &$c^b_3N$ &$c^b_4 MN$ &$c^b_5M^2$ &$c^b_6 N^2$\\
\midrule
GiB &6.05 &0.23 &0.84 &0.21 &0.02 &1.53 \\
\bottomrule
\end{tabular}}
\caption{BART's Memory Profile ($N$=1024, $M$=144).}
\label{tab:memory_profile}
\end{table}
\noindent By introducing local self-attention of width $W$, the memory in training LoBART becomes:
\begin{equation*}
c^l_1 + B(c^l_2 M + c^l_3 N + c^l_4 MN + c^l_5 M^2 + c^l_6 NW)
\end{equation*}
For large $N$, the memory is now dominated by $c^l_6 NW$. The coefficient $c^l_6\approx 1.72 c^b_6$, suggesting that $W$ should be at most $0.58N$ to reduce memory. We provide more details about the exact theoretical calculation for model and optimizer memory as well as time complexity in Appendix \ref{appendix:complexity}.
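The coefficients above can be estimated by least squares over profiled runs; a minimal sketch, assuming arrays of measured peak memory for varied $(N, M, B)$ (the profiling data themselves are hypothetical):
\begin{verbatim}
import numpy as np

def fit_memory_profile(N, M, B, mem_gib):
    # Fit mem = c1 + B*(c2*M + c3*N + c4*M*N + c5*M^2 + c6*N^2).
    X = np.column_stack([np.ones_like(N, dtype=float),
                         B * M, B * N, B * M * N,
                         B * M**2, B * N**2])
    coeffs, *_ = np.linalg.lstsq(X, mem_gib, rcond=None)
    return coeffs   # c1 .. c6
\end{verbatim}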
The memory for training BART/LoBART in Figure \ref{fig:memory_pred} enables us to choose an operating point. Additionally, other \textit{complementary} techniques for reducing memory in training include: (i) gradient checkpointing, where a subset of intermediate values in the computation graph are cached and the rest are re-computed during backpropagation \cite{chen2016training}, but this requires changes to optimization and leads to longer training time; (ii) half/mixed-precision training \cite{micikevicius2017mixed}, which would almost halve the y-axis values in Figure \ref{fig:memory_pred}, but this requires changes to the model precision and may result in lower performance; (iii) model parallelism with micro-batching \cite{huang2019gpipe}, but this method requires multiple accelerators.
\begin{figure}[ht]
\centering
\includegraphics[width=0.80\linewidth,keepaspectratio]{fig/memory_pred.pdf}
\caption{Operating points for $B$=1 and $M$=144. (1) Section \ref{section:local_attention} studies local attention to reduce quadratic complexity to linear. As $W$ decreases, the gradient of linear complexity decreases. (2) Section \ref{section:content_selection} studies content selection to move an operating point to the left.}
\label{fig:memory_pred}
\end{figure}
\subsection{BART and LoBART}
We study the characteristics of the full self-attention in BART by defining the mean attention distance in a particular layer and head as follows:
\begin{equation}
D = \frac{1}{N} \sum_{i=1}^N \left( \sum_{j=1}^N \alpha_{i,j} \times |i-j| \right)
\end{equation}
where $\alpha_{i,j}$ is the attention weight of position $i$ attending to position $j$ ($\sum_{j=1}^N \alpha_{i,j} = 1$). This measure corresponds to the average distance of self-attention. If the attention weight is uniform, $D_{U} = \frac{N^2-1}{3N}$; for $N=1024$, $D_{U} = 341$. In Figure \ref{fig:mean_distance}, our results show that most layers have a shorter mean distance than $D_U$, supporting the view that the information is mostly localized. The mean distances of differently initialized BART models computed on the podcast data also show that the attention mechanism is learned during the pre-training stage, as there is little variation afterwards. As illustrated in Figure \ref{fig:mean_distance}, the average attention distance $D$ of the BART model is around 250-350 tokens. This suggests the window size $W$ should be above 700, so that the half window $W/2$ is greater than 250-350, effectively matching BART and exploiting transfer learning more efficiently.
\begin{figure}[ht]
\centering
\includegraphics[width=0.88\linewidth,height=8cm,keepaspectratio]{fig/mean_distance.pdf}
\caption{The average mean distance across multi-heads for each layer. The average mean distance of the random weight model is slightly lower than $D_U$ as some inputs are shorter than 1,024.}
\label{fig:mean_distance}
\end{figure}
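A minimal sketch of this measure for a single head, with a check against the uniform-attention value $D_U$:
\begin{verbatim}
import numpy as np

def mean_attention_distance(alpha):
    # alpha: (N, N) attention weights, rows summing to one.
    n = alpha.shape[0]
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return float((alpha * dist).sum(axis=1).mean())

n = 1024
uniform = np.full((n, n), 1.0 / n)
print(mean_attention_distance(uniform))  # ~ (n**2-1)/(3*n) = 341.3
\end{verbatim}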
Subsequently, we train different configurations of BART/LoBART models up to our GPU memory limit of 32GiB. The results in Table \ref{tab:bart_1024_2048_4096_new} show that: (i) expanding the model to accommodate longer input spans improves over the baseline BART(1k), in contrast to \citet{manakul2020cued_speech}, who trained longer-span models by freezing bottom layers and did not show any improvement over their baseline; (ii) although LoBART(8k) with $W$=512 can process longer input spans than LoBART(4k) with $W$=1024, it performs worse; we suggest that this is because LoBART(8k)'s window is too small (e.g. $<$700) to utilize transfer learning efficiently, and its effective receptive field is also smaller.
\begin{table}[ht]
\tabcolsep=0.20cm
\centering
\scalebox{0.9}{
\begin{tabular}{lc|c|ccc}
\toprule
System &$W$ &{GiB} &R1 &R2 &RL \\
\midrule
BART(1k) &Full &8.9 &26.43 &9.22 &18.35 \\
\midrule
LoBART(2k) &128 &9.6 &25.88 &8.89 &17.87 \\
LoBART(2k) &256 &10.2 &25.93 &8.80 &17.82 \\
LoBART(2k) &512 &11.6 &26.35 &8.98 &18.19 \\
LoBART(2k) &1024 &14.2 &26.44 &9.26 &18.25 \\
BART(2k) &Full &14.5 &26.63 &9.41 &18.65 \\
\midrule
LoBART(4k) &128 &12.8 &26.42 &9.02 &18.12 \\
LoBART(4k) &256 &14.1 &26.66 &9.22 &18.33 \\
LoBART(4k) &512 &16.7 &26.75 &9.54 &18.54 \\
LoBART(4k) &1024 &22.0 &\textbf{27.02} &\textbf{9.57} &\textbf{18.78} \\
\midrule
LoBART(8k) &128 &19.3 &26.45 &9.04 &18.23 \\ %
LoBART(8k) &256 &21.1 &26.72 &9.30 &18.36 \\
LoBART(8k) &512 &27.1 &26.90 &9.47 &18.50 \\
\bottomrule
\end{tabular}
}
\caption{BART \& LoBART memory requirement in training and performance. ($n$k) denotes maximum input length of $n\times1024$.}
\label{tab:bart_1024_2048_4096_new}
\end{table}
\section{Longer Span via Content Selection}
\label{section:content_selection}
Some input sequences still exceed LoBART's longer fixed-span limit. Further extending the input span would lead to a small local attention span, a diminishing improvement, or GPU running out of memory. Alternatively, it has been shown that a better content selection improves abstractive summarization in news \cite{chen-bansal-2018-fast, gehrmann-etal-2018-bottom, hsu-etal-2018-unified}, multi documents \cite{liu-lapata-2019-hierarchical, liu2018generating}, and scientific articles \cite{pilault-etal-2020-extractive}. Thus, we propose to tackle the excess length by content selection. Here, we distinguish between two phases of content selection: training time and test time.
\subsection{Training-time Content Selection}
\label{section:training_bart_with_cs}
During training, ground-truth targets are available. We categorize selection methods in this phase into two types: ground-truth based (model-free), also referred to as \textit{oracle}, and model-based. Ground-truth based methods cannot be used at test time, while model-based methods can be applied in both phases. Because model-based methods do not rely on ground-truth targets, they have the advantage of matching between the training and test phases. Existing oracle methods include using ROUGE-2 recall \cite{liu2018generating} or the average of ROUGE-1,2,L recall \cite{pilault-etal-2020-extractive}. We discuss model-based methods in Section \ref{section:mcs}, where we propose the MCS method. Let the subscript $(i,j)$ denote the $j$-th word in the $i$-th input sentence; the full input is $\mathbf{X}=\{\mathbf{x}_1,...,\mathbf{x}_i,...,\mathbf{x}_{N_1}\} = [\underbrace{x_{1,1},x_{1,2},...,x_{1,J_1}}_{\text{sent }1},...,\underbrace{x_{i,1},...,x_{i,J_i}}_{\text{sent }i},...,\underbrace{x_{N_1,1},...,x_{N_1,J_{N_1}}}_{\text{sent }N_1}]$.
Content selection re-ranks, truncates, and sorts $\mathbf{X}$ to get ${\mathbf{X}}^{\text{cs}}$ for training BART/LoBART as follows:
\begin{align}
\bar{\mathbf{X}} &= \{\mathbf{x}_{r_1},\mathbf{x}_{r_2},\mathbf{x}_{r_3},...,\mathbf{x}_{r_R}\} \\
{\mathbf{X}}^{\text{cs}} &= {\tt SortOrig}({\tt TruncateN}(\bar{\mathbf{X}}))
\end{align}
where $r_i$ is the index of the sentence of rank $i$, the ${\tt TruncateN}$ operation filters $\bar{\mathbf{X}}$ such that the total number of words is less than $N$, and ${\tt SortOrig}$ retains the original sentence order (a minimal sketch of this pipeline is given after the list below). The following ranking methods are considered:
\begin{itemize}
\item Truncation (TRC): $r_k = k$.
\item Model-based: Given the score $f$ of model $\boldsymbol{\phi}$, \\ $r_k = \{i \in N_1 : f_{\boldsymbol{\phi}}(i|\mathbf{X}) \hspace{4pt} \text{is ranked} \hspace{4pt} k\text{-th}\}$
\item Oracle (ORC): Given the ground-truth summary $\mathbf{y}$ and similarity measure $d$, \\ $r_k = \{i \in N_1 : d(\mathbf{x}_i, \mathbf{y}) \hspace{4pt} \text{is ranked} \hspace{4pt} k\text{-th}\}$
\end{itemize}
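A minimal sketch of the re-rank, truncate, and sort pipeline, assuming \texttt{ranks} lists sentence indices from best to worst (whether {\tt TruncateN} stops at or skips an overlong sentence is an implementation choice; we stop):
\begin{verbatim}
def select_content(sentences, ranks, max_words):
    # Keep the highest-ranked sentences within the word budget,
    # then restore the original sentence order (SortOrig).
    kept, n_words = [], 0
    for i in ranks:
        n = len(sentences[i].split())
        if n_words + n > max_words:
            break
        kept.append(i)
        n_words += n
    return [sentences[i] for i in sorted(kept)]
\end{verbatim}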
In this work, we use ROUGE-2 recall as the similarity measure $d$. For the ORC method, we first retain only sentences with positive $d$, leading to $R \leq N_1$. We found that the number of sentences with positive $d$ is low, at 21.3\% of the total number of sentences on average on the podcast data. This corresponds to 56\% of training instances being shorter than the BART input span of 1,024.\footnote{We refer to this percentage as \%AgORC$_\text{no-pad}$ (the percentage of inputs aggressively extracted by the oracle method).} This no-padding oracle method ({ORC}$_{\text{no-pad}}$) is highly \textit{aggressive}, potentially preventing the downstream summarizer from learning complex abstraction. Hence, we propose variants of oracle methods that extend the {ORC}$_{\text{no-pad}}$-selected input to the maximum input span $N$:
\begin{itemize}
\item {ORC}$_{\text{pad-lead}}$: Pad by leading unselected sentences and keep the original sentence order.
\item {ORC}$_{\text{pad-rand}}$: Pad by random unselected sentences and keep the original sentence order.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth,height=8cm,keepaspectratio]{fig/barchart_selection_R1.pdf}
\caption{The impact of training-time content selection methods on BART(1k) performance.}
\label{fig:barchart}
\end{figure}
In Figure \ref{fig:barchart}, since any oracle method is considered cheating at test time, the best performance is obtained by MCS (in blue), and the upper-bound performance is obtained by the optimal oracle method (in green). The results show that although {ORC}$_{\text{no-pad}}$ yields the highest upper bound, the abstractive model in fact does not learn how to perform abstraction. For instance, with TRC or MCS at test time, {ORC}$_{\text{no-pad}}$ yields the lowest performance level. The best way to fine-tune the abstractive model, as shown in Figure \ref{fig:barchart}, is using {ORC}$_{\text{pad-rand}}$. Compared to {ORC}$_{\text{pad-lead}}$, {ORC}$_{\text{pad-rand}}$ is better as it introduces more diversity to the abstractive model. Compared to the model-based method, {ORC}$_{\text{pad-rand}}$ is also computationally less expensive.
In addition, Table \ref{tab:combine_podcast} shows that when there is no content selection at test time (i.e. TRC applied), LoBART(4k) and LoBART(8k) benefit from {ORC}$_{\text{pad-rand}}$, whereas BART(1k) does not. This is because in the 1k setting, content selection is more aggressive; as a result, the large mismatch between training and test leads to a poor result. Thus, we suggest that the best content selection during training is {ORC}$_{\text{pad-rand}}$ given that content selection will be used at test time, or model's input span is long.
\subsection{Multitask Content Selection (MCS)}
\label{section:mcs}
To process long input sequences entirely, we consider RNNs, whose memory requirement grows linearly with the sequence length, and hierarchical architectures, which have been shown effective for long seq2seq tasks \cite{cohan-etal-2018-discourse, li-etal-2019-keep}. In this work, the hierarchical RNN model described in Section \ref{section:models} has a training memory requirement, given a target length of 144, of $0.83 + B(3.96\times10^{-5}+3.33\times10^{-5}N_2)N_1$ GiB,\footnote{Obtained by least-squares regression with 20 samples.} where $N_1$ is the number of sentences, $N_2$ is the maximum number of words in a sentence, and $B$ is the batch size. By setting $N_1$=1000 and $N_2$=50, only 2\% of the podcast data exceeds this limit, while the GPU memory required is only 2.53GiB for $B$=1. Thus, this model can cover long sequences.
Previous model-based methods treat content selection as extractive labelling and create labels heuristically \cite{pilault-etal-2020-extractive}, or use the encoder-decoder attention mechanism \cite{manakul2020cued_speech}. To utilize both of these in one framework, we propose a Multitask Content Selection (MCS) method in which we train the hierarchical encoder-decoder with an attention mechanism and a classification layer on top of the encoder (described in Section \ref{section:models}). First, the model is trained on the seq2seq abstractive summarization objective:
\begin{equation}
\mathcal{L}_{\text{seq2seq}} = -\sum_{m=1}^M \log P(y_m | \mathbf{y}_{<m},\mathbf{X})
\end{equation}
Second, we create binary labels as follows: for sentence $i$, the label $z_i$ is 1 if $d(\mathbf{x}_i,\mathbf{y}) > 0$; else $z_i$ is 0, and $d$ is the ROUGE-2 recall measure. The extractive labelling task objective is:
\begin{equation}
\resizebox{.89\hsize}{!}{$
\mathcal{L}_{\text{label}} = -\sum_{i=1}^{N_1} \left( z_i \log \hat{z}_i + (1-z_i) \log (1 - \hat{z}_i) \right)
$}
\end{equation}
\vspace{-0.5cm}
\begin{equation}
\hat{z}_i =\text{sigmoid} (\mathbf{W}_{\text{cls}} ^{T} \mathbf{h}_i + \mathbf{b}_{\text{cls}})
\end{equation}
where $\mathbf{h}_i$ is the sentence-level encoder output associated with sentence $i$, and $\mathbf{W}_{\text{cls}}, \mathbf{b}_{\text{cls}}$ are the parameters of the classification layer. Thus, the MCS training loss is defined as follows:
\begin{equation}
\mathcal{L}_{\text{MCS}} = \gamma\mathcal{L}_{\text{label}} + (1-\gamma)\mathcal{L}_{\text{seq2seq}}
\end{equation}
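A sketch of this combined objective in PyTorch, assuming the token-level decoder logits and sentence-level labelling probabilities are already computed (tensor shapes are illustrative):
\begin{verbatim}
import torch.nn.functional as F

def mcs_loss(dec_logits, target_ids, z_hat, z, gamma=0.2):
    # dec_logits: (M, vocab); target_ids: (M,) summary token ids;
    # z_hat, z: (N1,) predicted probabilities / float binary labels.
    l_seq2seq = F.cross_entropy(dec_logits, target_ids,
                                reduction='sum')
    l_label = F.binary_cross_entropy(z_hat, z, reduction='sum')
    return gamma * l_label + (1.0 - gamma) * l_seq2seq
\end{verbatim}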
At inference stage, there are two modes: (i) standard abstractive summary generation, e.g. via beam search decoding; (ii) ranking input sentences via labelling score and seq2seq attention score. The latter is how we use MCS during inference.\footnote{In practice, we run beam search decoding of width 4, and we obtain the attention score from the top beam.} For sentence $i$, the scores are:
\begin{equation}
\resizebox{.95\hsize}{!}{$
\text{score}_{i,(\text{label})} = \hat{z}_i, \hspace{0.1cm} \text{score}_{i,(\text{seq2seq})} = \sum_{m=1}^M \alpha^{\tt s}_{m,i}
$}
\label{eq:mcs_inference_score}
\end{equation}
where $\alpha^{\tt s}_{m,i}$ is the sentence-level attention weight at decoder step $m$ over input sentence $i$. Since the two scores are on different scales, rather than using the raw scores defined in Eq. \ref{eq:mcs_inference_score}, we rank the scores and then normalize the score ranks into the range 0.0 to 1.0. Let nscore denote the normalized ranking score; the MCS inference score is:
\begin{equation}
f_{\boldsymbol{\phi}}(i|\mathbf{X}) = \text{nscore}_{i,(\text{label})} + \text{nscore}_{i,(\text{seq2seq})}
\end{equation}
In our preliminary experiments, we varied the number of selected sentences from the limit of BART/LoBART down to a few sentences, and we found that more aggressive selection at test time degrades the performance. Therefore, our MCS selects input sentences up to the limit of BART/LoBART.
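A sketch of this rank-based combination (using average ranks; ties and the single-sentence edge case are ignored for brevity):
\begin{verbatim}
import numpy as np
from scipy.stats import rankdata

def mcs_score(label_scores, attn_scores):
    # Rank each score, normalize ranks to [0, 1], and add.
    def nscore(s):
        r = rankdata(s)               # 1 = lowest score
        return (r - 1.0) / (len(s) - 1.0)
    return (nscore(np.asarray(label_scores))
            + nscore(np.asarray(attn_scores)))
\end{verbatim}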
By setting $\gamma$=0.0, our method is comparable to the attention-based method in \citet{manakul2020cued_speech}. By setting $\gamma$=1.0, our method is similar to the extractive models in \citet{hsu-etal-2018-unified, pilault-etal-2020-extractive}.
In Table \ref{tab:mcs_results}, we show that when coupled with BART, MCS yields better summarization performance than both the Attn-only and Ext-only baselines. MCS also achieves a higher recall rate of sentences with $d(\mathbf{x}_i,\mathbf{y})>0$ than the two baselines.
\begin{table}[ht]
\centering
\scalebox{0.9}{
\begin{tabular}{l|c|ccc}
\toprule
{System} &{\%Recall} &R1 &R2 &RL \\
\midrule
Attn ($\mathcal{L}_{\text{{seq2seq}}}$) &38.85 &26.90 &9.70 &18.78 \\
Ext ($\mathcal{L}_{\text{{label}}}$) &35.26 &26.39 &8.90 &18.03 \\
\midrule
MCS ($\mathcal{L}_{\text{{MCS}}}$) &40.50 &27.28 &9.82 &19.00 \\
\bottomrule
\end{tabular}
}
\caption{The impact of test-time content selection methods on BART(1k) trained using ORC$_\text{pad-rand}$. Optimal $\gamma$=0.2 is tuned between 0.0-1.0 on the validation set.}
\label{tab:mcs_results}
\end{table}
\section{Combined Approach}
\subsection{Spotify Podcast results}
In Table \ref{tab:combine_podcast}, a performance gain is obtained in all settings by adding MCS. Comparing the different configurations with MCS, it can be seen that the gain from MCS in the LoBART(8k) system is the lowest. This is because the average input length is 5,727, meaning that many podcast inputs to LoBART(8k) do not benefit from content selection.
CUED-filt, the best single-model system in \citet{manakul2020cued_speech}, uses attention-based content selection at both training and test time, combined with fine-tuned vanilla BART. Our approach outperforms CUED-filt through improved content selection at both training time and test time, as demonstrated by BART(1k)-ORC+MCS. Additionally, local self-attention allows training on longer sequences, and our LoBART(4k)-ORC+MCS system has yielded the best results. Lastly, even though LoBART(8k) requires more resources to train, it does not perform as well as LoBART(4k) due to its smaller attention window, and it also shows a smaller improvement when adding MCS.
\begin{table}[h]
\tabcolsep=0.14cm
\centering
\scalebox{0.9}{
\begin{tabular}{lcc|ccc}
\toprule
{System} &CS-trn &CS-tst &R1 &R2 &RL \\
\midrule
CUED-filt$^*$ &\cmark &\cmark &26.96 &9.75 &18.90 \\
\midrule
BART(1k) &\xmark &\xmark &26.43 &9.22 &18.35 \\
BART(1k) &\xmark &MCS &26.82 &9.39 &18.57 \\
BART(1k) &ORC &\xmark &25.54 &9.00 &17.83 \\
BART(1k) &ORC &MCS &27.28 &9.82 &19.00 \\
\midrule
LoBART(4k) &\xmark &\xmark &27.02 &9.57 &18.78 \\
LoBART(4k) &\xmark &MCS &27.53 &9.95 &19.08 \\
LoBART(4k) &ORC &\xmark &27.36 &10.04 &19.33 \\
LoBART(4k) &ORC &MCS &\textbf{27.81} &\textbf{10.30} &\textbf{19.61} \\
\midrule
LoBART(8k) &\xmark &\xmark &26.90 &9.47 &18.50 \\
LoBART(8k) &\xmark &MCS &27.02 &9.52 &18.62 \\
LoBART(8k) &ORC &\xmark &27.16 &9.84 &19.08 \\
LoBART(8k) &ORC &MCS &27.49 &9.98 &19.25 \\
\bottomrule
\end{tabular}
}
\caption{Podcast Results. The impact of training-time ORC$_\text{pad-rand}$ and test-time MCS. $^*$CUED systems were the top systems by human evaluation at Spotify Challenge 2020; CUED systems use BART with a model-based (trained on $\mathcal{L}_{\text{seq2seq}}$) content selection in both training and test stages.}
\label{tab:combine_podcast}
\end{table}
\begin{table*}[!t]
\centering
\scalebox{0.9}{
\begin{tabular}{ccl|ccc|ccc}
\toprule
&\multirow{2}{*}{Type} &\multirow{2}{*}{System} &\multicolumn{3}{c}{arXiv} &\multicolumn{3}{c}{PubMed} \\
& & &R1 &R2 &RL &R1 &R2 &RL \\
\midrule
\parbox[t]{3pt}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\small Previous Work}}} &Abs&Discourse-Aware \cite{cohan-etal-2018-discourse} &35.80 &11.05 &31.80 &38.93 &15.37 &35.21 \\
&Mix&Ext+TLM \cite{pilault-etal-2020-extractive} &41.62 &14.69 &38.03 &42.13 &16.27 &39.21 \\
&Ext&ExtSum-LG+Rd\cite{xiao-carenini-2020-systematically} &44.01 &17.79 &39.09 &45.30 &20.42 &40.95\\
&Abs&Pegasus \cite{zhang2020pegasus} &44.21&16.95&38.83 &45.97&20.15&41.34 \\
&Abs&DANCER \cite{gidiotis2020divide} &45.01 &17.60 &40.56 &46.34 &19.97 &42.42 \\
&Abs&BigBird(3k) \cite{zaheer2020big} &46.63 &19.02 &41.77 &46.32 &20.65 &42.33 \\
&Abs&LED(4k) \cite{beltagy2020longformer} &44.40 &17.94 &39.76 &- &- &- \\
&Abs&LED(16k) \cite{beltagy2020longformer} &46.63 &19.62 &41.83 &- &- &- \\
&Mix&CTRLsum(BART+BERT) \cite{he2020ctrlsum} &46.91 &18.02 &42.14 &- &- &- \\
\midrule
\parbox[t]{3pt}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\small This Work}}}&Abs&$^\dagger$BART(1k) &44.96 &17.25 &39.76 &45.06 &18.27 &40.84 \\
&Mix&$^\ddagger$BART(1k)+MCS &47.68 &19.77 &42.25 &46.49 &19.45 &42.04 \\
&Abs&$^\ddagger$LoBART(4k) &46.59 &18.72 &41.24 &47.47 &20.47 &43.02 \\
&Mix&$^\ddagger$LoBART(4k)+MCS &\textbf{48.79} &\textbf{20.55} &\textbf{43.31} &\textbf{48.06} &\textbf{20.96} &\textbf{43.56} \\
\bottomrule
\end{tabular}
}
\caption{Results on arXiv and PubMed. $^\dagger$denotes TRC applied, and $^\ddagger$denotes ORC$_\text{pad-rand}$ applied at training time.}
\label{tab:arxiv_pubmed_result}
\end{table*}
\subsection{ArXiv and PubMed results}
To verify the effectiveness of our systems, we re-train BART(1k) and LoBART(4k) on arXiv and PubMed datasets.
\label{section:arxiv_results}
Our training differs from Ext+TLM \cite{pilault-etal-2020-extractive}, where the abstractive models are trained on inputs extracted from the top two sentences by ROUGE recall for each target sentence, without padding, similar to ORC$_\text{no-pad}$. Although in the 1k setting ORC$_\text{no-pad}$ yields a \%AgORC$_\text{no-pad}$ (defined in Section \ref{section:training_bart_with_cs}) of only 2.8\% on arXiv (12\% on PubMed), in the 4k setting this rises to 39\% on arXiv (71\% on PubMed). Based on the best configurations on the podcast data, we train BART(1k) and LoBART(4k) using TRC or ORC$_\text{pad-rand}$ content selection, and we train the hierarchical model on arXiv/PubMed for MCS.
\vspace{6pt}
\noindent \textbf{ArXiv.} In Table \ref{tab:arxiv_pubmed_result}, both BART(1k)+MCS and LoBART(4k)+MCS outperform all existing systems. To better understand the advantages of our approach, the following systems are compared: CTRLsum versus our BART(1k) baseline; LED and BigBird versus our LoBART(4k) system.
CTRLsum extends BART by conditioning it on extracted keywords $\mathbf{v}$ using a BERT-based model, e.g. $p(\mathbf{y}|\mathbf{X},\mathbf{v})$. Their BERT-based model uses a sliding window, allowing it to extract $\mathbf{v}$ from long sequences, but their BART is still limited to the first 1,024 tokens. As a result, it performs better than BART(1k), but worse than BART(1k)+MCS.
LoBART(4k) has a similar architecture to LED(4k) without the global attention pattern for special tokens. Instead, our LoBART(4k) benefits from knowledge transferred from CNNDM and from the ORC$_\text{pad-rand}$ training-time content selection, which yields a larger gain when MCS is applied (i.e. a system trained with truncated data shows a smaller gain when MCS is applied). A transfer learning comparison and additional results on the impact of ORC$_\text{pad-rand}$ are provided in Appendix \ref{appendix:additional_results}.
Compared to BigBird, LoBART(4k) has a longer input span (3,072 vs. 4,096). However, BigBird benefits from the more recent summarization-specific pre-training of Pegasus \cite{zhang2020pegasus}, which is better than our transfer learning. BigBird incorporates a global attention pattern similar to LED, and it also has a random attention pattern. Hence, LoBART without MCS performs worse.
Ultimately, we show that adding MCS to either BART(1k) or LoBART(4k) yields a significant improvement, resulting in state-of-the-art results in both settings. Moreover, although the gain from adding MCS is comparable to the gain observed in extending LED(4k) to LED(16k), the content selection method adds less training cost.
\vspace{6pt}
\noindent \textbf{PubMed.} Similarly, LoBART(4k)+MCS achieves state-of-the-art results shown in Table \ref{tab:arxiv_pubmed_result}. In contrast to the arXiv results, BART(1k)+MCS does not outperform LoBART(4k) nor BigBird, and the gain from MCS is not as high in both 1k and 4k settings.
\subsection{Local Attention vs. MCS}
Local attention yields better performance on PubMed, while MCS yields better performance on arXiv. To understand this discrepancy, a fine-grained analysis is conducted.
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.85\linewidth}
\centering
\includegraphics[width=\linewidth,height=8cm,keepaspectratio]{fig/arxiv_ablation_R1_gain.pdf}
\caption{arXiv (Len:Avg=8,584, 90$^{\text{th}}$\%=16,108)}
\label{fig:ablation_len_arxiv}
\end{subfigure}%
\vspace{8pt}
\hfill
\begin{subfigure}[b]{0.85\linewidth}
\centering
\includegraphics[width=\linewidth,height=8cm,keepaspectratio]{fig/pubmed_ablation_R1_gain.pdf}
\caption{PubMed (Len:Avg=3,865, 90$^{\text{th}}$\%=7,234)}
\label{fig:ablation_len_pubmed}
\end{subfigure}%
\caption{ROUGE-1 score relative to that of BART(1k) system evaluated on different partitions by length.}
\label{fig:ablation_len}
\end{figure}
\noindent In Figure \ref{fig:ablation_len}, we partition the test sets by input length, and we evaluate the performance improvement in each partition with respect to the BART(1k) baseline.\footnote{For arXiv/PubMed, each test set consists of over 6,000 instances, while the Podcast test set has only 1,027 instances. The same analysis is conducted on Podcast, but the results are noisy due to the smaller size of its test set (see Appendix \ref{appendix:additional_results}).} The results illustrate that as the input length $N$ increases:
\begin{itemize}
\item The improvement of systems \textit{with} MCS increases and subsequently plateaus.
\item The improvement of systems \textit{without} MCS decreases once the input exceeds the length limit and then plateaus, indicating that fixed-span systems without content selection degrade once their maximum span is exceeded. For instance, below 4,000 input words, LoBART(4k) without MCS performs better than BART(1k)+MCS on both datasets.
\end{itemize}
Therefore, our MCS method is more effective on arXiv than on PubMed because the average PubMed document is less than half the length of the average arXiv document.
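For readers who wish to replicate this analysis, the following is a minimal sketch (not the authors' evaluation code), assuming per-instance input lengths and ROUGE-1 scores are available as aligned arrays:
\begin{verbatim}
# Average ROUGE-1 gain over the BART(1k) baseline per length bucket.
import numpy as np

def gain_by_length(lengths, baseline_r1, system_r1, edges):
    # lengths, baseline_r1, system_r1: aligned 1-D arrays over test instances
    # edges: bucket boundaries in input words, e.g. [0, 2000, 4000, 8000, 1e9]
    lengths = np.asarray(lengths)
    gains = np.asarray(system_r1) - np.asarray(baseline_r1)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (lengths >= lo) & (lengths < hi)
        means.append(gains[mask].mean() if mask.any() else float("nan"))
    return means
\end{verbatim}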
\section{Conclusion}
We study two methods for long-span summarization tasks. First, on local self-attention transformers, we present the design considerations for local self-attention BART, and we investigate the feasibility and performance of different network configurations. Second, on content selection, we distinguish between training-time and test-time methods, and we provide good practices for both phases. At training time, we show that the oracle method with random sentences padded (ORC$_\text{pad-rand}$) yields the best results. At test time, we propose multitask content selection (MCS), which shows an improvement over baselines. We demonstrate that content selection is essential, in particular for longer documents such as the articles in the arXiv dataset. Our BART(1k)+MCS outperforms the current best systems on the Podcast and arXiv datasets, and this system does not require a large-scale accelerator in training. Ultimately, by combining the local self-attention technique with MCS, our LoBART(4k)+MCS system has set new state-of-the-art results in terms of ROUGE scores in all three long-span summarization tasks. Future work will focus on training our LoBART+MCS system in an end-to-end fashion.
\section*{Acknowledgments}
This paper reports on research supported by ALTA institute, Cambridge Assessment English, University of Cambridge, and Cambridge International \& St John’s College Scholarship. Thanks to Yiting Lu, Qingyun Dou, Xixin Wu, Raf Czlonka, and Kate Knill for interesting discussions and computing resource support. Thanks to the anonymous reviewers for their helpful comments.
\bibliographystyle{acl_natbib}
\section{Introduction}
Reading someone else's handwriting is often a challenging task; some of the characters are unclear, the text is cursive, there is background clutter and the image quality can be low. When deciphering each character, we often rely on the surrounding area to compensate for the occasional obscurity of local areas in the text.
The automation of reading text images has been a thriving field of research in computer vision for decades. Recent deep learning methods have significantly improved recognition results \cite{Baek2019clova,zhang2019sequence,luo2020learn,Litman_2020_CVPR,aberdam2020sequence}.
Nevertheless, a close investigation reveals that state-of-the-art text recognizers are prone to overly rely on local image statistics\footnote{In this work, statistics are defined as the mean and standard deviation calculated from the corresponding distribution.}, ignoring cues from the surrounding areas.
Similarly, previous works suggested that image classifiers are susceptible to developing a bias towards global image statistics~\cite{nuriel2020permuted}. Adaptive Instance Normalization (AdaIN)~\cite{huang2017arbitrary} was utilized to reduce this bias. Specifically, Nuriel \etal~\cite{nuriel2020permuted} proposed to probabilistically apply AdaIN between the activations of random samples in a batch during training, thus randomly swapping channel-wise feature statistics accumulated over the entire image, as depicted in \cref{fig:teaser_fig}(a).
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/teaser_fig_v8.pdf}
\caption{{\bf Fine-grained TextAdaIN.} (a) Recent works suggested utilizing AdaIN, swapping instance-level feature statistics, to remedy classifiers' bias towards global information. (b) Our method, TextAdaIN, leverages a fine-grained variation of AdaIN to swap feature statistics between slices along the width axis. This moderates the observed reliance of text recognizers on local statistics. }
\label{fig:teaser_fig}
\vspace{-0.4cm}
\end{figure}
In classification tasks, global statistics can contain relevant label cues. For example, an image of a fish will normally have the global statistics of water. As a result, the model learns to depend on this information and is prone to develop a bias towards these statistics~\cite{nuriel2020permuted,geirhos2018}. However, this is not the case in text recognition, as the global statistics contain little information about the transcript of the text in the image. Thus, as we will show, text recognizers are already robust to global image statistics.
This work proposes a simple, yet powerful, technique for moderating the reliance on local statistics, which enhances the performance of text recognizers. The key idea of our approach is to probabilistically swap fine-grained feature statistics in a manner which is adjusted to the task at hand.
To that end, we propose TextAdaIN, a local variation of AdaIN~\cite{huang2017arbitrary}, depicted in \cref{fig:teaser_fig}(b). In contrast to AdaIN, which operates on the entire image, TextAdaIN operates on vertical slices of the sequential feature map. In addition, the normalization is performed over multiple dimensions and, as a consequence, the granularity level at which the statistics are modified is increased. Effectively, local distortions are injected into the feature space at a sub-word level, forcing the encoder to account for information in surrounding areas.
To gain insight into the effect of swapping local feature statistics in the domain of text images, we utilize a trained autoencoder, similarly to~\cite{nuriel2020permuted}. As illustrated in \cref{fig:reconst_vis}(c), TextAdaIN introduces two noticeable modifications into the reconstructed images. Firstly, local perturbations are injected from an induced distribution which represents the natural feature statistics of text images. Additionally, we notice that local regions containing textual information occasionally undergo masking. These domain-specific distortions act as a regularization method for the network. They prevent the network from overfitting to the local statistics of the training data and restrict its reliance on specific features to identify characters. To verify that our method indeed regulates this reliance, we numerically quantify its robustness towards local image corruptions, demonstrating its distinct advantage. This regularization is most critical in the case of handwriting, where the training data is typically very limited.
We validate our method experimentally, comparing its performance with state-of-the-art approaches on several handwritten text benchmarks.
TextAdaIN achieves state-of-the-art results, reducing the word error rate by $\textbf{1.4\%}$, $\textbf{2.0\%}$ and $\textbf{0.4\%}$ on IAM, RIMES and CVL, respectively. Furthermore, our method shows a consistent improvement across multiple architectures and in the domain of scene text recognition.
Not only does our model surpass other, more complicated methods (\cite{bhunia2019handwriting, fogel2020scrabblegan, zhang2019sequence, luo2020learn, aberdam2020sequence}), but it is simple to implement and can effortlessly be integrated into any mini-batch training procedure.
To summarize, the key contributions of our work are:
\begin{enumerate}[nolistsep]
\item We identify the problem of text recognizers' heavy dependence on local statistics and suggest to regulate it.
\item We introduce TextAdaIN, a simple yet effective normalization approach to remediate the reliance on local statistics in text recognizers.
\item Extensive experimental validation shows our method achieves state-of-the-art performance on several popular handwritten text benchmarks. In addition, it is applicable to the domain of scene text and can be used independent of the chosen architecture.
\end{enumerate}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\textwidth]{Figures/vis_autoencoder_v8.pdf}
\caption{\label{fig:reconst_vis}
\textbf{Autoencoder visualization.} We visualize the effect of applying different variations of AdaIN at intermediate layers of a trained autoencoder. The input, donor and reconstructed images are shown on the left. As seen in (a), applying vanilla AdaIN yields only global effects for text images. In (b), we see the effect of applying AdaIN on both the height and channel dimensions, increasing the granularity level. (c) TextAdaIN further increases the granularity level and allows for multiple donors by splitting the feature map into separate elements along the width axis. L.=layer; Recon=Reconstruction.
}
\end{figure*}
\section{Related Work and Background}
\paragraph{Normalization and style transfer.}
Normalizing feature tensors is an effective and powerful technique that has been explored and developed over the years for a variety of tasks. Ioffe \& Szegedy~\cite{ioffe2015batch} were the first to introduce Batch Normalization which inspired a series of normalization-based methods such as Layer Normalization \cite{ba2016layer}, Instance Normalization \cite{ulyanov2016instance}, and Group Normalization \cite{wu2018group}.
The style of an image was characterized by \cite{gatys2016image} to be the statistics of activation maps in intermediate layers of convolutional neural networks. Instance Normalization (IN) \cite{ulyanov2016instance} is a normalization layer which normalizes these statistics, removing the style representation from the activation map. Formally, given an instance $x\in \mathbb{R}^{C\times H \times W} $, where $C,H$ and $W$ are the channels, height, and width respectively, Instance Normalization is defined to be:
\begin{equation*}
\textrm{IN}(x)= \gamma\left(\frac{x-\mu(x)}{\sigma(x)}\right)+\beta\,,
\end{equation*}
where $\gamma, \beta \in \mathbb{R}^{C}$ are learned parameters and $\mu(x), \sigma(x) \in \mathbb{R}^{C}$ are the mean and standard deviation computed over the spatial dimensions $H,W$ of $x$.
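For concreteness, the following is a minimal PyTorch sketch of this operation (the learned affine parameters $\gamma,\beta$ are omitted, and sample-vs-population standard deviation differences are ignored); it is an illustration rather than a reference implementation:
\begin{verbatim}
import torch

def instance_norm(x, eps=1e-5):
    # x: (B, C, H, W); per-channel mean/std over the spatial dims H, W
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + eps
    return (x - mu) / sigma
\end{verbatim}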
Adaptive Instance Normalization (AdaIN), proposed in \cite{huang2017arbitrary}, changes the input image style by incorporating the statistics of an additional image into the feature space. Given two image representations, $x_a,x_b\in \mathbb{R}^{C \times H \times W}$, AdaIN shifts the statistics of the representation of $x_a$ to those of $x_b$. This is done in two steps. First, Instance Normalization is applied on $x_a$ to remove $x_a$'s style information. Then, the normalized activation map is scaled and shifted to match $x_b$'s statistics. This operation is thus perceived to transfer $x_b$'s style onto $x_a$.
\begin{equation*}
\label{eq:2}
\textrm{AdaIN}_c(x_a,x_b)= \sigma(x_b)\left(\frac{x_a-\mu(x_a)}{\sigma(x_a)}\right)+\mu(x_b)\,.
\end{equation*}
$\textrm{AdaIN}_c$ denotes the standard AdaIN operation in which $\sigma,\mu$ are calculated over the spatial dimensions, resulting in shifting corresponding channel statistics.
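A corresponding sketch of $\textrm{AdaIN}_c$, again illustrative rather than a reference implementation:
\begin{verbatim}
import torch

def adain_c(x_a, x_b, eps=1e-5):
    # Transfer x_b's channel statistics (over H, W) onto x_a
    mu_a = x_a.mean(dim=(2, 3), keepdim=True)
    sigma_a = x_a.std(dim=(2, 3), keepdim=True) + eps
    mu_b = x_b.mean(dim=(2, 3), keepdim=True)
    sigma_b = x_b.std(dim=(2, 3), keepdim=True) + eps
    return sigma_b * (x_a - mu_a) / sigma_a + mu_b
\end{verbatim}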
Leveraging the benefits of this operation, Geirhos \etal~\cite{geirhos2018} created Stylized-ImageNet, a stylized version of ImageNet. They demonstrated that classifiers trained on this dataset rely more on shape rather than texture.
More recently, Zhou \etal~\cite{zhou2021mixstyle} proposed to probabilistically mix instance-level feature statistics during training across different domains, thus increasing the generalizability of the model.
Furthermore, Nuriel~\etal~\cite{nuriel2020permuted} demonstrated that a similar approach can be utilized to reduce classifiers' reliance on global statistics, therefore increasing classification performance.
In this work, we reveal that text recognizers exhibit a similar bias, yet on a local scale. To remediate this problem, we also leverage AdaIN. However, instead of swapping statistics on a word (instance) level, we swap fine-grained statistics on a sub-word level, thus regulating the reliance on local feature statistics in text recognizers.
\paragraph{Text Recognition.}
Text recognition has attracted considerable attention over the past few years. In particular, deep learning approaches have achieved remarkable results \cite{shi2016end,Liu2016STARNetAS,Baek2019clova,luo2020learn,wan2020vocabulary,slossberg2020calibration,aberdam2020sequence}.
Still, current state-of-the-art methods struggle to train robust models when the amount of data is insufficient to capture the magnitude of handwriting styles.
Various methods have been suggested to cope with this problem. Bhunia \etal~\cite{bhunia2019handwriting} proposed an adversarial feature deformation module that learns ways to elastically warp extracted features, boosting the network's capability to learn highly informative features. Luo \etal~\cite{luo2020learn} introduced an agent network that learns from the output of the recognizer. The agent controls a set of fiducial points on the image and uses a spatial transformer to generate “harder” training samples.
In a different line of work, utilizing unlabeled data was proposed to improve recognition results.
Zhang \etal~\cite{zhang2019sequence} and Kang \etal~\cite{kang2020unsupervised} have introduced adversarial domain adaptation approaches between a labeled source domain and an unlabeled target domain.
Aberdam~\etal~\cite{aberdam2020sequence} suggested a framework for sequence-to-sequence contrastive learning named SeqCLR, which yields effective visual representations for text recognition by pre-training the network on unlabeled data.
Unlike previous methods, our method does not require additional data, new models or complex training paradigms.
Instead, we suggest a normalization-based method adjusted to text images and to sequence-to-sequence approaches.
TextAdaIN is extremely easy to implement and can fit into any encoder as part of a mini-batch training process.
Throughout this work, unless mentioned otherwise, we integrate our proposed method with a state-of-the-art recognizer named SCATTER~\cite{Litman_2020_CVPR}.
The SCATTER architecture consists of four main components:
\begin{enumerate}[nolistsep]
\item Transformation: A rectification module which aligns the input text
image using a Thin Plate Spline (TPS) transformation \cite{Liu2016STARNetAS,shi2016robust}.
\item Feature extraction: A convolutional neural network
(CNN) that extracts features from the rectified image. Similar to~\cite{cheng2017focusing,Baek2019clova,Litman_2020_CVPR,aberdam2020sequence}, a 29-layer ResNet is employed as the backbone. Subsequently, the features are mapped to a sequence of frames, denoted by $V = [v_1, v_2, ..., v_T]$, each corresponding to different receptive fields in the image.
\item Visual feature refinement: An intermediate supervision in the form of a CTC decoder \cite{Graves2006ctc} is employed to provide direct supervision for each frame in the visual features $V$.
\item Selective-Contextual Refinement Block: Contextual features are extracted using a two-layer bi-directional LSTM encoder. These features are concatenated to the visual features, $V$. The features are then fed into a selective decoder and into a subsequent block if it exists.
These blocks can be stacked together to improve results. In this work, for convenience, we set the number of blocks to two.
\end{enumerate}
For further details about the baseline architecture, we refer the reader to~\cite{Litman_2020_CVPR}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/fig_text_recog_v3.pdf}
\caption{{\bf TextAdaIN block.} (a) A standard residual block and (b) a TextAdaIN block. TextAdaIN is probabilistically employed after every Conv layer during training.}
\label{fig:arch_fig}
\vspace{-0.3cm}
\end{figure}
\section{Method}
Our method is inspired by two key observations in the context of text recognition. First, text recognition approaches define the recognition task as a sequence character classification problem. The characters appear across a series of frames along the width axis of the image.
This leads us to the second observation: the model's predictions are mostly based on local information which lies in consecutive frames~\cite{Bai2015crnn,qiao2020seed}. Therefore, text recognizers are at risk of developing a bias towards local statistics and thus may not sufficiently take the surrounding information into account.
Leveraging the recent development proposed by \cite{nuriel2020permuted, zhou2021mixstyle},
we suggest two modifications that correspond to the aforementioned observations:
\begin{enumerate}[nolistsep]
\item Viewing the feature map as a sequence of individual elements and swapping the feature statistics between elements instead of entire images.
\item Modifying AdaIN to operate on two dimensions - the height and channels of the feature map.
\end{enumerate}
These modifications increase the granularity level in which statistics are calculated and modified, thus regulating the reliance on local statistics. Both are crucial for our method's success, as shown in the ablation study.
Similarly to \cite{nuriel2020permuted}, in \cref{fig:reconst_vis}, we illustrate the effect of our approach by applying it to an autoencoder trained on natural images. We compare applying vanilla AdaIN with applying TextAdaIN, exemplifying the distinction between the two. In (a), we apply vanilla AdaIN. When applied to natural images, the change is apparent; when applied to text images, however, the change is subtle. Correspondingly, in the experimental section we provide further evidence that text recognizers are already invariant to global statistics.
In (b), a local variation of AdaIN is applied, in which statistics from each channel and height are swapped. In this setting, the visual impact is more noticeable. In the last row (c), we apply TextAdaIN, which takes this a step further and splits the sequential feature map into separate elements, each formed from a few consecutive frames. This increases the granularity at which the feature statistics are swapped and allows the use of multiple donor images.
\input{tables/hw_sota}
Formally, given an image representation $x\in \mathbb{R}^{C\times H\times W}$, we define $\mu_{c,h}$, $\sigma_{c,h}$ to be the following:
\begin{equation*}
\mu_{c,h}(x) = \frac{1}{W}\sum_{w=1}^{W} x_{c,h,w} \,,
\end{equation*}
\begin{equation*}
\sigma_{c,h}(x) = \sqrt{\frac{1}{W}\sum_{w=1}^{W}(x_{c,h,w} - \mu_{c,h}(x))^{2} + \epsilon}\,.
\end{equation*}
A local variation of $\textrm{AdaIN}_c$ is defined as:
\begin{equation*}
\textrm{AdaIN}_{c,h}(x_a,x_b)= \sigma_{c,h}(x_b)\left(\frac{x_a-\mu_{c,h}(x_a)}{\sigma_{c,h}(x_a)}\right)+\mu_{c,h}(x_b) \,.
\end{equation*}
This variant of AdaIN swaps statistics for every corresponding channel and height, thus impacting the feature map's statistics at a higher level of granularity. We note that gradients are backpropagated only through $\mu_{c,h}(x_a)$ and $\sigma_{c,h}(x_a)$; the donor statistics $\mu_{c,h}(x_b)$ and $\sigma_{c,h}(x_b)$ are detached.
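A sketch of $\textrm{AdaIN}_{c,h}$ in the same style, where the donor statistics are detached so that gradients flow only through $x_a$'s statistics:
\begin{verbatim}
import torch

def adain_ch(x_a, x_b, eps=1e-5):
    # Statistics per (channel, height), i.e. over the width dim only
    mu_a = x_a.mean(dim=3, keepdim=True)
    sigma_a = x_a.std(dim=3, keepdim=True) + eps
    mu_b = x_b.mean(dim=3, keepdim=True).detach()    # donor stats detached
    sigma_b = x_b.std(dim=3, keepdim=True).detach()
    return sigma_b * (x_a - mu_a) / sigma_a + mu_b
\end{verbatim}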
Given the definitions above, we formulate the TextAdaIN operation used during training. Let $X=\{x_i\}_{i=1}^B$ denote a mini-batch of $B$ feature maps. We divide each sample $x_i$ into $K$ windows along its width. The result is a batch of elements pertaining to $B\cdot K$ windows. This operation can be defined as a mapping:
\begin{equation*}
X\in \mathbb{R}^{B\times C \times H \times W} \rightarrow \widehat{X}\in \mathbb{R}^{B\cdot K \times C \times H \times \frac{W}{K}} \,.
\end{equation*}
Then, we employ a similar procedure to the one used in pAdaIN \cite{nuriel2020permuted}. We randomly draw a permutation of the modified batch $\widehat{X}$ and apply $\textrm{AdaIN}_{c,h}$ between the modified batch and the permuted one. Namely, let
\begin{equation*}
\pi(\widehat{X}) = [\hat{x}_{\pi(1)},\hat{x}_{\pi(2)},\ldots,\hat{x}_{\pi(B\cdot K)}] \,
\end{equation*} denote applying a permutation $\pi: [B\cdot K] \rightarrow [B\cdot K]$ on $\widehat{X}$. Then the output of TextAdaIN on the $i^{th}$ window of $\widehat{X}$ is defined by:
\begin{equation*}
\textrm{TextAdaIN }(\hat{x}_i) = \textrm{AdaIN}_{c,h}(\hat{x}_i,\hat{x}_{\pi(i)}) \,.
\end{equation*}
Subsequently, the batch of windows is rearranged back to its original form using the inverse mapping operation.
TextAdaIN is applied batch-wise with probability $p$ after every convolutional layer in the encoder, as illustrated in \cref{fig:arch_fig}(b). The permutation $\pi$ is sampled uniformly, and $p$ is a hyperparameter fixed ahead of training. TextAdaIN is only applied during training and not during inference.
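Putting the pieces together, the following is an illustrative sketch of the full TextAdaIN operation, reusing \texttt{adain\_ch} from the previous snippet; widths not divisible by $K$ are left untouched here (see the implementation details), and the reference pseudo-code is given in \cref{app:code}:
\begin{verbatim}
import torch

def text_adain(x, k=5, p=0.01):
    # x: (B, C, H, W); applied batch-wise with probability p during training
    if torch.rand(1).item() >= p:
        return x
    b, c, h, w = x.shape
    wk = w // k
    # (B, C, H, W) -> (B*K, C, H, W/K): each window is a separate element
    windows = x[..., :wk * k].reshape(b, c, h, k, wk)
    windows = windows.permute(0, 3, 1, 2, 4).reshape(b * k, c, h, wk)
    perm = torch.randperm(b * k)          # uniformly sampled permutation pi
    out = adain_ch(windows, windows[perm])
    # invert the mapping back to (B, C, H, W)
    out = out.reshape(b, k, c, h, wk).permute(0, 2, 3, 1, 4)
    result = x.clone()
    result[..., :wk * k] = out.reshape(b, c, h, k * wk)
    return result
\end{verbatim}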
\section{Experiments}
In this section, we begin by comparing our method's performance against state-of-the-art methods on several public handwritten datasets. Then, we demonstrate that TextAdaIN can be integrated into additional recognition architectures and applied to natural scene text images.
\noindent
\textbf{Datasets}~ We conduct experiments on several public handwritten and scene text datasets.
For handwriting, we consider the English datasets IAM~\cite{marti2002iam} and CVL~\cite{kleber2013cvl}, and the French dataset RIMES~\cite{grosicki2009icdar}. For scene text, we train on the synthetic datasets SynthText~\cite{Zisserman206st} and MJSynth~\cite{Zisserman2014mj} and test on four real-world regular text datasets: IIIT5K~\cite{Mishra2012sj}, SVT~\cite{Wang2011bottom}, IC03~\cite{Lucas2003ic03} and IC13~\cite{Karatzas2013ic13}, and three real-world irregular text datasets: ICDAR2015~\cite{Karatzas2015ic15}, SVTP~\cite{Phan2013svtp} and CUTE 80~\cite{Risnumawan2014cute}.
We present samples from each dataset and include more details in \cref{app:dtatsets}.
\noindent
\textbf{Metrics}~ To evaluate recognition performance, word-level accuracy is measured. For the handwritten text recognition state-of-the-art comparison, Word Error Rate (WER) and Character Error Rate (CER) are adopted, following the convention used in \cite{sueiras2018offline, zhang2019sequence, aberdam2020sequence}.
\noindent
\textbf{Implementation Details}~ Unless mentioned otherwise, in all of our experiments TextAdaIN is fused into the backbone of SCATTER~\cite{Litman_2020_CVPR}. The experimental settings, including the optimizer, learning rate, image size, and training datasets, are identical to SCATTER~\cite{Litman_2020_CVPR}. Full implementation details are described in \cref{app:implementation_det}.
\subsection{Comparison to State-of-the-Art}
In \cref{tab:htr_sota_results}, we measure the accuracy of our proposed method on public handwritten text benchmarks.
Our method achieves state-of-the-art results across all datasets. Compared to current state-of-the-art methods, incorporating TextAdaIN achieves a performance increase of \textbf{+1.4} pp (85.9\% vs. 87.3\%) on IAM, \textbf{+2.0} pp (92.4\% vs. 94.4\%) on RIMES and \textbf{+0.4} pp (77.8\% vs. 78.2\%) on CVL. We wish to emphasize that previous methods, such as \cite{bhunia2019handwriting,kang2020unsupervised,luo2020learn,aberdam2020sequence}, introduced complex modifications to the training phase, including adversarial learning and contrastive pre-training. In contrast, TextAdaIN can be easily implemented in a few lines of code and seamlessly fit into any mini-batch training procedure. In addition, the effect of applying MixStyle~\cite{zhou2021mixstyle} and pAdaIN~\cite{nuriel2020permuted} is displayed in \cref{tab:htr_sota_results}. Both have little to no effect on the results, indicating that handwritten text recognizers are already invariant to changes in global statistics.
In \cref{fig:fail_case}, we display failure cases of our method on the IAM dataset. The failure cases mostly involve highly cursive text, unclear handwriting styles and character occlusions. Adding context by employing a line-level approach could assist in rectifying these types of errors; nevertheless, line-level recognition has its own caveats.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{Figures/error_ana_v2.pdf}
\caption{{\bf Failure cases.} Samples of failure cases on the IAM dataset. GT stands for the ground truth annotation, and Pred is the predicted result. Prediction errors are marked in red, and missing characters are annotated by strike-through.}
\label{fig:fail_case}
\vspace{-0.3cm}
\end{figure}
\subsection{Generalization of Proposed Method}
In this subsection, we explore our method's transferability to both the domain of scene text and to different recognition architectures. In addition to the SCATTER architecture, we utilize the Baek \etal~\cite{Baek2019clova} framework, which can describe the building blocks of many text recognizers, including~\cite{Bai2015crnn,Zisserman2015large,shi2016end,shi2016robust, liu2016star, Hu2017grccn,bai2017accurate,cheng2017focusing, zhang2019sequence, Litman_2020_CVPR, fogel2020scrabblegan, yousef2020origaminet}.
We choose to present TextAdaIN's performance when integrated into the Baek \etal~\cite{Baek2019clova} framework while employing either a CTC~\cite{graves2006connectionist} or an attention decoder~\cite{cheng2017focusing, shi2016robust}. As in \cite{Litman_2020_CVPR}, weighted (by size) average word accuracy is adopted where regular and irregular text datasets are distinguished.
As observed in \cref{tab:str_sota_results_avg}, TextAdaIN shows consistent improvement on both regular and irregular text benchmarks. The results are competitive with recent state-of-the-art methods. Furthermore, TextAdaIN is flexible and can be easily integrated into any general text recognizer.
Contrary to handwritten text recognition models, scene text models are trained on synthetic data~\cite{Baek2019clova,Litman_2020_CVPR}. This synthetic data is over a hundred times the size of the handwriting training data and thus can cover a wide variety of text appearances. Therefore, scene text models are less prone to developing a reliance on local feature statistics. Despite this, TextAdaIN still shows improvement.
\input{tables/str_sota_avg}
\section{Ablation Study}
In this section, we conduct a series of experiments to further understand the performance improvements and analyze the impact of our key contributions. For this purpose, we adopt the IAM dataset, and the baseline model is the SCATTER architecture~\cite{Litman_2020_CVPR}. Similar to our previous experiments, implementation details are described in \cref{app:implementation_det}.
We begin by exploring the different variants of AdaIN, demonstrating that our method significantly outperforms other variants. To analyze the performance as a function of the granularity level, we then measure the accuracy while varying the number of windows. Subsequently, we demonstrate the reliance of text recognizers on local feature statistics by increasing the hyperparameter $p$. Lastly, we perform an analysis of our method's robustness demonstrating its effectiveness on fine versus coarse corruptions.
\input{tables/SpAdaIN_breakdown}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{Figures/feature_viz_v6.pdf}
\caption{{\bf Feature visualization.} An intensity map of the features after the first convolution layer of the encoder is visualized. TextAdaIN has two noticeable effects: (1) injecting background distortions drawn from an induced distribution (2) introducing masking on a local scale. Both regulate the reliance on local feature statistics in text recognizers as further investigated in \cref{sec:corruption}.}
\label{fig:adain_config}
\vspace{-0.3cm}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{Figures/app_fig_window_short.png}
\caption{{\bf Number of windows.} Varying the number of windows extracted per sample has multiple effects, including the granularity level and the number of donor images.}
\label{fig:window_size}
\vspace{-0.3cm}
\end{figure}
\subsection{AdaIN Variations}
Each dimension of the feature space represents different information about the input. Hence, modifying AdaIN to operate across different dimensions influences the model accordingly. Our method is employed over both the channel and height dimensions between different elements of samples in a batch. Each element consists of a pre-defined number of consecutive frames in the sequential feature map. As seen in \cref{tab:spAdaIN_configuration}, TextAdaIN significantly improves recognition performance compared to all other AdaIN variations.
To better understand the information encompassed in each of the dimensions, we visualize the feature maps of a trained baseline model in \cref{fig:adain_config}. For this purpose, we apply PCA on the spatial dimensions of the first convolutional layer's output, thus obtaining an $H\times W \times 1$ intensity map. An image depicting the spatial intensities is displayed after normalization.
As depicted in \cref{fig:adain_config}, applying pAdaIN has almost no effect on the learned features. This is in alignment with the quantitative results indicating that text recognizers are relatively invariant to changes in global statistics. As for $\textrm{AdaIN}_{c,w}$, modifying individual vertical frames introduces subtle changes to the feature map. The network can easily compensate for the distortions leading to minimal impact on the training process.
Interestingly, applying $\textrm{AdaIN}_{h,w}$ injects text from the donor image into the feature map. This phenomenon originates from shifting each corresponding spatial location in the representation space. Clearly, without the modification of the labels, this effect will not improve the performance.
TextAdaIN has two major effects as visualized in both \cref{fig:reconst_vis} and \cref{fig:adain_config}.
The first is injecting local perturbations drawn from an induced distribution into the feature space. The distribution is induced from the manner in which TextAdaIN operates, providing a correct balance between the coarse to fine distortion level. Namely, TextAdaIN's impact is more local than pAdaIN's, yet more global in the sequence dimension than $\textrm{AdaIN}_{c,w}$. Therefore, the impact aligns with both the nature of the data and sequence-to-sequence approaches.
Occasionally, statistics of smooth areas (without text) are injected into regions of the feature space which represent text. This generates the second effect of local masking, in which part of the textual features undergo masking. We hypothesize that this forces the model to rely on semi-semantic information, which compensates for the missing visual cues. This was partially observed by Aberdam \etal~\cite{aberdam2020sequence} while applying horizontal cropping and in the context of speech recognition by Baevski \etal~\cite{baevski2020wav2vec}. As this analysis was performed on the feature space, we also show the influence of TextAdaIN on the input space in \cref{sec:corruption}.
\input{tables/corruption}
\input{tables/padain_vs_spadain}
\subsection{Number of Windows}
TextAdaIN splits the feature map into windows along the width axis. Each window is perceived as an individual element in the AdaIN operation. As the features vary in size at different layers, we define $K$ to represent the number of elements created per sample; thus, $K$ determines the window size at each layer. Modifying $K$ has several effects: for example, it controls the granularity level at which the statistics are calculated and modified, as well as the number of donors.
Therefore, an optimal value of $K$ can be found to balance the different effects. In \cref{fig:window_size}, we plot the performance as a function of $K$. The best result is achieved when using $K=5$. We note that the average length of English words is 4.7 characters. Thus, when $K=5$, the statistics are normalized per character on average.
\subsection{Reliance on Local Statistics}
\label{sec:reliance}
In this subsection, we wish to establish text recognizers' reliance on local statistics. Nuriel \etal~\cite{nuriel2020permuted} observed that applying pAdaIN at high values of $p$ resulted in a significant degradation of classification performance. We can thus infer that global statistics contain label-relevant information for classifiers.
For text recognizers, as shown in \cref{tab:spadain_vs_padain}, this is not the case.
Increasing the value of $p$, when applying pAdaIN, only slightly affects the results. This indicates that global statistics are less significant in the domain of text images.
In contrast, applying TextAdaIN with a high value of $p$ decreases performance substantially. This implies that profusely applying TextAdaIN distorts important information that the model relies on. Therefore, text recognizers are prone to develop a bias towards local statistics. If applied correctly, with the right $p$, TextAdaIN can alleviate this bias.
\subsection{Robustness Towards Local Corruption}
\label{sec:corruption}
To demonstrate that TextAdaIN indeed regulates the reliance on local statistics, in \cref{tab:corruption}, we evaluate its performance on several types of corruptions, comparing it to the baseline model. The corruptions are divided into three categories: (1) local masking, (2) kernel-based methods and (3) geometric transformations. The columns are roughly sorted by the locality scale of the corruptions. In the kernel-based columns, they are ordered by kernel size.
For each corruption, we display the gap between the performance of the baseline versus the TextAdaIN model. To accentuate the improvement provided by our method, we also display the normalized gap - the ratio between the gap on the corrupted data and the gap on the original data. This normalization removes the performance advantage of TextAdaIN on the original data.
We note that there is a distinct correlation between locality and the normalized gap. This provides further evidence that TextAdaIN regulates the reliance on local statistics in text recognizers. We refer the reader to \cref{app:corruption} for more details on and visualizations of the corrupted datasets.
\section{Conclusion}
Text recognizers leverage convolutional layers to extract rich visual features, and hence are extremely powerful. However, in many cases, they exhibit sensitivity towards subtle modifications that preserve image content. This is in contrast to classifiers that exhibit a bias on a global scale~\cite{nuriel2020permuted}. To relieve text recognizers' oversensitivity, we introduce TextAdaIN, a normalization-based method which distorts the feature space in a local manner and effectively regulates the reliance on local statistics.
Our method achieves state-of-the-art results on handwritten text recognition benchmarks. TextAdaIN is also applicable to various recognition architectures and to the domain of scene text images. In contrast to previous methods, our method does not require complex adversarial training, contrastive pre-training or the incorporation of additional data. It can be implemented simply in a few lines of code and effortlessly integrated into a mini-batch training procedure.
{\small
\bibliographystyle{ieee_fullname}
\section{Datasets}
\label{app:dtatsets}
In this work, we consider the following public datasets for handwriting and scene text. Samples from the different datasets are depicted in \cref{fig:dataset_samples}.
\subsection{Handwritten text}
For handwriting recognition, we consider three datasets:
\begin{itemize}[noitemsep]
\item \textbf{IAM} \cite{marti2002iam} handwritten English text dataset, written by 657 different writers. This dataset contains 101,400 correctly segmented words, partitioned into writer independent training, validation and test.
\item \textbf{CVL} \cite{kleber2013cvl} handwritten English text dataset, written by 310 different writers. 27 of the writers wrote 7 texts, and the other 283 writers wrote 5 texts. This dataset contains 84,990 correctly segmented words, partitioned into writer independent training and test.
\item \textbf{RIMES} \cite{grosicki2009icdar} handwritten French text dataset, written by 1300 different writers. This dataset contains 66,480 correctly segmented words, partitioned into training, validation and test sets that are independent of writers.
\end{itemize}
\subsection{Scene text}
For training the scene text models we utilized only synthetic datasets:
\begin{itemize}[noitemsep]
\item {\textbf{MJSynth}} (MJ) \cite{Zisserman2014mj} a synthetic text-in-image dataset containing 9 million word-box images, generated from a lexicon of 90K English words.
\item \textbf{SynthText} (ST) \cite{Zisserman206st} a synthetic text-in-image dataset containing 5.5 million word-box images, designed for scene text detection and recognition.
\item \textbf{SynthAdd} (SA) \cite{Wang2019sar} used only when training SCATTER for scene text, as in the original paper~\cite{Litman_2020_CVPR}. This dataset was generated using the same synthetic engine as ST, and contains 1.2 million word-box images. SA compensates for the lack of non-alphanumeric characters in the training data.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{supp_files/dataset_samples.pdf}
\caption{\label{fig:dataset_samples} \textbf{Dataset samples.}
Examples of images from each dataset.
}
\vspace{-0.5cm}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{supp_files/corruption_v1.pdf}
\caption{\label{fig:corruptions}
{\bf Visualization of selected corruptions}. To assess TextAdaIN's regularization effect, we compare its performance to the baseline while applying different types of corruptions. We visualize the different corruptions, sorted by their locality impact within each category.
}
\end{figure*}
In line with many scene text recognition papers (e.g. \cite{Bai2018aster, Baek2019clova, Wang2019sar, Litman_2020_CVPR}), we evaluate our models using seven scene text datasets: ICDAR2003, ICDAR2013, IIIT5K, SVT, ICDAR2015, SVTP and CUTE. These datasets are commonly divided into regular and irregular text according to the text layout.
\newline
\textbf{Regular text} datasets are composed of:
\begin{itemize}[noitemsep]
\item \textbf{IIIT5K}~\cite{Mishra2012sj} consists of 2000 training and 3000 testing images that are cropped from Google image searches.
\item \textbf{SVT}~\cite{Wang2011bottom} is a dataset collected from Google Street View images and contains 257 training and 647 testing cropped word-box images.
\item \textbf{ICDAR2003}~\cite{Lucas2003ic03} contains 867 cropped word-box images for testing.
\item \textbf{ICDAR2013}~\cite{Karatzas2013ic13} contains 848 training and 1,015 testing cropped word-box images.
\end{itemize}
\textbf{Irregular text} datasets are composed of:
\begin{itemize}[noitemsep]
\item \textbf{ICDAR2015}~\cite{Karatzas2015ic15} contains 4,468 training and 2,077 testing cropped word-box images, all captured by Google Glass, without careful positioning or focusing.
\item \textbf{SVTP}~\cite{Phan2013svtp} is a dataset collected from Google Street View images and consists of 645 cropped word-box images for testing.
\item \textbf{CUTE 80}~\cite{Risnumawan2014cute} contains 288 cropped word-box images for testing, many of which are curved text images.
\end{itemize}
\section{Implementation Details}
\label{app:implementation_det}
In our experiments, we utilize three types of architectures. The first is SCATTER, and the other two are variants of the Baek~\etal~\cite{Baek2019clova} framework. All models are trained and tested using the PyTorch
framework on a Tesla V100 GPU with 16GB memory.
We follow the training procedure of \cite{Litman_2020_CVPR}. Accordingly, models are trained using the AdaDelta optimizer with a decay rate of 0.95, gradient clipping with a magnitude of 5, and a batch size of 128. During training, 40\% of the input images are augmented by randomly resizing them and adding extra distortion. Models trained on handwriting and scene text datasets are trained for 200k and 600k iterations, respectively. Model selection is performed on the validation set, which for scene text is the union of the IC13, IC15, IIIT5K, and SVT training splits, and for handwriting is the predefined validation sets. When utilizing SCATTER, all images are resized to $32\times 128$ and are in RGB format. For evaluation, word accuracy is measured case-insensitively. We refer the reader to the original papers \cite{Baek2019clova, Litman_2020_CVPR} for any additional implementation details, during both training and inference.
For TextAdaIN, we use $p=0.01$ and split images into $K=5$ windows. In cases where the number of windows does not evenly divide the width, we use the maximum width that is divisible and ignore the remainder.
\section{TextAdaIN Pseudo-Code}
\label{app:code}
In this section, we provide pseudo-code for TextAdaIN. The code includes two functions not explicitly implemented: \textit{create\_windows\_from\_tensor}, \textit{revert\_windowed\_tensor}.
The first function represents the mapping:
\begin{equation*}
X\in \mathbb{R}^{B\times C \times H \times W} \rightarrow \widehat{X}\in \mathbb{R}^{B\cdot K \times C \times H \times \frac{W}{K}} \,,
\end{equation*} and the second represents the corresponding inverse mapping. As mentioned in the paper, we don't backpropogate through $\mu_{c,h}(\hat{x}_{\pi(i)}), \sigma_{c,h}(\hat{x}_{\pi(i)})$ and thus detach is used.
\input{supp_files/psudo_code}
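For readers without access to the supplementary file, a minimal sketch of the two helper mappings is given below, assuming the width $W$ is divisible by $K$; the implementations are illustrative, not the original code:
\begin{verbatim}
import torch

def create_windows_from_tensor(x, k):
    # (B, C, H, W) -> (B*K, C, H, W/K)
    b, c, h, w = x.shape
    x = x.reshape(b, c, h, k, w // k).permute(0, 3, 1, 2, 4)
    return x.reshape(b * k, c, h, w // k)

def revert_windowed_tensor(x_hat, k):
    # (B*K, C, H, W/K) -> (B, C, H, W); inverse of the mapping above
    bk, c, h, wk = x_hat.shape
    x = x_hat.reshape(bk // k, k, c, h, wk).permute(0, 2, 3, 1, 4)
    return x.reshape(bk // k, c, h, k * wk)
\end{verbatim}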
\section{Image Corruptions}
\label{app:corruption}
In \cref{sec:corruption}, we compare the performance of the baseline model and the TextAdaIN version on corrupted versions of the IAM test set. \cref{fig:corruptions} shows original images from the test set in the first column, and the results of applying each corruption in the following columns. The corruptions are applied using the \textit{imgaug}~\cite{imgaug} package, as explicitly written below.
\input{supp_files/corruptions}
\section{INTRODUCTION}
Operating in disruptive and failure-prone environments is a fundamental requirement for achieving real-world autonomy in robotics systems~\cite{thrun2002probabilistic}. These disruptions can either be adversarial---such as in defense and security applications~\cite{sless2014multi,renganathan2017spoof,sung2019multi}, or non-adversarial---due to the complex and noisy nature of the real world~\cite{khalastchi2019fault}. In such settings, the deployment of multi-robot teams to accomplish tasks adds a layer of flexibility within the system, in the form of increased reliability via redundancy and reconfigurability~\cite{cortes2017coordinated}. \par
Consequently, a significant amount of research effort has been dedicated towards the design of multi-robot coordination algorithms which account for the presence of disturbances, e.g.~\cite{ulusoy2012robust,dias2004robust,luo2019minimum}. Broadly speaking, these approaches consist of two possible elements: a \emph{predictive} risk-aware piece which generates solutions based on a model of the disturbance (typically interpreted as \textit{robustness})~\cite{zhou1998essentials}, or a \emph{reactive} piece which adapts the underlying control algorithm to continue task execution regardless of the disruptions affecting the system (typically interpreted as \textit{resilience})~\cite{chamon2020resilient}. The importance of both these aspects depends on the mission objectives and can change during operations---while the latter prioritizes task performance, the former generates potentially conservative solutions to prevent future disruptions.\par
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figures/illustration1.png}
\caption{A depiction of the multi-robot target tracking scenario considered in this paper. A team of robots equipped with heterogeneous noisy sensors tracks a set of mobile targets. The targets are moving and can induce failures in the sensors of the robots based on the proximity between them. Therefore, the robots must position themselves close to the targets to generate accurate estimates, but must simultaneously account for the risk of failures---which impact the quality of future measurements.}
\label{fig: illustration}
\end{figure}
These considerations are highly relevant in the context of \emph{heterogeneous} multi-robot teams, where the different capabilities of the robots can be leveraged to magnify the efficacy of the robot team~\cite{rizk2019cooperative}. For example, if only a single robot in the multi-robot team carries an essential sensor required for the success of the task, this robot's actions should be more risk-averse compared to other robots which carry non-essential sensors. A pertinent question then is: \emph{How should one systematically balance the objectives of risk-aversion and performance-focused adaptiveness in heterogeneous multi-robot teams?} \par
In the context of a target tracking application, this paper presents a framework which synthesizes adaptive and risk-aware control inputs for a team of robots equipped with heterogeneous sensing capabilities. We consider a team of robots tracking mobile adversarial targets in a failure-prone environment. The robots are equipped with different types of noisy sensors, and are tasked with estimating the targets' states. Furthermore, the targets are moving and can induce failures in the sensors of the robots based on the proximity between them (see Fig.~\ref{fig: illustration}). In such a scenario, the robots must not only position themselves in order to achieve a trade-off between risk and tracking quality, they must also suitably adapt to failures that have rendered certain sensors dysfunctional. \par
We endow the robot team with a Kalman filter~\cite{thrun2002probabilistic}
which generates optimal estimates of the target locations along with a quantification of the uncertainty corresponding to each target's estimate. To embed risk-awareness in our framework, we introduce the notion of a \emph{safety-aware observability Gramian} (SOG), which weighs the quality of future observations made by the multi-robot team with the risk of failures associated with making those observations. By encoding the heterogeneous sensors available to the robots within the SOG, we enable our framework to position robots based on the contribution of their sensors to the overall tracking performance. \par
Within our framework, the objectives of risk-aversion and tracking performance maximization are automatically traded off by considering the \emph{sensing margin}---defined as the excess number of operational sensors when compared to the minimum required for collective observability of the targets tracking process. This enables our algorithm to prioritize performance when the sensing margin is large and switch to risk-averse behaviors as the sensing margin of the team decreases. As we demonstrate, when compared to a framework with no risk-awareness, our algorithm preserves the sensors of the robots for longer, resulting in an extended operational time horizon. Moreover, our approach presents the mission designer with an explicit mechanism to tune this trade-off between performance and risk-aversion. \par
The outline of the paper is as follows. Section~\ref{sec:lit_rev} places our work in the context of existing literature. Section~\ref{sec:ps} introduces the formulation for the target tracking scenario along with some mathematical notations. Section~\ref{sec:rrtt} presents the centralized optimization program whose solution generates adaptive and risk-aware configurations for the robot team. Section~\ref{sec:exp} presents the results of simulated experiments, and Section~\ref{sec:conclusion} concludes the paper.
\section{Literature Review}\label{sec:lit_rev}
Robust control and planning methods typically account for the risk of future failures and adversarial attacks occurring in the system when generating solutions~\cite{zhou1998essentials,yu2013tube}. Techniques such as $\mathcal{H}_{\infty}$ control~\cite{zhou1998essentials} and robust tube MPC~\cite{yu2013tube} have been specifically designed for ensuring operability under ``worst-case" disturbances or failures. Recently, similar traits have been investigated to develop robust coordination and planning algorithms in multi-robot systems. For instance, Zhou et al.~\cite{zhou2018resilient,zhou2020distributed} devised robust target tracking algorithms for robot teams to counter worst-case denial-of-service (DoS) failures or attacks that can make robots fail or compromise their tracking sensors. Their algorithms guarantee a provably close-to-optimal team performance even when some robots or their sensors malfunction. Along this line, robust information collection algorithms~\cite{schlotfeldt2018resilient} were developed to account for worst-case robot failures or attacks when a team of robots jointly plans paths to gather data in the environment. In contrast to these approaches, which might generate conservative solutions,~\cite{zhou2018approximation} adopts a probabilistic approach and presents a risk-aware coordination algorithm which addresses the issue of random sensor failures in the problem of sensor placement for environmental monitoring.\par
As disruptions are inevitable during the long duration operations of robot teams, mechanisms to maintain performance despite failures have been developed~\cite{ramachandran2020resilience,saulnier2017resilient,ramachandran2019resilience,song2020care}. To this end, Saulnier et al.~\cite{saulnier2017resilient} presented a resilient formation control algorithm that achieves the formation despite deceptive signals from non-cooperative team members. Similarly, Ramachandran et al.~\cite{ramachandran2019resilience} designed an algorithm to reconfigure the communication network of the robot team in order to compensate for resource failures. This algorithm was then utilized to adapt to failures of sensing resources in a multi-target tracking scenario~\cite{ramachandran2020resilience}. A resilient event-driven approach was devised by Song et al.~\cite{song2020care} to compensate for the robot failures when a team of robots are tasked to completely explore or cover an environment. Particularly, they designed a game-theoretic approach to trigger the reassignment of the functional robots, e.g., either continuing the coverage task in their pre-assigned workspace or being reassigned to the workspace of a failed robot.\par
In contrast to these works, which focus on either risk-awareness or adaptiveness, we present an optimization framework that integrates \emph{both} predictive and reactive control paradigms in the context of multi-robot multi-target tracking with heterogeneous sensing resources. In particular, our framework systematically trades off performance maximization against risk-aversion based on the abundance of resources available within the team.
\section{Problem Setup} \label{sec:ps}
This section formally introduces the heterogeneous sensing model of the multi-robot team and sets up the target tracking problem. Following this, we introduce the target-induced sensor failure models, which will be used in Section~\ref{sec:rrtt} to imbue risk-awareness into the proposed controller.
\subsection{Notation}
In this paper, capital or small letters, bold small letters, and bold capital letters represent scalars, vectors, and matrices, respectively. Calligraphic symbols are used to denote sets. For any positive integer $z \in \mathbb{Z}^+$, $[z]$ denotes the set $\{1,2, \cdots, z\}$. $\|\cdot\|_p$ denotes the $p$-norm and the induced $p$-norm for vectors and matrices, respectively; we drop the subscript when referring to the 2-norm. Additionally, $\|\mathbf{M}\|_F \triangleq \sqrt{trace(\mathbf{M}^T\mathbf{M})}$ denotes the Frobenius norm of a matrix $\mathbf{M}$. $\mathbf{1}^{m_1}$ and $\mathbf{1}^{m_1 \times m_2}$ denote the vector and matrix of ones of appropriate dimensions, respectively. $\mb{I}_m$ denotes the identity matrix of size $m\times m$. The operator $|\cdot|$ returns the length when applied to a vector and the cardinality when applied to a set. Given a vector $\mb{v}$, $Diag(\mb{v})$ represents a diagonal matrix with the elements of $\mb{v}$ along its diagonal. Also, $\text{Tr}(\mb{M})$ denotes the trace of $\mb{M}$. We use $\mb{M}(i,j)$ to denote the $(i,j)$ element of $\mb{M}$. Given a set of matrices $\{\mb{M}_1, \mb{M}_2, \ldots, \mb{M}_n\}$, $[\mb{M}_1, \mb{M}_2, \ldots, \mb{M}_n]$ and $[\mb{M}_1; \mb{M}_2; \ldots; \mb{M}_n]$ represent the matrices obtained through horizontal and vertical concatenation, respectively. Since vectors and scalars are special cases of matrices, these definitions apply to them as well. Furthermore, $\mb{M}_1 \oplus \mb{M}_2 \oplus \ldots \oplus \mb{M}_n$ yields a block diagonal matrix constructed from the given set of matrices. Similarly, $\mb{M}_1 \otimes \mb{M}_2$ denotes the Kronecker product of the matrices.
\subsection{Heterogeneity Model}
Consider a team of $N$ robots engaged in the target tracking task, indexed by the set $\mc{R} := [N]$. Let $\mb{x}_i \in \mc{X} \subseteq \mathbb{R}^p$ denote the state of robot $i$, and $\mb{u}_i \in \mc{U} \subseteq \mathbb{R}^q$ denote its control input, which steers the state according to the following control-affine dynamics:
\begin{equation}
\dot{\mb{x}}_i = f(\mb{x}_i) + g(\mb{x}_i)\mb{u}_i,
\end{equation}
where $f$ and $g$ are continuously differentiable vector fields. \par
The robots are equipped with a set of heterogeneous sensors for tracking targets in the environment. Let $U$ denote the number of distinct types of sensors available within the team. Let $\bs{\Gamma} \in \{0,1\}^{N \times U}$ denote the \emph{sensor matrix} which describes the distribution of sensors over the robot team:
\begin{equation}
\bs{\Gamma}_{ij} = \begin{cases}
1,\quad\text{if robot $i$ possesses sensor $j$},\\
0,\quad\text{otherwise}.
\end{cases}
\end{equation}
\subsection{Target Tracking}
The robots are tasked with tracking a set of $M$ targets in the environment, which are indexed by the set $\mc{T} := [M]$. Let $\mb{e}_j \in \mc{X} \subseteq \mathbb{R}^p$ denote the state of target $j\in\mc{T}$, whose motion in the environment is described by the following linear dynamics:
\begin{equation}
\dot{\mb{e}}_j = \mb{A}_{j}\mb{e}_j + \mathbf{B}_j\mb{u}^e_j + \mb{w}^e_j,
\end{equation}
where $\mb{A}_{j}$ and $\mb{B}_j$ are the process and control matrices, $\mb{w}^e_j$ is zero-mean independent Gaussian process noise with covariance matrix $\mb{Q}_j \in \mathbb{R}^{p \times p}$, and $\mb{u}^e_j \in \mc{U} \subseteq \mathbb{R}^q$ is the control input of target $j \in \mc{T}$. Stacking the target states and target control inputs into the vectors $\mb{e}$ and $\mb{u}^e$, respectively, we get:
\begin{align}
\label{eqn:target team dyn}
\dot{\mb{e}} = \mb{A}\mb{e} + \mathbf{B}\mb{u}^e + \mb{w}^e,
\end{align}
where $\mb{A} \triangleq \mb{A}_1 \oplus \ldots \oplus \mb{A}_M$, $\mb{B} \triangleq \mb{B}_1 \oplus \ldots \oplus \mb{B}_M$ and $\mb{w}^e \triangleq [\mb{w}^e_1; \ldots; \mb{w}^e_M]$. As described before, we let $\mb{w}^e \sim \mc{N}(0,\mb{Q})$, where $\mb{Q} \triangleq \mb{Q}_1 \oplus \mb{Q}_2 \oplus \ldots \oplus \mb{Q}_M$.
\par
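As an illustration, the block-diagonal ensemble matrices can be assembled as in the following numpy sketch (the matrix values are placeholders and the control term is omitted for brevity):
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

M, p = 3, 2                                # three targets, 2-D state each
A_j = np.array([[0.0, 1.0], [0.0, 0.0]])   # per-target process matrix
Q_j = 0.01 * np.eye(p)                     # per-target noise covariance

A = block_diag(*([A_j] * M))               # direct sum A_1 (+) ... (+) A_M
Q = block_diag(*([Q_j] * M))

e = np.zeros(M * p)                        # stacked target state
dt = 0.1
w_e = np.random.multivariate_normal(np.zeros(M * p), Q)
e = e + dt * (A @ e + w_e)                 # one Euler step of the dynamics
\end{verbatim}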
Within our framework, every robot $i$ makes an observation of every target $j$ according to the following linear observation model:
\begin{equation}
\mb{y}_{ij} = \mb{H}_{ij} \mb{e}_j + \bs{\nu}_{ij},\quad i\in\mc{R},~j\in\mc{T},
\end{equation}
where $\mb{y}_{ij}$ is the measurement, $\bs{\nu}_{ij}$ is the measurement noise, and $\mb{H}_{ij}$ is the measurement matrix corresponding to robot $i$'s measurements of target $j$ and will depend on the sensor suite available to robot $i$. \par
Let $\bs{\gamma}_i$ denote the vector containing the ordered column indices of the sensor matrix corresponding to robot $i$,
\begin{equation}
\bs{\gamma}_i = \left[j\in[U]~\mid~\bs{\Gamma}_{ij} = 1 \right].
\end{equation}
In this paper, we assume the sensors to be linear, and associate a one-dimensional output matrix with each sensor type available in the team, denoted by the set $\mc{S} := \{\mb{h}_1,\ldots,\mb{h}_{|\mc{S}|}\}$.
Consequently, the measurement matrix for robot $i$ associated with the process of taking the measurements about the state of target $j$ can be constructed simply using the set $\mc{S}$ and $\bs{\gamma}_i$,
\begin{equation}
\mathbf{H}_{ij} = [\mb{h}_{\bs{\gamma}_i(1)},\ldots,\mb{h}_{\bs{\gamma}_i(|\bs{\gamma}_i|)}]^T.
\end{equation}
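As a concrete illustration, the construction of $\mathbf{H}_{ij}$ from $\bs{\Gamma}$ and $\mc{S}$ can be sketched in a few lines of NumPy; the helper name \texttt{measurement\_matrix} and the two-dimensional sensor rows are our own illustrative assumptions (they mirror the sensors instantiated later in Section~\ref{sec:exp}):
\begin{verbatim}
import numpy as np

# Hypothetical sensor set S: each h_l is a 1 x p row (here p = 2).
S = [np.array([1.0, 0.0]),   # h_1: observes the first coordinate
     np.array([0.0, 1.0]),   # h_2: observes the second coordinate
     np.array([1.0, 1.0])]   # h_3: observes their sum

def measurement_matrix(Gamma, i):
    """Stack the rows h_l for every sensor l that robot i possesses."""
    gamma_i = np.flatnonzero(Gamma[i])        # ordered indices with Gamma[i, l] = 1
    return np.stack([S[l] for l in gamma_i])  # H_ij, shape (|gamma_i|, p)

Gamma = np.array([[1, 1, 1],
                  [0, 0, 1]])                 # two robots, three sensor types
H_0j = measurement_matrix(Gamma, 0)           # 3 x 2 measurement matrix
\end{verbatim}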
We model the measurement noise $\bs{\nu}_{ij}\sim \mc{N}(0,\mb{R}_{ij})$ as a zero-mean Gaussian process whose covariance $\mb{R}_{ij}$ is distance-dependent and increases exponentially with the distance between the robot and the target,
\begin{align}\label{eq:sensor_cov}
&\mb{R}^{-1}_{ij} = Diag([{R}^{-1}_{ij1}, {R}^{-1}_{ij2}, \ldots, {R}^{-1}_{ij|\bs{\gamma}_i|}])
\end{align}
with
\begin{align}
&{R}^{-1}_{ijl} = w_l\exp{\left(-\lambda_l\|\mathbb{P}(\mb{x}_i) - \mathbb{P}(\mb{e}_j)\|\right)},~\forall l \in [|\bs{\gamma}_i|],
\end{align}
where $\lambda_l$ and $w_l$ determine the noise characteristics of each sensor $l$ in the sensor suite of robot $i$, and $\mathbb{P}$ is a projection operator that maps the state of the robot to its position. While this paper chooses an exponential decay model to capture the sensor performance characteristics, any other continuously differentiable degradation function can be utilized instead. \par
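Continuing the sketch above, the noise model admits an equally short implementation; the vectors \texttt{w} and \texttt{lam} and the projector \texttt{P} are illustrative placeholders for $w_l$, $\lambda_l$, and $\mathbb{P}$ (here assumed to read the first two state entries):
\begin{verbatim}
def inv_covariance(x_i, e_j, gamma_i, w, lam, P=lambda s: s[:2]):
    """Diagonal of R_ij^{-1}: each sensor's precision decays with distance."""
    d = np.linalg.norm(P(x_i) - P(e_j))
    return np.diag([w[l] * np.exp(-lam[l] * d) for l in gamma_i])
\end{verbatim}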
In ensemble form, the measurements of all the targets by robot $i$ can be written as:
\begin{equation}
\mb{y}_i = \mb{{H}}_i \mb{e} + \bs{\nu}_i, ~\forall i\in\mc{R},
\end{equation}
where ${\mb{H}}_i = \mb{I}_M \otimes \mathbf{H}_{ij}$ (note that $\mathbf{H}_{ij}$ is identical for every $j$, since it depends only on robot $i$'s sensor suite), and $\bs{\nu}_i = [\bs{\nu}_{i1};\ldots;\bs{\nu}_{iM}]$. The measurement equation for the team can be written as:
\begin{equation}
\mb{y} = \mb{{H}} \mb{e} + \bs{\nu},
\label{eq:team_measurements}
\end{equation}
where $\mb{y} = [\mb{y}_1; \mb{y}_2;\ldots; \mb{y}_N]$, $\mb{{H}} = [\mb{{H}}_1; \mb{{H}}_2; \ldots; \mb{{H}}_N ]$ and $\bs{\nu} = [\bs{\nu}_1; \bs{\nu}_2;\ldots; \bs{\nu}_N]$. The team-wise measurement noise can be written as $\bs{\nu} \sim \mc{N}(0,\mb{R})$, where $\mb{R} = \mb{R}_{11}\oplus\mb{R}_{12}\oplus\cdots\oplus\mb{R}_{NM}$. Thus, given a sensor matrix $\boldsymbol{\Gamma}$, one can construct the corresponding robot output matrix and the team output matrix. These sensing models will be utilized in Section~\ref{sec:rrtt} to compute the estimates of the target positions and regulate the tracking performance of the robots.
\subsection{Target-induced Risk Model}
\label{subsec: risk of fail}
As discussed in the introduction, each target can detect and induce failures within the robot team. Towards this end, let $\phi_{j}: \mc{X} \rightarrow [0, 1]$ represent the \emph{target risk field} which models the risk of a robot being detected by target $j \in \mc{T}$. Formally, $\phi_j(\mb{x}_i)$ represents the probability of robot $i$ with state $\mb{x}_i$ being detected by target $j$. In addition, we define the \textit{immunity field} $\pi_j : \mc{X} \rightarrow \mathbb{R}^+$ associated with $\phi_j(\mb{x}_i)$ as
\begin{align}
\label{eqn: immunity field}
\pi_j(\mb{x}_i) = -\log (\phi_j(\mb{x}_i)).
\end{align}
Note that $\pi_j$ increases as $\phi_j$ decreases. Thus, the immunity field $\pi_j$ encodes the safety of a robot in the vicinity of target $j$. Intuitively, it quantifies the ability of robot $i$ to evade detection by target $j$ while residing at $\mb{x}_i$. Depending on the context, we refer to this model as either the risk model or the safety model. Furthermore, we assume that the functional form of the target risk fields is known to the robot team (see Section~\ref{sec:exp} for particular choices of these functions). The next section formulates the optimization program to track the targets in an adaptive and risk-aware manner.
\section{Adaptive and Risk-Aware Target Tracking} \label{sec:rrtt}
Given the measurements of the targets as well as the risk model, this section first introduces the Kalman filter equations which will be used to measure the performance of the team. Subsequently, we introduce a safety-aware observability framework, which is then used to formulate the optimization program for generating the positions of the robots.
\subsection{Generating Target Estimates}
As discussed above, the primary objective of the robots is to generate accurate estimates of the positions of the targets. We achieve this using a Kalman filter (KF), which generates an optimal estimate $\hat{\mb{e}}$ for the states of all targets along with a team-wise covariance matrix $\mb{P}$. The KF estimation consists of a prediction step and an update step. In the prediction step, we have,
\begin{align}
& \hat{\mb{e}}_{-} = \mb{A}\hat{\mb{e}}_0, \\
& \mb{P}_{-} = \mb{A} \mb{P}_0 {\mb{A}}^T + \mb{Q},
\end{align}
where $\hat{\mb{e}}_{-}$ and $\mb{P}_{-}$ are the (a priori) estimated states of the targets and the (a priori) team-wise covariance matrix at the next step, respectively; $\hat{\mb{e}}_0$ and $\mb{P}_0$ are the estimated states of the targets and the team-wise covariance matrix at the current step, respectively; and $\mb{Q}$ is the process noise covariance corresponding to all targets. Once the robots' measurements $\mb{y}$ in~\eqref{eq:team_measurements} are available, the KF update step reads,
\begin{align} \label{eqn:kf}
& \hat{\mb{e}} = \hat{\mb{e}}_{-} + \mb{K}\widetilde{\mb{y}}, \\
& \mb{P} = (\mb{I} - \mb{K} \mb{{H}})\mb{P}_{-} ,\label{eqn:post_cov}
\end{align}
where $\hat{\mb{e}}$ and $\mb{P}$ are the (a posteriori) estimated states of the targets and the (a posteriori) team-wise covariance matrix at the next step; $\widetilde{\mb{y}} := \mb{y} - \mb{{H}}\hat{\mb{e}}_{-}$ is the measurement pre-fit residual; and $\mb{K} := \mb{P}_{-}{\mb{{H}}}^T(\mb{{H}}\mb{P}_{-}{\mb{{H}}}^T + \mb{R})^{-1}$ is the Kalman gain, with $\mb{R}$ denoting the covariance matrix of the robot team measurement noise $\bs{\nu}$, as described in~\eqref{eq:team_measurements}.
Notably, the team-wise a posteriori covariance matrix $\mb{P}$ can be expressed as $\mb{P} = \mb{P}_1 \oplus \mb{P}_2 \oplus \ldots \oplus \mb{P}_M$, with $\mb{P}_{j}$ denoting the (a posteriori) covariance matrix corresponding to all robots' joint estimate of target $j\in \mathcal{T}$.
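For concreteness, the prediction and update steps above translate directly into a short NumPy sketch; the function names are our own:
\begin{verbatim}
def kf_predict(e0, P0, A, Q):
    return A @ e0, A @ P0 @ A.T + Q             # a priori estimate and covariance

def kf_update(e_minus, P_minus, y, H, R):
    S = H @ P_minus @ H.T + R                   # innovation covariance
    K = P_minus @ H.T @ np.linalg.inv(S)        # Kalman gain
    e_hat = e_minus + K @ (y - H @ e_minus)     # a posteriori estimate
    P = (np.eye(len(e_hat)) - K @ H) @ P_minus  # a posteriori covariance
    return e_hat, P
\end{verbatim}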
\subsection{Imparting Risk-aware Behaviors}
In order to encode risk-awareness into our framework, we leverage the observability of the system of moving targets. More specifically, the observability Gramian associated with the multi-robot team tracking the targets is defined as $\mb{O} \triangleq \sum_{k=0}^{T-1} (\mb{A}^T)^k\mb{{H}}^T\mb{{H}}(\mb{A})^k$, where $T\geq 0$ is a suitably chosen time horizon. The positive definiteness of $\mb{O}$ is a necessary and sufficient condition for the observability of a deterministic linear system \cite{hespanha2018linear}. Intuitively, $\mb{O}$ quantifies the ease of estimating the initial state of a deterministic linear system from a given sequence of measurements of its state, and a scalar measure of the Gramian (such as its trace or determinant) is typically used for this purpose~\cite{tzoumas2018resilient}. \par
In our scenario, proximity between the robots and the targets will prove detrimental to the ability of the robots to sense the targets in the future, due to the increasing risk of target-induced sensor failures. To encode this, we introduce a weighted observability Gramian, expressed as:
\begin{align}
\mb{O}_{\bs{\Pi}} \triangleq \sum_{k=0}^{T-1} (\mb{A}^T)^k\mb{{H}}^T\bs{\Pi}\mb{{H}}(\mb{A})^k,
\label{eq:safety_gamm}
\end{align}
where $\bs{\Pi}$ is a positive definite matrix quantifying the relative safety of the different sensor-equipped robots. As the detection models described in Section~\ref{subsec: risk of fail} are purely dependent on the states of the targets and the robots, we can define $\bs{\Pi}$ as,
\begin{align}
\label{eqn: safe weight matrix}
\bs{\Pi} = \oplus_{i = 1}^{N}\left(\oplus_{j = 1}^{M}~ I_{|\bs{\gamma}_i|} \otimes \pi_j(\mb{x}_i)\right)
\end{align}
where ${\pi}_j(\mb{x}_i)$ is the immunity field with respect to robot $i$ and target $j$, as defined in~\eqref{eqn: immunity field}. Note that the block-diagonal matrices $I_{|\bs{\gamma}_i|} \otimes \pi_j(\mb{x}_i)$ which compose the matrix $\bs{\Pi}$ represent the inverse risk associated with target $j$'s detection of robot $i$. \par
In other words, the larger the diagonal entries of $\mb{O}_{\bs{\Pi}}$, the greater the ability of the robot team to obtain the targets' state measurements while evading detection by the targets. Thus, we refer to the matrix $\mb{O}_{\bs{\Pi}}$ as the \textit{safety-aware observability Gramian} (SOG). In the optimization problem described next, we bound the trace of $\mb{O}_{\bs{\Pi}}^{-1}$ from above to control the risk-averseness of the robot team.
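A minimal sketch of the (weighted) Gramian follows directly from its definition; passing \texttt{Pi=None} recovers the unweighted $\mb{O}$, and the commented line shows the risk metric bounded in the optimization below:
\begin{verbatim}
def gramian(A, H, T, Pi=None):
    n = A.shape[0]
    W = np.eye(H.shape[0]) if Pi is None else Pi
    O, Ak = np.zeros((n, n)), np.eye(n)
    for _ in range(T):                  # sum_{k=0}^{T-1} (A^T)^k H^T W H A^k
        O += Ak.T @ H.T @ W @ H @ Ak
        Ak = A @ Ak
    return O

# risk = np.trace(np.linalg.inv(gramian(A, H, T, Pi)))   # Tr(O_Pi^{-1})
\end{verbatim}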
\subsection{Optimization Problem}
We propose solving the following optimization program to obtain the states of the robots for target tracking. In the following, we will describe the various constraints and symbols constituting the problem description.
\begin{subequations}\label{eq:rtargettrackingopt}
\begin{flalign}
\text{\bf Adaptive and Risk-Aware Target Tracking} \tag{\ref{eq:rtargettrackingopt}}&&
\label{eq:full_opt_target_tracking}
\end{flalign}
\begin{align}
\begin{split}
\minimize_{\substack{\mb{x},\boldsymbol{\delta}_1,\delta_2}} & k_1(\Delta, \boldsymbol{\delta}_1) + k_2(\Delta, \delta_2)
\end{split}\label{eq:nlp:a} \\
s.t. ~
& \|\mb{x}_i - \mb{x}_{i0}\| \leq d_m, ~\forall i \in \mathcal{R}, \label{eq:nlp:b}\\
& \|\mb{x}_i -\mb{x}_j\| \geq d_n, ~\forall i\neq j,~ \forall i,j \in \mathcal{R}, \label{eq:nlp:c}\\
& \Tr(\mb{P}_j) \leq {\rho}_{1,j} + {\delta}_{1,j}, ~\delta_{1,j} \geq 0, ~\forall j\in\mathcal{T}, \label{eq:nlp:d}\\
&\Tr(\mb{O_{\bs{\Pi}}}^{-1}) \leq \rho_2 + \delta_2, ~\delta_2 \geq 0, \label{eq:nlp:e}
\end{align}
\end{subequations}
where $\mb{x}_{i0}$ is robot $i$'s state at the current time step, $\mb{x} := [\mb{x}_1^T, \mb{x}_2^T, \cdots, \mb{x}_N^T]^T$ is the optimization variable representing the ensemble state of all robots at the next time step, $d_m$ is the maximum distance each robot can move between two consecutive time steps, and $d_n$ is the minimum inter-robot distance for safety. Furthermore, $\bs{\rho}_{1} = [{\rho}_{1,1}, {\rho}_{1,2}, \cdots, {\rho}_{1,M}]^T$ and $\rho_2$ denote the user-defined upper bounds on the tracking error (encoded by $\mb{P}$) and the team-level risk (encoded by $\mb{O}_{\bs{\Pi}}^{-1}$), respectively. We would like our adaptive framework to continue operations even when sensor failures make it impossible to achieve the desired specifications. To this end, we introduce slack variables $\boldsymbol{\delta}_{1}$ and $\delta_2$, where $\boldsymbol{\delta}_1 := [{\delta}_{1,1}, {\delta}_{1,2}, \cdots, {\delta}_{1,M}]^T$ contains a separate performance slack variable for each target.\par
We now describe the various constraints introduced in~\eqref{eq:full_opt_target_tracking}. Constraint~\eqref{eq:nlp:b} limits the distance that each robot can travel between two consecutive steps. Constraint~\eqref{eq:nlp:c} guarantees that the generated solutions maintain a minimum safety distance between the robots. Constraint~\eqref{eq:nlp:d} nominally aims to bound the trace of the posterior covariance matrix $\mb{P}_j$ in~\eqref{eqn:post_cov} for each target $j$ by a constant ${\rho}_{1,j}$. However, sensor failures might imply that meeting this performance criterion is no longer possible. To this end, the slack variable $\boldsymbol{\delta}_1$ is introduced to ensure that the framework adapts to the current operating conditions. Similarly, the risk in the team tracking performance is captured by the inverse SOG $\mb{O}^{-1}_{\bs{\Pi}}$, which is upper bounded by the sum of $\rho_2$ and the slack variable $\delta_2$ to ensure a desired level of safety~\eqref{eq:nlp:e}. It should be noted that $\mb{P}$ is computed directly using the update equation~\eqref{eqn:post_cov}, and $\mb{O}_{\bs{\Pi}}$ is computed via~\eqref{eq:safety_gamm} using the estimated target states $\hat{\mb{e}}$.
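While our implementation solves~\eqref{eq:full_opt_target_tracking} with SNOPT via pydrake (see Section~\ref{sec:exp}), the following trimmed sketch with SciPy's SLSQP conveys the structure; only the geometric constraints~\eqref{eq:nlp:b} and~\eqref{eq:nlp:c} are spelled out, and the toy sizes and the objective (which uses the example costs of the next subsection) are our own assumptions:
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

N, p, M = 2, 2, 2                 # robots, state dimension, targets (toy sizes)
x0 = np.zeros(N * p)              # current stacked robot states
d_m, d_n, Delta = 0.33, 2.0, 1.0

def objective(z, w1=1.0, w2=100.0):
    d1, d2 = z[N*p:N*p+M], z[-1]  # slack variables
    return w1 * Delta * np.sum(d1) + (w2 / Delta) * d2**2

cons = []
for i in range(N):                # move-limit constraint, in >= 0 form
    cons.append({'type': 'ineq', 'fun': lambda z, i=i:
                 d_m - np.linalg.norm(z[i*p:(i+1)*p] - x0[i*p:(i+1)*p])})
for i, j in combinations(range(N), 2):   # inter-robot separation
    cons.append({'type': 'ineq', 'fun': lambda z, i=i, j=j:
                 np.linalg.norm(z[i*p:(i+1)*p] - z[j*p:(j+1)*p]) - d_n})
# The Tr(P_j) and Tr(O_Pi^{-1}) constraints are appended analogously,
# with the slacks kept nonnegative through variable bounds.

z0 = np.concatenate([x0, np.zeros(M + 1)])   # robot states + slacks
res = minimize(objective, z0, constraints=cons, method='SLSQP')
\end{verbatim}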
\subsection{Objective Function Design}
As described in the introduction, balancing the competing objectives of performance maximization and risk-aversion is a desirable feature of a resource-aware control framework. In this paper, we enable our framework to implicitly make this trade-off based on a quantification of the abundance of sensors available in the team. This is encoded within the cost functions $k_1$ and $k_2$ as described next. \par
First, we define the resource abundance $\Delta$ as the difference between the Frobenius norms of the current sensor matrix $\bs{\Gamma}$ and of the minimal sensor matrix $\bs{\Gamma}_{\min}$ encoding the heterogeneous resources required for collective observability of the targets,
\begin{equation}
\Delta = \|\bs{\Gamma}\|_F -\|\bs{\Gamma}_{\min}\|_F,
\label{eq:resource_abundance}
\end{equation}
where $\bs{\Gamma}_{\min}$ is defined as,
\begin{align}
\begin{split}
\bs{\Gamma}_{\min} = & \arg\min_{\bs{\Gamma}} \|\bs{\Gamma}\|^2_{F} \\
\label{eq:min_gamma}
& s.t. ~\text{det}\left(\sum_{k=0}^{T-1}(\mb{A}^T)^k{\mb{H}}^T{\mb{H}}(\mb{A})^k\right) > 0.
\end{split}
\end{align}
Notably, \eqref{eq:min_gamma} computes the smallest set of sensors that yields collective observability of the system: the determinant condition requires the (unweighted) observability Gramian to be nonsingular.\par
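Since~\eqref{eq:min_gamma} is combinatorial, a brute-force sketch makes its semantics concrete; \texttt{build\_H}, which maps a candidate $\bs{\Gamma}$ to the team output matrix, is an assumed helper, and the enumeration is exponential in $NU$, so this is illustrative only:
\begin{verbatim}
from itertools import product

def gamma_min(A, T, build_H, N, U):
    # ||Gamma||_F^2 equals the number of ones, so sorting the binary
    # candidates by their sum enumerates them in order of increasing cost.
    for bits in sorted(product([0, 1], repeat=N * U), key=sum):
        Gamma = np.array(bits).reshape(N, U)
        O = gramian(A, build_H(Gamma), T)   # unweighted Gramian (sketch above)
        if np.linalg.det(O) > 1e-9:
            return Gamma                    # first sensor matrix giving observability
    return None
\end{verbatim}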
Generally speaking, with fewer heterogeneous resources (i.e., a lower $\Delta$), the robot team should lower its expectations on the tracking performance by choosing a larger $\|\boldsymbol{\delta}_1\|_1$ (in~\eqref{eq:nlp:d}) and be more risk-averse by choosing a smaller $\delta_2$ (in~\eqref{eq:nlp:e}). Conversely, with more heterogeneous resources, the robots should aim for a higher tracking quality by choosing a smaller $\|\boldsymbol{\delta}_1\|_1$ and can be more risk-seeking by choosing a larger $\delta_2$. To this end, $k_1(\Delta, \boldsymbol{\delta}_1)$ belongs to a class of functions $f: [0, +\infty) \times [0, +\infty) \to [0, +\infty)$ that are monotone increasing in both $\|\boldsymbol{\delta}_1\|_1$ and $\Delta$, while $k_2(\Delta, \delta_2)$ belongs to a class of functions that are monotone increasing in $\delta_2$ yet monotone decreasing in $\Delta$. Simple examples of $k_1(\Delta, \boldsymbol{\delta}_1)$ and $k_2(\Delta, \delta_2)$ are
\begin{align}
k_1(\Delta, \boldsymbol{\delta}_1) = w_1 \Delta \|\boldsymbol{\delta}_1\|_1, \\
k_2(\Delta, \delta_2) = w_2 \frac{1}{\Delta} \delta_2^2,
\label{eq:hexamples}
\end{align}
where $w_1, w_2$ are positive scalars. Notably, optimizing over such functions in the objective~\eqref{eq:nlp:a} enables the robots to make adaptive and risk-aware decisions given the abundance of the heterogeneous resources they have. While this paper utilizes a simple instantiation of the functions $k_1$ and $k_2$, more complex functions can be designed for achieving fine-tuned behaviors.
\subsection{Computational Aspects}
The optimization problem presented in~\eqref{eq:full_opt_target_tracking} can be solved at discrete time intervals $t\in\mathbb{N}$ based on the updated estimates of the target states. Algorithm~\ref{alg:target_track} illustrates the operations of the target tracking framework. Step~\ref{step:kf} uses the Kalman filter to generate estimates of the target positions, $\hat{\mathbf{e}}$. Using this estimate, Step~\ref{step:opt} generates new positions for the robots according to the trade-off between performance maximization and risk-aversion. We assume that the robots drive to the configuration generated by the optimization program within this step (ensured by~\eqref{eq:nlp:b})~\cite{wang2017safety}. Following this, the robots evaluate which of their sensors have failed due to proximity with the targets, and update the sensing margin $\Delta$ in Step~\ref{step:delta}. See Fig.~\ref{fig: schematic} for a system diagram illustrating these operations.
\begin{algorithm}
\caption{Adaptive and Risk-Aware Target Tracking}
\label{alg:target_track}
\begin{algorithmic}[1]
\Require
\Statex Robot team heterogeneity specifications: $\mb{\Gamma}, \mb{H}$
\Statex Target and Team Specs: $N, M, \phi_j,w_l,\lambda_l$, $d_m, d_n$
\Statex Parameters: $\bs{\rho}_1,\rho_2,w_1,w_2$
\State Initialize: $t = 0$
\While{true}
\State Update target position estimate $\hat{\mb{e}}$\label{step:kf} \Comment \eqref{eqn:kf}
\State Execute the adaptive risk-aware controller\label{step:opt}\Comment \eqref{eq:full_opt_target_tracking}
\State Compile sensor failures and update $\boldsymbol{\Gamma}$
\State Update sensing margin $\Delta$\label{step:delta}\Comment \eqref{eq:resource_abundance}
\State $t = t + 1$
\EndWhile
\end{algorithmic}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{figures/schematic.png}
\caption{Operations of the proposed adaptive and risk-aware target tracking strategy. A central controller estimates the targets' state and sensing margin based on the information received from the robots. Using the targets' state estimate and the sensing margin, the central controller solves the optimization problem depicted in~\eqref{eq:full_opt_target_tracking} to generate new coordinates for the robots.}
\vspace{0mm}
\label{fig: schematic}
\end{figure}
\section{Simulated Experiments} \label{sec:exp}
In this section, we present a series of simulated experiments which demonstrate the ability of the proposed framework to track multiple targets in an adaptive and risk-aware manner using a team of robots with heterogeneous sensors. We simulate a team of ground robots operating in $\mathbb{R}^2$ with single-integrator dynamics: $\dot{\mb{x}} = \mb{u}$, where $\mb{x}$ denotes the positions of the robots. The robots are equipped with three different types of sensors, whose one-dimensional output matrices are given as:
\begin{align} \label{eq:sensors}
\mb{h}_1 = \begin{bmatrix} 1 & 0 \end{bmatrix},~\mb{h}_2 = \begin{bmatrix} 0 & 1 \end{bmatrix},~\mb{h}_3 = \begin{bmatrix} 1 & 1 \end{bmatrix}.
\end{align}
We enable the centralized controller (Fig.~\ref{fig: schematic}) to run a Kalman filter using the noisy measurements obtained from the robots and update the a posteriori covariance matrix $\mb{P}$. \par
The target risk field $\phi_j$ corresponding to each target $j\in\mc{T}$ is described by a Gaussian function centered around the target,
\begin{align}
\phi_j(\mb{x}) = \frac{c_j}{2\pi|\Sigma_j|}\exp\left(-\frac{1}{2}(\mb{x} - \mb{e}_j)^T\Sigma_j(\mb{x}-\mb{e}_j)\right),
\end{align}
where $c_j$ determines the peak value of the field for target $j$, and $\Sigma_j$ determines its spread. The optimization problem given in~\eqref{eq:full_opt_target_tracking} was solved using SNOPT~\cite{snopt77} via pydrake~\cite{drake}.\par
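The risk and immunity fields used in our experiments reduce to a few lines; the sketch below follows the display above verbatim (with $\Sigma_j$ entering the quadratic form directly, as written there):
\begin{verbatim}
def risk(x, e_j, c_j, Sigma_j):
    q = (x - e_j) @ Sigma_j @ (x - e_j)
    return c_j / (2 * np.pi * np.linalg.det(Sigma_j)) * np.exp(-0.5 * q)

def immunity(x, e_j, c_j, Sigma_j):
    return -np.log(risk(x, e_j, c_j, Sigma_j))   # pi_j = -log(phi_j)
\end{verbatim}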
\subsection{Two robots, two targets} \label{subsec:exp1}
For the first set of experiments, we consider a team of two robots, whose sensor matrix at deployment time is specified as $\mb{\Gamma} = \mb{1}^{2 \times 3}$. The target risk field parameters are chosen as $c_j = 3$ and $\Sigma_j = Diag([2.0, 2.0]),~\forall j\in\mc{T}$. Sensor failures on the robots are simulated by flipping a biased coin whose success probability is proportional to the magnitude of the target risk field at the robot's location. While the true state of the targets $\mb{e}$ is used to simulate the failures, the robots compute $\mb{O}_{\bs{\Pi}}$ using the estimated state of the targets $\hat{\mb{e}}$, as they do not have access to the true target states.\par
The parameters for the sensor covariance matrices are chosen as $w_l = 1.80, \lambda_l = 0.1, \forall l \in [|\mc{S}|]$, and the target dynamics are set to $\mb{A}_j = Diag([1, 1]), \mb{B}_j = Diag([1, 1]),~\forall j\in\mc{T}$. For the parameters $(d_m = 0.33,d_n = 2.0, \bs{\rho}_1 = [0.5,0.5], \rho_2 = 0.1, w_1 = 1 , w_2 = 100)$, Fig.~\ref{fig:scenario_risk_aware} illustrates the motion trails of the robots (black dots) relative to the true target positions (red squares) and the estimated target positions (blue translucent squares) as the simulation progresses. As seen, the robots approach the targets while accounting for the risk of detection and failures (depicted by the shaded purple regions).
\begin{figure}[h]
\hspace*{-0.45cm}
\centering
\subfloat[][]{
\includegraphics[trim={3.00cm 0cm 4.0cm 0cm},clip,width=0.24\textwidth]{figures/moving_tracking_scenario.png}
\label{fig:scenario_risk_aware}}
\subfloat[][]{
\includegraphics[trim={3.00cm 0cm 4.0cm 0cm},clip,width=0.24\textwidth]{figures/no_risk_tracking_scenario.png}
\label{fig:scenario_no_risk}}
\caption{Motion trails of the robots (black dots), mobile targets (red squares), and estimated target locations (blue translucent squares) executing the adaptive \& risk-aware control framework presented in this paper. The target risk fields are illustrated by the shaded regions around the targets. \protect\subref{fig:scenario_risk_aware} illustrates how the risk-aware constraint~\eqref{eq:nlp:e} enables the robots to trade off current and future tracking performance by maintaining a distance from the targets. This can be contrasted with the behavior in~\protect\subref{fig:scenario_no_risk}, where the risk-aware constraint has been removed. Consequently, the robots position themselves very close to the targets to optimize for tracking performance, but at the risk of higher sensor failure rates, which detrimentally affects their future performance.}
\label{fig:scenarios}
\end{figure}
Figure~\ref{fig:tracking_perf} illustrates how the tracking error corresponding to each target---measured by $\Tr(\mb{P}_j)$---increases as failures affect the system. It should be noted that, despite not meeting the desired maximum tracking error $\bs{\rho}_1$, the team continues to perform the task, thanks to the slack variable $\boldsymbol{\delta}_1$ introduced in constraint~\eqref{eq:nlp:d}. \par
\begin{figure}[h]
\centering
\includegraphics[width=0.44\textwidth]{figures/perf_1.png}
\caption{Tracking error for a team of two robots executing the adaptive \& risk-aware target tracking control presented in~\eqref{eq:rtargettrackingopt}. As seen, the robots do not achieve their tracking performance objectives $\bs{\rho}_1$ due to the safety Gramian constraint imposed in~\eqref{eq:nlp:e} as well as sensor failures induced by the targets. However, the adaptive nature of the performance optimization allows the team to continue tracking the targets while balancing the objectives of current performance and safe future operations.}
\label{fig:tracking_perf}
\end{figure}
To demonstrate the advantage of the control framework introduced in this paper, we compare the performance of our controller with and without the SOG constraint given by~\eqref{eq:nlp:e}. Without this constraint, the robots drive close to the targets in an attempt to obtain accurate estimates of the target positions (see Fig.~\ref{fig:scenario_no_risk}). Figure~\ref{fig:comp} compares the tracking performance and sensing margin over the course of a simulation, and indicates the times at which the sensor failures occurred. As seen, without accounting for the detrimental effects of future failures, the robots drive very close to the targets, and consequently experience a sequence of sensor failures proportional to the target risk field. These simulations demonstrate the ability of our risk-aware adaptive controller to not only continue operations in the face of sensor failures, but also operate for a longer period of time by accounting for future failures. \par
\begin{figure}[h]
\centering
\subfloat[][]{
\includegraphics[width=0.45\textwidth]{figures/perf_compare_12.png}
\label{fig:perf_comp}
} \\
\subfloat[][]{
\includegraphics[width=0.45\textwidth]{figures/sens_margin_12.png}
\label{fig:sm_comp}
}
\caption{Comparison of total tracking error with and without the risk-aware controller encoded in constraint~\eqref{eq:nlp:e}. In the case where future performance is not accounted for, the robots drive very close to the targets with the aim of meeting their current performance specifications. However, the larger number of sensor failures dramatically increases the tracking error of the robots (see~\protect\subref{fig:perf_comp}). This corresponds to a decreasing sensing margin, as seen in \protect\subref{fig:sm_comp}. In contrast, the proposed risk-aware adaptive controller enables the multi-robot team to track the targets for a longer time horizon, and maintains a larger sensing margin $\Delta$ over the simulation time horizon.}
\label{fig:comp}
\end{figure}
\subsection{Four robots, two targets} \label{subsec:exp2}
For the second set of experiments, we consider a team of 4 robots equipped with the same three sensor types described in~\eqref{eq:sensors}. The sensor suites at deployment time are assigned as: $\bs{\gamma}_1 = [1, 2, 3], \bs{\gamma}_2 = [1, 2, 3], \bs{\gamma}_3 = [3], \bs{\gamma}_4 = [1, 2]$.
For the parameters $(d_m = 0.33, d_n = 2.0, c_j = 1, \Sigma_j = Diag([4.0, 4.0]), \bs{\rho}_1 = [0.45, 0.45], \rho_2 = 0.1, w_1 = 1 , w_2 = 500)$, Fig.~\ref{fig:ens_comp} compares the performance of the algorithm with and without the risk-aware constraint~\eqref{eq:nlp:e}. In particular, we present results averaged over 10 simulation runs to demonstrate the consistent performance of the developed framework. As seen, not only does the proposed framework ensure longer operability of the robot team, but the variance in the tracking performance is also lower compared to the case with no risk-awareness. \par
\begin{figure}[h]
\centering
\subfloat[][]{
\includegraphics[width=0.425\textwidth]{figures/ens_tracking_perf12.png}
\label{fig:ens_perf_comp}
}\\
\subfloat[][]{
\includegraphics[width=0.425\textwidth]{figures/ens_sensor_margin12.png}
\label{fig:ens_sm_comp}
}
\caption{Performance of the proposed adaptive and risk-aware target tracking controller in the face of target-induced failures (4 robots, 2 targets). Similar to Fig.~\ref{fig:comp}, the addition of the risk-aware component encoded by constraint~\eqref{eq:nlp:e} enables the proposed controller to balance performance maximization with the quality of future measurements---allowing for better performance over a longer time horizon. These results were averaged over 10 simulation runs to demonstrate consistent performance. The solid lines denote the mean values, and the shaded region depicts the $\pm 1$ standard deviation. This is especially clear in~\protect\subref{fig:ens_perf_comp} where the variance over the simulation runs is lower in the case of the risk-aware controller.}
\label{fig:ens_comp}
\end{figure}
One of the salient features of the proposed framework is the automatic trade-off between performance maximization and risk-aversion, which accounts for the excess of sensors available within the team, as defined in~\eqref{eq:resource_abundance}. Towards this end, Fig.~\ref{fig:sm_tradeoff} compares the team-wide tracking quality $\Tr(\mb{P})^{-1}$ and the safety metric $\Tr(\mb{O}_{\bs{\Pi}})$ for varying sensing margin values. From Fig.~\ref{fig:sm_tradeoff}, it is interesting to observe that the system prioritizes performance over safety when the abundance of sensors in the team is high, but reverses this relation as the abundance decreases. Note that, to illustrate this point, the sensing margin $\Delta$ was not computed using~\eqref{eq:resource_abundance} but was directly specified to the optimization program, which then generated the robot configurations accordingly.
\begin{figure}[h]
\includegraphics[trim={0.1cm 0cm 0.0cm 0cm},clip,width=0.47\textwidth]{figures/sens_margin_tradeoff.png}
\caption{The automatic trade-off between performance and risk-aversion within the proposed control framework as a function of the resource abundance within the team. Thanks to the encoding of sensing margin $\Delta$ within the cost functions $k_1$ and $k_2$ (see~\eqref{eq:hexamples}), the optimization-based controller in~\eqref{eq:full_opt_target_tracking} inherently prioritizes performance for high sensor margins, and prioritizes safety as the sensor margin decreases. This is reflected in the tracking quality and safety metric of the generated robot configurations.}
\label{fig:sm_tradeoff}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
This paper developed a framework that leverages the heterogeneous sensors in a robot team to track hostile targets while simultaneously accounting for current and future tracking performance. The cost functions~$k_1$ and~$k_2$, which are functions of the sensor abundance~$\Delta$, provide an implicit mechanism to trade off risk-aversion against performance maximization. While only a simple version of these functions was considered in this paper, future investigations might reveal a systematic mechanism to design these functions to obtain a desired outcome.
\bibliographystyle{unsrt}
\begin{frame}{Outline}
\tableofcontents
\end{frame}
\section{Background}
\subsection{Unsupervised Representation Learning}
\begin{frame}{Unsupervised Representation Learning}
\begin{itemize}
\item Unsupervised representation learning~\cite{bengio2013representation} has drawn much attention.
\item It substantially reduces the expensive human effort required for annotations and benefits downstream machine learning algorithms.
\item Representative methods:
\begin{itemize}
\item Principal Component Analysis (PCA)~\cite{tipping1999probabilistic}
\item Restricted Boltzmann Machine (RBM)~\cite{hinton2006reducing}
\item Variational AutoEncoder (VAE)~\cite{kingma2013auto}
\item Contrastive Learning (CL)~\cite{oord2018representation}
\end{itemize}
\end{itemize}
\end{frame}
\subsection{Contrastive Representation Learning}
\begin{frame}{Contrastive Representation Learning}
\begin{itemize}
\item In early work, Contrastive Learning was investigated as a lower bound on mutual information (MI), used to maximize the MI between data and their representations \cite{gutmann2010noise,hjelm2018learning}.
\item In recent years, the effectiveness of CL is no longer attributed solely to the maximization of mutual information \cite{tschannen2019mutual,tian2019crd}.
\item Contrastive Learning is widely studied:
\begin{itemize}
\item SimCLR~\cite{chen2020simple,chen2020big} studies extensive augmentations for positive and negative samples and intra-batch-based negative sampling.
\item A memory bank that caches representations~\cite{wu2018unsupervised} and a momentum update strategy are introduced to enable the usage of an enormous number of negative samples~\cite{chen2020mocov2}.
\item \cite{wang2020understanding} reveals that the contrastive scheme optimizes the alignment of positive samples and the uniformity of the learned features in the limit of an infinite number of negative samples.
\item CL has also been applied to text~\cite{logeswaran2018efficient}, sequential data \cite{oord2018representation,henaff2019data}, structural data like graphs \cite{sun2019infograph,li2019graph,hassani2020contrastive,velickovic2019deep}, reinforcement learning~\cite{srinivas2020curl}, and few-shot scenarios~\cite{khosla2020supervised,sylvain2020locality}.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item As a mutual information lower bound estimator (e.g., NCE \cite{oord2018representation}), the contrastive lower bound is biased \cite{poole2018variational}.
\item Contrastive Learning methods are sensitive to selected samples~\cite{saunshi2019theoretical}:
\begin{itemize}
\item Positive samples need to apply various perturbations \cite{chen2020simple}.
\item ``Hard" negative samples are observed to be helpful \cite{bose2018adversarial,cherian2020representation,li2020self}.
\item ``Not all samples are negatives":
\begin{itemize}
\item A decomposition of the data distribution to approximate the true negative distribution~\cite{chuang2020debiased}.
\item To use ``neither too hard nor too easy" negative samples~\cite{wu2020conditional}.
\item To apply Monte-Carlo sampling for selecting hard negative samples under the user's control~\cite{robinson2020contrastive}.
\end{itemize}
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item Contrastive loss maximizes (\textit{resp.} minimizes) the similarity of positive (\textit{resp.} negative) pairs in the feature space:
\beq{
\!\mathop{\mathbb{E}}_{\substack{(\boldsymbol{x}, \boldsymbol{x}^+, \boldsymbol{x}^-_{1:M}) }} \left[ - \ln \ \frac{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}}{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}+{\mathop{\sum}_{i=1}^M } e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}_{i}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau}} \right],\! \label{eq: CL}}
\item Encoder $f_{\boldsymbol{\theta}}: \mathbb{R}^n \rightarrow \mathcal{S}^{d-1}$, whose learned $d$-dimensional features have unit norm.
\item Typically:
\begin{itemize}
\item For observations $\boldsymbol{x}_{0:M} \sim p_\mathrm{data}(\boldsymbol{x})$, we commonly assume that each $\boldsymbol{x}_i$ can be randomly transformed in certain ways with a transformation function $\mathcal{T}(\boldsymbol{x}_i,\epsilon_i)$ with $\epsilon_i \sim p(\epsilon)$.
\item For each $\boldsymbol{x}_0$, the query (also known as the anchor) is defined as $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$.
\item For positive pair $\{(\boldsymbol{x}, \boldsymbol{x}^+)\}$: $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$ is transformed from the same observation.
\item For negative pairs $\{(\boldsymbol{x}, \boldsymbol{x}^-_i)\}_{1:M}$: $\boldsymbol{x}^-_i = \mathcal{T}(\boldsymbol{x}_{i}, \epsilon^-_{i})$ are transformed from different observations.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item In the feature space, positive samples should be close to each other and negative samples should be far from each other.
\item Positive pairs are sampled from a joint distribution $\boldsymbol{x}, \boldsymbol{x}^+ \sim p(\boldsymbol{x}, \boldsymbol{x}^+)$.
\item Negative samples are sampled independently: $\boldsymbol{x}^- \stackrel{iid}{\sim} p(\boldsymbol{x}^-)$.
\item In practice, positive pairs $(\boldsymbol{x}, \boldsymbol{x}^+)$ are often independently augmented views, while negative samples are drawn from the data distribution, $\boldsymbol{x}^- \stackrel{iid}{\sim} p_\mathrm{data}(\boldsymbol{x})$.
\end{itemize}
\end{frame}
\section{Contrastive Conditional Transport}
\subsection{Conditional Transport and Contrastive Learning}
\begin{frame}{Motivation of Contrastive Conditional Transport}
\begin{columns}[c]
\column{.45\textwidth}
\begin{itemize}
\item \textbf{Conventional CL:} Given a query, the model randomly takes one positive sample to form a positive pair and compares it against multiple negative pairs, with all samples equally treated.
\item \textbf{CCT:} Using multiple positive and negative pairs, the weight of a sample (indicated by the point size) is contrastively computed to more strongly pull more distant positive samples towards the query and push closer negative samples away from the query, making the update adaptive to samples and encouraging uniformity.
\end{itemize}
\column{.5\textwidth}
\begin{figure}[t]
\centering
\includegraphics[width=.92\columnwidth]{misc/motiv_ours.pdf}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Contrastive Uniform Transport (CUT) in the feature space}
\begin{itemize}
\item In the same spirit as Equation \eqref{eq: CL}, by transporting positive samples together and negative samples away from each other, we define the Contrastive Uniform Transport (CUT) as:
\baa{
& \min_{\boldsymbol{\theta}}\{ \mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}\mathbb{E}_{\epsilon_0,\epsilon^+\sim p(\epsilon)} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right]-\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^{-}\sim p(\boldsymbol{x})} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right]\}.
\label{eq: CL-transport}
}
\item $c(\boldsymbol{z}_1,\boldsymbol{z}_2)$ denotes the point-to-point cost of transporting between two feature vectors.
\item The objective is to minimize/maximize the expected cost of moving between the representations of positive/negative samples, with the costs of all sample pairs uniformly weighted.
\item In our experiments, however, CUT in \eqref{eq: CL-transport} does not perform well.
\end{itemize}
\end{frame}
\subsection{Contrastive Conditional Transport}
\begin{frame}{CCT: Contrastive Conditional Transport--positive transport}
Generalizing CUT, we further define the CCT loss for transporting the positive pairs:
\begin{block}{Conditional probability for transporting the positive pairs}
\ba{
&\textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}{\int e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+}; ~ d_{t^{+}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{+}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}, \label{eq: CT-positive}}
\end{block}
\begin{block}{CCT loss for transporting the positive pairs}
\ba{
\min_{\boldsymbol{\theta}} \mathcal C^+ := \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x},\boldsymbol{x}_0)} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right],\!\!\label{eq:C+}
}
\end{block}
\end{frame}
\begin{frame}{CCT: Contrastive Conditional Transport--negative transport}
Similarly, CCT for transporting the negative pairs:
\begin{block}{Conditional probability for transporting the negative pairs}
\ba{
&\textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))} p(\boldsymbol{x}^-)}{\int e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}; ~ d_{t^{-}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{-}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}, \label{eq: CT-negative}
}
\end{block}
\begin{block}{CCT objective for transporting the negative pairs}
\ba{
\max_{\boldsymbol{\theta}} ~\mathcal C^- := \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right],\label{eq:C-}
}
\end{block}
\end{frame}
\begin{frame}{On the mini-batch based optimization}
\begin{itemize}
\item With empirical samples, we have $(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i)\sim p_{data}(\boldsymbol{x}) p(\epsilon)$ for $i=1,\ldots,M$.
\item The distribution of queries:
$$
\hat p(\boldsymbol{x}) = \textstyle \frac{1}{M} \sum_{i=1}^M \delta_{\boldsymbol{x}_i},~\boldsymbol{x}_{i} =\mathcal{T}(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i).
$$
\item Given each query $\boldsymbol{x}_i$, the positive samples are sampled with $\epsilon_{1:K} \stackrel{iid}\sim p(\epsilon)$, and the distribution of positive samples is approximated as:
\ba{
\hat p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i) =\textstyle \frac{1}{K}\sum_{k=1}^K \delta_{\boldsymbol{x}^+_{ik}},~ \boldsymbol{x}^+_{ik} = \mathcal{T}(\boldsymbol{x}^{\rm{data}}_i,\epsilon_k).\label{eq:x+}
}
\item The distribution of negative samples is approximated as:
\ba{
\textstyle\hat p(\boldsymbol{x}_i^-) = \frac{1}{M-1} \sum_{j\neq i} \delta_{\boldsymbol{x}_j}.
\label{eq:x-}
}
\item $\hat p(\boldsymbol{x}_i^-)$ can be further refined with other methods~\cite{oord2018representation,He_2020_CVPR,khosla2020supervised}.
\end{itemize}
\end{frame}
\begin{frame}{CCT: On the mini-batch based optimization}
\begin{block}{Conditional transport probability with empirical positive samples}
\baa{
&\textstyle\hat \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i}\,|\, \boldsymbol{x}_i,\boldsymbol{x}^{\rm{data}}_i) := \sum_{k=1}^K \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik'}))}}\delta_{\boldsymbol{x}_{ik}^+}, \notag\\
&\textstyle\hat \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-_{i} \,|\, \boldsymbol{x}_i) := \sum_{j\neq i} \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_j))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\delta_{\boldsymbol{x}_j}.\notag
}
\end{block}
\begin{block}{CCT loss for empirical samples}
\baa{
&\hat{\mathcal{C}}^+ := \frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K
\textstyle
\frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i{k'}}))}}\scriptsize{\times c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}, \notag \\
&\hat{\mathcal{C}}^- := \frac{1}{M} \sum_{i=1}^M \sum_{j\neq i} \textstyle\frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\times \scriptsize{c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))}\notag, \\
&\textstyle \mathcal L_{\text{CCT}} = \textstyle \hat{\mathcal{C}}^+ - \textstyle \hat{\mathcal{C}}^-.\notag
}
\end{block}
\end{frame}
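\begin{frame}[fragile]{A minimal NumPy sketch of the empirical CCT loss}
A sketch under our own variable names, taking the squared Euclidean distance as one admissible cost $c$: \texttt{z} holds the $M$ query features, \texttt{zp} the $K$ positives per query; the softmax weights realize $\hat\pi^{+}_{\boldsymbol{\theta}}$ and $\hat\pi^{-}_{\boldsymbol{\theta}}$ above.
\begin{verbatim}
import numpy as np

def cct_loss(z, zp, t_pos=1.0, t_neg=2.0):
    # z: (M, d) unit-norm queries; zp: (M, K, d) positives.
    d_pos = ((z[:, None, :] - zp) ** 2).sum(-1)     # (M, K) costs
    w_pos = np.exp(t_pos * d_pos)
    w_pos /= w_pos.sum(1, keepdims=True)            # pi^+ weights
    C_pos = (w_pos * d_pos).sum(1).mean()

    d_neg = ((z[:, None] - z[None]) ** 2).sum(-1)   # (M, M) costs
    w_neg = np.exp(-t_neg * d_neg)
    np.fill_diagonal(w_neg, 0.0)                    # exclude j = i
    w_neg /= w_neg.sum(1, keepdims=True)            # pi^- weights
    C_neg = (w_neg * d_neg).sum(1).mean()
    return C_pos - C_neg                            # minimized over theta
\end{verbatim}
\end{frame}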
\begin{frame}{Framework of CCT}
\begin{figure}[t]
\centering
\includegraphics[width=.9\columnwidth]{misc/model_architecture.pdf}
\caption{Illustration of CCT framework. The encoder extracts embeddings from samples and the conditional distributions indicate the transport maps for optimizing the transport cost in contrastive learning.
The conditional weights are calculated according to the distance of a query $\boldsymbol{x}$ and its contrastive samples $\boldsymbol{x}^+,\boldsymbol{x}^-$. $\otimes$ means element-wise multiplication between costs and conditional weights. }
\label{figure:model_architecture}
\end{figure}
\end{frame}
\subsection{Property Analysis}
\begin{frame}{Properties of CCT}
\begin{theorem}[Invariant representation with positive transport]
The positive transport is optimized if and only if all positive samples of a query share the same representation as that query. More specifically, for query $\boldsymbol{x}$ that is transformed from $\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})$, its positive samples share the same representation with it, which means
\ba{ f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) = f_{\boldsymbol{\theta}}(\boldsymbol{x}) ~\text{ for any }~\boldsymbol{x}^+\sim p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0).
\label{eq:th1}
}
\end{theorem}
\end{frame}
\begin{frame}{Properties of CCT}
\begin{theorem}[Mutual information minimization with negative transport]
Suppose all samples are equally likely in the prior, which means $p(\boldsymbol{x})=p(\boldsymbol{x}^-)$ for any $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$, and their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ are also equally likely in the encoder space, which means $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$, then
optimizing $\boldsymbol{\theta}$ to maximize the
negative transport cost $\mathcal C^-$ in \eqref{eq:C-} is the same as optimizing $\boldsymbol{\theta}$ to maximize the conditional differential entropy of $\boldsymbol{x}^-$ given $\boldsymbol{x}$ under the joint distribution $p(\boldsymbol{x})\pi_{\boldsymbol{\theta}}^-(\boldsymbol{x}^-\,|\, \boldsymbol{x})$, which can be expressed as
\ba{
\label{eq:entropy}
\mathcal H(X^-\,|\, X) = \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})}[-\ln \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})],\!\!
}
which is also the same as optimizing $\boldsymbol{\theta}$ to minimize the mutual information between $\boldsymbol{x}$ and $\boldsymbol{x}^-$, expressed as
\ba{
I(X;X^-)= \textstyle\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim \pi_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) }} \left[ \ln \frac{\pi_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})}{p(\boldsymbol{x}^-)} \right].\!\! \label{eq:I}}
\end{theorem}
\end{frame}
\begin{frame}{Properties of CCT}
Interpretations:
\begin{itemize}
\item Positive transport: the optimal encoder produces representations invariant to the noisy details.
\item Negative transport: the optimal encoder distributes samples with an equal distance \textbf{to the query}.
\begin{itemize}
\item A perfect encoder trained with CL loss will uniformly distribute samples on the feature hypersphere \cite{wang2020understanding}.
\item The uniform hypersphere is achieved when the uniform prior $p(\boldsymbol{x}) = p(\boldsymbol{x}^-)$ is achieved.
\item In practice, this prior is not satisfied. Minimizing the mutual information $I(X; X^-)$ is more efficient in distributing the negative samples.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Properties of CCT}
\begin{theorem}[Distinction between CL and CCT loss]
As the number of negative samples $M$ goes to infinity, the contribution of the negative samples to the CL loss shown in Equation \eqref{eq: CL} can be expressed as
$$\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} \left[ \ln {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim p(\boldsymbol{x}^-) }} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau} \right];$$
adding this to the mutual information $I(X;X^-)$ in Equation \eqref{eq:I}
yields an upper bound on $-\mathcal{C}^-$ defined in Equation \eqref{eq:C-}, which is the contribution of the negative samples to the CCT loss.
\end{theorem}
The CL loss does not consider the impact of the mutual information $I(X; X^-)$. As the uniform data prior assumption is often violated in practice, the robustness of CL is therefore affected.
\end{frame}
\begin{frame}{Properties of CCT}
\begin{alertblock}{Reformulation of $\mathcal{C}^+$: comparison with uniform sampling}
Recall that in the CL loss, the alignment of positive samples is encouraged by:
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \right].\notag
}
We can rewrite the positive transport cost $\mathcal C^+$ in \eqref{eq:C+} as
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)} \right]
.\notag
}
\end{alertblock}
The density ratio $\frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}$ provides an importance weight and enhances robustness when $p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)$ is not uniform.
\end{frame}
\subsection{Experiments}
\begin{frame}{Experiment settings}
\begin{itemize}
\item Training feature encoders with CCT.
\item Fine-tuning on downstream tasks: linear classification, object detection, segmentation.
\item Small-scale datasets:
\begin{itemize}
\item Datasets: CIFAR-10, CIFAR-100, STL-10.
\item Negative sample strategy: SimCLR \cite{chen2020simple}.
\item Encoder backbone: AlexNet-based Encoder \cite{wang2020understanding}.
\end{itemize}
\item Class-imbalanced datasets:
\begin{itemize}
\item Datasets: CIFAR-10, CIFAR-100. (skewed with a linear/exponential rule)
\item Negative sample strategy: SimCLR \cite{chen2020simple}.
\item Encoder backbone: AlexNet-based Encoder \cite{wang2020understanding}.
\end{itemize}
\item Large-scale datasets:
\begin{itemize}
\item Datasets: ImageNet-100, ImageNet-1K.
\item Negative sample strategy: MoCo-v2 \cite{chen2020mocov2}.
\item Encoder backbone: ResNet50 Encoder \cite{he2016deep}.
\end{itemize}
\item Ablation studies: mini-batch size, hyperparameters
\end{itemize}
\end{frame}
\begin{frame}{Experiments: Linear classification on small-scale datasets}
\begin{table}[t]
\vspace{-2mm}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives with SimCLR framework on small-scale datasets. All methods follow SimCLR setting and apply an AlexNet-based encoder. The results of CL and AU-CL on STL-10 are quoted from \cite{wang2020understanding}. }
\label{tab:performance_small}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{cc|ccc}
\toprule
\multicolumn{2}{c|}{Methods} & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
\multirow{4}{*} &CL & 83.47 & 55.78 & 83.89 \\
&AU-CL & 83.39 & 55.31 & \textbf{84.43} \\
&HN-CL & 83.67 & 55.87 & 83.27 \\
&CCT ($K=1$) & \textbf{83.73} & \textbf{56.52} & 83.90 \\ \midrule
\multirow{2}{*}&CMC ($K=4$) & 85.54 & 58.64 & 84.50 \\
&CCT ($K=4$) & \textbf{86.54} & \textbf{59.41} & \textbf{85.59} \\ \bottomrule
\end{tabular}
}
\vspace{-3.5mm}
\end{table}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Experiments: On the effect of conditional transport map}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_cifar10.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/cifar10_evolution.pdf}
}
\caption{CIFAR-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar10}
\end{figure}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_cifar100.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/cifar100_evolution.pdf}
}
\caption{CIFAR-100 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar100}
\end{figure}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_stl10.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/stl10_evolution.pdf}
}
\caption{STL-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_stl10}
\end{figure}
\begin{figure}
\centering
{
\includegraphics[width=.47\textwidth]{visualization/cct_weights_p.pdf}}\vspace{-3mm}\\
{
\includegraphics[width=.485\textwidth]{visualization/cct_weights_n.pdf}}
\caption{Illustration of positive/negative samples and their corresponding weights.}
\label{fig:visualization_of_samples}
\end{figure}
\end{frame}
\begin{frame}{Experiments: Linear classification on class-imbalanced datasets}
\begin{table}[t]
\vspace{-2.5mm}
\centering
\caption{The classification accuracy ($\%$) of different contrastive objectives on class-imbalanced datasets. ``Linear'' and ``Exponential'' indicate that the number of samples in each class is chosen by following a linear rule or an exponential rule, respectively. The performance drops compared with the performance in Table~\ref{tab:performance_small} are shown next to each result.}
\label{tab:performance_imbalance}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{c|cc|cc}
\toprule
{Imbalance} & \multicolumn{2}{c|}{Linear} & \multicolumn{2}{c}{Exponential} \\ \hline
{Dataset} & CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\ \midrule
CL & $79.88_{3.59\downarrow}$ & $52.29_{3.57\downarrow}$ & $71.74_{11.73\downarrow}$ & $43.29_{12.57\downarrow}$ \\
AU-CL & $80.25_{3.14\downarrow}$ & $52.74_{2.57\downarrow}$ & $71.62_{11.76\downarrow}$ & $44.38_{10.93\downarrow}$ \\
HN-CL & $\textbf{80.51}_{3.15\downarrow}$ & $52.72_{3.14\downarrow}$ & $72.74_{10.93\downarrow}$ & $45.13_{10.73\downarrow}$ \\
CCT ($K=1$) & $80.46_{3.27\downarrow}$ & $\textbf{54.12}_{2.40\downarrow}$ & $\textbf{73.02}_{10.71\downarrow}$ & $\textbf{46.59}_{9.93\downarrow}$ \\ \midrule
CMC ($K=4$) & $82.20_{3.34\downarrow}$ & $55.38_{3.26\downarrow}$ & $74.77_{10.77\downarrow}$ & $48.87_{9.77\downarrow}$ \\
CCT ($K=4$) & $\textbf{83.62}_{2.92\downarrow}$ & $\textbf{56.91}_{2.50\downarrow}$ & $\textbf{75.89}_{10.65\downarrow}$ & $\textbf{50.17}_{9.24\downarrow}$ \\ \bottomrule
\end{tabular}
} \vspace{-2.5mm}
\end{table}
\end{frame}
\begin{frame}{Experiments: Linear classification on large-scale datasets}
\begin{table}[t]
\vspace{-2.mm}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives with the MoCo-v2 framework on the ImageNet dataset. All methods apply MoCo-v2 with ResNet50. Results quoted from the original papers or GitHub pages are marked by $\star$.
}
\label{tab:performance_large}
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{1.0mm}
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{Methods} & ImageNet-100 & ImageNet-1K \\ \midrule
\multirow{4}{*} &CL & $77.54^\star$ & $67.50^\star$ \\
&AU-CL & $77.66^\star$ & $67.69^\star$ \\
&HN-CL & $76.34$ & $67.41$ \\
&CMC ($K=1$) & $75.80^\star$ & $66.20^\star$ \\
&CCT ($K=1$) & $\textbf{79.40}$ & $\textbf{68.40}$ \\ \midrule
&CMC ($K=4$) & 78.84 & $-$ \\
&CCT ($K=4$) & $\textbf{80.46}$ & $\textbf{70.35}$ \\ \bottomrule
\end{tabular}
\vspace{-3mm}
\end{table}
\end{frame}
\begin{frame}{\Large Experiments: Object detection and segmentation on ImageNet-1K}
\begin{table}[ht]
\centering
\caption{Results of transferring features to the object detection and segmentation tasks on Pascal VOC and COCO, with MoCo-v2 ResNet50 pre-trained on ImageNet-1K. The results of the CL loss are quoted from the corresponding papers and online GitHub pages.}
\label{table:detection_and_segmenation}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{1.0}{
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
Task & \multicolumn{6}{c|}{Object Detection} & \multicolumn{3}{c}{Object Segmentation} \\ \hline
Dataset & \multicolumn{3}{c|}{Pascal VOC} & \multicolumn{3}{c|}{COCO} & \multicolumn{3}{c}{COCO} \\ \hline
Loss & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ \\ \midrule
CL & 57.00 & 82.40 & 63.60 & 40.90 &60.53 & 44.30 &35.73 &57.29 &\textbf{38.20} \\% & \textbf{16.70} & 39.27 & 52.97 \\
AU-CL & 57.24 & 82.49 & 63.83 & 41.01 & 60.68 & 44.40 &35.56 &57.38 &37.93 \\% & 15.78 & 39.28 & 52.88 \\
CCT ($K=1$) & \textbf{57.75} & \textbf{82.76} & \textbf{64.23} & \textbf{41.08} & \textbf{60.80} & \textbf{44.84} &\textbf{35.74} & \textbf{57.50} &38.07 \\ \midrule
CCT ($K=4$) & \textbf{57.91} & \textbf{82.83} & \textbf{64.85} & \textbf{41.50} & \textbf{61.11} & \textbf{45.30} & \textbf{36.08} & \textbf{57.95} & \textbf{38.68}
\\ \bottomrule
\end{tabular}
}}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: Conditional Transport Map}
\begin{table}[ht]
\centering
\caption{Linear classification performance (\%) of different variants of our method. ``CCT'' represents the standard CCT configuration, ``w/o $\pi_{\boldsymbol{\theta}}^{+}$'' means without the positive transport map, and ``w/o $\pi_{\boldsymbol{\theta}}^{-}$'' means without the negative transport map. ``CUT'' indicates the Contrastive Uniform Transport discussed in Equation~\eqref{eq: CL-transport}, \textit{i.e.}, without both the positive and negative transport maps. This experiment is done on all small-scale datasets, with $K=4$ and mini-batch size $M=128$.}
\label{tab:different_variant}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{cccc}
\toprule
Methods & CIFAR-10 & CIFAR-100 & STL-10 \\ \hline
CCT & \textbf{85.94} & \textbf{59.51} & \textbf{85.59} \\ \hline
w/o $\pi_{\boldsymbol{\theta}}^{+}$ & 85.22 & 58.74 & 85.06 \\
w/o $\pi_{\boldsymbol{\theta}}^{-}$ & 78.49 & 47.88 & 72.94 \\ \hline
CUT & 77.17 & 44.24 & 71.88 \\ \bottomrule
\end{tabular}}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: On the sampling size}
\begin{figure}[ht]
\subfloat[CIFAR-10]
{
\includegraphics[width=0.3\textwidth]{batch_size/cifar10_sampling_size.pdf}
}\hfill
\subfloat[CIFAR-100]
{
\includegraphics[width=0.3\textwidth]{batch_size/cifar100_sampling_size.pdf}
}\hfill
\subfloat[STL-10]
{
\includegraphics[width=0.3\textwidth]{batch_size/stl10_sampling_size.pdf}
}
\caption{The linear classification results of training with different sampling sizes on small-scale datasets. The training batch size is proportional to the negative sampling size.}
\label{figure:sampling_size}
\end{figure}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Ablation studies: On the Effects of Hyper-parameter $t^{+}$, $t^{-}$}
Recall the hyper-parameters in the definitions of the transport maps:
$$
\textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{t^{+}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\|^2} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}{\int e^{t^{+}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\|^2}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+};\quad \textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-t^{-}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)\|^2} p(\boldsymbol{x}^-)}{\int e^{-t^{-}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)\|^2}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}.$$
\begin{table}[ht]
\centering
\caption{The classification accuracy (\%) of CCT ($K=4,~M=128$) with different values of the hyper-parameter $t^{+}$ on small-scale datasets.}
\label{tab:hyperparameter_pos}
\begin{tabular}{c|c|cccccc}
\toprule
Method & Dataset & 0.5 & 0.7 & 0.9 & 1.0 & 2.0 & 3.0 \\ \midrule
\multirow{3}{*}{CCT ($K=4$)} & CIFAR-10 & 86.07 & 85.78 & 85.90 & \textbf{86.54} & 84.85 & 84.76 \\
& CIFAR-100 & 59.47 & \textbf{59.61} & 59.41 & 59.41 & 57.82 & 57.55 \\
& STL-10 & 85.90 & \textbf{85.91} & 85.81 & 85.59 & 85.65 & 85.14 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{The classification accuracy (\%) of CCT ($K=1,~M=768$) and CCT ($K=4,~M=128$) with different values of the hyper-parameter $t^{-}$ on small-scale datasets.}
\label{tab:hyperparameter_neg}
\begin{tabular}{c|c|cccccc}
\toprule
Methods & Dataset & 0.5 & 0.7 & 0.9 & 1.0 & 2.0 & 3.0 \\ \midrule
\multirow{3}{*}{CCT ($K=1$)} & CIFAR-10 & 81.66 & 82.40 & 83.07 & 82.74 & \textbf{83.73} & 83.11 \\
& CIFAR-100 & 51.42 & 52.81 & 53.36 & 54.20 & 56.21 & \textbf{56.52} \\
& STL-10 & 80.37 & 81.47 & 81.89 & 82.16 & 83.55 & \textbf{83.90} \\ \midrule
\multirow{3}{*}{CCT ($K=4$)} & CIFAR-10 & 85.67 & 86.19 & \textbf{86.54} & 86.41 & 85.94 & 85.69 \\
& CIFAR-100 & 58.17 & 58.63 & 59.37 & 59.35 & \textbf{59.41} & 59.31 \\
& STL-10 & 83.81 & 84.42 & 84.71 & 85.25 & \textbf{85.59} & 85.41 \\ \bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: On the effects of different cost metrics}
\begin{align}
\label{eq:Euclidean_cost}
\text{Euclidean cost: } c(f_{\theta}(\boldsymbol{x}),f_{\theta}(\boldsymbol{y}))=||f_{\theta}(\boldsymbol{x})-f_{\theta}(\boldsymbol{y})||_{2}^{2}.
\end{align}
\begin{align}
\label{eq:RBF_neg}
\text{RBF cost: } c_\mathrm{RBF}(f_{\theta}(\boldsymbol{x}),f_{\theta}(\boldsymbol{y}))=-e^{-t||f_{\theta}(\boldsymbol{x})-f_{\theta}(\boldsymbol{y})||_{2}^{2}}.
\end{align}
\begin{table}
\centering
\caption{The classification accuracy ($\%$) of CCT ($K=1$) and CCT ($K=4$) with different cost metrics on CIFAR-10, CIFAR-100 and STL-10. Euclidean indicates the cost defined in Equation~\ref{eq:Euclidean_cost}, and RBF indicates the cost defined in Equation~\ref{eq:RBF_neg}.}
\label{tab:different_cost_metrics}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{1.0}{
\begin{tabular}{c|c|ccc}
\toprule
Methods & Cost Metric & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{CCT$(K=1)$}} & Euclidean & 83.73 & 56.21 & 83.55 \\ \cline{2-5}
\multicolumn{1}{c|}{} & RBF & 83.08 & 55.90 & 84.20 \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{CCT$(K=4)$}} & Euclidean & 85.94 & \textbf{59.41} & 85.59 \\ \cline{2-5}
\multicolumn{1}{c|}{} & RBF & \textbf{86.20} & 58.81 & \textbf{85.80} \\ \bottomrule
\end{tabular}
}}
\end{table}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Ablation studies: Feature space visualization}
\begin{figure}
\subfloat[Epoch 1]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_0.pdf}
}\hfill
\subfloat[Epoch 20]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_20.pdf}
}\hfill
\subfloat[Epoch 200]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_200.pdf}
}\\
\subfloat[Epoch 1]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_0.pdf}
}\hfill
\subfloat[Epoch 20]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_20.pdf}
}\hfill
\subfloat[Epoch 200]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_200.pdf}
\label{figure:tsne_epoch}
}\vspace{-2mm}
\caption{The $t$-SNE visualization of the latent space at different training epochs, learned by CL loss (\textit{top}) and CCT loss (\textit{bottom}). The picked query is marked in green, with its positive samples marked in blue and its negative samples marked in red. The circle with radius $t^{-}$ is shown as the black dashed line.
}
\label{figure:tsne}
\end{figure}
\end{frame}
\section{Summary and Discussions}
\begin{frame}{Summary and Discussion}
\begin{itemize}
\item Randomly selecting positive and negative samples for a query limits the performance of contrastive representation learning.
\item The contrastive conditional transport (CCT) loss constructs two contradicting conditional transport maps for a random query.
\item CCT combines the independent and random sampling convention with the practice of contrastively reweighing both positive and negative samples according to their distances to the query.
\item CCT shows consistently better performance and can be applied in future work on representation learning.
\end{itemize}
\end{frame}
\begin{frame}[allowframebreaks,noframenumbering]
\frametitle{References}
\bibliographystyle{amsalpha}
\end{frame}
\section{Introduction}
The conventional Contrastive Learning (CL) loss~\cite{oord2018representation,poole2018variational} has achieved remarkable success in representation learning, benefiting downstream tasks in a variety of areas~\cite{misra2020self,He_2020_CVPR,chen2020simple,fang2020cert,Giorgi2020DeCLUTRDC}. Recent approaches mainly apply the conventional CL loss to make the encoder distinguish each positive sample from multiple negative samples. In image representation learning, this scheme is widely used to encourage the encoder to learn representations that are invariant to unnecessary details in the representation space, for which the unit hypersphere is the most common assumption~\cite{wang2017normface,davidson2018hyperspherical,hjelm2018learning,tian2019contrastive,bachman2019learning}. Meanwhile, the contrast with negative samples is demystified as avoiding the collapse issue, where the encoder outputs a trivial constant, and uniformly distributing samples on the hypersphere~\cite{wang2020understanding}. To improve the quality of the contrast, various methods, such as a large negative memory bank~\cite{chen2020mocov2}, hard negative mining~\cite{chuang2020debiased,kalantidis2020hard}, and strong or multi-view augmentations~\cite{chen2020simple,tian2019contrastive}, have been proposed and succeed in learning powerful representations. Since the conventional CL loss achieves the one-vs-many contrast with a softmax cross-entropy loss, a notable concern is still that the contrast could be sensitive to the sampled positive and negative pairs~\cite{saunshi2019theoretical,chuang2020debiased}. Given a sampled query, a conventional CL method usually randomly takes one positive sample, and equally treats all the other negative samples, regardless of how informative they are to the query. The sampled positive pair could make the contrast either easy or difficult, and trivially selecting hard negative pairs could make the optimization inflexible and inefficient.
\begin{figure}[t]
\centering
\begin{minipage}[b]{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{misc/motiv_ours3.pdf}
\end{minipage}
\vspace{-5.5mm}
\caption{\small Comparison of conventional contrastive learning (CL) and the proposed Contrastive Attraction and Contrastive Repulsion (CACR) framework. For conventional CL, given a query, the model randomly takes one positive sample to form a positive pair and compares it against multiple negative pairs, with all samples equally treated. For CACR, using multiple positive and negative pairs, the weight of a sample (indicated by point scale) is contrastively computed to allow the query to not only more strongly pull more distant positive samples, but also more strongly push away closer negative samples.
}
\label{fig:motiv}
\vspace{-4.5mm}
\end{figure}
An alternative intuition of the CL loss is that given a query, its positive sample needs to be close, while its negative ones need to be far away in the representation space. This motivates the construction of Contrastive Attraction and Contrastive Repulsion (CACR), a new CL framework where the positive and negative samples are first contrasted within themselves before being pulled toward and pushed away from the query, respectively. As shown in Figure~\ref{fig:motiv}, unlike conventional CL, which equally treats samples and pulls/pushes them with the softmax cross-entropy contrast, CACR considers both the cost of moving positive samples close and that of moving negative samples away. Moreover, CACR applies a double-contrast strategy to contrast the positive samples and negative ones within themselves separately, where we formulate the contrasts as the conditional probabilities of moving a given query to different samples. Specifically, if a selected positive sample is far away from the query, it indicates the encoder does not sufficiently capture some information, and CACR will assign a higher probability for the query to pull this positive sample. Conversely, if a selected negative sample is too close to the query, it indicates the encoder has difficulty distinguishing them, and CACR will assign a higher probability for the query to push away this negative sample. This double-contrast method determines whether a positive/negative sample is easy or hard to move, making the optimization more flexible.
We further provide theoretical analysis of CACR's properties to show the connection and difference between CACR and conventional CL methods. We also justify the effectiveness of the doubly contrastive strategy from both theoretical and empirical perspectives. Our main contributions include:
1) We propose CACR, a CL framework where the positive and negative samples first contrast within themselves respectively, with the importance of positive and negative samples modeled by two separate conditional distributions. With the weight of a sample indicating how informative it is to the query, CACR is able to attract positive samples close and repel negative samples away from the query in a more efficient and adaptive way. 2)~CACR produces useful representations by minimizing the expected cost of attracting the positive samples towards the query while maximizing that of pushing the negative samples away from the query.
The doubly contrastive strategy is realized by modeling two conditional distributions for the intra-contrasts within positive and negative samples. Our theoretical and empirical analyses show that these two conditional distributions help make contrastive attraction and contrastive repulsion more effective and robust than in conventional CL.
3) Our experiments show that CACR consistently outperforms conventional CL methods in a variety of settings, achieving state-of-the-art results on standard vision tasks over various benchmark datasets.
\section{Related work}\label{sec:related_work}
Plenty of unsupervised representation learning~\cite{bengio2013representation} methods have been developed to learn good data representations, \textit{e.g.,} PCA~\cite{tipping1999probabilistic}, RBM~\cite{hinton2006reducing}, VAE~\cite{kingma2013auto}. Among them, CL~\cite{oord2018representation} is investigated as a lower bound of mutual information in early stage \cite{gutmann2010noise,hjelm2018learning}. Recently, many studies reveal that the effectiveness of CL is not just attributed to the maximization of mutual information \cite{tschannen2019mutual,tian2019crd}, motivating various works to demystify the contrastive learning scheme.
\textbf{Contrastive representation learning.}
In vision tasks, SimCLR~\cite{chen2020simple,chen2020big} studies extensive augmentations for positive and negative samples and intra-batch-based negative sampling. A memory bank that caches representations~\cite{wu2018unsupervised} and a momentum update strategy are introduced to enable the use of an enormous number of negative samples~\cite{He_2020_CVPR,chen2020mocov2}. CL has also been developed in learning representations for text~\cite{logeswaran2018efficient}, sequential data \cite{oord2018representation,henaff2019data}, structural data like graphs \cite{sun2019infograph,li2019graph,hassani2020contrastive,velickovic2019deep}, reinforcement learning~\cite{srinivas2020curl}, and few-shot scenarios~\cite{khosla2020supervised,sylvain2020locality}. Along with the success of these designs, works like~\citet{wang2020understanding} reveal the contrastive scheme is optimizing the alignment of positive samples and the uniformity of negative pairs in the limit of an infinite number of negative samples.
\textbf{Sample selection for CL.}
How to construct samples in CL has also been widely studied. For positive samples, \citet{chen2020simple,chen2020mocov2} propose to apply image perturbations. \citet{tian2019contrastive,tian2020makes} consider the image views in different modalities and minimize the irrelevant mutual information between them. Most works on negative selection observe the merits of using ``hard'' negative samples, motivating the introduction of additional techniques, such as Mixup and adversarial noise~\cite{bose2018adversarial,cherian2020representation,li2020self}. In a view that not all negative pairs are ``true'' negatives~\cite{saunshi2019theoretical}, \citet{chuang2020debiased} propose a decomposition of the data distribution to approximate the true negative distribution. RingCL~\cite{wu2020conditional} proposes to use ``neither too hard nor too easy'' negative samples chosen by predefined percentiles, and HN-CL~\cite{robinson2020contrastive} applies Monte-Carlo sampling for selecting hard negative samples. The conditional distributions in CACR share the spirit of such sampling weights, but the objective is not limited to contrasting positives and negatives in a one-vs-many mechanism as RingCL and HN-CL do. An advantage of CACR is the flexibility of its learnable conditional distributions, which are not heuristically defined based on prior beliefs. Compared to HN-CL~\cite{robinson2020contrastive}, which is inspired by the philosophy of hard negative mining, CACR is motivated by contrastively pulling positives and contrastively pushing negatives according to how informative they are to the query. This yields two distinct loss functions, shown in Section~\ref{sec:CACR}, and we further empirically explore the difference between sample-selection-based CL and CACR in Section~\ref{sec:experiments}.
\section{The proposed approach}
In CL, for observations $\boldsymbol{x}_{0:M} \sim p_\mathrm{data}(\boldsymbol{x})$, we commonly assume that each $\boldsymbol{x}_i$ can be transformed in certain ways, with the samples transformed from the same and different data regarded as positive and negative samples, respectively. Specifically, we denote $\mathcal{T}(\boldsymbol{x}_i,\epsilon_i)$ as a random transformation of $\boldsymbol{x}_i$, where $\epsilon_i \sim p(\epsilon)$ represents the randomness injected into the transformation. In computer vision, $\epsilon_i$ often represents a composition of random cropping, color jitter, Gaussian blurring, \textit{etc.} For each $\boldsymbol{x}_0$, with query $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$, we sample a positive pair $(\boldsymbol{x}, \boldsymbol{x}^+)$, where $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$, and $M$ negative pairs $\{(\boldsymbol{x}, \boldsymbol{x}^-_i)\}_{1:M}$, where $\boldsymbol{x}^-_i = \mathcal{T}(\boldsymbol{x}_{i}, \epsilon^-_{i})$. Denote $\tau\in \mathbb{R}^+$, where $\mathbb{R}^+:=\{x:x>0\}$, as a temperature parameter. With encoder $f_{\boldsymbol{\theta}}: \mathbb{R}^n \rightarrow \mathcal{S}^{d-1}$, where we follow the convention to restrict the learned $d$-dimensional features with a unit norm, we desire to have similar and distinct representations for positive and negative pairs, respectively, via the contrastive loss as
\begin{align}
\label{eq: CL}
\mathop{\mathbb{E}}_{\substack{(\boldsymbol{x}, \boldsymbol{x}^+, \boldsymbol{x}^-_{1:M}) }} \left[ \textstyle - \ln \frac{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}}{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}+{\mathop{\sum}_{i}} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}_{i}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau}} \right].
\end{align}
Note by construction, the positive sample $\boldsymbol{x}^+$ is independent of $\boldsymbol{x}$ given $\boldsymbol{x}_0$ and the negative samples $\boldsymbol{x}_i^-$ are independent of $\boldsymbol{x}$.
Intuitively, this 1-vs-$M$ softmax cross-entropy encourages the encoder to not only pull the representation of a randomly selected positive sample closer to that of the query, but also push the representations of $M$ randomly selected negative samples away from that of the query.
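For concreteness, the following is a minimal PyTorch sketch of this loss; it assumes unit-normalized features and uses the other queries in the mini-batch as negatives, and the function and variable names are illustrative rather than taken from any released implementation.
\begin{verbatim}
# Minimal sketch of the 1-vs-M contrastive (CL) loss; z and z_pos are
# [M, d] unit-normalized features of the queries and their positives.
import torch
import torch.nn.functional as F

def cl_loss(z, z_pos, tau=0.5):
    M = z.shape[0]
    pos = (z * z_pos).sum(dim=1, keepdim=True) / tau        # [M, 1]
    sim = (z @ z.t()) / tau                                 # [M, M]
    mask = ~torch.eye(M, dtype=torch.bool, device=z.device)
    neg = sim[mask].view(M, M - 1)                          # drop self-pairs
    logits = torch.cat([pos, neg], dim=1)                   # positive at index 0
    labels = torch.zeros(M, dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}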
\begin{figure*}[t]
\centering
\includegraphics[width=11cm]{misc/model_architecture.pdf}
\vspace{-9.pt}
\caption{\small Illustration of the CACR framework. The encoder extracts features from samples and the conditional distributions help weigh the samples differently given the query, according to the distance of a query $\boldsymbol{x}$ and its contrastive samples $\boldsymbol{x}^+,\boldsymbol{x}^-$. $\otimes$ means element-wise multiplication between costs and conditional weights.
}
\label{figure:model_architecture}
\vspace{-10pt}
\end{figure*}
\subsection{Contrastive attraction and contrastive repulsion} \label{sec:CACR}
In the same spirit of letting the query attract positive samples and repel negative samples, Contrastive Attraction and Contrastive Repulsion (CACR) directly models the cost of moving from the query to positive/negative samples with a doubly contrastive strategy:
\begin{align}
\centering
\mathcal{L}_\mathrm{CACR}&:=\underbrace{\mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x}, \boldsymbol{x}_0)} \left[c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right]}_\textbf{Contrastive Attraction}
+
\underbrace{ \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})} \left[-c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right]}_\textbf{Contrastive Repulsion} \notag \\
&=: \mathcal{L}_\mathrm{CA} + \mathcal{L}_\mathrm{CR}, \label{eq:CACR_loss}
\end{align}
where we denote $\boldsymbol{\pi}^+$ and $\boldsymbol{\pi}^-$ as the conditional distributions of intra-positive contrasts and intra-negative contrasts, respectively, and $c(\boldsymbol{z}_1,\boldsymbol{z}_2)$ as the point-to-point cost of moving between two vectors $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$, \textit{e.g.}, the squared Euclidean distance $\|\boldsymbol{z}_{1}-\boldsymbol{z}_{2}\|^{2}$ or the negative inner product $-\boldsymbol{z}_{1}^{\top}\boldsymbol{z}_{2}$. In the following we explain the doubly contrastive components in more detail.
\textbf{Contrastive attraction}: The intra-positive contrast is defined in the form of a conditional probability, under which the positive samples compete to gain a larger probability of being reached from the query. Here we adapt to CACR a Bayesian strategy in~\citet{zheng2020act}, which exploits the combination of an energy-based likelihood term and a prior distribution, to quantify the difference between two implicit probability distributions given their empirical samples. Specifically, denoting {$d_{t^{+}}(\cdot, \cdot)$ as a distance metric with temperature $t^{+}\in \mathbb{R}^+$,} \textit{e.g.}, $d_{t^{+}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{+}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}$, given a query $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$, we define the conditional probability of moving it to positive samples $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$:
\ba{
&\small \textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}
{Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0)},~~ Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0):=
{\int e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+}, \label{eq: CT-positive}}
where $f_{\boldsymbol{\theta}}(\cdot)$ is an encoder parameterized by $\boldsymbol{\theta}$ and $Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0)$ is the normalization term. This construction makes it more likely to pull $\boldsymbol{x}$ towards a positive sample that is more distant in their latent representation space. With Eqn.~\eqref{eq: CT-positive}, the contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ measures the expected cost of moving a query to its positive samples, as defined in Eqn.~\eqref{eq:CACR_loss}, which more heavily weighs $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))$ if $f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)$ are further away from each~other.
\textbf{Contrastive repulsion}: In contrast to the contrastive attraction in Eqn.~\eqref{eq: CT-positive}, we define the conditional probability of moving query $\boldsymbol{x}$ to a negative sample as
\ba{
&\small \textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))} p(\boldsymbol{x}^-)}{Q^-(\boldsymbol{x})},~~~~Q^-(\boldsymbol{x}):={\int e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}, \label{eq: CT-negative}
}
where $t^{-}\in \mathbb{R}^+$ is the temperature. This construction makes it more likely to move query $\boldsymbol{x}$ to a negative sample that is closer to it in their representation space. With Eqn.~\eqref{eq: CT-negative}, the contrastive repulsion loss $\mathcal{L}_\mathrm{CR}$ measures the expected cost of repelling negative samples from the query, as shown in Eqn.~\eqref{eq:CACR_loss}, which more heavily weighs $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))$ if $f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ are closer to each other.
\textbf{Choice of $c(\cdot, \cdot)$, $d_{t^{+}}(\cdot, \cdot)$ and $d_{t^{-}}(\cdot, \cdot)$.} There could be various choices for the point-to-point cost function $c(\cdot, \cdot)$, distance metric $d_{t^{+}}(\cdot, \cdot)$ in Eqn.~\eqref{eq: CT-positive}, and $d_{t^{-}}(\cdot, \cdot)$ in Eqn.~\eqref{eq: CT-negative}. Considering the encoder $f_{\boldsymbol{\theta}}$ outputs normalized vectors on the surface of a hypersphere, maximizing the inner product is equivalent
to minimizing squared Euclidean distance. Without loss of generality, we define them~as
\baa{\nonumber
& c(\boldsymbol{z}_1, \boldsymbol{z}_2) = \| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,~~~ d_{t^{+}}(\boldsymbol{z}_1,\boldsymbol{z}_2) =t^{+} \| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,~~~
d_{t^{-}}(\boldsymbol{z}_1,\boldsymbol{z}_2) = t^{-}\| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,
}
where $t^{+}, t^{-} \in \mathbb{R}^+$. There are other choices for $c(\cdot, \cdot)$ and we show the ablation study in Section~\ref{sec:cost_choice}.
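For concreteness, the default squared Euclidean cost and the RBF-style alternative compared in Section~\ref{sec:cost_choice} can be sketched as follows; this is a minimal PyTorch sketch with our own function names, and the bandwidth \texttt{t} is an assumed hyper-parameter.
\begin{verbatim}
# Minimal sketches of two cost choices on unit-normalized features:
# the default squared Euclidean cost, and an RBF-style cost.
import torch

def cost_euclidean(z1, z2):
    return ((z1 - z2) ** 2).sum(-1)

def cost_rbf(z1, z2, t=1.0):
    return -torch.exp(-t * ((z1 - z2) ** 2).sum(-1))
\end{verbatim}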
\subsection{Mini-batch based stochastic optimization}\label{sec:empirical_CPP}
Under the CACR loss as in Eqn.~\eqref{eq:CACR_loss}, to make the learning of $f_{\boldsymbol{\theta}}(\boldsymbol{\cdot})$ amenable to mini-batch stochastic gradient descent (SGD) based optimization,
we draw $(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i)\sim p_{data}(\boldsymbol{x}) p(\epsilon)$ for $i=1,\ldots,M$ and then approximate the distribution of the query using an empirical distribution of $M$ samples as
$$
\hat p(\boldsymbol{x}) = \textstyle \frac{1}{M} \sum_{i=1}^M \delta_{\boldsymbol{x}_i},~\boldsymbol{x}_{i} =\mathcal{T}(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i).
$$
With query $\boldsymbol{x}_i$ and
$\epsilon_{1:K} \stackrel{iid}\sim p(\epsilon)$,
we approximate $p(\boldsymbol{x}_i^-)$ for Eqn.\,\eqref{eq: CT-negative} and $p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i)$ for Eqn.\,\eqref{eq: CT-positive}~as
\ba{
\textstyle\hat p(\boldsymbol{x}_i^-) = \frac{1}{M-1} \sum_{j\neq i} \delta_{\boldsymbol{x}_j},~~~\hat p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i) =\textstyle \frac{1}{K}\sum_{k=1}^K \delta_{\boldsymbol{x}^+_{ik}},~~~ \boldsymbol{x}^+_{ik} = \mathcal{T}(\boldsymbol{x}^{\rm{data}}_i,\epsilon_k).\label{eq:x+}
}
Note we may improve the accuracy of $\hat p(\boldsymbol{x}_i^-)$ in Eqn.~\eqref{eq:x+} by adding previous queries into the support of this empirical distribution. Other more sophisticated ways to construct negative samples~\cite{oord2018representation,He_2020_CVPR,khosla2020supervised} could also be adopted to define $\hat p(\boldsymbol{x}_i^-)$. We will elaborate on these points when describing the experiments.
Plugging Eqn.~\eqref{eq:x+} into Eqn.~\eqref{eq: CT-positive} and Eqn.~\eqref{eq: CT-negative}, we can approximate the conditional distributions as
\baa{
&\textstyle\hat \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i}\,|\, \boldsymbol{x}_i,\boldsymbol{x}^{\rm{data}}_i) := \sum_{k=1}^K \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik'}))}}\delta_{\boldsymbol{x}_{ik}^+}, \\
&
\textstyle\hat \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-_{i} \,|\, \boldsymbol{x}_i) := \sum_{j\neq i} \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_j))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\delta_{\boldsymbol{x}_j},
\notag
}
which leads to a mini-batch based CACR loss as $\hat{\mathcal L}_{\text{CACR}}=\hat {\mathcal{L}}_\mathrm{CA} + \hat{\mathcal{L}}_\mathrm{CR}$, where
\ba{
\textstyle\hat{\mathcal{L}}_\mathrm{CA} & := \textstyle \frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K \textstyle \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i{k'}}))}}
{\times c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))},\notag\\
\textstyle\hat{\mathcal{L}}_\mathrm{CR} & := - \textstyle \frac{1}{M} \sum_{i=1}^M \sum_{j\neq i} \textstyle\frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\times
{c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))}\notag.
}
We optimize $\boldsymbol{\theta}$ via SGD using $\nabla_{\boldsymbol{\theta}} \hat{\mathcal{L}}_{\text{CACR}}$, with the framework instantiated as in Figure~\ref{figure:model_architecture}.
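For reference, the following is a minimal PyTorch sketch of $\hat{\mathcal{L}}_\mathrm{CA}+\hat{\mathcal{L}}_\mathrm{CR}$ with the squared Euclidean cost, where the other queries in the batch serve as negatives; it is an illustrative sketch under these assumptions, not our released implementation.
\begin{verbatim}
# Minimal sketch of the mini-batch CACR loss; z is [M, d] (queries) and
# z_pos is [M, K, d] (K positive views per query), both unit-normalized.
import torch

def cacr_loss(z, z_pos, t_pos=1.0, t_neg=1.0):
    # Contrastive attraction: farther positives get larger weights (e^{+d}).
    d_pos = ((z.unsqueeze(1) - z_pos) ** 2).sum(-1)     # [M, K]
    w_pos = torch.softmax(t_pos * d_pos, dim=1)         # empirical pi^+
    loss_ca = (w_pos * d_pos).sum(dim=1).mean()
    # Contrastive repulsion: closer negatives get larger weights (e^{-d}).
    d_neg = torch.cdist(z, z) ** 2                      # [M, M]
    logits = -t_neg * d_neg
    logits.fill_diagonal_(float('-inf'))                # exclude self-pairs
    w_neg = torch.softmax(logits, dim=1)                # empirical pi^-
    loss_cr = -(w_neg * d_neg).sum(dim=1).mean()
    return loss_ca + loss_cr
\end{verbatim}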
\begin{wraptable}{r}{.7\columnwidth}
\vspace{-13pt}
\caption{Comparison with representative CL methods. $K$ and $M$ denote the number of positive and negative samples, respectively.}\label{tab:comparison}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{.7\columnwidth}{!}{
\begin{tabular}{c|c|cc}
\toprule \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Contrast Loss} & {Intra-positive} & {Intra-negative} \\
& & contrast & contrast \\ \hline
CL~\cite{oord2018representation} & 1-vs-$M$ cross-entropy & \XSolidBrush & \XSolidBrush \\
AU-CL~\cite{wang2020understanding} & 1-vs-$M$ cross-entropy & \XSolidBrush & \XSolidBrush \\
HN-CL~\cite{robinson2020contrastive} & 1-vs-$M$ cross-entropy & \XSolidBrush & \Checkmark \\
CMC~\cite{tian2019contrastive} & $\binom{K}{2}$ $\times$ (1-vs-$M$ cross-entropy) & \XSolidBrush & \XSolidBrush \\ \hline
CACR (ours) & Intra-$K$-positive vs Intra-$M$-negative & \Checkmark & \Checkmark
\\ \hline \bottomrule
\end{tabular}
}
}
\vspace{-12pt}
\end{wraptable}
\textbf{Relation with CL}:
As shown in Eqn.~\eqref{eq:CACR_loss}, with both the contrastive attraction component and the contrastive repulsion component, the CACR loss shares the same intuition as conventional CL in pulling positive samples closer to, and pushing negative samples away from, the query in their representation space. However, CACR realizes this intuition by introducing the double-contrast strategy on the point-to-point moving cost, where the contrasts appear in the intra-comparison within positive and negative samples, respectively. The use of the double-contrast strategy clearly distinguishes the CACR loss in Eqn.~\eqref{eq:CACR_loss} from the conventional CL loss in Eqn.~\eqref{eq: CL}, which typically relies on a softmax-based contrast formed with a single positive sample and multiple equally-weighted independent negative samples. A summary of the comparison between some representative CL losses and CACR is shown in Table~\ref{tab:comparison}.
\section{Property analysis of CACR}
\subsection{On the contrastive attraction}
We first analyze the effects \textit{w.r.t.} the positive samples. With contrastive attraction, the property below suggests that the optimal encoder produces representations invariant to the noisy details.
\begin{property}\label{theorem: pos-unif}
The contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ is optimized if and only if all positive samples of a query share the same representation as that query. More specifically, for a query $\boldsymbol{x}$ transformed from $\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})$, its positive samples share the same representation as it, which means
\ba{ f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) = f_{\boldsymbol{\theta}}(\boldsymbol{x}) ~\text{ for any }~\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+\,|\, \boldsymbol{x}, \boldsymbol{x}_0).
\label{eq:th1}
}
\end{property}
This property coincides with the characteristic (learning invariant representations) of the CL loss in \citet{wang2020understanding} when achieving the optimum. However, the optimization dynamics in contrastive attraction evolve in the context of $\boldsymbol{x}^+ \sim \boldsymbol{\pi}_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x}, \boldsymbol{x}_0)$, which is different from that in CL.
\begin{lemma}\label{theorem: pos-sampling}
Let us instantiate $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))=-f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)$. Then, the contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ in Eqn.~\eqref{eq:CACR_loss} can be re-written as
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)} \right],\notag
}
which could further reduce to the alignment loss
${\textstyle \mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})} \mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \right]}\notag
$ in \cite{wang2020understanding}, \emph{iff} ${\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)} = {p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}$.
\end{lemma}
Property~\ref{theorem: pos-unif} and Lemma~\ref{theorem: pos-sampling} jointly show that contrastive attraction in CACR and the alignment loss in CL reach the same optimum, while operating under different sampling mechanisms. In practice $\boldsymbol{x}^+$ and $\boldsymbol{x}$ are usually independently sampled augmentations in a mini-batch, as shown in Section~\ref{sec:empirical_CPP}, which creates a gap between the empirical distribution and the true distribution. Our method makes the alignment more efficient by considering the intra-relation of these positive samples to the query.
\subsection{On the contrastive repulsion}
Next we analyze the effects \textit{w.r.t.} the contribution of negative samples. \citet{wang2020understanding} reveals that a perfect encoder will uniformly distribute samples on a hypersphere under a uniform isometric assumption, \textit{i.e.}, for any uniformly sampled $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$, their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ also satisfy $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$. We follow their assumption to analyze contrastive repulsion via the following lemma.
\begin{lemma}\label{theorem: neg-unif} Without loss of generality, we define the moving cost and the metric in the conditional distribution as $c(\boldsymbol{z}_1, \boldsymbol{z}_2) = d(\boldsymbol{z}_1, \boldsymbol{z}_2) = \| \boldsymbol{z}_1- \boldsymbol{z}_2 \|_2^2$. Under a uniform prior, namely $p(\boldsymbol{x})=p(\boldsymbol{x}^-)$ for any $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$ and $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$ for their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$,
optimizing $\boldsymbol{\theta}$ with $\mathcal{L}_\mathrm{CR}$ in Eqn.~\eqref{eq:CACR_loss} is the same as optimizing $\boldsymbol{\theta}$ to minimize the mutual information between $\boldsymbol{x}$ and $\boldsymbol{x}^-$:
\ba{
I(X;X^-)= \textstyle\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x}) }} \left[ \ln \frac{\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x})}{p(\boldsymbol{x}^-)} \right],\!\! \label{eq:I}
}
and is also the same as optimizing $\boldsymbol{\theta}$ to maximize the conditional differential entropy of $\boldsymbol{x}^-$ given $\boldsymbol{x}$:
\ba{
\label{eq:entropy}
\mathcal H(X^-\,|\, X) = \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})}[-\ln \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})].
}
Here the minimizer $\boldsymbol{\theta}^\star$ of $\mathcal{L}_\mathrm{CR}$ is also that of $I(X;X^-)$, whose global minimum of zero is attained \emph{iff} $X$ and $X^-$ are independent, and the equivalent maximum of $\mathcal H(X^-\,|\, X)$ indicates that the optimization of $\mathcal{L}_\mathrm{CR}$ is essentially aimed at the uniformity of the representations of negative samples.
\end{lemma}
We notice that one way to reach the optimum suggested in the above lemma is to optimize $\boldsymbol{\theta}$ by contrastive repulsion until, for any $\boldsymbol{x}\sim p(\boldsymbol{x})$, $d(f_{\boldsymbol{\theta}}(\boldsymbol{x}),f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))$ is equal for all $\boldsymbol{x}^-\sim \boldsymbol{\pi}_{\boldsymbol{\theta}}^-(\boldsymbol{\cdot} \,|\, \boldsymbol{x})$.
This means that for any sampled negatives, their representations are also uniformly distributed after contrastive repulsion. Interestingly, this is consistent with the uniformity property achieved by CL~\cite{wang2020understanding}, which connects contrastive repulsion with CL from the perspective of negative sample effects.
Note that, although the above analysis builds upon the uniform isometric assumption, our method actually does not rely on it. Here, we formalize a more general relation between the contrastive repulsion and the contribution of negative samples in CL without this assumption as follows.
\begin{lemma}
\label{theorem:CL_CPP_MI}
As the number of negative samples $M$ goes to infinity, the contribution of the negative samples to the CL loss becomes the Uniformity Loss in AU-CL~\cite{wang2020understanding}, termed $\mathcal{L}_\mathrm{uniform}$ for simplicity. It can be expressed as an upper bound of $\mathcal{L}_\mathrm{CR}$ by adding the mutual information $I(X;X^-)$ in Eqn.~\eqref{eq:I}:
$$\underbrace{\mathop{\mathbb{E}}\nolimits_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} \left[ \ln {\mathop{\mathbb{E}}\nolimits_{\boldsymbol{x}^- \sim p(\boldsymbol{x}^-) }} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau} \right]}_{\mathcal{L}_\mathrm{uniform}}
+ I(X;X^-)
\geqslant \mathcal{L}_\mathrm{CR}.$$
\vspace{-4mm}
\end{lemma}
As shown in Lemma~\ref{theorem:CL_CPP_MI}, the mutual information $I(X;X^-)$ helps quantify the difference between $\mathcal{L}_\mathrm{uniform}$ and $\mathcal{L}_\mathrm{CR}$. The difference between drawing $\boldsymbol{x}^-\sim \pi_{\boldsymbol{\theta}}^-(\boldsymbol{x}^- \,|\, \boldsymbol{x})$ (in CR) and drawing $\boldsymbol{x}^-$ independently in a mini-batch (in CL) is non-trivial as long as $I(X;X^-)$ is non-zero. In practice, this is true almost everywhere, since we have to handle skewed data distributions in real-world applications, \textit{e.g.}, label-shift scenarios~\cite{garg2020unified}. In this view, CR does not require the representation space to be uniform as CL does, and is more robust in complex cases through considering the intra-contrastive relation within negative samples.
\section{Experiments and analysis}\label{sec:experiments}
We compare the performance of the CACR loss with representative CL methods, divided into two categories according to their positive sampling size: $K=1$ and $K=4$. For methods with a single positive sample ($K=1$), the baseline methods include the conventional CL loss~\cite{oord2018representation,logeswaran2018efficient,chen2020learning,He_2020_CVPR}, the AlignUniform CL loss (AU-CL)~\cite{wang2020understanding}, and the non-debiased version of the CL loss with hard negative sampling (HN-CL)~\cite{robinson2020contrastive}. In the case of $K=4$, we take the contrastive multi-view coding (CMC) loss~\cite{tian2019contrastive} (aligned with our augmentation settings, using augmented views instead of channels) as the comparison baseline.
For a fair comparison, on each dataset we keep the same experimental settings for all methods, including learning rate, mini-batch size, and training epochs, but use each method's best temperature parameters. Please refer to Appendix~\ref{appendix:experiment_details} for other detailed experiment setups.
We conduct experiments on five image datasets of varying sizes, including CIFAR-10, CIFAR-100~\cite{hinton2007learning}, and STL-10~\cite{coates2011analysis} as small-scale datasets, and ImageNet-100 and ImageNet-1K~\cite{deng2009imagenet} as large-scale ones. Note that ImageNet-100 is a subset of ImageNet-1K, where 100 classes are randomly selected from the standard ImageNet-1K dataset, and here we keep the same classes as commonly used in CL works~\cite{tian2019contrastive,wang2020understanding}. For small-scale datasets, we follow SimCLR to construct negative samples as the views augmented from different images within a batch. Moreover, we create two class-imbalanced CIFAR datasets as empirical verification of our theoretical analysis. For large-scale datasets, we follow MoCo-v2~\cite{chen2020mocov2} to maintain a queue of negative samples, updated with the momentum-based mechanism. To evaluate the learned representations, following the widely used linear classification protocol, the pre-trained encoder is fixed and a linear classifier is added on top of it for classification. Here we report the top-1 validation accuracy on these datasets. We also report the results of object detection/segmentation following the transfer learning protocol. The reported numbers for baselines are from the original papers if available; otherwise we report the best ones fine-tuned with the settings of their corresponding papers.
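As a reference for this protocol, below is a minimal PyTorch sketch of the linear probe; \texttt{encoder}, \texttt{loader}, and the hyper-parameter values are placeholders for the reader's own setup rather than the settings used in our experiments.
\begin{verbatim}
# Minimal sketch of linear evaluation: freeze the pre-trained encoder
# and train only a linear classifier on top of its fixed features.
import torch
import torch.nn as nn
import torch.nn.functional as F

def linear_probe(encoder, feat_dim, num_classes, loader, epochs=10, lr=1e-3):
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                feats = encoder(x)              # fixed representations
            loss = F.cross_entropy(clf(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
\end{verbatim}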
\subsection{Linear classification on small-scale datasets}
\textbf{Classification accuracy:}
For small-scale datasets, we apply all methods with an AlexNet-based encoder following the setting in \citet{wang2020understanding}, trained for 200 epochs, and with a ResNet50 encoder following the setting in \citet{robinson2020contrastive}. The results with the AlexNet-based and ResNet50-based encoders are summarized in Table~\ref{tab:performance_small} and Table~\ref{tab:performance_small_resnet} (Appendix), respectively. We observe that in the case of $K=1$, where the intra-positive contrast of CACR degenerates, CACR slightly outperforms all CL methods. With ResNet50, CACR outperforms them by a larger margin. Moreover, when $K=4$, it is interesting to observe an obvious boost in performance, where CMC improves over CL by around 2-3\% while CACR improves over CL by around 3-4\%. This supports our analysis that CA is helpful when the intra-positive contrast does not degenerate.
\textbf{On the effect of CA and CR:}
To understand the efficacy of the contrasts within positive and negative samples, we illustrate in Figure~\ref{figure:train_entropy_acc_cifar10} (Left) the evolution of the conditional entropy $\mathcal{H}(X^-|X)$ and the classification accuracy \textit{w.r.t.} the training epoch. In each epoch, we calculate the conditional entropy with Eqn.~\eqref{eq:entropy} on every mini-batch of size $M=512$ and take the average across mini-batches. As shown in Figure~\ref{figure:train_entropy_acc_cifar10}, $\mathcal{H}(X^-|X)$ is maximized as the encoder is optimized. As shown in Lemma~\ref{theorem: neg-unif}, under the uniform data prior assumption, the optimization of CL and CACR encourages the encoder to maximize $\mathcal{H}(X^-|X)$. It is also interesting to observe that in the case with multiple positive samples, the gap between CACR and CMC is much larger in terms of the conditional entropy. This implies the CA module can further boost the repulsion of negative samples. Although CMC uses multiple positives in the CL loss, its lack of intra-positive contrast results in a gap in repulsion efficiency.
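The per-mini-batch estimate of $\mathcal{H}(X^-|X)$ can be computed directly from the intra-negative weights; below is a minimal sketch in our notation, consistent with the CACR sketch in Section~\ref{sec:empirical_CPP}, not the released evaluation code.
\begin{verbatim}
# Minimal sketch of the mini-batch estimate of H(X^- | X): the entropy of
# the intra-negative weights pi^- per query, averaged over the batch.
import torch

def conditional_entropy(z, t_neg=1.0, eps=1e-12):
    d_neg = torch.cdist(z, z) ** 2
    logits = -t_neg * d_neg
    logits.fill_diagonal_(float('-inf'))
    w = torch.softmax(logits, dim=1)               # pi^-(x_j | x_i)
    ent = -(w * (w + eps).log()).sum(dim=1)        # entropy per query
    return ent.mean()
\end{verbatim}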
As shown in Figure~\ref{figure:train_entropy_acc_cifar10} (Right), CACR consistently outperforms the other methods in linear classification with the learned representations at the same epoch, indicating a superior learning efficiency of CACR. See Appendix~\ref{appendix:additional_experiment} for similar observations on CIFAR-10 and CIFAR-100.
As a qualitative verification, we randomly take a query from a mini-batch and illustrate its positive and negative samples and their conditional probabilities in Figure~\ref{fig:visualization_of_samples}. As shown, given this query of a dog image, the positive sample with the largest weight contains partial dog information, encouraging the encoder to focus on texture information; the negatives with larger weights are more related to the dog category, which encourages the encoder to focus on distinguishing these ``hard'' negative samples. Overall, the weights learned by CACR enjoy better interpretability than those of conventional CL.
\begin{table}[t]
\begin{minipage}[t]{.46\columnwidth}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives under the SimCLR framework on small-scale datasets. All methods follow the SimCLR setting and apply an AlexNet-based encoder. The results of CL and AU-CL on STL-10 are quoted from \citet{wang2020understanding}. }\vspace{-0.2mm}
\label{tab:performance_small}
\renewcommand{\arraystretch}{1.}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{0.96\columnwidth}{!}{
\begin{tabular}{cc|ccc}
\toprule
\multicolumn{2}{c|}{Methods} & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
&CL & 83.47 & 55.41 & 83.89 \\
&AU-CL & 83.39 & 55.31 & 84.43 \\
&HN-CL & 83.67 & 55.87 & 83.27 \\
&CACR ($K=1$) & \textbf{83.73} & \textbf{56.52} & \textbf{84.51} \\ \midrule
&CMC ($K=4$) & 85.54 & 58.64 & 84.50 \\
&CACR ($K=4$) & \textbf{86.54} & \textbf{59.41} & \textbf{85.59} \\ \bottomrule
\end{tabular}
}
}
\end{minipage}\hfill
\begin{minipage}[t]{.52\columnwidth}
\centering
\caption{The classification accuracy ($\%$) of different contrastive objectives on class-imbalanced datasets. ``Linear'' and ``Exponential'' indicate that the number of samples in each class is chosen following a linear or an exponential rule, respectively. The performance drop relative to Table~\ref{tab:performance_small} is shown next to each result.}
\label{tab:performance_imbalance}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|cc|cc}
\toprule
{Imbalance} & \multicolumn{2}{c|}{Linear} & \multicolumn{2}{c}{Exponential} \\ \hline
{Dataset} & CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\ \midrule
CL & $79.88_{3.59\downarrow}$ & $52.29_{3.57\downarrow}$ & $71.74_{11.73\downarrow}$ & $43.29_{12.57\downarrow}$ \\
AU-CL & $80.25_{3.14\downarrow}$ & $52.74_{2.57\downarrow}$ & $71.62_{11.76\downarrow}$ & $44.38_{10.93\downarrow}$ \\
HN-CL & $\textbf{80.51}_{3.15\downarrow}$ & $52.72_{3.14\downarrow}$ & $72.74_{10.93\downarrow}$ & $45.13_{10.73\downarrow}$ \\
CACR ($K=1$) & $80.46_{3.27\downarrow}$ & $\textbf{54.12}_{2.40\downarrow}$ & $\textbf{73.02}_{10.71\downarrow}$ & $\textbf{46.59}_{9.93\downarrow}$ \\ \midrule
CMC ($K=4$) & $82.20_{3.34\downarrow}$ & $55.38_{3.26\downarrow}$ & $74.77_{10.77\downarrow}$ & $48.87_{9.77\downarrow}$ \\
CACR ($K=4$) & $\textbf{83.62}_{2.92\downarrow}$ & $\textbf{56.91}_{2.50\downarrow}$ & $\textbf{75.89}_{10.65\downarrow}$ & $\textbf{50.17}_{9.24\downarrow}$ \\ \bottomrule
\end{tabular}
}}
\end{minipage}\hfill
\end{table}
\begin{figure}[t]
\vspace{-1mm}
\centering
{
\!\includegraphics[width=0.5\textwidth]{entropy/entropy_stl10.pdf}\includegraphics[width=0.42\textwidth]{training_evolution/stl10_evolution.pdf} \vspace{-3mm}
}
\caption{\small STL-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar10}
\vspace{-3.2mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{visualization/sampling_visualization.pdf}\vspace{-6mm}
\caption{\small Illustration of positive/negative samples and their corresponding weights. (\textit{Left}) For a query augmented from the original dog image, 4 positive samples are shown, with their weights visualized as the blue distribution. (\textit{Right}) The sampling weights of negatives are visualized as the red distribution; we visualize 4 negative samples with the highest and 4 with the lowest weights, with their original images shown below.}
\label{fig:visualization_of_samples}
\vspace{-1.0mm}
\end{figure}
\subsection{Linear classification on class-imbalanced datasets}
To verify the robustness of CACR in comparison with that of CL when this uniform prior assumption is violated, we create two class-imbalanced datasets with CIFAR-10 and CIFAR-100. Such datasets are created by randomly sampling a certain number of samples from each class with a ``linear'' or ``exponential'' rule, following the setting in~\cite{kim2020imbalanced}. Specifically, given a dataset with $C$ classes, for class $l\in \{1,2,...,C\}$, we randomly take samples with proportion $\frac{l}{C}$ under the ``linear'' rule and with a proportion scaling as $\exp(\frac{l}{C})$ under the ``exponential'' rule. Once the dataset is sampled, it is fixed during training. For evaluation we keep the standard validation/testing datasets. Thus there is a label shift between the training and testing data distributions.
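For illustration, below is a minimal NumPy sketch of this per-class subsampling; the exponential proportion is rescaled by $1/e$ so that it lies in $(0,1]$, which is our reading of the rule rather than the verbatim recipe of~\cite{kim2020imbalanced}.
\begin{verbatim}
# Minimal sketch of class-imbalanced subsampling: keep a fraction l/C of
# class l under the linear rule, or a fraction proportional to exp(l/C)
# (rescaled by 1/e) under the exponential rule.
import numpy as np

def imbalanced_indices(labels, rule="linear", seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    C = len(classes)
    keep = []
    for l, c in enumerate(classes, start=1):
        idx = np.flatnonzero(labels == c)
        frac = l / C if rule == "linear" else np.exp(l / C) / np.e
        n = max(1, int(frac * len(idx)))
        keep.append(rng.choice(idx, size=n, replace=False))
    return np.concatenate(keep)
\end{verbatim}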
Summarized in Table~\ref{tab:performance_imbalance} are the results on class-imbalanced datasets, which show that all methods have a performance drop compared to the results in Table~\ref{tab:performance_small}. It is clear that CACR has the least performance decline in most cases. In particular, when $K=4$, CACR shows better performance robustness due to the characteristic of the doubly contrastive strategy within positive and negative samples. For example, in the ``exponential'' setting of CIFAR-100, CL and HN-CL drop 12.57\% and 10.73\%, respectively, while CACR ($K=4$) drops 9.24\%. It is also interesting to observe that HN-CL is relatively better among the baseline methods. According to \citet{robinson2020contrastive}, in HN-CL the negative samples are sampled according to their ``hardness'' \textit{w.r.t.} the query samples with an intra-negative contrast. Its loss could converge to CACR ($K=1$) with infinite negative samples. This performance gap indicates that directly optimizing the CACR loss could be superior when we have a limited number of samples. With these class-imbalanced datasets, we provide empirical support for our analysis: when the condition in Lemma~\ref{theorem: neg-unif} is violated, CACR shows a clearer difference from CL and better robustness with its unique doubly contrastive strategy within positive and negative samples.
\begin{table}[t]
\begin{minipage}[t]{.39\columnwidth}
\centering
\caption{Top-1 classification accuracy ($\%$) of different objectives with the MoCo-v2 framework and a ResNet50 encoder on the ImageNet datasets. Results quoted from the corresponding papers or GitHub pages are marked by $\star$.
}
\label{tab:performance_large}
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{1.0mm}
\resizebox{0.88\columnwidth}{!}{
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{Methods} & ImageNet-100 & ImageNet-1K \\ \midrule
&CL & $77.54^\star$ & $67.50^\star$ \\
&AU-CL & $77.66^\star$ & $67.69^\star$ \\
&HN-CL & $76.34$ & $67.41$ \\
&CMC ($K=1$) & $75.80^\star$ & $66.20^\star$ \\
&CACR ($K=1$) & $\textbf{79.40}$ & $\textbf{68.40}$ \\ \midrule
&CMC ($K=4$) & 78.84 & $69.45$ \\
&CACR ($K=4$) & $\textbf{80.46}$ & $\textbf{70.35}$ \\ \bottomrule
\end{tabular}
}
\end{minipage}\hfill
\begin{minipage}[t]{.59\columnwidth}
\caption{Results of transferring features to object detection and segmentation tasks on Pascal VOC and COCO, with the MoCo-v2 ResNet50 pre-trained on ImageNet-1K. The results of the CL loss on Pascal VOC are quoted from the corresponding papers and GitHub pages.}
\label{table:detection_and_segmenation} \vspace{2mm}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{0.71}{
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
Task & \multicolumn{6}{c|}{Object Detection} & \multicolumn{3}{c}{Object Segmentation} \\ \hline
Dataset & \multicolumn{3}{c|}{Pascal VOC} & \multicolumn{3}{c|}{COCO} & \multicolumn{3}{c}{COCO} \\ \hline
Loss & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ \\ \midrule
CL & 57.00 & 82.40 & 63.60 & 40.90 &60.53 & 44.30 &35.73 &57.29 &\textbf{38.20} \\
AU-CL & 57.24 & 82.49 & 63.83 & 41.01 & 60.68 & 44.40 &35.56 &57.38 &37.93 \\
CACR ($K=1$) & \textbf{57.75} & \textbf{82.76} & \textbf{64.23} & \textbf{41.08} & \textbf{60.80} & \textbf{44.84} &\textbf{35.74} & \textbf{57.50} &38.07 \\ \midrule
CACR ($K=4$) & \textbf{57.91} & \textbf{82.83} & \textbf{64.85} & \textbf{41.50} & \textbf{61.11} & \textbf{45.30} & \textbf{36.08} & \textbf{57.95} & \textbf{38.68}
\\ \bottomrule
\end{tabular}
}}
\end{minipage}\hfill
\end{table}
\subsection{Linear classification on large-scale datasets}
For large-scale experiments, following the convention, we adapt all methods into the MoCo-v2 framework and pre-train a ResNet50 encoder for 200 epochs with mini-batch size 128/256 on ImageNet-100/ImageNet-1K. Table~\ref{tab:performance_large} summarizes the results of linear classification on these two large-scale datasets. Similar to the case on small-scale datasets, CACR consistently shows better performance, improving the baselines by at least 1.74\% on ImageNet-100 and 0.71\% on ImageNet-1K. In MoCo-v2, with multiple positive samples, CACR improves the baseline methods by 2.92\% on ImageNet-100 and 2.75\% on ImageNet-1K. It is worth highlighting that the improvement of CACR is more significant on these large-scale datasets, where the data distribution could be much more diverse than on the small-scale ones. This is not surprising, as according to our theoretical analysis, CACR's double contrast within samples enhances the effectiveness of the encoder's optimization. Moreover, we can see that CACR ($K=1$) shows a clear improvement over HN-CL. A possible explanation is that although both increasing the negative sample size and selecting hard negatives are proposed to improve the CL loss, the effectiveness of hard negatives is limited once the sampling size increases beyond a certain point. As CACR aims to repel the negative samples away, the conditional distribution still efficiently guides the repulsion when the sampling size becomes large.
\subsection{Object detection and segmentation}
A main goal of unsupervised learning is to sufficiently capture general and transferable representations. Besides the linear classification evaluation, we also transfer the pre-trained contrastive models as the initialization for fine-tuning in downstream tasks, such as object detection and segmentation. To evaluate the transferability of the learned features, following the protocols in previous works~\cite{tian2019contrastive,He_2020_CVPR,chen2020mocov2,wang2020understanding}, we use the ResNet50 pre-trained on ImageNet-1K for object detection and segmentation tasks on Pascal VOC~\cite{everingham2010pascal} and COCO~\cite{lin2014microsoft} using detectron2~\cite{wu2019detectron2}. The experimental setting details are shown in Appendix~\ref{sec:setting-largescale} and kept the same as in \citet{He_2020_CVPR} and \citet{chen2020mocov2}. The test AP, AP$_{50}$, and AP$_{75}$ of bounding boxes in object detection, and those of masks in segmentation, are reported in Table~\ref{table:detection_and_segmenation}. We can observe that the performance of CACR is consistently better than that of the other contrastive objectives. For example, compared to CL, the AP is improved by 0.91\% on Pascal VOC object detection and by 0.31\% on COCO object segmentation.
\section{Conclusion}
In this paper, we rethink the limitation of conventional contrastive learning (CL) methods that form the contrastive loss by randomly selecting positive and negative samples for a query. We introduce a novel Contrastive Attraction and Contrastive Repulsion (CACR) loss with a doubly contrastive strategy, which constructs for a random query two contradicting conditional distributions that model the importance of a positive sample and that of a negative sample, respectively, to the query. To form the contrastive loss, CACR combines the independent and random sampling convention with the practice of contrastively reweighing both positive and negative samples according to their distances to the query. Our theoretical analysis and empirical results show that optimizing the CACR loss can effectively attract positive samples and repel negative ones from the query, as CL intends to do, but is more robust in more general cases. Extensive experiments on small-scale, large-scale, and imbalanced datasets consistently demonstrate the superiority and robustness of CACR over the state-of-the-art methods in contrastive representation learning and related downstream tasks.
\bibliographystyle{unsrtnat}
\section{Introduction}
The conventional Contrastive Learning (CL) loss~\cite{oord2018representation,poole2018variational} has achieved remarkable success in representation learning, benefiting downstream tasks in a variety of areas~\cite{misra2020self,He_2020_CVPR,chen2020simple,fang2020cert,Giorgi2020DeCLUTRDC}. Recent approaches mainly apply the conventional CL loss to make the encoder distinguish each positive sample from among multiple negative samples. In image representation learning, this scheme is widely used to encourage the encoder to learn representations that are invariant to unnecessary details in the representation space, for which the unit hypersphere is the most common assumption~\cite{wang2017normface,davidson2018hyperspherical,hjelm2018learning,tian2019contrastive,bachman2019learning}. Meanwhile, the contrast with negative samples is demystified as avoiding the collapse issue, where the encoder outputs a trivial constant, and as uniformly distributing samples on the hypersphere~\cite{wang2020understanding}. To improve the quality of the contrast, various methods, such as a large negative memory bank~\cite{chen2020mocov2}, hard negative mining~\cite{chuang2020debiased,kalantidis2020hard}, and strong or multi-view augmentations~\cite{chen2020simple,tian2019contrastive}, have been proposed and succeed in learning powerful representations. Since the conventional CL loss achieves the one-vs-many contrast with a softmax cross-entropy loss, a notable concern remains that the contrast can be sensitive to the sampled positive and negative pairs~\cite{saunshi2019theoretical,chuang2020debiased}. Given a sampled query, a conventional CL method usually randomly takes one positive sample and equally treats all the other negative samples, regardless of how informative they are to the query. The sampled positive pair could make the contrast either easy or difficult, and trivially selecting hard negative pairs could make the optimization inflexible and inefficient.
\begin{figure}[t]
\centering
\begin{minipage}[b]{\columnwidth}
\centering
\includegraphics[width=\columnwidth]{misc/motiv_ours3.pdf}
\end{minipage}
\vspace{-5.5mm}
\caption{\small Comparison of conventional contrastive learning (CL) and the proposed Contrastive Attraction and Contrastive Repulsion (CACR) framework. For conventional CL, given a query, the model randomly takes one positive sample to form a positive pair and compares it against multiple negative pairs, with all samples equally treated. For CACR, using multiple positive and negative pairs, the weight of a sample (indicated by point scale) is contrastively computed to allow the query to not only more strongly pull more distant positive samples, but also more strongly push away closer negative samples.
}
\label{fig:motiv}
\vspace{-4.5mm}
\end{figure}
An alternative intuition of the CL loss is that given a query, its positive sample needs to be close, while its negative ones need to be far away in the representation space. This motivates the construction of Contrastive Attraction and Contrastive Repulsion (CACR), a new CL framework where the positive and negative samples are first contrasted within themselves before getting pulled and pushed, respectively, from the query. As shown in Figure~\ref{fig:motiv}, unlike conventional CL, which equally treats samples and pulls/pushes them with the softmax cross-entropy contrast, CACR considers both the cost of moving positive samples close and that of moving negative samples away. Moreover, CACR applies a double-contrast strategy to contrast the positive samples and negative ones within themselves separately, where we formulate the contrasts as the conditional probabilities of moving a given query to different samples. Specifically, if a selected positive sample is far away from the query, it indicates the encoder does not sufficiently capture some information, and CACR will assign higher probability for the query to pull this positive sample. Conversely, if a selected negative sample is too close to the query, it indicates the encoder has difficulty distinguishing them, and CACR will assign higher probability for the query to push away this negative sample. This double-contrast method determines if a positive/negative sample is easy or hard to move, making the optimization more flexible.
We further provide theoretical analysis of CACR's properties to show the connection and difference between CACR and conventional CL methods. We also justify the effectiveness of the doubly contrastive strategy from both theoretical and empirical perspectives. Our main contributions include:
1) We propose CACR, a CL framework where the positive and negative samples are first contrasted within themselves, respectively, with the importance of positive and negative samples modeled by two separate conditional distributions. With the weight of a sample indicating how informative it is to the query, CACR is able to attract positive samples close and repel negative samples away from the query in a more efficient and adaptive way. 2)~CACR produces useful representations by minimizing the expected cost of attracting the positive samples towards the query while maximizing that of pushing the negative samples away from the query.
The doubly contrastive strategy is realized by modeling two conditional distributions for the intra-contrasts within positive and negative samples. Our theoretical and empirical analyses show that these two conditional distributions make contrastive attraction and contrastive repulsion more effective and robust than in conventional CL.
3) Our experiments show that CACR consistently outperforms conventional CL methods in a variety of settings, achieving state-of-the-art results on standard vision tasks over various benchmark datasets.
\section{Related work}\label{sec:related_work}
Plenty of unsupervised representation learning~\cite{bengio2013representation} methods have been developed to learn good data representations, \textit{e.g.,} PCA~\cite{tipping1999probabilistic}, RBM~\cite{hinton2006reducing}, and VAE~\cite{kingma2013auto}. Among them, CL~\cite{oord2018representation} was initially investigated as a lower bound of mutual information \cite{gutmann2010noise,hjelm2018learning}. Recently, many studies have revealed that the effectiveness of CL is not just attributable to the maximization of mutual information \cite{tschannen2019mutual,tian2019crd}, motivating various works to demystify the contrastive learning scheme.
\textbf{Contrastive representation learning.}
In vision tasks, SimCLR~\cite{chen2020simple,chen2020big} studies extensive augmentations for positive and negative samples and intra-batch-based negative sampling. A memory bank that caches representations~\cite{wu2018unsupervised} and a momentum update strategy are introduced to enable the use of an enormous number of negative samples~\cite{He_2020_CVPR,chen2020mocov2}. CL has also been developed for learning representations of text~\cite{logeswaran2018efficient}, sequential data \cite{oord2018representation,henaff2019data}, structural data like graphs \cite{sun2019infograph,li2019graph,hassani2020contrastive,velickovic2019deep}, reinforcement learning~\cite{srinivas2020curl}, and few-shot scenarios~\cite{khosla2020supervised,sylvain2020locality}. Along with the success of these designs, works like~\citet{wang2020understanding} reveal that the contrastive scheme optimizes the alignment of positive samples and the uniformity of negative pairs in the limit of an infinite number of negative samples.
\textbf{Sample selection for CL.}
How to construct samples in CL has also been widely studied. For positive samples, \citet{chen2020simple,chen2020mocov2} propose to apply image perturbations. \citet{tian2019contrastive,tian2020makes} consider the image views in different modalities and minimize the irrelevant mutual information between them. Most works on negative selection observe the merits of using ``hard'' negative samples, motivating the introduction of additional techniques, such as Mixup and adversarial noise~\cite{bose2018adversarial,cherian2020representation,li2020self}. In the view that not all negative pairs are ``true'' negatives~\cite{saunshi2019theoretical}, \citet{chuang2020debiased} propose a decomposition of the data distribution to approximate the true negative distribution. RingCL~\cite{wu2020conditional} proposes to use ``neither too hard nor too easy'' negative samples selected by predefined percentiles, and HN-CL~\cite{robinson2020contrastive} applies Monte-Carlo sampling for selecting hard negative samples. The conditional distributions in CACR share the spirit of such sampling weights, but the objective is not limited to contrasting positives and negatives in a one-vs-many mechanism as RingCL and HN-CL do. An advantage of CACR is the flexibility of its learnable conditional distributions, which are not heuristically defined based on prior belief. Compared to HN-CL~\cite{robinson2020contrastive}, which is inspired by the philosophy of hard negative mining, CACR is motivated by contrastively pulling positives and contrastively pushing negatives according to how informative they are to the query. This yields two distinct loss functions shown in Section~\ref{sec:CACR}, and we further empirically explore the difference between sample-selection-based CL and CACR in Section~\ref{sec:experiments}.
\section{The proposed approach}
In CL, for observations $\boldsymbol{x}_{0:M} \sim p_\mathrm{data}(\boldsymbol{x})$, we commonly assume that each $\boldsymbol{x}_i$ can be transformed in certain ways, with the samples transformed from the same and different data regarded as positive and negative samples, respectively. Specifically, we denote $\mathcal{T}(\boldsymbol{x}_i,\epsilon_i)$ as a random transformation of $\boldsymbol{x}_i$, where $\epsilon_i \sim p(\epsilon)$ represents the randomness injected into the transformation. In computer vision, $\epsilon_i$ often represents a composition of random cropping, color jitter, Gaussian blurring, \textit{etc.} For each $\boldsymbol{x}_0$, with query $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$, we sample a positive pair $(\boldsymbol{x}, \boldsymbol{x}^+)$, where $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$, and $M$ negative pairs $\{(\boldsymbol{x}, \boldsymbol{x}^-_i)\}_{1:M}$, where $\boldsymbol{x}^-_i = \mathcal{T}(\boldsymbol{x}_{i}, \epsilon^-_{i})$. Denote $\tau\in \mathbb{R}^+$, where $\mathbb{R}^+:=\{x:x>0\}$, as a temperature parameter. With encoder $f_{\boldsymbol{\theta}}: \mathbb{R}^n \rightarrow \mathcal{S}^{d-1}$, where we follow the convention to restrict the learned $d$-dimensional features with a unit norm, we desire to have similar and distinct representations for positive and negative pairs, respectively, via the contrastive loss as
\begin{align}
\label{eq: CL}
\mathop{\mathbb{E}}_{\substack{(\boldsymbol{x}, \boldsymbol{x}^+, \boldsymbol{x}^-_{1:M}) }} \left[ \textstyle - \ln \frac{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}}{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}+{\mathop{\sum}_{i}} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}_{i}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau}} \right].
\end{align}
Note by construction, the positive sample $\boldsymbol{x}^+$ is independent of $\boldsymbol{x}$ given $\boldsymbol{x}_0$ and the negative samples $\boldsymbol{x}_i^-$ are independent of $\boldsymbol{x}$.
Intuitively, this 1-vs-$M$ softmax cross-entropy encourages the encoder to not only pull the representation of a randomly selected positive sample closer to that of the query, but also push the representations of $M$ randomly selected negative samples away from that of the query.
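To make the 1-vs-$M$ contrast concrete, below is a minimal PyTorch sketch of Eqn.~\eqref{eq: CL} for a mini-batch of pre-computed features; the tensor names and batch layout are our own illustrative assumptions rather than part of any reference implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce_loss(query, pos, negs, tau=0.5):
    """Conventional 1-vs-M contrastive loss.
    query: (B, d) features of the queries x
    pos:   (B, d) features of the positives x^+
    negs:  (B, M, d) features of the negatives x^-_i
    All features are assumed unit-norm, e.g. via F.normalize(z, dim=-1).
    """
    l_pos = (query * pos).sum(dim=-1, keepdim=True) / tau   # (B, 1)
    l_neg = torch.einsum('bd,bmd->bm', query, negs) / tau   # (B, M)
    logits = torch.cat([l_pos, l_neg], dim=1)               # (B, 1 + M)
    # The positive sits at index 0, so softmax cross-entropy with
    # all-zero labels recovers the 1-vs-M contrastive loss.
    labels = torch.zeros(query.size(0), dtype=torch.long,
                         device=query.device)
    return F.cross_entropy(logits, labels)
\end{verbatim}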
\begin{figure*}[t]
\centering
\includegraphics[width=11cm]{misc/model_architecture.pdf}
\vspace{-9.pt}
\caption{\small Illustration of the CACR framework. The encoder extracts features from samples, and the conditional distributions weigh the samples differently given the query, according to the distances between a query $\boldsymbol{x}$ and its contrastive samples $\boldsymbol{x}^+,\boldsymbol{x}^-$. $\otimes$ denotes element-wise multiplication between costs and conditional weights.
}
\label{figure:model_architecture}
\vspace{-10pt}
\end{figure*}
\subsection{Contrastive attraction and contrastive repulsion} \label{sec:CACR}
In the same spirit of letting the query attract positive samples and repel negative samples, Contrastive Attraction and Contrastive Repulsion (CACR) directly models the cost of moving from the query to positive/negative samples with a doubly contrastive strategy:
\begin{align}
\centering
\mathcal{L}_\mathrm{CACR}&:=\underbrace{\mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x}, \boldsymbol{x}_0)} \left[c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right]}_\textbf{Contrastive Attraction}
+
\underbrace{ \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})} \left[-c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right]}_\textbf{Contrastive Repulsion}, \notag \\
&=: \mathcal{L}_\mathrm{CA} + \mathcal{L}_\mathrm{CR} \label{eq:CACR_loss},
\end{align}
where we denote $\pi^+_{\boldsymbol{\theta}}$ and $\pi^-_{\boldsymbol{\theta}}$ as the conditional distributions of the intra-positive contrasts and intra-negative contrasts, respectively, and $c(\boldsymbol{z}_1,\boldsymbol{z}_2)$ as the point-to-point cost of moving between two vectors $\boldsymbol{z}_1$ and $\boldsymbol{z}_2$, \textit{e.g.}, the squared Euclidean distance $\|\boldsymbol{z}_{1}-\boldsymbol{z}_{2}\|^{2}$ or the negative inner product $-\boldsymbol{z}_{1}^{\top}\boldsymbol{z}_{2}$. In the following we explain the two contrastive components in more detail.
\textbf{Contrastive attraction}: The intra-positive contrast is defined in the form of a conditional probability, under which the positive samples compete for a larger probability of the query being moved to them. Here we adapt to CACR a Bayesian strategy from~\citet{zheng2020act}, which exploits the combination of an energy-based likelihood term and a prior distribution to quantify the difference between two implicit probability distributions given their empirical samples. Specifically, denoting {$d_{t^{+}}(\cdot, \cdot)$ as a distance metric with temperature $t^{+}\in \mathbb{R}^+$,} \textit{e.g.}, $d_{t^{+}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{+}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}$, given a query $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$, we define the conditional probability of moving it to positive samples $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$ as:
\ba{
&\small \textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}
{Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0)},~~ Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0):=
{\int e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+}, \label{eq: CT-positive}}
where $f_{\boldsymbol{\theta}}(\cdot)$ is an encoder parameterized by $\boldsymbol{\theta}$ and $Q^+(\boldsymbol{x} \,|\, \boldsymbol{x}_0)$ is a normalization term. This construction makes it more likely to pull $\boldsymbol{x}$ towards a positive sample that is more distant in the latent representation space. With Eqn.~\eqref{eq: CT-positive}, the contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ measures the expected cost of moving a query to its positive samples, as defined in Eqn.~\eqref{eq:CACR_loss}, and more heavily weighs $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))$ if $f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)$ are further away from each~other.
\textbf{Contrastive repulsion}: In contrast to the contrastive attraction in Eqn.~\eqref{eq: CT-positive}, we define the conditional probability of moving query $\boldsymbol{x}$ to a negative sample as
\ba{
&\small \textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))} p(\boldsymbol{x}^-)}{Q^-(\boldsymbol{x})},~~~~Q^-(\boldsymbol{x}):={\int e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}, \label{eq: CT-negative}
}
where $t^{-}\in \mathbb{R}^+$ is the temperature. This construction makes it more likely to move query $\boldsymbol{x}$ to a negative sample that is closer to it in the representation space. With Eqn.~\eqref{eq: CT-negative}, the contrastive repulsion loss $\mathcal{L}_\mathrm{CR}$ measures the expected cost of repelling negative samples from the query, as shown in Eqn.~\eqref{eq:CACR_loss}, and more heavily weighs $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))$ if $f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ are closer to each other.
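Over a finite candidate set, both conditional probabilities reduce to softmax weights over scaled squared distances (their empirical forms are derived in Section~\ref{sec:empirical_CPP}). The sketch below illustrates this reduction, assuming the squared Euclidean metric; the helper names are ours.
\begin{verbatim}
import torch

def positive_weights(z_query, z_pos, t_pos=1.0):
    """Empirical form of pi^+: softmax of +t^+ * ||z - z_k^+||^2
    over K positives; farther positives receive larger weight.
    z_query: (d,), z_pos: (K, d)."""
    d2 = ((z_pos - z_query) ** 2).sum(dim=-1)     # (K,)
    return torch.softmax(t_pos * d2, dim=0)

def negative_weights(z_query, z_neg, t_neg=1.0):
    """Empirical form of pi^-: softmax of -t^- * ||z - z_j^-||^2
    over M negatives; closer negatives receive larger weight.
    z_query: (d,), z_neg: (M, d)."""
    d2 = ((z_neg - z_query) ** 2).sum(dim=-1)     # (M,)
    return torch.softmax(-t_neg * d2, dim=0)
\end{verbatim}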
\textbf{Choice of $c(\cdot, \cdot)$, $d_{t^{+}}(\cdot, \cdot)$ and $d_{t^{-}}(\cdot, \cdot)$.} There are various possible choices for the point-to-point cost function $c(\cdot, \cdot)$, the distance metric $d_{t^{+}}(\cdot, \cdot)$ in Eqn.~\eqref{eq: CT-positive}, and $d_{t^{-}}(\cdot, \cdot)$ in Eqn.~\eqref{eq: CT-negative}. Since the encoder $f_{\boldsymbol{\theta}}$ outputs normalized vectors on the surface of a hypersphere, maximizing the inner product is equivalent
to minimizing squared Euclidean distance. Without loss of generality, we define them~as
\baa{\nonumber
& c(\boldsymbol{z}_1, \boldsymbol{z}_2) = \| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,~~~ d_{t^{+}}(\boldsymbol{z}_1,\boldsymbol{z}_2) =t^{+} \| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,~~~
d_{t^{-}}(\boldsymbol{z}_1,\boldsymbol{z}_2) = t^{-}\| \boldsymbol{z}_1-\boldsymbol{z}_2 \|^2_2,
}
where $t^{+}, t^{-} \in \mathbb{R}^+$. There are other choices for $c(\cdot, \cdot)$ and we show the ablation study in Section~\ref{sec:cost_choice}.
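The equivalence invoked above follows from the identity $\|\boldsymbol{z}_1-\boldsymbol{z}_2\|_2^2 = 2 - 2\boldsymbol{z}_1^{\top}\boldsymbol{z}_2$ for unit-norm vectors; the snippet below is a quick numerical check of this fact (our own illustration, not part of the method).
\begin{verbatim}
import torch
import torch.nn.functional as F

torch.manual_seed(0)
z1 = F.normalize(torch.randn(8, 128), dim=-1)  # unit-norm features
z2 = F.normalize(torch.randn(8, 128), dim=-1)

sq_dist = ((z1 - z2) ** 2).sum(dim=-1)   # ||z1 - z2||^2
inner = (z1 * z2).sum(dim=-1)            # z1^T z2
# On the unit hypersphere: ||z1 - z2||^2 = 2 - 2 * z1^T z2,
# so the two criteria differ only by an affine transformation.
assert torch.allclose(sq_dist, 2 - 2 * inner, atol=1e-5)
\end{verbatim}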
\subsection{Mini-batch based stochastic optimization}\label{sec:empirical_CPP}
Under the CACR loss as in Eqn.~\eqref{eq:CACR_loss}, to make the learning of $f_{\boldsymbol{\theta}}(\boldsymbol{\cdot})$ amenable to mini-batch stochastic gradient descent (SGD) based optimization,
we draw $(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i)\sim p_\mathrm{data}(\boldsymbol{x}) p(\epsilon)$ for $i=1,\ldots,M$ and then approximate the distribution of the query using an empirical distribution of $M$ samples as
$$
\hat p(\boldsymbol{x}) = \textstyle \frac{1}{M} \sum_{i=1}^M \delta_{\boldsymbol{x}_i},~\boldsymbol{x}_{i} =\mathcal{T}(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i).
$$
With query $\boldsymbol{x}_i$ and
$\epsilon_{1:K} \stackrel{iid}\sim p(\epsilon)$,
we approximate $p(\boldsymbol{x}_i^-)$ for Eqn.\,\eqref{eq: CT-negative} and $p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i)$ for Eqn.\,\eqref{eq: CT-positive}~as
\ba{
\textstyle\hat p(\boldsymbol{x}_i^-) = \frac{1}{M-1} \sum_{j\neq i} \delta_{\boldsymbol{x}_j},~~~\hat p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i) =\textstyle \frac{1}{K}\sum_{k=1}^K \delta_{\boldsymbol{x}^+_{ik}},~~~ \boldsymbol{x}^+_{ik} = \mathcal{T}(\boldsymbol{x}^{\rm{data}}_i,\epsilon_k).\label{eq:x+}
}
Note we may improve the accuracy of $\hat p(\boldsymbol{x}_i^-)$ in Eqn.~\eqref{eq:x+} by adding previous queries into the support of this empirical distribution. Other more sophisticated ways to construct negative samples~\cite{oord2018representation,He_2020_CVPR,khosla2020supervised} could also be adopted to define $\hat p(\boldsymbol{x}_i^-)$. We will elaborate on these points when describing the experiments.
Plugging Eqn.~\eqref{eq:x+} into Eqn.~\eqref{eq: CT-positive} and Eqn.~\eqref{eq: CT-negative}, we can approximate the conditional distributions as
\baa{
&\textstyle\hat \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i}\,|\, \boldsymbol{x}_i,\boldsymbol{x}^{\rm{data}}_i) := \sum_{k=1}^K \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik'}))}}\delta_{\boldsymbol{x}_{ik}^+}, \\
&
\textstyle\hat \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-_{i} \,|\, \boldsymbol{x}_i) := \sum_{j\neq i} \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_j))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\delta_{\boldsymbol{x}_j},
\notag
}
which leads to a mini-batch based CACR loss as $\hat{\mathcal L}_{\text{CACR}}=\hat {\mathcal{L}}_\mathrm{CA} + \hat{\mathcal{L}}_\mathrm{CR}$, where
\ba{
\textstyle\hat{\mathcal{L}}_\mathrm{CA} & := \textstyle \frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K \textstyle \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i{k'}}))}}
{\times c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))},\notag\\
\textstyle\hat{\mathcal{L}}_\mathrm{CR} & := - \textstyle \frac{1}{M} \sum_{i=1}^M \sum_{j\neq i} \textstyle\frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\times
{c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))}\notag.
}
We optimize $\boldsymbol{\theta}$ via SGD using $\nabla_{\boldsymbol{\theta}} \hat{\mathcal{L}}_{\text{CACR}}$, with the framework instantiated as in Figure~\ref{figure:model_architecture}.
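For concreteness, the following is a minimal PyTorch sketch of the mini-batch objective $\hat{\mathcal L}_{\text{CACR}}=\hat{\mathcal{L}}_\mathrm{CA} + \hat{\mathcal{L}}_\mathrm{CR}$ under the squared-Euclidean choices above. It assumes features are already encoded and normalized, constructs negatives from the other queries in the batch (the SimCLR-style setup used for the small-scale experiments), and omits details such as the momentum queue; all names are our own.
\begin{verbatim}
import torch

def cacr_loss(z_query, z_pos, t_pos=1.0, t_neg=1.0):
    """Mini-batch CACR loss sketch.
    z_query: (M, d) unit-norm features of the M queries x_i
    z_pos:   (M, K, d) features of the K positives per query
    Negatives of query i are the other M - 1 queries in the batch.
    """
    M = z_query.size(0)
    # Contrastive attraction: weights grow with positive distance.
    d2_pos = ((z_pos - z_query.unsqueeze(1)) ** 2).sum(dim=-1)  # (M, K)
    w_pos = torch.softmax(t_pos * d2_pos, dim=1)                # pi^+ hat
    loss_ca = (w_pos * d2_pos).sum(dim=1).mean()                # c = ||.||^2
    # Contrastive repulsion: weights grow as negatives get closer.
    d2_neg = torch.cdist(z_query, z_query) ** 2                 # (M, M)
    eye = torch.eye(M, dtype=torch.bool, device=z_query.device)
    logits = (-t_neg * d2_neg).masked_fill(eye, float('-inf'))
    w_neg = torch.softmax(logits, dim=1)                        # pi^- hat
    loss_cr = -(w_neg * d2_neg).sum(dim=1).mean()
    return loss_ca + loss_cr
\end{verbatim}
In practice, $\hat p(\boldsymbol{x}_i^-)$ can be enlarged by concatenating cached features from previous mini-batches (\textit{e.g.}, a MoCo-style queue) as extra columns of the negative distance matrix.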
\begin{wraptable}{r}{.7\columnwidth}
\vspace{-13pt}
\caption{Comparison with representative CL methods. $K$ and $M$ denote the number of positive and negative samples, respectively.}\label{tab:comparison}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{.7\columnwidth}{!}{
\begin{tabular}{c|c|cc}
\toprule \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Contrast Loss} & {Intra-positive} & {Intra-negative} \\
& & contrast & contrast \\ \hline
CL~\cite{oord2018representation} & 1-vs-$M$ cross-entropy & \XSolidBrush & \XSolidBrush \\
AU-CL~\cite{wang2020understanding} & 1-vs-$M$ cross-entropy & \XSolidBrush & \XSolidBrush \\
HN-CL~\cite{robinson2020contrastive} & 1-vs-$M$ cross-entropy & \XSolidBrush & \Checkmark \\
CMC~\cite{tian2019contrastive} & $\binom{K}{2}$ $\times$ (1-vs-$M$ cross-entropy) & \XSolidBrush & \XSolidBrush \\ \hline
CACR (ours) & Intra-$K$-positive vs Intra-$M$-negative & \Checkmark & \Checkmark
\\ \hline \bottomrule
\end{tabular}
}
}
\vspace{-12pt}
\end{wraptable}
\textbf{Relation with CL}:
As shown in Eqn.~\eqref{eq:CACR_loss}, with both the contrastive attraction component and the contrastive repulsion component, the CACR loss shares the same intuition as conventional CL in pulling positive samples closer to, and pushing negative samples away from, the query in the representation space. However, CACR realizes this intuition by introducing the double-contrast strategy on the point-to-point moving cost, where the contrasts appear in the intra-comparisons within the positive and negative samples, respectively. The use of the double-contrast strategy clearly distinguishes the CACR loss in Eqn.~\eqref{eq:CACR_loss} from the conventional CL loss in Eqn.~\eqref{eq: CL}, which typically relies on a softmax-based contrast formed with a single positive sample and multiple equally-weighted independent negative samples. A summary of the comparison between some representative CL losses and CACR is shown in Table~\ref{tab:comparison}.
\section{Property analysis of CACR}
\subsection{On the contrastive attraction}
We first analyze the effects \textit{w.r.t.} the positive samples. With contrastive attraction, the property below suggests that the optimal encoder produces representations invariant to the noisy details.
\begin{property}\label{theorem: pos-unif}
The contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ is optimized if and only if all positive samples of a query share the same representation as that query. More specifically, for query $\boldsymbol{x}$ that is transformed from $\boldsymbol{x}_0\sim p_\mathrm{data}(\boldsymbol{x})$, its positive samples share the same representation with it, which means
\ba{ f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) = f_{\boldsymbol{\theta}}(\boldsymbol{x}) ~\text{ for any }~\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+\,|\, \boldsymbol{x}, \boldsymbol{x}_0).
\label{eq:th1}
}
\end{property}
This property coincides with the characteristic (learning invariant representations) of the CL loss in \citet{wang2020understanding} when achieving the optimum. However, the optimization dynamics of contrastive attraction evolve in the context of $\boldsymbol{x}^+ \sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x}, \boldsymbol{x}_0)$, which is different from that in CL.
\begin{lemma}\label{theorem: pos-sampling}
Let us instantiate $c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))=-f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)$. Then, the contrastive attraction loss $\mathcal{L}_\mathrm{CA}$ in Eqn.~\eqref{eq:CACR_loss} can be re-written as
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)} \right],\notag
}
which could further reduce to the alignment loss
${\textstyle \mathbb{E}_{\boldsymbol{x}_0\sim p_\mathrm{data}(\boldsymbol{x})} \mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \right]}\notag
$ in \cite{wang2020understanding}, \emph{iff} ${\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)} = {p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}$.
\end{lemma}
Property~\ref{theorem: pos-unif} and Lemma~\ref{theorem: pos-sampling} jointly show that contrastive attraction in CACR and the alignment loss in CL reach the same optimum, while working with different sampling mechanisms. In practice $\boldsymbol{x}^+$ and $\boldsymbol{x}$ are usually independently sampled augmentations in a mini-batch, as shown in Section~\ref{sec:empirical_CPP}, which raises a gap between the empirical distribution and the true distribution. Our method makes the alignment more efficient by considering the intra-relation of these positive samples to the query.
\subsection{On the contrastive repulsion}
Next we analyze the effects \textit{w.r.t.} the contribution of negative samples. \citet{wang2020understanding} reveal that a perfect encoder will uniformly distribute samples on a hypersphere under a uniform isometric assumption, \textit{i.e.}, for any uniformly sampled $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$, their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ also satisfy $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$. We follow their assumption to analyze contrastive repulsion via the following lemma.
\begin{lemma}\label{theorem: neg-unif} Without loss of generality, we define the moving cost and the metric in the conditional distribution as $c(\boldsymbol{z}_1, \boldsymbol{z}_2) = d(\boldsymbol{z}_1, \boldsymbol{z}_2) = \| \boldsymbol{z}_1- \boldsymbol{z}_2 \|_2^2$. Under a uniform prior, namely $p(\boldsymbol{x})=p(\boldsymbol{x}^-)$ for any $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$ and $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$ for their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$,
optimizing $\boldsymbol{\theta}$ with $\mathcal{L}_\mathrm{CR}$ in Eqn.~\eqref{eq:CACR_loss} is the same as optimizing $\boldsymbol{\theta}$ to minimize the mutual information between $\boldsymbol{x}$ and $\boldsymbol{x}^-$:
\ba{
I(X;X^-)= \textstyle\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x}) }} \left[ \ln \frac{\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x})}{p(\boldsymbol{x}^-)} \right],\!\! \label{eq:I}
}
and is also the same as optimizing $\boldsymbol{\theta}$ to maximize the conditional differential entropy of $\boldsymbol{x}^-$ given $\boldsymbol{x}$:
\ba{
\label{eq:entropy}
\mathcal H(X^-\,|\, X) = \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})}[-\ln \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})].
}
Here the minimizer $\boldsymbol{\theta}^\star$ of $\mathcal{L}_\mathrm{CR}$ is also that of $I(X;X^-)$, whose global minimum of zero is attained \emph{iff} $X$ and $X^-$ are independent, and the equivalent maximization of $\mathcal H(X^-\,|\, X)$ indicates that the optimization of $\mathcal{L}_\mathrm{CR}$ is essentially aimed at the uniformity of the negative-sample representations.
\end{lemma}
We notice that one way to reach the optimum suggested in the above lemma is to optimize $\boldsymbol{\theta}$ with contrastive repulsion until, for any $\boldsymbol{x}\sim p(\boldsymbol{x})$, $d(f_{\boldsymbol{\theta}}(\boldsymbol{x}),f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))$ is equal for all $\boldsymbol{x}^-\sim \pi_{\boldsymbol{\theta}}^-(\boldsymbol{\cdot} \,|\, \boldsymbol{x})$.
This means for any sampled negative samples, their representations are also uniformly distributed after contrastive repulsion. Interestingly, this is consistent with the uniformity property achieved by CL~\cite{wang2020understanding}, which connects contrastive repulsion with CL in the perspective of negative sample effects.
Note that, although the above analysis builds upon the uniform isometric assumption, our method actually does not rely on it. Here, we formalize a more general relation between the contrastive repulsion and the contribution of negative samples in CL without this assumption as follows.
\begin{lemma}
\label{theorem:CL_CPP_MI}
As the number of negative samples $M$ goes to infinity, the contribution of the negative samples to the CL loss becomes the uniformity loss in AU-CL~\cite{wang2020understanding}, termed $\mathcal{L}_\mathrm{uniform}$ for simplicity. It can be expressed as an upper bound of $\mathcal{L}_\mathrm{CR}$ after adding the mutual information $I(X;X^-)$ in Eqn.~\eqref{eq:I}:
$$\underbrace{\mathop{\mathbb{E}}\nolimits_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} \left[ \ln {\mathop{\mathbb{E}}\nolimits_{\boldsymbol{x}^- \sim p(\boldsymbol{x}^-) }} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau} \right]}_{\mathcal{L}_\mathrm{uniform}}
+ I(X;X^-)
\geqslant \mathcal{L}_\mathrm{CR}.$$
\vspace{-4mm}
\end{lemma}
As shown in Lemma~\ref{theorem:CL_CPP_MI}, the mutual information $I(X;X^-)$ helps quantify the difference between $\mathcal{L}_\mathrm{uniform}$ and $\mathcal{L}_\mathrm{CR}$. The difference between drawing $\boldsymbol{x}^-\sim \pi_{\boldsymbol{\theta}}^-(\boldsymbol{x}^- \,|\, \boldsymbol{x})$ (in CR) and drawing $\boldsymbol{x}^-$ independently in a mini-batch (in CL) is non-trivial as long as $I(X;X^-)$ is non-zero. In practice, this is true almost everywhere, since we have to handle skewed data distributions in real-world applications, \textit{e.g.}, label-shift scenarios~\cite{garg2020unified}. In this view, CR does not require the representation space to be uniform as CL does, and is more robust in complex cases through considering the intra-contrastive relation within negative samples.
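For monitoring purposes, $\mathcal{L}_\mathrm{uniform}$ in Lemma~\ref{theorem:CL_CPP_MI} can be estimated on a mini-batch by treating the off-diagonal pairs as the independent negatives; a minimal sketch under that assumption is given below.
\begin{verbatim}
import torch

def uniformity_loss(z, tau=0.5):
    """Batch estimate of L_uniform = E_x[ln E_{x^-} exp(f(x)^T f(x^-)/tau)].
    z: (M, d) unit-norm features; the M - 1 other samples in the batch
    act as each query's independent negatives."""
    M = z.size(0)
    sim = (z @ z.t()) / tau                                   # (M, M)
    off_diag = ~torch.eye(M, dtype=torch.bool, device=z.device)
    neg_sim = sim.masked_select(off_diag).view(M, M - 1)
    # log-mean-exp over each query's M - 1 negatives
    log_mean_exp = torch.logsumexp(neg_sim, dim=1) \
        - torch.log(torch.tensor(float(M - 1), device=z.device))
    return log_mean_exp.mean()
\end{verbatim}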
\section{Experiments and analysis}\label{sec:experiments}
We compare the performance of the CACR loss with representative CL methods, divided into two categories according to their positive sampling size: $K=1$ and $K=4$. For methods with a single positive sample ($K=1$), the baseline methods include the conventional CL loss~\cite{oord2018representation,logeswaran2018efficient,chen2020learning,He_2020_CVPR}, the AlignUniform CL loss (AU-CL)~\cite{wang2020understanding}, and the non-debiased version of the CL loss with hard negative sampling (HN-CL)~\cite{robinson2020contrastive}. In the case of $K=4$, we take the contrastive multi-view coding (CMC) loss~\cite{tian2019contrastive} (aligned with our augmentation settings, using augmentation views instead of channels) as the comparison baseline.
For a fair comparison, on each dataset we keep the same experimental settings for all methods, including the learning rate, mini-batch size, and training epochs, but use each method's best temperature parameters. Please refer to Appendix~\ref{appendix:experiment_details} for other detailed experimental setups.
We conduct experiments on five image datasets of varying sizes, including CIFAR-10, CIFAR-100~\cite{hinton2007learning}, and STL-10~\cite{coates2011analysis} as small-scale ones, and ImageNet-100 and ImageNet-1K~\cite{deng2009imagenet} as large-scale ones. Note that ImageNet-100 is a subset of ImageNet-1K, where 100 classes are randomly selected from the standard ImageNet-1K dataset, and here we keep the same classes as commonly used in CL works~\cite{tian2019contrastive,wang2020understanding}. For small-scale datasets, we follow SimCLR to construct negative samples as the views augmented from different images within a batch. Moreover, we create two class-imbalanced CIFAR datasets as empirical verification of our theoretical analysis. For large-scale datasets, we follow MoCo-v2~\cite{chen2020mocov2} to maintain a queue of negative samples, updated with the momentum-based mechanism. To evaluate the learned representations, following the widely used linear classification protocol, the pre-trained encoder is fixed as a proxy and a linear classifier is added on top of the base feature encoder for classification. Here we report the Top-1 validation accuracy on these datasets. We also report the results of object detection/segmentation following the transfer learning protocol. The reported numbers for baselines are from the original papers if available; otherwise we report the best ones fine-tuned with the settings from their corresponding papers.
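As a sketch of the linear protocol just described, the helper below freezes a pre-trained encoder and trains only a linear head on top of it; the encoder, loader, and hyperparameter names are placeholders rather than the exact settings used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

def linear_probe(encoder, train_loader, feat_dim, num_classes,
                 epochs=100, lr=0.1):
    """Linear classification protocol: the encoder stays frozen,
    and only the linear classifier on top is trained."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    clf = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.SGD(clf.parameters(), lr=lr, momentum=0.9)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:
            with torch.no_grad():
                feats = encoder(images)   # fixed features
            loss = ce(clf(feats), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return clf
\end{verbatim}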
\subsection{Linear classification on small-scale datasets}
\textbf{Classification accuracy:}
For small-scale datasets, we apply all methods with an AlexNet-based encoder following the setting in \citet{wang2020understanding}, trained for 200 epochs, and with a ResNet50 encoder following the setting in \citet{robinson2020contrastive}. The results with the AlexNet-based encoder and the ResNet50-based one are summarized in Table~\ref{tab:performance_small} and Table~\ref{tab:performance_small_resnet} (Appendix), respectively. We observe that in the case of $K=1$, where the intra-positive contrast of CACR degenerates, CACR slightly outperforms all CL methods. With ResNet50, CACR outperforms by a larger margin. Moreover, when $K=4$, it is interesting to observe an obvious boost in performance, where CMC improves CL by around 2-3\% while CACR improves CL by around 3-4\%. This supports our analysis that CA is helpful when the intra-positive contrast is not degenerated.
\textbf{On the effect of CA and CR:}
To understand the efficacy of the contrasts within positive and negative samples, we illustrate in Figure~\ref{figure:train_entropy_acc_cifar10} (Left) the evolution of the conditional entropy $\mathcal{H}(X^-|X)$ and the classification accuracy \textit{w.r.t.} the training epoch. In each epoch, we calculate the conditional entropy with Eqn.~\eqref{eq:entropy} on every mini-batch of size $M=512$ and take the average across mini-batches. As shown in Figure~\ref{figure:train_entropy_acc_cifar10}, $\mathcal{H}(X^-|X)$ is maximized as the encoder is optimized. As shown in Lemma~\ref{theorem: neg-unif}, under the uniform data prior assumption, the optimization of CL and CACR encourages the encoder to maximize $\mathcal{H}(X^-|X)$. It is also interesting to observe that in the case with multiple positive samples, the gap between CACR and CMC is much larger in terms of the conditional entropy. This implies the CA module can further boost the repulsion of negative samples. Although CMC uses multiple positives in its CL loss, its lack of intra-positive contrast leads to a gap in repulsion efficiency.
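The conditional entropy curve can be computed per mini-batch as the mean entropy of the empirical repulsion weights $\hat\pi^-_{\boldsymbol{\theta}}$; a minimal sketch of this measurement, reusing our earlier naming conventions, is shown below.
\begin{verbatim}
import torch

def conditional_entropy(z, t_neg=1.0):
    """Estimate H(X^-|X) on one mini-batch: the average entropy of
    each query's empirical repulsion-weight distribution pi^- hat.
    z: (M, d) unit-norm features; the maximum value is ln(M - 1)."""
    M = z.size(0)
    d2 = torch.cdist(z, z) ** 2
    eye = torch.eye(M, dtype=torch.bool, device=z.device)
    log_w = torch.log_softmax(
        (-t_neg * d2).masked_fill(eye, float('-inf')), dim=1)
    w = log_w.exp()
    # Diagonal entries have w = 0 and log_w = -inf; zero out the 0*(-inf).
    ent = -torch.nan_to_num(w * log_w, nan=0.0).sum(dim=1)
    return ent.mean()
\end{verbatim}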
As shown in Figure~\ref{figure:train_entropy_acc_cifar10} (Right), CACR consistently outperforms the other methods in linear classification with the learned representations at the same epoch, indicating a superior learning efficiency of CACR. See Appendix~\ref{appendix:additional_experiment} for similar observations on CIFAR-10 and CIFAR-100.
As a qualitative verification, we randomly take a query from a mini-batch and illustrate its positive and negative samples and their conditional probabilities in Figure~\ref{fig:visualization_of_samples}. As shown, given this query of a dog image, the positive sample with the largest weight contains partial dog information, guiding the encoder to focus on texture information; the negatives with larger weights are more related to the dog category, which encourages the encoder to focus on distinguishing these ``hard'' negative samples. Overall, the weights learned by CACR enjoy better interpretability than those of conventional CL.
\begin{table}[t]
\begin{minipage}[t]{.46\columnwidth}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives under the SimCLR framework on small-scale datasets. All methods follow the SimCLR setting and apply an AlexNet-based encoder. The results of CL and AU-CL on STL-10 are quoted from \citet{wang2020understanding}. }\vspace{-0.2mm}
\label{tab:performance_small}
\renewcommand{\arraystretch}{1.}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{0.96\columnwidth}{!}{
\begin{tabular}{cc|ccc}
\toprule
\multicolumn{2}{c|}{Methods} & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
&CL & 83.47 & 55.41 & 83.89 \\
&AU-CL & 83.39 & 55.31 & 84.43 \\
&HN-CL & 83.67 & 55.87 & 83.27 \\
&CACR ($K=1$) & \textbf{83.73} & \textbf{56.52} & \textbf{84.51} \\ \midrule
&CMC ($K=4$) & 85.54 & 58.64 & 84.50 \\
&CACR ($K=4$) & \textbf{86.54} & \textbf{59.41} & \textbf{85.59} \\ \bottomrule
\end{tabular}
}
}
\end{minipage}\hfill
\begin{minipage}[t]{.52\columnwidth}
\centering
\caption{The classification accuracy ($\%$) of different contrastive objectives on class-imbalanced datasets. ``Linear'' and ``Exponential'' indicate that the number of samples in each class is chosen following a linear rule or an exponential rule, respectively. The performance drops compared with the performance in Table~\ref{tab:performance_small} are shown next to each result.}
\label{tab:performance_imbalance}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\resizebox{\columnwidth}{!}{
\begin{tabular}{c|cc|cc}
\toprule
{Imbalance} & \multicolumn{2}{c|}{Linear} & \multicolumn{2}{c}{Exponential} \\ \hline
{Dataset} & CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\ \midrule
CL & $79.88_{3.59\downarrow}$ & $52.29_{3.57\downarrow}$ & $71.74_{11.73\downarrow}$ & $43.29_{12.57\downarrow}$ \\
AU-CL & $80.25_{3.14\downarrow}$ & $52.74_{2.57\downarrow}$ & $71.62_{11.76\downarrow}$ & $44.38_{10.93\downarrow}$ \\
HN-CL & $\textbf{80.51}_{3.15\downarrow}$ & $52.72_{3.14\downarrow}$ & $72.74_{10.93\downarrow}$ & $45.13_{10.73\downarrow}$ \\
CACR ($K=1$) & $80.46_{3.27\downarrow}$ & $\textbf{54.12}_{2.40\downarrow}$ & $\textbf{73.02}_{10.71\downarrow}$ & $\textbf{46.59}_{9.93\downarrow}$ \\ \midrule
CMC ($K=4$) & $82.20_{3.34\downarrow}$ & $55.38_{3.26\downarrow}$ & $74.77_{10.77\downarrow}$ & $48.87_{9.77\downarrow}$ \\
CACR ($K=4$) & $\textbf{83.62}_{2.92\downarrow}$ & $\textbf{56.91}_{2.50\downarrow}$ & $\textbf{75.89}_{10.65\downarrow}$ & $\textbf{50.17}_{9.24\downarrow}$ \\ \bottomrule
\end{tabular}
}}
\end{minipage}\hfill
\end{table}
\begin{figure}[t]
\vspace{-1mm}
\centering
{
\!\includegraphics[width=0.5\textwidth]{entropy/entropy_stl10.pdf}\includegraphics[width=0.42\textwidth]{training_evolution/stl10_evolution.pdf} \vspace{-3mm}
}
\caption{\small STL-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar10}
\vspace{-3.2mm}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{visualization/sampling_visualization.pdf}\vspace{-6mm}
\caption{\small Illustration of positive/negative samples and their corresponding weights. (\textit{Left}) For a query augmented from the original dog image, 4 positive samples are shown, with their weights visualized as the blue distribution. (\textit{Right}) The sampling weights of negatives are visualized as the red distribution; we visualize 4 negative samples with the highest and 4 with the lowest weights, with their original images shown below.}
\label{fig:visualization_of_samples}
\vspace{-1.0mm}
\end{figure}
\subsection{Linear classification on class-imbalanced datasets}
To verify the robustness of CACR in comparison with that of CL when this uniform prior assumption is violated, we create two class-imbalanced datasets from CIFAR-10 and CIFAR-100. These datasets are created by randomly sampling a certain number of samples from each class with a ``linear'' or ``exponential'' rule, following the setting in~\cite{kim2020imbalanced}. Specifically, given a dataset with $C$ classes, for class $l\in \{1,2,...,C\}$, we randomly take samples with proportion $\frac{l}{C}$ under the ``linear'' rule and with a proportion scaling as $\exp(\frac{l}{C})$ under the ``exponential'' rule. Once the dataset is sampled, it is fixed during training. For evaluation we keep the standard validation/testing datasets. Thus there is a label shift between the training and testing data distributions.
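The subsampling rule can be sketched as below; the helper name and the rescaling of the exponential rule (so that the largest class keeps all of its samples) are our own assumptions, and the exact constants in \cite{kim2020imbalanced} may differ.
\begin{verbatim}
import numpy as np

def make_imbalanced_indices(labels, rule='linear', seed=0):
    """Subsample a labelled dataset into a class-imbalanced one.
    Class l in {1, ..., C} keeps a proportion of its samples:
    l / C (linear) or exp(l / C) / e (exponential, rescaled so the
    largest class is kept in full). Returns indices into the
    original dataset; fixed once sampled."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    C = len(classes)
    rng = np.random.default_rng(seed)
    keep = []
    for l, c in enumerate(classes, start=1):
        idx = np.flatnonzero(labels == c)
        p = l / C if rule == 'linear' else np.exp(l / C) / np.e
        n = max(1, int(p * len(idx)))
        keep.append(rng.choice(idx, size=n, replace=False))
    return np.concatenate(keep)
\end{verbatim}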
Summarized in Table~\ref{tab:performance_imbalance} are the results on the class-imbalanced datasets, which show that all methods suffer a performance drop compared to the results in Table~\ref{tab:performance_small}. It is clear that CACR has the least performance decline in most cases. In particular, when $K=4$, CACR shows better performance robustness due to its doubly contrastive strategy within positive and negative samples. For example, in the ``exponential'' setting of CIFAR-100, CL and HN-CL drop 12.57\% and 10.73\%, respectively, while CACR ($K=4$) drops 9.24\%. It is also interesting to observe that HN-CL is relatively better among the baseline methods. According to \citet{robinson2020contrastive}, in HN-CL the negative samples are sampled according to their ``hardness'' \textit{w.r.t.} the query samples with an intra-negative contrast, and its loss converges to CACR ($K=1$) with infinitely many negative samples. This performance gap indicates that directly optimizing the CACR loss can be superior when we have a limited number of samples. With these class-imbalanced datasets, we provide empirical support for our analysis: when the condition in Lemma~\ref{theorem: neg-unif} is violated, CACR shows a clearer difference from CL and better robustness with its unique doubly contrastive strategy within positive and negative samples.
\begin{table}[t]
\begin{minipage}[t]{.39\columnwidth}
\centering
\caption{Top-1 classification accuracy ($\%$) of different objectives with the MoCo-v2 framework and a ResNet50 encoder on the ImageNet datasets. Results quoted from the original papers or GitHub pages are marked by $\star$.
}
\label{tab:performance_large}
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{1.0mm}
\resizebox{0.88\columnwidth}{!}{
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{Methods} & ImageNet-100 & ImageNet-1K \\ \midrule
&CL & $77.54^\star$ & $67.50^\star$ \\
&AU-CL & $77.66^\star$ & $67.69^\star$ \\
&HN-CL & $76.34$ & $67.41$ \\
&CMC ($K=1$) & $75.80^\star$ & $66.20^\star$ \\
&CACR ($K=1$) & $\textbf{79.40}$ & $\textbf{68.40}$ \\ \midrule
&CMC ($K=4$) & 78.84 & $69.45$ \\
&CACR ($K=4$) & $\textbf{80.46}$ & $\textbf{70.35}$ \\ \bottomrule
\end{tabular}
}
\end{minipage}\hfill
\begin{minipage}[t]{.59\columnwidth}
\caption{Results of transferring features to object detection and segmentation tasks on Pascal VOC and COCO, with the MoCo-v2 ResNet50 pre-trained on ImageNet-1K. The results of the CL loss are quoted from the corresponding papers and GitHub pages.}
\label{table:detection_and_segmenation} \vspace{2mm}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{0.71}{
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
Task & \multicolumn{6}{c|}{Object Detection} & \multicolumn{3}{c}{Object Segmentation} \\ \hline
Dataset & \multicolumn{3}{c|}{Pascal VOC} & \multicolumn{3}{c|}{COCO} & \multicolumn{3}{c}{COCO} \\ \hline
Loss & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ \\ \midrule
CL & 57.00 & 82.40 & 63.60 & 40.90 &60.53 & 44.30 &35.73 &57.29 &\textbf{38.20} \\
AU-CL & 57.24 & 82.49 & 63.83 & 41.01 & 60.68 & 44.40 &35.56 &57.38 &37.93 \\
CACR ($K=1$) & \textbf{57.75} & \textbf{82.76} & \textbf{64.23} & \textbf{41.08} & \textbf{60.80} & \textbf{44.84} &\textbf{35.74} & \textbf{57.50} &38.07 \\ \midrule
CACR ($K=4$) & \textbf{57.91} & \textbf{82.83} & \textbf{64.85} & \textbf{41.50} & \textbf{61.11} & \textbf{45.30} & \textbf{36.08} & \textbf{57.95} & \textbf{38.68}
\\ \bottomrule
\end{tabular}
}}
\end{minipage}\hfill
\end{table}
\subsection{Linear classification on large-scale datasets}
For large-scale experiments, following convention, we adapt all methods into the MoCo-v2 framework and pre-train a ResNet50 encoder for 200 epochs with mini-batch size 128/256 on ImageNet-100/ImageNet-1K. Table~\ref{tab:performance_large} summarizes the results of linear classification on these two large-scale datasets. Similar to the case on small-scale datasets, CACR consistently shows better performance, improving the baselines by at least 1.74\% on ImageNet-100 and 0.71\% on ImageNet-1K. With multiple positive samples in MoCo-v2, CACR improves the baseline methods by 2.92\% on ImageNet-100 and 2.75\% on ImageNet-1K. It is worth highlighting that the improvement of CACR is more significant on these large-scale datasets, where the data distribution can be much more diverse compared to the small-scale ones. This is not surprising: according to our theoretical analysis, CACR's double contrast within samples enhances the effectiveness of the encoder's optimization. Moreover, CACR ($K=1$) shows a clear improvement over HN-CL. A possible explanation is that, although both increasing the negative sample size and selecting hard negatives are proposed to improve the CL loss, the effectiveness of hard negatives is limited once the sampling size grows beyond a certain point. Since CACR directly targets repelling the negative samples, its conditional distribution still efficiently guides the repulsion when the sampling size becomes large.
\subsection{Object detection and segmentation}
A main goal of unsupervised learning is to capture general and transferable representations. Besides the linear classification evaluation, we also transfer the pre-trained contrastive models as the initialization for fine-tuning on downstream tasks, such as object detection and segmentation. To evaluate the transferability of the learned features, following the protocols in previous works~\cite{tian2019contrastive,He_2020_CVPR,chen2020mocov2,wang2020understanding}, we use the ResNet50 pre-trained on ImageNet-1K for object detection and segmentation tasks on Pascal VOC~\cite{everingham2010pascal} and COCO~\cite{lin2014microsoft} using detectron2~\cite{wu2019detectron2}. The experimental setting details are given in Appendix~\ref{sec:setting-largescale} and kept the same as~\citet{He_2020_CVPR} and \citet{chen2020mocov2}. The test AP, AP$_{50}$, and AP$_{75}$ of bounding boxes in object detection and of masks in segmentation are reported in Table~\ref{table:detection_and_segmenation}. We observe that the performance of CACR is consistently better than that of the other contrastive objectives. For example, compared to CL, the AP is improved by 0.91\% on Pascal VOC object detection and by 0.31\% on COCO object segmentation.
\section{Conclusion}
In this paper, we rethink the limitation of conventional contrastive learning (CL) methods that form the contrastive loss by randomly selecting positive and negative samples for a query. We introduce a novel Contrastive Attraction and Contrastive Repulsion (CACR) loss with a doubly contrastive strategy, which constructs, for a random query, two opposing conditional distributions that model the importance of a positive sample and that of a negative sample, respectively, to the query. To form the contrastive loss, CACR combines the independent and random sampling convention with the practice of contrastively reweighing both positive and negative samples according to their distances to the query. Our theoretical analysis and empirical results show that optimizing the CACR loss can effectively attract positive samples to and repel negative ones from the query, as CL intends to do, but is more robust in more general cases. Extensive experiments on small-scale, large-scale, and imbalanced datasets consistently demonstrate the superiority and robustness of CACR over state-of-the-art methods in contrastive representation learning and related downstream tasks.
\bibliographystyle{unsrtnat}
\begin{frame}{Overview}
% If you use \section{} and \subsection{} commands, these will automatically be printed on this slide as an overview of your presentation
\tableofcontents
\end{frame}
\section{Background}
\subsection{Unsupervised Representation Learning}
\begin{frame}{Unsupervised Representation Learning}
\begin{itemize}
\item Unsupervised representation learning~\cite{bengio2013representation} has drawn much attention.
\item Unsupervised representation learning greatly reduces the expensive human effort required for annotations and benefits downstream machine learning algorithms.
\item Representative methods:
\begin{itemize}
\item Principal Component Analysis (PCA)~\cite{tipping1999probabilistic}
\item Restricted Boltzmann Machine (RBM)~\cite{hinton2006reducing}
\item Variational AutoEncoder (VAE)~\cite{kingma2013auto}
\item Contrastive Learning (CL)~\cite{oord2018representation}
\end{itemize}
\end{itemize}
\end{frame}
\subsection{Contrastive Representation Learning}
\begin{frame}{Contrastive Representation Learning}
\begin{itemize}
\item In its early stages, Contrastive Learning was investigated as a lower bound of the mutual information (MI) between data and their representations \cite{gutmann2010noise,hjelm2018learning}.
\item In recent years, the effectiveness of CL has been shown to be not just attributable to the maximization of mutual information \cite{tschannen2019mutual,tian2019crd}.
\item Contrastive Learning is widely studied:
\begin{itemize}
\item SimCLR~\cite{chen2020simple,chen2020big} studies extensive augmentations for positive and negative samples and intra-batch-based negative sampling.
\item A memory bank that caches representations~\cite{wu2018unsupervised} and a momentum update strategy are introduced to enable the use of an enormous number of negative samples~\cite{chen2020mocov2}.
\item \cite{wang2020understanding} reveals that the contrastive scheme optimizes the alignment of positive samples and the uniformity of negative pairs in the limit of an infinite number of negative samples.
\item Areas like text~\cite{logeswaran2018efficient}, sequential data \cite{oord2018representation,henaff2019data}, structural data like graphs \cite{sun2019infograph,li2019graph,hassani2020contrastive,velickovic2019deep}, reinforcement learning~\cite{srinivas2020curl}, and few-shot scenarios~\cite{khosla2020supervised,sylvain2020locality}.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item As a mutual information lower bound estimator (NCE \cite{oord2018representation}), the contrastive lower bound is biased \cite{poole2018variational}.
\item Contrastive Learning methods are sensitive to selected samples~\cite{saunshi2019theoretical}:
\begin{itemize}
\item Positive samples require applying various perturbations \cite{chen2020simple}.
\item ``Hard'' negative samples are observed to be helpful \cite{bose2018adversarial,cherian2020representation,li2020self}.
\item ``Not all samples are negatives'':
\begin{itemize}
\item A decomposition of the data distribution to approximate the true negative distribution~\cite{chuang2020debiased}.
\item To use ``neither too hard nor too easy'' negative samples~\cite{wu2020conditional}.
\item To apply Monte-Carlo sampling for selecting hard negative samples under the user's control~\cite{robinson2020contrastive}.
\end{itemize}
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item Contrastive loss maximizes (\textit{resp.} minimizes) the similarity of positive (\textit{resp.} negative) pairs in the feature space:
\beq{
\!\mathop{\mathbb{E}}_{\substack{(\boldsymbol{x}, \boldsymbol{x}^+, \boldsymbol{x}^-_{1:M}) }} \left[ - \ln \ \frac{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}}{e^{f_{\boldsymbol{\theta}}(\boldsymbol{x})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) / \tau}+{\mathop{\sum}_{i=1}^M } e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}_{i}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau}} \right],\! \label{eq: CL}}
\item Encoder $f_{\boldsymbol{\theta}}: \mathbb{R}^n \rightarrow \mathcal{S}^{d-1}$, where the learned $d$-dimensional features are with a unit norm.
\item Typically:
\begin{itemize}
\item For observations $\boldsymbol{x}_{0:M} \sim p_\mathrm{data}(\boldsymbol{x})$, we commonly assume that each $\boldsymbol{x}_i$ can be randomly transformed in certain ways with a transformation function $\mathcal{T}(\boldsymbol{x}_i,\epsilon_i)$ with $\epsilon_i \sim p(\epsilon)$.
\item For each $\boldsymbol{x}_0$, the query (also known as the anchor) is defined as $\boldsymbol{x}=\mathcal{T}(\boldsymbol{x}_0,\epsilon_0)$.
\item For positive pair $\{(\boldsymbol{x}, \boldsymbol{x}^+)\}$: $\boldsymbol{x}^+= \mathcal{T}(\boldsymbol{x}_0, \epsilon^+)$ is transformed from the same observation.
\item For negative pairs $\{(\boldsymbol{x}, \boldsymbol{x}^-_i)\}_{1:M}$: $\boldsymbol{x}^-_i = \mathcal{T}(\boldsymbol{x}_{i}, \epsilon^-_{i})$ are transformed from different observations.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Problems of Contrastive Learning}
\begin{itemize}
\item In the feature space, positive samples should be close to each other and negative samples should be far away from each other.
\item Positive pairs are sampled from a joint distribution $\boldsymbol{x}, \boldsymbol{x}^+ \sim p(\boldsymbol{x}, \boldsymbol{x}^+)$.
\item Negative samples are sampled: $\boldsymbol{x}^- \stackrel{iid}{\sim} p(\boldsymbol{x}^-)$.
\item In practice, positive pairs $(\boldsymbol{x}, \boldsymbol{x}^+)$ are often independent augmented views, negative samples are sampled from data distribution $\boldsymbol{x}^- \stackrel{iid}{\sim} p_\mathrm{data}(\boldsymbol{x})$.
\end{itemize}
\end{frame}
\section{Contrastive Conditional Transport}
\subsection{Conditional Transport and Contrastive Learning}
\begin{frame}{Motivation of Contrastive Conditional Transport}
\begin{columns}[c]
\column{.45\textwidth}
\begin{itemize}
\item \textbf{Conventional CL:} Given a query, the model randomly takes one positive sample to form a positive pair and compares it against multiple negative pairs, with all samples equally treated.
\item \textbf{CCT:} Using multiple positive and negative pairs, the weight of each sample (indicated by point scale) is contrastively computed, so as to more strongly pull distant positive samples towards the query and push close negative samples away from it, making the update adaptive to the samples and encouraging uniformity.
\end{itemize}
\column{.5\textwidth}
\begin{figure}[t]
\centering
\includegraphics[width=.92\columnwidth]{misc/motiv_ours.pdf}
\end{figure}
\end{columns}
\end{frame}
\begin{frame}{Contrastive Uniform Transport (CUT) in the feature space}
\begin{itemize}
\item In the same spirit as Equation \eqref{eq: CL}, by transporting positive samples together and negative samples away from each other, we define the Contrastive Uniform Transport (CUT) as:
\baa{
& \min_{\boldsymbol{\theta}}\{ \mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}\mathbb{E}_{\epsilon_0,\epsilon^+\sim p(\epsilon)} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right]-\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^{-}\sim p(\boldsymbol{x})} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right]\}.
\label{eq: CL-transport}
}
\item $c(\boldsymbol{z}_1,\boldsymbol{z}_2)$ denotes the point-to-point cost of transporting between two feature vectors.
\item CUT minimizes (\textit{resp.} maximizes) the expected cost of moving between the representations of positive (\textit{resp.} negative) samples, with the costs of all sample pairs uniformly weighted.
\item In our experiments, we do not observe CUT in \eqref{eq: CL-transport} to perform well.
\end{itemize}
\end{frame}
\subsection{Contrastive Conditional Transport}
\begin{frame}{CCT: Contrastive Conditional Transport--positive transport}
Generalizing CUT, we further define the CCT loss for transporting the positive pairs:
\begin{block}{Conditional probability for transporting the positive pairs}
\ba{
&\textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}{\int e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+))}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+}; ~ d_{t^{+}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{+}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}, \label{eq: CT-positive}}
\end{block}
\begin{block}{CCT loss for transporting the positive pairs}
\ba{
\min_{\boldsymbol{\theta}} \mathcal C^+ := \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^+\sim \pi^+_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x},\boldsymbol{x}_0)} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)) \right],\!\!\label{eq:C+}
}
\end{block}
\end{frame}
\begin{frame}{CCT: Contrastive Conditional Transport--negative transport}
Similarly, we define the CCT objective for transporting the negative pairs:
\begin{block}{Conditional probability for transporting the negative pairs}
\ba{
&\textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))} p(\boldsymbol{x}^-)}{\int e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-))}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}; ~ d_{t^{-}}(\boldsymbol{z}_1, \boldsymbol{z}_2) = {t^{-}\| \boldsymbol{z}_1 - \boldsymbol{z}_2 \|^2}, \label{eq: CT-negative}
}
\end{block}
\begin{block}{CCT objective for transporting the negative pairs}
\ba{
\max_{\boldsymbol{\theta}} ~\mathcal C^- := \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})} \left[ c(f_{\boldsymbol{\theta}}(\boldsymbol{x}), f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)) \right],\label{eq:C-}
}
\end{block}
\end{frame}
\begin{frame}{On the mini-batch based optimization}
\begin{itemize}
\item With empirical samples, we have $(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i)\sim p_{data}(\boldsymbol{x}) p(\epsilon)$ for $i=1,\ldots,M$.
\item The distribution of queries:
$$
\hat p(\boldsymbol{x}) = \textstyle \frac{1}{M} \sum_{i=1}^M \delta_{\boldsymbol{x}_i},~\boldsymbol{x}_{i} =\mathcal{T}(\boldsymbol{x}_{i}^\mathrm{data},\epsilon_i).
$$
\item Given each query $\boldsymbol{x}_i$, the positive samples are sampled with $\epsilon_{1:K} \stackrel{iid}\sim p(\epsilon)$, and the distribution of positive samples is approximated as:
\ba{
\hat p(\boldsymbol{x}_i^+\,|\, \boldsymbol{x}^{\rm{data}}_i) =\textstyle \frac{1}{K}\sum_{k=1}^K \delta_{\boldsymbol{x}^+_{ik}},~ \boldsymbol{x}^+_{ik} = \mathcal{T}(\boldsymbol{x}^{\rm{data}}_i,\epsilon_k).\label{eq:x+}
}
\item The distribution of negative samples is approximated as:
\ba{
\textstyle\hat p(\boldsymbol{x}_i^-) = \frac{1}{M-1} \sum_{j\neq i} \delta_{\boldsymbol{x}_j}.
\label{eq:x-}
}
\item $\hat p(\boldsymbol{x}_i^-)$ could be further refined with other methods~\cite{oord2018representation,He_2020_CVPR,khosla2020supervised}.
\end{itemize}
\end{frame}
\begin{frame}{CCT: On the mini-batch based optimization}
\begin{block}{Conditional transport probability with empirical positive samples}
\baa{
&\textstyle\hat \pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i}\,|\, \boldsymbol{x}_i,\boldsymbol{x}^{\rm{data}}_i) := \sum_{k=1}^K \frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik'}))}}\delta_{\boldsymbol{x}_{ik}^+}, \notag\\
&\textstyle\hat \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-_{i} \,|\, \boldsymbol{x}_i) := \sum_{j\neq i} \frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_j))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\delta_{\boldsymbol{x}_j}.\notag
}
\end{block}
\begin{block}{CCT loss for empirical samples}
\baa{
&\hat{\mathcal{C}}^+ := \frac{1}{M} \sum_{i=1}^M \sum_{k=1}^K
\textstyle
\frac{e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}}{\sum_{k'=1}^K e^{d_{t^{+}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{i{k'}}))}}\scriptsize{\times c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}^+_{ik}))}, \notag \\
&\hat{\mathcal{C}}^- := \frac{1}{M} \sum_{i=1}^M \sum_{j\neq i} \textstyle\frac{e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))} }{\sum_{j'\neq i} e^{-d_{t^{-}}(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j'}))}}\times \scriptsize{c(f_{\boldsymbol{\theta}}(\boldsymbol{x}_i), f_{\boldsymbol{\theta}}(\boldsymbol{x}_{j}))}\notag, \\
&\textstyle \mathcal L_{\text{CCT}} = \textstyle \hat{\mathcal{C}}^+ - \textstyle \hat{\mathcal{C}}^-.\notag
}
\end{block}
\end{frame}
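\begin{frame}[fragile]{A Minimal Sketch of the Empirical CCT Loss}
As a sketch with the squared Euclidean cost of Equation \eqref{eq:Euclidean_cost} (function names are ours; the released code may differ), $\mathcal L_{\text{CCT}} = \hat{\mathcal{C}}^+ - \hat{\mathcal{C}}^-$ can be computed as:
\begin{verbatim}
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def cct_loss(z, z_pos, t_pos=1.0, t_neg=2.0):
    # z: (M, d) query features; z_pos: (M, K, d) positive
    # features; negatives of query i are the other queries.
    M = z.shape[0]
    C_pos, C_neg = 0.0, 0.0
    for i in range(M):
        d_pos = np.sum((z_pos[i] - z[i]) ** 2, axis=1)
        w_pos = softmax(t_pos * d_pos)   # far positives weigh more
        C_pos += w_pos @ d_pos
        d_neg = np.sum((np.delete(z, i, 0) - z[i]) ** 2, axis=1)
        w_neg = softmax(-t_neg * d_neg)  # near negatives weigh more
        C_neg += w_neg @ d_neg
    return (C_pos - C_neg) / M
\end{verbatim}
\end{frame}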
\begin{frame}{Framework of CCT}
\begin{figure}[t]
\centering
\includegraphics[width=.9\columnwidth]{misc/model_architecture.pdf}
\caption{Illustration of CCT framework. The encoder extracts embeddings from samples and the conditional distributions indicate the transport maps for optimizing the transport cost in contrastive learning.
The conditional weights are calculated according to the distance between a query $\boldsymbol{x}$ and its contrastive samples $\boldsymbol{x}^+,\boldsymbol{x}^-$. $\otimes$ denotes element-wise multiplication between costs and conditional weights. }
\label{figure:model_architecture}
\end{figure}
\end{frame}
\subsection{Property Analysis}
\begin{frame}{Properties of CCT}
\begin{theorem}[Invariant representation with positive transport]
The positive transport is optimized if and only if all positive samples of a query share the same representation as that query. More specifically, for query $\boldsymbol{x}$ that is transformed from $\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})$, its positive samples share the same representation with it, which means
\ba{ f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) = f_{\boldsymbol{\theta}}(\boldsymbol{x}) ~\text{ for any }~\boldsymbol{x}^+\sim p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0).
\label{eq:th1}
}
\end{theorem}
\end{frame}
\begin{frame}{Properties of CCT}
\begin{theorem}[Mutual information minimization with negative transport]
Suppose all samples are equally likely in the prior, which means $p(\boldsymbol{x})=p(\boldsymbol{x}^-)$ for any $\boldsymbol{x},\boldsymbol{x}^-\stackrel{iid}\sim p(\boldsymbol{x})$, and their latent representations $\boldsymbol{z}=f_{\boldsymbol{\theta}}(\boldsymbol{x})$ and $\boldsymbol{z}^-=f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)$ are also equally likely in the encoder space, which means $p(\boldsymbol{z})=p(\boldsymbol{z}^-)$, then
optimizing $\boldsymbol{\theta}$ to maximize the
negative transport cost $\mathcal C^-$ in \eqref{eq:C-} is the same as optimizing $\boldsymbol{\theta}$ to maximize the conditional differential entropy of $\boldsymbol{x}^-$ given $\boldsymbol{x}$ under the joint distribution $p(\boldsymbol{x})\pi_{\boldsymbol{\theta}}^-(\boldsymbol{x}^-\,|\, \boldsymbol{x})$, which can be expressed as
\ba{
\label{eq:entropy}
\mathcal H(X^-\,|\, X) = \mathbb{E}_{\boldsymbol{x}\sim p(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x}^-\sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{\cdot} \,|\, \boldsymbol{x})}[-\ln \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})],\!\!
}
which is also the same as optimizing $\boldsymbol{\theta}$ to minimize the mutual information between $\boldsymbol{x}$ and $\boldsymbol{x}^-$, expressed as
\ba{
I(X;X^-)= \textstyle\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim \pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) }} \left[ \ln \frac{\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^-\,|\, \boldsymbol{x})}{p(\boldsymbol{x}^-)} \right].\!\! \label{eq:I}}
\end{theorem}
\end{frame}
\begin{frame}{Properties of CCT}
Interpretations:
\begin{itemize}
\item Positive transport: the optimal encoder produces representations invariant to the noisy details.
\item Negative transport: the optimal encoder distributes samples with an equal distance \textbf{to the query}.
\begin{itemize}
\item A perfect encoder trained with CL loss will uniformly distribute samples on the feature hypersphere \cite{wang2020understanding}.
\item The uniform hypersphere is attained when the uniform prior $p(\boldsymbol{x}) = p(\boldsymbol{x}^-)$ holds.
\item In practice, this prior is not satisfied, and minimizing the mutual information $I(X; X^-)$ is more efficient in distributing the negative samples.
\end{itemize}
\end{itemize}
\end{frame}
\begin{frame}{Properties of CCT}
\begin{theorem}[Distinction between CL and CCT loss]
As the number of negative samples $M$ goes to infinity, the contribution of the negative samples to the CL loss shown in Equation \eqref{eq: CL} can be expressed as
$$\mathop{\mathbb{E}}_{{\boldsymbol{x} \sim p(\boldsymbol{x}) }} \left[ \ln {\mathop{\mathbb{E}}_{\boldsymbol{x}^- \sim p(\boldsymbol{x}^-) }} e^{f_{\boldsymbol{\theta}}(\boldsymbol{x}^{-})^{\top} f_{\boldsymbol{\theta}}(\boldsymbol{x}) / \tau} \right],$$
adding this term to the mutual information $I(X;X^-)$ in Equation \eqref{eq:I}
yields an upper bound of $-\mathcal{C}^-$ as defined in Equation \eqref{eq:C-}, which is the contribution of the negative samples to the CCT loss.
\end{theorem}
The CL loss does not consider the impact of the mutual information $I(X; X^-)$. When the uniform data prior assumption is violated, as is common in practice, robustness will be affected.
\end{frame}
\begin{frame}{Properties of CCT}
\begin{alertblock}{Reformulation of $\mathcal{C}^+$: comparison with uniform sampling}
Recall that in the CL loss, the alignment of positive samples is encouraged by:
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \right].\notag
}
We can rewrite the positive transport cost $\mathcal C^+$ in \eqref{eq:C+} as
\ba{\textstyle
\mathbb{E}_{\boldsymbol{x}_0\sim p_{data}(\boldsymbol{x})}
\mathbb{E}_{\boldsymbol{x},\boldsymbol{x}^+\sim p(\boldsymbol{\cdot}\,|\, \boldsymbol{x}_0)} \left[ -f_{\boldsymbol{\theta}}(\boldsymbol{x})^\top f_{\boldsymbol{\theta}}(\boldsymbol{x}^+) \frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)} \right]
.\notag
}
\end{alertblock}
The density ratio $\frac{\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0)}{p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}$ provides an importance weight and enhances the robustness when $p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)$ does not have a uniform distribution.
\end{frame}
\subsection{Experiments}
\begin{frame}{Experiment settings}
\begin{itemize}
\item Training feature encoders with CCT.
\item Fine-tuning on downstream tasks: linear classification, object detection, segmentation.
\item Small-scale datasets:
\begin{itemize}
\item Datasets: CIFAR-10, CIFAR-100, STL-10.
\item Negative sample strategy: SimCLR \cite{chen2020simple}.
\item Encoder backbone: AlexNet-based Encoder \cite{wang2020understanding}.
\end{itemize}
\item Class-imbalanced datasets:
\begin{itemize}
\item Datasets: CIFAR-10, CIFAR-100. (skewed with a linear/exponential rule)
\item Negative sample strategy: SimCLR \cite{chen2020simple}.
\item Encoder backbone: AlexNet-based Encoder \cite{wang2020understanding}.
\end{itemize}
\item Large-scale datasets:
\begin{itemize}
\item Datasets: ImageNet-100, ImageNet-1K.
\item Negative sample strategy: MoCo-v2 \cite{chen2020mocov2}.
\item Encoder backbone: ResNet50 Encoder \cite{he2016deep}.
\end{itemize}
\item Ablation studies: mini-batch size, hyperparameters
\end{itemize}
\end{frame}
\begin{frame}{Experiments: Linear classification on small-scale datasets}
\begin{table}[t]
\vspace{-2mm}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives with SimCLR framework on small-scale datasets. All methods follow SimCLR setting and apply an AlexNet-based encoder. The results of CL and AU-CL on STL-10 are quoted from \cite{wang2020understanding}. }
\label{tab:performance_small}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{cc|ccc}
\toprule
\multicolumn{2}{c|}{Methods} & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
&CL & 83.47 & 55.78 & 83.89 \\
&AU-CL & 83.39 & 55.31 & \textbf{84.43} \\
&HN-CL & 83.67 & 55.87 & 83.27 \\
&CCT ($K=1$) & \textbf{83.73} & \textbf{56.52} & 83.90 \\ \midrule
&CMC ($K=4$) & 85.54 & 58.64 & 84.50 \\
&CCT ($K=4$) & \textbf{86.54} & \textbf{59.41} & \textbf{85.59} \\ \bottomrule
\end{tabular}
}
\vspace{-3.5mm}
\end{table}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Experiments: On the effect of conditional transport map}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_cifar10.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/cifar10_evolution.pdf}
}
\caption{CIFAR-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar10}
\end{figure}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_cifar100.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/cifar100_evolution.pdf}
}
\caption{CIFAR-100 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_cifar100}
\end{figure}
\begin{figure}[t]
\centering
{
\!\includegraphics[width=0.4\textwidth]{entropy/entropy_512_stl10.pdf}\includegraphics[width=0.4\textwidth]{training_evolution/stl10_evolution.pdf}
}
\caption{STL-10 training evolution, with mini-batch size 512.
\textbf{Left:} Conditional entropy $\mathcal{H}(X^-|X)$ \textit{w.r.t.} epoch. The maximal possible conditional entropy
is indicated by a dotted line. \textbf{Right}: Linear classification with learned representations \textit{w.r.t.} epoch. }\label{figure:train_entropy_acc_stl10}
\end{figure}
\begin{figure}
\centering
{
\includegraphics[width=.47\textwidth]{visualization/cct_weights_p.pdf}}\vspace{-3mm}\\
{
\includegraphics[width=.485\textwidth]{visualization/cct_weights_n.pdf}}
\caption{Illustration of positive/negative samples and their corresponding weights.}
\label{fig:visualization_of_samples}
\end{figure}
\end{frame}
\begin{frame}{Experiments: Linear classification on class-imbalanced datasets}
\begin{table}[t]
\vspace{-2.5mm}
\centering
\caption{The classification accuracy ($\%$) of different contrastive objectives on class-imbalanced datasets. ``Linear'' and ``Exponential'' indicate that the number of samples in each class is chosen following a linear rule or an exponential rule, respectively. The performance drop compared with the performance in Table~\ref{tab:performance_small} is shown next to each result.}
\label{tab:performance_imbalance}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{c|cc|cc}
\toprule
{Imbalance} & \multicolumn{2}{c|}{Linear} & \multicolumn{2}{c}{Exponential} \\ \hline
{Dataset} & CIFAR-10 & CIFAR-100 & CIFAR-10 & CIFAR-100 \\ \midrule
CL & $79.88_{3.59\downarrow}$ & $52.29_{3.57\downarrow}$ & $71.74_{11.73\downarrow}$ & $43.29_{12.57\downarrow}$ \\
AU-CL & $80.25_{3.14\downarrow}$ & $52.74_{2.57\downarrow}$ & $71.62_{11.76\downarrow}$ & $44.38_{10.93\downarrow}$ \\
HN-CL & $\textbf{80.51}_{3.15\downarrow}$ & $52.72_{3.14\downarrow}$ & $72.74_{10.93\downarrow}$ & $45.13_{10.73\downarrow}$ \\
CCT ($K=1$) & $80.46_{3.27\downarrow}$ & $\textbf{54.12}_{2.40\downarrow}$ & $\textbf{73.02}_{10.71\downarrow}$ & $\textbf{46.59}_{9.93\downarrow}$ \\ \midrule
CMC ($K=4$) & $82.20_{3.34\downarrow}$ & $55.38_{3.26\downarrow}$ & $74.77_{10.77\downarrow}$ & $48.87_{9.77\downarrow}$ \\
CCT ($K=4$) & $\textbf{83.62}_{2.92\downarrow}$ & $\textbf{56.91}_{2.50\downarrow}$ & $\textbf{75.89}_{10.65\downarrow}$ & $\textbf{50.17}_{9.24\downarrow}$ \\ \bottomrule
\end{tabular}
} \vspace{-2.5mm}
\end{table}
\end{frame}
\begin{frame}{Experiments: Linear classification on large-scale datasets}
\begin{table}[t]
\vspace{-2.mm}
\centering
\caption{The top-1 classification accuracy ($\%$) of different contrastive objectives with the MoCo-v2 framework on the ImageNet datasets. All methods apply MoCo-v2 with ResNet50. Results quoted from the original papers or GitHub pages are marked by $\star$.
}
\label{tab:performance_large}
\renewcommand{\arraystretch}{0.9}
\setlength{\tabcolsep}{1.0mm}
\begin{tabular}{cc|cc}
\toprule
\multicolumn{2}{c|}{Methods} & ImageNet-100 & ImageNet-1K \\ \midrule
&CL & $77.54^\star$ & $67.50^\star$ \\
&AU-CL & $77.66^\star$ & $67.69^\star$ \\
&HN-CL & $76.34$ & $67.41$ \\
&CMC ($K=1$) & $75.80^\star$ & $66.20^\star$ \\
&CCT ($K=1$) & $\textbf{79.40}$ & $\textbf{68.40}$ \\ \midrule
&CMC ($K=4$) & 78.84 & $-$ \\
&CCT ($K=4$) & $\textbf{80.46}$ & $\textbf{70.35}$ \\ \bottomrule
\end{tabular}
\vspace{-3mm}
\end{table}
\end{frame}
\begin{frame}{\Large Experiments: Object detection and segmentation on ImageNet-1K}
\begin{table}[ht]
\centering
\caption{Results of transferring features to object detection and segmentation tasks on Pascal VOC and COCO, with MoCo-v2 ResNet50 pre-trained on ImageNet-1K. The results of the CL loss are quoted from the corresponding papers and online GitHub pages.}
\label{table:detection_and_segmenation}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{1.0}{
\begin{tabular}{c|ccc|ccc|ccc}
\toprule
Task & \multicolumn{6}{c|}{Object Detection} & \multicolumn{3}{c}{Object Segmentation} \\ \hline
Dataset & \multicolumn{3}{c|}{Pascal VOC} & \multicolumn{3}{c|}{COCO} & \multicolumn{3}{c}{COCO} \\ \hline
Loss & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ & AP & $\text{AP}_{50}$ & $\text{AP}_{75}$ \\ \midrule
CL & 57.00 & 82.40 & 63.60 & 40.90 &60.53 & 44.30 &35.73 &57.29 &\textbf{38.20} \\% & \textbf{16.70} & 39.27 & 52.97 \\
AU-CL & 57.24 & 82.49 & 63.83 & 41.01 & 60.68 & 44.40 &35.56 &57.38 &37.93 \\% & 15.78 & 39.28 & 52.88 \\
CCT ($K=1$) & \textbf{57.75} & \textbf{82.76} & \textbf{64.23} & \textbf{41.08} & \textbf{60.80} & \textbf{44.84} &\textbf{35.74} & \textbf{57.50} &38.07 \\ \midrule
CCT ($K=4$) & \textbf{57.91} & \textbf{82.83} & \textbf{64.85} & \textbf{41.50} & \textbf{61.11} & \textbf{45.30} & \textbf{36.08} & \textbf{57.95} & \textbf{38.68}
\\ \bottomrule
\end{tabular}
}}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: Conditional Transport Map}
\begin{table}[ht]
\centering
\caption{Linear classification performance (\%) of different variants of our method. ``CCT'' represents the normal CCT configuration, ``w/o $\pi_{\boldsymbol{\theta}}^{+}$'' means without the positive transport map, and ``w/o $\pi_{\boldsymbol{\theta}}^{-}$'' means without the negative transport map. ``CUT'' indicates the contrastive uniform transport (see Equation~\ref{eq: CL-transport}), \textit{i.e.}, without both the positive and negative transport maps. This experiment is done on all small-scale datasets, with $K=4$ and mini-batch size $M=128$.}
\label{tab:different_variant}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\begin{tabular}{cccc}
\toprule
Methods & CIFAR-10 & CIFAR-100 & STL-10 \\ \hline
CCT & \textbf{85.94} & \textbf{59.51} & \textbf{85.59} \\ \hline
w/o $\pi_{\boldsymbol{\theta}}^{+}$ & 85.22 & 58.74 & 85.06 \\
w/o $\pi_{\boldsymbol{\theta}}^{-}$ & 78.49 & 47.88 & 72.94 \\ \hline
CUT & 77.17 & 44.24 & 71.88 \\ \bottomrule
\end{tabular}}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: On the sampling size}
\begin{figure}[ht]
\subfloat[CIFAR-10]
{
\includegraphics[width=0.3\textwidth]{batch_size/cifar10_sampling_size.pdf}
}\hfill
\subfloat[CIFAR-100]
{
\includegraphics[width=0.3\textwidth]{batch_size/cifar100_sampling_size.pdf}
}\hfill
\subfloat[STL-10]
{
\includegraphics[width=0.3\textwidth]{batch_size/stl10_sampling_size.pdf}
}
\caption{The linear classification results of training with different sampling sizes on small-scale datasets. The training batch size is proportional to the negative sampling size.}
\label{figure:sampling_size}
\end{figure}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Ablation studies: On the Effects of Hyper-parameter $t^{+}$, $t^{-}$}
Recall the hyper-parameters in the definition of the transport maps:
$$
\textstyle\pi^+_{\boldsymbol{\theta}}(\boldsymbol{x}^+ \,|\, \boldsymbol{x},\boldsymbol{x}_0) := \frac{e^{{t^{+}}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\|^2} p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0)}{\int e^{{t^{+}}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^+)\|^2}p(\boldsymbol{x}^+\,|\, \boldsymbol{x}_0) d\boldsymbol{x}^+};\quad \textstyle\pi^-_{\boldsymbol{\theta}}(\boldsymbol{x}^- \,|\, \boldsymbol{x}) := \frac{e^{-{t^{-}}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)\|^2} p(\boldsymbol{x}^-)}{\int e^{-{t^{-}}\|f_{\boldsymbol{\theta}}(\boldsymbol{x}) - f_{\boldsymbol{\theta}}(\boldsymbol{x}^-)\|^2}p(\boldsymbol{x}^-) d\boldsymbol{x}^-}.$$
\begin{table}[ht]
\centering
\caption{The classification accuracy (\%) of CCT ($K=4,~M=128$) with different hyper-parameters $t^{+}$ on small-scale datasets.}
\label{tab:hyperparameter_pos}
\begin{tabular}{c|c|cccccc}
\toprule
Method & Dataset & 0.5 & 0.7 & 0.9 & 1.0 & 2.0 & 3.0 \\ \midrule
\multirow{3}{*}{CCT ($K=4$)} & CIFAR-10 & 86.07 & 85.78 & 85.90 & \textbf{86.54} & 84.85 & 84.76 \\
& CIFAR-100 & \textbf{59.47} & 59.61 & 59.41 & 59.41 & 57.82 & 57.55 \\
& STL-10 & 85.90 & \textbf{85.91} & 85.81 & 85.59 & 85.65 & 85.14 \\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[ht]
\centering
\caption{The classification accuracy (\%) of CCT ($K=1,~M=768$) and CCT ($K=4,~M=128$) with different hyper-parameters $t^{-}$ on small-scale datasets.}
\label{tab:hyperparameter_neg}
\begin{tabular}{c|c|cccccc}
\toprule
Methods & Dataset & 0.5 & 0.7 & 0.9 & 1.0 & 2.0 & 3.0 \\ \midrule
\multirow{3}{*}{CCT ($K=1$)} & CIFAR-10 & 81.66 & 82.40 & 83.07 & 82.74 & \textbf{83.73} & 83.11 \\
& CIFAR-100 & 51.42 & 52.81 & 53.36 & 54.20 & 56.21 & \textbf{56.52} \\
& STL-10 & 80.37 & 81.47 & 81.89 & 82.16 & 83.55 & \textbf{83.90} \\ \midrule
\multirow{3}{*}{CCT ($K=4$)} & CIFAR-10 & 85.67 & 86.19 & \textbf{86.54} & 86.41 & 85.94 & 85.69 \\
& CIFAR-100 & 58.17 & 58.63 & 59.37 & 59.35 & \textbf{59.41} & 59.31 \\
& STL-10 & 83.81 & 84.42 & 84.71 & 85.25 & \textbf{85.59} & 85.41 \\ \bottomrule
\end{tabular}
\end{table}
\end{frame}
\begin{frame}{Ablation studies: On the effects of different cost metrics}
\begin{align}
\label{eq:Euclidean_cost}
\text{Euclidean cost: } c(f_{\theta}(\boldsymbol{x}),f_{\theta}(\boldsymbol{y}))=||f_{\theta}(\boldsymbol{x})-f_{\theta}(\boldsymbol{y})||_{2}^{2}.
\end{align}
\begin{align}
\label{eq:RBF_neg}
\text{RBF cost: } c_\mathrm{RBF}(f_{\theta}(\boldsymbol{x}),f_{\theta}(\boldsymbol{y}))=-e^{-t||f_{\theta}(\boldsymbol{x})-f_{\theta}(\boldsymbol{y})||_{2}^{2}}.
\end{align}
\begin{table}
\centering
\caption{The classification accuracy ($\%$) of CCT ($K=1$) and CCT ($K=4$) with different cost metrics on CIFAR-10, CIFAR-100 and STL-10. Euclidean indicates the cost defined in Equation~\ref{eq:Euclidean_cost}, and RBF indicates the cost defined in Equation~\ref{eq:RBF_neg}.}
\label{tab:different_cost_metrics}
\renewcommand{\arraystretch}{1.1}
\setlength{\tabcolsep}{1.0mm}{
\scalebox{1.0}{
\begin{tabular}{c|c|ccc}
\toprule
Methods & Cost Metric & CIFAR-10 & CIFAR-100 & STL-10 \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{CCT$(K=1)$}} & Euclidean & 83.73 & 56.21 & 83.55 \\ \cline{2-5}
\multicolumn{1}{c|}{} & RBF & 83.08 & 55.90 & 84.20 \\ \midrule
\multicolumn{1}{c|}{\multirow{2}{*}{CCT$(K=4)$}} & Euclidean & 85.94 & \textbf{59.41} & 85.59 \\ \cline{2-5}
\multicolumn{1}{c|}{} & RBF & \textbf{86.20} & 58.81 & \textbf{85.80} \\ \bottomrule
\end{tabular}
}}
\end{table}
\end{frame}
\begin{frame}[allowframebreaks]
\frametitle{Ablation studies: Feature space visualization}
\begin{figure}
\subfloat[Epoch 1]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_0.pdf}
}\hfill
\subfloat[Epoch 20]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_20.pdf}
}\hfill
\subfloat[Epoch 200]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CL_200.pdf}
}\\
\subfloat[Epoch 1]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_0.pdf}
}\hfill
\subfloat[Epoch 20]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_20.pdf}
}\hfill
\subfloat[Epoch 200]{
\includegraphics[width=0.3\textwidth]{tsne/tsne_CCT_200.pdf}
\label{figure:tsne_epoch}
}\vspace{-2mm}
\caption{The $t$-SNE visualization of the latent space at different training epochs, learned by the CL loss (\textit{top}) and the CCT loss (\textit{bottom}). The selected query is marked in green, with its positive samples marked in blue and its negative samples marked in red. The circle with radius $t^{-}$ is shown as the black dashed line.
}
\label{figure:tsne}
\end{figure}
\end{frame}
\section{Summary and Discussions}
\begin{frame}{Summary and Discussion}
\begin{itemize}
\item Randomly selecting positive and negative samples for a query limits the performance of contrastive representation learning.
\item The contrastive conditional transport (CCT) loss constructs, for a random query, two contradicting conditional transport maps.
\item CCT combines the independent and random sampling convention with the practice of contrastively reweighting both positive and negative samples according to their distances to the query.
\item CCT shows consistently better performance and can be applied in future work related to representation learning.
\end{itemize}
\end{frame}
\begin{frame}[allowframebreaks,noframenumbering]
\frametitle{References}
\bibliographystyle{amsalpha}
\section{Introduction}
\label{sec:intro}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/performance_v3.pdf}
\vspace{-1.2em}
\caption{\textbf{Training and Prediction Performance of Regularized NDEs.} We obtain an average training and prediction speedup of $1.45$x and $1.84$x, respectively, for our best model on supervised classification and time series problems.}
\vspace{-1em}
\label{fig:performance}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/motivation.pdf}
\vspace{-1.2em}
\caption{\textbf{Error and Stiffness Regularization Keep Accuracy.} We show the fits of the unregularized/regularized Neural ODE variants on the Spiral equation. The unregularized variant requires $1083.0 \pm 57.55$ NFEs, while the variant regularized using the stiffness and error estimates requires only $676.2 \pm 68.20$ NFEs, reducing prediction time by nearly 50\%.}
\vspace{-1em}
\label{fig:motivation}
\end{figure}
How many hidden layers should you choose in your recurrent neural network? \citet{chen2018neural} showed that the answer could be found automatically by using a continuous reformulation, the neural ordinary differential equation, and allowing an adaptive ODE solver to effectively choose the number of steps to take. Since then, the idea has been generalized to other domains, such as stochastic differential equations \cite{liu2019neural, rackauckas2020universal}, but one fact remained: solving a neural differential equation is expensive, and training one is even more so. In this manuscript we present a generally applicable method that forces the neural differential equation training process to choose the least expensive option. We open the blackbox and show how the numerical heuristics baked inside these sophisticated differential equation solver codes allow for identifying the cheapest equations without requiring extra computation.
Our main contributions include:
\begin{itemize}
\item We introduce a novel regularization scheme for neural differential equations based on local error estimates and stiffness estimates. We observe that by white-boxing differential equation solvers to leverage the statistics they already compute about the neural differential equations, we can obtain faster training and prediction times while having a minimal effect on test metrics.
\item We compare our method with various regularization schemes~\citep{kelly2020learning, ghosh2020steer}, which often use higher-order derivatives and are difficult to incorporate within existing systems. We empirically show that regularizing with cheap solver statistics can yield predictions as efficient as those obtained with higher-order automatic differentiation~\citep{kelly2020learning, finlay2020train}, without the increased training time.
\item We release our code\footnote{\url{https://github.com/avik-pal/RegNeuralODE.jl}}, implemented using the Julia Programming Language~\cite{Julia-2017} and SciML Software Suite~\cite{rackauckas2019diffeqflux}, with the intention of wider adoption of the proposed methods in the community.
\end{itemize}
\section{Background}
\label{sec:background}
\subsection{Neural Ordinary Differential Equations}
Ordinary Differential Equations (ODEs) are used to model the instantaneous rate of change ($\frac{dz(t)}{dt}$) of a state $z(t)$. Initial Value Problems (IVPs) are a class of ODEs that involve finding the state at a later time $t_1$, given the value $z_0$ at time $t_0$. This state, $z(t_1) = z_0 + \int_{t_0}^{t_1} f_\theta(z(t), t) dt$, generally cannot be computed analytically and requires numerical solvers. \citet{lu2018beyond} observed the similarity between fixed time-step discretizations of ODEs and Residual Neural Networks~\citep{he2015deep}. \citet{chen2018neural} proposed the Neural ODE framework, which uses neural networks to model the ODE dynamics $\frac{dz(t)}{dt} = f_{\theta}(z(t), t)$. Using adaptive time stepping allows the model to operate at a variable continuous depth depending on the inputs. Removing the fixed-depth constraint of Residual Networks provides a more expressive framework and offers several advantages in problems like density estimation~\cite{grathwohl2018ffjord}, irregularly spaced time series problems~\cite{rubanova2019latent}, etc.
\subsection{Neural Stochastic Differential Equations}
Stochastic Differential Equations (SDEs) couple the effect of noise to a deterministic system of equations. SDEs are popularly used to model fluctuating stock prices, thermal fluctuations in physical systems, etc. In this paper, we only discuss SDEs with diagonal multiplicative noise, though our method trivially extends to all other forms of SDEs. \citet{liu2019neural} propose an extension of Neural ODEs by stochastic noise injection, in the form of Neural SDEs. Neural SDEs jointly train two neural networks $f_{\theta}$ and $g_{\phi}$ such that the dynamics are $dz(t) = f_{\theta}(z(t), t)dt + g_{\phi}(z(t), t)dW$. Stochastic noise injection regularizes the training of continuous neural models and achieves significantly better robustness and generalization performance.
\subsection{Regularizing Neural ODEs for Speed}
Given that the map $z(0)\rightarrow z(1)$ does not uniquely define the dynamics, it is possible to regularize the training process to learn differential equations that can be solved using fewer evaluations of $f_{\theta}$. In the case of continuous normalizing flows (CNF), the ordinary differential equation is augmented as:
\begin{align}
\frac{dz(t)}{dt} &= f_{\theta}(z(t), t) \\
\frac{dy(t)}{dt} &= -\text{tr}\left(\frac{df_\theta}{dz}\right)
\end{align}
where $y(t)$ evolves the log-density~\cite{chen2018neural}. The FFJORD method improves the speed of CNF evaluations by approximating $\mathrm{tr}(\frac{df_\theta}{dz})$ via the Hutchinson trace estimator, i.e. $\mathrm{tr}(\frac{df_\theta}{dz}) = \mathbb{E}[\epsilon^T \frac{df_\theta}{dz} \epsilon]$ where $\epsilon \sim \mathcal{N}(0,I)$~\cite{hutchinson1989stochastic,grathwohl2018ffjord}. Subsequent research showed that this estimator could also be used to regularize the Frobenius norm of the Jacobian via $\Vert \frac{df_\theta}{dz}\Vert_F^2 = \mathbb{E}[\Vert\epsilon^T \frac{df_\theta}{dz}\Vert_2^2]$~\cite{finlay2020train}. While computing $\epsilon^T \frac{df_\theta}{dz}$ is expensive in general, as it requires a reverse-mode automatic differentiation evaluation inside the model (leading to higher-order differentiation), in the specific case of FFJORD this term is already required and thus the estimate is a computationally-free regularizer.
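For concreteness, the estimator can be sketched in NumPy on an explicitly materialized Jacobian (illustrative only; in FFJORD the product $\epsilon^T \frac{df_\theta}{dz}$ is instead obtained through a reverse-mode vector-Jacobian product, so the Jacobian is never formed):
\begin{verbatim}
import numpy as np

def hutchinson_trace(J, n_samples=100, seed=0):
    # Monte-Carlo estimate tr(J) ~ E[eps^T J eps], eps ~ N(0, I)
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, J.shape[0]))
    return np.einsum('nd,de,ne->n', eps, J, eps).mean()
\end{verbatim}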
It was later shown that this form of regularization can be extended beyond FFJORD by using higher-order automatic differentiation \cite{kelly2020learning}. This was done by regularizing a heuristic for the local error estimate, namely $\mathcal{R}_K(\theta) = \int_{t_0}^{t_f}\Vert \frac{d^K z(t)}{dt^K}\Vert^2_2 dt$. The authors showed that Taylor-mode automatic differentiation improves the efficiency of calculating this estimator to an $\mathcal{O}(k^2)$ cost, where $k$ is the order of the required derivative, though obtaining the five derivatives required still implies a significant computational increase. In fact, the authors noted that ``when we train with adaptive solvers we do not improve overall training time'', observing a 1.7x slower training time. In this manuscript we show that this slowdown can be all the way up to 10x on the PhysioNet challenge problem.
Here we show how to arrive at a similar regularization heuristic that is applicable to all neural ODE applications with suitable adaptive ODE solvers and requires no higher order automatic differentiation. We will show that this form of regularization is able to significantly improve training times and generalizes to other architectures like neural SDEs.
\subsection{Adaptive Time Stepping using Local Error Estimates}\label{sec:local_error}
Runge-Kutta Methods~\cite{runge1895numerische, kutta1901beitrag} are widely used for numerically approximating the solutions of ordinary differential equations. They are given by a tableau of coefficients $\{A,c,b\}$, where the $s$ stages are combined to produce an estimate for the update at $t+h$:
\begin{align}
\label{eq:rk}
\begin{split}
k_s &= f\left(t+c_s h, z(t) + h\sum_{i=1}^{s-1} a_{si} k_i\right)\\
z(t+h) &= z(t) + h \sum_{i=1}^{s} b_i k_i
\end{split}
\end{align}
For adaptivity, many Runge-Kutta methods include an alternative linear combiner $\tilde{b}_i$ such that $\tilde{z}(t+h) = z(t) + h \sum_{i=1}^{s} \tilde{b}_i k_i$ gives rise to an alternative solution, typically with one order less convergence~\cite{wanner1996solving,fehlberg1968classical,dormand1980family,Tsit5}. A classic result from Richardson extrapolation shows that $E = \Vert\tilde{z}(t+h) - z(t+h)\Vert$ is an estimate of the local truncation error~\citep{ascher1998computer, hairer1}. The goal of adaptive step size methods is to choose a maximal step size $h$ for which this error estimate is below user requested error tolerances. Given the absolute tolerance $atol$ and relative tolerance $rtol$, the solver satisfies the following constraint for determining the time stepping:
\begin{equation}
E \leq \text{atol} + \max(|z(t)|, |z(t+h)|) \cdot \text{rtol}
\end{equation}
The proportion of the error against the tolerance is thus:
\begin{equation}
q = \left\Vert\frac{E}{\text{atol} + \max(|z_n|, |z_{n + 1}|) \cdot \text{rtol}}\right\Vert
\end{equation}
If $q < 1$ then the proposed time step $h$ is accepted, else it is rejected and reduced. In either case, a proportional error control scheme (P-control) proposes $h_{\text{new}} = \eta qh$, while a standard PI-controller of explicit adaptive Runge-Kutta methods can be shown to be equivalent to using:
\begin{equation}
h_{\text{new}} = \eta q_{n-1}^\alpha q_n^\beta h
\end{equation}
where $\eta$ is the safety factor, $q_{n-1}$ denotes the error proportion of the previous step, and $(\alpha,\beta)$ are the tunable PI gain hyperparameters~\cite{wanner1996solving}. Similar embedded-method error estimation schemes have also been derived for stochastic Runge-Kutta integrators for SDEs \cite{rackauckas2017adaptive,rackauckas2020sosri}.
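As a concrete illustration, the following NumPy sketch performs one accept/reject decision with a low-order embedded Heun/Euler 2(1) pair and the textbook proportional exponent $q^{-1/(p+1)}$ (here $p=1$); it is our own minimal construction, not the 5(4) method used in our experiments:
\begin{verbatim}
import numpy as np

def adaptive_step(f, t, z, h, atol=1e-8, rtol=1e-8, eta=0.9):
    k1 = f(t, z)
    k2 = f(t + h, z + h * k1)
    z_high = z + h * (k1 + k2) / 2      # order-2 solution
    z_low = z + h * k1                  # order-1 embedded solution
    E = np.linalg.norm(z_high - z_low)  # local error estimate
    tol = atol + max(np.linalg.norm(z), np.linalg.norm(z_high)) * rtol
    q = max(E / tol, 1e-10)
    h_new = eta * h * q ** (-0.5)       # proportional step-size control
    if q < 1.0:
        return z_high, t + h, h_new     # accept the step
    return z, t, min(h_new, h)          # reject: retry with smaller h
\end{verbatim}
A driver loop simply calls this function repeatedly, passing the returned step size back in, until the final time is reached.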
\subsection{Stiffness Estimation}
While there is no precise definition of stiffness, the definition used in practice is ``stiff equations are problems for which explicit methods don't work''~\cite{wanner1996solving,shampine1979user}. A simplified stiffness index is given by:
\begin{equation}
S = \max_i |\mathrm{Re}(\lambda_i)|
\end{equation}
where $\lambda_i$ are the eigenvalues of the local Jacobian matrix. We note that various measures of stiffness have been introduced over the years, all being variations of conditioning of the pseudospectra~\cite{shampine2007stiff,higham1993stiffness}. The difficulty in defining a stiffness metric is that in each case, some stiff systems like the classic Robertson chemical kinetics or excited Van der Pol equation may violate the definition, meaning all such definitions are (useful) heuristics. In particular, it was shown that for explicit Runge-Kutta methods satisfying $c_x = c_y$ for some internal step, the term
\begin{equation}
\|\lambda\| \approx \left\Vert\frac{ f\left(t+c_x h,\, z(t) + h\sum_{i=1}^{s} a_{xi} k_i\right) - f\left(t+c_y h,\, z(t) + h\sum_{i=1}^{s} a_{yi} k_i\right)}{h\sum_{i=1}^{s} a_{xi} k_i - h\sum_{i=1}^{s} a_{yi} k_i}\right\Vert
\end{equation}
serves as an estimate of $S$~\cite{shampine1977stiffness}. Since each of these terms is already required in the Runge-Kutta updates of Equation \ref{eq:rk}, this gives a computationally-free estimate. This estimate is thus found throughout widely used explicit Runge-Kutta implementations: for example, the dopri method (found in suites like SciPy and Octave) uses it to automatically exit when stiffness is detected~\cite{wanner1996solving}, and switching methods use it to automatically change from explicit Runge-Kutta methods to methods more suitable for stiff equations~\citep{rackauckas2019confederated}.
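For illustration, the estimate can be read off two already-computed stages as in the following NumPy sketch, where the stage arguments \texttt{g\_x} and \texttt{g\_y} (i.e., $z(t)+h\sum_i a_{xi}k_i$ and $z(t)+h\sum_i a_{yi}k_i$) are assumed to be supplied by the solver:
\begin{verbatim}
import numpy as np

def stiffness_estimate(f, t, c, h, g_x, g_y):
    # Shampine-style heuristic from two stages sharing the same
    # abscissa c; the difference quotient approximates
    # max |Re(lambda_i)| of the local Jacobian.
    num = np.linalg.norm(f(t + c * h, g_x) - f(t + c * h, g_y))
    den = np.linalg.norm(g_x - g_y)
    return num / den
\end{verbatim}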
\section{Method}
\label{sec:main_methods}
\subsection{Regularizing Local Error and Stiffness Estimates}
\label{subsec:eest_reg}
Section \ref{sec:local_error} describes how larger local error estimates $E$ lead to reduced step sizes and thus a higher overall cost in neural ODE training and prediction. Given this, we propose regularizing the neural ODE training process by the total local error, in order to learn neural ODEs with step sizes as large as possible. Thus we define the regularizing term:
\begin{equation}\label{eq:reg_E}
R_{E} = \sum_j E_j |h_j|
\end{equation}
summing over $j$ the time steps of the solution. This was done by accumulating the $E_j$ from the internals of the time stepping process at the end of each step. We note that this is similar to the regularization proposed in \cite{kelly2020learning}, namely:
\begin{equation}\label{eq:reg_K}
R_{K} = \int_{t_0}^{t_1} \left\|\frac{d^K z(t)}{dt^K}\right\| dt
\end{equation}
where integrating over the $K^{th}$ derivatives is proportional to the principal (largest) truncation error term of the Runge-Kutta method \cite{hairer1}. However, this formulation requires high-order automatic differentiation (which is then layered with reverse-mode automatic differentiation), which can be an expensive computation \cite{zhang2008computing}, while Equation \ref{eq:reg_E} requires no extra differentiation.
Similarly, the stiffness estimates at each step can be summed as:
\begin{equation}\label{eq:reg_S}
R_{S} = \sum_j S_j %
\end{equation}
giving a computational heuristic for the total stiffness of the equation. Notably, both of these estimates $E_j$ and $S_j$ are already computed during the course of a standard explicit Runge-Kutta solution, making the forward pass calculation of the regularization term computationally free.
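Schematically, the accumulation of both regularizers during a solve can be sketched as follows, where \texttt{step\_fn} is a stand-in for one adaptive solver step (such as the accept/reject procedure sketched in Section \ref{sec:local_error}) that also exposes the internal estimates $E_j$ and $S_j$:
\begin{verbatim}
import numpy as np

def solve_with_regularizers(step_fn, f, z0, t0, t1):
    t, z = t0, np.asarray(z0, dtype=float)
    R_E, R_S = 0.0, 0.0
    while t < t1:
        z, h, E, S = step_fn(f, t, z, t1)  # h: accepted step size
        R_E += E * abs(h)                  # R_E = sum_j E_j |h_j|
        R_S += S                           # R_S = sum_j S_j
        t += h
    # lambda_E * R_E + lambda_S * R_S is added to the training loss
    return z, R_E, R_S
\end{verbatim}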
\subsection{Adjoints of Internal Solver Estimates}
\label{sec:adjoints}
Notice that $E_j = \sum_{i=1}^s (b_i-\tilde{b_i} )k_i$ cannot be constructed directly from the $z(t_j)$ trajectory of the ODE's solution. More precisely, the $k_i$ terms are not defined by the continuous ODE but instead by the chosen steps of the solver method. Continuous adjoint methods for neural ODEs \cite{chen2018neural, zhuang2021mali} only define derivatives in terms of the ODE quantities. This is required in order to exploit properties such as allowing different steps in reverse and reversibility for reduced memory, and in constructing solvers requiring fewer NFEs~\cite{kidger2020hey}. Computing the adjoint of each stage variable $k_i$ can be done, but this is known as discrete sensitivity analysis and is equivalent to automatic differentiation of the solver \cite{zhang2014fatode}. Thus, to calculate the derivative of the solution simultaneously with the derivatives of the solver states, we used direct automatic differentiation of the differential equation solvers for performing the experiments \cite{innes2018don}. Discrete adjoints are known to be more stable than continuous adjoints \cite{zhang2014fatode} and, in the context of neural ODEs, have been shown to stabilize the training process, leading to better fits \cite{gholami2019anode,onken2020discretize}. While more memory intensive than some forms of the continuous adjoint, checkpointing methods can be used to reduce the peak memory \cite{dauvergne2006data}. We note that this approach is equivalent to backpropagation through a fixed time-step discretization if the step sizes are chosen in advance, and we verify in the example code that no additional overhead is introduced.
\section{Experiments}
In this section, we consider the effectiveness of regularizing Neural Differential Equations (NDEs) on their training and prediction timings. We consider the following baselines while evaluating our models:
\begin{enumerate}[itemsep=0.25em]
\item \textbf{Vanilla Neural (O/S)DE} with discrete sensitivities.
\item \textbf{STEER}: Temporal Regularization for Neural ODE models by stochastic sampling of the end time during training~\citep{ghosh2020steer}.
\item \textbf{TayNODE}: Regularizing the $K^{th}$ order derivatives of the Neural ODEs~\cite{kelly2020learning}\footnote{We use the original code formulation of the TayNODE in order to ensure usage of the specially-optimized Taylor-mode automatic differentiation technique \cite{bettencourt2019taylor} in the training process. Given the large size of the neural networks, most of the compute time lies in optimized BLAS kernels, which are the same in both implementations, meaning we do not suspect the library to be a major factor in timing differences beyond the AD specifics.}.
\end{enumerate}
We test our regularization on four tasks -- supervised image classification (Section~\ref{subsec:classificationode}) and time series interpolation (Section~\ref{subsec:ts_interp}) using Neural ODE, and fitting Neural SDE (Section~\ref{subsec:fitneuralsde}) and supervised image classification using Neural SDE (Section~\ref{subsec:classificationsde}). We use DiffEqFlux~\cite{rackauckas2019diffeqflux} and Flux~\cite{innes2018fashionable}
for our experiments.
\subsection{Neural Ordinary Differential Equations}
In the following experiments, we use a Runge-Kutta 5(4) solver~\cite{Tsit5} with absolute and relative tolerances of $1.4 \times 10^{-8}$ to solve the ODEs. To measure the prediction time, we use a test batch size equal to the training batch size.
\subsubsection{Supervised Classification}
\label{subsec:classificationode}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/mnist_node_v2.pdf}
\vspace{-1.2em}
\caption{\textbf{Number of Function Evaluations and Training Accuracy for Supervised MNIST Classification} Regularizing using ERNODE is the most consistent way to reduce the overall number of function evaluations. Using SRNODE alongside ERNODE stabilizes the training at the cost of increased prediction time.}
\vspace{-1em}
\label{fig:mnist_node}
\end{figure}
\begin{table*}[t]
\centering
\begin{adjustbox}{width=0.9\linewidth,center}
\begin{tabular}{llllll}
\toprule
\textbf{Method} & \textbf{Train Accuracy (\%)} & \textbf{Test Accuracy (\%)} & \textbf{Train Time (hr)} & \textbf{Prediction Time (s)} & \textbf{NFE}\\
\midrule
Vanilla NODE & 100.0 $\pm$ 0.00 & 97.94 $\pm$ 0.02 & 0.98 $\pm$ 0.03 & 0.094 $\pm$ 0.010 & 253.0 $\pm$ 3.46\\
STEER & 100.0 $\pm$ 0.00 & 97.94 $\pm$ 0.03 & 1.31 $\pm$ 0.07 & 0.092 $\pm$ 0.002 & 265.0 $\pm$ 3.46\\
TayNODE & 98.98 $\pm$ 0.06 & 97.89 $\pm$ 0.00 & 1.19 $\pm$ 0.07 & 0.079 $\pm$ 0.007 & 080.3 $\pm$ 0.43\\
\hdashline
\textit{SRNODE (Ours)} & 100.0 $\pm$ 0.00 & 98.08 $\pm$ 0.15 & 1.24 $\pm$ 0.06 & 0.094 $\pm$ 0.003 & 259.0 $\pm$ 3.46\\
\textit{ERNODE (Ours)} & 99.71 $\pm$ 0.28 & 97.32 $\pm$ 0.06 & 0.82 $\pm$ 0.02 & 0.060 $\pm$ 0.001 & 177.0 $\pm$ 0.00\\
\hdashline
STEER + \textit{SRNODE} & 100.0 $\pm$ 0.00 & 97.88 $\pm$ 0.06 & 1.55 $\pm$ 0.27 & 0.101 $\pm$ 0.009 & 275.0 $\pm$ 12.5\\
STEER + \textit{ERNODE} & 99.91 $\pm$ 0.02 & 97.61 $\pm$ 0.11 & 1.37 $\pm$ 0.11 & 0.086 $\pm$ 0.018 & 197.0 $\pm$ 9.17\\
\hdashline
\textit{SRNODE} + \textit{ERNODE} & 99.98 $\pm$ 0.03 & 97.77 $\pm$ 0.05 & 1.37 $\pm$ 0.04 & 0.081 $\pm$ 0.006 & 221.0 $\pm$ 17.3\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{\textbf{MNIST Image Classification using Neural ODE} Using ERNODE obtains a training and prediction speedup of 16.33\% and 37.78\%, respectively, at only 0.6\% reduced test accuracy. SRNODE does not help in isolation but is effective when combined with ERNODE, reducing the prediction time by 14.44\% while incurring a test-accuracy reduction of only 0.17\%.}
\label{tab:mnist_node}
\end{table*}
\textbf{Training Details} We train a Neural ODE and a linear classifier to map flattened MNIST images to their corresponding labels. Our model uses a two-layer neural network $f_{\theta_1}$ as the ODE dynamics, followed by a linear classifier $g_{\theta_2}$, identical to the architecture used in \citet{kelly2020learning}.
\begin{align}
z_{\theta_1}(x, t) &= \tanh(W_1 [x; t] + B_1)\\
f_{\theta_1}(x, t) &= \tanh(W_2 [z_{\theta_1}(x, t); t] + B_2)\\
g_{\theta_2}(x, t) &= \sigma(W_3 x + B_3)
\end{align}
where the parameters $W_1 \in \mathbb{R}^{100 \times 785}$, $B_1 \in \mathbb{R}^{100}$, $W_2 \in \mathbb{R}^{784 \times 101}$, $B_2 \in \mathbb{R}^{784}$, $W_3 \in \mathbb{R}^{10 \times 784}$, and $B_3 \in \mathbb{R}^{10}$. We use a batch size of $512$ and train the model for $75$ epochs using Momentum~\cite{qian1999momentum} with learning rate of $0.1$ and mass of $0.9$, and a learning rate inverse decay of $10^{-5}$ per iteration. For Error Estimate Regularization, we perform exponential annealing of the regularization coefficient from $100.0$ to $10.0$ over $75$ epochs. For Stiffness Regularization, we use a constant coefficient of $0.0285$.
\textbf{Baselines} For the STEER baseline, we train the models by stochastically sampling the end time point from $\mathcal{U}(T - b, T + b)$ where $T = 1.0$ and $b = 0.5$\footnote{$b=0.25$ was also considered but final results were comparable}. We observe no training improvement but there is a minor improvement in prediction time. For the TayNODE baseline, we train the model with a reduced batch size of 100\footnote{Batch Size was reduced to ensure we reach a comparable train/test accuracy as the other trained models.}, $\lambda = 3.02 \times 10^{-3}$, and regularizing $3^{rd}$ order derivatives.
\textbf{Results} Figure~\ref{fig:mnist_node} visualizes the training accuracy and number of function evaluations over training. Table~\ref{tab:mnist_node} summarizes the metrics from the trained baseline and proposed models -- Error Estimate Regularized Neural ODE (\textit{ERNODE}) and Stiffness Regularized Neural ODE (\textit{SRNODE}). Additionally, we perform ablation studies by composing various regularization strategies.
\subsubsection{Time Series Interpolation}
\label{subsec:ts_interp}
\begin{table*}[t]
\centering
\begin{adjustbox}{width=0.9\linewidth,center}
\begin{tabular}{llllll}
\toprule
\textbf{Method} & \textbf{Train Loss ($\times 10^{-3}$)} & \textbf{Test Loss ($\times 10^{-3}$)} & \textbf{Train Time (hr)} & \textbf{Prediction Time (s)} & \textbf{NFE}\\
\midrule
Vanilla NODE & 3.48 $\pm$ 0.00 & 3.55 $\pm$ 0.00 & 1.75 $\pm$ 0.39 & 0.53 $\pm$ 0.12 & 733.0 $\pm$ 84.29 \\
STEER & 3.43 $\pm$ 0.02 & 3.48 $\pm$ 0.01 & 1.62 $\pm$ 0.26 & 0.54 $\pm$ 0.06 & 699.0 $\pm$ 141.1\\
TayNODE & 4.21 $\pm$ 0.02 & 4.21 $\pm$ 0.01 & 12.3 $\pm$ 0.32 & 0.22 $\pm$ 0.02 & 167.3 $\pm$ 11.93 \\
\hdashline
\textit{SRNODE (Ours)} & 3.52 $\pm$ 1.44 & 3.58 $\pm$ 0.05 & 0.87 $\pm$ 0.09 & 0.20 $\pm$ 0.01 & 273.0 $\pm$ 0.000\\
\textit{ERNODE (Ours)} & 3.51 $\pm$ 0.00 & 3.57 $\pm$ 0.00 & 0.94 $\pm$ 0.13 & 0.21 $\pm$ 0.02 & 287.0 $\pm$ 17.32\\
\hdashline
STEER + \textit{SRNODE} & 3.67 $\pm$ 0.02 & 3.73 $\pm$ 0.02 & 0.89 $\pm$ 0.08 & 0.20 $\pm$ 0.01 & 271.0 $\pm$ 12.49\\
STEER + \textit{ERNODE} & 3.41 $\pm$ 0.02 & 3.48 $\pm$ 0.01 & 1.03 $\pm$ 0.25 & 0.24 $\pm$ 0.05 & 269.0 $\pm$ 33.05 \\
\hdashline
\textit{SRNODE} + \textit{ERNODE} & 3.48 $\pm$ 0.11 & 3.56 $\pm$ 0.03 & 1.12 $\pm$ 0.08 & 0.21 $\pm$ 0.01 & 263.0 $\pm$ 12.49 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{\textbf{Physionet Time Series Interpolation} All the regularized variants of Latent ODE (except STEER) have comparable prediction times. Additionally, the training time is reduced by $36\% - 50\%$ when using one of our proposed regularizers, while TayNODE increases the training time by $7$x. Overall, SRNODE has the best training and prediction timings while incurring only a $0.85\%$ increase in test loss.}
\label{tab:latent_ode}
\end{table*}
\textbf{Training Details} We use the Latent ODE~\citep{chen2018neural} model with an RNN encoder to learn the trajectories of ICU patients from the PhysioNet Challenge 2012 dataset~\citep{silva2012predicting}. We use the preprocessed data provided by \citet{kelly2020learning} to ensure consistency of results. For every independent run, we perform an $80:20$ split of the data for training and evaluation.
Our model architecture is similar to the encoder-decoder models used in \citet{rubanova2019latent}. We use a 20-dimensional latent state and a 40-dimensional hidden state for the recognition model. Our ODE dynamics are given by a 4-layer neural network with 50 units and tanh activation. We train our models for $300$ epochs with a batch size of $512$, using Adamax~\citep{kingma2017adam} with a learning rate of $0.01$ and an inverse decay of $10^{-5}$. We minimize the negative log-likelihood of the predictions and perform KL annealing with a coefficient of $0.99$.
For Error Estimate Regularization, we perform exponential annealing of the regularization coefficient from $1000.0$ to $100.0$ over $300$ epochs. We note that using $R_{E} = \sum_j E_j^2$, instead of $R_{E} = \sum_j E_j |h_j|$, yields similar results with a constant regularization coefficient of $100.0$. For Stiffness Regularization, we use a constant coefficient of $0.285$.
\textbf{Baselines} For the STEER baseline, we stochastically sample the timestep at which to evaluate the difference between interpolated and ground truth data. Essentially, for the interval $(t_i, t_{i + 1})$, we evaluate the model at $\mathcal{U}(t_{i + 1} - \frac{t_{i + 1} - t_i}{2}, t_{i + 1} + \frac{t_{i + 1} - t_i}{2})$ and compare with the truth at $t_{i + 1}$. We sample end points after every iteration of the model. STEER reduces the training time but has no significant effect on the prediction time. TayNODE was trained by regularizing the $2^{nd}$ order derivatives with a coefficient of $0.01$ for 300 epochs and a batch size of $512$. TayNODE had an exceptionally high training time, $\sim 7\times$ that of the unregularized baseline.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/latent_ode_v2.pdf}
\vspace{-1.2em}
\caption{\textbf{Number of Function Evaluations and Training Loss for Physionet Time Series Interpolation} Regularized and Unregularized variants of the model have very similar trajectories for the training loss. We do notice a significant difference in the NFE plot. Using either Error Estimate Regularization or Stiffness Regularization is able to bound the NFE to $< 300$, compared to $\sim 700$ for STEER or unregularized Latent ODE.}
\vspace{-1em}
\label{fig:latent_ode}
\end{figure}
\textbf{Results} Figure~\ref{fig:latent_ode} shows the training MSE loss and the NFE counts for the considered models. Table~\ref{tab:latent_ode} summarizes the metrics and wall clock timings for the baselines, proposed regularizers and their compositions with previously proposed regularizers. We observe that SRNODE provides the most significant speedup while ERNODE attains similar losses at slightly higher training and prediction times.
\subsection{Neural Stochastic Differential Equations}
In these experiments, we use SOSRI/SOSRI2~\citep{rackauckas2020sosri} to solve the Neural SDEs. The wall clock timings represent runs on a CPU.
\subsubsection{Fitting Spiral Differential Equation}
\label{subsec:fitneuralsde}
\begin{table*}[t]
\centering
\begin{adjustbox}{width=0.75\linewidth,center}
\begin{tabular}{lllll}
\toprule
\textbf{Method} & \textbf{Mean Squared Loss} & \textbf{Train Time (s)} & \textbf{Prediction Time (s)} & \textbf{NFE}\\
\midrule
Vanilla NSDE & 0.0217 $\pm$ 0.0088 & 178.95 $\pm$ 20.22 & 0.07553 $\pm$ 0.0186 & 528.67 $\pm$ 6.11\\
\hdashline
\textit{SRNSDE (Ours)} & 0.0204 $\pm$ 0.0091 & 166.42 $\pm$ 14.51 & 0.07250 $\pm$ 0.0017 & 502.00 $\pm$ 4.00 \\
\textit{ERNSDE (Ours)} & 0.0227 $\pm$ 0.0090 & 173.43 $\pm$ 04.18 & 0.07552 $\pm$ 0.0008 & 502.00 $\pm$ 4.00\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{\textbf{Spiral SDE} The ERNSDE attains a loss about 4\% higher than the vanilla Neural SDE while reducing the training time and the number of function evaluations. Using SRNSDE reduces both the training and prediction times, by 7\% and 4\% respectively.}
\label{tab:fit_spiral_sde}
\end{table*}
\textbf{Training Details} In this experiment, we consider training a Neural SDE to mimic the dynamics of the Spiral Stochastic Differential Equation with Diagonal Noise (DSDE). The Spiral DSDE is prescribed by the following equations:
\begin{align}
\begin{split}
du_1 &= -\alpha u_1^3 dt + \beta u_2^3 dt + \gamma u_1 dW\\
du_2 &= -\beta u_1^3 dt - \alpha u_2^3 dt + \gamma u_2 dW
\end{split}
\end{align}
where $\alpha = 0.1$, $\beta = 2.0$, and $\gamma = 0.2$. We generate data for $10000$ trajectories at $30$ uniformly spaced time points in $t \in [0, 1]$ (Figure~\ref{fig:fit_neural_sde}). We parameterize our drift and diffusion functions using neural networks $f_\theta$ and $g_\phi$ via:
\begin{align}
\begin{split}
f_\theta(x, t) &= W_2 \tanh(W_1 x^3 + B_1) + B_2\\
g_\phi(x, t) &= W_3 x + B_3
\end{split}
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/spiral_sde.pdf}
\vspace{-1.2em}
\caption{\textbf{Fitting a Neural SDE on Spiral SDE Data.} Regularizing has minimal effect on the learned dynamics with reduced training and prediction cost.}
\vspace{-1.2em}
\label{fig:fit_neural_sde}
\end{figure}
where the parameters $W_1 \in \mathbb{R}^{50 \times 2}$, $B_1 \in \mathbb{R}^{50}$, $W_2 \in \mathbb{R}^{2 \times 50}$, $B_2 \in \mathbb{R}^{2}$, $W_3 \in \mathbb{R}^{2 \times 2}$, and $B_3 \in \mathbb{R}^{2}$. To fit the drift and diffusion functions to the simulated data, we used a generalized method of moments loss function \cite{luck2016generalized,jeisman2006estimation}. Our objective is to train these parameters to minimize the $L_2$ distance between the means ($\mu_i$) and variances ($\sigma^2_i$) of the predicted and real data. Let $\hat{\mu}_i$ and $\hat{\sigma}^2_i$ denote the mean and variance, respectively, of the predicted trajectories at the $i^{th}$ time point.
\begin{equation}
\mathcal{L}(u_0; \theta, \phi) = \sum_{i = 1}^{30} \left[(\mu_i - \hat{\mu}_i)^2 + (\sigma^2_i - \hat{\sigma}^2_i)^2\right] + \lambda_r R_E
\end{equation}
The models were trained using AdaBelief Optimizer~\cite{zhuang2020adabelief} with a learning rate of $0.01$ for $250$ iterations. We generate 100 trajectories for each iteration to compute the $\hat{\mu}_i$s and $\hat{\sigma}^2_i$s.
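For concreteness, the following is a minimal sketch of the data generation and moment-matching loss under a simple Euler--Maruyama discretization (the actual experiments use SOSRI/SOSRI2; this is our illustration, not the released code).
\begin{verbatim}
import numpy as np

alpha, beta, gamma = 0.1, 2.0, 0.2

def simulate(u0, n_points=30, T=1.0):
    # Euler-Maruyama integration of the spiral DSDE with
    # diagonal noise, returning (n_points, 2) samples.
    dt = T / (n_points - 1)
    u, traj = np.array(u0, dtype=float), []
    traj.append(u.copy())
    for _ in range(n_points - 1):
        dW = np.random.normal(scale=np.sqrt(dt), size=2)
        drift = np.array([-alpha * u[0]**3 + beta * u[1]**3,
                          -beta * u[0]**3 - alpha * u[1]**3])
        u = u + drift * dt + gamma * u * dW
        traj.append(u.copy())
    return np.stack(traj)

def gmm_loss(pred_trajs, true_trajs):
    # pred_trajs, true_trajs: (n_traj, 30, 2) arrays; match the
    # per-timestep means and variances of the two ensembles.
    mu, s2 = true_trajs.mean(0), true_trajs.var(0)
    mu_hat, s2_hat = pred_trajs.mean(0), pred_trajs.var(0)
    return np.sum((mu - mu_hat)**2 + (s2 - s2_hat)**2)
\end{verbatim}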
\textbf{Results} Table~\ref{tab:fit_spiral_sde} summarizes the final results for the trained models over 3 different random seeds. We notice that even for this ``toy'' problem, we can marginally improve the training time while incurring a minimal penalty on the final loss.
\subsubsection{Supervised Classification}
\label{subsec:classificationsde}
\textbf{Training Details} We train a Neural SDE model to map flattened MNIST images to their corresponding labels. Our drift function is a two-layer neural network $f_{\theta_2}$ and the diffusion function is a linear map $g_{\theta_3}$. We use two additional linear maps -- $a_{\theta_1}$, mapping the flattened image to the hidden dimension, and $b_{\theta_4}$, mapping the output of the Neural SDE to the logits.
\begin{align}
a_{\theta_1}(x, t) &= W_1 x + B_1\\
f_{\theta_2}(x, t) &= W_3 ~ \tanh(W_2 ~ x + B_2) + B_3\\
g_{\theta_3}(x, t) &= W_4 ~ x + B_4\\
b_{\theta_4}(x, t) &= W_5 ~ x + B_5
\end{align}
where the parameters $W_1 \in \mathbb{R}^{32 \times 784}$, $B_1 \in \mathbb{R}^{32}$, $W_2 \in \mathbb{R}^{64 \times 32}$, $B_2 \in \mathbb{R}^{64}$, $W_3 \in \mathbb{R}^{32 \times 64}$, $B_3 \in \mathbb{R}^{32}$, $W_4 \in \mathbb{R}^{10 \times 32}$, and $B_4 \in \mathbb{R}^{10}$. We use a batch size of $512$ and train the model for $40$ epochs using Adam~\cite{kingma2017adam} with a learning rate of $0.01$ and an inverse decay of $10^{-5}$ per iteration. When making predictions, we use the mean logits across $10$ trajectories. For Error Estimate and Stiffness Regularization, we use constant coefficients of $10.0$ and $0.1$ respectively.
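As an illustration of the prediction rule, the following minimal sketch averages the logits over $10$ sampled trajectories; \texttt{solve\_sde} is a hypothetical routine that integrates the Neural SDE once for input \texttt{x} and returns the logits.
\begin{verbatim}
import numpy as np

def predict(solve_sde, x, n_traj=10):
    # Average logits over independent SDE trajectories,
    # then pick the most likely class.
    logits = np.stack([solve_sde(x) for _ in range(n_traj)])
    return int(np.argmax(logits.mean(axis=0)))
\end{verbatim}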
\textbf{Results} Figure~\ref{fig:mnist_nsde} shows the variation in NFE and training error during training. Table~\ref{tab:mnist_nsde} summarizes the final metrics and timings for all the trained models. We observe that SRNSDE does not improve the training/prediction time, similar to the MNIST Neural ODE experiment (Section~\ref{subsec:classificationode}). However, ERNSDE gives us training and prediction speedups of $33.7\%$ and $52.02\%$ respectively, at the cost of $0.7\%$ reduced test accuracy.
\begin{table*}[t]
\centering
\begin{adjustbox}{width=0.85\linewidth,center}
\begin{tabular}{llllll}
\toprule
\textbf{Method} & \textbf{Train Accuracy (\%)} & \textbf{Test Accuracy (\%)} & \textbf{Train Time (hr)} & \textbf{Prediction Time (s)} & \textbf{NFE}\\
\midrule
Vanilla NSDE & 98.97 $\pm$ 0.11 & 96.95 $\pm$ 0.11 & 6.32 $\pm$ 0.19 & 15.07 $\pm$ 0.93 & 411.33 $\pm$ 6.11\\
\hdashline
\textit{SRNSDE (Ours)} & 98.79 $\pm$ 0.12 & 96.80 $\pm$ 0.07 & 8.54 $\pm$ 0.37 & 14.50 $\pm$ 0.40 & 382.00 $\pm$ 4.00\\
\textit{ERNSDE (Ours)} & 98.16 $\pm$ 0.11 & 96.27 $\pm$ 0.35 & 4.19 $\pm$ 0.04 & 07.23 $\pm$ 0.14 & 184.67 $\pm$ 2.31\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{\textbf{MNIST Image Classification using Neural SDE} ERNSDE obtains a training and prediction speedup of 33.7\% and 52.02\% respectively, at only 0.7\% reduced prediction accuracy.}
\vspace{-1.5em}
\label{tab:mnist_nsde}
\end{table*}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{assets/mnist_nsde_v2.pdf}
\vspace{-1.2em}
\caption{\textbf{Number of Function Evaluations and Training Error for Supervised MNIST Classification using Neural SDE} ERNSDE reduces the NFE below 300 with minimal error change while the unregularized version has NFE $\sim 400$.}
\vspace{-1em}
\label{fig:mnist_nsde}
\end{figure}
\vspace{-1em}
\section{Discussion}
Numerical analysis has had over a century of theoretical developments leading to efficient adaptive methods for solving many common nonlinear equations such as differential equations. Here we demonstrate that by using the knowledge embedded within the heuristics of these methods we can accelerate the training process of neural ODEs.
We note that on the larger PhysioNet and MNIST examples we saw significant speedups, while on the smaller differential equation examples we saw only minor performance improvements. This showcases how the NFE becomes a better estimate of the total compute time as the cost of evaluating the ODE $f$ (and SDE $g$) grows with model size.
This result motivates efforts in differentiable programming \cite{wang2018backpropagation,abadi2019simple,rackauckas2020generalized}, which enables direct differentiation of solvers, since utilizing the solver's heuristics may be crucial in the development of advanced techniques. This idea could be straightforwardly extended not only to other forms of differential equations, but also to other ``implicit layer'' machine learning methods. For example, Deep Equilibrium Models (DEQ) \cite{bai2019deep} model the system as the solution to an implicit function via a nonlinear solver like Broyden's or Newton's method. Heuristics like the ratio of the residuals have commonly been used as a convergence criterion and as a work estimate for the difficulty of solving a particular nonlinear equation \cite{wanner1996solving}, and thus could similarly be used to regularize for learning DEQs whose forward passes are faster to solve. Similarly, optimization techniques such as BFGS \cite{kelley1999iterative} contain internal estimates of the Hessian which can be used to regularize the stiffness of ``optimization as layers'' machine learning architectures like OptNet \cite{amos2017optnet}. However, in these cases we note that continuous adjoint techniques have a significant computational advantage over discrete adjoint methods, because the continuous adjoint can be computed directly at the point of the solution while discrete adjoints would require differentiating through the iteration process. Thus, while a similar regularization would exist in these contexts, in the case of differential equations the continuous and discrete adjoints share the same computational complexity, which is not the case for methods that iterate to convergence. Further study of these applications would be required to ascertain the effectiveness in accelerating the training process, though by extrapolation one may guess that at least the forward pass would be accelerated.
\vspace{-1em}
\section{Limitations}
While these experiments have demonstrated major performance improvements, it is pertinent to point out the limitations of the method. One major point to note is that this only applies to learning neural ODEs as maps $z(0) \mapsto z(1)$, as is done in machine learning applications of the architecture \cite{chen2018neural}. Indeed, a neural ODE as an ``implicit layer'' for predictions in machine learning does not require identification of dynamical mechanisms. However, if the purpose is to learn the true governing dynamics of a physical system from time series data, this form of regularization would bias the result, dampening higher frequency responses and leading to an incorrect system identification. Approaches which embed neural networks into solvers could be used in such cases \cite{shen2020deep,poli2020hypersolvers}. Indeed, we note that such hypersolver approaches could be combined with the ERNODE regularization on machine learning prediction problems, which could be a fruitful avenue of research. Lastly, we note that while either local error or stiffness regularization was effective on each chosen equation, neither was effective on all equations, and at this time there does not seem to be a clear a priori indicator of which regularization is necessary for a given problem. While it seems that error regularization was more effective on the image classification tasks and stiffness regularization was more effective on the time series task, we believe more experiments will be required to ascertain whether this is a common phenomenon, possibly worthy of theoretical investigation.
\vspace{-1em}
\section{Conclusion}
Our studies reveal that error estimate regularization provides a consistent way to improve the training/prediction time of neural differential equations. In our experiments, we see an average improvement of $1.4\times$ in training time and $1.8\times$ in prediction time when using error estimate regularization. Overall, we provide conclusive evidence that cheap and accurate cost estimates obtained by white-boxing differential equation solvers can be as effective as expensive higher-order regularization strategies. Together, these results demonstrate a generalizable idea for combining differentiable programming with algorithm heuristics to improve training speeds in a way that cannot be done with continuous adjoint techniques. Thus, even if a derivative can be defined for a given piece of code, our approach shows that differentiating the solver can still have major advantages, because it exposes the solver's internal information about stability and performance.
\vspace{-1em}
\section{Acknowledgements}
The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001222. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
\section{Introduction}
Reading comprehension is the ability of a human or a machine to understand a passage. A common benchmark for evaluating this ability is answering specific questions about the passage \cite{DBLP:conf/emnlp/RajpurkarZLL16}. Generally, this problem can involve single or multiple documents as context (containing the relevant information needed to understand and answer the question), a question (a sentence with at least one asking parameter), and an answer (which is the parameter value of the question).
In the task of Reading Comprehension of Abstract Meaning (ReCAM), we have one passage as context, one question, and five candidate answers \cite{zheng-2021-semeval-task4}. The goal is to identify the correct answer based on the context and the given question. A sample of the data is shown in Table \ref{table_data}. For each instance of the data, there is a passage, a question with a missing word that should be filled in based on the passage, and five candidate answers.
\begin{table}[]
\centering
\begin{tabularx}{0.5\textwidth}{|l|X|}
\hline
\textbf{Passage} & ... observers have even named it after him, ``Abenomics''. It is based on three key pillars - the ``three arrows'' of monetary policy, fiscal stimulus and structural reforms in order to ensure long-term sustainable growth in the world's third-largest economy. In this weekend's upper house elections ... \\\hline
\textbf{Question} & Abenomics: The \textit{\textbf{@Placeholder}} and the risks\\\hline
\textbf{Answer} & (A) chances ~ (B) prospective ~ (C) security ~ \textbf{(D) objectives} ~ (E) threats\\\hline
\end{tabularx}
\caption{\label{table_data} An instance of the data.}
\end{table}
The task is divided into two subtasks: imperceptibility and non-specificity \cite{zheng-2021-semeval-task4}.
\begin{itemize}
\item Imperceptibility: this level of abstractness refers to ideas and concepts that are distant from immediate perception, such as culture, economics, and politics.
\item Non-specificity: in contrast to concrete words, this subtask includes more abstract words that focus on a different type of definition; for example, a concrete word like `cow' could be interpreted as an `animal', which is considered a more abstract word \cite{changizi2008economically}.
\end{itemize}
The main challenges of this task are abstract meaning representation and machine reading comprehension. This is the main reason we utilize contextualized language representation models to tackle the abstract meaning representation problem.
In this paper, we use an end-to-end deep contextualized architecture to model this task. The model is also capable of considering more than one passage as context, and more than five candidate answers. Since we use the long-document transformer model (Longformer \cite{beltagy2020longformer}), there is effectively no limit on the context passage length. We evaluated this model on both subtask 1 and subtask 2, achieving 70\% and 64\% accuracy, respectively. This is an improvement of roughly 40 percentage points over the baseline, a Gated Attention (GA) model \cite{zheng-2021-semeval-task4}.
The rest of the paper is organized as follows: Section 2 describes the related work and background. Section 3 describes the proposed method. Section 4 presents the evaluation metrics used, along with a brief discussion, followed by the conclusion and future work in Section 5.
\section{Background and Related Works}
Many approaches have been presented in the literature, from pipeline-based models to end-to-end ones, and each module has been well investigated, from rule-based models to deep learning ones. Despite the various configurations presented in the literature to model this problem, most systems consist of three modules \cite{DBLP:journals/corr/abs-2001-01582}:
\begin{itemize}
\item Language representation: this module is responsible for encoding the inputs. The context, question, and answer need to be represented as numeric values for computational algorithms to operate on them. Dense vectorized representations are the most popular methods, which allow us to use the majority of machine learning algorithms.
\item Reasoning: this module is used to find demonstrations of why the answer is assumed to be valid. It can also be used as a limiter for searchable context.
\item Prediction: this module aims to generate, retrieve or select the correct answer based on the task description.
\end{itemize}
Recent studies are reviewed below with respect to these modules, with the last two modules merged. Finally, the Longformer model, the mainstay of this paper, is presented.
\subsection{Word and text representation}
\par One of the most important problems in NLP is representation learning. The earliest models for word representation in the deep learning era were those proposed in \cite{pennington2014glove} and \cite{mikolov2013distributed}, which utilized the weights learned for an auxiliary task (a simplified version of language modeling) as word representations. Similarly, the methods proposed in \cite{le2014distributed} and \cite{liu2015topical} utilized a similar structure for sentence, paragraph, or document representation learning.
\par While these methods were quite effective, it has been shown that using neural language models as a form of word representation results in much better, context-aware representations. In \cite{howard2018universal} it was shown that fine-tuning language models as sentence encoders results in a significant performance improvement. At the same time, \cite{peters2018deep} used language models directly as word representations, which resulted in significant improvements. In \cite{devlin2018bert}, a transformer model was trained on the masked language modeling task, which resulted in significant improvements, surpassing human performance on many NLP tasks. One of the shortcomings of transformers is the lack of a memory mechanism, which results in a (theoretically) smaller receptive field compared with LSTMs. In \cite{beltagy2020longformer}, this shortcoming was addressed by improving the self-attention mechanism in transformers so that it has a (theoretically) unbounded receptive field. More details are presented later in this section.
\subsection{Natural language understanding}
\par Natural language understanding (NLU) is an umbrella term, referring to any tasks that require machine comprehension.
Compared to other NLP tasks, NLU requires the model to be able to understand and reason about the data \cite{semaan2012natural}.
While great progress has been made in this field by using contextual word representation \cite{devlin2018bert},
it has been found that designing the model itself must not be neglected \cite{zhu2018sdnet}. On the other hand,
it has been shown that utilizing a transfer learning setting to share knowledge
between different NLU tasks results in better performance with fewer data and fewer parameters \cite{pilault2020conditionally},
which proves a significant similarity between these tasks.
\subsection{The Longformer}
Deep contextualized language models like BERT \cite{devlin-etal-2019-bert} have been well investigated in the literature and have achieved state-of-the-art results on various tasks. However, these models suffer from performance limitations due to their self-attention layer, whose space and time complexity is quadratic in the sequence length. The Longformer replaces this full self-attention with a sparse attention pattern, so the complexity scales linearly. To retain model quality compared to the base models, a global attention component is added, which allows the model to significantly outperform the state of the art on long-document (passage) tasks while remaining competitive on normal-length documents. This configuration improves performance on both normal and lengthy inputs, making it a good alternative for tasks with large inputs. The model has also been evaluated on a similar task on the WikiHop dataset \cite{welbl2018constructing}, improving the state-of-the-art accuracy \cite{beltagy2020longformer}.
\section{Method}
As mentioned in section 1, given a passage, a question, and a set of answers to the question, the goal is to predict the correct answer among the candidates, which can be seen as a benchmark to evaluate how well the model can comprehend the abstract meaning. To do so, we considered an end-to-end deep learning architecture based on the transformer architecture.
Specifically, we used transformer-based contextual word embeddings to better discover and encode the information contained in the passage. In our model, both subtasks use the same architecture, shown in Figure~\ref{fig:recam_model}, although we did not experiment with multi-task learning. The word representation models are fine-tuned on the data for better performance; fine-tuning allows us to extract additional task-related information, which can result in better accuracy in the evaluation phase.
\begin{figure*}
\centering
\includegraphics[width=0.46\textwidth]{recam-model.png}
\caption{The model architecture. The concatenated input vector is encoded using the base model (in our case, RoBERTa with the Longformer attention pattern). Global attention \cite{luong2015effective} is applied to the question and candidate answer representations with respect to the passage as the context. The logit (score) of each \texttt{<ent>} token is calculated using a linear transformation, and the prediction distribution over the answer candidates (\texttt{<ent>} tokens) is output by a softmax layer.}
\label{fig:recam_model}
\end{figure*}
To model this problem, let $c=\{c_1, c_2, ..., c_I\}$ denote the passage as the context, where $c_i$ corresponds to the $i^{th}$ token (word or subword, depending on the tokenization technique used) and $I$ is the number of tokens in the passage. Similarly, the question is denoted $q=\{q_1, q_2, ..., q_{K}\}$, where $K$ is the length of the question and $q_k$ corresponds to the $k^{th}$ token of the question. Each candidate answer is denoted by $e^{j}$ and consists of a single abstract word ($j \in \{1, 2, ..., 5\}$). We then concatenate the question and the candidates as:
\begin{equation}
a = [q; e^{1}; e^{2}; ... ; e^{5}].
\end{equation}
The size of this sequence is $A = K + 5$, as we have exactly 5 candidates; in general, the number of candidates can vary with the dataset.
Note that we introduce special tokens to separate the context, the question, and the candidates, similar to \cite{beltagy2020longformer}. Specifically, we introduce the tokens \verb|<s>| and \verb|</s>| for separating the context, \verb|<q>| and \verb|</q>| for separating the question, and the tokens \verb|<ent>| and \verb|</ent>| for separating the candidates from each other. In the case of multiple passages, all passages are concatenated to form a single context. These tokens are randomly initialized and fine-tuned.
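For clarity, a minimal sketch (our illustration) of how the input sequence is assembled with these separator tokens:
\begin{verbatim}
def build_input(passage_tokens, question_tokens, candidates):
    # Wrap the context, question, and each candidate answer
    # in the corresponding special tokens.
    seq = ["<s>"] + passage_tokens + ["</s>"]
    seq += ["<q>"] + question_tokens + ["</q>"]
    for cand in candidates:   # five candidates in ReCAM
        seq += ["<ent>", cand, "</ent>"]
    return seq
\end{verbatim}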
We used the Longformer model introduced in \cite{beltagy2020longformer} as the pre-trained contextual embedding model in our method. Since the context can be too long, we split the context sequence into separate chunks. Each chunk length is chosen so that the chunk, with the sequence $a$ appended, fills the maximum sequence length the model accepts; in fact, $model\_max\_length = len(chunk) + len(a)$. Denoting each chunk by $c^l$, the input sequence is:
\begin{equation}
b = [c^l; a]
\end{equation}
where the full context is $c=\{c^1, c^2, ..., c^L\}$ and $c^L$ is the last chunk. The size of this sequence is $B = len(c^l) + A$.
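A minimal sketch (our illustration) of the chunking rule: each chunk is sized so that, with $a$ appended, it exactly fills the model's maximum sequence length.
\begin{verbatim}
def make_chunks(context_tokens, a_tokens, model_max_length=4096):
    # model_max_length = len(chunk) + len(a)
    chunk_len = model_max_length - len(a_tokens)
    return [context_tokens[i:i + chunk_len] + a_tokens
            for i in range(0, len(context_tokens), chunk_len)]
\end{verbatim}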
After feeding the input $b$ to the Longformer model, we apply global attention only on $a$ (the concatenated question and answer candidates); the rest is the context. As the Longformer utilizes a base model (in our case, RoBERTa with the sparse attention pattern in place of full self-attention), we denote it by the $basemodel$ function, which outputs the encoded sequence of the input. If $GAttn$ denotes the global attention function, we have:
\begin{equation}
d_i = basemodel(b)
\end{equation}
\begin{equation}
g_i = GAttn(d_i) \cdot \mathbf{1}(i \in a)
\end{equation}
where $d_i$ is the raw output vector for each input token, and the global attention function is applied only when token $i$ belongs to the sequence $a$ (i.e., it is a question or answer candidate token). Then, we extract the outputs corresponding to the question and candidate tokens, i.e., we have:
\begin{equation}
h_j = GAttn(a,c^l)
\end{equation}
Finally, we obtain the logit of each candidate (the \verb|<ent>| tokens) as $x_j$ ($x_j = h_j$ if $j$ corresponds to a candidate), average over different chunks, and apply a linear transformation:
\begin{equation}
f_j = v^T x_j
\end{equation}
where the vector $v$ is trainable and $f_j$ is the score of each candidate. The probability distribution over the candidates is calculated using a softmax layer on the logits, and the predicted answer is the argmax of the softmax output. We fine-tuned the model using the cross-entropy loss.
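Putting the pieces together, the following is a minimal sketch (our illustration, not the released code) of the scoring pipeline; \texttt{encode} is a hypothetical wrapper around the Longformer that returns one vector per input token, with global attention applied to the question and candidate positions.
\begin{verbatim}
import numpy as np

def score_candidates(encode, chunks, ent_positions, v):
    # Read out the <ent> token vectors from every chunk,
    # average over chunks, score with the trainable vector v,
    # and normalize with a softmax.
    x = np.mean([encode(b)[ent_positions] for b in chunks],
                axis=0)            # (5, hidden)
    logits = x @ v                 # (5,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs, int(np.argmax(probs))
\end{verbatim}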
\section{Evaluation}
\begin{table*}
\centering
\begin{tabular}{lrrr}
\hline \textbf{Metrics} & \textbf{Baseline(GA)} & \textbf{BERT} & \textbf{Our Method} \\ \hline
Accuracy & 23.01\% & 63.43\% & \textbf{70.30\%} \\
Macro Avg F1 & 22.83\% & 63.38\% & \textbf{70.23\%} \\
Weighted Avg F1 & 22.76\% & 63.40\% & \textbf{70.27\%} \\
\hline
\end{tabular}
\caption{\label{table_subtask1} Subtask1 evaluation metrics on the test set }
\end{table*}
\begin{table*}
\centering
\begin{tabular}{lrrr}
\hline \textbf{Metrics} & \textbf{Baseline(GA)} & \textbf{BERT} & \textbf{Our Method} \\ \hline
Accuracy & 22.95\% & 58.76\% & \textbf{64.38\%} \\
Macro Avg F1 & 22.42\% & 58.72\% & \textbf{64.35\%} \\
Weighted Avg F1 & 22.45\% & 58.75\% & \textbf{64.40\%} \\
\hline
\end{tabular}
\caption{\label{table_subtask2} Subtask-2 evaluation metrics on the test set}
\end{table*}
Although we only participated in the second subtask, we evaluate our model on both subtasks here. We explain our configurations for applying the model to the task, as well as two baselines: BERT-base as an alternative model and Gated Attention (GA) as the task baseline. Finally, a brief discussion is given based on the results.
\subsection{Metrics}
Popular metrics to evaluate these models are F1, EM (Exact Match, or accuracy), and MRR (Mean Reciprocal Rank). Since precision and recall are equal in our task, F1 = Precision = Recall; consequently, F1 and EM are the same. As the use of MRR is optional, the metrics used to evaluate the results are accuracy and F1.
\subsection{Baseline configuration}
\par The baseline model (GA) is trained for 30 epochs, each epoch containing 101 mini-batches. The train batch size is set to 32. Dropout with the rate of 0.5 is also applied to the hidden states, and the learning rate is set to 0.001. The dimensionality of the GloVe embedding is 300, and the hidden size is set to 128. Training and evaluation take about 2 hours on a single v100 GPU.
\subsection{BERT configuration}
We use the same configuration as our method except for the global attention mechanism. In fact, we consider the output vector of each chunk as the final vector to be linearly transformed into a single logit, followed by a softmax layer with the cross-entropy loss; as in our method, the outputs are averaged over different chunks before applying the linear transformation. Note that the maximum sequence length here is bounded to 512 tokens, and the model uses the full $n^2$ attention mechanism. We use the base version of the model and fine-tune it on each subtask.
\subsection{Our method configuration}
\par We used the same model introduced in Section 3 for both subtasks. The model was initialized with the Longformer-base pre-trained weights, then fine-tuned on each of the subtasks. Due to performance considerations, the model's maximum sequence length is set to 4096 tokens, which is sufficient in our case. We also used the RoBERTa-large tokenizer to tokenize the input sequence, as the Longformer model was trained with this configuration. We used a batch size of 32 and a maximum learning rate of 3e-5 with the Adam optimizer ($\beta_2 = 0.98$). We set the validation check interval to 250, which indicates the number of gradient updates between validation loss checks. A weight decay of 0.01 was used to regularize the model and avoid overfitting.
\par Our proposed model is trained for 15 epochs for each task. Fine-tuning the model takes about six hours, and inference takes about nine seconds for each sample on a single V100 GPU.
\subsection{Evaluation on Subtask 1}
Subtask 1 measures the imperceptibility level of abstract language understanding. This subtask includes 3227 training samples, 837 validation samples, and 2025 test samples. The biggest sample has a context length of about 2000 tokens. We achieved an accuracy of 70\% on the validation set, improving on the baseline by about 40 percentage points. Table \ref{table_subtask1} shows the results of this subtask.
\subsection{Evaluation on Subtask 2}
Subtask 2 measures the non-specificity level of abstract meaning in reading comprehension. It includes 3318 training samples, 851 validation samples, and 2017 test samples. The best accuracy on the validation set is 64\%. Table \ref{table_subtask2} shows the results of this subtask.
\subsection{Discussion}
We used two baselines to assess the effect of using a pre-trained model rather than a simple RNN model. Although this task requires a higher level of representation, using pre-trained models is helpful and offers a better chance of modeling such abstract concepts.
The results on subtask 2 are weaker than on subtask 1 for the pre-trained models. This may be a consequence of the limited semantic representation available for abstract words, since subtask 2 includes words at a higher level of abstraction; for example, the word `animal' could be matched to any animal, like `cat' or `dog', but the word `entity' is hard to represent because it could match a very large number of words, and the model faces a limitation in knowledge representation. Another explanation could be the data these models were trained on: since most available training text consists of concrete words, the learned language understanding is more likely to transfer to less abstract words and achieve better results there.
Comparing our Longformer-based method to standard language models like BERT offers a new insight in terms of passage length and the attention mechanism. Popular language models like BERT and RoBERTa use an $n^2$ attention, which requires a large receptive field to represent long passages. This results in a performance limitation that bounds the input sequence to 512 tokens. In contrast, the Longformer's attention mechanism relaxes this limitation: global attention is needed only for a small fraction of the context, while most attention is focused on the local window. Thus the receptive field does not overflow, and the information necessary to represent the language is preserved.
We have analyzed the errors that most affect our model's performance. We think the problem is the contextual representation of the language model, which is not well suited to our input format, i.e., concatenating the context, question, and answers. The main disadvantage of concatenating the candidate answers to each other is the loss of fine-grained contextual representation, since state-of-the-art models rely on position embeddings. Additionally, incorrect candidates introduce noise into each word representation as well as into the placeholder in the question.
\section{Conclusion and Future works}
We have shown how different approaches can be leveraged for machine reading comprehension of abstract meaning. We reformulated the Longformer model to learn abstract meaning as a new level of semantics in machine reading comprehension. This method can be further improved by taking advantage of external knowledge and task-specific model architectures that improve on the current baseline.
\bibliographystyle{acl_natbib}
\section{Background}
\label{sec:back}
\subsection{Graph Mining Problem}
\label{sec:mining_basic}
A graph $G$ is defined as $G=(V,E)$, in which $V$ and $E$ are the sets of graph vertices and edges.
An edge $(u,v)$ indicates that vertex
$u$ and $v$ are directly connected in $G$.
Similar to other systems~\cite{teixeira2015arabesque,chen2020pangolin,mawhirter2019automine,jamshidi2020peregrine}, this paper considers undirected graphs,
while the techniques are also
applicable to directed graphs.
We denote the vertex set containing $v$'s neighbors as $N(v)$.
Each vertex or edge may have a label, and can be represented by a mapping $f_L:V\cup E\rightarrow L$ where $L$ is the label set.
Currently, Kudu\xspace supports vertex labels, but edge label support can be added without fundamental difficulty.
A graph $g=(V_g, E_g)$ is an edge-induced subgraph of $G=(V,E)$ iff. $V_g \subseteq V$ and $E_g \subseteq E$.
Furthermore, if $E_g$ contains all edges in $E$ whose endpoints are both in $V_g$, it is called a vertex-induced subgraph.
In Figure~\ref{fig:pattern_example}, we show examples of vertex- and edge-induced subgraphs of $G$.
Two graphs $G_0=(V_0, E_0)$ and $G_1=(V_1, E_1)$ are isomorphic iff. there exists a bijective function $f:V_0\rightarrow V_1$ such that $(u,v)\in E_0 \iff (f(u), f(v))\in E_1$.
Intuitively, isomorphic graphs contain the same structure.
Given an input graph $G$, a GPM task discovers and processes $G$'s subgraphs that are isomorphic to a user-specified \textit{pattern} $p$, which is a small connected graph reflecting some application-specific knowledge.
For example, the triangle pattern can be used in spam and fraud detection~\cite{becchetti2008efficient}.
The subgraphs matching (isomorphic to) $p$ are called $p$'s {\em embeddings}.
The embeddings that are vertex(edge)-induced
subgraphs are called vertex(edge)-induced embeddings.
In Figure~\ref{fig:pattern_example}, $G$'s subgraphs $e_1$ and $e_2$ are isomorphic
to the pattern graph $p$.
Some GPM applications only focus on vertex-induced embeddings while others
consider all edge-induced ones.
For example, $k$-motif counting only calculates the number of vertex-induced embeddings of each size-$k$ pattern~\cite{teixeira2015arabesque},
while FSM (Frequent Subgraph Mining)~\cite{bringmann2008frequent} determines whether a pattern is frequent according to all its edge-induced embeddings.
The examples in our discussion, unless otherwise specified, always assume edge-induced embeddings, but our system supports both.
\subsection{Subgraph Enumeration}
\label{sec:mining_algo}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sec2.pdf}
\vspace{-5mm}
\caption{Pattern Enumeration Example}
\label{fig:pattern_example}
\vspace{-8mm}
\end{figure}
The key operation of GPM is pattern enumeration, which can be implemented in a pattern-oblivious or pattern-aware manner. The latter is adopted by recent GPM systems~\cite{mawhirter2019automine,jamshidi2020peregrine,shi2020graphpi}, which have achieved significant performance improvements. It generates, by construction, only the embeddings that satisfy the pattern and eliminates the need for expensive isomorphism checks. We also focus on this method.
Figure~\ref{fig:pattern_example} shows the pattern-aware enumeration algorithm for pattern $p$, which consists of a number of nested loops. $N(v)$ denotes the edge list of vertex $v$. The key operation is the intersection of two edge lists to incrementally construct the embeddings that match the pattern.
The loop-based algorithms adopt vertex-based extension---each level of the loop extends the current subgraph with one more vertex. Pattern enumeration algorithms could also be edge-based---the enumerated subgraph is incrementally extended by edges. We focus on vertex-based extension, while our ideas also generally apply to edge-based methods; a sketch of the vertex-based approach follows.
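To make this concrete, the following is a minimal sketch (our illustration, not any system's generated code) of pattern-aware, vertex-based enumeration for the triangle pattern using sorted edge-list intersection with simple symmetry breaking.
\begin{verbatim}
def sorted_intersect(xs, ys):
    # Merge-style intersection of two sorted edge lists.
    i, j, out = 0, 0, []
    while i < len(xs) and j < len(ys):
        if xs[i] == ys[j]:
            out.append(xs[i]); i += 1; j += 1
        elif xs[i] < ys[j]:
            i += 1
        else:
            j += 1
    return out

def triangles(adj):
    # adj: vertex -> sorted list of neighbors N(v).
    for v0 in adj:
        for v1 in adj[v0]:
            if v1 <= v0:          # symmetry breaking: v0 < v1
                continue
            for v2 in sorted_intersect(adj[v0], adj[v1]):
                if v2 > v1:       # symmetry breaking: v1 < v2
                    yield (v0, v1, v2)
\end{verbatim}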
\section{Conclusion}
\label{sec:conc}
This paper proposes Kudu\xspace, a distributed
execution engine with a well-defined abstraction
that can be integrated
with various existing single-machine graph pattern mining (GPM) systems.
Kudu\xspace can transparently enable distributed execution with minor code modifications to the existing systems. The key novelty is the extendable embedding, which can express pattern enumeration algorithms and enables fine-grained task scheduling.
The novel BFS-DFS hybrid exploration
enables efficient scheduling by generating
sufficient concurrent tasks
without incurring
high memory consumption.
The computation and communication of Kudu\xspace are further optimized with several effective techniques.
We implemented two scalable distributed GPM systems by porting Automine and GraphPi on Kudu\xspace.
\red{Kudu\xspace based systems outperform G-thinker, the state-of-the-art distributed GPM system with a partitioned graph, by up to three orders of magnitude, achieve similar or even higher performance compared with the fastest graph-replication based distributed system, and scale to large graphs.}
\section{\red{Evaluation}}
\label{sec:eval}
\subsection{Evaluation Methodology}
\noindent\textbf{System configuration. }
Our experiment environment is an 8-node cluster with a Mellanox InfiniBand FDR (56Gbps) network.
Each node has two 8-core Intel Xeon E5-2630 v3 CPUs and 64GB DDR4 RAM,
and runs CentOS 7.4.
The MPI library is OpenMPI 3.0.1.
All systems evaluated are C++ based.
Except for Peregrine~\cite{jamshidi2020peregrine} and Pangolin~\cite{chen2020pangolin},
all systems are compiled with GCC 4.8.5.
Peregrine and Pangolin are compiled with GCC-9 as required.
The optimization level is O3.
\begin{table}
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|c}
\hline
Graph & Abbr. & Num.Vertices & Num.Edges & Max.Degree \\
\hline
MiCo~\cite{elseidy2014grami} & mc & 96.6K & 1.1M & 1.4K \\
Patents~\cite{leskovec2005graphs} & pt & 3.8M & 16.5M & 0.8K \\
LiveJournal~\cite{backstrom2006group,leskovec2009community} & lj & 4.8M & 42.9M & 20.3K \\
\hline
UK-2005~\cite{boldi2004webgraph} & uk & 39.5M & 0.94B & 1.8M \\
Twitter-2010~\cite{kwak2010twitter} & tw & 41.7M & 1.5B & 3.0M \\
Friendster~\cite{yang2015defining} & fr & 65.6M & 1.8B & 5.2K \\
\hline
Yahoo~\cite{webscopeyahoo} & yh & 1.4B & 6.4B & 7.6M \\
RMAT-500M & rm & 500.0M & 10.0B & 173.5K \\
\hline
\end{tabular}}
\caption{Graph Datasets~\cite{snapnets}}
\vspace{-12mm}
\label{tab:datasets}
\end{table}
\noindent\textbf{Datasets. }
Table~\ref{tab:datasets} shows
the evaluated datasets,
including three small graphs (mc, pt, lj), three medium graphs (uk, tw, fr), and two large graphs (yh, rm). Graph rm is synthesized by the RMAT graph generator~\cite{chakrabarti2004r,khorasani2015scalable} with default parameter settings, and contains 500 million vertices and 10 billion undirected edges. GPM applications take undirected graphs as input; for directed datasets, the edge direction is simply ignored. All datasets are pre-processed to delete self-loops and duplicate edges.
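A minimal sketch (our illustration) of this pre-processing step:
\begin{verbatim}
def preprocess(edges):
    # Drop self-loops, ignore direction, and deduplicate.
    cleaned = set()
    for u, v in edges:
        if u == v:
            continue
        cleaned.add((min(u, v), max(u, v)))
    return sorted(cleaned)
\end{verbatim}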
\noindent\textbf{Evaluated applications. }
We use three categories of GPM applications.
\textit{Triangle Counting (TC)} is a simple task that counts the number of triangle (a size-3 complete pattern graph) embeddings.
\textit{$k$-Motif Counting ($k$-MC)} discovers the embeddings for each size-$k$ pattern. For example, $3$-MC aims to mine two patterns, the triangle pattern and the 3-chain pattern (a simple size-3 pattern with two edges).
\textit{$k$-Clique Counting ($k$-CC)} counts the number of embeddings of the $k$-clique pattern (a complete pattern graph with $k$ vertices).
\subsection{Overall Performance}
\begin{table}[htbp]
\centering
\scalebox{0.9}{
\begin{tabular}{c|c|c|c}
\hline
Graph & k-Automine & k-GraphPi & G-thinker \\
\hline
\#nodes & 8 & 8 & 8 \\
\hline
mc & 40.2ms (52.2x) & 35.3ms (59.5x) & 2.1s \\
pt & 221.2ms (1289.8x) & 225.0ms (1268.0x) & 285.3s \\
lj & 706.8ms (44.7x) & 722.4ms (43.7x) & 31.6s \\
uk & 705.5s & 706.2s & CRASHED \\
tw & 2293.1s & 2300.6s & CRASHED \\
fr & 84.1s & 78.5s & CRASHED \\
\hline
\end{tabular}}
\caption{Comparing with G-thinker (Triangle Counting)}
\vspace{-4mm}
\label{tab:compare_with_gthinker}
\end{table}
\noindent\textbf{Comparing with distributed systems. }
We first compare Kudu\xspace based systems (k-Automine and k-GraphPi) with G-thinker~\cite{yan2020g}, the state-of-the-art distributed GPM system with a partitioned graph.
The results are presented in Table~\ref{tab:compare_with_gthinker}.
Since G-thinker does not contain reference implementations for $k$-MC and $k$-CC, we only evaluate it for triangle counting.
k-Automine and k-GraphPi on average outperform G-thinker by $144.4\times$ and $148.8\times$ (up to $1289.8\times$ and $1268.0\times$), respectively.
Note that the speedup on the Patents graph is very high because Patents is a less-skewed graph. In G-thinker, each graph data request issued to the software cache accesses only an extremely small amount of graph data. Hence, the cache management overhead cannot be effectively amortized by the graph data access time, leading to very poor performance.
In contrast, Kudu\xspace is more efficient
with much less overhead per request, thanks to our low-cost data reuse techniques.
\begin{table}[htbp]
\centering
\scalebox{0.85}{
\begin{tabular}{c|c|c|c|c}
\hline
App. & Graph & k-Automine & k-GraphPi & GraphPi (replicated) \\
\hline
\#nodes & & 8 & 8 & 8 \\
\hline
\multirow{6}{*}{TC} & mc & 40.2ms & 35.3ms & 704.4ms \\
& pt & 221.2ms & 225.0ms & 6.7s \\
& lj & 706.8ms & 722.4ms & 9.8s \\
& uk & 705.5s & 706.2s & 1268.4s \\
& tw & 2293.1s & 2300.6s & 2886.5s \\
& fr & 84.1s & 78.5s & 169.2s \\
\hline
\multirow{6}{*}{3-MC} & mc & 57.6ms & 56.4ms & 1.5s \\
& pt & 363.1ms & 289.2ms & 13.8s \\
& lj & 1.6s & 847.3ms & 20.1s \\
& uk & 1.1h & 689.4s & 1,380.7s \\
& tw & 2.8h & 2309.9s & 3,032.1s \\
& fr & 194.0s & 82.4s & 388.5s \\
\hline
\multirow{6}{*}{4-CC} & mc & 293.8ms & 299.9ms & 844.0ms \\
& pt & 370.2ms & 362.8ms & 6.7s \\
& lj & 4.7s & 4.8s & 12.8s \\
& uk & 5.1h & 5.1h & 8.6h \\
& tw & 6.7h & 6.8h & TIMEOUT \\
& fr & 132.9s & 137.7s & 177.8s \\
\hline
\multirow{4}{*}{5-CC} & mc & 9.8s & 9.9s & 8.2s \\
& pt & 780.7ms & 777.6ms & 6.8s \\
& lj & 169.6s & 169.3s & 174.7s \\
& fr & 204.3s & 210.3s & 260.0s \\
\hline
\end{tabular}}
\caption{Comparing with GraphPi (Timeout Limit: 10h)}
\vspace{-10mm}
\label{tab:compare_with_graph_pi}
\end{table}
Next, we compare Kudu\xspace based systems with GraphPi~\cite{shi2020graphpi}, the fastest distributed GPM system based on a replicated graph (Table~\ref{tab:compare_with_graph_pi}).
Surprisingly, except for 5-CC on MiCo, {\em k-GraphPi consistently delivers better performance than GraphPi even with the remote graph accessing overhead}.
The performance improvement is attributed to two reasons:
1) GraphPi suffers from large startup overhead, such as workload partitioning, and hence is slower on small workloads; 2) by decomposing the subgraph enumeration process into fine-grained embedding extension tasks, Kudu\xspace is able to exploit strictly more parallelism than GraphPi, which only parallelizes the first loop or first few loops of the subgraph enumeration process in a coarse-grained fashion. k-Automine achieves similar performance to k-GraphPi except for 3-MC. For 3-MC, k-Automine is slower than k-GraphPi due to GraphPi's better pattern matching techniques, such as symmetry breaking.
\noindent\textbf{Comparing with single-machine systems. }
To show the efficiency of Kudu\xspace,
we further compare k-Automine's single-node performance with three state-of-the-art single-machine systems and report the results in Table~\ref{tab:compare_with_single_machine_system}.
We see that k-Automine achieves comparable performance with AutomineIH~\cite{mawhirter2019automine}, Peregrine~\cite{jamshidi2020peregrine}, and Pangolin~\cite{chen2020pangolin} for most workloads. It is even faster than AutomineIH for TC/3-MC on the uk and tw graphs because of Kudu\xspace's better-exploited fine-grained parallelism, explained earlier. On the other hand, k-Automine is less efficient on the pt graph---for example, it is slower than AutomineIH by $8.3\times$ for 5-CC.
The inefficiency is due to two reasons. First, since pt is a less-skewed dataset with a small maximum degree, the embedding extension task is usually very lightweight. Hence, the computation is not sufficient to hide the communication cost via circulant scheduling. Second, since the extension task is lightweight, the overhead per extendable embedding (e.g., creation, scheduling) cannot be amortized and hence becomes the bottleneck.
\red{We notice that Pangolin is extremely fast for TC on the uk and tw graphs.
This is due to orientation~\cite{chen2020pangolin}, a powerful optimization specifically targeting triangle counting on skewed graphs, which is not adopted by Automine or Peregrine.}
\begin{table}[htbp]
\centering
\scalebox{0.78}{
\begin{tabular}{c|c|c|c|c|c}
\hline
App. & Graph & k-Automine & AutomineIH & Peregrine & Pangolin \\
\hline
\#nodes & & 1 & 1 & 1 & 1 \\
\hline
\multirow{6}{*}{TC} & mc & 83.5ms & 52.3ms & 68.7ms & 56ms \\
& pt & 1.2s & 330.7ms & 1.1s & 289ms \\
& lj & 4.6s & 2.8s & 3.8s & 2.2s \\
& uk & 1.7h & 2.0h & 1.3h & 26.6s \\
& tw & 4.7h & 8.6h & 5.7h & 747.7s \\
& fr & 497.2s & 378.3s & 305.2s & 384.6s \\
\hline
\multirow{5}{*}{3-MC} & mc & 222.9ms & 160.3ms & 84.7ms & 288ms \\
& pt & 2.3s & 930.9ms & 1.7s & 1.5s \\
& lj & 12.0s & 8.9s & 4.6s & 29.2s \\
& uk & 9.1h & TIMEOUT & 1.3h & TIMEOUT \\
& fr & 1425.8s & 1206.8s & 316.1s & 1.8h \\
\hline
\multirow{4}{*}{4-CC} & mc & 1.7s & 1.2s & 1.8s & 2.8s \\
& pt & 2.0s & 381.0ms & 1.3s & 773ms \\
& lj & 32.3s & 31.3s & 49.6s & 54.7s \\
& fr & 852.6s & 570.3s & 1237.5s & OUTOFMEM \\
\hline
\multirow{4}{*}{5-CC} & mc & 66.1s & 46.8s & 78.0s & 132.0s \\
& pt & 3.4s & 408.4ms & 1.5s & 967ms \\
& lj & 1259.3s & 982.9s & 2076.6s & OUTOFMEM \\
& fr & 1435.1s & 900.2s & 3032.8s & OUTOFMEM \\
\hline
\end{tabular}}
\caption{Comparing with Single-machine Systems}
\vspace{-8mm}
\label{tab:compare_with_single_machine_system}
\end{table}
\noindent\textbf{Performance on large-scale graphs. }
We study Kudu\xspace's scalability to large graphs by running k-GraphPi (8-node) on Yahoo and RMAT-500M (Table~\ref{tab:perf_large_graphs}) for TC, 3-MC and 4-CC.
All workloads are finished in reasonable time.
RMAT-500M has 500M vertices and 10B undirected edges and takes 84GB storage in CSR format,
making it impossible to be handled by single-node in-memory systems or distributed systems without graph partitioning on our cluster (64GB RAM/node).
Out-of-core (OOC) single-node systems like RStream~\cite{wang2018rstream} and Automine~\cite{mawhirter2019automine} leveraging secondary storage may be
\begin{table}[htbp]
\centering
\vspace{-0mm}
\scalebox{1.}{
\begin{tabular}{c|c|c|c|c}
\hline
Graph & Num.Edges & TC & 3-MC & 4-CC \\
\hline
yh & 6.4B & 3.2h & 3.3h & 24.7h \\
rm & 10.0B & 776.9s & 830.1s & 962.2s \\
\hline
\end{tabular}}
\vspace{-0mm}
\caption{Performance on Large-Scale Graphs (k-GraphPi)}
\label{tab:perf_large_graphs}
\end{table}
able to handle such large graphs.
However, RStream is very inefficient due to its BSP-style disk-based embedding exploration and can be orders of magnitude slower than recent
systems~\cite{jamshidi2020peregrine,mawhirter2019automine,chen2020pangolin}.
Automine has only rudimentary disk-support by memory-mapped IO~\cite{mawhirter2019automine} and hence is not suitable for complicated patterns.
\subsection{Analyzing Kudu\xspace Optimizations}
\noindent\textbf{Vertical computation sharing (VCS). }
We run k-GraphPi for 4-CC and 5-CC with/without the optimization,
and report
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\linewidth]{figures/vertical_com_sharing_speedup.pdf}
\vspace{-5mm}
\caption{Speedup by VCS}
\label{fig:speedup_vertical_comp_sharing}
\end{figure}
the
speedups in Figure~\ref{fig:speedup_vertical_comp_sharing}.
The optimization improves
the performance
by $2.10\times$ on average (up to $4.44\times$).
It is not very effective on the pt graph since, as mentioned earlier, embedding extension is lightweight on pt and takes only a small portion of the execution time.
\noindent\textbf{Horizontal data sharing. }
We analyze the effect of horizontal data sharing (HDS) by running k-GraphPi for 4-CC and
\begin{figure}[htbp]
\centering
\includegraphics[width=.7\linewidth]{figures/communication_redundancy_elimination.pdf}
\vspace{-5mm}
\caption{Effect of HDS}
\label{fig:effect_communication_redundancy_elimination}
\end{figure}
5-CC with/without the optimization.
We report the network
traffic
and communication time (normalized with respect to the version without the optimization) on critical path in Figure~\ref{fig:effect_communication_redundancy_elimination}.
The optimization reduces network traffic and critical-path communication time by 70.5\% and 67.8\% on average (up to 99.3\% and 99.5\%), respectively.
The traffic reduction is moderate on the pt graph (20.4\% for 4-CC and 24.3\% for 5-CC) because pt is less skewed, so there are fewer ``hot-spot'' active vertices appearing multiple times within a chunk.
\begin{table}[htbp]
\centering
\scalebox{0.9}{
\begin{tabular}{c|c|c|c|c|c}
\hline
\multirow{2}{*}{App.} & \multirow{2}{*}{G.} & \multicolumn{2}{c|}{Network Traffic} & \multicolumn{2}{c}{Runtime} \\
\cline{3-6}
& & with cache & no cache & with cache & no cache \\
\hline
\multirow{4}{*}{TC} & pt & 962.1MB & 1.0GB & 225.0ms & 228.5ms \\
& lj & 6.8GB & 7.9GB & 722.4ms & 770.8ms \\
& uk & 487.3GB & 57.7TB & 706.2s & 2615.2s \\
& fr & 1.4TB & 1.8TB & 78.5s & 89.6s \\
\hline
\multirow{3}{*}{4-CC} & pt & 1.2GB & 1.6GB & 362.8ms & 412.7ms \\
& lj & 15.7GB & 25.2GB & 4.8s & 4.9s \\
& fr & 2.3TB & 3.2TB & 137.7s & 185.3s \\
\hline
\multirow{3}{*}{5-CC} & pt & 1.3GB & 1.8GB & 777.6ms & 795.1ms \\
& lj & 33.8GB & 86.6GB & 169.3s & 169.5s \\
& fr & 2.7TB & 3.7TB & 210.3s & 250.1s \\
\hline
\end{tabular}}
\caption{Analyzing the Static Data Cache (k-GraphPi)}
\vspace{-5mm}
\label{tab:static_data_cache}
\end{table}
\noindent\textbf{Static data cache. }
The effect of static data cache is reported in Table~\ref{tab:static_data_cache}.
The optimization significantly reduces network traffic
and hence improves end-to-end performance.
The optimization is extremely useful for highly-skewed graphs like uk.
{\em For TC on uk, it reduces the traffic from 57.7TB to 487.3GB by more than 99\% (even with other optimizations like horizontal data sharing), and improves performance by $3.7\times$}.
Reduction in network traffic does
not necessarily translate to performance benefit (e.g., 4-CC on lj)
since communication cost is already completely hidden by computation.
\begin{table}[htbp]
\centering
\scalebox{1.}{
\begin{tabular}{c|c|c|c}
\hline
\multirow{2}{*}{App.} & \multirow{2}{*}{Graph} & With NUMA & No NUMA \\
& & support & support \\
\hline
\multirow{3}{*}{4-CC} & pt & 2.1s (1.53x) & 3.2s\\
& lj & 33.0s (1.20x) & 39.5s\\
& fr & 870.4s (1.15x) & 998.5s\\
\hline
\multirow{3}{*}{5-CC} & pt & 4.0s (1.47x) & 5.9s\\
& lj & 1243.2s (1.02x) & 1269.5s\\
& fr & 1487.6s (1.30x) & 1930.7s\\
\hline
\end{tabular}}
\caption{NUMA-aware Support}
\label{tab:effect_numa_support}
\end{table}
\noindent\textbf{NUMA-aware support. }
We analyze our NUMA-aware support by running k-GraphPi on a single node and present the results in Table~\ref{tab:effect_numa_support}.
Kudu\xspace's NUMA awareness leads to on average $1.26\times$ (up to $1.53\times$) performance gain.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/scalability.pdf}
\vspace{-3mm}
\caption{Kudu\xspace's Inter-node Scalability (graph: lj)}
\vspace{-3mm}
\label{fig:inter_node_scalability}
\end{figure}
\subsection{Communication Overhead Analysis}
We present k-GraphPi's communication overhead (the ratio between communication time on the critical path over the total runtime) for all evaluated workloads in Figure~\ref{fig:communication_overhead}.
Except for the pt graph, the communication overhead takes at most roughly 20\% of the execution time and hence is not the performance bottleneck.
The communication overhead
\begin{figure}[htbp]
\centering
\includegraphics[width=.8\linewidth]{figures/comm_overhead.pdf}
\vspace{-3mm}
\caption{Comm. Overhead}
\label{fig:communication_overhead}
\end{figure}
on pt is roughly 40-50\%, since the lightweight techniques in Kudu\xspace are not sufficient to hide all the communication cost. This indicates that reducing the communication cost for less-skewed graphs like pt remains a promising problem.
Communication cost on highly-skewed graphs like uk and tw is negligible given that our static data cache works extremely well.
\subsection{Scalability}
\noindent\textbf{Inter-node scalability. }
We report the inter-node scalability of k-GraphPi and GraphPi in Figure~\ref{fig:inter_node_scalability} by varying the number of nodes.
k-GraphPi achieves similar or even better scalability compared with GraphPi,
and scales almost perfectly.
Leveraging 8 nodes is on average $6.77\times$ (up to $7.35\times$) faster than one node.
By contrast, GraphPi's speedup is only on average $4.04\times$.
\noindent\textbf{Intra-node scalability and the COST metric. }
In Figure~\ref{fig:intra_node_scalability},
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/intra_node_scalability.pdf}
\vspace{-3mm}
\caption{Intra-node Scalability}
\label{fig:intra_node_scalability}
\end{figure}
we analyze Kudu\xspace's multi-threading scalability and efficiency by running k-Automine on a single node using different numbers of computation threads (1, 2, 4, 8, 12) on the lj graph.
We do not utilize all 16 cores since four of them are reserved for communication threads.
By utilizing 12 threads, k-Automine achieves $10.7\times$, $11.6\times$ and $11.4\times$ speedups for TC, 3-MC and 4-CC on the lj graph, respectively.
We also report the COST metric---the number of threads with which a distributed system can outperform an efficient reference single-thread implementation~\cite{mcsherry2015scalability}.
We use the fastest single-thread runtime among Automine, Peregrine and Pangolin as the reference single-thread runtime (the dotted line in Figure~\ref{fig:intra_node_scalability}).
The COST metrics for TC, 3-MC and 4-CC are 4, 4, 2, respectively.
For 4-CC, k-Automine's single-thread runtime is only 1.3\% slower than the reference runtime.
\section{Think Like an Extendable Embedding}
\label{sec:extend}
Different from all prior works, we propose a new principle of {\em ``Think Like an Extendable Embedding''}, which naturally generates fine-grained tasks with efficient, low-overhead computation/communication scheduling. This section explains the key abstraction---the extendable embedding.
\subsection{Extendable Embedding}
\label{sec:extend_def}
In pattern-aware subgraph enumeration, the embedding $e(p)$ matching a given pattern $p$ is constructed by a sequence of subgraph extensions according to the pattern and the algorithm, which determines the order of the extensions. To emphasize that these subgraphs are enumerated in the process of constructing $e(p)$, we also call the subgraphs enumerated in the extension sequence embeddings; they match part of $p$.
Formally, each embedding $e(p)$ can be constructed by a sequence $e_0 \rightarrow e_1 \rightarrow ... \rightarrow e(p)$, where $e_0$ is a single vertex. Each step is called an {\em embedding extension}. With vertex-based extension, $e_{i+1}$ contains exactly one more vertex than $e_i$; the vertex added to $e_i$ is at level $i$. When $e_i$ is extended to $e_{i+1}$, $e_i$ and $e_{i+1}$ are considered the parent ($e_{parent}$) and child ($e_{child}$) embeddings.
Based on the vertex-set-based algorithm discussed in Section~\ref{sec:mining_algo}, extending $e_{parent}$ to $e_{child}$ requires the edge lists of several vertices in $e_{parent}$. We call these vertices {\em active vertices} for the embedding extension $e_{parent} \rightarrow e_{child}$. The edge lists of active vertices are called active edge lists. An {\em extendable embedding} is defined as a small portion of graph data containing $e_i$ along with its active edge lists.
\red{Without confusion, we can also
refer to an extendable embedding by $e_i$.}
With the data of an extendable embedding available, $e_i$ can be
extended to $e_{i+1}$.
The active vertices and edge lists
are determined by the pattern graph and
the pattern enumeration algorithm.
Based on our definition, the ``activeness'' follows the {\em anti-monotonicity property}: if a vertex in $e_i$ is inactive, then it must also be inactive in any $e_j$ with $j>i$. This property enables succinct storage of extendable embeddings.
Figure~\ref{fig:extendable_embedding_example} shows two examples of extendable embeddings in the process of constructing the embeddings that match the pattern in Figure~\ref{fig:pattern_example}. The first extendable embedding contains (0,2); to extend it with an additional vertex, we need to compute the intersection of $N(0)$ and $N(2)$, so both vertices 0 and 2 are active and $N(0)$ and $N(2)$ are active edge lists. Based on the graph partition in Figure~\ref{fig:partition_gpm}, $N(2)$ is remote, so it needs to be fetched through communication with machine 2. The second extendable embedding contains (0,2,3); to get the last matching vertex, we need to perform another intersection between $N(0)$ and $N(2)$. Due to the prior communication and data sharing (discussed soon), both active edge lists are locally available.
The notion of extendable embedding
is the key abstraction to
generate fine-grained tasks
that can be scheduled efficiently
leveraging the unique properties
of pattern-aware enumeration algorithms. Specifically, with graph partitioned
among distributed machines,
when the data of an extendable embedding
$e_i$ are all locally available,
the machine can perform the computation
to extend $e_i$ to $e_{i+1}$.
As a result, the extendable embedding
breaks the embedding extension sequence
that generates each $e(p)$ into fine-grained
tasks with well-defined {\em dependent} data, which may reside on either the local
or a remote machine; once these data are
fetched to the local machine,
the computation of the task can
be performed.
\begin{algorithm}
\caption{Triangle Mining with Extendable Embedding}
\label{alg:tlee_triangle_counting}
\footnotesize
\begin{algorithmic}[1]
\Function{extend}{$e'$}
\State // input graph: $G=(V,E)$
\State $E'\gets \{\}$
\If{$e'$ contains one vertex}
\State obtain $v_0$ and $N(v_0)$ from $e'$
\For{$v_1\in N(v_0)$}
\State $e\gets$ \textbf{create\_extendable\_embedding}($e'$, $v_1$, 0)
\State mark $v_0$ and $v_1$ as active vertices
\State $E'\gets E'\cup \{e\}$
\EndFor
\Else
\State obtain $v_0$, $v_1$, $N(v_0)$ and $N(v_1)$ from $e'$
\For{$v_2\in N(v_0)\cap N(v_1)$}
\State construct a triangle embedding $e$ containing $v_0$, $v_1$, $v_2$
\State invoke a user-defined function to process $e$
\EndFor
\EndIf
\State \Return{$E'$}
\EndFunction
\end{algorithmic}
\end{algorithm}
The interface exposed by Kudu\xspace
to the GPM systems running above it is
the \texttt{EXTEND} function.
For a given pattern, a GPM system
can specify the algorithm, using
extendable embeddings, in a manner
similar to Figure~\ref{fig:pattern_example}.
Algorithm~\ref{alg:tlee_triangle_counting}
shows the implementation of
a simple triangle mining algorithm~\cite{mawhirter2019automine}
in \texttt{EXTEND} function with extendable
embedding.
In the \texttt{EXTEND} function, we construct the triangle embeddings $(v_0,v_1,v_2)$ incrementally---the first triangle vertex $v_0$ is enumerated from $V$, the second vertex $v_1$ is the neighbor of $v_0$, and the third one $v_2$ is a common neighbor of $v_0$ and $v_1$.
A \texttt{create\_extendable\_embedding()} function is used to create an extendable embedding by adding one vertex to an existing embedding (line 7). The first two parameters of the function are the existing embedding and the new vertex.
The third parameter represents the size of memory needed to store $e$'s reusable intermediate result (discussed in Section~\ref{sec:compute_share}).
When the embedding is constructed (line 13),
the user-defined function is invoked (line 14)
to perform analysis based on the identified
embedding.
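
For concreteness, here is a hedged C++ rendering of Algorithm~\ref{alg:tlee_triangle_counting}, reusing the illustrative types sketched earlier in this section; \texttt{CreateExtendableEmbedding} and \texttt{OnTriangle} are hypothetical stand-ins for Kudu\xspace's internal helper and the user-defined function.
\begin{verbatim}
#include <algorithm>
#include <iterator>
#include <vector>

// Hypothetical helpers standing in for Kudu's internals.
ExtendableEmbedding* CreateExtendableEmbedding(ExtendableEmbedding* parent,
                                               VertexId v, std::size_t reuse);
void OnTriangle(VertexId v0, VertexId v1, VertexId v2);  // user function

std::vector<ExtendableEmbedding*> Extend(ExtendableEmbedding* e) {
  std::vector<ExtendableEmbedding*> out;
  if (e->vertices.size() == 1) {        // single vertex: add each neighbor
    const EdgeList& n0 = e->active_lists[0];
    for (std::size_t i = 0; i < n0.size; ++i)
      out.push_back(CreateExtendableEmbedding(e, n0.neighbors[i], 0));
  } else {                              // edge (v0,v1): common neighbors
    const EdgeList& n0 = e->active_lists[0];
    const EdgeList& n1 = e->active_lists[1];
    std::vector<VertexId> common;
    std::set_intersection(n0.neighbors, n0.neighbors + n0.size,
                          n1.neighbors, n1.neighbors + n1.size,
                          std::back_inserter(common));
    for (VertexId v2 : common)
      OnTriangle(e->vertices[0], e->vertices[1], v2);
  }
  return out;  // empty once the pattern size is reached
}
\end{verbatim}
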
Figure~\ref{fig:extendable_embedding_example} illustrates the relation between
the \texttt{EXTEND} function and the pattern
enumeration algorithm.
Essentially, the \texttt{EXTEND} function
breaks down the pattern enumeration
algorithm into
intersection operations that can be executed
in a {\em fine-grained} manner with {\em flexible} order through
a general execution model (Section~\ref{sec:extend_exec}) and
novel scheduling (Section~\ref{sec:hybird}).
With the \texttt{EXTEND} function specified,
the Kudu\xspace engine can
transparently orchestrate distributed
execution.
\subsection{Hierarchical Data Representation}
\label{sec:hierarchy}
In Kudu\xspace, the number of on-the-fly extendable embeddings,
each of which is waiting for either
data or execution, is performance-critical.
With a sufficient number of on-the-fly
extendable
\setlength{\intextsep}{2pt}%
\setlength{\columnsep}{8pt}%
\begin{wrapfigure}[8]{r}{0.30\linewidth}
\centering
\vspace{-2mm}
\includegraphics[width=\linewidth]{figures/vertical_graph_data_sharing.pdf}
\vspace{-8mm}
\caption{Active Edge List Sharing}
\label{fig:vertical_graph_data_sharing}
\end{wrapfigure}
embeddings, the system can
batch the requests for graph data and amortize the cost of network latency.
Moreover, plenty of concurrent communications
reduce computation stalls,
leading to better CPU/network utilization.
On the other hand, storing
a large number of on-the-fly extendable embeddings results in high memory
consumption, because
not only the embeddings themselves but
also their active edge lists are stored.
To reduce memory consumption, we propose a succinct hierarchical representation of extendable embeddings.
The key observation is that different extendable embeddings may share common
active edge lists.
Specifically, if an extendable embedding $e_{child}$ is extended from $e_{parent}$,
most active edge list data
of $e_{child}$ are also included in $e_{parent}$.
Figure~\ref{fig:vertical_graph_data_sharing}
shows an example for 5-clique mining.
Here, $e_{child}$ is obtained by extending $e_{parent}$ with a vertex $3$,
\red{which is one of the vertices in the
intersection of $N(0)$, $N(1)$ and $N(2)$.
To extend $e_{child}$, $N(3)$ is needed.}
Both $e_{child}$ and $e_{parent}$
include the active edge lists
$N(0)$, $N(1)$ and $N(2)$,
and $e_{child}$ only needs to
additionally store $N(3)$, referring to $N(0) \sim N(2)$ through $e_{parent}$.
Since there is only one active vertex
of $e_{child}$ that is not included in $e_{parent}$,
$e_{child}$ only needs to store the
active edge list of one vertex.
This data representation reduces
communication as well: for an extendable
embedding $e_{child}$,
the task only needs to fetch the edge
list of the new vertex.
Starting from each vertex, all extended embeddings form a tree.
With hierarchical representation,
extendable embeddings at different
levels of a given tree are dependent,
as shown in
Figure~\ref{fig:hierarchical_representation}.
The extendable embeddings in
level-$i$ have the same number of
vertices, and point
to the embeddings in level-$(i-1)$.
Hierarchical representation essentially
enables the ``vertical'' data sharing
among an embedding
and embeddings in its subtree.
Besides a parent pointer, each extendable embedding maintains the embedding itself, the edge list of the newly extended vertex, and certain intermediate results that can be reused by its descendants (Section~\ref{sec:compute_share}).
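
One possible in-memory layout for this hierarchical representation is sketched below, reusing the illustrative types from the earlier sketch; it is an assumption for exposition rather than Kudu\xspace's exact structure. Each embedding stores only the edge list of its newly added vertex and reaches all other active edge lists by chasing parent pointers.
\begin{verbatim}
struct HierEmbedding {
  HierEmbedding* parent;         // level-(i-1) embedding; nullptr at root
  VertexId       new_vertex;     // the single vertex added at this level
  EdgeList       new_list;       // N(new_vertex): the only list stored here
  int            live_children;  // enables bottom-up release
};

// The edge list contributed k levels above e, found by chasing parents.
const EdgeList& AncestorList(const HierEmbedding* e, int k) {
  while (k-- > 0) e = e->parent;
  return e->new_list;
}
\end{verbatim}
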
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/hierarchical_representation.pdf}
\vspace{-3mm}
\caption{Hierarchical Data Representation}
\label{fig:hierarchical_representation}
\vspace{-4mm}
\end{figure}
\subsection{Execution Model}
\label{sec:extend_exec}
\setlength{\intextsep}{2pt}%
\setlength{\columnsep}{8pt}%
\begin{wrapfigure}[5]{r}{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{figures/extendable_embedding_life_cycle.pdf}
\vspace{-6mm}
\caption{An Extendable Embedding's Life-cycle}
\vspace{-2mm}
\label{fig:extendable_embedding_life}
\vspace{-3mm}
\end{wrapfigure}
As shown in Figure~\ref{fig:extendable_embedding_life}, an extendable embedding has four states.
The ``pending'' state indicates that it has been created, but the active edge lists
are not ready.
In this state, the computation is waiting for
data.
The ``ready'' state means that
the active edge lists have been fetched from either remote or local memory, and thus
$e_i$ in the extendable embedding is
ready to be extended to $e_{i+1}$.
The embedding extension is scheduled when
the computation resource is available.
After the extension is performed, the state
changes to ``zombie'', which indicates that
the extendable embedding's computation is
finished, but its memory resource cannot
be released yet.
This is due to the hierarchical
representation---some data of the
completed extendable embedding
may still be shared with its children.
When all children of an extendable
embedding $e$ are completed,
the state is changed to ``terminated'', at which
point the system can release the memory
allocated to $e$.
Thus, the extendable embeddings are
deallocated in a ``bottom-up'' fashion.
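
The life-cycle and the bottom-up release could be realized as in the following simplified, single-threaded C++ sketch; Kudu\xspace's actual, thread-safe bookkeeping (Section~\ref{sec:impl}) is more involved.
\begin{verbatim}
enum class State { Pending, Ready, Zombie, Terminated };

struct Node {                  // one extendable embedding in the tree
  State state = State::Pending;
  Node* parent = nullptr;
  int   live_children = 0;     // children not yet terminated
};

// Invoked when a child of n terminates. A zombie whose descendants are
// all done becomes terminated, is freed, and then notifies its parent,
// so deallocation propagates bottom-up.
void OnChildTerminated(Node* n) {
  if (--n->live_children == 0 && n->state == State::Zombie) {
    n->state = State::Terminated;
    Node* p = n->parent;
    delete n;
    if (p != nullptr) OnChildTerminated(p);
  }
}
\end{verbatim}
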
Algorithm~\ref{alg:abstract_execution_model}
shows the execution model of Kudu\xspace.
It specifies the operations performed
by the computation and communication thread.
The details to ensure thread-safety
when multiple threads are used for
computation and
communication are discussed
in Section~\ref{sec:impl}.
The sets
$E_{ready}$, $E_{pending}$, and
$E_{zombie}$ contain
the extendable embeddings in
``ready'', ``pending'', and
``zombie'' state, respectively.
At the beginning, $E_{ready}$ is initialized
to contain all single-vertex embeddings,
because the embedding enumeration of
any pattern needs to start from them.
The other two sets are initialized as empty sets.
The computation thread (lines 4-10) keeps popping ``ready'' extendable embeddings from $E_{ready}$ (line 7) and extending them to larger ones (line 8).
The \texttt{EXTEND} function,
which depends on the concrete pattern matching algorithm, returns a set of extended embeddings
$E'$ if the extension has not yet reached an
embedding of the full pattern.
Otherwise, the \texttt{EXTEND} function
returns an empty set and calls the user-defined
analysis function (lines 11-14 in Algorithm~\ref{alg:tlee_triangle_counting}).
After returning from \texttt{EXTEND} function,
newly generated extended embeddings in $E'$
are added to $E_{pending}$ (line 9),
which will transparently
trigger potential communication inside
the execution model.
For the current embedding $e'$, since
the extension has completed, it is
added to $E_{zombie}$.
We omit the details of
keeping track of the children and
state change from ``zombie'' to ``terminated''.
The communication thread runs continuously
until the end of the application and
executes whenever $E_{pending}$ is not
empty.
After popping one pending extendable
embedding from $E_{pending}$,
it fetches the data, either locally or
from a remote machine.
For local data, the communication thread
just records the local data pointer.
The remote data fetch is blocking, but
multiple requests can be batched together to
amortize the latency.
After the communication is finished,
the embedding popped from $E_{pending}$
is added to $E_{ready}$ and is ready
to be extended.
\begin{algorithm}
\caption{Kudu\xspace Abstract Execution Model}
\label{alg:abstract_execution_model}
\footnotesize
\begin{algorithmic}[1]
\State $E_{ready}\gets \{$all single-vertex embeddings$\}$
\State $E_{pending}\gets \{\}$
\State $E_{zombie}\gets \{\}$
\State // Computation thread
\While{$|E_{pending}|>0$ or $|E_{ready}|>0$}
\State // wait until $E_{ready}$ is not empty
\State pop one "ready" embedding $e'$ from $E_{ready}$
\State $E'\gets $ EXTEND($e'$)
\State add all elements in $E'$ to $E_{pending}$
\State add $e'$ to $E_{zombie}$
\EndWhile
\State // Communication thread
\While{not application terminated}
\State // wait until $E_{pending}$ is not empty
\State pop one "pending" embedding $e'$ from $E_{pending}$
\State fetch the working-set graph data of $e'$
\State add $e'$ to $E_{ready}$
\EndWhile
\end{algorithmic}
\end{algorithm}
We emphasize that the execution model is
implemented internally in Kudu\xspace to coordinate
efficient distributed execution.
The extendable embedding abstraction, as
the interface to the client
GPM systems above, does not expose any
details of communication and scheduling.
Instead, when a new extendable embedding
is generated in the \texttt{EXTEND} function,
communication is triggered transparently---the
embedding is first inserted into $E_{pending}$ in line 9,
and its data are then fetched in line 15.
With the clean separation of
algorithm and execution model,
the code generator or developers
can focus on specifying
the high-level pattern enumeration
algorithm in \texttt{EXTEND} function
without considering low-level distributed system details.
\subsection{Example and Discussion}
\label{sec:extend_example}
On the right of Figure~\ref{fig:extendable_embedding_example}, we show a complete running
example of pattern enumeration
from vertex 0 based on the
input and pattern graph in
Figure~\ref{fig:partition_gpm}.
The dashed arrow also shows the
vertical data sharing.
Prior GPM systems have used
``Think Like an Embedding''~\cite{teixeira2015arabesque} and
``Think Like a Subgraph''~\cite{chen2018g,yan2020g}, but
as illustrated in Section~\ref{sec:problem},
none of them is suitable for distributed
GPM with a partitioned graph.
In comparison, the ``Think Like a Vertex''~\cite{malewicz2010pregel,low2014graphlab,gonzalez2012powergraph} model and the corresponding API---the {\em vertex program}---for
traditional graph computation
are effective both in expressing graph
algorithms and in abstracting the details
of distributed execution.
The key reason is that the graph algorithms
can be expressed as {\em local} operations
based on a vertex and its neighbors.
It not only simplifies programming but
also provides a natural way to generate
fine-grained tasks and schedule
communications.
We claim that our ``Think Like an Extendable
Embedding'' model for GPM,
which can also {\em express the computation
and communication in a ``localized'' fashion},
serves
as the exact counterpart of ``Think Like a
Vertex'' for graph computation.
Extendable embedding is essentially a primitive
to break down the pattern-aware enumeration
algorithm (nested loops) into small tasks
for distributed execution.
\section{Hybrid Embedding Exploration}
\label{sec:hybird}
Algorithm~\ref{alg:abstract_execution_model}
shows that the task execution and communication
{\em can} be scheduled with the execution model,
but it does not specify {\em how} to schedule.
Specifically, when it pops
a pending extendable embedding (line 14), it does not
specify which one---this policy
exactly decides the scheduling order of tasks.
This is the focus of this section.
\subsection{Motivation}
\label{sec:hy_motiv}
\begin{figure}[!ht]
\vspace{-4mm}
\subfloat{%
\includegraphics[width=0.47\linewidth]{figures/new_dfs.pdf}
}
\hfill
\subfloat{%
\includegraphics[width=0.47\linewidth]{figures/bfs.pdf}
}
\vspace{-3mm}
\caption{DFS (left) and BFS (right) Exploration}
\label{fig:dfs_bfs_exploration}
\vspace{-2mm}
\end{figure}
In Kudu\xspace,
pattern enumeration can be considered as
traversing multiple dynamically
growing extendable
embedding trees---each corresponding to the
enumeration process
from one vertex---until all nodes are visited.
By visiting a tree node (extendable embedding),
a task performs the extension and adds
more nodes into the tree.
Since the tree is dynamically changing,
memory associated with each node
needs to be managed.
A complication
is introduced by the hierarchical representation:
an extendable embedding cannot be released until all of its descendants are terminated.
In the following, we explain that neither
depth-first search (DFS) nor
breadth-first search (BFS)
exploration is ideal in terms of memory management and
efficient communication.
With DFS exploration shown in
Figure~\ref{fig:dfs_bfs_exploration} (left),
the extendable
embeddings of the same tree
can be maintained
by a stack with small memory consumption.
The FILO order naturally satisfies the
bottom-up memory release order.
However, with DFS,
each tree only has one on-the-fly
extendable embedding at any time---significantly
limiting the capability of generating batched
communication.
To generate enough remote data requests,
a large number of threads would have to explore
multiple trees concurrently, which incurs
substantial overhead.
\red{The \texttt{EXTEND} function can be
considered as being
``plugged into'' the vertex-set-based algorithm;
directly executing the nested loops then
corresponds
to a DFS order of embedding extension.}
Thanks to the extendable embedding abstraction
and the general execution model,
task scheduling can be controlled
flexibly, {\em decoupled} from the algorithm.
With BFS exploration, shown in
Figure~\ref{fig:dfs_bfs_exploration} (right),
a sufficient number of
on-the-fly extendable embeddings can be generated, but the policy leads to inefficient
memory management.
The essential reason is that
extendable embeddings are not released in the order in which they are allocated.
Figure~\ref{fig:dfs_bfs_exploration} shows
that the objects
vary in size, leading to
the fragmentation problem.
In general, BFS tends to generate
a very large number of embeddings---far more than necessary for communication batching and overlapping---and results in enormous memory consumption.
\subsection{BFS-DFS Hybrid Exploration}
\label{sec:hy_disc}
To enable efficient memory management and
communication, we propose an elegant
{\em BFS-DFS hybrid exploration} to achieve
the best of both worlds.
Since DFS can perfectly match the memory
allocation/deallocation order required
by hierarchical representation, we
{\em apply DFS at a chunk granularity}.
A chunk is defined as {\em a configurable number of
extendable embeddings of the same level}.
Instead of exploring individual extendable
embedding with DFS, we propose to explore
chunks with DFS.
The hybrid exploration keeps the
advantage of DFS in terms of memory
management. The memory of the chunk
can be allocated and deallocated together
to avoid fragmentation.
Moreover, we can control the memory
consumption with the chunk size.
The data in the fixed memory for a chunk
can be {\em horizontally shared}
among its extendable embeddings.
It avoids the expensive reference counter
based software cache and garbage collection
without losing much data reuse opportunity.
The lightweight mechanism to enable the
sharing is discussed in Section~\ref{sec:data_share}.
The restrictive scheduling ensures that,
at any time, only one chunk is being processed,
so we do not need to maintain the general
data sharing for arbitrary scheduling.
On the other hand, it eliminates the
drawback of DFS, since a chunk yields
many on-the-fly extendable embeddings,
which enables batched communication.
Specifically,
before the execution, we pre-allocate a
fixed amount of memory (e.g., 1GB) for the chunk of extendable embeddings at each level.
During the execution, we mark one of the levels as ``current'' and only extend this level's extendable embeddings.
The generated new extendable embeddings---with
one more vertex---are pushed to the next level (similar to BFS) until the pre-allocated
memory is full.
The procedure is shown in Figure~\ref{fig:hybrid_exploration}.
At the beginning, the current level is $i$, so we only extend level-$i$'s embeddings until
the memory for level-$(i+1)$ is full.
At this point, we stop the execution at level-$i$ and change level-$(i+1)$ as the current level.
Thus, the current level keeps going deeper in
a DFS manner
until it reaches the deepest level that does not generate any new embeddings, then it backtracks
to the previous level.
Once the current level changes from level-$(i+1)$
to level-$i$, all level-$(i+1)$'s extendable embeddings can be released because all their descendants have been processed, and the
execution of level-$i$ can resume.
The DFS at chunk granularity
repeats until all embeddings are enumerated.
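
The chunk-granularity DFS can be phrased recursively. The sketch below is a simplification that assumes one pre-allocated buffer per level; \texttt{ExtendUntilFullOrDrained} is a hypothetical stand-in for running \texttt{EXTEND} over the current chunk.
\begin{verbatim}
#include <vector>

struct ExtendableEmbedding;  // opaque here (see the earlier sketch)

struct Chunk {
  std::vector<ExtendableEmbedding*> items;  // fixed pre-allocated budget
};

// Runs EXTEND on chunks[level], moving children into chunks[level+1],
// until that next-level buffer is full or this level is drained.
void ExtendUntilFullOrDrained(std::vector<Chunk>& chunks, int level);

void ProcessLevel(std::vector<Chunk>& chunks, int level) {
  while (!chunks[level].items.empty()) {
    ExtendUntilFullOrDrained(chunks, level);
    if (level + 1 < (int)chunks.size() &&
        !chunks[level + 1].items.empty())
      ProcessLevel(chunks, level + 1);  // go deeper: DFS over chunks
  }
  chunks[level].items.clear();          // release the whole chunk at once
}
\end{verbatim}
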
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/full_flowgraph.pdf}
\vspace{-7mm}
\caption{BFS-DFS Hybrid Exploration}
\label{fig:hybrid_exploration}
\vspace{-7mm}
\end{figure}
\subsection{Circulant Scheduling}
\label{sec:circ_schedule}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{figures/communication_computation_overlapping.pdf}
\vspace{-5mm}
\caption{Overlapping Communication with Computation}
\label{fig:comm_comp_overlapping}
\end{figure}
For a chunk,
communication-computation
overlapping can be increased by circulant scheduling.
As shown in Figure~\ref{fig:comm_comp_overlapping}, on machine $K$, once a level becomes full, before
performing the extensions, the system shuffles all extendable embeddings at this level into $N$ batches according to the
source machine ID in a circulant manner, where $N$ is the number of machines in the cluster.
These batches contain the extendable embeddings whose active edge lists reside on machine $K$, $(K+1)\%N$, $(K+2)\%N$, $\ldots$, $0$, $\ldots$, $(K+N-1)\%N$, respectively.
The key idea is to divide the execution of a
chunk into multiple steps, and
pipeline the execution and
communication of the batches, so that
in each step the computation of batch-$i$
is overlapped with the data fetch for batch-$(i+1)$.
The communication for a batch is
between the local machine and one particular
remote machine.
At the beginning, we fetch the
active edge lists needed by the first batch,
which is very fast since all of them reside on the local node $K$.
Then we start extending the embeddings in the first batch, and at the same time, fetching the graph data for the next batch, residing on node $(K+1)\%N$.
Afterwards, we extend the second batch's embeddings, and fetch the third batch's graph data concurrently.
It is worth noting that we do not adopt
strict pipelining---the computation does not stall the communication.
For example, once the data required by batch-$i$ has been fetched, the system immediately starts the communication of batch-$(i+1)$
without waiting for the completion of
batch-$(i-1)$'s computation.
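
A minimal sketch of this pipeline on machine $K$ follows; it uses \texttt{std::async} purely for exposition, and \texttt{FetchEdgeLists} and \texttt{ExtendBatch} are hypothetical hooks into the communication and computation subsystems rather than Kudu\xspace's actual interfaces.
\begin{verbatim}
#include <future>
#include <utility>
#include <vector>

struct ExtendableEmbedding;  // opaque here (see the earlier sketch)

void FetchEdgeLists(std::vector<ExtendableEmbedding*>& batch, int machine);
void ExtendBatch(std::vector<ExtendableEmbedding*>& batch);

// b[i] holds the embeddings whose active edge lists live on machine
// (K + i) % N, so b[0] needs only local data.
void RunChunkCirculant(std::vector<std::vector<ExtendableEmbedding*>>& b,
                       int K, int N) {
  auto fetch = std::async(std::launch::async,
                          FetchEdgeLists, std::ref(b[0]), K);
  for (int i = 0; i < N; ++i) {
    fetch.wait();                            // data for batch i arrived
    std::future<void> next;
    if (i + 1 < N)                           // fetch batch i+1 while
      next = std::async(std::launch::async,  // batch i is computing
                        FetchEdgeLists, std::ref(b[i + 1]),
                        (K + i + 1) % N);
    ExtendBatch(b[i]);
    fetch = std::move(next);
  }
}
\end{verbatim}
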
\section{\red{Implementation}}
\label{sec:impl}
Kudu\xspace engine is implemented in C++ and has approximately 4000 lines of code.
We implemented two scalable distributed GPM systems based on
partitioned graphs, k-Automine and k-GraphPi, by porting two state-of-the-art single-machine
systems,
Automine~\cite{mawhirter2019automine}
and GraphPi~\cite{shi2020graphpi}, onto Kudu\xspace.
GraphPi also supports distributed
execution with a replicated graph,
which we compare with k-GraphPi.
The porting effort is roughly 500 lines of code per system,
significantly less than building a new system from scratch.
Since Automine is not open-sourced,
k-Automine is modified from our own Automine implementation---referred to as AutomineIH (in house)---that achieves comparable performance with the published results.
\noindent\textbf{Multi-threading support. }
We leverage multiple computation threads to extend the embeddings in a chunk.
The workload is distributed dynamically.
Once a batch of extendable embeddings becomes ready (Section~\ref{sec:circ_schedule}),
it is divided into multiple mini-batches that serve as the basic workload distribution units (64 embeddings per mini-batch).
Mini-batches are added to a lock-free workload queue and distributed to computation threads on demand.
Embedding extension generates new extendable embeddings that are inserted to the next-level chunk.
We protect the insertion by a mutex.
To avoid lock contention,
each computation thread has a small local buffer (half the size of the L1-D cache) containing the generated embeddings.
Once the buffer is full, the embeddings are flushed to the next-level chunk.
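
The lock-amortizing insertion path might look like the following sketch; the buffer capacity and names are illustrative.
\begin{verbatim}
#include <cstddef>
#include <mutex>
#include <vector>

struct ExtendableEmbedding;  // opaque here (see the earlier sketch)

struct NextLevelChunk {
  std::mutex mu;                            // protects insertion
  std::vector<ExtendableEmbedding*> items;
};

struct LocalBuffer {
  static constexpr std::size_t kCap = 2048; // ~half of L1-D; illustrative
  std::vector<ExtendableEmbedding*> buf;

  void Insert(ExtendableEmbedding* e, NextLevelChunk& chunk) {
    buf.push_back(e);
    if (buf.size() >= kCap) Flush(chunk);   // one lock per kCap inserts
  }
  void Flush(NextLevelChunk& chunk) {
    std::lock_guard<std::mutex> g(chunk.mu);
    chunk.items.insert(chunk.items.end(), buf.begin(), buf.end());
    buf.clear();
  }
};
\end{verbatim}
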
\noindent\textbf{Communication subsystem. }
The communication subsystem is built on top of MPI.
It consists of graph data requesting threads and responding threads (ratio $1:1$).
The ratio between communication threads and computation threads is $1:3$.
Each communication thread runs on a dedicated CPU core to avoid thread scheduling overhead.
\noindent\textbf{Graph representation. }
Each graph partition is represented in the CSR format, including a vertex array $vtx$ and an edge array $edges$.
$edges[vtx[v]\dots vtx[v+1]-1]$ contains all edges adjacent to vertex $v$.
It takes $O(|V|/p + |E|/p)$ space, where $p$ is the number of partitions.
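
In code, the per-partition CSR lookup is straightforward; the sketch below mirrors the layout just described (names are illustrative).
\begin{verbatim}
#include <cstdint>
#include <utility>
#include <vector>

using VertexId = std::uint32_t;

struct CSRPartition {
  std::vector<std::uint64_t> vtx;    // |V_p|+1 offsets into edges
  std::vector<VertexId>      edges;  // neighbors, grouped by source vertex
};

// N(v) = edges[vtx[v] .. vtx[v+1]-1] for a vertex v local to the partition.
inline std::pair<const VertexId*, std::size_t>
Neighbors(const CSRPartition& g, VertexId v) {
  return { g.edges.data() + g.vtx[v],
           static_cast<std::size_t>(g.vtx[v + 1] - g.vtx[v]) };
}
\end{verbatim}
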
\section{Introduction}
\label{sec:intro}
Graph pattern mining (GPM)~\cite{teixeira2015arabesque,wang2018rstream,dias2019fractal,mawhirter2019automine,jamshidi2020peregrine,chen2020pangolin,shi2020graphpi,bindschaedler2021tesseract,chen2018g,yan2020g},
an important graph processing
workload, is widely used in various
applications~\cite{ma2009insights,schmidt2011efficient,wu2018software,valverde2005network,staniford1996grids,juszczyszyn2011motif,becchetti2008efficient}.
Given a large input graph, GPM enumerates all its \textit{embeddings}, i.e., the input graph's subgraphs isomorphic to some user-defined pattern(s),
and processes them to extract useful information.
GPM applications are computation-intensive
due to the need to enumerate an extremely large
number of subgraphs.
For example, there are more than 30 trillion edge-induced 6-chain embeddings on WikiVote~\cite{leskovec2010signed}---a tiny graph with only 7K vertices.
The complexity and importance of GPM
applications give rise to the
recent general GPM systems~\cite{teixeira2015arabesque,wang2018rstream,dias2019fractal,mawhirter2019automine,jamshidi2020peregrine,chen2020pangolin,shi2020graphpi,bindschaedler2021tesseract,chen2018g,yan2020g}.
Due to the computation intensive nature
and the increasing need to process
large graphs, we believe a GPM system
should scale with both the computation and
memory resources, while achieving
good computation efficiency.
Unfortunately, none of the existing systems
satisfy such seemingly basic requirements.
Early GPM systems, such as
Arabesque~\cite{teixeira2015arabesque},
Fractal~\cite{dias2019fractal}, and
RStream~\cite{wang2018rstream}, are
pattern-oblivious: the algorithm enumerates all subgraphs up to the pattern size.
Although some infeasible subgraphs
can be pruned early with the user-defined
filter function, expensive isomorphism checks
are needed to classify the enumerated embeddings.
Thus, the computation is not as efficient
as more recent systems.
Both Arabesque and Fractal support
distributed execution and can leverage
an increasing number of cores across multiple machines. However,
the graph data is {\em replicated} on each node---limiting the graph size to the
memory capacity of a single machine.
More recent GPM systems, such as
Automine~\cite{mawhirter2019automine} (and
its improved version GraphZero~\cite{mawhirter2019graphzero}),
Peregrine~\cite{jamshidi2020peregrine}, and
GraphPi~\cite{shi2020graphpi}, improve
the computation efficiency by adopting
a pattern-aware enumeration method,
which generates the embeddings
that match the patterns {\em by construction}---eliminating the need for
isomorphism checks.
These systems are mostly
designed for a single machine where
the graph can fit into memory, \red{with
rudimentary out-of-core support such as
memory-mapped I/O.
For graphs larger than the memory size,
they cannot offer competitive performance.}
GraphPi does support distributed execution
with a replicated graph.
Thus, it can only scale with computation but
not memory resources.
Pangolin~\cite{chen2020pangolin}
is a recent non-pattern-aware single machine
system supporting GPU.
It achieves comparable performance with Automine and Peregrine by exposing a set of low-level APIs, and requires users to implement pattern-specific optimizations like search space pruning.
\red{Tesseract~\cite{bindschaedler2021tesseract} is a recent distributed GPM system optimized for
graph updates and is not well
suited for efficient pattern mining on static graphs.
For example, Tesseract takes 1.9 hours to mine 4-motif patterns on LiveJournal~\cite{backstrom2006group,leskovec2009community} with eight 16-core machines~\cite{bindschaedler2021tesseract}, while our experiments show that GraphPi needs only 279 seconds on one machine.
We focus on large-scale GPM on static graphs, which is an orthogonal problem.}
To scale GPM with both computation
and memory resources, there is no ``secret sauce'':
it can be achieved by partitioning the graph
among the memory of distributed machines.
In fact, this approach has been adopted by
a number of distributed graph computation
frameworks such as Pregel~\cite{malewicz2010pregel},
GraphLab~\cite{low2012distributed},
PowerGraph~\cite{gonzalez2012powergraph},
D-Galois~\cite{dathathri2018gluon},
Gemini~\cite{zhu2016gemini}, and
SympleGraph~\cite{zhuo2020symplegraph}.
In such systems, computation is
performed in parallel on the cores in
distributed machines, which may require
remote data accesses and
lead to communication overhead.
For GPM,
G-miner~\cite{chen2018g} and G-thinker~\cite{yan2020g}, to the best of our knowledge, are the only two distributed systems
with partitioned graph.
Unfortunately, the two systems are
quite inefficient and have poor programmability.
Based on our experiments using the
publicly available implementations,
G-thinker running on multiple machines may be even slower than a straightforward single-thread implementation.
For example, it takes G-thinker \red{285.3} seconds to count all triangles on the Patents~\cite{leskovec2005graphs} graph on an 8-node cluster (each node equipped with 16 cores), while a single-thread version can finish within 6.2 seconds~\cite{mawhirter2019automine}.
In distributed execution with partitioned graph,
the enumeration process requires
remote graph data,
i.e., the edge list of a vertex residing
in a remote machine.
There are two general approaches:
1) {\em local data assumption (``moving computation to data'')}---pattern enumeration is always
performed on the machine that locally
holds the data; and
2) {\em local computation assumption (``moving
data to computation'')}---for each embedding matching
a pattern, the whole enumeration
process is performed in a fixed machine, which
fetches remote data as needed.
We analyze both approaches in Section~\ref{sec:limit}
and explain that the first does not lead to
efficient communication, while
the second, although providing a better
opportunity to overlap
communication with local computation,
is not implemented efficiently
in G-thinker (an improved version of G-miner).
At a high level, system efficiency is determined
by two factors---task granularity and
execution schedule.
G-thinker's coarse task granularity does
not allow efficient task scheduling that
optimizes communication; and
the mechanisms to support
data reuse incur high overhead.
\begin{comment}
Thus, the subgraphs to be extended
are communicated around
the machines.
It does not lead to an efficient
implementation for three reasons.
First, since the data need to be accessed
for an extension may reside in
multiple machines, communication across
these relevant remote machines may be needed.
Second, to perform computation, i.e.,
intersection of two edge lists,
data transfers may not only
include subgraphs to be extended, but also
graph data just accessed
from one remote machine to another.
Finally, the computation---subgraph extension---can be only performed after the
data is received, providing little
opportunity for communication-computation
overlapping.
\end{comment}
The goal of this paper is to {\em build a
distributed graph mining system
with partitioned graphs that achieves
competitive or even better performance than
the state-of-the-art GPM systems with
replicated graphs}.
We develop {\em Kudu\xspace, a general distributed
execution engine} with a well-defined abstraction
that can be integrated
with various existing single-machine GPM systems.
This approach keeps the user-facing
programming interface
of existing systems unmodified, which can be as
simple as specifying the pattern to mine,
and only requires the change of the
system implementation.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/overall.pdf}
\vspace{-3mm}
\caption{Overview of Kudu\xspace: Distributed GPM Engine}
\vspace{-7mm}
\label{fig:overall_architecture}
\end{figure}
The key novelty of Kudu\xspace is the new
principle of {\em ``Think Like an Extendable Embedding''}, where the notion of
{\em extendable embedding} serves as the
key abstraction that naturally
generates fine-grained tasks with
efficient low-overhead
computation/communication scheduling.
A GPM system can implement
the pattern-specific \texttt{EXTEND} function
for subgraph enumeration, which is plugged into
Kudu\xspace's distributed execution model.
It specifies
pattern-aware enumeration as a
sequence of embedding extensions at different
levels.
It is essentially a primitive
to break down the
algorithm (nested loops) into small tasks
for distributed execution.
Based on extendable embedding,
we propose a succinct hierarchical data
representation to reduce memory consumption.
For the compilation-based GPM systems
such as AutoMine~\cite{mawhirter2019automine} and GraphPi~\cite{shi2020graphpi},
the \texttt{EXTEND} function can be conveniently implemented
by modifying the code generator.
In this paper, we demonstrate the
integration of Kudu\xspace with AutoMine and GraphPi.
Figure~\ref{fig:overall_architecture} shows the
overview of Kudu\xspace.
To enable efficient scheduling,
we propose a novel BFS-DFS hybrid subgraph exploration that generates a sufficient number of concurrent
embedding extension tasks at
the same level without incurring
high memory consumption.
Importantly, the data fetch and
computation of concurrent tasks
can be scheduled in a circulant
manner to maximize the communication-computation
overlapping.
The computation and communication
of Kudu\xspace can be further optimized with:
1) vertical intermediate result sharing to
avoid redundant computation
among embedding extensions at different levels;
2) horizontal graph data sharing and
redundant communication reduction
among a chunk of
embedding extensions at the same level;
3) simple static graph data cache with
low overhead to reduce
the amount of communication, especially for
skewed graphs; and
4) NUMA-aware support to reduce remote socket
memory accesses.
Compared to G-thinker, Kudu\xspace
supports {\em lightweight data reuse} with
low-cost static cache together with vertical/horizontal
data sharing.
\red{
We built two scalable distributed GPM systems, k-Automine and k-GraphPi, by porting Automine~\cite{mawhirter2019automine} and GraphPi~\cite{shi2020graphpi} on top of Kudu\xspace,
with porting cost of roughly 500 lines of code per system.
k-Automine and k-GraphPi significantly outperform G-thinker by up to \maxspeedupkautominegthinker and \maxspeedupkgraphpigthinker (on average \avgspeedupkautominegthinker and \avgspeedupkgraphpigthinker), respectively.
Kudu\xspace based systems show similar or even better performance compared with GraphPi, the fastest distributed GPM system with replicated graph,
and scale to large graphs that replication based systems cannot handle.}
\section{Distributed Graph Pattern Mining}
\label{sec:prob_chal}
This section explains the problem and
challenges of distributed GPM, and demonstrates
the limitations of existing approaches.
Motivated by the analysis, we propose
{\em Kudu\xspace}, a general distributed execution engine
for GPM.
\subsection{Problems and Challenges}
\label{sec:problem}
As a fundamental technique to scale to massive graphs, graph partition divides
graph data into multiple partitions,
each of which is stored in the memory
of one machine of a distributed cluster.
By distributing the graph data to the memory of multiple machines,
the graph size is no longer limited by
the memory capacity of a single machine.
Out-of-core systems, such as RStream~\cite{wang2018rstream},
can also accommodate graphs larger than memory
by storing the graph in disk, but
they cannot scale with the computation resource.
In this paper, we consider
1-D graph partitioning: the vertex set $V$ of the input graph is partitioned into $N$ parts $V_0, V_1, \ldots, V_{N-1}$, where $N$ is the number of machines.
Machine $i$ ($0\le i<N$) maintains all graph data related to $V_i$---all edges with at least one endpoint in $V_i$---in its memory.
To ensure balanced data distribution, similar to previous systems~\cite{malewicz2010pregel,yan2020g}, the graph partition is determined by a hash function $H(v)$ that maps a vertex $v$ to its partition ID---an integer between $0$ and $N-1$.
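
For illustration, a modulo hash is one common choice of $H(v)$; the concrete hash function is an assumption here, not fixed by the paper.
\begin{verbatim}
#include <cstdint>

// H(v): maps a vertex to a machine/partition ID in [0, N-1].
// Modulo hashing is an illustrative choice.
inline int PartitionOf(std::uint32_t v, int num_machines) {
  return static_cast<int>(v % static_cast<std::uint32_t>(num_machines));
}
// The edge list N(v) is owned by machine PartitionOf(v, N); requesting
// N(v) from any other machine results in remote communication.
\end{verbatim}
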
Graph partition poses two
challenges for GPM.
First, the graph data needed in the
subgraph enumeration may not exist in the
memory of the local machine,
incurring remote data accesses
and communication overhead.
Moreover, such overhead can be
exacerbated on skewed graphs---real-world graphs that follow the ``power law''.
Specifically, high-degree vertices belong
to many more subgraphs, and fetching their edge
lists can lead to a tremendous amount of communication.
The second challenge is due to the complexity
of managing the graph data fetched from remote
machines.
For each machine, while keeping the fetched
remote graph data in memory can enable
data reuse, such data must not consume too
much memory.
\subsection{Limitations of Existing Approaches}
\label{sec:limit}
There are two general
approaches to implement distributed GPM
with partitioned graph.
Here, we discuss them in more detail
with an example to show the limitations.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sec3.pdf}
\vspace{-4mm}
\caption{GPM with 1-D Partitioned Graph}
\label{fig:partition_gpm}
\vspace{-4mm}
\end{figure}
{\bf Example}. Figure~\ref{fig:partition_gpm}
shows an example of 1-D partition of the
input graph among three machines.
Each machine contains the edge lists of
a subset of vertices
mapped to it and all edges
that are connected to them.
If we use the algorithm in Figure~\ref{fig:pattern_example} to construct
the embeddings starting from the local vertices,
remote data accesses are needed.
In the following, we consider two existing
approaches that enable the distributed execution
and explain the problems.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sec3_other_dist.pdf}
\vspace{-4mm}
\caption{Existing Approaches}
\label{fig:other_dist}
\vspace{-7mm}
\end{figure}
{\bf Moving computation to data}.
This approaches is based on the execution
model in Arabesque~\cite{teixeira2015arabesque}, which proposed
the concept of ``Think Like an Embedding''.
The system keeps extending subgraphs
until they reach the pattern size.
The key property is that subgraphs
are transferred among distributed machines, and
the extensions starting from a single vertex
until reaching an embedding
can be performed on different machines.
Arabesque is based on a replicated
graph, and the shuffling of embeddings is
for the purpose of load balancing.
However, the principle can apply to a partitioned
graph, in which the computation is
performed based on local data.
Figure~\ref{fig:other_dist} (a) illustrates
the execution and communication pattern
when we use ``moving computation to data''
for distributed GPM.
Starting from vertex 0, machine 1
first locally extends it into three subgraphs
using the local $N(0)$.
For the next extension, subgraph (0,1) requires
$N(1)$, which is also local; while
subgraph (0,2) and (0,3) require $N(2)$ and
$N(3)$, which are stored in machine 2 according
to the partition.
Thus, subgraphs (0,2) and (0,3) are sent
to machine 2, together with $N(0)$, since
it is needed for the intersections with
$N(2)$ and $N(3)$.
After machine 2 receives the subgraphs
and $N(0)$, it can perform local extension.
The extension of subgraph (0,1) is performed
in machine 1 locally.
This approach has three drawbacks:
1) an extension may be performed on multiple
machines depending on the data partition, potentially
leading to excessive communication overhead;
2) additional edge list data must be transferred for remote computation; and
3) it is difficult to overlap communication and
computation.
{\bf Moving data to computation}.
This approach is used by G-thinker~\cite{yan2020g}, the only
GPM system supporting a partitioned graph.
The design principle is
``Think Like a Subgraph'', in which users
have to specify the subgraph that needs to be
fetched to local memory so that {\em all extension steps} of pattern
enumeration can be performed by a machine
with local data.
Figure~\ref{fig:other_dist} (b) provides
an example.
The enumerations from vertices 0 and 1
form two tasks, which require
$N(0),N(1),N(2),N(3)$ (vertex 0 task)
and $N(0),N(1),N(2)$ (vertex 1 task).
$N(2)$ and $N(3)$ must be fetched
from machine 2 before the complete
algorithm can be executed.
If the two tasks are created
and executed concurrently,
only one request for $N(2)$ needs to be issued,
and the fetched data can be shared
by the two tasks.
Since each graph partition contains
many vertices, a machine can generate
many tasks that execute concurrently,
which may enable
communication-computation overlapping.
Note that not all data in the subgraph
are used in the enumeration; due to the
coarse granularity of tasks, the exact data usage can
only be determined after some
communication has already been wasted.
The programmers are responsible for specifying
both the subgraph, e.g.,
$k$-clique counting requires
the subgraph induced by the starting vertex and its 1-hop neighbors,
and pattern enumeration based on the
fetched subgraph.
Its programmability
is considerably poorer than the
recent GPM systems~\cite{mawhirter2019automine,jamshidi2020peregrine,shi2020graphpi} that only require
pattern specification.
Moreover, to mitigate stragglers,
users need to manually implement code
to detect slow tasks
and divide them into multiple sub-tasks
if necessary.
The programming burden may partially
explain the fact that G-thinker
only includes triangle counting, maximal-clique finding, and embedding counting of one special pattern.
To eliminate redundant
communication and enable data reuse, G-thinker
manages graph data by a
sophisticated software cache with expensive reference count based garbage collection.
In addition, since the number of tasks is
huge---equal to the number of vertices in
a graph partition---some tasks need to be swapped to
disk to bound memory usage.
To achieve high performance,
lightweight
techniques are needed to exploit data reuse.
Current solutions~\cite{chen2018g,yan2020g}
miss the opportunity
to leverage application-specific knowledge
for efficient implementations.
\begin{comment}
For example, in $k$-clique counting, i.e., calculating the number of complete subgraphs with $k$ vertices, all vertices in a $k$-clique embedding must be mutual 1-hop neighbors.
A task $t_i$ calculates the number of $k$-cliques
that include a given vertex $v_i(v_i\in V)$.
Due to the complete connectivity of $k$-clique, the data associated with task $t_i$
is the subgraph $g_i$ induced by $v_i$
and all its 1-hop neighbors.
For each task $t_i$, the system first pulls all graph data of $g_i$ that exist on
remote machines, and performs local $k$-clique counting on the local data
to finish the task.
\end{comment}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/sec4.pdf}
\vspace{-3mm}
\caption{Extendable Embedding Example}
\label{fig:extendable_embedding_example}
\vspace{-7mm}
\end{figure}
\subsection{Kudu\xspace System Overview}
\label{sec:principle}
To
provide appropriate abstraction and task
granularity,
we develop {\em Kudu\xspace, a general distributed
execution engine} with a well-defined abstraction
that can be integrated
with various existing single-machine GPM systems.
The advantage of this approach is obvious:
the programming interfaces and codes
based on existing GPM systems do not change and
Kudu\xspace can {\em transparently} enable the
distributed execution.
Figure~\ref{fig:overall_architecture} shows
the overview of Kudu\xspace.
The key abstraction is {\em extendable embedding},
based on
which the ``client systems'' above (such as AutoMine and GraphPi) can express the pattern
enumeration algorithm (such as the one
shown in Figure~\ref{fig:pattern_example})
in the \texttt{EXTEND} function.
We implement the compilation-based GPM systems
(AutoMine and GraphPi) on top of
Kudu\xspace since
the \texttt{EXTEND} function can be conveniently implemented
by modifying the code generator.
Kudu\xspace contains
1) a distributed execution engine
that executes the operations in \texttt{EXTEND}
function in a fine-grained manner with
a novel BFS-DFS hybrid exploration; and
2) a communication subsystem that enables
effective data sharing and
communication-computation
overlapping enabled by
BFS-DFS hybrid exploration.
Unlike G-thinker,
Kudu\xspace supports {\em low-cost data reuse}
with vertical/horizontal data reuse and
a simple static cache.
\section{Kudu\xspace System Optimizations}
\label{sec:reduce}
\subsection{Vertical Computation Sharing}
\label{sec:compute_share}
\begin{figure}
\centering
\vspace{-1mm}
\includegraphics[width=\linewidth]{figures/four_clique.pdf}
\vspace{-8mm}
\caption{4-clique Mining}
\label{alg:four_clique_mining}
\vspace{-5mm}
\end{figure}
Extending an embedding $e$ may generate intermediate results that can be reused by its descendants to avoid computation redundancy.
Let us consider
Automine's~\cite{mawhirter2019automine} algorithm
for 4-clique (a fully-connected size-4 pattern) mining, shown in Figure~\ref{alg:four_clique_mining}, as an example.
To extend an edge embedding $e_{edge}=(v_0,v_1)$ to a triangle $e_{triangle}$, we should find the common neighbors of $v_0$ and $v_1$, i.e., calculating $N(v_0)\cap N(v_1)$.
To further extend the triangle embedding $e_{triangle}=(v_0,v_1,v_2)$ to a 4-clique,
we need to find those directly connected with $v_0$, $v_1$, and $v_2$, i.e., calculating $N(v_0)\cap N(v_1)\cap N(v_2)$.
In this case, extending $e_{triangle}$ can reuse the intermediate result $N(v_0)\cap N(v_1)$ of its parent $e_{edge}$ to avoid extra intersection cost.
Our hierarchical data representation
can naturally enable intermediate result sharing
between parent and child extendable embedding.
The intermediate results that can be shared
across levels are determined by algorithm
and can be indicated in \texttt{EXTEND} function.
These results are stored in an extendable
embedding; its children can directly
access them and copy them into their own
embedding objects. In this way, such
intermediate results can be accessed by
all descendants.
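
The reuse can be expressed directly with sorted-list intersections, as in the following simplified sketch of the idea (not Kudu\xspace's actual code).
\begin{verbatim}
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

using VertexId = std::uint32_t;

// Extending (v0,v1) to triangles: compute and KEEP the intersection.
std::vector<VertexId> ExtendEdge(const std::vector<VertexId>& n0,
                                 const std::vector<VertexId>& n1) {
  std::vector<VertexId> common01;
  std::set_intersection(n0.begin(), n0.end(), n1.begin(), n1.end(),
                        std::back_inserter(common01));
  return common01;  // stored in the embedding as a reusable result
}

// Extending (v0,v1,v2) to 4-cliques: intersect the parent's stored
// result with N(v2) instead of recomputing the overlap of N(v0), N(v1).
std::vector<VertexId> ExtendTriangle(const std::vector<VertexId>& common01,
                                     const std::vector<VertexId>& n2) {
  std::vector<VertexId> common012;
  std::set_intersection(common01.begin(), common01.end(),
                        n2.begin(), n2.end(),
                        std::back_inserter(common012));
  return common012;
}
\end{verbatim}
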
\subsection{Horizontal Data Sharing}
\label{sec:data_share}
As discussed before, the extendable
embeddings in a chunk---a number of embeddings at the same level---can share active edge lists
in the memory allocated for the chunk.
If some data are shared and fetched from a
remote machine, then the communication
to fetch them should ideally be performed
only once. Otherwise, if the local
machine directly sends the data requests
for each extendable embedding, some
communications may be redundant.
In the following, we discuss the
mechanism to enable the data sharing and
redundant communication reduction.
We maintain the active edge list of a vertex $v$ requested
by the extendable embeddings in a chunk in a
per-level hash table, with $v$ as the key and
the requesting embedding $e$ as the value.
When an extendable embedding $e$
requesting a new active edge list of $v$
(one not in $e$'s parent) is
created at level $i$, we first check whether there is
already an entry for $v$
in the hash table.
If so (suppose the value is $e'$),
we add an extra pointer in $e$ to $e'$, indicating that the new active edge list
requested by $e$ can be found in $e'$.
Thus, there is no need to fetch or
allocate memory for it.
Otherwise, the pointer of $e$ is added
to the hash table using $v$ as the key.
To minimize computation cost, we do not support collisions for hash table insertion---if the hash table entry $hash(v)$ is already occupied by another vertex $u$ with the same hash value, we simply drop the insertion of $v$ rather than building up a collision chain.
This simple policy
leaves a small amount of redundant
communication (or additional memory)
that could otherwise have been
eliminated (or saved), but it significantly reduces
the hash table overhead.
We find that it still
drastically reduces the communication cost.
For instance, it reduces the communication volume from 4.4TB to 33.8GB for 5-clique mining on the LiveJournal~\cite{backstrom2006group,leskovec2009community} graph.
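
A minimal sketch of this drop-on-collision table follows; the structure and names are illustrative assumptions.
\begin{verbatim}
#include <cstdint>
#include <vector>

using VertexId = std::uint32_t;
struct ExtendableEmbedding;  // opaque here (see the earlier sketch)

struct ShareTable {
  struct Slot { VertexId key; ExtendableEmbedding* owner; bool used; };
  std::vector<Slot> slots;
  explicit ShareTable(std::size_t n) : slots(n) {}  // value-initialized

  // Returns the embedding that already holds N(v), registers e as the
  // owner if the slot is free, or gives up sharing on a collision.
  ExtendableEmbedding* FindOrInsert(VertexId v, ExtendableEmbedding* e) {
    Slot& s = slots[v % slots.size()];
    if (s.used) return s.key == v ? s.owner : nullptr;  // collision: drop
    s = {v, e, true};
    return nullptr;  // caller fetches N(v) itself
  }
};
\end{verbatim}
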
\subsection{Static Data Cache}
\label{sec:cache}
Data accesses in GPM show long-term locality.
Due to the power law and skewness of graphs,
the graph data (edge lists) of some vertices are accessed much more frequently than others and
therefore contribute a large portion of the communication cost.
For example, the most frequently accessed 5\% of graph data for 3-motif mining on the UK~\cite{boldi2004webgraph} graph contributes 93\% of the communication volume.
Intuitively, these ``hot-spot'' vertices with
high degree are included in more embeddings
during the enumeration.
To leverage locality to reduce communication cost while keeping computation overhead low, we design an efficient static software graph data cache
{\em shared by all chunks across different levels}.
The cache size is typically 5\% or 10\% of the graph size.
The cache is empty at the beginning.
During embedding enumeration,
every time the system is about to fetch the graph data of a vertex $v$,
it will query the cache first to see whether the data of $v$ has been cached.
If so, it directly obtains the data from it.
Otherwise, if $v$'s degree is larger than a threshold (e.g., 64) and the cache is not full, the system will cache $v$'s data after fetching it through the network.
Once the cache is full, the system no longer inserts any data, since
we do not support cache eviction or replacement.
We make this design choice to
make the cache as lightweight as possible.
It is a good trade-off because typically graph workloads present very poor spatial and temporal locality and would not benefit much from the
general cache.
Our ``first accessed, first cached, with threshold'' policy
approximately caches the most frequently accessed data---effectively capturing the skewed
graph access characteristics.
The no replacement policy works well for
the following reason.
Assuming that graph data accesses
are temporally uniformly distributed,
if a vertex $v$ is accessed more frequently than $u$ over the whole access history,
it is also likely that the first access to $v$ occurs earlier than the first access to $u$.
Thus, the more frequently accessed data
have higher chance to be placed in the cache.
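
The policy can be captured in a short sketch; the degree threshold, sizing, and names below are illustrative.
\begin{verbatim}
#include <cstdint>
#include <unordered_map>
#include <vector>

using VertexId = std::uint32_t;

struct StaticCache {
  explicit StaticCache(std::size_t cap) : capacity_bytes(cap) {}

  const std::vector<VertexId>* Lookup(VertexId v) const {
    auto it = data.find(v);
    return it == data.end() ? nullptr : &it->second;
  }

  // Called after a remote fetch; never evicts, never replaces.
  void MaybeInsert(VertexId v, std::vector<VertexId> edge_list) {
    std::size_t sz = edge_list.size() * sizeof(VertexId);
    if (edge_list.size() <= kDegreeThreshold ||   // vertex too cold
        used_bytes + sz > capacity_bytes)         // or cache full
      return;
    used_bytes += sz;
    data.emplace(v, std::move(edge_list));
  }

  static constexpr std::size_t kDegreeThreshold = 64;
  std::unordered_map<VertexId, std::vector<VertexId>> data;
  std::size_t capacity_bytes;
  std::size_t used_bytes = 0;
};
\end{verbatim}
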
\red{Now it is a good time to recap the data
(active edge lists) sharing
policies in Kudu\xspace, which include
three aspects: 1) the vertical
sharing {\em between an extendable embedding
and its descendants};
2) the horizontal data sharing
among extendable embeddings in the
{\em same chunk}---they are by definition also in
the same level; and
3) the sharing among {\em all chunks
across different levels} with static cache.
Unless shared through the static cache,
Kudu\xspace does {\em not} support
sharing between two extendable
embeddings at different levels
when one is not a descendant of the other.
We intentionally make this design choice
because such sharing incurs
high overhead---checking the hash tables
of different levels and accessing the data.
G-thinker indeed supports it
with its software cache, but that is also
one of the key reasons for its low
performance. }
\subsection{NUMA-aware Support}
\label{sec:numa}
Modern clusters usually adopt the NUMA architecture~\cite{lameter2013overview}---the memory and processors are distributed to multiple sockets, and accessing remote-socket memory is more costly.
To reduce remote-socket memory accesses, we enumerate embeddings in a NUMA-aware manner:
each socket on the same node explores extendable embeddings independently, from different starting vertices.
Cross-socket communication only happens in two cases:
1) the communication thread on one socket fetches graph data from the memory of another; and
2) one socket has finished its embedding exploration and tries to steal work from another.
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigplan,screen]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,steve@university.edu}
\email{firstname.lastname@phillips.org}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
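For illustration, the generated classification commands and a keyword
list typically look like the following (the concepts and keywords
here are placeholders, not recommendations):
\begin{verbatim}
\ccsdesc[500]{Computing methodologies~Machine learning}
\ccsdesc[300]{Information systems~Information retrieval}
\keywords{datasets, neural networks, benchmarks}
\end{verbatim}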
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
Always use midrule to separate table header rows from data rows, and
use it only for this purpose. This enables assistive technologies to
recognise table headers and support their users in navigating tables
more easily.
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{equation}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
Your figures should contain a caption which describes the figure to
the reader.
Figure captions are placed {\itshape below} the figure.
Every figure should also have a figure description unless it is purely
decorative. These descriptions convey what's in the image to someone
who cannot see it. They are also used by search engine crawlers for
indexing images, and when images cannot be loaded.
A figure description must be unformatted plain text less than 2000
characters long (including spaces). {\bfseries Figure descriptions
should not repeat the figure caption --- their purpose is to capture
important information that is not already provided in the caption or
the main text of the paper.} For figures that convey important and
complex new information, a short text description may not be
adequate. More complex alternative descriptions can be placed in an
appendix and referenced in a short figure description. For example,
provide a data table capturing the information in a bar chart, or a
structured list representing a graph. For additional information
regarding how best to write figure descriptions and why doing this is
so important, please see
\url{https://www.acm.org/publications/taps/describing-figures/}.
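A minimal sketch of a figure with both a caption and a description
(the image file name is a placeholder):
\begin{verbatim}
\begin{figure}
  \centering
  \includegraphics[width=\linewidth]{myfigure}
  \caption{A short caption for the figure.}
  \Description{A plain-text description of the figure
    for readers who cannot see it.}
\end{figure}
\end{verbatim}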
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
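A sketch of such a figure, using the \verb|teaserfigure| environment
(the image file name is a placeholder):
\begin{verbatim}
\begin{teaserfigure}
  \includegraphics[width=\textwidth]{sampleteaser}
  \caption{A caption spanning the full page width.}
  \Description{A plain-text description of the teaser figure.}
\end{teaserfigure}
\end{verbatim}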
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``.bib'' suffix,
of the \BibTeX\ file.
\subsubsection{Motivation}
\paragraph{For what purpose was the dataset created?}
The dataset was created for the purpose of extending the range of existing VL-NLE datasets with a large-scale dataset that requires fine-grained reasoning.
\vspace{-2.5ex}
\paragraph{Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?}
The dataset was created by researchers from the University of Oxford. It builds on existing datasets which involved other institutions (NEC Laboratories America for SNLI-VE) and universities (Stanford University for SNLI, University of Illinois at Urbana-Champaign for Flickr30k, University of Oxford for e-SNLI).
\vspace{-2.5ex}
\subsubsection{Composition}
\paragraph{What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?}
Photos (some with people) and natural language sentences.
\vspace{-2.5ex}
\paragraph{How many instances are there in total?}
In total, there are 430,796 instances.
\vspace{-2.5ex}
\paragraph{Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? }
The dataset contains a reduced sample of the original 570k sentence pairs from SNLI \cite{bowman2015large}. It has been reduced because various filtering methods were applied to remove noise that arose from combining e-SNLI and SNLI-VE. The filtering steps disproportionately affect the ``neutral'' class.
\vspace{-2.5ex}
\paragraph{What data does each instance consist of?}
Each instance consists of an image, a natural language hypothesis, a label that classifies the image-hypothesis pair as entailment, contradiction, or neutral, and a natural language explanation that explains why the label was given.
\vspace{-2.5ex}
\paragraph{Is any information missing from individual instances?}
No, all instances contain the complete information described above.
\vspace{-2.5ex}
\paragraph{Are relationships between individual instances made explicit?}
Yes. Some instances refer to the same image, which is indicated via their image ID.
\vspace{-2.5ex}
\paragraph{Are there recommended data splits?}
Yes, the train, dev, and test splits are given with the release of the dataset.
\vspace{-2.5ex}
\paragraph{Are there any errors, sources of noise, or redundancies in the dataset?}
The labels and explanations were originally annotated for textual premise-hypothesis pairs. By replacing the textual premise with an image, noise occurs. Despite our best efforts to filter out this noise, a considerable error rate remains.
\vspace{-2.5ex}
\paragraph{Is the dataset self-contained, or does it link to or otherwise rely on external resources?}
The dataset needs to be linked with Flickr30k images, which are publicly available.
\vspace{-2.5ex}
\paragraph{Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)? }
No.
\cleardoublepage
\subsubsection{Collection Process}
\paragraph{How was the data associated with each instance acquired?}
Hypotheses and explanations were annotated by people. SNLI-VE combined e-SNLI and Flickr30k by replacing the textual premise with an image. This was possible because the textual premises in SNLI are all captions of Flickr30k images. e-SNLI-VE was obtained by associating the explanations from e-SNLI with SNLI-VE. We used MTurk to reannotate the labels and explanations for the neutral class in the validation and test sets. Several validation steps were used to measure the effectiveness of merging, re-annotating, and filtering the dataset.
\vspace{-2.5ex}
\paragraph{What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?}
Software program and manual human curation.
\vspace{-2.5ex}
\subsubsection{Preprocessing/Cleaning/Labeling}
\paragraph{Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?}
Various filters were used to remove noise. We used a false neutral detector (details in Section~\ref{d:fn}), a keyword filter (details in Section~\ref{d:kw}), a similarity filter (details in Section~\ref{d:sim}), and an uncertainty filter (details in Section~\ref{d:unc}). We also reannotated all neutral examples in the validation and test set.
\subsubsection{Distribution}
\paragraph{Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which
the dataset was created?}
The dataset is publicly released and free to access.
\vspace{-2.5ex}
\subsubsection{Maintenance}
\paragraph{Who is supporting/hosting/maintaining the dataset?}
The first author of this paper.
\vspace{-2.5ex}
\paragraph{How can the owner/curator/manager of the dataset be contacted (e.g., email address)?}
The first author of this paper can be contacted via the email address given on the title page.
\subsection{e-SNLI-VE Datasheet}
\input{appendix/datasheet}
\subsection{Relabeling e-SNLI-VE via MTurk} \label{app:mturk_relab}
In this work, we collect new labels and explanations for the neutral pairs of the validation and test sets of e-SNLI-VE. We provide workers with the definitions of entailment, neutral, and contradiction for image-sentence pairs and one example for each label. As shown in Figure~\ref{fig:amt_esnlive_setup}, for each image-sentence pair, workers are required to (a) choose a label, (b) highlight words in the sentence that led to their label decision, and (c) explain their decision in a comprehensive and concise manner, using at least half of the words that they highlighted. Requiring an explanation at the same time as a label likely has a positive effect on the correctness of the label, since having to justify the chosen label in writing may make annotators pay closer attention. Moreover, we implemented additional quality-control measures for crowdsourced annotations, such as (a) collecting three annotations for every input, (b) injecting trusted annotations, and (c) restricting the task to annotators with at least a 90\% previous approval rate.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1\linewidth]{app_figures/example_mturk_1.png}
\end{center}
\caption{A snapshot of the annotation interface that was used to manually reannotate the neutral labels in the validation and test sets of e-SNLI-VE.}%
\label{fig:amt_esnlive_setup}
\vspace{-2ex}
\end{figure}
There were 2,060 workers in the annotation effort, with an average of 1.98 assignments per worker and a standard deviation of 5.54. No restriction was put on the workers’ location. Each assignment consisted of a set of 10 image-sentence pairs. The instructions are shown in Figure~\ref{fig:amt_esnlive_inst}. The annotators were also guided by three examples, one for each label. For each assignment of 10 questions, one trusted annotation with known label was inserted at a random position, as a measure to control the quality of label annotation. Each assignment was completed by three different workers.
\begin{figure}[H]
\begin{center}
\includegraphics[width=1\linewidth]{app_figures/example_instructions.png}
\end{center}
\caption{A snapshot of the instructions that were provided to the workers that reannotated the neutral labels in the validation and test sets of e-SNLI-VE.}%
\label{fig:amt_esnlive_inst}
\vspace{-2ex}
\end{figure}
To check the success of our crowdsourcing, we manually assessed the relevance of explanations for a random subset of 100 examples. A marking scale between 0 and 1 was used, assigning a score of $k/n$ when an explanation mentioned $k$ of its $n$ required attributes. The workers' explanations achieved an average relevance of 83.5\%.
\subsection{Ambiguity in e-SNLI-VE}
We noticed that some instances in SNLI-VE are ambiguous. We show some examples with justifications in Figures~\ref{fig:ambig-leer}, \ref{fig:ambiguous2} and \ref{fig:ambiguous3}. In order to have a better sense of this ambiguity, three authors of this paper independently annotated 100 random examples. All three authors agreed on 54\% of the examples, exactly two authors agreed on 45\%, and there was only one example on which all three authors disagreed. We identified the following three major sources of ambiguity:
(1) mapping an emotion in the hypothesis to a facial expression in the image premise, e.g., ``people enjoy talking'', ``angry people'', ``sad woman'': even when the face is visible, inferring an emotion from a static image may be subjective; (2) personal taste, e.g., ``the sign is ugly''; and (3) lack of consensus on terms such as ``many people'' or ``crowded''.
In our crowdsourced re-annotation effort, we accounted for this by removing an instance if all three annotators disagreed on the label (5.2\% of the validation and 5.5\% of the test set). Otherwise, we chose the majority label. Looking at the 18 instances where we disagreed with the label assigned by the MTurk workers, we noticed that 12 were due to ambiguity in the examples and 6 were due to workers' errors.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{app_figures/ambiguous2.png}
\caption{\label{fig:ambiguous2}Ambiguous SNLI-VE instance. Some may argue that the woman's face betrays sadness, but the image is not quite clear. Moreover, even with better resolution, a facial expression may not be strong enough evidence to support the hypothesis about the woman's emotional state.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{app_figures/ambiguous1.png}
\caption{\label{fig:ambig-leer}Ambiguous SNLI-VE instance. The lack of consensus is on whether the man is ``leering'' at the woman. While it is likely the case, this interpretation in favour of entailment is subjective, and a cautious annotator would prefer to label the instance as neutral.}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{app_figures/ambig-kindergarten.png}
\caption{\label{fig:ambiguous3}Ambiguous SNLI-VE instance. Some may argue that it is impossible to certify from the image that the children are kindergarten students, and label the instance as neutral. On the other hand, the furniture may be considered as typical of kindergarten, which would be sufficient evidence for entailment.}
\end{figure}
\subsection{Details on Filters} \label{sec:app_filt}
In Table~\ref{tab:filters} we provide a quantitative analysis of the effects our filters had on the dataset. The accuracies are obtained from our hand-annotated subset of 535 examples. On this subset, we first annotated every image-sentence pair as Entailment, Neutral, or Contradiction. Accuracies are obtained by comparing our own annotation with the dataset annotation. Note that we obtain higher error rates for the Entailment and Contradiction classes (9.7\% and 8.6\%) than what the authors of the original paper found~\cite{xie_visual_2019} (less than 1\%). One explanation for that could be the ambiguity that is inherent in the task. The share of bad explanations is obtained by evaluating every explanation as \emph{bad}, \emph{okay}, or \emph{great}. If the label is wrong, the explanation is automatically deemed \emph{bad}, as it will try to explain a wrong answer.
Note that in e-SNLI, the authors found that the human-annotated explanations themselves have an error rate of 9.6\% (19.6\% on entailment, 7.3\% on neutral, 9.4\% on contradiction), which effectively bounds the explanation quality that any dataset cleaning can achieve.
\begin{table*}[ht!]
\begin{center}
\begin{tabulary}{\linewidth}{RCCCCCCCCCCC}
\toprule
& \multicolumn{3}{c}{Dataset Size} & \multicolumn{4}{c}{Share of wrong labels} & \multicolumn{4}{c}{Share of bad explanations} \\
\cmidrule(r){2-4} \cmidrule(r){5-8} \cmidrule(r){9-12}
& \mbox{Train Set} & \mbox{Val Set} & \mbox{Test Set} & All & E & N & C & All & E & N & C \\
\midrule
Raw & 529,505 & 17,554 & 17,899 & 19.3\% & 9.7\% & 38.6\% & 8.6\% & 35.7\% & 35.2\% & 45.1\% & 26.3\% \\
FN removal & 481,479 & 17,554 & 17,899 & 13.0\% & 9.7\% & 23.5\% & 8.6\% & 31.3\% & 35.2\% & 32.6\% & 26.3\% \\
KW Filter & 459,353 & 16,862 & 17,188 & 13.4\% & 10.1\% & 23.7\% & 8.8\% & 28.0\% & 28.3\% & 32.1\% & 24.6\% \\
\mbox{Uncertainty Filter} & 429,774 & 15,402 & 15,829 & 12.5\% & 10.1\% & 23.7\% & 4.5\% & 26.7\% & 28.3\% & 32.1\% & 19.5\% \\
Similarity Filter & 401,717 & 14,339 & 14,740 & 12.8\% & 10.5\% & 23.7\% & 4.5\% & 25.2\% & 24.1\% & 32.1\% & 19.5\% \\
\bottomrule
\end{tabulary}
\caption{Each row describes the state of the dataset upon application of the given filter. The share of wrong labels and bad explanations is only representative of the training split. The first row describes the state of the dataset in its raw form, i.e., before any of the automatic filtering steps. The second row describes the state of the datasets upon application of the false neutral (FN) removal filter, etc.}%
\label{tab:filters}
\end{center}
\end{table*}
An illustrative example for the motivation of the false neutral detector is given in the main paper in Figure~\ref{fig:fn}. Examples for the keyword and similarity filters are given in Figure \ref{fig:kw} and Figure \ref{fig:sf}, respectively.
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\linewidth]{app_figures/kw_example.png}
\end{center}
\caption{The use of the words ``synonym'' and ``rephrasing'' makes it clear that the explanation is overly focused on the linguistic features of the textual premise.}%
\label{fig:kw}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.8\linewidth]{app_figures/sf_example.png}
\end{center}
\caption{The textual premise and hypothesis are almost identical sentences, which led to a low-quality explanation.}%
\label{fig:sf}
\end{figure}
\subsection{Anchor Effects}
An example of the instructions that were shown to the MTurk annotators can be seen in Figure~\ref{fig:evil_inst}. The interface through which the annotators evaluated the explanations is displayed in Figure~\ref{fig:evil_interface}.
The cost to evaluate \emph{one} model on \emph{one} dataset is \$108--117.
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.4]{app_figures/mturk2_instructions.png}
\end{center}
\caption{A snapshot of the instructions that were provided to the annotators that evaluated the explanations.}%
\label{fig:evil_inst}
\vspace{-2ex}
\end{figure*}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=1\linewidth]{app_figures/vqax_mturk_example.png}
\end{center}
\caption{A snapshot of the interface through which annotators evaluated the explanations.}%
\label{fig:evil_interface}
\vspace{-2ex}
\end{figure}
\subsection{Reproducing Previous Results}
In this work we reproduced three different models. The code for RVT is publicly available, and we only had to add a classifier suited to the input type of RVT. The code for PJ-X is also publicly available, albeit written in an outdated version of the Caffe framework, so we translated it into PyTorch. For FME, no code is available, and we therefore re-implemented their model as faithfully as possible from the instructions given in the paper~\cite{wu_faithful_2019}. In Table~\ref{tab:reprod} we show that the NLG metrics of our re-implementations come very close to those reported in the original papers.
For PJ-X and FME, we had to make a few minor deviations from the original implementations. To address gradient issues (vanishing gradients and training instability) in PJ-X, we changed the L2 normalization to layer normalization~\cite{ba2016layer} in the decoder and added gradient clipping with a threshold of 0.1. FME was re-implemented in consultation with the first author of the original paper. We re-implemented their ``base'' model, which leaves out some of their model extensions. This is motivated by the fact that these extensions either did not lead to performance increases for us (their $\mathcal{L}_F$ loss) or are difficult to reproduce from the descriptions in the paper (their dataset filter $\mathcal{F}$). For the sake of standardization, we use a ResNet-101 as the feature extractor for both models. We also tried a ResNet-152, but this had little effect on our results.
\begin{table*}[ht!]
\begin{center}
\begin{tabulary}{\linewidth}{LLCCCCC}
\toprule
Model & & BLEU-4 & METEOR & ROUGE-L & CIDEr & SPICE \\
\midrule
\multirow{2}*{PJ-X~\cite{park_multimodal_2018}} & \emph{Original} & 19.8 & 18.6 & 44.0 & 73.4 & 15.4 \\
& \emph{Ours} & 20.1 & 18.3 & 43.0 & 71.8 & 15.3 \\
\multirow{2}*{FME~\cite{wu_faithful_2019}} & \emph{Original} & 23.5 & 19.0 & 46.2 & 81.2 & 17.2 \\
& \emph{Ours} & 20.8 & 19.2 & 44.8 & 77.9 & 16.7 \\
\bottomrule
\end{tabulary}
\caption{A comparison (under the same settings) of automatic NLG metrics on VQA-X between our re-implementations (\emph{Ours}) of PJ-X and FME and the results reported in the papers (\emph{Original}).}%
\label{tab:reprod}
\end{center}
\end{table*}
\subsection{Hyperparameters} \label{app:hyp}
In total, we have four models and three datasets. For PJ-X and FME, we chose the same hyperparameters as the original authors across all datasets. For PJ-X, we also experimented with larger learning rates, as we experienced convergence issues. For RVT and e-UG, we conducted a grid search over three batch sizes, three learning rates, and three ways of combining the two losses. We compared dynamic loss weighting~\cite{liu2019end} (with two loss temperatures, $T=2$ and $T=0.5$) with simply adding both losses; however, dynamic weighting did not improve results enough to warrant the added complexity. We selected the best configuration on VQA-X and then used these settings to train on e-SNLI-VE and VCR. For BERT on VCR, we had to use a larger batch size (128), as training would not converge otherwise. The final hyperparameters for all four models are reported in Table \ref{tab:hparams}.
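Schematically, the compared loss combinations can be written as follows (our notation; the dynamic variant follows the temperature-based weighting of~\cite{liu2019end}):
\begin{displaymath}
\mathcal{L} = \lambda_T \mathcal{L}_T + \lambda_E \mathcal{L}_E,
\end{displaymath}
where simply adding both losses corresponds to $\lambda_T = \lambda_E = 1$, and dynamic weighting adjusts $\lambda_T$ and $\lambda_E$ during training based on recent loss ratios, with the temperature $T$ controlling the softness of the weighting.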
\begin{table*}[ht!]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCC}
\toprule
& PJ-X & FME & RVT & e-UG \\
\midrule
Batch Size & 128 & 128 & 32* / 64 & 64 \\
Learning Rate (LR) & \num{7e-4} & \num{5e-4} & \num{5e-5} & \num{2e-5} \\
Training Type & JOINT* & JOINT* & SEPARATE & JOINT \\
Loss Combination & $\mathcal{L}_T+\mathcal{L}_E$ & $\mathcal{L}_T+\mathcal{L}_E$ & N.A. & $\mathcal{L}_T+\mathcal{L}_E$ \\
Optimizer & Adam & Adam & AdamW & AdamW for BERT \\
LR Scheduler & - & Step decay & Linear w/ warmup & Linear w/ warmup \\
Tokenization & Word & Word & WordPiece & WordPiece \\
Max Question Length & 23 & 23 & 19 & 19 \\
Max Answer Length & 23 & 40 & 23 & 23 \\
Max Explanation Length & 40 & 40 & 51 & 51 \\
Decoding & Greedy & Greedy & Greedy & Greedy \\
\bottomrule
\end{tabulary}
\caption{Hyperparameters used for the different models across all datasets. $\mathcal{L}_T$ and $\mathcal{L}_E$ are the task loss and explanation loss, respectively. For RVT, the task batch size for VCR is 128, as 32 did not lead to convergence. For PJ-X and FME, we trained $M_T$ and $M_E$ separately on VQA-X.}%
\label{tab:hparams}
\end{center}
\end{table*}
An additional overview of the differences between the models is given in Table~\ref{tab:modDiff}.
\begin{table*}[ht!]
\begin{center}
\begin{tabulary}{\linewidth}{LLLLL}
\toprule
Model $M$ & Vision Backbone & VL Model $M_T$ & Explanation Model $M_E$ & $M_E$ Input \\
\midrule
PJ-X & ResNet-101 & MCB & LSTM (a) & image features, question, answer \\
FME & ResNet-101 & UpDown & LSTM (b) & image features, question, answer \\
RVT & \mbox{Faster R-CNN} & BERT & GPT-2 & object tags, question, answer \\
e-UG & \mbox{Faster R-CNN} & UNITER & GPT-2 & contextualized embeddings of image-question pair, question, answer \\
\bottomrule
\end{tabulary}
\caption{Summary of the model differences.}%
\label{tab:modDiff}
\end{center}
\end{table*}
\subsection{Adaptations for VCR}
To accommodate the multiple-choice nature of task~$T$, we adapt the architectures accordingly. For UNITER, we follow the original paper and formulate multiple-choice as a binary classification of question-image-answer tuples as True or False. The final answer is determined through a softmax over the four True scores. For PJ-X and FME, we follow the approach in the original VCR paper and obtain the logit for response $j$ via the dot product of the final representation of the model and the final hidden state of the LSTM encoding of the response $r^j$ \cite{zellers_recognition_2019}. For RVT, we use \textsc{BertForMultipleChoice} from the transformers library~\cite{wolf-etal-2020-transformers}.
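As a sketch of the PJ-X/FME adaptation in our own notation (with $f(V,q)$ the model's final representation for image $V$ and question $q$, and $h(r^j)$ the final LSTM hidden state for response $r^j$; these symbols are ours, not from the original papers):
\begin{displaymath}
s_j = f(V, q)^{\top} h(r^j), \qquad
p(j \mid V, q) = \frac{\exp(s_j)}{\sum_{k=1}^{4} \exp(s_k)}.
\end{displaymath}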
\section{Human Evaluation Notes}
\subsection{e-SNLI-VE improvement measures}
In terms of detecting false neutrals, our method has a recall of 0.43 and a precision of 0.61.
Marasovic et al.~\cite{marasovic_natural_2020} also assessed the raw dataset and found that 81\% and 77\% of explanations are plausible (for a sample of 250 contradiction and entailment pairs).
\section{MTurk Evaluation}
We take the same sample of 300 examples for each dataset and do not restrict the evaluation to examples that were predicted correctly (as done by Park et al.~\cite{park_multimodal_2018}).
Previous work has conditioned the explanations on the ground-truth answer before having them evaluated by humans. We argue against this, as it does not reflect a real-life scenario. An alternative would be to select a subset of X examples for every dataset where the answer was predicted correctly, as was done by Park et al. The issue with this is that we would introduce a bias by choosing the type of examples that were answered correctly, which could, e.g., be easier, and this would not enable a fair comparison. We could take a subset of examples where the answer was correct for all models, but this would still not allow future methods to reuse our benchmark and compare results.
\subsection{Detailed Results for e-SNLI-VE} \label{sec:esnlive_dets}
In this section, we provide more detailed results on our newly released e-SNLI-VE dataset. We break down the task accuracy and explanation scores by the three different classes (see Table~\ref{tab:esnlive_det}). For all models, we observe significantly lower accuracies and explanation scores for the neutral class. There are two potential explanations for this. First, the neutral class can be harder to identify than the other classes. In image-hypothesis pairs, entailment and contradiction examples can sometimes be reduced to more straightforward yes/no classifications of image descriptions. For the neutral class, there always needs to be some reasoning involved to decide whether the image lacks sufficient evidence for both entailment and contradiction. Second, despite our best efforts to clean the dataset, the neutral class is still noisier and less represented in the training data.
\begin{table*}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCCCCCCC}
\toprule
& \multicolumn{3}{c}{Entailment} & \multicolumn{3}{c}{Neutral} & \multicolumn{3}{c}{Contradiction} \\
\cmidrule(r){2-4} \cmidrule(r){5-7} \cmidrule(r){8-10}
& Acc. & MET. & BERTS. & Acc. & MET. & BERTS. & Acc. & MET. & BERTS. \\
\midrule
PJ-X & 74.4 & 14.0 & 79.2 & 61.5 & 12.4 & 77.4 & 72.8 & 15.9 & 79.3 \\
FME & 77.3 & 15.1 & 79.8 & 67.3 & 13.5 & 77.9 & 77.2 & 16.3 & 79.8 \\
RVT & 74.6 & 17.9 & 81.3 & 63.3 & \textbf{19.0} & 80.7 & 79.4 & 19.4 & 81.4 \\
e-UG & \textbf{80.3} & \textbf{19.6} & \textbf{81.6} & \textbf{71.7} & 18.5 & \textbf{80.9} & \textbf{87.5} & \textbf{20.9} & \textbf{82.6} \\
\bottomrule
\end{tabulary}
\caption{Class-wise results on e-SNLI-VE for the different models. NLG metrics are only shown for METEOR and BERTScore, as those correlate most with human judgement.}%
\label{tab:esnlive_det}
\end{center}
\end{table*}
\subsection{Statistical Analysis of the $S_E$ Score}
To ensure high quality of our results, we had a number of in-browser checks that prevented the annotators from submitting the questionnaire when their evaluations seemed of poor quality. Checks include making sure that they cannot simultaneously say that an explanation is insufficient (they select the \emph{No} or \emph{Weak No} option described in Section \ref{sec:evil}) and has no shortcomings, or that it is optimal (they select the \emph{Yes} option) but has shortcomings. We also experimented with further post-hoc cleaning measures (such as verifying that annotators evaluated the ground-truth favorably or did not always choose similar answers), but they had a negligible impact and were thus disregarded.
Our MTurk sample consists of 19,194 evaluations, half of which are for ground-truth explanations, and the other half for model generated explanations. We obtain evaluations for 264 to 299 unique question-image pairs for every model-dataset combination, leaving us with explanations missing for only 3.3\% of questions. There are 82.1 evaluations per annotator on average ($SD=170.1$), ranging from 16 to 1,244 with a median of 34. After pooling annotations of the same explanation, 6,494 annotations remain (887 to 897 for the evaluations generated by each model).
In Figure~\ref{fig:appendix_he_evil_bar}, we add standard errors to the numerical $S_E$ scores given in Table~\ref{tab:he_score}. This figure confirms that e-UG uniformly outperforms the other models.
To further investigate the robustness of the e-ViL benchmark, we perform a statistical analysis of our $S_E$ scores using a Linear Mixed Model (LMM) that predicts $S_E$ from the model-dataset pairs, with the model as fixed factor and the dataset as random effect. A Likelihood-Ratio-Test shows that the fixed effect significantly predicts the evaluations, with $\chi^2(3)=37.462$, $p<0.001$. To gain better insight, we performed post-hoc pairwise contrasts, which indicate that e-UG significantly outperforms the remaining models, with $p<0.001$. Further, RVT outperforms PJ-X significantly, with $p=0.007$. The significance level was adjusted for a family-wise type I error rate of $\alpha=0.05$ using Bonferroni-Holm adjustments.
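In mixed-model notation, the fitted specification corresponds roughly to (our shorthand, not the exact software call):
\begin{displaymath}
S_E \sim \text{model} + (1 \mid \text{dataset}),
\end{displaymath}
i.e., a fixed effect for the VL-NLE model and a random intercept per dataset.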
\begin{figure*}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figures/he_evil_scores.pdf}
\caption{Human evaluation framework: e-ViL scores $S_E$. This plot shows the main e-ViL scores (based on numerical average) for the different model-dataset pairs. Error bars show $\pm 2 \text{SD} / \sqrt{n}$ for each group.}%
\label{fig:appendix_he_evil_bar}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.7\textwidth]{figures/he_comparative_score.pdf}
\caption{Human evaluation framework: Comparative scores. This figure displays the comparative scores (with respect to the ground-truth) of the explanations for the different model-dataset pairs. Error bars show $\pm 2 \text{SD} / \sqrt{n}$ for each group.}
\label{fig:appendix_he_compare_gt}
\end{figure*}
\subsection{Alternative $S_E$ Scores} \label{sec:evil_alts}
The nature of our human evaluation questionnaire allows for multiple ways to compute the e-ViL score $S_E$ of the generated explanations. The key differences between the scoring methods lie in how the up-to-three evaluations we have for each explanation are pooled, and in how the overall numerical value is computed. In the main paper, we compute $S_E$ by mapping the four evaluation choices to numerical values, taking the average for every explanation in the sample, and then taking the sample average to obtain our $S_E$ score. Below, we propose two alternative ways to compute $S_E$. While they lead to different values, the performance differences between our models remain relatively similar.
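As a sketch of the main-paper computation, assume for illustration that the four options are mapped to equally spaced scores, e.g., \emph{No} $\mapsto 0$, \emph{Weak No} $\mapsto 1/3$, \emph{Weak Yes} $\mapsto 2/3$, \emph{Yes} $\mapsto 1$ (the exact mapping is defined in the main paper). The score is then
\begin{displaymath}
S_E = \frac{1}{|\mathcal{S}|} \sum_{e \in \mathcal{S}} \frac{1}{n_e} \sum_{i=1}^{n_e} s_i(e),
\end{displaymath}
where $\mathcal{S}$ is the sample of generated explanations, $n_e \leq 3$ is the number of evaluations collected for explanation $e$, and $s_i(e)$ is the numerical value of the $i$-th evaluation.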
\subsubsection{Median Pooling}
In median pooling, we obtain the score for each explanation by taking the median of its up-to-three ordinal evaluations (as opposed to taking a numerical average). We always interpolate by rounding down, meaning that the median of (\emph{Yes}, \emph{Weak Yes}) $\mapsto$ \emph{Weak Yes} and (\emph{Yes}, \emph{No}) $\mapsto$ \emph{Weak No}. This allows us to plot the distribution of \emph{No}, \emph{Weak No}, \emph{Weak Yes}, and \emph{Yes} for every model-dataset pair, as displayed in Figure \ref{fig:appendix_pooled_barplot}.
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/he_evil_ordinal.pdf}
\caption{Human evaluation framework: Ordinal representation of the evaluations. Median responses for each question-image pair given by participants to the evaluation question ``Given the image and the question/hypothesis, does the explanation justify the answer?''.}%
\label{fig:appendix_pooled_barplot}
\end{figure*}
We observe that e-UG performs better across all datasets, with RVT following in second place for the VCR and VQA-X datasets. The differences between PJ-X, FME, and RVT are relatively small.
We analyse our results using a Cumulative Link Mixed Model (CLMM) with a logit link and flexible thresholding. We predict annotator responses using the dataset as random effect and the VL-NLE model as fixed effect. We find that the model significantly influences ratings, as suggested by the Likelihood-Ratio-Test, $\chi^2(2)=42.4$, $p < 0.001$, when comparing the full model to a nested statistical model that is based only on the dataset as predictor. The model predictor is dummy-coded with e-UG as the reference class, which enables us to interpret the model's coefficients in the statistical test as pairwise contrasts of all other models against e-UG. All coefficients have $p$-values $p<0.001$, indicating that e-UG significantly outperforms all other models.
\subsubsection{Comparative $S_E$ Score}
We also designed a comparative score, for which we do not map our questionnaire evaluation options (\emph{No}, \emph{Weak No}, \emph{Weak Yes}, and \emph{Yes}) to numerical values, but instead compare them to the evaluation of the ground-truth. For every image-question pair, the annotator has to evaluate both the ground-truth and the generated explanation, without knowing which is which. This enables us to see, for every generated explanation, whether it was deemed equally good, better, or worse than the ground-truth. This mimics the approach in \citet{park_multimodal_2018} and \citet{wu_faithful_2019}, where annotators were explicitly asked if the generated explanation was worse, equally good, or better than the ground-truth. An advantage of this method is that we can seamlessly account for how critical each annotator is. The disadvantage is that we do not get \emph{absolute} measurements of the quality of the explanations.
The generated explanation receives a score of 1 if it is as good as or better than the ground-truth, and 0 otherwise. We pool the comparative score via median pooling with rounding down.
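Formally (in our own notation), with $r(e)$ the pooled ordinal rating of a generated explanation $e$ and $r(e_{\mathrm{gt}})$ that of the corresponding ground-truth explanation:
% the cases environment below assumes amsmath is loaded
\begin{displaymath}
c(e) =
\begin{cases}
1 & \text{if } r(e) \geq r(e_{\mathrm{gt}}),\\
0 & \text{otherwise.}
\end{cases}
\end{displaymath}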
Figure \ref{fig:appendix_he_compare_gt} displays the comparative score. We observe that e-UG scores strongest across all datasets, while the other three models perform similarly, except on the VCR dataset, where PJ-X performs worse than the other models.
For our statistical analysis, we fit a generalized linear mixed model (GLMM) with a logit link on the full unpooled annotation set, predicting whether an explanation was rated positively (compared to the ground-truth), using the dataset and annotator as random effects and the VL-NLE model as fixed effect. The model parameter significantly predicts the evaluations, with $\chi^2(3)=67.366$, $p<0.001$. Post-hoc tests (Tukey contrasts with Bonferroni-Holm adjusted significance) show that e-UG outperforms all other models, with $p<0.001$, and that RVT outperforms PJ-X, with $p=0.011$. All other pairwise comparisons were not significant. Extending the model to include the ground-truth explanations as a model category also demonstrates that all model-generated explanations were evaluated significantly worse than the ground-truth explanations. We conclude that e-UG outperforms all other models, whereas the performance differences between the remaining models are rather small, replicating our findings from the alternative analyses.
\section{Introduction} \label{sec:intro}
\input{sections/intro.tex}
\section{Related Work} \label{sec:rw}
\input{sections/relwork.tex}
\section{The e-SNLI-VE Dataset} \label{sec:esnlive}
\input{sections/esnlive.tex}
\section{The e-ViL Benchmark} \label{sec:evil}
\input{sections/benchmark}
\section{Experimental Evaluation} \label{sec:exp}
\input{sections/experiments}
\section{Summary and Outlook}
\input{sections/conclusion}
{\small
\bibliographystyle{plainnat}
\section{Introduction}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the ICCV 2021 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for ICCV 2021.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\iccvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
tech reports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a tech report for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the tech report as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the ICCV70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should be included for review submissions but not for the
final paper. Review submissions papers should have page numbers in the
footer with numbers centered and .75 inches (1.905 cm) from the bottom
of the page and start on the first page with the number 1.
Page numbers will be added by the publisher to all camera-ready papers
prior to including them in the proceedings and before submitting the
papers to IEEE Xplore. As such, your camera-ready submission should
not include any page numbers. Page numbers should automatically be
removed by uncommenting (if it's not already) the line
\begin{verbatim}
%\ificcvfinal\pagestyle{empty}\fi
\end{verbatim}
near the beginning of the .tex file.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centered.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the ICCV 2021 web page for a discussion
of the use of color in your document.
\section{Final copy}
You must include your signed IEEE copyright release form when you submit
your finished paper. We MUST have this form before your paper can be
published in the proceedings.
{\small
\bibliographystyle{ieee_fullname}
\subsection{Task Formulation}
We denote a module that solves a VL task as $M_T$, which takes as input visual information $V$ and textual information~$L$.
Its objective is to complete a task $T$ where the outcome is $a$, i.e., $M_T(V,L)=a$. An example of a VL task is VQA, where $V$ is an image, $L$ is a question, and $T$ is the task of providing the answer~$a$ to that question. We extend this by an additional task $E$, which requires an NLE $e$ justifying how $V$ and $L$ lead to~$a$, solved by the module $M_E(V,L)=e$. The final model $M$ then consists of $M_T$ and $M_E$. Thus, $M = (M_T, M_E)$ and $M(V,L)=(a,e)$.
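For concreteness, a minimal sketch of this interface follows; the function and variable names are illustrative, not taken from any released codebase.
{\small\begin{verbatim}
# Minimal sketch of the VL-NLE interface; M_T and M_E are
# placeholders for trained modules (names are illustrative).
from typing import Callable, Tuple

def make_vl_nle_model(M_T: Callable, M_E: Callable):
    def M(V, L) -> Tuple[str, str]:
        a = M_T(V, L)  # solve the VL task T (e.g., VQA)
        e = M_E(V, L)  # generate the NLE justifying a
        return a, e
    return M
\end{verbatim}}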
\subsection{Datasets}
Our benchmark uses the following three datasets, which vary in size and domain. Examples are shown in Figure \ref{fig:dset_examples} in the Appendix.
\vspace{-2.5ex}
\paragraph{e-SNLI-VE.}
Our proposed e-SNLI-VE dataset
has been described in Section~\ref{sec:esnlive}.
\vspace{-2.5ex}
\paragraph{VQA-X.}
VQA-X \cite{park_multimodal_2018} contains human-written explanations for a subset of questions from the VQA v2 dataset \cite{goyal2017making}. The image-question pairs are split into train, dev, and test with 29.5k, 1.5k, and 2k instances, respectively.
The task $T$ is formulated as a multi-label classification task of 3,129 different classes. One question can have multiple possible answers as each example has been annotated by multiple people.
\vspace{-2.5ex}
\paragraph{VCR.}
Visual Commonsense Reasoning (VCR) is a VL dataset that asks multiple-choice (single answer) questions about images from movies~\cite{zellers_recognition_2019}. In addition to four answer options, it also provides four NLE options, out of which one is correct. For the purpose of our proposed VL-NLE task, we reformulate it as an explanation generation task. As the test set for VCR is not publicly available, we split the original train set into a train and dev set, and use the original validation set as the test set. The splits are of size 191.6k, 21.3k, and 26.5k, respectively.
\vspace{-2.5ex}
\paragraph{Human Judgment of Explanations.}
In our benchmark experiments (Section~\ref{sec:exp}), human annotators evaluate the ground-truth explanations of all three datasets. This enables us to get insights into the quality of the explanations of each dataset. For each explanation, participants responded to the question ``Given the image and the question/hypothesis, does the explanation justify the answer?" with \textit{no}, \textit{weak no}, \textit{weak yes}, or \textit{yes}. The results in Table~\ref{tab:dataQual} show that e-SNLI-VE comes close to the manually annotated datasets VCR and VQA-X (82.8\% of explanations rated \textit{yes} or \textit{weak yes}, vs.\ 87.9\% and 91.4\%, respectively).
\begin{table}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCC}
\toprule
& No & Weak No & Weak Yes & Yes \\
\midrule
e-SNLI-VE & 10.3\% & 6.9\% & 27.7\% & 55.1\% \\
VQA-X & 4.1\% & 4.5\% & 25.1\% & 66.3\% \\
VCR & 6.9\% & 5.2\% & 36.6\% & 51.3\% \\
\bottomrule
\end{tabulary}
\caption{Human evaluation of the ground-truth explanations for the three datasets used in e-ViL. The question asked was: ``Given the image and the question/hypothesis, does the explanation justify the answer?". For each dataset, we have a sample size of 300 distinct explanations and each explanation was evaluated by 12 different annotators.
}%
\label{tab:dataQual}
\end{center}
\vspace{-6ex}
\end{table}
\subsection{Evaluation} \label{sec:tf}
\paragraph{Evaluation Scores.}
We define separate evaluation scores $S_T$, $S_E$, and $S_O$ for $M_T$, $M_E$, and $M$, respectively. $S_T$ is the metric that is defined by the original VL task $T$, e.g., label accuracy for e-SNLI-VE and VCR, and VQA accuracy for VQA-X.
We define $S_E$ as the average explanation score of the examples for which the answer $a$ was predicted correctly. The explanation score can be any custom human or automatic metric. The metric used in e-ViL is outlined in the next paragraph.
An explanation $e$ is expected to be false
when the answer $a$ is predicted incorrectly (as it is expected to justify a wrong answer) and is, therefore, not considered in the computation of $S_E$. Finally, we want $S_O$ to summarize the performance of a model on both tasks $T$ and $E$, to give us the overall performance of a VL-NLE model $M$. We define $S_O = S_T \times S_E$, which equates to the average of the scores of all explanations, but where we set the score of an explanation to~$0$ if its associated answer was predicted incorrectly.
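To make the identity $S_O = S_T \times S_E$ explicit, the following minimal sketch (a hypothetical helper, not part of the benchmark code) computes all three scores from per-example data.
{\small\begin{verbatim}
# Minimal sketch of the e-ViL scores (hypothetical helper).
def evil_scores(correct, expl_score):
    # correct: list of booleans (answer right/wrong);
    # expl_score: per-example explanation scores in [0, 1].
    n = len(correct)
    S_T = sum(correct) / n
    # S_E averages only over correctly answered examples;
    # max(1, .) guards the degenerate all-wrong case.
    n_correct = sum(correct)
    S_E = (sum(s for c, s in zip(correct, expl_score) if c)
           / max(1, n_correct))
    # S_O = S_T * S_E equals the mean over all examples with
    # the explanation score zeroed on wrong answers.
    S_O = S_T * S_E
    return S_T, S_E, S_O
\end{verbatim}}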
As mentioned before, existing automated NLG metrics have strong limitations for evaluating NLEs. Therefore, to compute $S_E$, we developed the human evaluation framework outlined below.
\vspace{-2ex}
\paragraph{Human Evaluation Framework.}
We collect human annotations on MTurk, where we ask the annotators to proceed in two steps. First, they have to solve the task $T$, i.e., provide the answer $a$ to the question. This helps the annotators to get into the right mindset to evaluate explanations and enables us to do in-browser quality checks (since we know the answers). We disregard their annotation if they answered the VL task $T$ incorrectly.
For each explanation, we ask them a simple evaluation question: ``Given the image and the question/hypothesis, does the explanation justify the answer?".
We follow \citet{marasovic_natural_2020} in giving the following four response choices: \emph{yes}, \emph{weak yes}, \emph{weak no}, and \emph{no}.
\citet{marasovic_natural_2020} later merge \emph{weak yes} with \emph{yes} and \emph{weak no} with \emph{no}, which leaves them with only a binary evaluation for every explanation.
We argue that keeping the four response options will allow for a more fine-grained evaluation. We map \emph{yes}, \emph{weak yes}, \emph{weak no}, and \emph{no} to the numeric scores of $1$, $2/3$, $1/3$, and $0$, respectively.
We also ask annotators to select the main shortcomings (if any) of the explanations. We observe three main limitations of explanations. First, they can \textit{insufficiently justify the answer}. For example, the sentence ``because it's cloudy" does not sufficiently justify the answer ``the sea is not calm". Second, an explanation can \textit{incorrectly describe the image}, e.g., if a model learned generic explanations that are not anchored in the image. For example, the explanation ``it's a watermelon because it's a big round fruit" is generally a good explanation for the answer ``it's a watermelon", but the image could actually display cut up chunks of the fruit. Lastly, the sentences can be \textit{nonsensical}, such as ``a man cannot be a man".
For each model-dataset pair, we select a random sample of 300 datapoints where the model answered the question correctly.
Every sample contains only unique images. For VCR, all movies are represented in the samples. Note that it is not possible to evaluate all models on exactly the same instances, as they do not all answer the same questions correctly. Taking a subset of examples where \emph{all} models answered correctly is disadvantageous for two reasons. First, this makes the benchmark less re-usable, as future methods might not answer the same questions correctly. Second, this would bias the dataset towards the questions that the weakest model answered correctly. However, in order to still maximize the overlap between the samples, we shuffled all the instances in the test sets randomly and then, for each model, took the first 300 instances on which the answer was correct.
We propose three measures to further ensure robustness and re-usability of the framework. In order to account for annotator subjectivity, we evaluate every instance by three different annotators. The final score per explanation is given by the average of all evaluations. In addition, we evaluate one model at a time to avoid potential anchoring effects between models (e.g., the annotator evaluates one model more favorably because they are influenced by poor explanations from a different model).
To further implicitly induce a uniform anchoring effect, the annotators have to evaluate both the ground-truth explanation (which is invariant to the model) and the explanation generated by a model for every image-question pair. They do not know which is which and are not asked to compare them. This implicitly ensures that all evaluations have the same anchor (i.e., the ground-truth) and it allows us to compute $S_E$ in different ways, as outlined in Appendix \ref{sec:evil_alts}.
Finally, all our annotators had to have a 98\% prior acceptance rate on the MTurk platform.
More details and screenshots of our MTurk evaluation can be found in the Appendix \ref{app:results}. For re-usability, we publicly release the questionnaires used in our benchmark\footnote{\url{https://github.com/maximek3/e-ViL}}.
\begin{figure*}[ht]
\begin{subfigure}[h]{0.4\linewidth}
\centering
\includegraphics[height=6cm]{figures/arch.png}%
\caption{High-level structure of VL models.}
\end{subfigure}
\hfill
\begin{subfigure}[h]{0.6\linewidth}
\centering
\includegraphics[height=6cm]{figures/models.png}%
\caption{The components of the models that we evaluate.}
\end{subfigure}%
\caption{High-level architectures of the models that are included in our benchmark.}%
\label{fig:comArch}
\vspace{-2ex}
\end{figure*}
\subsection{Correcting SNLI-VE} \label{d:fn}
In SNLI-VE \cite{xie_visual_2019}, an image and a textual hypothesis are given, and the task is to classify the relation between the image-premise and the textual hypothesis. The possible labels are \emph{entailment} (if the hypothesis is true, given the image), \emph{contradiction} (the hypothesis is false, given the image), or \emph{neutral} (if there is not enough evidence to conclude whether the hypothesis is true or false). SNLI-VE builds on top of the textual entailment dataset SNLI \cite{bowman2015large} by replacing textual premises with Flickr30k images \cite{young2014image}. This is possible because the textual premises in SNLI are caption sentences of those images. However, this replacement led to labeling errors, as an image typically contains more information than a single caption describing it. Especially for the neutral class, a caption may not have enough evidence to suggest entailment or contradiction, but the corresponding image does (see Figure \ref{fig:fn}). On a manually evaluated subset of 535 samples, we found a 38.6\% error rate among the neutral labels. This subset will be used below to evaluate the effectiveness of our filters. Error rates for entailment and contradiction are reported to be under~1\%~\cite{xie_visual_2019}, hence we focus only on correcting the neutral instances.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/fn_example.png}
\end{center}
\caption{The original label of the textual premise-hypothesis pair in SNLI is neutral. However, the image contains more information than the textual premise and reveals that they are outside and not in the church. By looking at other captions from Flickr30k describing the same image (\#2 and \#4), we can determine that the neutral label is false.}
\label{fig:fn}
\vspace{-2ex}
\end{figure}
In the validation and test sets, we relabeled the neutral examples using Amazon Mechanical Turk (MTurk). To ensure high-quality annotations, we used a series of quality control measures, such as in-browser checks, inserting trusted examples, and collecting three annotations per instance. In total, 39\% of the neutral labels were changed to entailment or contradiction. The label distribution shifted from uniform to Ent/Neut/Cont of 39\%/20\%/41\% and 39\%/21\%/40\% for the validation and test sets, respectively.
For the training set, we propose an automatic way to remove false neutrals. As previously stated, the main limitation of replacing a caption with the image that it describes is that the image generally contains more information than the caption. We addressed this by leveraging all the five captions attached to each image in the Flickr30k dataset, which may be complementary and contain clues that the neutral label is false. For every image-hypothesis pair $i$, we ran a natural language inference model $m_{\mathrm{nli}}$ on each caption-hypothesis pair $p_{i,c}$, where $c$ is one of the captions. If the original label of image-hypothesis pair $i$ is neutral, but $\sum_{c} m_{\mathrm{nli}}(p_{i,c})$ indicates with high confidence that the label is not neutral, we deemed the label incorrect and removed the instance from the dataset. An example is shown in Figure \ref{fig:fn}. For $m_{\mathrm{nli}}$, we used RoBERTa-large \cite{liu2019roberta} trained on the MNLI dataset \cite{williams2018broad}. Instances were removed if $\sum_{c} m_{\mathrm{nli}}(p_{i,c})$ exceeded $2.0$ for the entailment or contradiction class. On our 535-samples subset, this filter decreased the error of neutral labels from 39\% to 24\%. When validated against the relabeling on the validation set, the error decreased from 39\% to 30\%.
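A sketch of how this filter could be implemented with an off-the-shelf MNLI classifier is given below; the checkpoint name, the per-class reading of the threshold, and the pipeline API details are our assumptions and may vary across library versions.
{\small\begin{verbatim}
# Sketch of the neutral-label filter; model checkpoint,
# label names, and per-class thresholding are assumptions.
from transformers import pipeline

nli = pipeline("text-classification",
               model="roberta-large-mnli", top_k=None)

def keep_neutral(captions, hypothesis, threshold=2.0):
    ent = contr = 0.0
    for c in captions:  # the five Flickr30k captions
        scores = {d["label"]: d["score"]
                  for d in nli({"text": c,
                                "text_pair": hypothesis})}
        ent += scores["ENTAILMENT"]
        contr += scores["CONTRADICTION"]
    # Drop the instance if the summed confidence for a
    # non-neutral class exceeds the threshold.
    return ent <= threshold and contr <= threshold
\end{verbatim}}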
\subsection{Adding Explanations to SNLI-VE}
To create e-SNLI-VE, we source explanations from e-SNLI \cite{camburu_e-snli_2018}, which extends SNLI with human-written NLEs. However, the explanations in e-SNLI are tailored to the textual premise-hypothesis pairs and are therefore not always well-suited for the image-hypothesis pair. In our 535-samples subset, we found that 36\%, 22\%, and 42\% of explanations were of low (i.e., wrong), medium (i.e., not wrong, but could be more relevant), and high quality (i.e., correct and relevant), respectively. We propose several steps to detect and remove explanations of low quality.
\vspace{-2.5ex}
\paragraph{Re-annotation.} \label{d:reanno}
First, we replace the explanations for the neutral pairs in the validation and test sets with new ones, collected via MTurk at the same time as we collected new labels for these subsets. In order to submit the annotation of an image-sentence pair, three steps must be completed: annotators must choose a label, highlight words in the hypothesis, and use at least half of the highlighted words to write an explanation for their decision. On a random sample of 100 explanations, we found that 83\% of the newly annotated explanations are of high quality.
\vspace{-2.5ex}
\paragraph{Keyword Filter.} \label{d:kw}
Next, we use keyword filtering to detect explanations that make reference to a linguistic feature of the textual premise. The keywords, which we manually defined, are ``synonym", ``mention", ``rephrasing", ``sentence", ``way to say" and ``another word for". The keyword filter removed 4.6\% of all instances, and our 535-samples subset suggests that \emph{all} filtered explanations were indeed of low quality.
\vspace{-2.5ex}
\paragraph{Similarity Filter.} \label{d:sim}
We noticed that the share of low-quality explanations is highest for entailment examples. This happens frequently when the textual premise and hypothesis are almost identical, as then the explanation often just repeats both statements. To overcome this, we removed all examples where the ROUGE-1 score (a measure of sentence similarity \cite{lin2004rouge}) between the textual premise and hypothesis was above 0.57. This reduced the share of low-quality explanations for entailment by 4.2\%.
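For illustration, a simple whitespace-tokenized ROUGE-1 F1 and the resulting filter are sketched below; the tokenization is a simplification of the metric used.
{\small\begin{verbatim}
# Sketch of the similarity filter with a simple unigram
# ROUGE-1 F1 (whitespace tokenization is a simplification).
from collections import Counter

def rouge1_f1(premise, hypothesis):
    p = Counter(premise.lower().split())
    h = Counter(hypothesis.lower().split())
    overlap = sum((p & h).values())
    if overlap == 0:
        return 0.0
    prec = overlap / sum(h.values())
    rec = overlap / sum(p.values())
    return 2 * prec * rec / (prec + rec)

def keep_entailment(premise, hypothesis, cutoff=0.57):
    # Keep the example only if premise and hypothesis are
    # not near-duplicates.
    return rouge1_f1(premise, hypothesis) <= cutoff
\end{verbatim}}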
\vspace{-2.5ex}
\paragraph{Uncertainty Filter.} \label{d:unc}
Lastly, we found that image-hypothesis pairs with high uncertainty are correlated with low-quality explanations for contradictions. We define uncertainty as the dispersion of the scores from $m_{\mathrm{nli}}(p_{i,c})$ over the five captions that describe the image. $m_{\mathrm{nli}}$ is the same RoBERTa-large model that was described above.
This filter reduced the share of low-quality explanations for contradiction examples by 5.1\%.
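The exact dispersion measure is left open above; a minimal sketch, under the assumption that it is the standard deviation of the per-caption NLI scores, would be:
{\small\begin{verbatim}
# Sketch of the uncertainty measure; using the standard
# deviation of per-caption NLI scores is an assumption.
import statistics

def uncertainty(per_caption_scores):
    # per_caption_scores: m_nli score of the relevant class
    # for each of the five captions describing the image.
    return statistics.pstdev(per_caption_scores)
\end{verbatim}}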
The final e-SNLI-VE dataset statistics are displayed in Table~\ref{tab:esnlive}. A human evaluation of the ground-truth explanations in the test set can be found in Table \ref{tab:dataQual}. The results indicate that the quality of the e-SNLI-VE ground-truth explanations is not far off that of VQA-X and VCR. Qualitative examples and a more detailed rundown of our filtering methods are in Appendix \ref{app:dset}.
\subsection{Models}\label{models}
Existing VL-NLE models follow a common high-level structure (Figure \ref{fig:comArch}). First, a VL model learns a joint representation of the image and language inputs and predicts the answer. The models in this work then condition their explanation on different combinations of the question, image, their joint representation, and the answer.
The four models that we evaluate in this work are described below and are illustrated in Figure \ref{fig:comArch}.
\vspace{-2ex}
\paragraph{PJ-X.}
The PJ-X model~\cite{park_multimodal_2018} provides multimodal explanations for VQA tasks and was originally evaluated on VQA-X. Its $M_T$ module consists of a simplified MCB network~\cite{fukui2016multimodal} that was pre-trained on VQA v2.
We implemented PJ-X in PyTorch following closely the authors' implementation in Caffe\footnote{\url{https://github.com/Seth-Park/MultimodalExplanations}}.
To address numerical optimization problems,
we replaced the L2 normalization in the decoder with LayerNorm~\cite{ba2016layer}, as the original normalization zeroed gradients for earlier model parts.
Additionally, we added gradient clipping of 0.1 to prevent excessively large gradients.
To adapt PJ-X for multiple-choice question-answering in VCR, we follow the approach in the original VCR paper~\cite{zellers_recognition_2019}.
\vspace{-2ex}
\paragraph{FME.}
The model introduced by~\citet{wu_faithful_2019}, which we will refer to as FME (Faithful Multimodal Explanations), puts emphasis on producing faithful explanations.
In particular, it aims to ensure that the explanation utilizes the same visual features that were used to produce the answer. Their code is not publicly available and we, therefore, re-implemented their base model according to the instructions in the paper. We chose the base model, as it was trained on the entire VQA-X 29.5K train split and the modifications of the other variations were difficult to re-implement from the descriptions in the paper.
Our re-implementation of FME is based on a frozen modified UpDown~\cite{anderson2018bottom} VQAv2 pre-trained VL-model.
Similarly to PJ-X, we also train FME with gradient clipping of 0.1.
To adapt FME for multiple-choice QA in VCR, we follow the approach in the original VCR paper \cite{zellers_recognition_2019}.
\vspace{-2ex}
\paragraph{RVT.}
The Rationale-VT Transformer (RVT) model~\cite{marasovic_natural_2020} uses various vision algorithms to extract information from an image and then feeds this information, the ground-truth answer, and the question to the pre-trained GPT-2 language model \cite{radford2019language}, which yields an explanation.
As they omit the question answering part, we extend their model by an answer prediction module to allow for a fair comparison and to get a sense of the overall performance. We use their overall most effective visual input\footnote{It obtained the highest visual plausibility score averaged across all datasets.}, given by the tags of the objects detected in the image. As task model $M_T$, we use BERT \cite{devlin_bert_2019}, which takes as input the object tags and the question, and predicts the answer.
\vspace{-2ex}
\paragraph{e-UG.}
\citet{marasovic_natural_2020} obtain the best explanation accuracy when using object labels as the sole image information. We address this limitation by proposing e-UG, a model that enables stronger conditioning on the image by combining GPT-2 with UNITER~\cite{chen_uniter_2020}, a powerful transformer-based VL model.
Similar to BERT, UNITER leverages self-attention mechanisms to learn contextualized embeddings of image-text pairs.
The outputs of UNITER are contextualized embeddings of the word tokens and image regions in the image-text pair. Words are embedded by tokenizing them into WordPieces and adding their position embedding. Images are embedded by extracting visual features of regions with Faster R-CNN \cite{ren2015faster} and encoding their location features. UNITER achieves SOTA on many downstream tasks when fine-tuned on them. For e-UG, we leverage these contextualized embeddings to condition GPT-2 on an efficient representation of the image and question. The embeddings of the image regions and question words are simply prepended to the textual question and predicted answer, and then fed to GPT-2. GPT-2 is a decoder-only architecture that is pre-trained on conventional language modeling and therefore well-suited for language generation~\cite{radford2019language}. We follow \citet{marasovic_natural_2020} and do greedy decoding during inference.
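A minimal sketch of this conditioning mechanism follows; the projection layer, hidden sizes, and the use of \texttt{inputs\_embeds} are our assumptions about one plausible implementation, not a transcription of the e-UG code.
{\small\begin{verbatim}
# Sketch: prepend UNITER's contextualized embeddings to
# GPT-2's input embeddings (dims/projection are assumptions).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tok = GPT2Tokenizer.from_pretrained("gpt2")
proj = torch.nn.Linear(768, gpt2.config.n_embd)

def explain_logits(uniter_embs, question, answer):
    # uniter_embs: (seq_len, 768) contextualized region and
    # word embeddings from UNITER for this image-text pair.
    prefix = proj(uniter_embs).unsqueeze(0)
    ids = tok(question + " " + answer,
              return_tensors="pt").input_ids
    text_embs = gpt2.transformer.wte(ids)
    inputs = torch.cat([prefix, text_embs], dim=1)
    out = gpt2(inputs_embeds=inputs)
    return out.logits  # decode greedily from here
\end{verbatim}}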
\subsection{Training}
\begin{table*}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCCCCCCCC}
\toprule
& Overall & \multicolumn{3}{c}{VQA-X} & \multicolumn{3}{c}{e-SNLI-VE} & \multicolumn{3}{c}{VCR} \\
\cmidrule(r){2-2} \cmidrule(r){3-5} \cmidrule(r){6-8} \cmidrule(r){9-11}
& $S_E$ & $S_O$ & $S_T$ & $S_E$ & $S_O$ & $S_T$ & $S_E$ & $S_O$ & $S_T$ & $S_E$ \\
\midrule
PJ-X & 59.2 & 49.9 & 76.4 & 65.4 & 41.2 & 69.2 & 59.6 & 20.6 & 39.0 & 52.7 \\
FME & 60.1 & 47.7 & 75.5 & 63.2 & 43.1 & 73.7 & 58.5 & 28.6 & 48.9 & 58.5 \\
RVT & 62.8 & 46.0 & 68.6 & 67.1 & 42.8 & 72.0 & 59.4 & 36.4 & 59.0 & 61.8 \\
e-UG & \textbf{68.5} & \textbf{57.6} & \textbf{80.5} & \textbf{71.5} & \textbf{54.8} & \textbf{79.5} & \textbf{68.9} & \textbf{45.5} & \textbf{69.8} & \textbf{65.1} \\
GT & 79.3 & \multicolumn{1}{l}{--} & \multicolumn{1}{l}{--} & 84.5 & -- & -- & 76.2 & -- & -- & 77.3 \\
\bottomrule
\end{tabulary}
\caption{e-ViL benchmark scores. $S_O$, $S_T$, and $S_E$ are defined in Section~\ref{sec:tf}. GT denotes the ground-truth explanations in each dataset. The best results are in bold.}%
\label{tab:he_score}
\end{center}
\vspace{-2ex}
\end{table*}
All models are trained separately on each dataset.
To ensure comparability, image features for PJ-X and FME are obtained from the same ResNet-101~\cite{he2016deep} pre-trained on ImageNet, which yields a 2048-dimensional feature representation for an image.
To account for the small size of VQA-X, the VQA $M_T$ models were pre-trained on VQA v2 for VQA-X, and trained from scratch for the other two datasets. For UNITER, we follow the pre-training procedures used in the original paper~\cite{chen_uniter_2020}. The object tags in RVT are obtained from a Faster R-CNN that was trained on ImageNet and COCO. For GPT-2, we load the pre-trained weights of the original GPT-2 with 117M parameters~\cite{radford2019language}.
\vspace{-2ex}
\paragraph{Joint or Separate Training.}
All the VL-NLE models $M$ in this work consist of $M_T$ and $M_E$ modules, which can either be trained jointly or separately.
For the RVT model, training jointly would make no difference, as the explanation generation is not conditioned on a learnable representation in $M_T$ (but instead on the fixed object tags for each image). For all other models, training jointly can be advantageous, because we backpropagate the explanation loss into the task model $M_T$, but this also comes at the risk of negatively affecting the optimization \cite{caruana1997multitask}.
The authors of the PJ-X model mention that they tried both training approaches, but do not specify which one worked best. \citet{wu_faithful_2019} only trained separately. It should be noted that PJ-X and FME were both solely run on VQA-X, for which a much larger dataset (VQA v2) exists for task $T$. They pre-train $M_T$ separately on this dataset, and it could be argued that, when training jointly, $M_T$ runs the risk of becoming worse by overfitting on the smaller VQA-X dataset. For e-SNLI-VE and VCR, no such pre-training dataset exists. In this work, we train both jointly and separately for every model.
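In the joint setting, the two losses are simply combined so that the explanation loss is backpropagated into $M_T$; a generic sketch is shown below, where the weighting coefficient is our assumption, not a reported value.
{\small\begin{verbatim}
# Generic sketch of the joint objective; the weighting
# coefficient lam is an assumption, not a reported value.
import torch

def joint_loss(loss_T: torch.Tensor, loss_E: torch.Tensor,
               lam: float = 1.0) -> torch.Tensor:
    # Backpropagating loss_E through shared parameters is
    # what distinguishes joint from separate training.
    return loss_T + lam * loss_E
\end{verbatim}}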
\vspace{-2ex}
\paragraph{Hyperparameters.}
Choosing hyperparameters via human evaluation is prohibitively expensive. Instead, we defined a set of automatic NLG metrics that we used to approximate the selection of the best hyperparameters. We define the score of an explanation as the harmonic mean of the BERTScore F1~\cite{zhang2019bertscore} and $\mathrm{NGRAMScore}$, where $\mathrm{NGRAMScore}$ is the harmonic mean of the $n$-gram NLG metrics ROUGE-L~\cite{lin2004automatic}, SPICE~\cite{anderson2016spice}, CIDEr~\cite{vedantam2015cider}, and METEOR~\cite{banerjee2005meteor}. We pick the harmonic mean as it puts more emphasis on the weaker scores. Further details on the hyperparameters are given in Appendix \ref{app:hyp}.
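Concretely, the selection score is computed as follows (a direct transcription of the definition above, assuming all scores are strictly positive):
{\small\begin{verbatim}
# Model-selection score used during validation.
def harmonic_mean(xs):
    # assumes all scores are strictly positive
    return len(xs) / sum(1.0 / x for x in xs)

def selection_score(bertscore_f1, rouge_l, spice,
                    cider, meteor):
    ngram = harmonic_mean([rouge_l, spice, cider, meteor])
    return harmonic_mean([bertscore_f1, ngram])
\end{verbatim}}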
\subsection{Results}
In this section, we provide the results obtained by the different models for human evaluation and for automatic NLG metrics. We also provide a study on the correlation between different automatic NLG metrics and human evaluation scores. Lastly, we look at the effect that training with explanations has on the performance on task $T$. Alternative computations of the human evaluation score and a statistical analysis of the results are provided in Appendix \ref{app:results}.
\subsubsection{Human Evaluation}
The explanation scores $S_E$ obtained from the e-ViL human evaluation framework are displayed in Table~\ref{tab:he_score}. Our model e-UG outperforms existing methods on all datasets, with an average $S_E$ score 5.7 points higher than the second-best model, RVT. Despite leveraging little image information, RVT achieves higher scores than PJ-X and FME on average, reflecting the ability of GPT-2 to learn to generate convincing explanations, without much anchoring on the image. There is still a significant gap between $S_E$ scores of generated explanations and ground-truth (GT) explanations. For VQA-X, $S_E$ scores are higher for all models, indicating that the dataset is easier. In terms of the overall score $S_O$, the gap between e-UG and the rest increases even further, which is due to UNITER achieving higher performance on VL tasks than the $M_T$ modules of the other models. In Figure \ref{fig:test_ex} we show an example with the explanations generated by each model. In this example, e-UG is the only model that accurately describes the image and justifies the answer. Additional examples are given in Figure \ref{fig:gen_examples} in the Appendix.
As a second question, we ask the annotators to select the shortcomings (if any) of every explanation. Results for this are given in Table~\ref{tab:shortcoming}. The most frequent shortcoming is an insufficient justification of the answer. Next, explanations can be poor if they describe the image incorrectly. Lastly, around 10\% of explanations across all models are nonsensical (e.g., ``a woman is a woman"). All models struggle to a similar extent with producing explanations that sufficiently justify the answer. e-UG and PJ-X seem to do a better job at producing coherent sentences. In terms of the explanations accurately describing the image content, e-UG is significantly superior to the other models. This empirically confirms the effectiveness of our enhanced conditioning on the image. On a dataset level, we see that it is easiest for all models to provide explanations that make grammatical sense and justify the answer on VQA-X, which can be explained by the fact that the questions in VQA-X are easier and require less elaborate explanations than in the other datasets.
A statistical analysis of our findings can be found in Appendix \ref{app:results}.
\begin{figure}
\begin{center}
\includegraphics[width=1\linewidth]{figures/test_example.png}
\end{center}
\caption{Generated explanations for each model on an image-hypothesis pair in e-SNLI-VE.}%
\label{fig:test_ex}
\vspace{-2ex}
\end{figure}
\begin{table}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCC}
\toprule
Model & Untrue to Image & Lack of Justification & Nonsensical Sentence \\
\midrule
\mbox{PJ-X} & 25.0\% & 26.4\% & 8.9\% \\
RVT & 20.4\% & 24.2\% & 12.0\% \\
\mbox{FME} & 21.8\% & \textbf{23.1\%} & 13.7\% \\
\mbox{e-UG} & \textbf{15.9\%} & 25.0\% & \textbf{7.4\%} \\
\midrule
Dataset & & & \\
\midrule
\mbox{e-SNLI-VE} & 21.3\% & 28.7\% & 12.8\% \\
VCR & 21.0\% & 31.2\% & 11.7\% \\
\mbox{VQA-X} & 20.0\% & 15.4\% & 7.4\% \\
\bottomrule
\end{tabulary}
\caption{Main shortcomings of the generated explanations, by models and by datasets. Human judges could choose multiple shortcomings per explanation. The best model results are in bold.}%
\label{tab:shortcoming}
\end{center}
\vspace{-4ex}
\end{table}
\vspace{-2ex}
\subsubsection{Automatic NLG Metrics} \label{sec:autoNLG}
We report the automatic NLG scores in Table \ref{tab:autoNLG}. These are computed for all the explanations from the test sets where the predicted answer was correct. A first observation is that the human evaluation results are not always reflected by the automatic metrics. For example, on the VCR dataset, FME, and not e-UG, obtains the highest $S_E$ score when using automatic NLG metrics. Some tendencies are reflected nonetheless, such as the fact that e-UG is the best model overall and that e-UG consistently outperforms RVT (albeit by a small margin).
\vspace{-2ex}
\paragraph{Question-only GPT-2.}
In order to verify our intuition that the object labels used by RVT provide very little information about the image, we trained a GPT-2 model that conditions only on the question and answer, ignoring the image (called \emph{GPT-2 only} in Table \ref{tab:autoNLG}). Without having any image input, this model closely shadows the performance of RVT on most metrics. RVT is still slightly better in most cases, indicating that the object labels do provide some minor improvement. This suggests that RVT is not able to use visual information effectively and learns the explanations mostly based on spurious correlations and not based on the image.
\begin{table*}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCCCCCCCCCC}
\toprule
& \multicolumn{3}{c}{e-ViL Scores (auto)} & \multicolumn{8}{c}{$n$-gram Scores} & Learned Score\\
\cmidrule(r){2-4} \cmidrule(r){5-12} \cmidrule(r){13-13}
\emph{VQA-X} & $S_O$ & $S_T$ & $S_E$ & B1 & B2 & B3 & B4 & R-L & MET. & CIDEr & SPICE & BERTScore \\
\midrule
PJ-X~\cite{park_multimodal_2018} & 32.1 & 76.4 & 42.1 & \textbf{57.4} & 42.4 & 30.9 & 22.7 & \textbf{46.0} & 19.7 & 82.7 & 17.1 & 84.6 \\
FME~\cite{wu_faithful_2019} & 33.0 & 75.5 & 43.7 & 59.1 & \textbf{43.4} & \textbf{31.7} & 23.1 & 47.1 & 20.4 & \textbf{87.0} & 18.4 & 85.2 \\
RVT~\cite{marasovic_natural_2020} & 26.8 & 68.6 & 39.1 & 51.9 & 37.0 & 25.6 & 17.4 & 42.1 & 19.2 & 52.5 & 15.8 & 85.7 \\
GPT-2 only & N.A. & N.A. & 37.8 & 51.0 & 36.4 & 25.3 & 17.3 & 41.9 & 18.6 & 49.9 & 14.9 & 85.3 \\
e-UG & \textbf{36.5} & \textbf{80.5} & \textbf{45.4} & 57.3 & 42.7 & 31.4 & \textbf{23.2} & 45.7 & \textbf{22.1} & 74.1 & \textbf{20.1} & \textbf{87.0} \\
\midrule
\emph{VCR} & & & & & & & & & & & & \\
\midrule
PJ-X~\cite{park_multimodal_2018} & 7.2 & 39.0 & 18.4 & 21.8 & 11.0 & 5.9 & 3.4 & 20.5 & 16.4 & 19.0 & 4.5 & 78.4 \\
FME~\cite{wu_faithful_2019} & 17.0 & 48.9 & \textbf{34.8} & \textbf{23.0} & \textbf{12.5} & \textbf{7.2} & \textbf{4.4} & \textbf{22.7} & \textbf{17.3} & 27.7 & \textbf{24.2} & \textbf{79.4} \\
RVT~\cite{marasovic_natural_2020} & 15.5 & 59.0 & 26.3 & 18.0 & 10.2 & 6.0 & 3.8 & 21.9 & 11.2 & 30.1 & 11.7 & 78.9 \\
GPT-2 only & N.A & N.A & 26.3 & 18.0 & 10.2 & 6.0 & 3.8 & 22.0 & 11.2 & 30.6 & 11.6 & 78.9 \\
e-UG & \textbf{19.3} & \textbf{69.8} & 27.6 & 20.7 & 11.6 & 6.9 & 4.3 & 22.5 & 11.8 & \textbf{32.7} & 12.6 & 79.0 \\
\midrule
\emph{e-SNLI-VE} & & & & & & & & & & & & \\
\midrule
PJ-X~\cite{park_multimodal_2018} & 26.5 & 69.2 & 38.4 & 29.4 & 18.0 & 11.3 & 7.3 & \textbf{28.6} & 14.7 & 72.5 & 24.3 & 79.1 \\
FME~\cite{wu_faithful_2019} & 29.9 & 73.7 & 40.6 & \textbf{30.6} & 19.2 & 12.4 & 8.2 & 29.9 & 15.6 & 83.6 & 26.8 & 79.7 \\
RVT~\cite{marasovic_natural_2020} & 31.7 & 72.0 & 44.0 & 29.9 & 19.8 & 13.6 & \textbf{9.6} & 27.3 & 18.8 & 81.7 & 32.5 & 81.1 \\
GPT-2 only & N.A. & N.A. & 43.6 & 29.8 & 19.7 & 13.5 & 9.5 & 27.0 & 18.7 & 80.4 & 32.1 & 81.1 \\
e-UG & \textbf{36.0} & \textbf{79.5} & \textbf{45.3} & 30.1 & \textbf{19.9} & \textbf{13.7} & \textbf{9.6} & 27.8 & \textbf{19.6} & \textbf{85.9} & \textbf{34.5} & \textbf{81.7} \\
\bottomrule
\end{tabulary}
\caption{Automatic NLG metrics for all model-dataset pairs. The $S_E$ based on automatic NLG metrics is the harmonic mean that was used to select the best model during validation. B1 to B4 stand for BLEU-1 to BLEU-4, R-L for ROUGE-L, and MET for METEOR.}%
\label{tab:autoNLG}
\end{center}
\end{table*}
\vspace{-2ex}
\subsubsection{Correlation of NLG Metrics with Human Evaluation}
To better understand to what extent automatic NLG metrics are able to mirror human judgment of explanations, we compute the Spearman correlation of different NLG metrics with the human evaluation scores. The human evaluation score is averaged and normalised (across all annotators) for each explanation. We have human evaluation scores for a total of 3,566\footnote{We have 4 models and 3 datasets of 300 examples each, therefore 3,600 explanations. However, for 34 of them, all three annotators answered the question incorrectly.} generated explanations, which makes this the largest study to date on the correlation of NLG metrics with human evaluation of NLEs.
The results in Table~\ref{tab:corr} show that BERTScore and METEOR exhibit significantly higher correlation with human annotators across all datasets, reaching a maximal value of 0.293, which is a relatively low correlation. The reliability of automatic metrics also differs by dataset. They are highest on VQA-X and lowest on VCR. This could be explained by the fact that explanations in VCR are generally semantically more complex or more speculative (and, therefore, there are more different ways to explain the same thing) than those in VQA-X. It is noteworthy that some $n$-gram metrics, such as BLEU, ROUGE, or CIDEr, have no statistically significant correlation with human judgment on VCR.
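For reproducibility, the per-metric correlations in Table~\ref{tab:corr} can be computed with a standard Spearman implementation; a minimal sketch, with illustrative variable names, follows.
{\small\begin{verbatim}
# Sketch of the correlation analysis (names illustrative).
from scipy.stats import spearmanr

def metric_correlation(metric_scores, human_scores):
    # metric_scores: one automatic score per explanation;
    # human_scores: averaged, normalised human score per
    # explanation (3,566 values in our study).
    rho, p_value = spearmanr(metric_scores, human_scores)
    return rho, p_value
\end{verbatim}}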
\begin{table}[htbp]
\begin{center}
\begin{tabulary}{\linewidth}{LCCCC}
\toprule
Metric & All datasets & VQA-X & e-SNLI-VE & VCR \\
\midrule
BLEU-1 & 0.222 & 0.396 & 0.123 & \textit{0.032} \\
BLEU-2 & 0.236 & 0.412 & 0.142 & \textit{0.034} \\
BLEU-3 & 0.224 & 0.383 & 0.139 & \textit{0.039} \\
BLEU-4 & 0.216 & 0.373 & 0.139 & \textit{0.038} \\
METEOR & 0.288 & \textbf{0.438} & 0.186 & 0.113 \\
\mbox{ROUGE-L} & 0.238 & 0.399 & 0.131 & \textit{0.050} \\
CIDEr & 0.245 & 0.404 & 0.133 & \textit{0.093} \\
SPICE & 0.235 & 0.407 & 0.162 & 0.116 \\
BERTScore & \textbf{0.293} & 0.431 & 0.189 & \textbf{0.138} \\
BLEURT~\cite{sellam2020bleurt} & 0.248 & 0.338 & \textbf{0.208} & 0.128 \\
\bottomrule
\end{tabulary}
\caption{Correlation between human evaluation and automatic NLG metrics on NLEs. All values, except those in \textit{italic}, have p-values $< 0.001$.}%
\label{tab:corr}
\end{center}
\vspace{-4ex}
\end{table}
\vspace{-2ex}
\subsubsection{Explanations as Learning Instructions} \label{sec:bb}
\begin{table*}[htbp!]
\begin{center}
\begin{tabulary}{\linewidth}{LLCCCCCC}
\toprule
& & \multicolumn{2}{c}{VQA-X} & \multicolumn{2}{c}{SNLI-VE} & \multicolumn{2}{c}{VCR} \\
\cmidrule(r){3-4} \cmidrule(r){5-6} \cmidrule(r){7-8}
Model & $M_T$ model & $M_T$ only & Joint & $M_T$ only & Joint & $M_T$ only & Joint \\
\midrule
\mbox{PJ-X} & MCB~\cite{fukui2016multimodal} & N.A. & N.A. & \underline{69.7} & 69.2 & 38.5 & \underline{39.0} \\
\mbox{FME}& UpDown~\cite{anderson2018bottom} & N.A. & N.A. & 71.4 & \underline{73.7} & 35.7 & \underline{48.9} \\
e-UG & UNITER~\cite{chen_uniter_2020} & 80.0 & \underline{80.5} & 79.4 & 79.5 & 69.3 & \underline{69.8} \\
\bottomrule
\end{tabulary}
\caption{Comparison of task scores $S_T$ (e.g., accuracies) when the models are trained only on task $T$ vs.\ when trained jointly on task $T$ and $E$. Scores are underlined if their difference is greater than 0.5.}%
\label{tab:bb}
\end{center}
\end{table*}
Training a model jointly on the tasks $T$ and $E$ can be viewed as a form of multi-task learning~\cite{caruana1997multitask}. The explanations $e$ augment the datapoints of task $T$ by explaining why an answer $a$ was given. The module $M_T$ (which solves task $T$) may benefit from this additional signal from the explanations. Indeed, the model is forced to learn a representation of the image and question from which both the answer and explanation can be extracted, which could improve the model's representation capabilities. To verify this hypothesis, we compare the task scores of models $M_T$ that were trained only on task $T$ with those that, together with $M_E$, were jointly trained on tasks $T$ and $E$. We do this for e-UG on all three datasets, and for FME and PJ-X on VCR and e-SNLI-VE (because a larger pre-training dataset exists for VQA-X). The results in Table~\ref{tab:bb} show that, without any adaptations, the task performance for joint training is equal or better in all but one model-dataset combination. These results suggest that explanations may have the potential to act as ``learning instructions'' and thereby improve the classification capabilities of a model. Additional experiments are required to further verify this and to develop approaches that more efficiently leverage the explanations.
\section{ICCV Style guide}
Please follow the steps outlined below when submitting your manuscript to
the IEEE Computer Society Press. This style guide now has several
important modifications (for example, you are no longer warned against the
use of sticky tape to attach your artwork to the paper), so all authors
should read this new version.
\subsection{Language}
All manuscripts must be in English.
\subsection{Dual submission}
Please refer to the author guidelines on the ICCV 2021 web page for a
discussion of the policy on dual submissions.
\subsection{Paper length}
Papers, excluding the references section,
must be no longer than eight pages in length. The references section
will not be included in the page count, and there is no limit on the
length of the references section. For example, a paper of eight pages
with two pages of references would have a total length of 10 pages.
{\bf There will be no extra page charges for ICCV 2021.}
Overlength papers will simply not be reviewed. This includes papers
where the margins and formatting are deemed to have been significantly
altered from those laid down by this style guide. Note that this
\LaTeX\ guide already sets figure captions and references in a smaller font.
The reason such papers will not be reviewed is that there is no provision for
supervised revisions of manuscripts. The reviewing process cannot determine
the suitability of the paper for presentation in eight pages if it is
reviewed in eleven.
\subsection{The ruler}
The \LaTeX\ style defines a printed ruler which should be present in the
version submitted for review. The ruler is provided in order that
reviewers may comment on particular lines in the paper without
circumlocution. If you are preparing a document using a non-\LaTeX\
document preparation system, please arrange for an equivalent ruler to
appear on the final output pages. The presence or absence of the ruler
should not change the appearance of any other content on the page. The
camera ready copy should not contain a ruler. (\LaTeX\ users may uncomment
the \verb'\iccvfinalcopy' command in the document preamble.) Reviewers:
note that the ruler measurements do not align well with lines in the paper
--- this turns out to be very difficult to do well when the paper contains
many figures and equations, and, when done, looks ugly. Just use fractional
references (e.g.\ this line is $095.5$), although in most cases one would
expect that the approximate location will be adequate.
\subsection{Mathematics}
Please number all of your sections and displayed equations. It is
important for readers to be able to refer to any particular equation. Just
because you didn't refer to it in the text doesn't mean some future reader
might not need to refer to it. It is cumbersome to have to use
circumlocutions like ``the equation second from the top of page 3 column
1''. (Note that the ruler will not be present in the final copy, so is not
an alternative to equation numbers). All authors will benefit from reading
Mermin's description of how to write mathematics:
\url{http://www.pamitc.org/documents/mermin.pdf}.
\subsection{Blind review}
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work---in fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
tech reports.)
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith; it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper just asking to be rejected:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an acceptable paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith \etal [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same time,
which covers similar or overlapping material, you may need to refer to that
submission in order to explain the differences, just as you would if you
had previously published related work. In such cases, include the
anonymized parallel submission~\cite{Authors14} as additional material and
cite it as
\begin{quote}
[1] Authors. ``The frobnicatable foo filter'', F\&G 2014 Submission ID 324,
Supplied as additional material {\tt fg324.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more details can be
found elsewhere, and refer them to a technical report. For conference
submissions, the paper must stand on its own, and not {\em require} the
reviewer to go to a tech report for further details. Thus, you may say in
the body of the paper ``further details may be found
in~\cite{Authors14b}''. Then submit the tech report as additional material.
Again, you may not assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the ICCV70 audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus \etal.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus \etal, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B.
\medskip
\noindent
FAQ\medskip\\
{\bf Q:} Are acknowledgements OK?\\
{\bf A:} No. Leave them for the final copy.\medskip\\
{\bf Q:} How do I cite my results reported in open challenges?\\
{\bf A:} To conform with the double blind review policy, you can report results of other challenge participants together with your results in your paper. For your results, however, you should not identify yourself and should not mention your participation in the challenge. Instead present your results referring to the method proposed in your paper and draw conclusions based on the experimental comparison to other results.\medskip\\
\begin{figure}[t]
\begin{center}
\fbox{\rule{0pt}{2in} \rule{0.9\linewidth}{0pt}}
\end{center}
\caption{Example of caption. It is set in Roman so that mathematics
(always set in Roman: $B \sin A = A \sin B$) may be included without an
ugly clash.}
\label{fig:long}
\label{fig:onecol}
\end{figure}
\subsection{Miscellaneous}
\noindent
Compare the following:\\
\begin{tabular}{ll}
\verb'$conf_a$' & $conf_a$ \\
\verb'$\mathit{conf}_a$' & $\mathit{conf}_a$
\end{tabular}\\
See The \TeX book, p165.
The space after \eg, meaning ``for example'', should not be a
sentence-ending space. So \eg is correct, {\em e.g.} is not. The provided
\verb'\eg' macro takes care of this.
When citing a multi-author paper, you may save space by using ``et alia'',
shortened to ``\etal'' (not ``{\em et.\ al.}'' as ``{\em et}'' is a complete word.)
However, use it only when there are three or more authors. Thus, the
following is correct: ``
Frobnication has been trendy lately.
It was introduced by Alpher~\cite{Alpher02}, and subsequently developed by
Alpher and Fotheringham-Smythe~\cite{Alpher03}, and Alpher \etal~\cite{Alpher04}.''
This is incorrect: ``... subsequently developed by Alpher \etal~\cite{Alpher03} ...''
because reference~\cite{Alpher03} has just two authors. If you use the
\verb'\etal' macro provided, then you need not worry about double periods
when used at the end of a sentence as in Alpher \etal.
For this citation style, keep multiple citations in numerical (not
chronological) order, so prefer \cite{Alpher03,Alpher02,Authors14} to
\cite{Alpher02,Alpher03,Authors14}.
\begin{figure*}
\begin{center}
\fbox{\rule{0pt}{2in} \rule{.9\linewidth}{0pt}}
\end{center}
\caption{Example of a short caption, which should be centered.}
\label{fig:short}
\end{figure*}
\section{Formatting your paper}
All text must be in a two-column format. The total allowable width of the
text area is $6\frac78$ inches (17.5 cm) wide by $8\frac78$ inches (22.54
cm) high. Columns are to be $3\frac14$ inches (8.25 cm) wide, with a
$\frac{5}{16}$ inch (0.8 cm) space between them. The main title (on the
first page) should begin 1.0 inch (2.54 cm) from the top edge of the
page. The second and following pages should begin 1.0 inch (2.54 cm) from
the top edge. On all pages, the bottom margin should be 1-1/8 inches (2.86
cm) from the bottom edge of the page for $8.5 \times 11$-inch paper; for A4
paper, approximately 1-5/8 inches (4.13 cm) from the bottom edge of the
page.
\subsection{Margins and page numbering}
All printed material, including text, illustrations, and charts, must be kept
within a print area 6-7/8 inches (17.5 cm) wide by 8-7/8 inches (22.54 cm)
high.
Page numbers should be included for review submissions but not for the
final paper. Review submissions papers should have page numbers in the
footer with numbers centered and .75 inches (1.905 cm) from the bottom
of the page and start on the first page with the number 1.
Page numbers will be added by the publisher to all camera-ready papers
prior to including them in the proceedings and before submitting the
papers to IEEE Xplore. As such, your camera-ready submission should
not include any page numbers. Page numbers should automatically be
removed by uncommenting (if it's not already) the line
\begin{verbatim}
\iccvfinalcopy
\end{verbatim}
near the beginning of the .tex file.
\subsection{Type-style and fonts}
Wherever Times is specified, Times Roman may also be used. If neither is
available on your word processor, please use the font closest in
appearance to Times to which you have access.
MAIN TITLE. Center the title 1-3/8 inches (3.49 cm) from the top edge of
the first page. The title should be in Times 14-point, boldface type.
Capitalize the first letter of nouns, pronouns, verbs, adjectives, and
adverbs; do not capitalize articles, coordinate conjunctions, or
prepositions (unless the title begins with such a word). Leave two blank
lines after the title.
AUTHOR NAME(s) and AFFILIATION(s) are to be centered beneath the title
and printed in Times 12-point, non-boldface type. This information is to
be followed by two blank lines.
The ABSTRACT and MAIN TEXT are to be in a two-column format.
MAIN TEXT. Type main text in 10-point Times, single-spaced. Do NOT use
double-spacing. All paragraphs should be indented 1 pica (approx. 1/6
inch or 0.422 cm). Make sure your text is fully justified---that is,
flush left and flush right. Please do not place any additional blank
lines between paragraphs.
Figure and table captions should be 9-point Roman type as in
Figures~\ref{fig:onecol} and~\ref{fig:short}. Short captions should be centered.
\noindent Callouts should be 9-point Helvetica, non-boldface type.
Initially capitalize only the first word of section titles and first-,
second-, and third-order headings.
FIRST-ORDER HEADINGS. (For example, {\large \bf 1. Introduction})
should be Times 12-point boldface, initially capitalized, flush left,
with one blank line before, and one blank line after.
SECOND-ORDER HEADINGS. (For example, { \bf 1.1. Database elements})
should be Times 11-point boldface, initially capitalized, flush left,
with one blank line before, and one after. If you require a third-order
heading (we discourage it), use 10-point Times, boldface, initially
capitalized, flush left, preceded by one blank line, followed by a period
and your text on the same line.
\subsection{Footnotes}
Please use footnotes\footnote {This is what a footnote looks like. It
often distracts the reader from the main flow of the argument.} sparingly.
Indeed, try to avoid footnotes altogether and include necessary peripheral
observations in
the text (within parentheses, if you prefer, as in this sentence). If you
wish to use a footnote, place it at the bottom of the column on the page on
which it is referenced. Use Times 8-point type, single-spaced.
\subsection{References}
List and number all bibliographical references in 9-point Times,
single-spaced, at the end of your paper. When referenced in the text,
enclose the citation number in square brackets, for
example~\cite{Authors14}. Where appropriate, include the name(s) of
editors of referenced books.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|}
\hline
Method & Frobnability \\
\hline\hline
Theirs & Frumpy \\
Yours & Frobbly \\
Ours & Makes one's heart Frob\\
\hline
\end{tabular}
\end{center}
\caption{Results. Ours is better.}
\end{table}
\subsection{Illustrations, graphs, and photographs}
All graphics should be centered. Please ensure that any point you wish to
make is resolvable in a printed copy of the paper. Resize fonts in figures
to match the font in the body text, and choose line widths which render
effectively in print. Many readers (and reviewers), even of an electronic
copy, will choose to print your paper in order to read it. You cannot
insist that they do otherwise, and therefore must not assume that they can
zoom in to see tiny details on a graphic.
When placing figures in \LaTeX, it's almost always best to use
\verb+\includegraphics+, and to specify the figure width as a multiple of
the line width as in the example below
{\small\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]
{myfile.eps}
\end{verbatim}
}
\subsection{Color}
Please refer to the author guidelines on the ICCV 2021 web page for a discussion
of the use of color in your document.
\subsection{Notes on approaches of other works}
Most approaches also condition on the predicted answer $a$ and thus are given by $M_E(M_T(V,L),a)=e$. A conceptual architecture is displayed in xxFigure. Some existing methods only provide explanations $e$, without completing the original VL task $T$, and thus their model $M$ consists only of $M_E$ \cite{marasovic_natural_2020}. In this work, we always complete both tasks $T$ and $E$, as we consider the ability to explain the model's reasoning to be the key contribution of natural language explanations. Other methods have opted for feeding in the ground-truth label of task $T$ (xxConfirm xxCite). We also argue against this approach, as it prohibits the explanations from being faithful to the model's inner reasoning. Indeed, suppose a model $M_T$ predicts the answer $a=\alpha$, but the correct answer is $a=\beta$; then the explanation will be given by $M_E(M_T(V,L)=\alpha,a=\beta)$. This creates a discrepancy where $M_E$ needs to explain a label based on a model that would have led to a different label.
\subsection{Additional notes on the need for human evaluation}
Generating explanations is a natural language generation (NLG) task, which is famously difficult to evaluate xxCite. For explanations, it is even more difficult: besides the fact that there are different linguistic forms to express the same meaning, there are also completely different semantic contents that can explain the same answer (see xxFigure). This makes automated NLG metrics rather unsuited for the task, as previous work has already demonstrated xxCite. Currently, the only way to objectively compare natural language explanations is via human evaluation. Some previous methods do this, but they all use different frameworks and are therefore not comparable with one another. In this work, we propose a unified framework that can also be used by future methods.
|
{
"timestamp": "2021-05-11T02:14:22",
"yymm": "2105",
"arxiv_id": "2105.03761",
"language": "en",
"url": "https://arxiv.org/abs/2105.03761"
}
|
\section{Introduction}
Dirac is reported to have remarked in one of his talks that his equation was more intelligent than its author. But, as noted by Victor Weisskopf~\cite{Weisskopf1981}, it must be added that it was Dirac himself who found most of the additional insights. The concept of the Majorana fermion, which also followed from the Dirac equation and was developed by Ettore Majorana \cite{Majorana1937}, was a notable exception. In relativistic quantum mechanics, the physics of spin$-1/2$ fermions is described by the Dirac equation~\cite{Dirac1928, Peskin1995},
\begin{equation}
( i\gamma^{\mu}\partial_{\mu} -m )\psi = 0.
\label{eq:DE}
\end{equation}
Here, we have used the natural units ($\hbar=c=1$), and $\gamma^{\mu}$, with $\mu = 0,1,2,3$, are a set of matrices satisfying the algebra, \begin{equation}\{\gamma^{\mu},\gamma^{\nu}\}\equiv \gamma^{\mu} \gamma^{\nu}+\gamma^{\nu} \gamma^{\mu}=2\eta^{\mu\nu},
\label{Eq:Gamma_1}
\end{equation}
where $\eta^{\mu\nu}$ is the Minkowski metric ($\eta^{\mu \nu}=0, \mu \neq \nu; \eta^{00}=1; \eta^{ii}=-1, i=1,2,3$), and \begin{equation}\gamma_0\gamma_{\mu}\gamma_0 = \gamma_{\mu}^{\dagger}.
\label{Eq:Gamma_2}
\end{equation}
Eq.~{\ref{Eq:Gamma_1}}
follows from the requirement that the energy eigenvalues satisfy the condition $E^2= p^2 + m^2$, while Eq.~{\ref{Eq:Gamma_2}} ensures hermiticity of the Dirac Hamiltonian $H_D$, which follows from recasting Eq.~{\ref{eq:DE}} in the form of a Schr\"odinger equation \begin{equation}i\frac{\partial \psi}{\partial t}=H_D \psi; \hspace*{.3 cm} H_{D}=-i\gamma^0 \vec{\gamma}\cdot \vec{\nabla}+\gamma^0 m. \label{Eq:Dirac_H}
\end{equation}
In $(3+1)$ dimensions, the simplest representation of the Clifford algebra in Eqs.~{\ref{Eq:Gamma_1},\ref{Eq:Gamma_2}} can be found in terms of four $4\times 4$ matrices $\gamma^{\mu}$, known as Dirac matrices. There are infinitely many unitarily equivalent representations of the Dirac matrices, each constituting a basis for solving the Dirac equation in Eq.~{\ref{eq:DE}}.
In general the matrices have both real and imaginary elements, e.g., the so-called chiral or Weyl representation has,
\begin{align}
\gamma^0
=
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix},
&&
\gamma^i
=
\begin{pmatrix}
0 & \sigma^i \\
-\sigma^i & 0
\end{pmatrix}
\label{eq:Dirac_Matrices_Weyl}
\end{align}
where each element itself is a $2\times 2$ matrix, and the $\sigma^i$ are the Pauli matrices,
\begin{align}
\sigma^1
=
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix},
&&
\sigma^2
=
\begin{pmatrix}
0 & -i \\
i & 0
\end{pmatrix},
&&
\sigma^3
=
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
\end{align}
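As a quick numerical sanity check (an added sketch, not part of the original argument), one can verify that the Weyl matrices written above satisfy both the Clifford algebra Eq.~\ref{Eq:Gamma_1} and the hermiticity condition Eq.~\ref{Eq:Gamma_2}:
\begin{verbatim}
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
Z, I2 = np.zeros((2, 2)), np.eye(2)

g0 = np.block([[Z, I2], [I2, Z]])                        # gamma^0
gam = [g0] + [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]
eta = np.diag([1, -1, -1, -1])                           # Minkowski metric

for mu in range(4):
    for nu in range(4):
        anti = gam[mu] @ gam[nu] + gam[nu] @ gam[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))  # Clifford algebra
    assert np.allclose(g0 @ gam[mu] @ g0, gam[mu].conj().T)    # hermiticity
print("Weyl representation passes both checks.")
\end{verbatim}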
Because of both real and imaginary elements in $\gamma^{\mu}$, Eq.~(\ref{eq:DE}) is a set of coupled differential equations with complex coefficients. Thus, the general solution $\psi(x)$ of Eq.~(\ref{eq:DE}) is a complex four-component spinor. This equation provides a relativistically covariant description of a charged spin$-1/2$ particle, exactly as required for the electron. It also provides a natural
explanation for the gyromagnetic ratio of the electron being close to 2.
However, it did not solve
the puzzle of negative energy solutions, which had plagued earlier versions of relativistic quantum mechanics and had
confused the founders of quantum mechanics,
such as Pauli, Weisskopf, Wigner and Dirac himself. The presence of negative energy solutions leads to the possibility
that positive energy electrons can scatter into ever lower energy states, reducing their energy without limit.
Dirac proposed a solution to this problem by postulating that all negative energy states were filled in the vacuum state.
He also noticed
that corresponding to any negative energy solution $\psi(x,t)\propto e^{i |E|t}$ of Eq.~\ref{eq:DE}, one could write a positive energy charge conjugate wave-function~\cite{Sakurai}
\begin{equation}
\psi^{(c)}(x,t)={\cal{C}}\psi^*(x,t)\propto e^{-i|E|t}, \end{equation}
where $\cal{C}$ is the charge conjugation matrix. The charge conjugation matrix is defined as a matrix that satisfies the constraint
\begin{align}
& {\cal{C}}^{-1}\gamma_\mu {\cal{C}}=-\gamma_\mu^*.\label{eqC}
\end{align}
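A similarly hedged check (my construction; in the Weyl representation above one standard, phase-arbitrary choice is ${\cal{C}}=i\gamma^2$) confirms this defining property numerically:
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]])
Z, I2 = np.zeros((2, 2)), np.eye(2)
gam = [np.block([[Z, I2], [I2, Z]])] + \
      [np.block([[Z, s], [-s, Z]]) for s in (s1, s2, s3)]

C = 1j * gam[2]             # one standard choice; any overall phase works
Cinv = np.linalg.inv(C)
for mu in range(4):
    # C^{-1} gamma^mu C = -(gamma^mu)^*
    assert np.allclose(Cinv @ gam[mu] @ C, -gam[mu].conj())
print("C = i*gamma^2 satisfies the charge conjugation constraint.")
\end{verbatim}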
By considering the coupling to the vector potential, it is clear that the wave-function $\psi^{(c)}(x,t)$
represents a particle with the same mass and spin but opposite charge and magnetic moment relative to the particles described by the
positive energy solutions, i.e., the electrons. Dirac suggested that these charge conjugate solutions would represent holes in the otherwise
filled vacuum. The hole excitations were termed positrons, the anti-particles of the electron, whose existence was
verified by Anderson~\cite{Anderson_Positron}. The observation of the positron was one of the first examples of a successful prediction of a new fundamental particle, and led to Pauli's memorable comment, ``Success seems to
have been on the side of Dirac rather than of logic''~\cite{Pais}. With the advent of semiconductor physics,
Dirac's argument became the standard description of hole doped semiconductors. Dirac's picture of a filled sea of negative energy electron states is an intrinsically many-particle picture, where, even in the absence of inter-particle interactions, a single particle can no longer be described by the solution of a single particle wave equation, e.g., the Dirac spinor $\psi$~\cite{Sakurai}. For example, Klein observed that the Dirac equation in a region of uniform electric field, where the electrostatic
potential drops by $2m$ (with $m$ the mass parameter in Eq.~\ref{eq:DE}), would connect propagating negative kinetic energy states on one side to positive energy solutions
on the other side. This is called Klein tunneling of the negative energy states and has been seen in condensed matter systems \cite{Young2011}. In Dirac's picture, this process is viewed as the generation of electron-positron pairs by the electric field. While this process conserves charge, it does
not conserve the number of particles, because a pair of particles is created from the vacuum. Thus, a more natural and indeed necessary~\cite{Sakurai} framework is to view the solution
$\psi(x,t)$ of Eq.~\ref{eq:DE}
as a second quantized field operator that unifies the description of particles and antiparticles through the introduction of creation and annihilation operators,
\begin{equation}
\psi(x,t) = \int \frac{d^3p}{(2\pi)^3 \sqrt{2E_{\mathbf{p}}}} \sum_{s} [b_s(\mathbf{p})u_s(\mathbf{p})e^{i(\mathbf{p}\cdot\mathbf{x}-E_{\mathbf{p}}t)}+ c_s^{\dagger}(\mathbf{p})v_s(\mathbf{p})e^{-i(\mathbf{p}\cdot\mathbf{x}-E_{\mathbf{p}}t)}].
\label{Eq:Dirac_Field}
\end{equation}
Here, $s=\pm 1$ labels the projection of the spin on the $z$ axis, $E_{\mathbf{p}}=p^0=\sqrt{\mathbf{p}^2+m^2}$, $u_s(\mathbf{p}), v_s(\mathbf{p})$ are the positive and negative energy spinor solutions of Eq.~\ref{eq:DE} evaluated in momentum space, and $b_s(\mathbf{p})$ ($c_s^{\dagger}(\mathbf{p})$) is the annihilation (creation) operator for the particle (antiparticle). The Dirac field $\psi(x)$ annihilates a particle and creates its antiparticle, and its hermitian conjugate $\psi^{\dagger}(x)$ creates a particle and annihilates its antiparticle. Imposing anti-commutation relations on the Dirac fields $\psi(x), \psi^{\dagger} (y)$ leads to similar anti-commutation relations among the particle and antiparticle creation and annihilation operators ($b_s^{\dagger}(\mathbf{p}), b_r(\mathbf{p}), c_s^{\dagger}(\mathbf{p}), c_r(\mathbf{p})$). One can also now write the total Hamiltonian as $H_D=\int \frac{d^3p}{(2\pi)^3}\sum_s E_{\mathbf{p}}(b_s^{\dagger}(\mathbf{p})b_s(\mathbf{p}) + c_s^{\dagger}(\mathbf{p})c_s(\mathbf{p}))$, plus an unimportant constant that can be dropped. The Dirac equation (Eq.~\ref{eq:DE}) thus naturally predicts the existence of an antifermion for each spin--$1/2$ fermion, e.g., the positively charged positron for the negatively charged electron, as discovered by Anderson~\cite{Anderson_Positron}.
In 1937, Ettore Majorana discovered a representation of the $\gamma$-matrices that satisfies the Clifford algebra (Eqs.~(\ref{Eq:Gamma_1},\ref{Eq:Gamma_2})) but in which all four matrices are purely imaginary.
\begin{comment}
One such representation can be written as,
\begin{align}
\gamma^0_M
=
\begin{pmatrix}
0 & \sigma^2 \\
\sigma^2 & 0
\end{pmatrix},
&&
\gamma^1_M
=
\begin{pmatrix}
i\sigma^3 & 0\\
0 & i\sigma^3
\end{pmatrix},
&&
\gamma^2_M
=
\begin{pmatrix}
0 & -\sigma^2 \\
\sigma^2 & 0
\end{pmatrix},\nonumber\\
&&
\gamma^3_M
=
\begin{pmatrix}
-i\sigma^1 & 0\\
0 & -i\sigma^1
\end{pmatrix},
\end{align}
where the $M$ in the subscript of the $\gamma$-matrices stands for Majorana basis.
\end{comment}
In this so-called Majorana basis ${\cal{C}}=\textbf{1}$, and the charge conjugation (CC) operation, which takes a particle to its antiparticle~\cite{Gellmann}, reduces to the complex conjugation operation.
Majorana then noticed that, since the Dirac equation Eq.~\ref{eq:DE} becomes completely real in this basis, one can require that the solutions
$\psi(x,t)$ are also completely real, i.e.
\begin{equation}\psi=\psi^{(c)}={\cal{C}}\psi^*=\psi^*.\label{Majcons}
\end{equation}
With this constraint the Dirac equation would represent particles that are
identical to their own antiparticle.
Majorana observed that this constraint, that a particle is identical to its own antiparticle, can be ensured in any
representation of the Dirac matrices if we replace the Dirac mass term by the so-called Majorana mass $m$ in the Dirac
equation Eq.~\ref{eq:DE}, to construct the Majorana equation
\begin{equation}
i\gamma^{\mu}\partial_{\mu}\psi -m\psi^{(c)} = 0.\label{eq:ME}
\end{equation}
Because $\psi^{(c)}$ involves complex conjugation, the above equation is no longer invariant under the $U(1)$ gauge
transformation $\psi\rightarrow e^{i\Lambda}\psi$ needed to couple the particle to the electromagnetic field in a gauge
invariant way. Therefore, $\psi(x,t)$ must represent a neutral particle, which makes sense for a particle that is
its own anti-particle.
The physical equivalence of the Majorana particle and its antiparticle becomes clearer in the framework of quantum field theory.
In analogy with Eq.~\ref{Eq:Dirac_Field}, a solution to the Majorana equation Eq.~\ref{eq:ME} can be expanded in terms of plane wave solutions~\cite{Pal2011},
\begin{equation}
\psi(x,t) = \int \frac{d^3p}{(2\pi)^3 \sqrt{2E_{\mathbf{p}}}} \sum_{s} [b_s(\mathbf{p})u_s(\mathbf{p})e^{i(\mathbf{p}\cdot\mathbf{x}-E_{\mathbf{p}} t)}+ b_s^{\dagger}(\mathbf{p})C u_s^*(\mathbf{p})e^{-i(\mathbf{p}\cdot\mathbf{x}-E_{\mathbf{p}} t)}].
\label{Eq:Majorana_Field}
\end{equation}
The field in Eq.~\ref{Eq:Majorana_Field} satisfies the Majorana self-conjugacy relation $\psi=\psi^{(c)}$ (a charge neutral particle that is identical to its own antiparticle). More importantly, only one kind of creation operator, $b_s^{\dagger}(\mathbf{p})$, appears in the field expansion, as opposed to the separate electron and positron operators in Eq.~\ref{Eq:Dirac_Field}.
An alternative way to see that the Majorana equation Eq.~\ref{eq:ME} involves half the degrees of freedom (i.e. does
not have a separate hole degree of freedom) is to use the Weyl representation of the $\gamma$ matrices in Eq.~\ref{eq:Dirac_Matrices_Weyl}. In this representation
the $\gamma$ matrices are block off-diagonal (see Eq.~\ref{eq:Dirac_Matrices_Weyl}) and the charge conjugation matrix ${\cal{C}}\propto\gamma^2$ is block off-diagonal as well, so that the bottom two components of
the Majorana equation Eq.~\ref{eq:ME} can be written in terms of a single two-component spinor $\omega$ (the upper components of $\psi$):
\begin{equation}
\bar{\sigma}^{\mu}\partial_{\mu}\omega + m\sigma_2 \omega^* = 0,\label{eq:MWE}
\end{equation}
where $\bar{\sigma}^{\mu}=[I_2,-\sigma^{1},-\sigma^{2},-\sigma^3]$. The full four component Majorana spinor satisfying the Majorana condition $\psi=\psi^{(c)}$ and Eq.~\ref{eq:MWE}
is obtained as $\psi^T(x)=(\omega^T,(i\sigma^2\omega^*)^T)$, up to the phase convention chosen for ${\cal{C}}$.
The twin properties of charge neutrality and the identification of the particle with its own antiparticle are not particularly uncommon among bosons, with photons and $\pi^0$-mesons being two examples. In fact, as is well known \cite{Peskin1995}, the Fourier expansion of the photon field $A_{\mu} (x)$ also contains creation and annihilation operators of only one kind, $a_{\lambda}(\mathbf{p})$ and $a_{\lambda}^{\dagger}(\mathbf{p})$, similar to the Fourier expansion in Eq.~\ref{Eq:Majorana_Field}, because the photon is a boson that is identical to its own antiparticle. This is not counter-intuitive for bosons, which are treated as fields in the classical limit rather than as
particles. But these properties are quite special among fermions, which are treated as particles in the classical limit. The neutron, for example, is a charge--neutral fermion, but it has an anti--particle (the anti--neutron) that differs from the neutron by the sign of its magnetic moment. Also, neutrinos produced in beta-decay are thought to be charge--neutral, which follows from the conservation of electric charge. They also have a small but non--zero rest mass, so they cannot be Weyl fermions~\cite{Weyl1929}, which are massless. But whether neutrinos can be Majorana fermions, so that they are the same as their own antiparticle, is still an unsettled question in particle physics.
Although the jury is still out on whether any fermionic particle known in high energy physics can be a Majorana fermion, in the last few years the concept of a fermionic particle being identical to its own antiparticle has entered the realm of condensed matter physics \cite{Nayak2008,Wilczek2009,Beenakker2011,Alicea2012,Leijnse2012,Sato2017,Aguado2017}. In condensed matter systems, however, the ``elementary'' particles are necessarily electrons and protons, and so the term ``particle'' and ``antiparticle'' refer to the so-called \textit{quasiparticles} and \textit{quasiholes} corresponding to excitations of an underlying many--body state. As shown in a seminal paper by N. Read and D. Green~\cite{Read2000}, the Bogoliubov--de Gennes (BdG) equations satisfied by the superconducting quasiparticles in the so--called weak--pairing phase of a two--dimensional (2D) spinless $p_x+ip_y$ superconductor are very similar to the Majorana form of the Dirac equation of relativistic quantum mechanics. These superconducting quasiparticles, then, represent the condensed matter analog of the Majorana fermions of high--energy physics. Similar to the Majorana fermions in high energy physics, in general, the Majorana-like BdG quasiparticles of 2D spinless $p_x+ip_y$ superconductors are characterized by a finite mass and finite energy. However, in the context of more recent developments with relevance to topological quantum computation (TQC), the term ``Majorana fermion'' has a slightly different meaning. The Bardeen-Cooper-Schrieffer (BCS) Hamiltonian of a superconductor has a so-called particle-hole (p-h) symmetry, according to which for every energy eigenvalue $+E$, there exists another energy eigenvalue $-E$. Additionally, if the hypothetical superconductor is made up of spinless electrons, the second-quantized operators corresponding to the excitations at $\pm E$ satisfy $\gamma_E^{\dagger}=\gamma_{-E}$. Consequently, if the Hamiltonian admits a non-degenerate zero--energy eigenvalue, the corresponding quasiparticle is identical to its own anti-quasiparticle, with the second quantized operators satisfying $\gamma_0^{\dagger}=\gamma_{0}$. The zero-energy BdG quasiparticle states, should they exist, are bound states localized by defects in the superconductor, e.g., vortices and sample edges~\cite{Read2000}, where the superconducting order parameter vanishes.
In this chapter, by the terms ``{\em Majorana zero mode (MZM)}'', or ``{\em Majorana bound state (MBS)}'' we will refer to such localized, charge--neutral, zero--energy bound states that may occur at defects and boundaries in appropriate superconductors.
The creation operator for such a zero energy state is a hermitian second quantized operator $\gamma^{\dagger}=\gamma$ that anti--commutes with other fermion operators and satisfies the relation $\gamma^2=1$. A collection of such Majorana bound states satisfies the algebra:
\begin{align}
&\{\gamma_i, \gamma_j\}=2\delta_{ij}, \gamma_i^{\dagger}=\gamma_i, \gamma_i^2=1.
\end{align}
In two spatial dimensions, they obey a form of particle statistics known as non-Abelian statistics \cite{Moore1991,Nayak1996,Read2000,Ivanov2001,Stern2004}. In non-Abelian statistics, pairwise exchanges of particle coordinates represent non--commutative operations, a fundamental property that can be used to implement fault-tolerant quantum gates. Recently, the interest in possible realization of zero energy Majorana bound states in condensed matter systems increased dramatically, motivated by the proposal~\cite{Kitaev2003} to use them as building blocks for fault tolerant topological quantum computation.
Charge-neutral fermionic quasiparticles are difficult to obtain even in condensed matter systems. The typical fermionic excitations in metals and semiconductors, e.g., electrons and holes, are charged quasiparticles. Bound states of electrons and holes, called excitons, can be charge neutral but they are bosonic quasiparticles. Superconductors may be a good system to look for charge neutral fermionic quasiparticles because, due to gauge symmetry breaking, charge is no longer a sharp observable. Indeed the Bogoliubov quasiparticles in a superconductor are linear superpositions of a particle and a hole, and hence are not charge eigenstates. However, satisfying the Majorana requirement $\gamma^{\dagger}=\gamma$ for a quasiparticle excitation is difficult even in a superconducting system. The reason can be traced back to the spin degeneracy of the constituent electrons and holes: The second quantized creation operator for a Bogoliubov excitation in a typical BCS superconductor can be written as, $d^{\dagger} = u c_{\uparrow}^{\dagger} + v c_{\downarrow}$. Therefore, even if the excitation energy vanishes (i.e., the Bogoliubov quasiparticle lies on the Fermi surface), forcing $u=v^{*}$, $d^{\dagger}$ is still not equal to $d$, because of the spins of the electrons. It is for this reason that a hypothetical spinless superconductor was introduced \cite{Read2000} as an ideal platform to realize MZMs, provided non-degenerate zero-energy localized states can be realized in such a system.
In a superconductor, a non--degenerate localized zero--energy eigenstate, should it exist, enjoys a form of protection -- called \textit{topological protection} -- that makes it immune to weak local perturbations, as long as such perturbations do not close the superconducting gap. Such perturbations cannot move the state away from zero energy because of the particle--hole symmetry and the non--degeneracy condition. Under particle-hole symmetry, if the Hamiltonian of a superconductor has an energy eigenvalue $+E$, it must also have an energy eigenvalue $-E$. If the zero energy solution is non-degenerate, it is then its own particle-hole pair.
Since small perturbations to the BdG differential equation are not expected to change the total number of solutions, it follows that weak local perturbations (i.e., perturbations that do not couple pairs of MFs) leave the non--degenerate zero energy eigenvalue unperturbed, because perturbing it to a non-zero value (say $+\epsilon$) will necessitate the introduction of its particle-hole counterpart ($-\epsilon$).
This argument also implies that the zero--energy solutions are characterized by vanishing expectation values for \textit{any} local physical observable such as charge, mass, and spin. Otherwise, if there were a non-zero average of any of these observables in the zero energy wave function, a weak local field that couples to it (e.g., a weak magnetic field that couples to spin) would be able to shift the energy of the zero energy state.
As zero energy MFs in solid state systems are topologically protected, they can be removed from zero energy only by tuning the system through a topological quantum phase transition
(TQPT)~\cite{Volovik1988} at which the bulk energy gap vanishes and the MFs become entangled with other gapless states at the topological quantum critical point.
In the past decade, Majorana fermions have been discussed in a variety of low temperature systems
\cite{Nayak1996,Read2000,Moore1991,Read1992,DSarma2005,DSarma2006,Kitaev2001,Bonderson2006,Zhang2008,Sato2009, Akhmerov2009,Ivanov2001,Stern2004,Chung2007,Fu2008,Fu2009,Fu2009a,Cook2011, Potter2012, Sato2009a,Ghosh2010, Martin2012, Sau2011,Klinovaja2011,Mao2011,Mao2012,Nadj-Perge602,Jack1255,Kimeaar5251,Zhang2019,Zhu189,Machida2019,dartiailh2021phase,ren2019topological,pientka2017topological}.
Perhaps the most important of these -- the semiconductor-superconductor (SM-SC) heterostructure -- has attracted intense attention as a result of an abundance of exciting experimental and theoretical results that have appeared steadily in the literature. In what follows, we will first discuss the theoretical background necessary to understand the emergence of MFs in defects and boundaries in condensed matter systems, followed by a detailed discussion of the theory and experiments looking for MFs in SM-SC heterostructures.
\section{Theoretical background}
\subsection{Jackiw-Rebbi solution of a zero energy bound state in one dimension}\label{JackiwRebbi}
Before we discuss zero energy bound states localized in defects of the order parameter in superconductors, it will be instructive to discuss how such bound state solutions emerge in one-dimensional (1D) Dirac Hamiltonian. The zero energy bound state solutions for the various systems discussed in this chapter can be qualitatively understood by appropriate mapping of the corresponding Hamiltonians on the 1D Dirac problem.
One of the earliest examples of zero energy bound state solutions in a condensed matter system was investigated in domain wall states of polyacetylene\cite{Su1979,Su1980}.
In that case, starting with an ansatz for the dimerization order-parameter profile of polyacetylene, it was also possible to demonstrate the existence of a
zero energy bound state solution localized at a domain wall of the order parameter by explicitly solving the
mean-field equations \cite{Maki}. Remarkably, this domain
wall zero energy bound state was shown to be a condensed matter realization of the
zero mode associated with the mass solitons of a 1D Dirac problem
investigated by Jackiw and Rebbi\cite{Jackiw1,Jackiw2}.
The Jackiw and Rebbi soliton solution is a simple example of an
index theorem where fermionic zero modes can be used to count the
topological defects of a background
order parameter.
We begin with the Dirac Hamiltonian $H_D$ given in Eq.~\ref{Eq:Dirac_H}, $H_D=-i\gamma^0 \vec{\gamma}\cdot \vec{\nabla}+\gamma^0 m$. In one spatial dimension, we need only two Dirac matrices satisfying
\begin{equation}
\gamma^0\gamma^0=\textbf{1}, \gamma^0\gamma^1+\gamma^1\gamma^0=0, \gamma^1\gamma^1=-\textbf{1}.
\end{equation}
To satisfy this algebra with only a pair of matrices, the following $2\times2$ matrices will do the trick:
\begin{align}
\gamma^0
=
\sigma_z,
&&
\gamma^1
=
-i\sigma_x\label{gammaJR}
\end{align}
We can compute the charge conjugation matrix to satisfy Eq.~\ref{eqC} to be ${\cal{C}}=\sigma_x$.
The one dimensional Dirac Hamiltonian then becomes $H_D^1=-i\sigma_z(-i\sigma_x)\partial_x+\sigma_z m=-i\sigma_y\partial_x+\sigma_z m$. To discuss the Jackiw-Rebbi zero mode, we begin with the 1D second quantized Dirac Hamiltonian,
\begin{eqnarray} H_{D}^1=\int
dx\Big[-iv_F\psi^{\dagger}\sigma_y\partial_x\psi
+m(x)\psi^{\dagger}\sigma_z\psi\Big],\label{h2}\end{eqnarray} where $
\psi^{\dagger}(x)=\left(f^{\dagger}_1(x), f^{\dagger}_2(x)\right)$ with $f_{1,2}(x)$ being two independent fermion fields.
In \Eq{h2}, we have used the Fermi velocity $v_F$ in place of the velocity of light $c$ of the original Dirac equation ($c=1$ in the natural units used in Eq.~\ref{eq:DE}) to indicate an effective Dirac equation valid in a lattice. The second term in \Eq{h2} is an effective mass term, and we assume that the mass $m(x)$ varies in space with $m(-x)=-m(x)$, changing sign
at $x=0$ (the location of the domain wall).
We now assume that the quasiparticle operator \begin{equation} q^{\dagger}=\int
dx~ [\phi_1(x)f^{\dagger}_1(x)+\phi_2(x) f^{\dagger}_2(x)]
\label{Eq:QP}
\end{equation}
satisfies the operator equation, \begin{equation}
[H_D^1,q^{\dagger}]=\epsilon q^{\dagger}.
\label{Eq:Ladder}\end{equation}
Computing the commutator in Eq.~\ref{Eq:Ladder} we find the
following Dirac equation for the two-component wave function
$\phi^{\rm{T}}(x)=(\phi_1(x),\phi_2(x))$: \begin{eqnarray}
-iv_F\sigma_y\partial_x\phi(x)+\sigma_z m(x)\phi(x)=\epsilon
\phi(x). \label{wa} \end{eqnarray} First we note that because ${\sigma}_x$
anticommutes with ${\sigma}_y$ and ${\sigma}_z$, the first quantized Hamiltonian $H_{D}^1=-iv_F\sigma_y \partial_x+\sigma_z m(x)$ anticommutes with $\sigma_x$, $\{H_D^1,\sigma_x\}=0$. Then, if $\phi(x)$ is an
eigenfunction of $H_D^1$ with eigenvalue $\epsilon$, $H_D^1 \sigma_x \phi(x)=-\sigma_x H_D^1 \phi(x)=-\epsilon \sigma_x \phi(x)$, i.e., $\sigma_x\phi(x)$ is
also an eigenfunction of $H_D^1$ with eigenvalue $-\epsilon$. As a result,
the $\epsilon=0$ solutions of \Eq{wa} can be made a simultaneous
eigenstate of $H_D^1$ and ${\sigma}_x$. Let $\phi_0(x)$ denote such a solution and
\begin{equation}
{\sigma}_x\phi_0(x)=\lambda\phi_0(x).\label{sxcond}
\end{equation} Setting $\epsilon=0$ and left-multiplying
\Eq{wa} by ${i\over v_F}{\sigma}_y$ and using Eq.~\ref{sxcond} we obtain \begin{eqnarray} \partial_x\phi_0(x)={\lambda\over v_F}
m(x)\phi_0(x)\nonumber,\end{eqnarray} which implies \begin{eqnarray}
\phi_0(x)=e^{{\lambda\over v_F}\int_0^x
m(x')dx'}\phi_0(0).\label{zero}\end{eqnarray} For $m(x)=\pm {\rm
sign}(x)|m(x)|$, \Eq{zero} is a normalizable function for $\lambda=\mp 1$. In
this way we have now proven that for a sign change of $m(x)$ (i.e., a mass domain wall) there
is a single zero energy eigenvalue of the one-dimensional Dirac Hamiltonian, with a wave function that decays exponentially away from the mass domain wall. Note that the zero energy solution is robust against variations in the mass distribution function $m(x)$, as long as there is a change of sign of the mass term at some value of $x$. The normalizable zero-energy solution exists if \begin{eqnarray}
m(x) &=& -m_1 \hspace*{1 cm} \rm{if} \hspace*{.5 cm} x < 0\nonumber\\
&=& +m_2 \hspace*{1 cm}\rm{if} \hspace*{.5 cm} x>0
\end{eqnarray}
with $m_1, m_2 >0$. In particular, the solution exists even if the mass on the right hand side of the domain wall diverges, $m_2 \rightarrow \infty$. In this case, the wave function corresponding to the zero energy eigenvalue vanishes for $x>0$. However, it remains non-zero and exponentially localized for $x\leq 0$. Now suppose the equation governing a topological system near a straight boundary (interface) with vacuum can be cast in the form of a one dimensional Dirac equation (defined along the direction perpendicular to the boundary) with a negative mass term. Then modelling the vacuum just outside the boundary as a region with a positive infinite mass (so no particle can escape there) ensures the existence of a robust zero energy eigenfunction exponentially localized in the direction perpendicular to the boundary of the topological medium. This type of qualitative argument provides a visually appealing picture of topologically robust zero energy states localized at boundaries and order parameter defects in spinless $p_x+ip_y$ superconductors as discussed below.
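As a numerical illustration of this argument (an added sketch with hypothetical parameter values, not taken from the original text), one can diagonalize a lattice discretization of $H_D^1=-iv_F\sigma_y\partial_x+m(x)\sigma_z$ with a smooth mass kink $m(x)=m_0\tanh(x/\ell)$. A Wilson term is added here, a regularization choice of ours, to gap out the spurious lattice doubler. With open ends, one near-zero state appears at the kink and a second one at the end of the negative-mass region, in line with the infinite-mass picture of the vacuum described above:
\begin{verbatim}
import numpy as np

N, dx, vF, m0, ell, r = 400, 0.1, 1.0, 1.0, 2.0, 1.0   # illustrative values
x = (np.arange(N) - N / 2) * dx
m = m0 * np.tanh(x / ell)                   # mass kink at x = 0

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
S = np.diag(np.ones(N - 1), 1)              # right-shift operator
P = (S - S.T) / (2 * dx)                    # central difference ~ d/dx
W = (np.eye(N) - (S + S.T) / 2) / dx        # Wilson term ~ (1 - cos k dx)/dx

H = -1j * vF * np.kron(P, sy) + np.kron(np.diag(m) + r * W, sz)
E, V = np.linalg.eigh(H)
idx = np.argsort(np.abs(E))
for j in idx[:2]:                           # kink mode and left-end mode
    rho = np.abs(V[:, j].reshape(N, 2)) ** 2
    print(E[j], x[np.argmax(rho.sum(axis=1))])
\end{verbatim}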
It is important to mention here that despite the existence of a single topologically robust zero energy eigenstate localized at the mass domain wall in 1D Dirac theory, it does not represent a Majorana zero mode. In fact, the second quantized creation operator $q^{\dagger}$ in Eq.~{\ref{Eq:QP}} allows us to define the corresponding annihilation operator,
\begin{equation} q=\int
dx~ [\phi_1^{*}(x)f_1(x)+\phi_2^{*}(x) f_2(x)].
\label{Eq:QP1}
\end{equation}
It should be clear that the operators $q,q^{\dagger}$ do not satisfy the Majorana condition $q^{\dagger}=q$, but, when properly normalized, satisfy the fermion anticommutation relation $\{q^{\dagger}, q\}=1$. The wave function $\phi_0(x)$ in
Eq.~\ref{zero} and operators $q^{\dagger}, q$, thus, describe a zero-energy localized \textit{conventional} fermionic mode. Such a conventional fermion mode can in fact be viewed as the bound state of a \textit{pair} of Majorana zero modes with strongly overlapping wave functions. To see this, we define the operators,
\begin{equation}
\gamma_{+}=q^{\dagger} + q \hspace*{1 cm} \gamma_{-} = i(q^{\dagger}-q)
\label{Eq:Overlapping}
\end{equation}
It should be easy to check that $\gamma_{+}$ and $\gamma_{-}$ both individually satisfy the Majorana condition, $\gamma_{+}^{\dagger}=\gamma_{+}, \gamma_{-}^{\dagger}=\gamma_{-}$, and further, $\gamma_{\pm}^2=q^{\dagger}q + q q^{\dagger} =1$, and they mutually anticommute with each other $\{\gamma_{+}, \gamma_{-}\}=0$. If we define a Fock state $|0\rangle$ with energy $E=0$ by the condition $q|0\rangle =0$ (i.e., the zero energy state at the mass domain wall is unoccupied), then the occupied state $|1\rangle$ can be defined as $|1\rangle = q^{\dagger} |0\rangle$, which is degenerate ($E=0$) with $|0\rangle$. The conventional fermion occupation number operator $n=q^{\dagger}q$ can be expressed in terms of the Majorana operators $\gamma_{+}, \gamma_{-}$ as, $n=q^{\dagger}q= \frac{1}{2}(1 + i \gamma_{+}\gamma_{-})$. Conversely, the operator $i\gamma_{+}\gamma_{-}= 2q^{\dagger}q-1$, which takes the value $1$ in the state $|1\rangle$ and $-1$ in the state $|0\rangle$, is called the fermion parity operator. It is now clear that the conventional fermionic mode described by $q^{\dagger}, q$ allows us to define a \textit{pair} of Majorana zero modes, but the two MZMs $\gamma_{+}, \gamma_{-}$ do not occur separately in space, describe the same localized wave function $\phi_0(x)$ in Eq.~\ref{zero}, and should really be viewed as a basis transformation from the creation and annihilation operators $q^{\dagger}, q$ to the Majorana operators $\gamma_{+}, \gamma_{-}$. The goal of the ongoing research on Majorana fermions in condensed matter systems is to create experimental conditions so that individual Majorana zero modes (e.g., $\gamma_{+}$ or $\gamma_{-}$) can occur spatially well-separated from each other. It is only in this limit that they individually acquire topological protection.
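These operator relations are simple enough to verify by brute force. The following added snippet represents the single zero-energy fermion mode on its two-dimensional Fock space $\{|0\rangle, |1\rangle\}$ and checks the Majorana conditions, the anticommutation, and the fermion parity relation stated above:
\begin{verbatim}
import numpy as np

q = np.array([[0, 1], [0, 0]])     # annihilation operator in basis {|0>, |1>}
qd = q.conj().T                    # creation operator q^dagger
gp = qd + q                        # gamma_+
gm = 1j * (qd - q)                 # gamma_-

assert np.allclose(gp, gp.conj().T) and np.allclose(gm, gm.conj().T)
assert np.allclose(gp @ gp, np.eye(2)) and np.allclose(gm @ gm, np.eye(2))
assert np.allclose(gp @ gm + gm @ gp, np.zeros((2, 2)))  # {gamma_+, gamma_-} = 0
n = qd @ q                                               # number operator
assert np.allclose(1j * gp @ gm, 2 * n - np.eye(2))      # parity = 2n - 1
print("gamma_+ and gamma_- satisfy the Majorana algebra.")
\end{verbatim}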
\subsection{2D Spinless $(p_x+ip_y)$ superconductor}
The 2D spinless $(p_x+ip_y)$ superconductor (superfluid) is the canonical system that supports MZMs localized at defects of the order parameter, such as vortices and sample edges~\cite{Read2000}. In 2D, the mean field Hamiltonian describing the quasiparticle excitations of such a system is given by,
\begin{equation}
H_{2D}^{p} = \sum_{p}\xi_p c_p^{\dagger}c_p + \Delta_0 \sum_p\left[(p_x + ip_y)c_p^{\dagger}c_{-p}^{\dagger} + h.c.\right],
\label{eq:Hp}
\end{equation}
where $\xi_p = \epsilon_p - \mu$ with $\epsilon_p \rightarrow \frac{p^2}{2m^*}$ for small $p$, $m^*$ is the effective mass, and $\mu$ is the chemical potential. Here, the spin indices of the electron operators are omitted because the system is considered to be spinless (or spin-polarized). Read and Green~\cite{Read2000} showed that for the Hamiltonian in Eq.~\ref{eq:Hp}, the long distance behavior of the Cooper pair wave function $g(\mathbf{r})$ undergoes a dramatic change as a function of $\mu$. For $\mu < 0$, $g(r) \sim e^{-r/r_0}$, indicating that the pairs are tightly bound and the superconductor (superfluid) is in a so-called strong pairing phase. On the other hand, for $\mu >0$, $g(r) \sim \frac{1}{r}$, and the long tail in the Cooper pair wave function indicates that the system is in a so-called weak pairing phase, which is continuously connected to the BCS weak coupling superconductor. The phase transition at $\mu=0$, at which the excitation gap vanishes at the momentum space point $\mathbf{k}=0$, is not associated with any change in symmetry of the superconducting state but is topological in nature.
The weak and strong pairing phases of the Hamiltonian in Eq.~\ref{eq:Hp} are distinguished by distinct integer values of a topological invariant that can be defined as follows: The Hamiltonian in Eq.~\ref{eq:Hp} can be written in the Nambu basis as,
\begin{equation}
H_{2D}^p = \sum_p \Psi^{\dagger}(p) {\cal{H}}_{2D}^p \Psi(p),
\label{eq:Nambu}
\end{equation}
where $\Psi^{\dagger}(p)=(c_p^{\dagger}, c_{-p})$ and $\Psi(p)$ is its hermitian conjugate. Here, the $2\times 2$ Hamiltonian matrix ${\cal{H}}_{2D}^p$ can be cast in the form,
\begin{equation}
{\cal{H}}_{2D}^p=\mathbf{d(\mathbf{p})}\cdot \boldsymbol{\sigma},
\label{eq:d}
\end{equation}
where the three-component vector $\mathbf{d}(\mathbf{p})$ can be written as $\mathbf{d}(\mathbf{p})=({\rm{Re}}(\Delta_p), -{\rm{Im}}(\Delta_p), \xi_p)$, with $\Delta_p=\Delta_0(p_x+ip_y)$, and $\boldsymbol{\sigma}$ is the three-component vector of the Pauli matrices. The unit vector corresponding to $\mathbf{d}(\mathbf{p})$, namely ${\hat{\mathbf{d}}(\mathbf{p})}=\frac{\mathbf{d}(\mathbf{p})}{|\mathbf{d}(\mathbf{p})|}$, provides a mapping of the two-dimensional momentum space onto the surface of a unit sphere defined by $|{\hat{\mathbf{d}}(\mathbf{p})}|=1$. As $\mathbf{p}$ moves over the 2D momentum space, ${\hat{\mathbf{d}}(\mathbf{p})}$ sweeps out area on its unit sphere. Starting from $|\mathbf{p}|\rightarrow \infty$ and covering the momentum space down to $|\mathbf{p}|= 0$, the number of times the unit vector ${\hat{\mathbf{d}}(\mathbf{p})}$ wraps around the unit sphere is a topological invariant called the Chern number ($C$). Mathematically, the quantity $C$ is given by,
\begin{equation}
C=\int \frac{d^2p}{4\pi} (\hat{\mathbf{d}}\cdot(\partial_{p_x} \hat{\mathbf{d}} \times \partial_{p_y} \hat{\mathbf{d}}))
\label{eq:Chern}
\end{equation}
For $|\mathbf{p}|\rightarrow \infty$ in any direction, $\xi_p \sim \frac{p^2}{2m^*}$ dominates and $\hat{\mathbf{d}}$ points along the north pole of the unit sphere. In the strong pairing phase of Eq.~\ref{eq:Hp}, $\mu <0$, and for $|\mathbf{p}|= 0$, $\hat{\mathbf{d}}_x, \hat{\mathbf{d}}_y =0, \hat{\mathbf{d}}_z=-\mu >0$. Hence in the strong pairing phase, for $|\mathbf{p}|= 0$, $\hat{\mathbf{d}}$ continues to point along the north pole. Conversely, in the weak pairing phase given by $\mu >0$, at the origin of the momentum space $\hat{\mathbf{d}}_z=-\mu <0$ and the unit vector $\hat{\mathbf{d}}$ points along the south pole of its unit sphere. Mathematically it can be shown that the Chern number in Eq.~\ref{eq:Chern} vanishes in the strong pairing phase while it acquires a value $C=+1$ in the weak pairing phase of Eq.~\ref{eq:Hp}. The value of the Chern number can only change at a topological quantum phase transition which, in the present case, occurs through a gap closing at the origin of the momentum space for $\mu=0$.
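As an added numerical sketch (with hypothetical parameter values $\Delta_0=m^*=1$), Eq.~\ref{eq:Chern} can be evaluated directly on a large momentum grid for the $\hat{\mathbf{d}}$-vector defined above; the result is close to unit magnitude in the weak pairing phase and close to zero in the strong pairing phase, the overall sign being fixed by orientation conventions:
\begin{verbatim}
import numpy as np

def chern_number(mu, Delta0=1.0, mstar=1.0, P=25.0, n=601):
    p = np.linspace(-P, P, n)
    dp = p[1] - p[0]
    px, py = np.meshgrid(p, p, indexing="ij")
    d = np.stack([Delta0 * px, -Delta0 * py,
                  (px**2 + py**2) / (2 * mstar) - mu])
    dhat = d / np.linalg.norm(d, axis=0)
    ddx = np.gradient(dhat, dp, axis=1)        # partial w.r.t. p_x
    ddy = np.gradient(dhat, dp, axis=2)        # partial w.r.t. p_y
    berry = np.einsum("iab,iab->ab", dhat, np.cross(ddx, ddy, axis=0))
    return berry.sum() * dp * dp / (4 * np.pi)

print(chern_number(mu=+1.0))   # weak pairing:   magnitude ~ 1
print(chern_number(mu=-1.0))   # strong pairing: ~ 0
\end{verbatim}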
To elucidate the topological difference between the weak and the strong pairing phases, Read and Green showed \cite{Read2000} that, in the weak pairing phase ($\mu > 0$), the BdG equations $H_{2D}^p\Psi(r)=E\Psi(r)$ near a vortex or the sample edges (where the order parameter $\Delta_0$ vanishes) admit zero energy ($E=0$) solutions, with the corresponding second--quantized operator (the creation operator for the Bogoliubov state) being hermitian, $\gamma^{\dagger}=\gamma$. No such zero energy Majorana solutions exist near the vortices or sample edges in the strong pairing phase ($\mu <0$). To understand this, consider a long boundary or edge of the system parallel to the $y$-axis, separating the system in the weak pairing phase situated at $x<0$ from vacuum or free space at $x>0$. The vacuum is characterized by the absence of particles, which can be implemented by having a potential $V$ large and positive for $x>0$. Since in the Hamiltonian Eq.~\ref{eq:Hp}, $V$ modifies the chemical potential as $\mu \rightarrow (\mu -V)$, a large and positive potential $V$ in vacuum implies a large and negative $\mu$ for $x>0$. Since for $x<0$ we have the weak pairing phase with $\mu >0$, there must be a domain wall in $\mu$ where it changes sign near the boundary parallel to the $y$-axis. Using the small-$p$ approximation $\xi_p = \frac{p^2}{2m^*}-\mu \simeq -\mu$, that is, ignoring the term quadratic in $p$ in comparison to the linear $p$ term in the order parameter, it can be shown \cite{Read2000} that, for $E=0$, the BdG equations for the spinor $\Psi(r)=(u(r), v(r))^{T}$ near the boundary parallel to the $y$-axis can be written as,
\begin{eqnarray}
&&i\Delta_0 \frac{\partial v}{\partial x}=\mu(x) u\nonumber\\&&i\Delta_0 \frac{\partial u}{\partial x}=-\mu(x) v.
\label{eq:Edge}
\end{eqnarray}
It can be easily checked that this equation, when written in terms of $\Psi(r)$ and using Pauli matrices in the space of ($u, v$), is identical to the 1D Dirac equation in Eq.~\ref{wa} with $\epsilon=0$ and $\mu(x)$ playing the role of the spatially varying Dirac mass $m(x)$. From the Jackiw-Rebbi solution (Eq.~{\ref{zero}}) it then follows that, in the weak pairing phase ($\mu >0$) of Eq.~\ref{eq:Hp}, there is a topologically robust zero energy solution of the BdG equations near the boundary that acts as a domain wall in the chemical potential, and the corresponding wave function is exponentially localized in the direction perpendicular to the boundary. For the strong pairing phase ($\mu<0$) of Eq.~\ref{eq:Hp}, $\mu <0$ both for $x>0$ (vacuum) and $x<0$ (system), and in the absence of a domain wall in the chemical potential no such guaranteed zero energy solution exists at the boundary.
The simplest vortex excitation in a 2D superconductor can be modelled as a point with vanishing $\Delta_0$ (vortex core) and the superconducting phase $\phi$ having a $2\pi$ phase winding around that point. This, in turn, can be modelled as a puncture (hole) in the superconductor, where $\Delta_0$ is automatically zero, and a $2\pi$ phase winding of the order parameter around the hole. Thus, for a vortex excitation in the weak pairing phase ($\mu >0$) of 2D spinless $p_x + ip_y$ superconductor with flux quantum $\frac{hc}{2e}$, we may consider the edge of the vortex core as a circular ring of radius $r=r_0$, separating a region ($r>r_0$) with $\mu >0$ from the region inside the vortex core ($r<r_0$) modelled as vacuum ($\mu <0$). Assuming azimuthal symmetry in the presence of a single vortex situated at the origin, and writing the superconducting order parameter $\Delta(r,\theta)=|\Delta(r)| e^{i \theta}$ and $\mu(r)=\mu_0 h(r)$, the BdG equations near the vortex core can be written in polar coordinates. It can then be shown that \cite{Read2000}, in the weak pairing phase and for a vortex with flux quantum $\frac{hc}{2e}$, the BdG equations near the vortex core admit a zero energy solution analogous to the Jackiw-Rebbi solution near a mass domain wall as in the case of the boundary or the sample edge.
The zero energy solution near the vortex core exists in the orbital angular momentum channel $l=0$. More generally, the low-energy BdG solutions describing chiral edge modes propagating along the circular edge at $r=r_0$ with energy $E$ can be written as,
\begin{align}
\chi_E(r, \theta) &= e^{il\theta}e^{-\int_{r_0}^{r} h(r')dr'}\begin{pmatrix} e^{-i\theta/2}\\ e^{i\theta/2} \end{pmatrix}
\label{eq:Vortex}
\end{align}
where $l$ is the orbital angular momentum and $E=\frac{\mu_0l}{r_0}$ \cite{Nayak2008}. If the vortex carries an odd number of flux quanta $\frac{hc}{2e}$, the BdG wave function must be anti-periodic upon one full rotation around the vortex core, $\theta \rightarrow \theta + 2\pi$. However, since the spinor on the right hand side of Eq.~\ref{eq:Vortex} is also anti-periodic upon $\theta \rightarrow \theta + 2\pi$, it follows that $l$ must be an integer (including zero). Thus, a $\frac{hc}{2e}$ vortex (or a vortex with flux $\frac{nhc}{2e}$ with $n$ odd) admits a zero energy solution for $l=0$. Conversely, if the flux inside the vortex core is an even multiple of $\frac{hc}{2e}$, the BdG wave function must be periodic upon $\theta \rightarrow \theta + 2\pi$. This ensures that $l$ must be a half-odd-integer and there will be no zero energy mode inside the vortex core even in the weak pairing phase of the 2D $p_x+ip_y$ superconductor.
Analogous to the zero mode solution near the boundary, the wave function of the zero energy solution near a vortex decays exponentially away from the vortex core. A number of papers have demonstrated the vortex zero mode in the spinless 2D $p_x+ip_y$ superconductor by explicitly solving the BdG equations~\cite{Stone2004,Stone2006,Tewari_Vortex} or by mapping the problem onto the Jackiw-Rebbi solution of the 1D Dirac equation~\cite{Tewari_Index}.
In the strong pairing phase ($\mu <0$), because $\mu$ is negative for both $r>r_0$ and $r<r_0$, no such zero energy solution is expected near the vortex core. The $E=0$ solution near the vortex core in the weak pairing phase of the 2D $p_x+ip_y$ superconductor can be contrasted with the case of the BdG equations near the vortex core of an ordinary 2D $s$-wave superconductor, which only admit $E\neq 0$ Caroli-de Gennes-Matricon \cite{Caroli} solutions. Furthermore, the spinlessness of the system under discussion guarantees that the second quantized operator corresponding to the non-degenerate $E=0$ solution near vortices and the sample boundary satisfies the Majorana condition $\gamma^{\dagger}=\gamma$.
\subsection{1D spinless p-wave superconductor: Kitaev lattice model}
In 1D spinless $p$-wave superconductor, in the absence of vortex excitations, the zero energy Majorana solutions occur near the boundaries or the ends of the wire. As argued by Kitaev \cite{Kitaev2001}, these zero modes should be observable in a fractional AC Josephson effect--type experiment.
Although we could discuss the 1D spinless $p$-wave superconductor just as the 1D version of Eq.~\ref{eq:Hp}, it will be instructive to discuss this system using the real space lattice model introduced by Kitaev \cite{Kitaev2001}. The one-dimensional model of topological superconductivity proposed by Kitaev can be written as a tight binding Hamiltonian as follows,
\begin{equation}
H_K=-\sum\limits_{j=1}^{N}\mu c_j^\dagger c_j -\sum\limits_{j=1}^{N-1}\left(tc_{j}^\dagger c_{j+1} + \Delta e^{i\phi} c_j c_{j+1}+ h.c.\right)\label{eq:KM}
\end{equation}
where $t >0$ is the nearest neighbor hopping amplitude, $\mu$ is the chemical potential, and $\Delta e^{i\phi}$, with $\Delta >0$, is the superconducting order parameter; $c_j^{\dagger}$ and $c_j$ are second quantized creation and annihilation operators on a 1D lattice with $N$ sites.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Kitaev_Model.pdf}
\caption{Top: Topologically trivial phase of the Kitaev model. The Majorana fermions on the same site are paired and there are no unpaired Majorana zero mode at the ends. Bottom: Topologically non-trivial phase of the Kitaev model. Majorana fermions from nearest neighbor sites are paired, with a pair of unpaired, dangling, Majorana zero modes at the two ends. The system is gapped in the bulk in both topologically trivial and non-trivial phases, but because of the zero energy Majorana fermions at the ends, the ground state of the topologically non-trivial phase is two-fold degenerate.
}\label{fig:Kitaev_Model}
\end{figure}
To analyze the Hamiltonian in Eq.~\ref{eq:KM}, let's first consider the special case $\mu=0, t=\Delta$. In this limit, the Hamiltonian can be written as,
\begin{eqnarray}
H_K &=& -t\sum\limits_{j=1}^{N-1}\left(c_{j}^\dagger c_{j+1} +c_{j+1}^\dagger c_{j}+ e^{i\phi} c_j c_{j+1}+e^{-i\phi} c^{\dagger}_{j+1} c^{\dagger}_{j} \right)\nonumber\\
&=& -t\sum\limits_{j=1}^{N-1}\left(e^{i\phi} c_j c_{j+1}-c_{j}c^{\dagger}_{j+1}+c_{j}^\dagger c_{j+1} -e^{-i\phi} c^{\dagger}_{j} c^{\dagger}_{j+1} \right)
\end{eqnarray}
In analogy with Eq.~\ref{Eq:Overlapping}, let's now introduce a pair of Majorana modes for each conventional fermionic mode described by $c_j, c^{\dagger}_j$,
\begin{eqnarray}
&\gamma_{+,j}&=e^{-i\phi/2}c^{\dagger}_j+ e^{i\phi/2}c_j\nonumber\\
&\gamma_{-,j}&=i(e^{-i\phi/2}c^{\dagger}_j - e^{i\phi/2}c_j)
\label{eq:KM1}
\end{eqnarray}
Despite the phase factors in Eq.~\ref{eq:KM1}, introduced to take into account the superconducting phase in Eq.~\ref{eq:KM}, it should be easy to check that $\gamma_{+,j}$ and $\gamma_{-,j}$ both satisfy the Majorana condition $\gamma_{\pm,j}^{\dagger}=\gamma_{\pm,j}$. In terms of these operators, the Hamiltonian in this limit can be written as,
\begin{equation}
H_K=-it\sum\limits_{j=1}^{N-1}\gamma_{+,j}\gamma_{-,j+1}
\label{eq:KM2}\end{equation}
Note that the two Majorana operators $\gamma_{-,1}$ and $\gamma_{+,N}$ do not even appear in Eq.~\ref{eq:KM2}, and the rest of the Majorana operators are paired between neighboring lattice sites by the hopping parameter $t$. This situation is pictorially represented in the lower panel of Fig.~\ref{fig:Kitaev_Model}. As a result of nearest neighbor pairing, a pair of MZMs at the two ends remains unpaired. Interestingly, because in this limit ($\mu=0, t=\Delta$) the dangling Majorana operators $\gamma_{-,1}, \gamma_{+,N}$ do not appear in the Hamiltonian, they trivially commute with $H_K$ in Eq.~\ref{eq:KM2}: $[H_K, \gamma_{-,1}]=0=[H_K, \gamma_{+,N}]$. With these Majorana operators we can construct a conventional fermion operator, $q^{\dagger}=(\gamma_{-,1}-i\gamma_{+,N})$, and by virtue of the commutation relations of $H_K$ with $\gamma_{-,1}, \gamma_{+,N}$, we find $[H_K, q^{\dagger}]=0$. It follows that if $|G\rangle$ is the ground state of $H_K$ with energy $E_G$, $q^{\dagger} |G\rangle$ is also an eigenstate of $H_K$ with the same energy $E_G$. Thus, the ground state energy is two-fold degenerate and the pair of degenerate ground states differ in the total fermion number by one, i.e., they have opposite fermion parity. Introducing new conventional fermion operators $d_j=(\gamma_{-,j+1}+i\gamma_{+,j})$ and corresponding $d^{\dagger}_j$, we can rewrite the Hamiltonian $H_K$ as,
\begin{equation}
H_K=\frac{t}{2}\sum\limits_{j=1}^{N-1} d^{\dagger}_j d_j - (N-1)t
\end{equation}
Therefore, the system has a bulk gap to conventional fermion excitations and a pair of degenerate ground states which differ by the total fermion parity (i.e., they differ by one in the total fermion number). Note that one of the degenerate ground states, $q^{\dagger}|G\rangle= (\gamma_{-,1}-i\gamma_{+,N})|G\rangle$, is associated with fermion occupation of a non-local state composed out of the pair of dangling MZMs at the two ends.
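The two-fold ground state degeneracy can be confirmed by brute force for a short chain. The added sketch below builds the fermion operators of Eq.~\ref{eq:KM} as explicit $2^N\times 2^N$ matrices via Jordan-Wigner strings and diagonalizes the many-body Hamiltonian at the special point $\mu=0$, $t=\Delta$, $\phi=0$:
\begin{verbatim}
import numpy as np
from functools import reduce

N, t, Delta, mu = 6, 1.0, 1.0, 0.0        # special point of Eq. (KM)
I2, Zs = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])    # on-site annihilation operator

# Jordan-Wigner strings: c_j = Z x ... x Z x a x 1 x ... x 1
c = [reduce(np.kron, [Zs] * j + [a] + [I2] * (N - 1 - j)) for j in range(N)]

H = sum(-mu * cj.T @ cj for cj in c)
for j in range(N - 1):
    hop = -t * c[j].T @ c[j + 1]           # -t c_j^dag c_{j+1}
    pair = -Delta * c[j] @ c[j + 1]        # -Delta c_j c_{j+1}
    H = H + hop + hop.T + pair + pair.T    # + h.c. (all matrices real)

E = np.linalg.eigvalsh(H)
print(np.round(E[:4], 8))   # two ground states at -(N-1)t, then a gap ~ t
\end{verbatim}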
Now let's consider the Hamiltonian in Eq.~\ref{eq:KM} in another simple limit, $\mu<0, t=\Delta=0$. In this case, using Eq.~\ref{eq:KM1}, the Hamiltonian reduces to
\begin{equation}
H_K=-\frac{\mu}{2} \sum\limits_{j=1}^{N}(1+i\gamma_{+,j}\gamma_{-,j})
\label{eq:KM3}
\end{equation}
As shown in Fig.~\ref{fig:Kitaev_Model} top panel, in this case, the Majorana modes $\gamma_{+,j},\gamma_{-,j}$ on the same site $j$ are paired. As a result, there are no longer any dangling MZMs at the two ends.
The system is gapped in this limit as well since introducing a new spinless
fermion excitation costs a non-zero energy $-\mu >0$. This phase is topologically trivial and there is no degeneracy of the ground state associated with different fermion parity. It is important to emphasize that although we have discussed the properties of the topologically non-trivial and trivial phases of Kitaev model only in simple limits, both these phases are gapped. Because of the spectral gap, the properties of these phases are valid more generally even away from the simple limits as long as the gap in the spectrum remains non-zero. The topological properties of phases can change only at a quantum phase transition where the bulk gap collapses.
More generally, applying periodic boundary conditions, and Fourier transforming the Hamiltonian in Eq.~\ref{eq:KM} into momentum space, the Bogoliubov-de Gennes Hamiltonian can be written as (for simplicity we take $\phi=0$ and the lattice constant $a=1$),
\begin{align}
\begin{split}
&H_K=\int dk\Psi^{\dagger}\left(k\right){\cal{H}}_{K}\Psi\left(k\right)\hspace{10mm}\Psi^\dagger (k) =\left(c_k^\dagger,c_{-k}\right)
\\
&{\cal{H}}_{K}=(-2t\cos k-\mu)\tau_z+2\Delta\tau_y \sin k
\end{split}
\label{eq:HamKit}
\end{align}
where $k$ is the momentum and $\tau_z,\tau_y$ are the Pauli matrices operating in the particle-hole space. The bulk band structure for the wire, found by diagonalizing Eq.~\ref{eq:HamKit}, is given by,
\begin{equation}
E(k)=\pm \sqrt{(2t\cos k+\mu)^2+4\left|\Delta\right|^2\sin^2 k},
\label{eq:Kitaev_Band}
\end{equation}
which shows a bulk band gap closing at $k=0$ for $\mu=-2t$ and at $k=\pi$ for $\mu=+2t$, representing the topological quantum phase transitions of the Kitaev model. The topological superconducting phase, with one MZM at each end, occurs for $|\mu|<2t$, while the topologically trivial phase emerges for $|\mu|>2t$.
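Both regimes are easy to exhibit numerically. The following hedged sketch (conventions chosen by us; overall factors of two in the BdG convention do not affect the conclusion) builds the real-space BdG matrix of the open Kitaev chain, Eq.~\ref{eq:KM} with $\phi=0$, in the Nambu basis $(c_1,\dots,c_N,c^{\dagger}_1,\dots,c^{\dagger}_N)$, and shows that two eigenvalues collapse exponentially to zero only for $|\mu|<2t$:
\begin{verbatim}
import numpy as np

def smallest_bdg_energies(N=100, t=1.0, Delta=1.0, mu=0.5):
    up = np.diag(np.ones(N - 1), 1)
    h = -mu * np.eye(N) - t * (up + up.T)     # normal part
    D = Delta * (up - up.T)                   # antisymmetric pairing part
    HBdG = np.block([[h, D], [D.T, -h.T]])    # Nambu-space BdG matrix
    return np.sort(np.abs(np.linalg.eigvalsh(HBdG)))[:2]

print(smallest_bdg_energies(mu=0.5))   # |mu| < 2t: two near-zero end modes
print(smallest_bdg_energies(mu=3.0))   # |mu| > 2t: spectrum gapped
\end{verbatim}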
We will now describe how to formally characterize the topological phase of superconductors such as the Kitaev model
using a topological invariant. A topological invariant is a quantity that can change only at TQPTs where the bulk gap closes. For a superconductor, these are points in the parameter space where the energy eigenvalues of the BdG Hamiltonian go through zero. One might think that the determinant of the BdG Hamiltonian could be used as a topological invariant because it is given by the product of the energy eigenvalues. Unfortunately, however, the sign of the determinant of the BdG Hamiltonian does not change when one of the energy eigenvalues goes through zero, because of the particle-hole symmetry that ensures that the eigenvalues of the BdG Hamiltonian come in $(E,-E)$ pairs. In order to find a suitable quantity that changes sign at each gap closing we define the Pfaffian of the BdG Hamiltonian as explained below.
Any BdG Hamiltonian anti-commutes with a particle-hole symmetry operator, which, for ${\cal{H}}_K$, is given by, $\Lambda=\tau_x K$,
so that
\begin{align}
&\Lambda H_{BdG}\Lambda=-H_{BdG}\\\nonumber
&\textrm{or }\tau_x H_{BdG}\tau_x=-H_{BdG}^*=-H_{BdG}^T.\label{eqPHS}
\end{align}
This relation is analogous to the charge-conjugation constraint for Majorana fermions and guarantees that energy eigenvalues of
the BdG Hamiltonian come in $(\epsilon,-\epsilon)$ pairs.
Thus, the low energy projected BdG Hamiltonian near such a zero-energy level crossing where the energy eigenvalue is $\epsilon$
can be written as
\begin{align}
&H_{BdG}\approx \left(\begin{array}{cc}\epsilon & 0\\0&-\epsilon\end{array}\right),
\end{align}
which is particle-hole symmetric with the operator $\tau_x K$.
Such a crossing of a pair of levels can be characterized using a Pfaffian of the BdG Hamiltonian~\cite{Kitaev2001},
which is written as
\begin{align}
&Pf[\tau_x H_{BdG}]=Pf[\left(\begin{array}{cc}0&\epsilon\\-\epsilon&0\end{array}\right)]=\epsilon,\label{PfHBdG}
\end{align}
and can be seen to change sign when $\epsilon$ crosses zero. For a two-level system, writing the energy eigenvalue $\epsilon$
as a Pfaffian appears as a coincidence. To see that this way of writing $\epsilon$ is more than a coincidence,
let us recall that the Pfaffian of a $2n\times 2n$ anti-symmetric matrix $A$ is defined as
\begin{align}
&Pf[A]=(2^n n!)^{-1}\sum_\pi \epsilon_{i_1 j_1\dots i_n j_n}\prod_k A_{i_k j_k},
\end{align}
where the sum runs over permutations $\pi$ of the $2n$ indices; equivalently, one may restrict the sum to pairings with $i_k<j_k$ and $i_k<i_{k+1}$ and drop the combinatorial prefactor. The determinant of $H_{BdG}$ will not
serve this purpose because $Det[H_{BdG}]=-\epsilon^2$ does not change sign as $\epsilon$ crosses zero.
To compute the Pfaffian associated with $H_{BdG}$, i.e., Eq.~\ref{PfHBdG}, we need to convert $H_{BdG}$
into an anti-symmetric Hamiltonian. This can be done using the particle-hole symmetry of $H_{BdG}$ (i.e. Eq.~\ref{eqPHS}),
which can be re-written as
\begin{align}
&\tau_x H_{BdG}=-[\tau_x H_{BdG}]^T.
\end{align}
Thus, the matrix $A=\tau_x H_{BdG}$ associated with the BdG Hamiltonian $H_{BdG}$ is anti-symmetric and can be used to compute the Pfaffian.
Similar to the Pfaffian of the projected Hamiltonian Eq.~\ref{PfHBdG}, the Pfaffian of the full Hamiltonian $Pf[H_{BdG}\tau_x]$
changes sign at a zero-energy level crossing.
This suggests using $\textrm{sgn}[Pf[H_{BdG}\tau_x]]$ as a topological invariant.
For translationally invariant systems, the BdG Hamiltonian is characterized by the crystal momentum $k$. The Hamiltonian $H_{BdG}(k)$ is particle-hole symmetric on its own only at $k=0,\pi$, which are the only values of $k$ that satisfy $k\equiv -k$ modulo $2\pi$.
While we might be tempted to use the pair $\textrm{sgn}[Pf[H_{BdG}(k=0,\pi)\tau_x]]$ as a two component $Z_2$ topological invariant, a manifestly trivial system, such as a disconnected chain, can have $\textrm{sgn}[Pf[H_{BdG}\tau_x]]=\pm 1$ independent of $k$.
Such trivial systems are characterized by $\textrm{sgn}[Pf[H_{BdG}(k=0)\tau_x]]=\textrm{sgn}[Pf[H_{BdG}(k=\pi)\tau_x]]$, so only the relative sign between $k=0$ and $k=\pi$ carries topological information.
This suggests the topological invariant for superconductors constructed by Kitaev~\cite{Kitaev2001},
\begin{align}
&Q=\textrm{sgn}\left[Pf[\tau_x {\cal{H}}_{K}\left(k=0\right)]Pf[\tau_x {\cal{H}}_{K}\left(k=\pi\right)]\right].\label{topinvKitaev}
\end{align}
This topological invariant also provides information about the two dimensional topological superconductors, which are usually
described by a Chern number topological invariant. Specifically, the topological invariant Eq.~\ref{topinvKitaev} determines
the parity of the Chern number in two dimensional topological superconductors, which is exactly the condition for obtaining an
odd number of Majorana modes in vortex cores~\cite{Ghosh2010}.
The change in the sign of the Pfaffian between $k=0$ and $k=\pi$, i.e., the topological invariant Eq.~\ref{topinvKitaev}, can be
associated with the existence of end Majorana modes for open boundary conditions, thus establishing a bulk-boundary correspondence.
To see this, let us start by considering a long ring of the system with periodic boundary conditions and an odd number of unit
cells $L$. The allowed momenta in that case, $k=2\pi n/L$, include $k=0$ and a set of $(k,-k)$ pairs with $n<(L+1)/2$.
The pairs $(k,-k)$ can be combined into a particle-hole symmetric Hamiltonian, which in the absence of gap closure at $k\neq 0$
is adiabatically connected to $k\simeq 0$. Thus, the Pfaffian for the pair $(k,-k)$ would be positive, so that $\textrm{sgn}[Pf[H_{BdG}(\textrm{periodic})\tau_x]]=\textrm{sgn}[Pf[H_{BdG}(k=0)\tau_x]]$. Similarly for anti-periodic boundary
conditions $\textrm{sgn}[Pf[H_{BdG}(\textrm{anti-periodic})\tau_x]]=\textrm{sgn}[Pf[H_{BdG}(k=\pi)\tau_x]]$. Thus,
the topological invariant Eq.~\ref{topinvKitaev} amounts to a change in the Pfaffian of the BdG Hamiltonian in going from
periodic to anti-periodic boundary conditions.
Changing the boundary conditions
from periodic to anti-periodic is equivalent to changing the hopping across the ring from $t$ to $-t$.
BdG Hamiltonians are, however, $Z_2$-gauge invariant in the sense
that one can change the fermion operators $\psi^\dagger\rightarrow -\psi^\dagger$ by a unitary transformation $U=\exp(i\pi\psi^\dagger\psi)$, which does not affect
the spectrum of the Hamiltonian. Applying such a $Z_2$-gauge transformation to a segment of the SC loop flips the sign of the hopping $t$ at the two ends of the segment.
If the segment is longer than the coherence length of the superconductor, the two ends cannot affect each other. This shows that the bulk spectrum cannot be changed by the change of boundary conditions from $t$ to $-t$. Let us now imagine that we change the hopping adiabatically as $\tilde{t}(\lambda)=\lambda t$,
where the boundary conditions go from periodic to anti-periodic as the parameter $\lambda$ goes from $1$ to $-1$. Because of the $\tilde{t}\rightarrow-\tilde{t}$ symmetry of the spectrum, any zero-energy level crossing that occurs at $\lambda$ also occurs at $-\lambda$. Thus, the only way for the Pfaffian of the BdG Hamiltonian to change sign between $\lambda=1$ and $\lambda=-1$
is for there to be a level crossing at $\lambda=0$. This means that in the topological phase, i.e., $Q=-1$ in Eq.~\ref{topinvKitaev}, the spectrum for open boundary conditions (i.e. $\lambda=0$) has a pair of $E=0$ Majorana modes, one localized at each end of the system.
We can now compute this topological invariant for the Kitaev Hamiltonian by evaluating $\tau_x {\cal{H}}_{K}\left(k\right)$ at $k=0,\pi$:
\begin{align}
&\tau_x{\cal{H}}_{K}\left(k=0,\pi\right)=-(\mu\pm 2 t)\tau_x\tau_z=i(\mu\pm 2 t)\tau_y.
\end{align}
Substituting this into Eq.~\ref{topinvKitaev}, the topological invariant becomes,
\begin{align}
&Q_{Kit}=\textrm{sgn}\left[\mu^2-(2t)^2\right],
\end{align}
which reproduces exactly the criterion $|\mu|<2t$ (i.e., $Q_{Kit}=-1$) for topological superconductivity in the Kitaev model.
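A direct numerical transcription of this computation (an added check) reproduces the phase boundary:
\begin{verbatim}
import numpy as np

tx = np.array([[0.0, 1.0], [1.0, 0.0]])
tz = np.array([[1.0, 0.0], [0.0, -1.0]])

def Q(t, mu):
    def pf(k):
        # the tau_y sin(k) term of Eq. (HamKit) vanishes at k = 0, pi
        Hk = (-2.0 * t * np.cos(k) - mu) * tz
        A = tx @ Hk                  # antisymmetric at these momenta
        return A[0, 1]               # Pfaffian of a 2x2 antisymmetric matrix
    return int(np.sign(pf(0.0) * pf(np.pi)))

print(Q(1.0, mu=0.5))   # -1: topological (|mu| < 2t)
print(Q(1.0, mu=3.0))   # +1: trivial     (|mu| > 2t)
\end{verbatim}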
\subsection{Majorana fermions and Majorana zero modes in one dimensional $p-$wave superconductors}
The Kitaev Hamiltonian $H_{K}$ in the continuum limit can be viewed as a $p-$wave superconductor.
For small $k$ in Eq.~\ref{eq:Kitaev_Band}, using the approximations $\sin k \sim k$ and $\cos k \sim 1-\frac{k^2}{2}$, the Hamiltonian $H_{K}$ in Eq.~\ref{eq:HamKit} can be approximated as,
\begin{align}
\begin{split}
&H_K=\int dk\Psi^{\dagger}\left(k\right){\cal{H}}_{K}\Psi\left(k\right)\hspace{10mm}\Psi^\dagger (k) =\left(c_k^\dagger,c_{-k}\right)
\\
&{\cal{H}}_{K}= (tk^2- {\tilde{\mu}})\tau_z + \tilde{\Delta}k\tau_y
\end{split}
\label{eq:small_k}
\end{align}
where $\tilde{\mu}=\mu + 2t$ and $\tilde{\Delta}=2\Delta$ (in real space, $k\rightarrow -i\partial_x$ and the pairing term reads $-i\tilde{\Delta}\partial_x\tau_y$).
As shown by the energy eigenvalues in Eq.~\ref{eq:Kitaev_Band}, for small $k$ the topological quantum phase transition is at $\mu = -2t$, or ${\tilde{\mu}}=0$. The Hamiltonian in Eq.~\ref{eq:small_k} can be mapped onto the transverse-field Ising model by a Jordan-Wigner transformation~\cite{Pfeuty}. Modifications of Eq.~\ref{eq:small_k} by longer range hopping and pairing terms \cite{Chakravarty,Sen} reveal the existence of multiple topological phases with more than one MZM at each end, protected by chirality symmetry \cite{tewari2012topological}. Exact analytical solutions of Eq.~\ref{eq:small_k} for a wire of finite length $L$ reveal exponentially localized MZMs and splitting oscillations of the MZM wave functions~\cite{Chuanchang2019}. In the following, we will see how the bulk excitations of such a superconductor are Majorana fermions, while the
end excitations can be viewed as Majorana bound states.
Let us see how the excitations of a $p$-wave superconductor with the Hamiltonian Eq.~\ref{eq:small_k} are Majorana
fermions~\cite{Read2000} in the sense of being described by the one dimensional version of the Majorana equation Eq.~\ref{eq:ME}.
The Hamiltonian of the $p$-wave superconductor Eq.~\ref{eq:small_k}, in the small $k$ limit where we drop the $tk^2$ term, can be written in real space as:
\begin{align}
&H=\int dx \left[m\psi^\dagger(x)\psi(x)+i \Delta (\psi^\dagger(x)\partial_1 \psi^\dagger(x)-\psi(x)\partial_1\psi(x))\right],\label{Hpx}
\end{align}
where $\psi^\dagger(x)=\sum_k c_k^\dagger e^{i k x}$.
We note that $m$ plays the role of the chemical potential ${\tilde{\mu}}$.
The equation of motion for the fermion operator is written as
\begin{align}
&\partial_0\psi = i[H,\psi]=-\partial_1\psi^\dagger-m\psi.\label{BdGpwave}
\end{align}
Let us now compare this to the Majorana equation in ($1+1$) dimension,
which is the generalization of Eq.~\ref{eq:ME} to one spatial dimension and is written as,
\begin{align}
&i [\gamma_0\partial_0+\gamma_1\partial_1]\Psi+m{\cal{C}}\Psi^*=0.\label{ME1D}
\end{align}
Here, the $\gamma$ matrices are chosen to be the ones shown in Eq.~\ref{gammaJR}.
The solutions of this equation satisfy the Majorana constraint $\Psi={\cal{C}}\Psi^*=\sigma_x\Psi^*$, which is solved automatically by
writing $\Psi^T=(\psi,\psi^*)$.
Writing the equation by components, similar to Eq.~\ref{eq:MWE}, the above Majorana equation of motion becomes
an equation for a single fermion component
\begin{align}
&i [\partial_0 \psi+\partial_1 \psi^*]+m\psi=0,\label{MajSC}
\end{align}
which is identical to the operator form of the time-dependent BdG equation for a $p-$wave superconductor in Eq.~\ref{BdGpwave}.
The two component Majorana equation Eq.~\ref{ME1D}, with the Majorana constraint is equivalent to the Dirac equation. Multiplying
by $\gamma_0$ we can rewrite Eq.~\ref{ME1D} in a form that is identical to the familiar time-dependent Bogoliubov-de Gennes equation corresponding
to the Hamiltonian Eq.~\ref{Hpx}, which is written as
\begin{align}
&i \partial_0\Psi=-i\sigma^2\partial_1\Psi+m\sigma^3\Psi,\label{tBdG}
\end{align}
where the spinor $\Psi^T=(\psi,\psi^*)$ would be called a Nambu spinor. For stationary states, we can expand $\Psi\propto \Psi_\epsilon e^{-i \epsilon t}$, which reduces this to the conventional BdG equation
\begin{align}
&-i\sigma^2\partial_1\Psi+m(x)\sigma^3\Psi=\epsilon\Psi,\label{pxBdG}
\end{align}
where the charge conjugation constraint for the Majorana mode Eq.~\ref{Majcons} is now equivalent to the particle-hole symmetry of BdG equations and maps energy eigenvalues of the above equation from $\epsilon$ to $-\epsilon$.
Thus, the bulk quasiparticles in the $p-$wave superconductor realize a solid state analog of Majorana fermions~\cite{Read2000}, exactly in
the same way that semiconductors with a massive Dirac dispersion realize Dirac fermions. Despite the BCS Hamiltonian having been
written down more than fifty years ago, it was only recently pointed out that the Majorana nature of such superconducting quasiparticles
can be tested by observing the annihilation of pairs of Bogoliubov quasiparticles~\cite{beenakker2014annihilation}.
Let us now discuss zero energy bound states localized in defects of the Majorana mass in Eq.~\ref{ME1D} following the observation that
Eq.~\ref{pxBdG} is formally identical to the Jackiw-Rebbi Eq.~\ref{wa}.
Because of the correspondence between the Majorana equation Eq.~\ref{ME1D} and
the $p-$wave superconductors discussed above, such defects correspond to defects in the $p-$wave superconductor Eq.~\ref{eq:small_k}
where one goes from ${\tilde{\mu}}>0$ to ${\tilde{\mu}}<0$ as $x$ crosses $0$. Interestingly, zero-energy modes were predicted
in high-energy physics~\cite{jackiw1981zero}
as bound states in vortices of scalar fields that are coupled to Dirac fermions.
Since we are looking for bound states without momentum conservation,
the appropriate Majorana field ansatz generalizing Eq.~\ref{Eq:Majorana_Field} is written as
\begin{align}
&\Psi(x)=\Phi(x)q_\epsilon e^{i\epsilon t}+q_\epsilon^\dagger e^{-i\epsilon t}{\cal{C}}\Phi^*(x),\label{Psipwave}
\end{align}
where $\Psi(x)$ satisfies the Majorana equation Eq.~\ref{ME1D} or equivalently the time-dependent BdG equation Eq.~\ref{tBdG} if
$\Phi(x)$ satisfies Eq.~\ref{wa}.
As discussed in subsection~\ref{JackiwRebbi}, Eq.~\ref{wa} has a real solution at $\epsilon=0$, $\Phi(x)=\phi_0(x)$, satisfying Eq.~\ref{sxcond}, which is equivalent to the Majorana condition Eq.~\ref{Majcons}, so that $\Phi(x)={\cal{C}}\Phi^*(x)$.
This ensures that $\Psi(x)=[q_{\epsilon=0}+q_{\epsilon=0}^\dagger]\Phi(x)=\gamma\Phi(x)$.
Thus, the operator $\gamma$ associated with the zero-energy state
\begin{equation}
\gamma=[q_{\epsilon=0}+q_{\epsilon=0}^\dagger]=\gamma^\dagger,
\end{equation}
satisfies the Majorana conditions $\gamma^{\dagger}=\gamma$ and $\gamma^2=1$ (with proper normalization), which are the conditions that we found
the Majorana bound states in the Kitaev model to satisfy.
Actually, for a finite-length Kitaev wire, the vacuum outside can be modelled as a region of large negative chemical potential $\tilde{\mu} \ll 0$. When the wire is in the topological phase, $\tilde{\mu}>0$, an edge of the Kitaev wire then acts as the mass domain wall considered above.
Thus, the zero-energy modes associated with the
one dimensional Majorana equation Eq.~\ref{ME1D}, which is the equation obeyed by the excitations of a $p$-wave superconductor,
are Majorana bound states that are adiabatically connected to the end Majorana modes of the Kitaev model.
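To make the connection concrete, one can verify numerically that the Jackiw-Rebbi zero-mode solution indeed annihilates the BdG operator of Eq.~\ref{pxBdG}. The following minimal sketch (with an illustrative mass profile $m(x)=m_0\,\textrm{sgn}(x)$ and the zero mode $\Phi(x)\propto e^{-m_0|x|}\chi$ with $\sigma^1\chi=-\chi$) checks that $(-i\sigma^2\partial_1+m(x)\sigma^3)\Phi\approx 0$ away from the domain wall:
\begin{verbatim}
import numpy as np

m0 = 1.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
chi = np.array([1.0, -1.0]) / np.sqrt(2)          # sigma_1 chi = -chi
psi = np.exp(-m0 * np.abs(x))[:, None] * chi      # zero-mode spinor, shape (Nx, 2)

s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0])
dpsi = np.gradient(psi, dx, axis=0)               # d psi / dx
Hpsi = (-1j * dpsi) @ s2.T + (m0 * np.sign(x))[:, None] * (psi @ s3.T)
print("max |H psi| away from the wall:",
      np.abs(Hpsi[np.abs(x) > 0.1]).max())        # ~0 up to discretization error
\end{verbatim}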
\subsection{Non-Abelian statistics: quantum information processing using Majorana modes}\label{braiding}
As we discussed in the last section, a pair of Majorana zero modes, say $\gamma_{1}$ and $\gamma_2$, is associated with a zero-energy fermion
$q=\gamma_1+i\gamma_2$. The vanishing energy of this fermion means that we can describe the quantum state of the pair of Majorana modes
by the two eigenvalues of the number operator $n=q^\dagger q=0,1$. Alternatively, we will find it convenient to use the conserved fermion parity $F=1-2n=i\gamma_1\gamma_2$
instead of the number operator. This fermion parity degree of freedom
can be used to store and manipulate quantum information. The basic operation for such quantum information processing is to measure
the fermion parity of each pair of Majoranas. We will postpone the details of how such a measurement can be accomplished until
sub-section~\ref{teleportation}. Below we will discuss how all topologically protected operations on Majorana systems, such as braiding, can be
accomplished by a sequence of fermion parity measurements through a scheme called measurement-only quantum computation~\cite{bonderson2008measurement}.
The fundamental resource for manipulating quantum information stored in Majorana modes is
non-Abelian statistics. Non-Abelian statistics is a generalization of quantum statistics of fermions and bosons in particle physics, which
is defined by the transformation of the many-body quantum wave-function under interchange of a pair of particles.
Such an interchange is based on
transport of the Majorana modes from one position to another~\cite{Alicea2011}. To understand how this transport can be accomplished by
only fermion parity measurement, without physically moving the particles through the system~\cite{vijay2016teleportation},
consider three Majorana modes $\gamma_1$, $\gamma_2$ and $\gamma_3$. Let us start in an eigenstate $\ket{\Psi}$ of the fermion parity of
$\gamma_2$ and $\gamma_3$ so that $i\gamma_2\gamma_3\ket{\Psi}=\zeta_{23}\ket{\Psi}$,
where $\zeta_{23}=\pm 1$. Such a fermion parity eigenstate may be prepared by measuring the fermion parity
$i\gamma_2\gamma_3$. Following this
we measure the fermion parity $i\gamma_1\gamma_2$ and obtain a result $\zeta_{12}$.
The Majorana operators in the Heisenberg picture change following this measurement and are now labelled as
$\gamma^{'}_1$, $\gamma^{'}_2$ and $\gamma^{'}_3$.
The measurement projects the wavefunction $\ket{\Psi}$ to $\ket{\Psi'}=2^{1/2} \Pi\ket{\Psi}$, where $\Pi=(1+i\zeta_{12}\gamma_1\gamma_2)/2$
is a projection operator into the eigenstate with eigenvalue $\zeta_{12}$ for the fermion parity operator $i\gamma_1\gamma_2$.
We can check that the state $\ket{\Psi'}$ is normalized by noting that
\begin{align}
&\expect{\Psi|(i\gamma_1\gamma_2)|\Psi}\zeta_{23}=\expect{\Psi|(i\gamma_1\gamma_2)(i\gamma_2\gamma_3)|\Psi}=\expect{\Psi|(i\gamma_2\gamma_3)(i\gamma_1\gamma_2)|\Psi},
\end{align}
where comparing the latter two forms
\begin{align}
&\expect{\Psi|(i\gamma_1\gamma_2)|\Psi}\zeta_{23}=-\expect{\Psi|(\gamma_1\gamma_3)|\Psi}=-\expect{\Psi|(\gamma_3\gamma_1)|\Psi}=\expect{\Psi|(\gamma_1\gamma_3)|\Psi}=0.\label{eqPsigamma12}
\end{align}
Since $\zeta_{23}=\pm1$, this shows that $\expect{\Psi|i\gamma_1\gamma_2|\Psi}=0$, so that $||\Pi\ket{\Psi}||^2=\expect{\Psi|\Pi|\Psi}=1/2$ and $\ket{\Psi'}$ is indeed normalized.
The key observation to understand the transformation of Majorana modes is that the total fermion parity operator $i\gamma_1\gamma_2\gamma_3$
for the three Majorana operators involved is left invariant by any fermion parity measurement involving the Majorana bound states $\gamma_1$,
$\gamma_2$ or $\gamma_3$. To see how this is true let us compute the expectation of the fermion parity $i\gamma_1\gamma_2\gamma_3$,
together with an operator $\cal{O}$ that is independent of $\gamma_1$,
$\gamma_2$ or $\gamma_3$, with respect to $\ket{\Psi'}$ as
\begin{align}
&2^{-1}\expect{\Psi'|i\gamma_1\gamma_2\gamma_3{\cal{O}}|\Psi'}=\expect{\Psi|\Pi(i\gamma_1\gamma_2)\gamma_3{\cal{O}}\Pi|\Psi}=\expect{\Psi|\Pi(i\gamma_1\gamma_2)\gamma_3{\cal{O}}|\Psi},\label{eq59}
\end{align}
where we have used the fact that $[\Pi,(i\gamma_1\gamma_2)\gamma_3{\cal{O}}]=0$ and that $\Pi^2=\Pi$ as expected for a projection operator.
Continuing the above chain of equations
\begin{align}
&\expect{\Psi|\Pi(i\gamma_1\gamma_2)\gamma_3{\cal{O}}|\Psi}=2^{-1}[\expect{\Psi|(i\gamma_1\gamma_2)\gamma_3{\cal{O}}|\Psi}+\zeta_{12}\expect{\Psi|\gamma_3{\cal{O}}|\Psi}]=2^{-1}\expect{\Psi|(i\gamma_1\gamma_2)\gamma_3{\cal{O}}|\Psi},\label{eq60}
\end{align}
where we used $\Pi(i\gamma_1\gamma_2)=[(i\gamma_1\gamma_2)+\zeta_{12}]/2$, and the second term vanishes because conjugating $\gamma_3{\cal{O}}$ by $i\gamma_2\gamma_3$ flips its sign while leaving $\ket{\Psi}$ invariant.
Comparing the first term of Eq.~\ref{eq59} and the last term of Eq.~\ref{eq60}, we see that the operator $i\gamma_1\gamma_2\gamma_3$ is
indeed preserved as promised. The next step is to observe that $\ket{\Psi}$ is an eigenstate of the fermion parity $i\gamma_2\gamma_3$,
while $\ket{\Psi'}$ is an eigenstate of $i\gamma_1\gamma_2$. Applying this to the invariance relation we proved we see that
\begin{align}
&\zeta_{12}\expect{\Psi'|\gamma_3{\cal{O}}|\Psi'}=\expect{\Psi'|i\gamma_1\gamma_2\gamma_3{\cal{O}}|\Psi'}=\expect{\Psi|(i\gamma_1\gamma_2)\gamma_3{\cal{O}}|\Psi}=\zeta_{23}\expect{\Psi|\gamma_1{\cal{O}}|\Psi}.
\end{align}
This shows that $\gamma_1$ is transferred into $\zeta_{23}\zeta_{12}\gamma_3$ by the fermion parity measurement of $\gamma_1$ and $\gamma_2$.
As an example of how to perform an exchange using such a transport process, consider the exchange of the Majorana modes $\gamma_{1,2}$, which will require
two auxiliary pairs $\gamma_{3,4}$ and $\gamma_{5,6}$ in eigenstates with eigenvalues $\zeta_{34}$ and $\zeta_{56}$ respectively. For the first
step we measure $i\gamma_2\gamma_3$ to obtain an eigenvalue $\zeta_{23}$. This transfers $\gamma_2\rightarrow \zeta_{34}\zeta_{23}\gamma_4$.
For the next step we measure $i\gamma_1\gamma_2$ to obtain an eigenvalue $\zeta_{12}$. This transfers $\gamma_{1}\rightarrow \zeta_{23}\zeta_{12}\gamma_3$.
The result of this pair of measurements is to transfer the pair $(1,2)$ to the pair $(3,4)$. The next steps repeat this by sending $(3,4)$ to $(5,6)$ followed by sending
the pair $(5,6)$ back to $(2,1)$. The final result of this is the braiding relation
\begin{align}
&\gamma_1\rightarrow \zeta\gamma_2\nonumber\\
&\gamma_2\rightarrow -\zeta\gamma_1,
\end{align}
where $\zeta=\zeta_{34}\zeta_{23}\zeta_{56}\zeta_{45}\zeta_{21}\zeta_{62}$. Noting that the parity operator satisfies $(i\gamma_1\gamma_2)^2=1$, we can write the
above exchange as the unitary operator
\begin{align}
&U_{12}(\zeta)=e^{-\pi\zeta\gamma_1\gamma_2/4}=\frac{1}{\sqrt{2}}(1-\zeta\gamma_1\gamma_2).
\end{align}
Using the relations $\{\gamma_i,\gamma_j\}=2\delta_{i,j}$, it should be easy to check that
\begin{align}
&U_{12}(\zeta)\gamma_1 U_{12}^\dagger(\zeta)=\frac{1}{2}(1-\zeta\gamma_1\gamma_2)\gamma_1 (1+\zeta\gamma_1\gamma_2)=\frac{\gamma_1}{2}(1+\zeta\gamma_1\gamma_2)^2=\gamma_1(\zeta\gamma_1\gamma_2)=\zeta\gamma_2.
\end{align}
and similarly $U_{12}(\zeta)\gamma_2 U_{12}^\dagger(\zeta)=-\zeta \gamma_1$.
In the measurement-based scheme the value of $\zeta$ determining the sign of the exchange is random, but computable from the results of the measurements
in the various steps. If one is unhappy with the result, one can always continue to measure until one obtains the desired sign of $\zeta$. Ultimately, this is an
overhead in the computation, which is not greatly significant, though it can be avoided in a deterministic Hamiltonian-based approach~\cite{sau2011controlling} as opposed to the measurement-based approach discussed here.
We can convince ourselves of the non-Abelian nature of these operations by computing a pair of exchanges of $1$ and $2$ and then of $2$ and $3$, followed by
the reverse exchanges in the same order
\begin{align}
&U_{12}(\zeta_{12})U_{23}(\zeta_{23})U_{12}(\zeta_{12})^\dagger U_{23}(\zeta_{23})^\dagger = e^{(\pi/4)\zeta_{23}(1-\zeta_{12})\gamma_{2}\gamma_3}.\label{eqreverse}
\end{align}
Note that even though we performed each operation forward and then in reverse, the combined effect of the exchanges is non-trivial (i.e. not the identity
operation) if $\zeta_{12}\neq 1$. By multiplying both sides of Eq.~\ref{eqreverse} by
$U_{23}(\zeta_{23})U_{12}(\zeta_{12})$ on the right, we see that the two exchange operations performed in different orders produce different outcomes
\begin{align}
&U_{12}(\zeta_{12})U_{23}(\zeta_{23}) = e^{(\pi/4)\zeta_{23}(1-\zeta_{12})\gamma_{2}\gamma_3}U_{23}(\zeta_{23})U_{12}(\zeta_{12})\neq U_{23}(\zeta_{23})U_{12}(\zeta_{12}).
\end{align}
Thus, Majorana modes obey non-Abelian exchange statistics.
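The braiding algebra above is easy to verify with explicit matrices. In the following minimal sketch we represent four Majorana operators on two fermions via a Jordan-Wigner construction (an assumption of this illustration, not part of the measurement protocol itself) and check the exchange relations and the non-commutativity of successive exchanges:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1. + 0j, -1.])

# Four Majorana operators on two qubits (Jordan-Wigner strings)
g = [np.kron(X, I2), np.kron(Y, I2), np.kron(Z, X), np.kron(Z, Y)]

def U(i, j, zeta=1):
    """Exchange operator U_ij = (1 - zeta*g_i*g_j)/sqrt(2)."""
    return (np.eye(4) - zeta * g[i] @ g[j]) / np.sqrt(2)

U12, U23 = U(0, 1), U(1, 2)
print(np.allclose(U12 @ g[0] @ U12.conj().T, g[1]))    # gamma_1 -> +gamma_2
print(np.allclose(U12 @ g[1] @ U12.conj().T, -g[0]))   # gamma_2 -> -gamma_1
print(np.allclose(U12 @ U23, U23 @ U12))               # False: non-Abelian
\end{verbatim}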
The fermion parity degree of freedom associated with a pair of Majorana modes cannot by itself be used as a qubit (i.e. a two-level system) for quantum information
processing. There are several ways of constructing two-level systems from Majorana modes, a process known as encoding~\cite{bravyi2002fermionic}. To understand how qubits can be constructed from Majorana modes, we first note that
the fermion parity of a superconductor is conserved, so that
the total fermion parity of a system of Majorana modes cannot be changed by any of the allowed operations. One efficient, though not the
simplest, way to accommodate this constraint is to use six Majorana modes $\gamma_{j=1,\dots,6}$ to construct two two-level systems described by spin$-1/2$ operators
$S^{(j=1,2)}_{a=1,2,3}$~\cite{karzig2017scalable}. Even though we start from an eight-dimensional Hilbert space associated with six Majorana modes, restricting to a fixed total fermion parity limits us to the four-dimensional Hilbert space associated with the two spin$-1/2$s.
One choice for the effective spin components we can use is written compactly as
\begin{align}
&S^{(j)}_a=i\gamma_{3(j-1)+a}\,\gamma_{3(j-1)+(a\,\textrm{mod}\,3)+1}.
\end{align}
More explicitly, for $j=1$, $S^{(1)}_{a=1,2,3}=i\gamma_1\gamma_2, i\gamma_2\gamma_3$ and $i\gamma_{3}\gamma_1$ respectively. It is easy to check that
the spin operators defined for the two qubits commute, $[S^{(1)}_a,S^{(2)}_b]=0$, as well as obey the usual algebra for spin$-1/2$, i.e. $S^{(j)}_a S^{(j)}_b=i\epsilon_{abc}S^{(j)}_c$. Measurements of pairs of Majorana modes on the island, which generate the non-Abelian statistics, are also equivalent to measurements of the components
of these spin matrices. By using $j=2$ as an auxiliary spin, we can generate the Hadamard gate on the spin $j=1$ by measurements or by non-Abelian exchanges.
Together with joint measurements of sets of four or more Majoranas, which can be done via teleportation (more details in the subsection on teleportation), we can
generate entanglement as well as all quantum operations in the so-called Clifford group~\cite{karzig2017scalable}. These operations by themselves
are almost complete, in the sense of spanning the entire Hilbert space, and can be made complete by adding a phase gate~\cite{nayak2008nonabelian}.
The details of these procedures are beyond the scope of this chapter.
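Nevertheless, the algebraic content of this encoding is simple to check numerically. The sketch below builds six Majorana operators on three fermions (again via Jordan-Wigner strings, purely for illustration) and verifies that the two encoded spins commute with each other and obey the spin$-1/2$ algebra; the ordering of the Majorana pair in the third component is a convention chosen here so that the standard sign of the algebra comes out:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1. + 0j, -1.])
kron = lambda *ops: reduce(np.kron, ops)

# Six Majorana operators on three fermions (Jordan-Wigner strings)
g = [kron(X, I2, I2), kron(Y, I2, I2), kron(Z, X, I2),
     kron(Z, Y, I2), kron(Z, Z, X), kron(Z, Z, Y)]

def spin(i, j, k):
    """Effective spin-1/2 built from the Majorana triple (g_i, g_j, g_k)."""
    return [1j * g[i] @ g[j], 1j * g[j] @ g[k], 1j * g[i] @ g[k]]

S1, S2 = spin(0, 1, 2), spin(3, 4, 5)
print(np.allclose(S1[0] @ S2[1], S2[1] @ S1[0]))   # the two spins commute
print(np.allclose(S1[0] @ S1[1], 1j * S1[2]))      # su(2): S_x S_y = i S_z
\end{verbatim}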
\section{Topological superconductivity in spin-orbit coupled semiconductors}
2D or 1D spinless $p_x+ip_y$ superconductors do not exist in nature. There is some evidence that strontium ruthenate~\cite{Mackenzie2003} may be a layered quasi-2D $p_x+ip_y$ superconductor, but it is spinful. In this system, a certain type of vortex excitation, called a half-quantum vortex (HQV)~\cite{Leggett,Kee2000}, which carries magnetic flux $\frac{hc}{4e}$ as opposed to the usual superconducting flux quantum $\frac{hc}{2e}$, was proposed to support Majorana zero modes~\cite{Tewari_HQV}. Qualitatively, a HQV can be thought of as an ordinary single quantum vortex (i.e., a vortex carrying flux $\frac{hc}{2e}$), but in only one of the two spin sectors. Thus, the HQV in a spinful $p_x+ip_y$ superconductor can inherit some of the properties of $\frac{hc}{2e}$ vortices in spinless superconductors, specifically the occurrence of a MZM at the vortex core.
Despite the possibility of MZMs in HQV's in superconducting strontium ruthenate~\cite{Tewari_HQV,Chung,Jang} and cold fermion systems in the presence of a p--wave Feshbach resonance~\cite{Regal,Ticknor,Schunck}, actually realizing MZMs in these systems is quite challenging. In strontium ruthenate, it is not known for sure whether the symmetry of the superconducting order parameter is indeed spin-triplet $p_x+ip_y$. Also, even if the order parameter has the appropriate symmetry and the required HQV's can be realized experimentally, the mini--gap $\sim \Delta^2/\epsilon_F \sim 0.1$ mK (with $\epsilon_F$ the Fermi energy and $\Delta$ the magnitude of the p--wave order parameter) that separates the MZM from the higher energy regular BdG excitations localized at the vortex core is very small. On the other hand, in cold fermion systems with a p--wave Feshbach resonance in the unitary limit, even if the mini--gap $\sim \Delta^2/\epsilon_F \sim \epsilon_F$ may be relatively large, the p--wave pairs could be unstable, and the short lifetimes of these pairs and molecules represent an important experimental challenge. Finding different and more realistic schemes for realizing MZMs in condensed matter systems is therefore of utmost experimental importance.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Spin_Orbit.pdf}
\caption{Left: Fermi surface in a Zeeman-shifted spin-polarized band. Inducing $s$-wave superconductivity on the Fermi surface is impossible in this case because the spins of the electrons at opposite momenta ($\mathbf{k}$ and $-\mathbf{k}$) are aligned. Right: Fermi surface in a Zeeman-shifted spin-polarized band in the presence of Rashba spin-orbit coupling. Because of the combined effect of Rashba and Zeeman coupling the spins of the electrons make a constant angle with the momentum on the Fermi surface. Spins are no longer exactly aligned at momenta $\mathbf{k}$ and $-\mathbf{k}$ and inducing $s$-wave superconductivity is now possible. Figure reproduced from Sau et al. arXiv:1012.3170~\cite{sau2011chiral}
}\label{fig:Spin_Orbit}
\end{figure}
To devise realistic schemes for realizing MZMs, we recall that first and foremost we need an appropriate superconductor. To make the scheme experimentally feasible, an ordinary $s$-wave superconductor is most preferable. The second requirement is the existence of spinless fermions, which obviously do not occur naturally in solid state systems. One possible way out is the application of a Zeeman field to the electrons, $V_z \sigma_z$, where $V_z$ is the strength of the Zeeman field and $\sigma_z$ is a Pauli spin matrix. Since the energy of the electrons in the parallel and anti-parallel direction to the Zeeman field is different, this would lead to a shift of the parallel and anti-parallel energy bands. If now the Fermi energy could be tuned to fall in the lower band (Fig.~\ref{fig:Spin_Orbit}), we would have spin-polarized fermions on the Fermi surface. For the purposes of realizing MZMs, spin-polarized fermions are just as good as spinless fermions because in both cases the definition of the Bogoliubov operator does not involve mixing of different spins. One could now think of inducing $s$-wave superconductivity on the Fermi surface, but this is impossible because the spins of the electrons at opposite momenta $\mathbf{k}$ and $-\mathbf{k}$ on the Fermi surface are exactly aligned. To work around this problem one could think of electrons having a strong Rashba spin-orbit coupling $\alpha (\vec{\sigma} \times \vec{p})\cdot \hat{z}$, where $\alpha$ is the strength of the Rashba coupling. Due to the combined effects of $\alpha$ and $V_z$, the spins of the electrons make a constant angle with the momenta on the Fermi surface. Since the spins of the electrons at momenta $\mathbf{k}$ and $-\mathbf{k}$ on the Fermi surface are no longer exactly aligned (Fig.~\ref{fig:Spin_Orbit}), $s$-wave superconductivity is now possible. One could now think of inducing $s$-wave superconductivity on the electrons with Rashba spin-orbit coupling and Zeeman field by proximity effect from a bulk ordinary $s$-wave superconductor. Ideas along these lines but without full analytical solutions of the resulting BdG equations and Majorana zero modes were proposed earlier in Refs.~[\onlinecite{Fujimoto2008,Zhang2008,Sato}].
To derive the BdG Hamiltonian for the spin-orbit coupled semiconductor with proximity induced $s$-wave superconductivity, let us start by writing down the non-superconducting
part of the Hamiltonian i.e. that of the spin-orbit coupled semiconductor in a Zeeman field as
\begin{equation}
H_N=\int d\mathbf{r}\, \sum_{\sigma,\sigma'}c^{\dagger}_{\sigma}(\mathbf{r})H_{0,\sigma\sigma'} c_{\sigma'}(\mathbf{r}),
\end{equation}
where the normal part of the Hamiltonian density is written as
\begin{equation}
H_0=[\frac{p^2}{2 m^*}\!-\!\mu\!+\!V_z \sigma_z\!+\!\alpha (\vec \sigma \!\times \! \vec p)\!\cdot\! \hat{z}],
\end{equation}
and $c_{\sigma}^{\dagger}(\mathbf{r})$ are the creation operators for electrons with spin $\sigma$.
The proximity-induced superconducting pairing potential is written as
\begin{equation}
H_{p}=\int d\mathbf{r}\,\{\Delta(\mathbf{r})c^{\dagger}_{\uparrow}(\mathbf{r})c^{\dagger}_{\downarrow}(\mathbf{r})+\rm H.c\},
\end{equation}
where $\Delta(\mathbf{r})$ is the proximity-induced superconducting pair potential.
The BdG equations describe the Bogoliubov excitation operators of this Hamiltonian which are of the form
\begin{equation}
\gamma^\dagger=\int d\mathbf{r}\,\sum_{\sigma}u_{\sigma}(\mathbf{r})c_{\sigma}^{\dagger}(\mathbf{r})+v_{\sigma}(\mathbf{r})c_{\sigma}(\mathbf{r})
\label{eq:BCSqp}
\end{equation}
and are defined by the equation
\begin{equation}\label{eq:qpeqn}
[H_{BdG},\gamma^\dagger]=E\gamma^\dagger.
\end{equation}
Here, $H_{BdG}$ is defined as $H_{BdG}=H_{N}+ H_{p}$.
By encoding the particle and the hole components of the wave-function $u_{\sigma}(\mathbf{r})$ and $v_{\sigma}(\mathbf{r})$ into a Nambu spinor
$\Psi(\mathbf{r})=(u_{\uparrow}(\mathbf{r}), u_{\downarrow}(\mathbf{r}),-v_{\downarrow}(\mathbf{r}),v_{\uparrow}(\mathbf{r}))$, the above
operator equation can be written as a BdG equation for the wave-function $\Psi(\mathbf{r})$ as
\begin{equation}
H_{BdG}\Psi(\mathbf{r})=\left(\begin{array}{cc}H_0&\Delta(\mathbf{r})\\\Delta^*(\mathbf{r})&-\sigma_y H_0^* \sigma_y\end{array}\right)\Psi(\mathbf{r})=E\Psi(\mathbf{r}).
\label{eq:H5}
\end{equation}
The spins of the hole components in $\Psi(\mathbf{r})$ are inverted, which leads to the $\sigma_y$ factors in the lower-right block of the
Hamiltonian and makes the spin-rotation symmetry of the singlet superconductivity manifest.
The matrix structure in the particle-hole space of $H_{BdG}$ can be captured by Pauli matrices, which allow one to
write the BdG Hamiltonian for the system as
\begin{equation}
H_{BdG}=[\frac{p^2}{2 m^*}\!-\!\mu\!+\!\alpha (\vec \sigma \!\times \! \vec p)\!\cdot\! \hat{z}]\tau_z\!+\!V_z \sigma_z+[\Delta({\bm r})\tau_++{\rm h.c.}],\label{HBdG}
\end{equation}
where $\tau_+=\tau_-^\dagger=\frac{\tau_x+\imath\tau_y}{2}$.
By applying a Hermitian conjugation to Eq.~\ref{eq:qpeqn} we can see that any solution $\gamma^\dagger$ at energy $E$ of Eq.~\ref{eq:qpeqn} is accompanied by another solution $\gamma$ with energy $-E$. In the spinor language, this corresponds to a new spinor wave-function
that is related by the particle-hole transformation
\begin{equation}
\Xi\Psi(\mathbf{r})=\sigma_y\tau_y \Psi^*(\mathbf{r}),
\end{equation}
where $\Xi=\sigma_y\tau_y K$ is defined as the particle-hole symmetry operator with $K$ being complex conjugation.
The existence of spinor solutions that come in $(E,-E)$ pairs is guaranteed by the particle-hole symmetry
of the BdG Hamiltonian
\begin{equation}
\Xi H_{BdG}\Xi=\sigma_y\tau_y H_{BdG}^*\sigma_y\tau_y=-H_{BdG}.\label{PHS}
\end{equation}
As we show in the next section, a detailed analytical solution of the BdG equations following from Eq.~\ref{eq:H5} near defects of the pair potential $\Delta$, e.g., vortices and sample edges, reveal the existence of MZMs above a critical value of the Zeeman coupling $V_z$.
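The particle-hole symmetry Eq.~\ref{PHS} (in momentum space, $\sigma_y\tau_y H^*_{BdG}(-\mathbf{k})\sigma_y\tau_y=-H_{BdG}(\mathbf{k})$) is easy to confirm numerically for the Bloch Hamiltonian corresponding to Eq.~\ref{HBdG}. A minimal sketch, with illustrative parameter values:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1. + 0j, -1.])

eta, mu, Vz, alpha, Delta = 1.0, 0.5, 1.2, 0.7, 0.3

def H_bdg(kx, ky):          # basis: tau (particle-hole) x sigma (spin)
    k2 = kx**2 + ky**2
    return ((eta * k2 - mu) * np.kron(sz, I2)                 # kinetic, tau_z
            + Vz * np.kron(I2, sz)                            # Zeeman
            + alpha * (ky * np.kron(sz, sx)
                       - kx * np.kron(sz, sy))                # Rashba * tau_z
            + Delta * np.kron(sx, I2))                        # pairing, tau_x

Xi = np.kron(sy, sy)        # sigma_y tau_y (complex conjugation applied below)
kx, ky = 0.3, -0.8
print(np.allclose(Xi @ H_bdg(-kx, -ky).conj() @ Xi,
                  -H_bdg(kx, ky)))                            # True
\end{verbatim}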
\subsection{Vortex bound states in semiconductor-superconductor structures}
In this section, we show how one can derive explicitly the existence of the Majorana bound states in
a vortex in a spin-orbit coupled semiconductor superconductor heterostructure~\cite{sau2010nonabelian}. Let us start by
introducing a vortex into $\Delta(\bm r)$ by assuming it to be
of the form $\Delta(r,\theta)=|\Delta(r)|e^{i n\theta}$, where $n=0,1,\dots$ is the multiplicity of the vortex.
In the absence of a vortex (i.e. $n=0$), the BdG Hamiltonian has a rotation symmetry
generated by the total angular momentum $J_z=L_z+\sigma_z/2$. The spin-orbit coupling
term proportional to $\alpha$ couples spin and orbital angular momentum so that only the total angular momentum $J_z$ is conserved. Adding a vortex changes this angular momentum operator, because a rotation $\theta\rightarrow\theta+\varphi$ shifts the superconducting phase as $\Delta(r,\theta)\rightarrow \Delta(r,\theta+\varphi)=\Delta(r,\theta)e^{in\varphi}$. This phase is generated by application of the unitary operator $U=e^{i n\tau_z\varphi/2}$ using the relation $U^\dagger \tau_+ U=e^{in\varphi}\tau_+$. We can include this unitary operator in the rotation operator
by modifying the total angular momentum operator to
\begin{equation}
J_{z}=L_z+\frac{1}{2}(\sigma_z-n\tau_z).
\label{eq:Jz}
\end{equation}
With this choice, it is a straightforward calculation to check that $J_z$ is conserved
i.e. $[J_z,H_{BdG}]=0$.
This allows us to assume the vortex solutions to be eigenstates of total angular
momentum $J_z=m_J$. The angular (i.e. $\theta$) dependence of such eigenstates
is constrained entirely by $m_J$, so that the wave function can be written entirely
in terms of a radial spinor:
\begin{equation}
\Psi_{m_J}(r,\theta)=e^{\imath L_z \theta}\Psi_{m_J}(r)=e^{\imath (m_J-\sigma_z/2+n\tau_z/2) \theta}\Psi_{m_J}(r)\label{eq:theta},
\end{equation}
where the radial spinor $\Psi_{m_J}(r)=\left(u_{\uparrow,m_J}(r),u_{\downarrow,m_J}(r),v_{\downarrow,m_J}(r),-v_{\uparrow,m_J}(r)\right)^T$.
We are now at a point where we can write the full BdG Hamiltonian for an $n$-fold vortex in polar coordinates as
\begin{equation}
H_{BdG}=(-\eta\nabla^2-\mu)\tau_z + V_z\sigma_z+\imath\frac{\alpha}{2} (\sigma_+p_--\sigma_-p_+)\tau_z+\Delta(r)[\cos{(n\theta)}\tau_x+\sin{(n\theta)}\tau_y],
\end{equation}
where $\eta=\frac{\hbar^2}{2 m^*}$, $\sigma_+=\sigma_-^\dagger=\sigma_x+\imath \sigma_y$
and $p_+=p_x+\imath p_y=e^{\imath\theta}(-\imath\partial_r+\frac{1}{r}\partial_\theta)$ and $p_-=p_x-\imath p_y=e^{-\imath\theta}(-\imath\partial_r-\frac{1}{r}\partial_\theta)$.
With this Hamiltonian the BdG equation can be written as $H_{BdG}\Psi_{m_J}(r,\theta)=E\Psi_{m_J}(r,\theta)$.
We can now use the angular dependence in Eq.~\ref{eq:theta} to write a purely
radial (i.e. one dimensional) BdG equation in terms of a radial BdG Hamiltonian
\begin{align}
&\tilde{H}_{BdG,m_J}=e^{-\imath (m_J-\sigma_z/2+n\tau_z/2)\theta}H_{BdG}e^{\imath (m_J-\sigma_z/2+n\tau_z/2)\theta}.
\end{align}
By substituting the full Hamiltonian $H_{BdG}$ into this equation, we get the radial
Hamiltonian as,
\begin{align}
&\tilde{H}_{BdG,m_J}=-\{\eta(\partial_r^2+\frac{1}{r}\partial_r+\frac{(2 m_J-\sigma_z+n\tau_z)^2}{4 r^2})+\mu\}\tau_z + V_z\sigma_z\nonumber\\
&-\frac{\imath\alpha}{2} \{\sigma_+-\sigma_-\}\tau_z\partial_r-\imath\frac{\alpha}{2 r} \{\sigma_+\frac{2 m_J+n\tau_z+1}{2}+\sigma_-\frac{2 m_J+n\tau_z-1}{2}\}\tau_z+\Delta(r)\tau_x.
\end{align}
The resulting BdG equations are now much more tractable because they are one dimensional i.e. radial though complex. We can make this Hamiltonian real by performing a $\pi/4$ spin rotation via the unitary transformation $U=e^{i\pi \sigma_z/4}$ so that
$U\sigma_+ U^\dagger=i\sigma_+$. Following this transformation, the solutions
of the BdG equation can be assumed to be real without loss of generality.
However, the BdG equations are still quite challenging because they are
four component coupled second order differential equations.
We can reduce the complexity of this problem by half by taking advantage of
the particle-hole symmetry of the zero-energy Majorana mode in the vortex.
The particle-hole symmetry operator, $\Xi$,
transforms the $J_z=m_J$ spinor eigenstate with energy $E$
into a $-m_J$ eigenstate with energy $-E$ because
$$\Xi e^{\imath (m_J-\sigma_z/2+n\tau_z/2) \theta}\Psi_{m_J}(r)= e^{\imath (-m_J-\sigma_z/2+n\tau_z/2) \theta}\Xi\Psi_{m_J}(r).$$
Since a Majorana mode is particle-hole symmetric, it must be associated with
the quantum number $m_J=0$.
Focusing on this channel of the BdG Hamiltonian, $\tilde{H}_{BdG, m_J =0}=\tilde{H}_{BdG}$, and using the fact that particle-hole symmetry changes the sign
of the energy $E$, we note that such a particle-hole symmetry for a real Hamiltonian is tantamount to a
chiral symmetry
\begin{align}
&\sigma_y\tau_y \tilde{H}_{BdG}\sigma_y\tau_y=-\tilde{H}_{BdG}.\label{eqchiralsymmetry}
\end{align}
It is easy to check that zero-energy solutions $\Psi_{m_J=0}(r)=\Psi(r)$ of such a chiral symmetric
Hamiltonian would be an eigenstate of the chiral symmetry operator $S=\sigma_y\tau_y$
so that
\begin{align}
&S\Psi(r)=\lambda \Psi(r),
\end{align}
where $\lambda=\pm 1$.
Since each eigenvalue $\lambda$ of $S$ is two-fold degenerate, by choosing
an eigenvalue $\lambda$ we can write $\Psi(r)$ in terms of two functions $u_\sigma(r)$
as
$\Psi(r)=\sum_\sigma u_\sigma(r)\eta_\sigma$, where $\eta_\sigma$ are the two eigenspinors of $S$ satisfying $S\eta_\sigma=\lambda\eta_\sigma$. Thus, we can use the particle-hole constraint
to reduce the four component BdG equations to two components.
We can see the reduction of the BdG Hamiltonian to two components by replacing
$\tau_x$ in $\tilde{H}_{BdG}$ by
$\tau_x=\imath\lambda\sigma_y\tau_z$, which follows from $\sigma_y\tau_y=\lambda$.
Making this substitution, which applies only to the $E=0$ states,
the BdG Hamiltonian for a given value of $\lambda$ becomes
\begin{align}
&\tilde{H}_{BdG}=-\{\eta(\partial_r^2+\frac{1}{r}\partial_r+\frac{(-\sigma_z+\tau_z)^2}{4 r^2})+\mu\}\tau_z + V_z\sigma_z\nonumber\\
&-\frac{\alpha}{2} \{\sigma_++\sigma_-\}\tau_z\partial_r-\frac{\alpha}{2 r} \{\sigma_+\frac{\tau_z+1}{2}+\sigma_-\frac{\tau_z-1}{2}\}\tau_z+\imath \lambda\sigma_y\tau_z\Delta(r)\label{eq:decBdg}.
\end{align}
While this still appears to be a $4\times 4$ matrix, the transformed Hamiltonian
commutes with $\tau_z$, so that we can consider each of the
$\tau_z=\pm 1$ sectors (electron and hole) separately.
This allows one to write the BdG differential equation in the
$(\tau_z=+1)$
in terms of the spinor
$\Psi_0(r)=(u_{\uparrow}(r),u_{\downarrow}(r))^T$ for a single vortex $(n=1)$
in the form of a $2\times 2$ matrix differential equation:
\begin{align}\label{eq:zeroenergy}
\!&\!\left(\!\begin{array}{cc}\!\!-\!\eta (\partial_r^2\!+\!\frac{1}{r}\partial_r)\!+\!V_z\!-\!\mu\!&\! \lambda\Delta(r)\!+\!\alpha (\partial_r\!+\!\frac{1}{r} )\\\\ -\lambda \Delta(r)\!-\!\alpha \partial_r \!&\! -\!\eta (\partial_r^2\!+\!\frac{1}{r}\partial_r\!-\!\frac{1}{r^2}\!)\!-\!V_z\!-\!\mu\! \end{array}\!\right)\!\!\Psi_0(r)\!=\!0.
\end{align}
To make progress towards an analytic solution, we approximate the radial dependence
of $\Delta(r)$ by $\Delta(r)=0$ for $r<R$ and $\Delta(r)=\Delta$ for $r\geq R$.
Let us start by considering the range $(r<R)$, which is the non-superconducting region $(\Delta(r)=0)$. In this range, the wave function is simply that of a spin-orbit
coupled semiconductor. In the absence of spin-orbit coupling, each spin-component
has solutions given by Bessel functions $J_0(z r)$ and $J_1(z r)$.
This suggests that we can include spin-orbit coupling by trying a spinor
of the form
\begin{equation}
\Psi(r)= \left(\begin{array}{c}u_{\uparrow}J_0(z r)\\u_{\downarrow}J_1(z r)\end{array}\right).
\label{eq:Bessel}
\end{equation}
We can determine $(u_\uparrow,u_\downarrow)$ and $z$ by substituting Eq.~(\ref{eq:Bessel}) into Eq.~(\ref{eq:zeroenergy}), which then takes the form:
\begin{align}
&\left(\begin{array}{cc}\eta (-\partial_r^2-\frac{1}{r}\partial_r)+V_z-\mu & \alpha (\partial_r+\frac{1}{r} )\\ -\alpha (\partial_r) & \eta (-\partial_r^2-\frac{1}{r}\partial_r+\frac{1}{r^2})-V_z-\mu \end{array}\right)\left(\begin{array}{c} u_{\uparrow} J_0 (z r )\\ u_{\downarrow} J_1(z r)\end{array}\right)\nonumber\\
&=\left(\begin{array}{c}(\eta z^2+V_z-\mu) u_{\uparrow} J_0 (z r )+ z \alpha u_{\downarrow} J_0(z r) \\ z\alpha u_{\uparrow} J_1(z r)+(\eta z^2 -V_z-\mu)u_{\downarrow} J_1(z r)\end{array}\right)=0.
\end{align}
This condition simplifies to
\begin{equation}
\left(\begin{array}{cc}\eta z^2+V_z-\mu & z\alpha \\ \alpha z & \eta z^2 -V_z-\mu \end{array}\right)\left(\begin{array}{c} u_{\uparrow} \\ u_{\downarrow}\end{array}\right)=0.
\label{eq:matrix}
\end{equation}
The value of $z$ can be determined by setting the determinant of the above matrix to
zero. This leads to the equation for $z$
\begin{equation}
(\eta z^2-\mu)^2-V_z^2-z^2\alpha^2=0.\label{char:eq0}
\end{equation}
Note that the solutions of the above equation come in pairs $\pm z$. However,
the Bessel functions $J_0(zr)$ and $J_1(zr)$ are even and odd functions of
$z$ respectively, so $\pm z$ do not yield independent solutions. Therefore, there are two linearly independent solutions that one
can obtain for $r<R$.
The $r>R$ region has a non-vanishing superconducting pair potential. This region
is complicated to solve analytically except by a power-series solution in $1/r$. However,
since our focus is on whether a normalizable solution exists, what matters is
the condition under which there are exponentially decaying solutions as $r\rightarrow\infty$. Inspired by the asymptotic form of Bessel functions $J_n(zr)\propto e^{-zr}/r^{1/2}$, we consider an ansatz
\begin{align}
&\Psi_0(r)=\frac{e^{-z r}}{r^{1/2}}\left(\begin{array}{c}\rho_{\uparrow}\\\rho_{\downarrow}\end{array}\right).\label{Psi0}
\end{align}
Substituting this ansatz into the BdG equation we get
\begin{align}
&\left(\begin{array}{cc}\eta (-\partial_r^2-\frac{1}{4 r^2}+2 z\partial_r-z^2) +V_z-\mu & \lambda\Delta+\alpha (\partial_r+\frac{1}{2 r} -z)\\ -\lambda\Delta-\alpha (\partial_r-\frac{1}{2 r}-z) & \eta (-\partial_r^2+\frac{3}{4 r^2}+2 z\partial_r-z^2)-V_z-\mu \end{array}\right)\left(\begin{array}{c}\rho_{\uparrow}\\\rho_{\downarrow}\end{array}\right)=0.
\label{eq:64}
\end{align}
We notice that the $r$ dependence disappears in the limit $r\rightarrow \infty$ so that
\begin{align}
&\left(\begin{array}{cc}-\eta z^2 +V_z-\mu & \lambda\Delta-z\alpha \\ -\lambda \Delta+z\alpha & -\eta z^2-V_z-\mu \end{array}\right)\left(\begin{array}{c}\rho_{\uparrow}\\\rho_{\downarrow}\end{array}\right)=0\label{eq:largeRmode_vortex}.
\end{align}
Similar to the case for $r<R$, this equation has a non-trivial solution if $z$ satisfies
the secular equation:
\begin{align}
&Det\left(\begin{array}{cc}-\eta z^2 +V_z-\mu & \lambda\Delta-z\alpha \\ -\lambda\Delta+z\alpha & -\eta z^2-V_z-\mu \end{array}\right)
\nonumber\\
&=(-\eta z^2-\mu)^2-V_z^2+(z\alpha\lambda-\Delta)^2=0\label{eq:char_eq}.
\end{align}
We see from the form of the equation that the solutions $z$ for the two values of
$\lambda$ are related by $z\rightarrow-z$.
We can determine zero-energy Majorana solutions to the BdG equation
associated with the vortex by matching boundary conditions for the spinor $\Psi_0(r)$ between solutions
for $r<R$ and $r>R$ at $r=R$. The boundary conditions for the continuity of the two component
spinor $\Psi_0(r)$ and its derivative $\Psi_0'(r)$ at $r=R$ constitute four linear
equations. These have a non-trivial solution if there are five or more basis solutions, combining the $r<R$ and $r>R$ regions, out of which to construct
$\Psi_0(r)$. Following the analysis of Eq.~\ref{char:eq0}, we concluded
that there were always two linearly independent solutions to use for constructing
$\Psi_0(r<R)$. Therefore, we can obtain normalizable Majorana modes if there
are at least three solutions of $z$ with $Re(z)>0$ to use to construct $\Psi_0(r>R)$.
The existence of normalizable zero-energy Majorana solutions depends crucially on the
number of available roots with $Re(z)>0$ of the above characteristic equation, which has
four roots in all. Since the
characteristic equation is real, solutions $z$ appear in complex conjugate pairs $(z,z^*)$
or are real.
Furthermore, since the coefficient of $z^3$ in Eq.~\ref{eq:char_eq} vanishes, the sum
of the four roots must vanish. This implies that at least one of the roots must have a negative real part (i.e. $Re(z)<0$) and another must have $Re(z)>0$. One possibility is that both these roots are complex, in which case we get only two roots with $Re(z)>0$ and
no Majorana mode. Let us call this case A. If one of these roots is real then it must be accompanied by another real root. Further, if these two real roots have the same sign we are in a similar situation as before and there are no Majorana modes. Let us call this case B. Finally, if these two real roots happen to have opposite signs then three roots
will have real parts with the same sign. By flipping the sign of $\lambda$ if necessary, we can ensure these three roots satisfy $Re(z)>0$ and there will be a zero-energy Majorana
mode. We call this case C.
We can separate the interesting topological case C from the cases A and B by considering
the sign of $C_0=\eta^2\prod_n z_n=(\mu^2+\Delta^2-V_Z^2)$, i.e. the product of
the roots $z_n$ multiplied by the leading coefficient, which is just the characteristic polynomial evaluated at $z=0$.
Note that complex conjugate pairs $(z,z^*)$
do not change the sign of this product $C_0$ since $z z^*=|z|^2>0$.
The case B has pairs of real roots with the same sign. These also do not contribute
to the sign of $C_0$. On the other hand, the topological case C is characterized by
pairs of real roots with opposite signs, so that the product $C_0$ would be negative.
This leads to the condition to realize topological zero-energy Majorana modes
\begin{align}
&C_0=(\mu^2+\Delta^2-V_Z^2)<0.\label{C0}
\end{align}
Therefore Majorana modes are realized only for Zeeman field in excess of a
critical value $V_Z>\sqrt{\Delta^2+\mu^2}$.
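This root-counting argument can also be checked directly. The sketch below (illustrative parameter values) finds the four roots of the characteristic polynomial Eq.~\ref{eq:char_eq} with {\tt numpy.roots} and counts the decaying solutions, confirming that three roots with $Re(z)>0$ are available (for the appropriate sign of $\lambda$) exactly when $C_0<0$:
\begin{verbatim}
import numpy as np

def n_decaying(eta, mu, Vz, alpha, Delta, lam):
    """Roots of (-eta z^2 - mu)^2 - Vz^2 + (alpha*lam*z - Delta)^2 = 0."""
    coeffs = [eta**2, 0.0, 2 * eta * mu + alpha**2,
              -2 * alpha * lam * Delta, mu**2 + Delta**2 - Vz**2]
    return np.sum(np.roots(coeffs).real > 0)

eta, alpha, Delta, mu = 1.0, 0.7, 0.3, 0.1
for Vz in [0.2, 0.6]:            # below and above sqrt(mu^2 + Delta^2)
    C0 = mu**2 + Delta**2 - Vz**2
    n = max(n_decaying(eta, mu, Vz, alpha, Delta, lam) for lam in (+1, -1))
    print(f"V_z = {Vz}: C0 = {C0:+.3f}, decaying roots (best lambda) = {n}")
\end{verbatim}
The output shows two decaying roots for $C_0>0$ (no Majorana mode) and three for $C_0<0$.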
While this gives us the condition for a Majorana mode, we have not yet written down
an explicit form of the solution, although many of its features are already constrained. We have almost exact solutions for $r<R$, except for coefficients
that are determined by the matrix equation Eq.~\ref{eq:matrix}. However, we
considered only the asymptotic $r\rightarrow\infty$ limit for $r>R$. This has been extended~\cite{sau2010nonabelian}
to a systematic power-series solution in $1/r$, which, however, does not change
the qualitative structure of the solution.
\subsection{Domain wall states in the topological phase}
Apart from vortex Majorana states, topological superconductors are characterized
by interesting states associated with edges of the system, which can be created by decreasing the chemical potential $\mu(r)$ or by changing the phase of the superconducting order parameter $\Delta(r)$.
Similar to the case of vortices, we can understand the spectra of such edges by
reducing the BdG equation from Eq.~\ref{HBdG} to one dimension by assuming parameters
to vary only along the $x-$direction. In this case we can assume the BdG spinor
to be a plane wave with wave vector $k_y$ along the y direction so that the
BdG Hamiltonian is
\begin{equation}
H_{BdG}=(-\eta \partial_x^2-\mu(x)+\imath\alpha\sigma_y\partial_x-\alpha k_y\sigma_x)\tau_z + V_Z \sigma_z+\Delta(x)\tau_x.\label{edgeH}
\end{equation}
Similar to the angular momentum quantum number $m_J$ in Eq.~\ref{eq:theta}, $k_y$
transforms to $-k_y$ under particle-hole symmetry $\Xi$ because of the complex
conjugation operator $K$. As a result $H_{BdG}$ is still particle-hole symmetric and
the zero-energy Majorana operators appear at $k_y=0$.
The BdG Hamiltonian at $k_y=0$ is particle-hole symmetric and real, so it is chiral
symmetric with the chiral symmetry operator $S$. As before, we can then assume the zero-energy
Majorana mode to be an eigenstate of $S$, so that we can replace $\tau_x$ by $\imath\lambda\sigma_y\tau_z$.
This reduces the BdG Hamiltonian to a two component form similar to Eq.~\ref{eq:decBdg} as
\begin{align}
&\tilde{H}_{BdG}=-\{\eta\partial_x^2+\mu(x)-\imath\frac{\alpha}{2} \sigma_y\partial_x\}\tau_z + V_z\sigma_z+\imath \lambda\sigma_y\tau_z\Delta(x).\label{eqdomain}
\end{align}
The corresponding BdG equation, similar to the case of vortices (i.e. Eq.~\ref{eq:zeroenergy}) is a $2\times 2$
coupled differential equation as well:
\begin{align}
\!&\!\left(\!\begin{array}{cc}\!\!-\!\eta (\partial_x^2\!)\!+\!V_z\!-\!\mu(x)\!&\! \lambda\Delta(x)\!+\!\alpha (\partial_x\!)\\\\ -\lambda \Delta(x)\!-\!\alpha \partial_x \!&\! -\!\eta (\partial_x^2\!\!)\!-\!V_z\!-\!\mu(x)\! \end{array}\!\right)\!\!\Psi_0(x)\!=\!0.
\end{align}
As in the case of vortices we will consider the parameters $\mu$ and $\Delta$ to be constants at different values across a domain wall at $x=0$. On each side we can expand $\Psi_0(x)$ in terms of plane-waves $\Psi_0(x)\propto e^{zx}\Psi_0$ so that the above equation becomes
\begin{align}
\!&\!\left(\!\begin{array}{cc}\!\!-\!\eta (z^2\!)\!+\!V_z\!-\!\mu\!&\! \lambda\Delta\!+\!\alpha z\\\\ -\lambda \Delta\!-\!\alpha z \!&\! -\!\eta (z^2\!\!)\!-\!V_z\!-\!\mu\! \end{array}\!\right)\!\!\Psi_0\!=\!0.\label{eq:Psi0edge}
\end{align}
This equation is identical to Eq.~\ref{eq:largeRmode_vortex} and therefore has three solutions with $Re(z)$ of the same sign only in the case of a topological bulk with $C_0<0$. The properties of domain walls depend on the details of the boundary conditions, which are discussed in the next two sub-sections.
\subsubsection{Edge boundary conditions}
Based on the analogy with the FQHE and chiral $p$-wave superconductors, one
expects a chiral gapless state confined to the edge of the
semiconductor heterostructure.
We model an edge by a chemical potential that increases from $\mu(x)=\mu$ for $x<0$
to $\mu(x)\rightarrow\infty$ for $x>0$, so that $\mu^2>V_z^2$ and the $x>0$ region is trivial.
We will assume that $\Delta(x)=\Delta$ is independent of $x$.
The wave-function $\Psi(x)$ must then vanish at $x=0$. This leads to the boundary condition
\begin{align}
&\Psi(x=0)=\sum_n \Psi_n e^{z_n x}=0,
\end{align}
where $\Psi_n$ and $z_n$ satisfy Eq.~\ref{eq:Psi0edge}. The resulting wave-function $\Psi_n(x)$ will be normalizable
if $Re(z_n)>0$. The boundary condition for the two component spinor $\Psi(x=0)$
contains two constraints, which can be solved if three values of $n$ are available. As we found in the case
of the vortex, Eq.~\ref{eq:Psi0edge} has three solutions with $Re(z_n)>0$ only if $C_0<0$, which
was exactly the topological condition that leads to Majorana zero modes in the vortex.
Let us now discuss the solution of the BdG equation associated with the Hamiltonian in Eq.~\ref{edgeH}
away from $k_y=0$. This Hamiltonian has one zero-energy Majorana state at $k_y=0$. Using perturbation theory,
we can calculate the correction to the energy at finite $k_y$ as
\begin{align}
&E(k_y)\sim \alpha k_y \int dx \Psi_0(x)^\dagger \sigma_x\tau_z \Psi_0(x),
\end{align}
which is linear in $k_y$ at this order. This shows that the mode emerging from the Majorana zero mode at $k_y=0$ of
the topological superconductor with $C_0<0$ is a chiral Majorana mode with a linear dispersion relation.
\subsubsection{Non-chiral Majorana modes in Josephson $\pi$-junctions}\label{nonchiralJJ}
Let us now consider another kind of boundary i.e. a Josephson junction with a phase difference of
$\pi$. In this case, $\mu$ remains the same, while $\Delta(x)$ changes sign at
$x=0$ from $\Delta(x)=\Delta$ for $x<0$ to $\Delta(x)=-\Delta$ for $x>0$.
Such a $\pi$-phase shift Josephson junction, with $\mu$ being constant is described
by the
real BdG Hamiltonian Eq.~\ref{eqdomain}. In contrast to the case of the system edge,
$\mu(x)$ is now assumed to be constant with $\Delta(x)$ changing sign at $x=0$.
Similar to the case of the vortex and the edge, we can solve the BdG equation by
matching plane-wave wave-functions for $x<0$ and $x>0$ at $x=0$. The boundary conditions in this case
consists of matching a two component spinor wave-function $\Psi_0(x=0)$
and its derivative $\Psi_0'(x=0)$, and thus contains four constraints (similar to the
vortex case). Wave-functions for $x<0$ are identical to those for the edge of the system,
which consists of three allowed solutions for $C_0<0$ i.e. the topological phase. The
sign of $\Delta(x)$ flips as we cross the domain wall to $x>0$.
Using Eq.~\ref{eq:char_eq}, we note that this change of sign of $\Delta$ also changes
the sign of $z$ in the same $\lambda$ sector. Thus, in the topological regime, $C_0<0$,
we obtain three solutions with $Re(z_n)<0$. These values correspond to three
normalizable plane-wave solutions for $x>0$. Combining the solutions for $x<0$ and
$x>0$, in the topological regime $C_0<0$, we have six plane-wave solutions to be used
to match four constraints. This leads us to a pair of zero energy Majorana modes
for the $\pi$ Josephson junction at $k_y=0$.
We can move away from $k_y=0$ by computing the matrix elements of the $k_y$
perturbation proportional to $\sigma_x\tau_z$ in Eq.~\ref{edgeH}, similar to the case of the chiral edge states. However, in this case, conjugating the BdG Hamiltonian with $\sigma_z\tau_z$ together with the application of a mirror symmetry $x\rightarrow -x$ has the effect
of flipping $k_y$ without any other change of the system. Therefore, the dispersion
of the system resulting from the perturbation of adding $k_y$, in contrast
to that of the chiral edge states, must be symmetric $E(k_y)=E(-k_y)$. This dispersion is
what is called a non-chiral Majorana mode. The two fold degeneracy of the zero energy Majorana modes at $k_y=0$ is broken by going away from the $\pi$ phase difference, similar to those in topological insulators~\cite{fu2008superconducting}. The resulting dispersion is
that of a massive Dirac/Majorana mode:
\begin{align}
&E=\pm\sqrt{v^2 k_y^2+b^2 (\phi-\pi)^2},\label{eqEJ}
\end{align}
where $v,b$ are constants that would be determined from perturbation theory.
\subsection{Relation to bulk phase transition and topological invariant}
The calculation so far showed us that vortices and edges in the semiconductor-superconductor
heterostructure support Majorana zero modes only when $C_0<0$. Ultimately, this
condition was obtained from combining the boundary conditions for a vortex together
with the equation for the evanescent plane wave wave vector $z$ written in Eq.~\ref{eq:char_eq}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{gapclosure.pdf}
\caption{Left panel shows the effect of a perturbation on a non-degenerate zero-energy sub-gap Majorana mode.
Particle-hole symmetry requires energy levels with non-zero energy to come in $(E,-E)$ pairs. Thus, a perturbation
cannot shift a non-degenerate zero-energy mode, since it would not have a partner, unlike the trivial sub-gap states
shown on the right. These two scenarios must be separated by a bulk phase transition where the gap closure eliminates
any sub-gap states. The right panel shows the bulk spectrum of a spin-orbit coupled semiconductor/superconductor (Eq.~\ref{Ek})
as a function of applied Zeeman field $V_Z$. The applied Zeeman field $V_Z$ first closes the bulk
quasiparticle gap before it reopens as a topological superconducting gap proportional to the strength of spin-orbit coupling $\alpha$.
Right panel reproduced from Sau et al. arXiv:1006.2829~\cite{sau2010nonabelian}.
}\label{fig:gapclosure}
\end{figure}
The wavevector $z$ is a complex number representing the $e^{-zr}$ position dependence
of the wave function in Eq.~\ref{Psi0}. We used this form to conclude that only solutions
with $Re(z)>0$ of Eq.~\ref{eq:char_eq} could be used to construct normalizable solutions.
We found Majorana solutions only when three such solutions of Eq.~\ref{eq:char_eq} existed. While this may appear to be a fine-tuned condition, any adiabatic change of the
parameters that changes this number of solutions must necessarily pass through a point in the parameter
space where $Re(z)=0$. At such a point $z$ is purely imaginary, and we can write $z=i k$. Such purely imaginary values of $z$ correspond to propagating plane-wave states. Making this substitution in the characteristic equation one gets,
\begin{align}
&(\eta k^2-\mu)^2-V_z^2+(ik\alpha\lambda-\Delta)^2=0.\label{eqtop}
\end{align}
This equation contains an imaginary part proportional to $k\alpha\lambda\Delta$, which
forces $k$ to vanish for any real solution.
The bulk spectrum of the Hamiltonian is
\begin{equation}
E_k^2=V_z^2+\Delta^2+\tilde{\epsilon}^2+\alpha^2 k^2\pm 2 \sqrt{V_z^2\Delta^2+\tilde{\epsilon}^2(V_z^2+\alpha^2 k^2)}\label{Ek}
\end{equation}
where $\tilde{\epsilon}=\eta k^2-\mu$.
To associate this relation with bulk properties let us write down the bulk Hamiltonian
corresponding to Eq.~\ref{HBdG} at the point $k=0$ in the momentum space,
\begin{align}
&H_{BdG}(k=0)=-\mu\tau_z+V_z \sigma_z\!+\Delta\tau_x.
\end{align}
The spectrum of this Hamiltonian is
\begin{align}
&E=\pm V_Z\pm\sqrt{\Delta^2+\mu^2}.
\end{align}
These energies vanish precisely when Eq.~\ref{eqtop} is satisfied. Therefore,
the parameters where we go from having zero energy vortex modes in a topological superconductor to the trivial phase of the superconductor
corresponds to a phase transition where the bulk energy gap of the BdG
Hamiltonian in Eq.~\ref{HBdG} closes at $k=0$.
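The gap closure at $k=0$ can be seen directly from Eq.~\ref{Ek}. A minimal numerical sketch (illustrative parameters), minimizing the lower band over $k$ for several Zeeman fields:
\begin{verbatim}
import numpy as np

eta, alpha, Delta, mu = 1.0, 0.7, 0.3, 0.1
k = np.linspace(0, 3, 3001)

def gap(Vz):
    """Minimum over k of the lower BdG band of Eq. (Ek)."""
    eps = eta * k**2 - mu
    E2 = (Vz**2 + Delta**2 + eps**2 + (alpha * k)**2
          - 2 * np.sqrt(Vz**2 * Delta**2 + eps**2 * (Vz**2 + (alpha * k)**2)))
    return np.sqrt(np.clip(E2, 0, None)).min()

Vc = np.sqrt(Delta**2 + mu**2)
for Vz in [0.5 * Vc, Vc, 1.5 * Vc]:
    print(f"V_z/V_c = {Vz/Vc:.1f}: bulk gap = {gap(Vz):.4f}")
\end{verbatim}
The gap vanishes exactly at $V_Z=V_c=\sqrt{\Delta^2+\mu^2}$ and reopens on either side of the transition.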
In fact, this closure of the gap can be understood in terms of an application of Kitaev's
topological invariant in Eq.~\ref{topinvKitaev} to the spinful superconductor, which is written as
\begin{align}
&Q=\textrm{sgn}[Pf[\sigma_y\tau_y H_{BdG}(k=0)]],\label{topinv}
\end{align}
in particle-hole symmetric systems where ``$Pf$'' stands for the Pfaffian. One key difference is
that because of the different Nambu basis we use for spinful superconductors, the particle-hole matrix
in the Kitaev Hamiltonian $\tau_x$ is replaced by $\tau_x\rightarrow\sigma_y\tau_y$. In addition,
for the continuum system used to model systems such as the superconducting nanowire, the $k=\pi$
term in Eq.~\ref{topinvKitaev} is trivial and may be dropped.
By computing this Pfaffian for the $4\times 4$
matrix using the
standard definition~\cite{parameswaran1954skew} we get
\begin{align}
&Q=\textrm{sgn}\left[\mu^2+\Delta^2-V_Z^2\right],
\end{align}
which is exactly the sign of the $C_0$ topological number (see Eq.~\ref{C0}) that we obtained from studying vortices
and edges.
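This Pfaffian is simple enough to evaluate numerically with the standard $4\times 4$ expansion. In the sketch below (an illustrative basis ordering; the overall sign of the Pfaffian depends on the Nambu basis convention, and only its sign change at the transition is meaningful), the Pfaffian changes sign exactly where $C_0=\mu^2+\Delta^2-V_Z^2$ does:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1. + 0j, -1.])

def pf4(M):
    """Pfaffian of a 4x4 antisymmetric matrix (standard expansion)."""
    return M[0, 1] * M[2, 3] - M[0, 2] * M[1, 3] + M[0, 3] * M[1, 2]

mu, Delta = 0.1, 0.3
for Vz in [0.2, 0.6]:       # below / above sqrt(mu^2 + Delta^2)
    H0 = (-mu * np.kron(sz, I2) + Vz * np.kron(I2, sz)
          + Delta * np.kron(sx, I2))
    M = np.kron(sy, sy) @ H0    # sigma_y tau_y H_BdG(k=0): antisymmetric
    print(f"V_z = {Vz}: sgn Pf = {np.sign(pf4(M).real):+.0f}, "
          f"sgn C0 = {np.sign(mu**2 + Delta**2 - Vz**2):+.0f}")
\end{verbatim}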
\subsection{Dimensional reduction to one dimensional Nanowires}\label{nanowire}
Majorana modes can also be realized in the one dimensional analog of the
semiconductor-superconductor structure. This structure is experimentally simpler
because one does not require a magnetic insulator to generate the Zeeman field.
In the one dimensional case, the Zeeman field can be generated by a magnetic
field parallel to the semiconductor superconductor wire. In this case, the
Zeeman field cannot generate substantial undesired orbital effects that would
suppress superconductivity, though a finite diameter of the wire can leave
residual effects~\cite{vaitiekenas2020flux,nijholt2016orbital}.
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth]{nanowire_fig.pdf}
\caption{ The one dimensional semiconductor
superconductor heterostructure allows the application of a Zeeman field by a parallel magnetic field.
Such a magnetic field does not suppress superconductivity significantly for a thin-film superconductor.
Theory (Eq.~\ref{C0}) predicts that the system enters the topological phase for applied Zeeman
field $V_Z$ in excess of $\sqrt{\mu^2+\Delta^2}$. The zero-energy end Majorana mode that emerges
at the end of the wire can be measured by tunnel spectroscopy e.g. using scanning tunneling microscopy (STM)~\cite{sau2010nonabelian}.
Figure reproduced from Sau et. al. arXiv:1006.2829~\cite{sau2010nonabelian}.
}\label{fig:nanowire}
\end{figure}
Understanding the one dimensional nanowire does not require any new
calculations. The BdG Hamiltonian Eq.~\ref{eqdomain} that we used to study the
spectrum of domain wall states at $k_y=0$ is exactly the Hamiltonian
for the semiconductor-superconductor nanowire heterostructure. The end of the wire corresponds
precisely to the edge of the system. The argument from the previous sub-section
then tells us that a semiconductor nanowire will support topological Majorana modes
for $C_0<0$. Additionally, the results from the sub-section on Josephson junctions
implies that a $\pi$-phase Josephson junction will support a pair of such Majorana
modes. We will discuss signatures of both the end Majorana modes as well as pair
of Majorana modes in the following section.
One subtlety worth noting is that of the chiral symmetry (i.e. Eq.~\ref{eqchiralsymmetry}) of the BdG Hamiltonian for the superconducting
semiconductor. This is because the exact form of the topological invariant depends on the symmetry class of the system.
For example the Kitaev topological invariant Eq.~\ref{topinvKitaev} is only defined for one dimensional superconductors, which have
particle-hole symmetry by definition and are classified as symmetry class D~\cite{altland1997nonstandard}. This invariant makes no assumption about the
Hamiltonian being real, which happens to be the case for the specific superconducting semiconductor system being considered.
In this case the reality of the BdG Hamiltonian leads to a chiral symmetry Eq.~\ref{eqchiralsymmetry}, which allows
us to define a more refined topological invariant~\cite{tewari2012topological} in this new symmetry class called BDI.
To define this invariant we split the space into subspaces with projectors $\Sigma_{\pm}$ onto the eigenvalues $\pm 1$ of
the chiral symmetry operator $S$, so that $S\Sigma_{\pm}=\pm \Sigma_{\pm}$. The BdG Hamiltonian is
purely off-diagonal in this decomposition, so that $\Sigma_+ H_{BdG}(k)\Sigma_+= \Sigma_- H_{BdG}(k)\Sigma_-=0$. We can
then define the off-diagonal part
\begin{align}
&A_k=\Sigma_+ H_{BdG}(k)\Sigma_-
\end{align}
of the Hamiltonian $H_{BdG}(k)$, which has the property that $Det[H_{BdG}(k)]=|Det[A_k]|^2$. Thus, $|Det[A_k]|$ can
only vanish if $H_{BdG}(k)$ has a zero eigenvalue i.e. there is a gap closure.
Therefore, if $H_{BdG}(k)$ is fully gapped, we can define the phase winding of $Det[A_k]$
\begin{align}
&W=\int \frac{dk}{2\pi i}\frac{d Det[A_k]/dk}{Det[A_k]}
\end{align}
as a topological invariant characterizing the topological superconductor in symmetry class BDI~\cite{altland1997nonstandard}.
This has interesting physical consequences: for example, it allows multi-channel generalizations of the superconductor-semiconductor
wires to host multiple Majorana modes that remain protected as long as the chiral symmetry is preserved~\cite{tewari2012topological}.
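The winding number $W$ is also straightforward to evaluate numerically. The sketch below uses a simple lattice regularization of the nanowire Hamiltonian (the discretization and parameter values are illustrative assumptions), builds the off-diagonal block $A_k$ in the eigenbasis of $S=\sigma_y\tau_y$, and accumulates the phase winding of $Det[A_k]$ across the Brillouin zone:
\begin{verbatim}
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1. + 0j, -1.])

t, mu, Vz, alpha, Delta = 1.0, 0.5, 1.0, 0.7, 0.3   # Vz^2 > mu^2 + Delta^2

def H(k):   # lattice BdG Hamiltonian, basis tau (x) sigma
    return ((2 * t * (1 - np.cos(k)) - mu) * np.kron(sz, I2)
            + Vz * np.kron(I2, sz)
            + alpha * np.sin(k) * np.kron(sz, sy)   # spin-orbit, sigma_y tau_z
            + Delta * np.kron(sx, I2))

S = np.kron(sy, sy)                                 # chiral operator sigma_y tau_y
w, V = np.linalg.eigh(S)
Um, Up = V[:, w < 0], V[:, w > 0]                   # S = -1 and S = +1 subspaces

ks = np.linspace(-np.pi, np.pi, 2001)
dets = np.array([np.linalg.det(Up.conj().T @ H(k) @ Um) for k in ks])
W = np.angle(dets[1:] / dets[:-1]).sum() / (2 * np.pi)
print("winding number W =", int(round(W)))          # +-1 here; 0 if trivial
\end{verbatim}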
\section{Experimental signatures}
\subsection{Transport signature}
As we saw from the schematic set-up in Fig.~\ref{fig:nanowire}, both the zero-energy
Majorana mode as well as the quasiparticle gap closing associated with the topological quantum
phase transition may be probed by tunneling transport. Such transport involves transfer of
electrons from a normal lead into the superconductor, which are reflected back as
electrons or holes according to the scattering matrix:
\begin{equation}\label{eq:Sij}
S=\left(\begin{array}{cc}S^{ee} & S^{eh} \\ S^{he} & S^{hh}\end{array}\right).
\end{equation}
The process of an electron reflecting back as a hole from the normal-superconductor interface, thereby transferring a Cooper pair to the superconductor, is termed
Andreev reflection.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{goodcombined.pdf}
\caption{(Left panel) End conductance $G=dI/dV$ into a semiconducting/superconducting nanowire
as a function of the applied Zeeman field $V_Z$ relative to the superconducting gap $\Delta$, showing a
gap closure at the critical Zeeman field (yellow line) $V_{Z,c}=\sqrt{\Delta^2+\mu^2}$~\cite{pan2020physical}. A zero-bias peak with height quantized
at $2e^2/h$ emerges at the topological quantum phase transition $V_Z=V_{Z,c}$. (Middle panel) The non-local conductance between the ends of a semiconducting/superconducting nanowire shows a closure of the superconducting gap, where non-zero conductance appears near zero bias before a gap reopening. (Right panel) The non-local conductance at the topological critical point (blue line in the middle panel) shows a conductance which is linearly dependent on bias voltage. The inset shows that the gap
closure is associated with quantized thermal conductance exactly at the topological phase transition. Figure reproduced from
Pan et al. arXiv:2009.11809~\cite{pan2020physical}.}\label{fig:endconductance}
\end{figure}
Using the Blonder-Tinkham-Klapwijk\cite{blonder1982transition} formalism, the conductance into the superconductor can be written in terms of elements of this scattering matrix as
\begin{equation}\label{eq:Glocal}
G=\frac{e^2}{h}\left(N_{ch}-T^{ee}+T^{eh}\right),
\end{equation}
where
\begin{equation}
T^{\alpha\beta}={\text{tr}}\,\left([S^{\alpha\beta}]^\dagger S^{\alpha\beta}\right)\label{eqT}
\end{equation}
is the transmission probability [the trace $ {\text{tr}}\,(\cdot) $ runs over additional channels such as spin], and
$ N_{ch}=2 $ is the number of electron modes in this single-channel (spinful) model.
In the limit of a long wire, the transmission coefficient $T^{ee}=N_{ch}-T^{eh}$,
so that the conductance becomes related to the Andreev reflection probability
\begin{equation}
G^{(long)}\simeq\frac{2e^2}{h}T^{eh}.\label{eqGlong}
\end{equation}
We can understand this conductance as resulting from the transfer of Cooper pairs from the normal lead to the superconductor.
We can compute the conductance into the end of a semiconductor nanowire by first computing the scattering matrix $S(E)$ numerically using a program
such as KWANT~\cite{groth2014kwant} and then substituting the answer into Eq.~\ref{eqGlong}.
In addition to a discretized version of the BdG Hamiltonian in Eq.~\ref{eqdomain}, to compute the scattering matrix $S(E)$, we need to specify the
normal lead and a tunnel barrier. We typically choose the normal lead Hamiltonian to be similar to Eq.~\ref{eqdomain} except that $\Delta=0$ and
the chemical potential $\mu_{lead}$ is much higher than that in the semiconductor nanowire.
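The following is a minimal KWANT sketch of such a calculation (the discretized Rashba-nanowire Hamiltonian and every parameter value here are illustrative assumptions rather than values taken from the figures; a second lead at the right end is included for the non-local measurements discussed in the next sub-section):
\begin{verbatim}
import numpy as np
import kwant

# np.kron(tau, sigma): tau = particle-hole space, sigma = spin space
s0 = np.eye(2); sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]]); sz = np.diag([1, -1])
tzs0, txs0 = np.kron(sz, s0), np.kron(sx, s0)
t0sx, tzsy = np.kron(s0, sx), np.kron(sz, sy)

# illustrative parameters in dimensionless lattice units
t, alpha, Delta, mu, Vz = 1.0, 0.4, 0.2, 0.4, 0.6
mu_lead, barrier, L = 2.0, 1.5, 300

lat = kwant.lattice.chain(1, norbs=4)
syst = kwant.Builder()
for i in range(L):
    edge = i in (0, L - 1)     # tunnel barrier, no pairing on the end sites
    syst[lat(i)] = ((2*t - mu + (barrier if edge else 0))*tzs0
                    + Vz*t0sx + (0 if edge else Delta)*txs0)
syst[lat.neighbors()] = -t*tzs0 + 0.5j*alpha*tzsy

# normal lead (no pairing, large chemical potential); the conservation law
# -tau_z labels the electron (block 0) and hole (block 1) channels
lead = kwant.Builder(kwant.TranslationalSymmetry((-1,)),
                     conservation_law=-tzs0)
lead[lat(0)] = (2*t - mu_lead)*tzs0
lead[lat.neighbors()] = -t*tzs0
syst.attach_lead(lead)               # lead 0: left end
syst.attach_lead(lead.reversed())    # lead 1: right end
fsyst = syst.finalized()

smat = kwant.smatrix(fsyst, energy=0.0)
R_ee = smat.transmission((0, 0), (0, 0))       # normal reflection
R_he = smat.transmission((0, 1), (0, 0))       # Andreev reflection
N_e = smat.submatrix((0, 0), (0, 0)).shape[0]  # number of electron modes
print(f"G_LL(V=0) = {N_e - R_ee + R_he:.3f} e^2/h")
\end{verbatim}
With these parameters $V_Z^2>\Delta^2+\mu^2$, so the wire is in the topological phase and the zero-bias conductance should come out close to $2e^2/h$ for a sufficiently long wire.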
The result in Fig.~\ref{fig:endconductance} shows a gap at small Zeeman potential, which closes
as the Zeeman potential is increased and merges into a zero-energy peak which
persists beyond the topological phase transition. The closure of the gap seen in the spectrum is consistent with the spectrum that we saw in Fig.~\ref{fig:gapclosure}
in the discussion of the topological quantum phase transition. Note that, as in experiments, we typically do not
see the reopening of the gap in these plots. In fact, the measurement of a gap closure followed by
the emergence of a zero-bias conductance peak qualitatively similar to that seen in Fig.~\ref{fig:endconductance}, observed
in experiments~\cite{mourik2012signatures} a few years after the prediction, has been one of the main motivating drivers of the field.
However, several quantitative features are yet to be observed. As we will discuss later, these, together with certain alternative scenarios that
may arise in these systems, have made the search for Majorana modes somewhat controversial.
One of the quantitative features in the conductance plot shown in Fig.~\ref{fig:endconductance} is the quantized value of the height
of the zero-bias conductance peak associated with the end Majorana mode. To understand this quantization of conductance into a Majorana
mode, we eliminate the lead by using the Mahaux-Weidenm\"{u}ller formula~\cite{beenakker1997randommatrix} to write the scattering matrix $S$
in terms of the Hamiltonian $H$ of the nanowire as:
\begin{align}
&S=1-2\pi i W^\dagger(E-H+i\pi W W^\dagger)^{-1}W,\label{SMW}
\end{align}
where $W$ is an $N_{ch}\times N$ matrix and $N_{ch}$ is the number of channels and $N$ is the size of the BdG Hamiltonian of the nanowire.
Since the conductance in the tunneling limit is not expected to depend on the details of the lead (which can be checked numerically), we can choose the simple form
\begin{equation}\label{eq:W}
W_{mn}=\delta_{m,n}, \quad 1\le m\le N_{ch}, \quad 1\le n \le N.
\end{equation}
We can then write this scattering matrix in terms of the nanowire Green function $g_0(E)=(E-H)^{-1}$ by expanding as a formal power-series
in $W$ as
\begin{align}
&S=1-2\pi i W^\dagger [g_0(E) -i\pi g_0(E) W W^\dagger g_0(E)+\dots]W.
\end{align}
By writing $W=\sqrt{\Gamma}\, w$, where $w$ selects the lead-coupled end orbitals, we can then formally resum the power-series to write $S$ in terms of the local part of the Green function $g_l(E)=w^\dagger g_0(E) w$
so that
\begin{align}
&S=1-2i\pi \Gamma[g_l(E)^{-1}+i\pi\Gamma]^{-1}.\label{eqSf}
\end{align}
The local Green function $g_l(E)$ can be thought of as the Green function at the end of the wire and is only an $N_{ch}\times N_{ch}$ matrix,
where $N_{ch}=2$ in the simplest case of a spin-polarized lead. The two channel components are the particle and hole components of Nambu space.
The zero-energy Majorana mode, which has a particle-hole symmetric wave-function, appears as a zero-energy pole of both $g_0(E)$ and
$g_l(E)$. We can capture this pole structure by approximating $g_l(E)$ near $E\sim 0$ as
\begin{align}
&g_l(E)\simeq (\bm 1+\tau_x)u^2/E+a\tau_z.\label{eqgl}
\end{align}
Here we use a Nambu basis where the particle-hole symmetry of the Green function takes the
form $\tau_x g_l(E)\tau_x=-g_l(-E)$.
Substituting $g_l(E)$ into Eq.~\ref{eqSf} we get the electron-hole
reflection amplitude
\begin{align}
&r_{eh}=S_{eh}(E)=-\frac{2i\pi\Gamma u^2}{E(1+\pi^2\Gamma^2 a^2)+2i\pi \Gamma u^2}\approx -\frac{2i\pi\Gamma u^2}{E+2i\pi \Gamma u^2},
\end{align}
where for the last step we took the limit of small $a$, i.e., a small contribution from other states.
Note that $r_{eh}(E=0)=-1$, which is referred to as perfect Andreev reflection.
This is a hallmark signature of topological superconductors and indeed can be derived from the topological invariant~\cite{Wimmer2011Quantum}.
In contrast, the non-topological superconducting case, which does not have any such zero-energy pole (as can be seen from Eq.~\ref{eqgl} by taking $u(E)\propto E$), is characterized by an
Andreev reflection amplitude that vanishes, $r_{eh}(E\sim 0)\rightarrow 0$.
The reflection amplitude $r_{eh}$ determines the reflection probability $T_{ii}^{eh}=|r_{eh}|^2$, so that the conductance resonance for the Majorana mode is given by
\begin{equation}
G_{Maj}(E)\simeq\frac{2e^2}{h}\frac{(2\pi\Gamma u^2)^2}{E^2+(2\pi\Gamma u^2)^2}.
\end{equation}
This is the standard form for the conductance resonance associated with tunneling into a Majorana
zero mode. What is remarkable is that the height of the peak is quantized, i.e., $G_{Maj}(E=0)=2e^2/h$, independent of the value of the tunneling to the normal lead $\Gamma$. However, the total weight under the conductance peak, which is related to the current
$I_{Maj}$ at bias voltages larger than the peak width $\sim\Gamma u^2$, is proportional to
$\Gamma$, i.e.,
\begin{align}
I_{Maj}=\int_0^{\infty}dE G_{Maj}(E)\propto \Gamma u^2.
\end{align}
Since the current ultimately vanishes with $\Gamma$, this resolves the apparent paradox of the zero-bias conductance being independent of tunneling
even as the tunneling rate $\Gamma$ vanishes.
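This resolution is easy to verify numerically from the Lorentzian above; the short sketch below (illustrative numbers, conductance in units of $e^2/h$) shows the peak height pinned at $2e^2/h$ while the integrated weight scales with $\Gamma$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def G_maj(E, Gamma, u2=0.1):
    w = 2*np.pi*Gamma*u2              # resonance width from the pole of r_eh
    return 2.0*w**2/(E**2 + w**2)     # conductance in units of e^2/h

for Gamma in (1e-1, 1e-2, 1e-3):
    height = G_maj(0.0, Gamma)
    weight, _ = quad(G_maj, 0.0, np.inf, args=(Gamma,))
    print(f"Gamma = {Gamma:.0e}: height = {height:.2f} e^2/h, "
          f"weight = {weight:.2e} (scales with Gamma)")
\end{verbatim}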
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ExperimentalFiguresConductance.pdf}
\caption{(Color online) (Left panel) Conductance into the end of a semiconductor/superconductor nanowire~\cite{zhang2021large}, similar to that measured in Mourik et al Science (2012)~\cite{mourik2012signatures},
shows a ZBCP signature theoretically expected from Majorana modes above a critical Zeeman field, though gap closing and reopening signatures are not seen. Experimental figure reproduced from Zhang et al arXiv:2101.11456~\cite{zhang2021large}. (Right panel) Conductance across a similar superconducting nanowire in a three-terminal configuration
shows evidence for a gap in transmission at smaller Zeeman field that is closed by increasing the field~\cite{puglia2020closing}. Hints of a reopening are seen at higher magnetic field, but the gap appears to be weak. Experimental figure reproduced from Puglia et al arXiv:2006.01275~\cite{puglia2020closing}.}\label{fig:exptconductance}
\end{figure}
As already mentioned, the field received a large boost from early, though preliminary, verification of the transport and Josephson predictions~\cite{mourik2012signatures,das2012zerobias,deng2012anomalous,churchill2013superconductornanowire,finck2013anomalous}.
Specifically, as we see in the recent conductance data in the left panel of Fig.~\ref{fig:exptconductance}, the conductance as a function of bias voltage $V$
and applied Zeeman potential $V_Z$ shows a gap at small Zeeman field, which closes at a magnetic field of about $B\sim 0.7\,$T, followed by the emergence of a
zero-bias conductance peak with conductance near the quantized value predicted for Majorana zero modes. This is qualitatively similar to the first measurements
of the conductance into the device~\cite{mourik2012signatures,das2012zerobias,deng2012anomalous,churchill2013superconductornanowire,finck2013anomalous}, though improvements in device fabrication since then have significantly enhanced the quality of the features.
Qualitatively similar results have been obtained by several groups since the first observation, confirming that these results are quite reproducible~\cite{deng2016majorana,zhang2017ballistic,nichele2017scaling,puglia2020closing,vaitiekenas2018effective,vaitiekenas2020flux,zhang2021large,yu2020nonmajorana}.
While these features are in qualitative agreement with the theory predictions seen in Fig.~\ref{fig:endconductance}, there are discrepancies. The first discrepancy
worth noting is that neither the theoretical conductance (left panel of Fig.~\ref{fig:endconductance}) nor the experimental measurement (left panel of Fig.~\ref{fig:exptconductance}) shows evidence of the bulk gap reopening expected from the right panel of Fig.~\ref{fig:gapclosure}. While this is not technically a
discrepancy, in the sense that the theoretical conductance in most models also does not show this feature because of competition from the zero-bias peak,
it is an important feature to confirm. Secondly, the height of the conductance peak, although near the predicted value~\cite{nichele2017scaling,zhang2021large}, shows significant deviations both above
and below the predicted value with changing parameters and therefore does not appear to be as robust as predicted by theory. We will elaborate on the
implications of this discrepancy for the field in the last section of the chapter.
\subsection{Bulk gap closure}
The conductance from the end shown in Fig.~\ref{fig:endconductance} only shows us an apparent gap closure but not the gap reopening expected
from the right panel of Fig.~\ref{fig:gapclosure}. Additionally, even the gap closure feature is typically expected to be obscured by the presence
of Andreev bound states associated with complicated end potentials that we will discuss later.
Instead, we can consider a more direct measure of the gapless states at the phase transition by studying transport through such states.
Such an experiment can be performed by adding another lead on the right end $R$ of the semiconductor wire, in addition to the lead $L$ at
the left end. Since the superconductor has to be grounded this is referred to as a three-terminal configuration.
The scattering matrix $S$ must now be doubled to include scattering from both leads
\begin{equation}\label{eq:S}
S=\left(\begin{array}{cc}S_{\text{LL}} & S_{\text{LR}}\\S_{\text{RL}} & S_{\text{RR}}\end{array}\right),
\end{equation}
where each block $S_{ij}$ has the particle-hole structure of the scattering matrix in Eq.~\ref{eq:Sij} from the previous sub-section.
We can characterize the transport properties of such a three-terminal device by a conductance matrix
\begin{equation}\label{eq:condmat}
\hat{G}=\left(\begin{array}{cc}G_{\text{LL}} & G_{\text{LR}}\\ G_{\text{RL}} & G_{\text{RR}}\end{array}\right)=\left(\begin{array}{cc}dI_L/dV_L & -dI_L/dV_R \\ -dI_R/V_L & dI_R/dV_R\end{array}\right),
\end{equation}
where $ I_{\text{L}} $ ($ I_{R} $) is the current entering the left (right) normal lead from the scattering region, and $ V_{\text{L}} $ ($ V_{\text{R}} $) is the voltage applied to the left (right) lead.
In the limit of a long wire, $ G_{\text{LL}} $ and $ G_{\text{RR}} $ are the local conductances at each end that we discussed in the
last sub-section and are plotted in the left panel of Fig.~\ref{fig:endconductance}. The transport properties
across the wire would be measured from the nonlocal conductances ($ G_{\text{LR}} $ and $ G_{\text{RL}} $).
These nonlocal conductances can be written in terms of the transmission probabilities defined in Eq.~\ref{eqT}
\begin{equation}\label{eq:Gnonlocal}
G_{ij}=\frac{e^2}{h}(T_{ij}^{ee}-T_{ij}^{eh}), \qquad i\neq j,
\end{equation}
and are therefore expected to vanish for a gapped wire (away from the topological phase transition).
The appearance
of a finite non-local conductance near zero bias would signal such a bulk gap closure. This expectation~\cite{rosdahl2018andreev}
is verified by the numerical results for the non-local conductance shown in the middle panel of Fig.~\ref{fig:endconductance}. These results were
obtained for a scattering matrix $S$ computed using KWANT~\cite{groth2014kwant} using the BdG Hamiltonian Eq.~\ref{HBdG} for a nanowire.
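Reusing the two-lead scattering matrix computed in the KWANT sketch of the previous sub-section, the non-local conductance follows directly from the transmission blocks between the two leads (a continuation of that sketch, not a standalone program):
\begin{verbatim}
# lead 0 = left (L), lead 1 = right (R); blocks: 0 = electron, 1 = hole
T_ee = smat.transmission((1, 0), (0, 0))   # electron in L -> electron out R
T_eh = smat.transmission((1, 1), (0, 0))   # electron in L -> hole out R
G_RL = T_ee - T_eh                         # units of e^2/h
print(f"G_RL(V=0) = {G_RL:.3e} e^2/h")     # ~0 for a gapped wire
\end{verbatim}
The same blocks combined as $T^{ee}_{RL}+T^{eh}_{RL}$ enter the thermal conductance discussed below.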
An unfortunate complication of these results is that the non-local conductance actually vanishes at exactly zero bias, even at the topological phase transition,
as a consequence of particle-hole symmetry~\cite{akhmerov2011quantized}. While the vanishing at the critical point is linear in voltage,
as shown in the right panel of Fig.~\ref{fig:endconductance}, as opposed to a gapped signal away from the phase transition, the introduction of disorder etc.
might make the bulk gap closure difficult to identify in a definitive way. It is worth mentioning that an advantage of the scattering matrix formalism
is that we do not need the bulk Hamiltonian or the topological condition Eq.~\ref{C0}, which are limited to clean single-band systems, to identify
the topology of the nanowire. We can compute the topological invariant $TV_L$ directly from the zero-energy reflection block $S_{LL}(E=0)$ through the scattering
matrix topological invariant
\begin{align}
&TV_L=det(S_{LL}(E=0)).
\end{align}
The numerical result for this invariant is shown in the inset of the right panel of Fig.~\ref{fig:gapclosure} and confirms that the topological invariant $TV_L$
vanishes at the quantum critical point.
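With a conservation-law-resolved KWANT scattering matrix such as the one constructed earlier, evaluating this invariant amounts to assembling the full reflection block of the left lead and taking its determinant; at $E=0$ it is real and, up to finite-size and numerical corrections, equals $\pm1$ in the gapped phases while passing through zero at the transition:
\begin{verbatim}
# continuation of the earlier KWANT sketch (smat evaluated at E = 0)
r_LL = np.block([[smat.submatrix((0, bo), (0, bi)) for bi in (0, 1)]
                 for bo in (0, 1)])
TV_L = np.linalg.det(r_LL).real
print(f"TV_L = det S_LL(0) = {TV_L:+.3f}  (-1 topological, +1 trivial)")
\end{verbatim}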
The vanishing of $TV_L$ at the topological quantum critical point suggests a zero-mode in the scattering matrix $S_{LL}(E=0)$. This means that
there is a mode that suffers no reflection when incident from the left lead $L$ and must thus be completely transmitted. This seems
to contradict the linear vanishing of the non-local conductance $G_{RL}$ that we discussed in the last paragraph.
As will become clear when we discuss the teleportation process through a Majorana wire, the vanishing of the non-local conductance through a Majorana
wire occurs because of the transformation of an electron into an equal superposition of electron and hole during the transmission process. While this
destroys the charge associated with the transfer of the electron, it does not reduce the entropy transfer associated with this transmission.
This entropy transfer at the bulk gap closure contributes to the heat conductance
\begin{equation}\label{eq:Gth}
\kappa=\kappa_0(T_{ij}^{ee}+T_{ij}^{eh}), \qquad i\neq j,
\end{equation}
which can also be computed from the transmission probabilities $T_{ij}^{ab}$ computed from the scattering matrix $S$.
The perfect transmission of quasiparticles at the bulk gap closure associated with the topological quantum phase transition appears as a quantized peak
in the thermal conductance, with height $ \kappa_0=\pi^2k_B^2\tau/6h $ at temperature $ \tau $, exactly at the TQPT~\cite{senthil1999spin,senthil2000quasiparticle,evers2008anderson} {($ h $ denotes the Planck constant and $ k_B $ the Boltzmann constant)}.
The non-local conductance, for which theoretical results were shown in the middle panel of Fig.~\ref{fig:endconductance}, has also been measured
in recent experiments~\cite{puglia2020closing}. The results, though not as extensive as for the end conductance, shown in the right panel of Fig.~\ref{fig:exptconductance},
indicate the existence of a gap at low magnetic fields consistent with the local conductance. In this plot, we infer a gap from the range of bias voltage over which
the conductance vanishes. Increasing the magnetic field appears to suppress this gap near the critical value where a zero bias conductance peak appears.
However, the data is not definitive about a re-entrant gap beyond the critical value, where only a slight suppression of the non-local conductance is seen.
\subsection{Fractional Josephson effect}
As we saw earlier from the spectrum of a Josephson junction in topological superconductors (i.e. Eq.~\ref{eqEJ}), a $\pi$ phase difference leads
to a degenerate pair of states, which splits in energy as the phase difference $\phi$ deviates from $\pi$. Formally Eq.~\ref{eqEJ} contains a wave-vector $k_y$.
However, we saw in the dimensional reduction argument to the one dimensional nanowire (see subsection \ref{nanowire}) that the results for the nanowire can be obtained by setting
$k_y=0$ in the domain wall results for the two dimensional superconductor. A detailed numerical calculation of the spectrum for a Josephson junction
using the BdG Hamiltonian of a nanowire (i.e. Eq.~\ref{HBdG}), which is shown in panel (a) of Fig.~\ref{fig:spectrumtnph}, confirms the existence of the
pair of zero-energy bound states at phase $\phi=\pi$ in the topological superconducting (i.e. topologically nontrivial) phase. The topologically trivial phase shows a
gapped spectrum as a function of phase $\phi$, as expected.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figJJ.pdf}
\caption{(a) Andreev bound state spectrum of a JJ in a superconducting nanowire in the topologically trivial (dashed line: $\tilde V_x\!=\!0.75$) and topologically nontrivial (solid line: $\tilde V_x\!=\!1.25$) superconducting phases.
The spectrum as a function of superconducting phase $\phi$, in the trivial case, shows a gap while that in the nontrivial case shows a crossing of zero-energy with a pair
of zero-energy Majorana modes at phase $\phi=\pi$~\cite{lutchyn2010majorana}.
(b) The zero-energy crossing of the Andreev state spectrum of the topologically non-trivial superconducting phase leads to a supercurrent (shown in blue) that changes
sign i.e. going from the solid to the dashed line as the phase $\phi$ advances by $2\pi$. The topological superconducting supercurrent is thus $4\pi$ periodic, leading to the
fractional Josephson effect, as opposed to the topologically trivial supercurrent (shown in red) which is $2\pi$ periodic as expected~\cite{lutchyn2010majorana}. Figure courtesy of Lutchyn et al arXiv:1002.4033~\cite{lutchyn2010majorana}. }\label{fig:spectrumtnph}
\end{figure}
While the value of the phase $\phi$ where the zero-energy crossing occurs is not protected against changes in the details of the
Hamiltonian of the junction,
the crossing itself is protected by particle-hole symmetry. Let us start by writing the dispersion of the Andreev bound states (i.e. Eq.~\ref{eqEJ}) near phase $\phi\sim\pi$
as
\begin{align}
&E_{\pm}\simeq \pm \zeta (\phi-\pi),
\end{align}
where the indices $\pm$ refer to the two branches of the spectrum for the topologically nontrivial phase seen in Fig.~\ref{fig:spectrumtnph}(a).
The level crossing of Andreev states as the phase $\phi$ changes by $2\pi$ can be understood in terms of the topological invariant in Eq.~\ref{topinvKitaev}
by considering the topological superconductor Josephson junction in a ring. The $2\pi$ superconducting phase difference is generated across
the Josephson junction by introducing a superconducting flux quantum $\Phi=\Phi_0$, where $\Phi_0$ is half an electron flux quantum, $2\Phi_0=hc/e$
(by virtue of the Cooper pair charge being twice the electron charge). Thus, changing the superconducting phase $\phi$ by $2\pi$ changes the boundary conditions
around the ring from periodic i.e. $k=0$ to anti-periodic i.e. $k=\pi$, where $k$ is the wave-vector for Bogoliubov quasiparticles.
For topological superconductors with a non-trivial value for the topological invariant in Eq.~\ref{topinvKitaev}, such a change
in boundary condition leads to a change in the Pfaffian of the BdG Hamiltonian between Josephson junction phases $\phi=0$ and $\phi=2\pi$.
This change in the sign of the Pfaffian according to Eq.~\ref{PfHBdG} from $\phi=0$ to $\phi=2\pi$ guarantees a zero-energy level crossing
of the Andreev bound states in the junction between $\phi=0$ and $2\pi$ for the topologically nontrivial phase, as seen in Fig.~\ref{fig:spectrumtnph}(a). An odd number of such crossings is uniquely associated with the
topological superconducting phase
based on the invariant in Eq.~\ref{topinv}.
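The Pfaffian argument can be made concrete with a small numerical check. The sketch below (for the Kitaev chain with illustrative parameters; the Majorana-basis rotation is one standard convention, not taken from the text) evaluates $H_{BdG}(k)$ at the particle-hole-symmetric momenta $k=0,\pi$, where it becomes $i$ times a real antisymmetric matrix whose Pfaffian can be read off directly:
\begin{verbatim}
import numpy as np

tz = np.diag([1.0 + 0j, -1.0]); ty = np.array([[0, -1j], [1j, 0]])

def H(k, mu, t=1.0, delta=0.3):
    # Kitaev-chain BdG Hamiltonian in the (c_k, c_{-k}^dagger) basis
    return (-2*t*np.cos(k) - mu)*tz + 2*delta*np.sin(k)*ty

# rotation to the Majorana basis
U = np.array([[1, 1], [-1j, 1j]]) / np.sqrt(2)

def pf(k, mu):
    M = -1j * U @ H(k, mu) @ U.conj().T
    # at k = 0, pi the rotated Hamiltonian is real and antisymmetric
    assert np.allclose(M.imag, 0) and np.allclose(M.real, -M.real.T)
    return M.real[0, 1]        # Pfaffian of a 2x2 antisymmetric matrix

for mu in (0.5, 2.5):
    Q = np.sign(pf(0.0, mu) * pf(np.pi, mu))
    print(f"mu = {mu}: invariant = {Q:+.0f}",
          "(topological)" if Q < 0 else "(trivial)")
\end{verbatim}
The product of the two Pfaffian signs is $-1$ for $|\mu|<2t$ and $+1$ otherwise, reproducing the Kitaev invariant discussed earlier.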
\begin{figure}
\centering
\includegraphics[width=\linewidth]{schematicJJ.pdf}
\caption{(Color online) Andreev level crossing in a topological Josephson junction is associated with a fermion parity
change in a topological superconducting ring. The fermion parity change can in principle be detected by Coulomb blockade transport~\cite{liu2019proposal}. Figure reproduced from Liu et al. arXiv:1803.01872~\cite{liu2019proposal}.}\label{fig:spectrumtnph2}
\end{figure}
A physical consequence of the change of the Pfaffian from a change in flux is the change in ground state fermion
parity~\cite{Read2000,Kitaev2001,Stone_2006} associated with such a flux in the set-up shown in Fig.~\ref{fig:spectrumtnph2}.
To understand the connection between the Pfaffian and ground state fermion parity, note that
the two branches of the spectrum that are seen near the zero-energy crossing in Fig.~\ref{fig:spectrumtnph}(a) are related
by particle-hole symmetry that transforms $E\rightarrow -E$ and also transforms the creation operator $\psi^\dagger$ for the state to $\psi^\dagger\rightarrow \psi$.
Thus, we can combine these two branches in terms of the creation operator to
\begin{align}
&H_{JJ}(\phi)\approx \zeta(\phi-\pi)\psi_\phi^\dagger\psi_\phi.
\end{align}
Noting that the fermion operator is continuous across $\phi\sim\pi$, i.e., $\psi^\dagger_{\phi\sim\pi}\simeq\psi^\dagger_{\pi}$,
the crossing of the pair of Andreev bound
states seen in Fig.~\ref{fig:spectrumtnph}(a) is really a zero crossing of the energy $\zeta(\phi-\pi)$ of the fermion state with creation
operator $\psi^\dagger$.
Once the energy of such a state goes
from being negative to positive, we can lower the energy of the system by emptying the fermion state.
Thus, zero-energy level crossings, which are associated
with changes in the sign of the Pfaffian according to Eq.~\ref{PfHBdG}, are also associated with a change in fermion parity.
This suggests that the sign of the Pfaffian of the BdG Hamiltonian is related to its ground state fermion parity, as can
also be established by direct computation~\cite{Stone_2006}.
Thus the change in Pfaffian in going from phase $\phi=0$ to $\phi=2\pi$ implied by Eq.~\ref{topinvKitaev} also implies a change in
the ground state fermion parity of the Josephson junction as the superconducting phase $\phi$ is changed by $2\pi$.
The change in the ground state fermion parity of the ring represents a change
in the number of electrons in the ring from even to odd. Such a change in fermion parity can be
detected by measuring Coulomb blockade transport in the superconducting
ring~\cite{liu2019proposal}.
The superconducting ring used in the setup in Fig.~\ref{fig:spectrumtnph2} to detect the fermion parity change is practically challenging to construct. Alternatively,
we can consider a case where the JJ is isolated from external leads so that the fermion parity of the system remains fixed as one changes the phase $\phi$ by $2\pi$.
This necessarily forces a topological superconducting system with a changing ground state fermion parity to enter an excited state when the phase changes by $2\pi$.
Introducing a second change of phase $\phi$ by $2\pi$ restores the system to the ground state. This leads to a $4\pi$ periodicity of the supercurrent of the form
\begin{align}
&I_{JJ}(\phi)=I_{2\pi}\sin{\phi}+I_{4\pi}\sin{\phi/2},\label{Ifrac}
\end{align}
where $I_{2\pi}$ is a conventional contribution to the supercurrent from topologically trivial channels and $I_{4\pi}$ is the amplitude of the $4\pi$ periodic component from the topologically superconducting channel.
Such a $4\pi$ periodic supercurrent that arises from the change in quasiparticle occupation in the junction is termed the fractional Josephson effect~\cite{Kwon2004Fractional} and
is a hallmark of topological superconductivity.
The supercurrent for the topologically nontrivial phase in Fig.~\ref{fig:spectrumtnph}(b) is consistent with the form of $I_{JJ}$ in the limit where $I_{4\pi}\gg I_{2\pi}$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ExperimentalFiguresJosephson.pdf}
\caption{(Color online) (Top panel) The spectrum of radiation emitted by a Josephson junction in a superconducting nanowire in the absence of a magnetic field (i.e. $B=0$) shows a conventional Josephson relation between the frequency of radiation (related to $V_{Det}$) and the voltage $V=V_{NW}$ applied across the Josephson junction. The right panel, which is in the presence of a large magnetic field, shows the frequency of the emitted radiation (i.e. $V_{Det}$) drop by half relative to $V$~\cite{laroche2019observation}. (Bottom panel) Applying an external ac radiation of frequency $\omega$ generates Shapiro steps at applied voltages $V$ that are multiples of the frequency $\omega$. In the topologically trivial phase (i.e. at $B=0$) one sees a conventional sequence of steps in voltage $V$ at integer multiples of $\omega$. Applying a magnetic field leads to the disappearance of the first
Shapiro step, suggesting voltage steps at $2\omega$~\cite{Rokhinson2012Fractional}. Figures reproduced from Laroche et al. arXiv:1712.08459~\cite{laroche2019observation} and Rokhinson et al. arXiv:1204.4212~\cite{Rokhinson2012Fractional}.}\label{fig:exptJosephson}
\end{figure}
The $4\pi$ periodic supercurrent $I_{JJ}(\phi)$ associated with the fractional Josephson effect may be measured from the radiation of a voltage biased
Josephson junction~\cite{laroche2019observation}. Applying a voltage $V$, the phase across the Josephson junction varies as $\phi=V t$, which leads to an
ac Josephson supercurrent $I_{JJ}(\phi=V t)=I_{2\pi}\sin{Vt}+I_{4\pi}\sin{Vt/2}$. The radiation from the ac current
for the topologically trivial phase at zero magnetic field, which is shown in the top left panel of Fig.~\ref{fig:exptJosephson}, is peaked at a frequency
that has a conventional slope with respect to the applied voltage across the junction. In contrast, the frequency of the radiation in the
topologically non-trivial phase at large Zeeman field has a slope with voltage that is half of the conventional case.
Being able to detect the radiation from the Josephson junction in a single nanowire requires sensitive on-chip detector technology. Because of this, the first measurements of the
fractional Josephson effect were based on the
Shapiro voltage steps, where a microwave-irradiated Josephson junction is used to generate finite voltage steps~\cite{Tinkham1996introduction}.
The origin of the voltage steps can be understood by considering a phase of the form $\phi=V t+\phi_{ac}\sin{(\omega t)}$,
where one can check that a dc current is supported
when $\omega$ and
$V$ become commensurate. Similar to the half-frequency radiation from the fractional Josephson effect,
the voltage steps $V$ (relative to $\omega$) in a potentially topologically non-trivial superconducting phase were observed to be larger
by a factor of 2~\cite{Rokhinson2012Fractional}, as seen in the bottom panel of
Fig.~\ref{fig:exptJosephson}. Since these measurements involve measurement of DC voltages,
they preceded the radiation measurements and were reported
in 2012~\cite{Rokhinson2012Fractional}, about the same time as the zero-bias conductance peak measurements.
In fact, both these effects have also been claimed to be observed in other systems~\cite{wiedenmann20164,deacon2017josephson}.
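The missing odd steps can be reproduced with a crude overdamped resistively-shunted-junction (RSJ) simulation. The sketch below (dimensionless units; the drive parameters are illustrative guesses) integrates the phase dynamics for a purely $4\pi$-periodic current-phase relation and prints the average voltage, which locks only to even multiples of the drive frequency:
\begin{verbatim}
import numpy as np

# overdamped RSJ: dphi/dt = i_dc + i_ac*sin(w*t) - sin(phi/2)
i_dc = np.arange(0.0, 2.001, 0.05)        # vector of dc bias currents
i_ac, w, dt, nt = 0.8, 0.5, 0.02, 200_000
phi = np.zeros_like(i_dc)
v_avg = np.zeros_like(i_dc)
for n in range(nt):
    dphi = i_dc + i_ac*np.sin(w*n*dt) - np.sin(phi/2)
    phi += dphi*dt
    if n >= nt//2:                        # average after the transient
        v_avg += dphi
v_avg /= nt - nt//2

# <dphi/dt>/w locks to plateaus; for a 4pi-periodic junction only even
# integers appear, i.e. the odd Shapiro steps are missing
for i, v in zip(i_dc, v_avg):
    print(f"i_dc = {i:.2f}   <V> in units of the step height: {v/w:5.2f}")
\end{verbatim}
Replacing $\sin(\phi/2)$ by $\sin\phi$ in the same script restores steps at every integer multiple, i.e. the conventional Shapiro pattern.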
However, the constraint of a fixed fermion
parity, which is required for the validity of the fractional Josephson effect (i.e. Eq.~\ref{Ifrac}), requires a dynamical measurement
on a time-scale shorter than the quasiparticle poisoning time. Realistic experimental systems have a finite density of sub-gap states that allow the excited state produced at the zero-energy crossing of the Andreev state
to relax, a process referred to as quasiparticle poisoning, unless the experiment is done on a sufficiently short time-scale~\cite{lutchyn2010majorana,houzet2013dynamics}. On the other hand, rapidly changing the phase $\phi$
drives the system out of equilibrium through Landau-Zener transitions~\cite{sau2017detecting}, which can also produce unconventional current
phase relations in topologically trivial superconductors. In fact, such non-topological fractional Josephson effects from these processes have already been claimed
to be observed both in conventional JJs~\cite{billangeon2007ac} as well as in InAs nanowires~\cite{dartiailh2021missing}.
\subsection{Teleportation}\label{teleportation}
The fact that a pair of Majorana modes $\gamma_{1,2}$ can be used to construct a single fermion mode $c^\dagger=(\gamma_1+i\gamma_2)/2$
leads to a unique transport property of Coulomb blockaded Majorana wires~\cite{Fu2010Teleportation} in the set-up shown in Fig.~\ref{fig:teleportation}.
Before considering the effect of Coulomb blockade on the central wire, let us first assume that the pair of end Majorana modes $\gamma_{1,2}$
have a small splitting $\epsilon$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{MajoranafractionalizatioPicture.pdf}
\caption{(Color online) Pair of Majorana modes $\gamma_{1,2}$ form conventional fermion $c^\dagger=(\gamma_1+i\gamma_2)/2$. This form of fractionalization of a fermion into Majorana modes leads to non-local transport or teleportation~\cite{Fu2010Teleportation} of electrons from a left lead to a right lead in the scenario that the topological superconductor is Coulomb blockaded.}\label{fig:teleportation}
\end{figure}
The Hamiltonian for such a system is
\begin{align}
&H=\sum_{\alpha}t_\alpha[d_\alpha^\dagger \gamma_\alpha+h.c]+\epsilon_c c^\dagger c,
\end{align}
where $t_\alpha$ is the coupling to the lead fermions $d^\dagger_{\alpha=1,2}$.
The above Hamiltonian can be written compactly in a Majorana basis consisting of $\tilde{\gamma}_\alpha=d_\alpha^\dagger+d_\alpha$
\begin{align}
&H=i\sum_{\alpha}t_\alpha[\tilde{\gamma}_\alpha \gamma_\alpha]+i\epsilon_c \gamma_1\gamma_2.
\end{align}
We can eliminate the Majorana modes in the wire $\gamma_{1,2}$ from the low-energy states in the tunneling limit $E, t_\alpha \ll \epsilon_c$,
so that the Hamiltonian can be approximated by an effective tunneling of the lead Majorana modes
\begin{align}
&H_{eff}=i\tilde{\epsilon}\tilde{\gamma}_1\tilde{\gamma}_2=i\tilde{\epsilon} (d_1+d_1^\dagger)(d_2+d_2^\dagger),
\end{align}
where $\tilde{\epsilon}=t_1 t_2/\epsilon_c$.
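The effective coupling $\tilde{\epsilon}=t_1t_2/\epsilon_c$ is easy to verify by exact diagonalization of this four-Majorana problem. The sketch below (illustrative couplings) encodes the chain $\tilde{\gamma}_1-\gamma_1-\gamma_2-\tilde{\gamma}_2$ as $H=(i/2)\gamma^T A\gamma$ with $A$ real antisymmetric and compares the smallest positive eigenvalue of $iA$ against the perturbative estimate:
\begin{verbatim}
import numpy as np

t1, t2, eps_c = 0.02, 0.03, 1.0    # tunneling limit: t1, t2 << eps_c

# chain gamma~1 - gamma_1 - gamma_2 - gamma~2 with couplings (t1, eps_c, t2)
A = np.zeros((4, 4))
for i, j, coup in [(0, 1, t1), (1, 2, eps_c), (2, 3, t2)]:
    A[i, j], A[j, i] = coup, -coup

E = np.linalg.eigvalsh(1j*A)       # i*A is Hermitian; spectrum in +/- pairs
print("low-energy splitting :", E[E > 0].min())
print("estimate t1*t2/eps_c :", t1*t2/eps_c)
\end{verbatim}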
The above Hamiltonian implies an effective direct transmission between the two leads similar to that discussed in the subsection
on bulk gap closure. In fact, an electron $d_1^\dagger$ in lead 1 transmits into an equal mixture of electron and hole,
consistent with the observation of a vanishing non-local conductance at zero bias in Fig.~\ref{fig:endconductance} (middle and right panels)~\cite{Bolech2007}.
Let us now consider the limit of a long wire, where $\epsilon_c$ is generated by Coulomb blockade as
the energy difference between different electron number states. In this case the operator $c^\dagger$ has the same charge as an electron
and is indeed exactly an electron creation operator.
To understand the effect of the charging energy, we eliminate the Majorana modes $\gamma_{1,2}$ from $H$ in favor of the electron operator $c$ so that
\begin{align}
&H=\sum_{\alpha}t_\alpha[(d_\alpha^\dagger-d_\alpha) (s_{\alpha}^*c+s_{\alpha}c^\dagger)]+\epsilon_c c^\dagger c,
\end{align}
where $s_{1}=1$ and $s_{2}=i$.
In the case of strong Coulomb blockade, charge-conservation-violating terms such as $d_\alpha^\dagger c^\dagger$ are projected out of the intermediate state, so that
the system is described by
\begin{align}
&H_C=\sum_{\alpha}t_\alpha[(s_\alpha d_\alpha^\dagger c-s_\alpha^* d_\alpha c^\dagger) ]+\epsilon_c c^\dagger c.
\end{align}
The Hamiltonian for the system is now exactly equivalent to transmission of electrons between leads through a non-interacting quantum dot, i.e., a
Fano resonance with an amplitude $t_1t_2/\epsilon_c$ at $E=0$. Even wires that are significantly longer than the coherence length can have a large charging energy $\epsilon_c$.
In this case, the above Hamiltonian $H_C$ describes a process of teleportation of electrons between the leads $1$ and $2$ through the Majorana wire.
This process is actually a coherent transfer of an electron and may be used to propose an interferometric signature of Majorana modes. Preliminary
evidence of such interferometric signatures has recently been observed by the Copenhagen group~\cite{whiticar2020coherent}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ExperimentalFiguresTeleportation.pdf}
\caption{(Left panel) Zero-bias conductance as a function of gate voltage, showing $2e$-periodic Coulomb blockade oscillations that evolve into
$e$-periodic oscillations with increasing magnetic field, where a topological superconducting phase is possible~\cite{albrecht2016exponential}. In the case of a long gapped wire, such a conductance
could represent teleportation through Majorana modes. (Right panel) Interference of electrons transmitted through a superconducting semiconductor wire
in a putative topological regime, seen as a flux-dependent conductance oscillation~\cite{whiticar2020coherent}. The conductance oscillation shifts phase as the gate tunes the energy of the bound
state $c^\dagger$ (see Fig.~\ref{fig:teleportation}) through the Fermi energy. Figures courtesy of Albrecht et al arXiv:1603.03217~\cite{albrecht2016exponential} and Whiticar et al. arXiv:1902.07085~\cite{whiticar2020coherent}.
}\label{fig:exptteleportation}
\end{figure}
The teleportation process can be used to measure a Majorana qubit, which is one of the key ingredients of the braiding protocols discussed earlier
in subsection~\ref{braiding}.
To understand how we would use $H_C$ to measure a Majorana qubit, imagine that the superconducting island in Fig.~\ref{fig:teleportation}
had an additional semiconductor wire with its own set of Majorana modes $\gamma_{3,4}$. The parity of the extra pair of Majorana modes would
affect the fermion parity of the island, which ultimately affects the sign of $\epsilon_C$. Since the Coulomb blockade can be chosen to favor a particular
total fermion parity $i\gamma_1\gamma_2\gamma_3\gamma_4$, we can view the sign of $\epsilon_C$ as being set by the fermion parity of the
Majorana wire $i\gamma_1\gamma_2$. Thus, the effective tunneling between
the leads is modified to
\begin{align}
&t_{eff}=\frac{t_1t_2}{\epsilon_C}(i\gamma_1\gamma_2).
\end{align}
The sign of this effective tunneling amplitude reflects the phase of the transmitted electron, which can be measured by interference
~\cite{plugge2017majorana}.
Measurement of the interference phase would be equivalent to the measurement of the fermion parity of the Majorana qubit.
Interestingly, preliminary experimental evidence for teleportation has been seen in recent experiments, as shown in Fig.~\ref{fig:exptteleportation}.
The left panel of Fig.~\ref{fig:exptteleportation} shows that the conductance through such a wire in the Coulomb blockade regime
is $e$ periodic in gate voltage. This is characteristic of transport through an electronic bound state in the wire. A relatively trivial possibility for the
origin of such a state, which is difficult to rule out experimentally~\cite{liu2019proposal}, is a conventional bound state in the middle of a short wire.
The signal from such a state in the case of a long wire would be negligible and can only arise if there is a pair of bound states similar to the Majorana states seen in
Fig.~\ref{fig:teleportation}. However, the same signal can arise from Andreev bound states (ABSs) at the ends of the wire, which we will discuss later~\cite{Sau2015Proposal}.
The non-local conductance from ABSs is expected to be different from that arising from Majorana modes, i.e., the conductance from ABSs
would not show interference as seen in
the right panel of Fig.~\ref{fig:exptteleportation}. However, as mentioned, it is difficult to rule out the short wire scenario without more careful experiments such as
those that have been proposed~\cite{liu2019proposal}.
\section{Discussion and Conclusion}
After a decade of intense search for MZMs in SM-SC heterostructures, the field is right now at a crossroads. Tremendous efforts have been invested in the past few years to improve the quality of the various interfaces and to reduce non-magnetic and magnetic disorder in the heterostructures. This has resulted in the conversion of a soft gap, characterized by a substantial non-zero-bias background sub-gap conductance induced by disorder scattering in the early Majorana experiments \cite{mourik2012signatures}, to a near perfect hard gap in the recently realized full epitaxy InAs/Al hybrid nanowires \cite{Chang2015Hard}. Experimental progress in reducing dissipation has also resulted in a substantial enhancement of the zero bias conductance peak height, from about $\sim 0.1 \frac{2e^2}{h}$ in the first generation Majorana devices \cite{mourik2012signatures} to heights approaching and exceeding $\frac{2e^2}{h}$ in recent experiments \cite{nichele2017scaling,zhang2021large,yu2020nonmajorana}. However, despite the claims of several breakthroughs, the field has perennially seemed to remain on the cusp of confirmatory evidence of MZMs that has not yet materialized in experiments.
The main obstacle to confirmatory evidence of MZMs is that the ends of a nanowire are often locations where robust zero energy states are induced by various non-topological effects that have little or no connection to MZMs. To distinguish such robust zero energy states of non-topological origin from topological MZMs, experiments investigating the quantization of the zero bias peak height at the value $\frac{2e^2}{h}$, and the persistence of this quantized peak height with variation in the experimental parameters (the so-called quantized conductance plateau), have recently attracted vigorous attention. While ballistic Andreev reflection \cite{blonder1982transition} from an ordinary zero mode leads to a conductance quantization with zero bias peak height $\frac{4e^2}{h}$, MZMs should lead to a quantized peak height of $\frac{2e^2}{h}$, as they effectively behave as ``half a fermion''. Moreover, topological MZMs being insensitive to weak perturbations, the persistence of the ZBP height with variations in the tunnel gate potential and applied magnetic field is taken as spectacular transport evidence unique to MZMs.
In recent experiments \cite{nichele2017scaling,zhang2021large,yu2020nonmajorana} the height of the zero bias conductance peak has indeed been observed to approach or exceed $\sim \frac{2e^2}{h}$. However, a quantized conductance plateau over a convincing range of tunnel barrier or magnetic field is yet to be realized. Unfortunately, a quantized conductance peak alone at isolated points in the parameter space cannot be taken as confirmatory evidence for MZMs. Furthermore, the absence of a convincing quantization plateau around the points in parameter space with ZBP height $\sim \frac{2e^2}{h}$ may be an indication that the robust zero energy states in these systems may in fact have originated from either (a) disorder-induced weak anti-localization that leads to robust class-D conductance peaks with peak height between $0$ and $\frac{2e^2}{h}$~\cite{Pikulin2012Zero,bagrets2012class,mi2014xshaped,pan2020generic}, or (b) partially separated Andreev bound states (ps-ABS) \cite{Moore2018}, also known as quasi-Majorana modes \cite{vuik2018reproducing,Stanescu_Robust}, whose low bias conductance signature depends on the overlap of the wave functions of the component Majorana bound states of a conventional fermionic state. The observation of zero bias conductance peaks with peak height $\sim \frac{2e^2}{h}$ only at isolated points in the parameter space, and the absence of convincing quantized plateaus around them, may be an indication of significant residual disorder in the hybrid nanowires. With sustained improvements in material parameters by reducing disorder and interface inhomogeneity, we hope that a quantized conductance plateau with peak height $\frac{2e^2}{h}$ will eventually be found in experiments, not just along a single tuning parameter but in islands in a higher-dimensional parameter space, confirming the existence of topological MZMs in SM-SC heterostructures.
\section{Introduction}
\IEEEPARstart{D}{uring} recent years, with the standardization of the 5G new radio (NR) communication systems and the ongoing resurgence of satellite communications (SatCom), the integration of satellite and terrestrial 5G networks is considered a promising approach for future mobile communications \cite{Integ,Satellite-enabled,Energy,Satellite-5G,kato19optim,Jia18space}.
Thanks to their wide-area service coverage capabilities, satellite networks are expected to foster the roll-out of 5G services in un-served areas that are not covered by terrestrial 5G networks \cite{3gpp.38.811,ASurvey,robust18wang,outa19you,massive20you,sat18kap}.
Several key impacts on 5G NR protocols/architecture have been identified to provide support for non-terrestrial networks \cite{Architectures,ran18xiong,Kons18use}, one of which is the adaptability of the existing 5G uplink timing advance (TA) method in low earth orbit (LEO) SatCom.
To ensure the uplink intra-cell orthogonality, 5G NR requires that the signals transmitted from different users within the same subframe arrive approximately in a time-aligned manner when reaching the base station (BS), i.e., the BS can receive the uplink frames within the range of one cyclic prefix (CP) \cite{sch185g,Erik18nr,3gpp.38.213}.
To this end, 5G NR employs an uplink TA scheme during the random access procedure to avoid timing misalignment interference, particularly in the terrestrial networks.
However, in a typical LEO SatCom system, the differential time delay will be significantly larger than that of the terrestrial networks.\footnote{For example, with an orbital altitude of 1000 km and the minimum elevation angle of $ 20^{\circ} $, the differential time delay will be approximately 3.74 ms.} Moreover, the propagation delay in a satellite-to-ground link varies due to the fast movement of the LEO satellite.
Such a significant difference between the LEO SatCom system and terrestrial wireless one raises a question: \textit{Is it possible to achieve accurate TA estimation in the LEO satellite networks employing a random access procedure compatible with 5G NR?} This paper aims to answer this question.
The TA estimation for random access in non-terrestrial networks (NTN) has been investigated during the past few years.
Recent 3rd generation partnership project (3GPP) studies have identified that location information of user equipment (UE) is beneficial for uplink TA estimation \cite{3gpp.38.811}.
Some proposals also consider several physical random access channel (PRACH) formats for long-distance transmissions, such as the use of long sequences (length $=839$) for both FR1 (450 MHz-6 GHz) and FR2 (24.25 GHz-52.6 GHz) operating bands, and more repetitions or multiple sequence transmissions \cite{ran19harri,ran18zhen,Zhen20prea,Si13lte,Caus20new,3gpp.38.104}.
For SatCom systems with large Doppler shifts and oscillator uncertainties, symmetric Zadoff-Chu (ZC) sequences have been adopted to estimate TA \cite{Enhanced}.
In \cite{Two-step}, a two-step time delay difference estimation was presented for SatCom systems, which first divides a beam cell into some layered small sub-areas and then two types of PRACH preamble burst formats are transmitted.
TA estimation based on the correlation between a ZC sequence and its conjugate replica has been used in \cite{Timing}. In \cite{Yu20timing}, a reliable TA estimation approach with robustness to frequency offset was proposed in satellite mobile communication scenarios. Compared with sending TA commands from the satellite to the UE, signaling overhead can be significantly reduced if the TA value can be estimated directly at the UE side. However, to the best of our knowledge, most previous works on TA estimation for SatCom systems were carried out at the satellite side during the uplink while little focus has been placed on the investigation of TA estimation at the UE side with the utilization of 5G downlink synchronization signals.
In this paper, we propose a novel UE location information-assisted approach for uplink TA estimation in 5G integrated LEO SatCom. Two specific but important scenarios are taken into account. One is that the satellite broadcasts ephemeris periodically, and the UE is not able to obtain its position using a global navigation satellite system (GNSS). The visibility of 5G LEO satellites and that of GNSS satellites, which are mostly deployed in medium earth orbits (MEOs), is not similar \cite{Prin13paul}. Moreover, it is sometimes desirable to design a system that can work independently of other systems. Hence, the consideration of this scenario is reasonable. This could happen in urban scenarios where having enough GNSS satellites to compute the position is troublesome, but 5G LEO satellites are visible. The other is that the satellite does not broadcast ephemeris, and the UE can perform GNSS positioning.
We utilize the timing and frequency offset estimates acquired from downlink synchronization signals to perform TA estimation in these two scenarios separately.
The timing and frequency offset estimation can be then transformed into time difference of arrival (TDOA) and frequency difference of arrival (FDOA) measurements equivalently. As the combined TDOA and FDOA measurements have been extensively used in the source localization \cite{Sun11an,Kc04an,Geolocation,Iterative}, we propose to adopt them to estimate the UE location or satellite ephemeris information, and then the value of uplink TA can be calculated at the UE side. The major contributions of our work are summarized as follows:
\begin{itemize}
\item We propose a method for TA estimation in 5G integrated LEO SatCom through a 5G NR compatible random access procedure. Depending on whether the UE has the capability of GNSS positioning or not, we divide the scenarios for the TA estimation problem into two categories and carry out the studies separately. We further extend the problem to multi-satellite networks based on the system model of TA estimation in the single-satellite case.
\item We estimate the timing and frequency offset in the downlink synchronization phase to acquire the TDOA and FDOA measurements. With these measurements, we convert the problem of TA estimation into either UE geolocation or ephemeris estimation. As the altitude of UE is often known, we exploit this altitude information to improve the positioning accuracy of UE.
\item We formulate the equality-constrained optimization problem via using the system model of 5G integrated LEO SatCom for either geographical location or ephemeris estimation. Then we propose a quadratic penalty algorithm to find the globally optimal solution of the estimation problem. In order to reduce the computational complexity, we further propose an iterative constrained weighted least squares (CWLS) method for this equality-constrained problem.
\end{itemize}
Some of the notations adopted in this paper are listed as follows:
\begin{itemize}
\item Upper and lower case boldface letters denote matrices and column vectors, respectively.
\item $ \mathbb{R}^{M\times N} $ denotes the $ M \times N $ dimensional real-valued vector space.
\item $\mathbf{I}_{N}$ and $ \mathbf{0}_{M\times N} $ denote the $ N\times N $ dimensional identity matrix and $ M\times N $ dimensional zero matrix, respectively. The subscripts are sometimes omitted for brevity.
\item The superscript $ (\cdot)^{T} $, $ (\cdot)^{-1} $, and $ (\cdot)^{\dagger} $ denote the transpose, inverse, and pseudo-inverse operations, respectively.
\item $ \lVert \cdot \lVert $ denotes the Euclidean norm.
\item $ \mathrm{det}{(\cdot)} $ denotes the determinant operation.
\item $\dot {(\cdot)}$ denotes the derivative of $(\cdot)$ with respect to time.
\item $ \mathrm{diag}(\cdot) $ denotes the diagonal matrix with the elements of $ (\cdot) $ on the main diagonal.
\item $\nabla(\cdot) $ denotes the gradient computation.
\item $ [\mathbf{A}]_{i,:} $, $ [\mathbf{A}]_{:,j} $, and $ [\mathbf{A}]_{i,j} $ denote the $ i $-th row, the $ j $-th column and the $ (i,j) $-th element of the matrix $ \mathbf{A} $, respectively.
\item All estimated parameters are described as $ \hat{(\cdot)} $.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{system_m} introduces the TA problem in 5G integrated LEO SatCom. Section \ref{location_esti} and Section \ref{ephemeris_esti} present the location and ephemeris estimation algorithms with downlink synchronization signals, respectively. Section \ref{multi-satellite} presents the TA estimation in multi-satellite systems. Section \ref{numerical_resu} illustrates the numerical results and Section \ref{conclusion} concludes the paper.
\section{System Model}\label{system_m}
\subsection{Timing Advance in 5G Integrated LEO SatCom}
In the development of 5G integrated SatCom, most existing works focused on the air interface design of the satellite module to maximize utilization of the technology commonalities with the terrestrial systems, so as to reduce the implementation costs and simplify the interactive procedures.
For example, as the 5G NR basic waveform, CP Orthogonal Frequency Division Multiplexing Access (CP-OFDMA) requires that the signals transmitted from different UEs are time-aligned when reaching the BS to keep the uplink intra-cell orthogonality, i.e., any timing misalignment across the received signals should fall within the range of one CP.
To this end, 5G NR adopts a scheme for TA during the random access procedure, where the BS first estimates the uplink TA for the UE through the PRACH preamble and then sends the adjustment information to the UE by the random access response (RAR) message \cite{3gpp.38.213}.
The UE further adjusts its uplink transmission time based on the received TA values combined with the acquired downlink timing synchronization information.
However, such a scheme is designed specifically for the terrestrial networks and may not be applicable to the satellite-to-ground transmission.
For example, the cell coverage is limited by the CP length of PRACH preambles.
The current 5G NR PRACH preamble formats with $L_{RA} = 839$ allow cell coverage ranging from 15 km to 102 km. The beam footprints of LEO satellites can be designed to be similar in size to the terrestrial cell coverage. However, in most practical LEO satellites, due to coverage requirements and implementation complexity, the diameter of beam footprints is designed on the order of several hundred kilometers, which is significantly larger than that in terrestrial systems \cite{Por19tech,Xia19beam}.
In addition, large variation of the round-trip delay in satellite-to-ground communications within a cell/beam would limit the availability of cyclic shift (CS) multiplexing as well, resulting in smaller cell reuse factor of preamble sequences \cite{R1.1908818}.
On the other hand, the maximum value of the TA command in the RAR message defined in 5G NR may be smaller than the round-trip delay of a LEO satellite-to-ground link \cite{3gpp.38.811}.
\newcolumntype{L}{>{\hspace*{-\tabcolsep}}l}
\newcolumntype{R}{c<{\hspace*{-\tabcolsep}}}
\definecolor{lightblue}{rgb}{0.93,0.95,1.0}
In addition, due to the high-speed motion of the LEO satellite, the satellite-to-ground links usually exhibit a varying propagation delay. As the rate of change of the propagation delay between the UE and the satellite is very large, the TA command sent by the satellite is outdated by the time the UE receives it \cite{R1.1910982}. Hence, to account for this expected TA inaccuracy, an additional adjustment procedure is needed to update the original TA command, as shown in Fig. \ref{fig1_0325}.
In view of these challenges, 3GPP has agreed that several options can be considered to support TA adjustment in random access procedure for NTN.
Firstly, when UE positioning capabilities, e.g., GNSS positioning, are enabled at the UE side, they can be used to enhance the TA estimation at the UE side and minimize the amount of signaling required, especially in LEO SatCom systems \cite{R1.1910064}.
Moreover, as indicated by Fig. \ref{fig1_0325}, the total TA can be divided into beam/cell specific common TA $ L_{1} $ and user-specific differential TA $ L_{2} $, where the former is used to compensate for the round-trip delay at a reference point within the cell/beam, e.g., the nearest point to the satellite, and the latter is used to represent the difference between the common TA and the actual TA for a specific user.
Note that the common TA can be obtained by UEs via broadcast information from the satellite and then only the differential TA is responded in the RAR message.
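For concreteness, the split into a common and a differential TA can be illustrated with a simple geometric sketch (Python; the ECEF positions, spherical-Earth approximation, and all numbers below are illustrative assumptions, not values from the 3GPP documents):
\begin{verbatim}
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def round_trip_ta(sat, ue):
    """Round-trip propagation delay (full TA) of a satellite-UE link."""
    return 2.0*np.linalg.norm(sat - ue)/C

R_E = 6_371e3       # spherical-Earth radius, m
sat = np.array([0.0, 0.0, R_E + 1_000e3])   # LEO satellite at 1000 km
ref = np.array([0.0, 0.0, R_E])             # reference (nearest) point
ue  = np.array([300e3, 0.0, np.sqrt(R_E**2 - (300e3)**2)])  # beam edge

L1 = round_trip_ta(sat, ref)        # beam/cell specific common TA
L2 = round_trip_ta(sat, ue) - L1    # user-specific differential TA
print(f"common TA L1 = {L1*1e3:.3f} ms, differential TA L2 = {L2*1e3:.3f} ms")
\end{verbatim}
Only $L_{1}$ needs to be broadcast; the user-specific part $L_{2}$ is what must be determined per UE.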
\begin{figure}
\centering
\includegraphics[width=8cm]{./fig1_0516.pdf}
\caption{Illustration of cell/beam coverage of NTN.}
\label{fig1_0325}
\end{figure}
\subsection{Location Based Timing Advance Estimation}
In this part, we investigate how a UE can get a rough user-specific TA in 5G integrated LEO SatCom using only downlink signals. In 5G NR, the downlink synchronization signal block (SSB) is introduced, consisting of the primary and secondary synchronization signals \cite{3gpp.38.300}. The UE can acquire timing and frequency synchronization within a cell and the physical layer Cell ID through the detection of SSB. In the following, we first introduce the 5G downlink synchronization signals received at the UE side.
Consider $ K $ OFDM symbols in one frame. Denote the number of subcarriers and length of CP as $ N $ and $ N_{g} $, respectively. Then the received signal corresponding to the $ k $-th OFDM symbol $ s_{k}(n),k=1,2,...,K,n=0,1,...,N+N_{g}-1 $ can be given by \cite{Near}
\begin{align}\label{eq1}
r_{k}(n)=e^{j2\pi n\varepsilon/N}\sum_{l=0}^{L-1}h(l)s_{k}(n-\theta-l)+z(n),
\end{align}
where $\theta$ denotes the integer-valued symbol timing offset (normalized by the sampling interval) and $\varepsilon$ denotes the normalized carrier frequency offset (CFO) with respect to the subcarrier spacing, including the contribution of the difference in the transmitter and receiver oscillators $ \varepsilon_{e} $ and the Doppler frequency shift $ \varepsilon_{d} $ introduced during the downlink transmission of signals. $ \varepsilon_{e} $ is assumed to be constant during the observation time of the signal \cite{Morelli16robust,Hsieh99low,Minn03robust}. In addition, $ h(l) $, $ l=0,1,...,L-1 $ denotes the impulse response of a multipath channel with $ L $ uncorrelated taps, and $ z(n) $ is the additive white Gaussian noise with zero mean and variance $ \sigma_{z}^{2} $.
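As a sanity check of this signal model, the following Python sketch (with illustrative dimensions) generates one CP-OFDM symbol, applies an integer delay $\theta$, a random multipath channel, and a CFO $\varepsilon$ as in the equation above, and then recovers $\varepsilon$ from the cyclic-prefix correlation. This simple CP-based estimate is used here only to illustrate that the offsets are recoverable from the received samples; the algorithms of \cite{Near} are more refined.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, Ng, L = 256, 32, 4           # subcarriers, CP length, channel taps
theta, eps = 11, 0.13           # integer timing offset, normalized CFO

# one CP-OFDM symbol with a random QPSK payload
X = (rng.choice([-1, 1], N) + 1j*rng.choice([-1, 1], N))/np.sqrt(2)
x = np.fft.ifft(X)*np.sqrt(N)
s = np.concatenate([x[-Ng:], x])        # prepend the cyclic prefix

# delay, multipath channel, CFO and noise as in the signal model
h = (rng.standard_normal(L) + 1j*rng.standard_normal(L))/np.sqrt(2*L)
r = np.convolve(np.concatenate([np.zeros(theta), s]), h)[:theta + N + Ng]
r = np.exp(2j*np.pi*np.arange(len(r))*eps/N)*r
r += 0.05*(rng.standard_normal(len(r)) + 1j*rng.standard_normal(len(r)))

# CP correlation: r(n) and r(n+N) differ by the phase 2*pi*eps in the CP
corr = (r[theta:theta + Ng].conj()*r[theta + N:theta + N + Ng]).sum()
print("estimated CFO:", np.angle(corr)/(2*np.pi), " true CFO:", eps)
\end{verbatim}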
By using the maximum log-likelihood criterion, discrete prolate spheroidal sequences (DPSS), and the CP structure of OFDM, the timing and frequency offset estimation algorithms in \cite{Near} can achieve near optimal performance, i.e., the parameters $ \theta $ and $ \varepsilon $ can be accurately estimated from the observations $ r_{k}(n) $ at the receiver. Denote the total number of downlink SSBs as $ M $. Consider a short period, namely the timing-window,\footnote{Note that the timing-window refers to the product of ($ M-1 $) and the time interval between two adjacent SSBs.} in which the satellite is visible to the target ground UE. Within this timing-window, we can then obtain sequences of timing and frequency offset estimates corresponding to the different SSBs, written as $
\widetilde{\theta}_{i} $ and $ \widetilde{\varepsilon}_{i} $, where $ i=1,2,...,M $ denotes the serial number of the SSBs.
Taking the initial synchronization timing offset $ \widetilde\theta_{1} $ and CFO $ \widetilde\varepsilon_{1} $ as the reference, the estimated TDOA $ \widetilde t_{i,1} $ and FDOA $\widetilde f_{i,1} $, $ i=2,3,...,M $, between SSB $ i $ and SSB $ 1 $ are given by
\begin{subequations}\label{equ1}
\begin{align}
\widetilde t_{i,1}&=(\widetilde\theta_{i}-\widetilde \theta_{1}) T_{s},\label{equ1_1}\\
\widetilde f_{i,1}&=(\widetilde \varepsilon_{i}-\widetilde \varepsilon_{1}) \Delta f,\label{equ1_2}
\end{align}
\end{subequations}
respectively, where $ T_{s} $ and $ \Delta f $ represent the sampling interval and subcarrier spacing, respectively. Accounting for the TDOA noise $ \Delta t_{i,1} $ and FDOA noise $ \Delta f_{i,1} $ caused by the estimation errors of the timing offset and CFO, the measurements can be decomposed as
\begin{subequations}\label{ti1}
\begin{align}
\widetilde t_{i,1}=t_{i,1}+\Delta t_{i,1},\\
\widetilde f_{i,1}=f_{i,1}+\Delta f_{i,1},
\end{align}
\end{subequations}
where $ t_{i,1} $ and $ f_{i,1} $ denote the noise-free values of TDOA and FDOA, respectively.
Hence, based on the timing and CFO estimation algorithm operating on the 5G downlink synchronization signals, we can obtain noisy measurements of the TDOA and FDOA.
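As a toy illustration of (\ref{equ1}), the sketch below converts per-SSB timing and CFO estimates into TDOA and FDOA measurements; the sampling interval and subcarrier spacing match the simulation setup of Section \ref{numerical_resu}, while the estimate arrays are placeholders.
\begin{verbatim}
# Sketch: timing/CFO estimates -> TDOA/FDOA measurements.
# theta_hat, eps_hat stand in for the per-SSB estimates.
import numpy as np

T_s, delta_f = 1 / 30.72e6, 15e3       # sampling interval, SCS
theta_hat = np.array([100.0, 96.4, 91.1, 83.9])  # placeholders
eps_hat = np.array([0.21, 0.19, 0.16, 0.12])     # placeholders

t_tdoa = (theta_hat[1:] - theta_hat[0]) * T_s    # TDOA t_{i,1}
f_fdoa = (eps_hat[1:] - eps_hat[0]) * delta_f    # FDOA f_{i,1}
\end{verbatim}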
We then formulate the TA estimation problem in the following two important scenarios:
\begin{itemize}
\item[1)]\textit{Scenario 1: The satellite broadcasts ephemeris periodically and the GNSS service is not available for UEs.} As the ephemeris information is available at the UE side, the TA estimation for random access in 5G integrated LEO SatCom is therefore transformed into the location estimation of the UE with the utilization of downlink synchronization signals. With the UE location estimates and the ephemeris information, the propagation delay between the UE and the satellite can be estimated at the UE side. The UE can then adjust the timing of its uplink transmissions based on the delay estimates.
Regarding the broadcast ephemeris data, there are two different possible representations as described in \cite{3gpp.38.821}. One possibility is to use orbital parameters, including the orbital plane and satellite level parameters. The other is to provide the location and velocity of the satellite combined with a reference point in time. Since several satellites typically share a common orbital plane in a satellite network, the orbital plane parameters remain the same and can be pre-provisioned to the UE as baseline ephemeris data. Then only the satellite level parameters need to be broadcast to the UE via system information. Hence, in this paper, we consider the first option to represent the ephemeris data in order to reduce the broadcasting overhead. In addition, to make sure that the UE always uses the latest ephemeris data for initial access, once the UE obtains new ephemeris data, the previously stored parameters are regarded as obsolete and overwritten by the newer values.
\item[2)]\textit{Scenario 2: The satellite does not broadcast ephemeris but the UE has the capability of GNSS positioning.} In this scenario, the UE location is available through GNSS positioning. However, the ephemeris information is unknown. Thus, the TA estimation is converted into the ephemeris estimation with the downlink synchronization signals, and then the propagation delay between the UE and the satellite can be calculated.
\end{itemize}
Note that in this paper, we develop the TA estimation method for the regenerative architecture, which only involves the user link between the satellite and the UE. In practice, the gateway and satellite locations are assumed to be known to each other. Then the propagation delay and frequency shift of the feeder link can be pre-compensated at the satellite or the gateway side. Hence, our work can also be applied to the transparent architecture. In addition, both the 2-step and 4-step random access procedures are taken into consideration in this paper. Fig. \ref{fig1_0203} shows the 2-step and 4-step random access procedures with location-based TA estimation for 5G integrated LEO SatCom. The type of random access is selected by the UE at the initiation of the random access procedure based on the network configuration.
As the TA estimation is transformed into the geolocation of the UE or the ephemeris estimation with downlink synchronization signals, the next sections present the location estimation algorithms for \textit{Scenario 1} and the ephemeris estimation algorithms for \textit{Scenario 2}.
\begin{figure}
\centering
\includegraphics[width=9cm]{./fig1_0223_2.pdf}
\caption{The random access procedure with location-based TA estimation.}
\label{fig1_0203}
\end{figure}
\section{Location Estimation Algorithms with Downlink Synchronization Signals}\label{location_esti}
In this section, we focus on \textit{Scenario 1}, and our aim is to estimate the UE location with the downlink synchronization signals. With the relationship between timing/frequency offset estimation and TDOA/FDOA measurements in (\ref{equ1_1}) and (\ref{equ1_2}), UE location estimation with the downlink synchronization signals is converted into the geolocation with joint TDOA and FDOA measurements. In this part, we first relate the TDOA and FDOA measurements to the unknown UE location.
\subsection{Problem Formulation}
\begin{figure}
\centering
\includegraphics[width=8cm]{./relpos.pdf}
\caption{Geometric relation of a single LEO satellite and the UE in ECEF coordinate.}
\label{relpos}
\end{figure}
Fig. \ref{relpos} shows a single LEO satellite and a UE located on the surface of the earth in the Earth Centered Earth Fixed (ECEF) coordinate system, which is aligned with the equatorial plane and the Greenwich meridian \cite{SatelliteOrbits}. The position and velocity vectors of the UE in ECEF are denoted by $\mathbf{p}=[x,y,z]^{T}$ and $\dot{\mathbf{p}}=[\dot{x},\dot{y},\dot{z}]^{T}$, respectively. The satellite locations $\mathbf{s}_{i}=[x_{i},y_{i},z_{i}]^{T}$ and velocities $ \dot{\mathbf{s}}_{i}=[\dot{x}_{i},\dot{y}_{i},\dot{z}_{i}]^{T} $ at the transmit instant of the $ i $-th SSB, $ i=1,2,...,M $, are assumed known from the broadcast satellite ephemeris, as supposed in \textit{Scenario 1}.
Let $d_{i}$ represent the distance between the satellite and the UE corresponding to the $ i $-th downlink SSB given by
\begin{align}\label{di}
d_{i}=\lVert\mathbf{s}_{i}-\mathbf{p}\rVert,\quad i=1,2,...,M.
\end{align}
Then the range difference of arrival between the $ i $-th and the first SSB related to the TDOAs is given by
\begin{align}\label{di1}
d_{i,1}=d_{i}-d_{1}=c t_{i,1},\quad i=2,3,...,M,
\end{align}
where $ c $ is the speed of light. Taking the derivative of (\ref{di1}) with respect to time, we obtain the range rate differences \cite{Iterative}, denoted as $ \dot{d}_{i,1} $ and given by
\begin{align}\label{di11}
\dot d_{i,1}=c \dot t_{i,1}=\dot d_{i}-\dot d_{1},\quad i=2,3,...,M,
\end{align}
where $ \dot t_{i,1} $ and $ \dot d_{i} $ denote the rate of change of $ t_{i,1} $ and $ d_{i} $, respectively. From the derivative of (\ref{di}) with respect to time, $ \dot d_{i} $ can be further described as
\begin{align}\label{ddi}
\dot d_{i}
=\frac{(\mathbf{s}_{i}-\mathbf{p})^{T}(\dot{\mathbf{s}}_{i}-\dot{\mathbf{p}})}{d_{i}}.
\end{align}
The rate $ \dot t_{i,1} $ in (\ref{di11}) can be derived from the FDOAs, which are written as \cite{pc82emitter}
\begin{align}\label{fi1}
f_{i,1}=f_{c}{\dot t_{i,1}},
\end{align}
where $ f_{c} $ denotes the carrier frequency.
From (\ref{di1}), it can be observed that the TDOAs are equivalent to the range differences. In the following, the TDOAs and range differences will be used interchangeably. In addition, the FDOAs and range rate differences will also be used interchangeably, as they are equivalent by (\ref{di11}) and (\ref{fi1}).
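The geometric quantities in (\ref{di})--(\ref{fi1}) can be evaluated directly from the ephemeris and a candidate UE state, as in the following sketch; the satellite and UE states are illustrative placeholders.
\begin{verbatim}
# Sketch: ranges, range/range-rate differences, and FDOAs.
# S, S_dot: M x 3 satellite positions/velocities (ephemeris);
# p, p_dot: UE position/velocity. All values are placeholders.
import numpy as np

c, f_c = 299792458.0, 2.6e9
S = np.array([[7.1e6, 1e5, 2e5], [7.1e6, 2e5, 2e5],
              [7.1e6, 3e5, 2e5], [7.1e6, 4e5, 2e5]])
S_dot = np.tile([0.0, 7.3e3, 0.0], (4, 1))
p, p_dot = np.array([6.37e6, 0.0, 0.0]), np.zeros(3)

d = np.linalg.norm(S - p, axis=1)                    # d_i
d_dot = np.einsum('ij,ij->i', S - p, S_dot - p_dot) / d
d_i1 = d[1:] - d[0]             # range differences d_{i,1}
d_dot_i1 = d_dot[1:] - d_dot[0] # range-rate differences
f_i1 = f_c * d_dot_i1 / c       # corresponding FDOAs f_{i,1}
\end{verbatim}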
Taking into account the noise caused by the estimation errors of the timing and frequency offsets, we define $ \widetilde{d}_{i,1} $ and $ \widetilde{\dot{d}}_{i,1} $ as the measured values of the range and range rate differences, respectively. They can be derived from the noisy TDOA and FDOA sequences as
\begin{subequations}
\begin{align}
\widetilde{d}_{i,1}&=d_{i,1}+c\Delta t_{i,1},\\ \widetilde{\dot{d}}_{i,1}&=\dot{d}_{i,1}+ c\Delta \dot{t}_{i,1},
\end{align}
\end{subequations}
where $ \Delta \dot{t}_{i,1} = \Delta f_{i,1}/f_{c} $ is equivalent to the FDOA noise. Let $ \mathbf{n}_{t}=[c\Delta t_{2,1},c\Delta t_{3,1},...,c\Delta t_{M,1}]^{T}\in\mathbb{R}^{(M-1)\times1} $ and $ \mathbf{n}_{f}=[c\Delta \dot t_{2,1},c\Delta\dot t_{3,1},...,c\Delta \dot t_{M,1}]^{T}\in\mathbb{R}^{(M-1)\times1} $ be the vectors of TDOA and FDOA noises, respectively. We assume that both are zero-mean with covariance matrices
\begin{align}
\mathbf{Q}_{t}=\mathbf{E}[\mathbf{n}_{t}\mathbf{n}_{t}^{T}],\quad
\mathbf{Q}_{f}=\mathbf{E}[\mathbf{n}_{f}\mathbf{n}_{f}^{T}].
\end{align}
Expanding the squared term in $d_{i}^{2}=(d_{i,1}+d_{1})^{2}$, and using (\ref{di}) to expand $d_{i}^{2}$ and $d_{1}^{2}$, we obtain a set of TDOA equations
\begin{equation}
\begin{aligned}
\label{ri11}
d_{i,1}^{2} +2d_{i,1}d_{1}=-2(\mathbf{s}_{i}-\mathbf{s}_{1})^{T}\mathbf{p}+\mathbf{s}_{i}^{T}\mathbf{s}_{i}-\mathbf{s}_{1}^{T}\mathbf{s}_{1},\\ i=2,3,...,M.
\end{aligned}
\end{equation}
Further, to make use of the FDOAs, we take the derivative of (\ref{ri11}) with respect to time and obtain
\begin{equation}\label{td}
\begin{aligned}
{ d_{i, 1} \dot{d}_{i, 1}+ d_{i, 1} \dot{d}_{1}+ \dot{d}_{i, 1} d_{1}- \mathbf{s}_{i}^{T} \dot{\mathbf{s}}_{i}+ \mathbf{s}_{1}^{T} \dot{\mathbf{s}}_{1}} =\\-\left(\dot{\mathbf{s}}_{i}-\dot{\mathbf{s}}_{1}\right)^{T} \mathbf{p}-\left(\mathbf{s}_{i}-\mathbf{s}_{1}\right)^{T} \dot{\mathbf{p}}.
\end{aligned}
\end{equation}
Define $ \mathbf{u}_{1}=[\mathbf{p}^{T},\dot{\mathbf{p}}^{T},d_{1},\dot{d}_{1}]^{T} $. Noting that $ d_{i,1}=\widetilde{d}_{i,1}-c\Delta t_{i,1} $ and $ \dot{d}_{i,1}=\widetilde{\dot{d}}_{i,1}-c\Delta \dot{t}_{i,1} $, the set of equations (\ref{ri11}) and (\ref{td}) becomes
\begin{align}\label{h1}
\mathbf{h}_{1}=\mathbf{G}\mathbf{u}_{1}+\boldsymbol{\epsilon},
\end{align}
where
\begin{equation}
\mathbf{h}_{1} =\left[\begin{array}{c}{\widetilde d_{2,1}^{2}-\mathbf{s}_{2}^{T} \mathbf{s}_{2}+\mathbf{s}_{1}^{T} \mathbf{s}_{1}} \\ {\widetilde d_{3,1}^{2}-\mathbf{s}_{3}^{T} \mathbf{s}_{3}+\mathbf{s}_{1}^{T} \mathbf{s}_{1}} \\ {\vdots} \\ {\widetilde d_{M,1}^{2}-\mathbf{s}_{M}^{T} \mathbf{s}_{M}+\mathbf{s}_{1}^{T} \mathbf{s}_{1}} \\ {2 \widetilde d_{2, 1} \widetilde{\dot{d}}_{2, 1}-2 \mathbf{s}_{2}^{T} \dot{\mathbf{s}}_{2}+2 \mathbf{s}_{1}^{T} \dot{\mathbf{s}}_{1}} \\ {2\widetilde d_{3, 1} \widetilde{\dot{d}}_{3, 1}-2 \mathbf{s}_{3}^{T} \dot{\mathbf{s}}_{3}+2 \mathbf{s}_{1}^{T} \dot{\mathbf{s}}_{1}} \\ {\vdots} \\ {2\widetilde d_{M, 1} \widetilde{\dot{d}}_{M, 1}-2 \mathbf{s}_{M}^{T} \dot{\mathbf{s}}_{M}+2 \mathbf{s}_{1}^{T} \dot{\mathbf{s}}_{1}}\end{array}\right],
\end{equation}
\begin{equation}
\begin{aligned}
\mathbf{G} = -2\begin{bmatrix}
\mathbf{s}_{2}^{T}-\mathbf{s}_{1}^{T} & \mathbf{0}_{1\times3} & \widetilde{d}_{2,1} & 0 \\
\mathbf{s}_{3}^{T}-\mathbf{s}_{1}^{T} & \mathbf{0}_{1\times3}& \widetilde{d}_{3,1} & 0 \\
\vdots & \vdots & \vdots & \vdots \\
\mathbf{s}_{M}^{T}-\mathbf{s}_{1}^{T} & \mathbf{0}_{1\times3}& \widetilde{d}_{M,1} & 0 \\
\dot{\mathbf{s}}_{2}^{T}-\dot{\mathbf{s}}_{1}^{T} & \mathbf{s}_{2}^{T}-\mathbf{s}_{1}^{T} & \widetilde{\dot{d}}_{2,1} &\widetilde{d}_{2,1}\\
\dot{\mathbf{s}}_{3}^{T}-\dot{\mathbf{s}}_{1}^{T} & \mathbf{s}_{3}^{T}-\mathbf{s}_{1}^{T}& \widetilde{\dot{d}}_{3,1} &\widetilde{d}_{3,1}\\
\vdots &\vdots & \vdots & \vdots \\
\dot{\mathbf{s}}_{M}^{T}-\dot{\mathbf{s}}_{1}^{T} & \mathbf{s}_{M}^{T}-\mathbf{s}_{1}^{T}& \widetilde{\dot{d}}_{M,1} &\widetilde{d}_{M,1}
\end{bmatrix},
\end{aligned}
\end{equation}
and $ \boldsymbol{\epsilon} $ is the error vector derived from (\ref{ri11}) and (\ref{td}). By ignoring the second-order error terms, $ \boldsymbol{\epsilon} $ becomes a Gaussian random vector with covariance matrix given by
\begin{align}\label{psi}
\boldsymbol\Psi=\left[\begin{array}{cc}{\mathbf{B}} & {\mathbf{0}} \\ {\dot{\mathbf{B}}} & {\mathbf{B}}\end{array}\right]\left[\begin{array}{cc}{\mathbf{Q}_{t}} & {\mathbf{0}} \\ {\mathbf{0}} & {\mathbf{Q}_{f}}\end{array}\right]\left[\begin{array}{cc}{\mathbf{B}} & {\dot{\mathbf{B}}} \\ {\mathbf{0}} & {\mathbf{B}}\end{array}\right]\\ \nonumber
\in\mathbb{R}^{2(M-1)\times2(M-1)},
\end{align}
where
\begin{align}\label{Bdiag}
\mathbf{B}=&2\mathrm{diag}\{d_{2},d_{3},...,d_{M}\}\in\mathbb{R}^{(M-1)\times(M-1)},\\\label{dBdiag}
\dot{\mathbf{B}}=&2\mathrm{diag}\{\dot{d}_{2},\dot{d}_{3},...,\dot{d}_{M}\}\in\mathbb{R}^{(M-1)\times(M-1)}.
\end{align}
Assuming that the elements of $ \mathbf{u}_{1} $ are statistically independent, the maximum-likelihood estimate of $ \mathbf{u}_{1} $ can be written as
\begin{align}\label{esti_1}
\mathbf{\hat u}_{1}=\mathop{\arg\max}\limits_{\mathbf{u}_{1}}\ \log f(\mathbf{h}_{1}|\mathbf{u}_{1}),
\end{align}
where $ f(\mathbf{h}_{1}|\mathbf{u}_{1}) $ is the conditional probability density function of $ \mathbf{h}_{1} $ given $ \mathbf{u}_{1} $, expressed as
\begin{align}\label{esti_2}
f(\mathbf{h}_{1}|\mathbf{u}_{1})=&\dfrac{1}{(2\pi)^{M-1}(\mathrm{det}(\boldsymbol\Psi))^{1/2}}\notag\\&\cdot\exp{\left\lbrace -\dfrac{1}{2}(\mathbf{h}_{1}-\mathbf{G}\mathbf{u}_{1})^{T}\boldsymbol\Psi^{-1}(\mathbf{h}_{1}-\mathbf{G}\mathbf{u}_{1}) \right\rbrace }.
\end{align}
Thus, the maximum-likelihood estimate of $ \mathbf{u}_{1} $ can be described as
\begin{align}\label{esti}
\mathbf{\hat u}_{1}=\mathop{\arg\min}\limits_{\mathbf{u}_{1}}\ {\left\lbrace (\mathbf{h}_{1}-\mathbf{G}\mathbf{u}_{1})^{T}\boldsymbol\Psi^{-1}(\mathbf{h}_{1}-\mathbf{G}\mathbf{u}_{1})\right\rbrace}.
\end{align}
The weighting matrix $ \boldsymbol\Psi $ is unknown in practice, since $\mathbf{B}$ and $ \dot{\mathbf{B}} $ contain the exact satellite-UE distances and their rates of change, respectively.
We propose to solve this problem through a further approximation, which considers two typical cases. In the first case, a short random access procedure, the distances $d_{i}$, $i=1,...,M$, are close to one another. Supposing they all approach a common value $d^{0}$, we have $\mathbf{B}\approx 2d^{0}\mathbf{I}$ and, correspondingly, $\dot{\mathbf{B}}\approx \mathbf{0}$. Since scaling $ \boldsymbol{\Psi} $ does not affect the solution to problem (\ref{esti}), we substitute $ \mathbf{I} $ for $\mathbf{B}$ to simplify the weighting matrix. In the other case, initial access requiring a search for the UE location, the observation window of the satellite may be much larger; we then take the initial values of $\mathbf{B}$ and $ \dot{\mathbf{B}} $ as $ \mathbf{I} $ and $ \mathbf{0} $, respectively, and iteratively update the weighting matrix by (\ref{psi}) with the latest estimation results \cite{Geolocation}. In the following, we focus on the estimation problem with a fixed weighting matrix.
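With a fixed weighting matrix, the minimizer of (\ref{esti}) reduces to a standard weighted least-squares solution; a minimal sketch, assuming $\mathbf{G}$, $\mathbf{h}_{1}$, and $\boldsymbol\Psi$ have already been assembled, reads:
\begin{verbatim}
# Weighted least-squares solution for a fixed weighting
# matrix; G, h1, Psi are assumed already assembled.
import numpy as np

def wls_estimate(G, h1, Psi):
    """argmin_u (h1 - G u)^T Psi^{-1} (h1 - G u)."""
    W = np.linalg.inv(Psi)
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ h1)
\end{verbatim}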
In the solution of problem (\ref{esti}), the correlations among the elements of $ \mathbf{u}_{1} $ are not considered. However, they are related to one another in practice \cite{Feng16app,Feng18on}. In the following, we aim to exploit this relationship to provide an improved estimate. First, considering a non-spherical earth model \cite{Geolocation} with the UE located on the surface of the earth, the UE location $ \mathbf{p} $ satisfies the following equation
\begin{align}\label{earth}
\frac{{x}^{2}}{(R_{a})^{2}}+\frac{{y}^{2}}{(R_{a})^{2}}+\frac{{z}^{2}}{(R_{b})^{2}}-1=0,
\end{align}
where $R_{a}$ and $ R_{b} $ denote the semi-major and semi-minor axes of the earth, respectively. In addition, the elements of $ \mathbf{u}_{1} $ are also related by (\ref{di}) and (\ref{ddi}) at $ i=1 $.
With these constraints, the above TDOA/FDOA based location estimation problem in (\ref{esti}) can be reformulated as
\begin{align}\label{obj1}
\mathop{\mathrm{minimize}}\limits_{\mathbf{u}_{2}}\quad & g(\mathbf{u}_{2})=(\mathbf{h}_{2}-\mathbf{G}\mathbf{u}_{2})^{T}\boldsymbol \Psi^{-1}(\mathbf{h}_{2}-\mathbf{G}\mathbf{u}_{2}), \nonumber\\
\mathrm{subject}\ \mathrm{to} \quad& c_{1}(\mathbf{u}_{2})=\mathbf{u}_{2}^{T}\mathbf{C}_{1}\mathbf{u}_{2}+2\mathbf{q}_{1}^{T}\mathbf{u}_{2}-\rho_{1}=0, \nonumber\\
&c_{2}(\mathbf{u}_{2})=\mathbf{u}_{2}^{T}\mathbf{C}_{2}\mathbf{u}_{2}=0,\nonumber\\
&c_{3}(\mathbf{u}_{2})= \mathbf{u}_{2}^{T}\mathbf{C}_{3}\mathbf{u}_{2}=0,
\end{align}
where
\begin{subequations}
\begin{align}
\mathbf{u}_{2}&=\mathbf{u}_{1}-\widetilde{\mathbf{r}}_{1},\\
\widetilde{\mathbf{r}}_{1}&=(\mathbf{s}_{1}^{T},\dot{\mathbf{s}}_{1}^{T},0,0)^{T},\\
\mathbf{h}_{2}&=\mathbf{h}_{1}-\mathbf{G}\widetilde{\mathbf{r}}_{1},\\
\mathbf{q}_{1}&=\mathbf{C}_{1}\widetilde{\mathbf{r}}_{1},\\
\rho_{1}&=1-\widetilde{\mathbf{r}}_{1}^{T}\mathbf{C}_{1}\widetilde{\mathbf{r}}_{1},\\
\mathbf{r}&=[\frac{1}{R_{a}^{2}},\frac{1}{R_{a}^{2}},\frac{1}{R_{b}^{2}}]^{T},\\
\mathbf{C}_{1}&=\mathrm{diag}{[\mathbf{r}^{T},0,0,0,0,0]},\\
\mathbf{C}_{2}&=\mathrm{diag}{[1,1,1,0,0,0,-1,0]},\\
\mathbf{C}_{3}&=\left(\begin{array}{cccc}{\mathbf{0}_{3 \times 3}} & {\mathbf{I}_{3}} & {\mathbf{0}_{3 \times 1}} & {\mathbf{0}_{3 \times 1}} \\ {\mathbf{0}_{3 \times 3}} & {\mathbf{0}_{3 \times 3}} & {\mathbf{0}_{3 \times 1}} & {\mathbf{0}_{3 \times 1}} \\ {\mathbf{0}_{1 \times 3}} & {\mathbf{0}_{1 \times 3}} & {0} & {-1} \\ {\mathbf{0}_{1 \times 3}} & {\mathbf{0}_{1 \times 3}} & {0} & {0}\end{array}\right).
\end{align}
\end{subequations}
\subsection{Globally Optimal Solution}
The optimization problem in (\ref{obj1}) is a quadratic programming with quadratic equality constraints. The quadratic penalty method is commonly used in practice to solve equality-constrained problems because of its simplicity \cite{Jor04Num}. The idea of the quadratic penalty method is to transform the original constrained problem into an equivalent unconstrained one, so that standard algorithms, e.g., Newton's method and the conjugate gradient method, can be used to solve the equivalent unconstrained problem \cite{nonli06Ba,nonli99Di}. In this part, we first adopt the quadratic penalty method to solve this problem and then discuss its optimality.
The penalty function can be defined as follows
\begin{align}\label{Fza}
F(\mathbf{u}_{2};\mu)=g(\mathbf{u}_{2})+\mu\alpha(\mathbf{u}_{2}),
\end{align}
where
\begin{align}\label{alp}
\alpha(\mathbf{u}_{2})=\sum_{i=1}^{3}c_{i}^{2}(\mathbf{u}_{2}).
\end{align}
Note that (\ref{alp}) is the penalty term associated with the constraints of problem (\ref{obj1}). Obviously,
\begin{align}\label{alp1}
\alpha(\mathbf{u}_{2})\left\{ \begin{array}{ll}
=0,\ &\mathbf{u}_{2} \in \mathcal{D},\\
>0,\ &\mathbf{u}_{2} \notin \mathcal{D},
\end{array}\right.
\end{align}
where $\mathcal{D}$ is the feasible set of (\ref{obj1}).
Note that $F(\mathbf{u}_{2};\mu)$ is called the augmented objective function of (\ref{obj1}), where $\mu>0$ is the penalty parameter of the quadratic penalty method. Then, (\ref{obj1}) can be transformed into the following unconstrained optimization problem
\begin{align}\label{uncon}
\mathop{\mathrm{minimize}}\limits_{\mathbf{u}_{2}}\quad &F(\mathbf{u}_{2};\mu).
\end{align}
\begin{prop}\label{equa}
For a given $ \boldsymbol\Psi $ and $\mu_{k}$, suppose that $\mathbf{u}_{2,k}$ is the minimum point of (\ref{uncon}). Then $\mathbf{u}_{2,k}$ is the minimum point of (\ref{obj1}) if and only if $\mathbf{u}_{2,k} \in \mathcal{D}$.
\end{prop}
\begin{IEEEproof}
Please refer to Appendix \ref{appendixa}.
\end{IEEEproof}
\propref{equa} shows that if the global minimizer of $F(\mathbf{u}_{2};\mu)$ belongs to the feasible set $\mathcal{D}$, then it is indeed the solution of (\ref{obj1}).
\begin{prop}\label{conv}
For a given weighting matrix $ \boldsymbol\Psi $, suppose that $\mathbf{u}_{2,k}$ is the global minimizer of $F(\mathbf{u}_{2};\mu_{k})$ defined by (\ref{Fza}) and that $\mu_{k}\to\infty$. Then every limit point $\mathbf{u}_{2}^{*}$ of the sequence $\{\mathbf{u}_{2,k}\}$ is a globally optimal solution to problem (\ref{obj1}).
\end{prop}
\begin{IEEEproof}
Please refer to Appendix \ref{appendixb}.
\end{IEEEproof}
To make the solution of (\ref{uncon}) approach the optimal solution to the original problem, the penalty parameter $ \mu $ is required to be sufficiently large. Therefore, we choose an increasing sequence of penalty parameters $ \{\mu_{k}\} $ to repeatedly solve a sequence of problems in (\ref{uncon}). On this basis, we adopt Newton's method \cite{Jor04Num} to solve the unconstrained optimization problem (\ref{uncon}). The iterative equation of each step is given by
\begin{align}
\mathbf{u}_{2,j+1}&=\mathbf{u}_{2,j}+\beta_{j}\mathbf{p}_{j},\notag\\
\mathbf{p}_{j}&=-\mathbf{G}_{j}^{-1}\nabla F_{j}, \notag\\
F_{j}&=g(\mathbf{u}_{2,j})+\mu\alpha(\mathbf{u}_{2,j}),
\end{align}
where $ \mathbf{u}_{2,j} $ and $ \mathbf{u}_{2,j+1} $ denote the $ j $-th and $ (j+1) $-th iteration estimates, respectively. $\mathbf{G}_{j}$ is a nonsingular and symmetric matrix, and the positive scalar $\beta_{j}$ is the step length for the $ j $-th iteration. To ensure that the line search direction is a descent direction, the condition $\mathbf{p}_{j}^{T}\nabla F_{j}<0$ needs to be satisfied. In Newton's method, $\mathbf{G}_{j}$ is the exact Hessian $\nabla^{2}F_{j}$. However, the Hessian matrix may not be positive definite, in which case the search direction is not guaranteed to be a descent direction. In such cases, we can adopt a modified Hessian by adding a positive diagonal matrix to the original Hessian \cite{Jor04Num}.
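A compact sketch of the overall procedure is given below; for brevity, the inner unconstrained minimization is delegated to a generic quasi-Newton routine (BFGS in \texttt{scipy.optimize.minimize}) rather than the hand-coded modified-Newton step, and all matrices are assumed to be assembled beforehand.
\begin{verbatim}
# Quadratic-penalty sketch: increasing penalty sequence with
# a generic quasi-Newton inner solver (BFGS) instead of the
# modified Newton step; G, h2, Psi, C1, C2, C3, q1, rho1
# are assumed already assembled.
import numpy as np
from scipy.optimize import minimize

def penalty_solve(G, h2, Psi, C1, C2, C3, q1, rho1,
                  mu0=1.0, growth=10.0, n_outer=6):
    W = np.linalg.inv(Psi)

    def g(u):                       # weighted LS objective
        e = h2 - G @ u
        return e @ W @ e

    def alpha(u):                   # sum of squared constraints
        c1 = u @ C1 @ u + 2 * q1 @ u - rho1
        c2 = u @ C2 @ u
        c3 = u @ C3 @ u
        return c1**2 + c2**2 + c3**2

    # unconstrained WLS solution as the starting point
    u = np.linalg.solve(G.T @ W @ G, G.T @ W @ h2)
    mu = mu0
    for _ in range(n_outer):        # increasing penalty mu_k
        res = minimize(lambda x: g(x) + mu * alpha(x), u,
                       method='BFGS')
        u, mu = res.x, mu * growth
    return u
\end{verbatim}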
\subsection{Low Complexity Method}
In the algorithm proposed above, we first adopt the quadratic penalty method to transform problem (\ref{obj1}) into an unconstrained one, and then utilize Newton's method with modification to solve the unconstrained problem in (\ref{uncon}). We show that the proposed algorithm is guaranteed to obtain a globally optimal solution. However, since the penalty parameter $\mu$ is updated repeatedly and the unconstrained problem in (\ref{uncon}) for a given $ \mu $ also needs multiple Newton iterations, this approach tends to have a high computational complexity. It is possible to trade off some degree of optimality for a reduced cost. Hence, in the following, we propose an iterative CWLS method as an alternative algorithm.
The method starts from an initial estimate of $ \mathbf{u}_{2} $. Then, one of the two $ \mathbf{u}_{2} $ factors in each constraint $ c_{i}(\mathbf{u}_{2}) $, $ i=1,2,3 $, is written as the sum of the estimated value $ \mathbf{\hat u}_{2} $ and the estimation error $ \Delta \mathbf{u}_{2}$.
With the estimated value of $ \mathbf{u}_{2} $, we convert the problem in (\ref{obj1}) into an approximate quadratic programming with linear equality constraints, which is verified to have a closed-form solution in \cite{Iterative}. Next, we update the linear equality constraints with the latest estimation of $ \mathbf{u}_{2} $ and solve the approximate quadratic programming iteratively. In the following, we present more detailed descriptions of this method.
An initial estimate of $ \mathbf{u}_{2} $ can be calculated by (\ref{esti}) as
\begin{align}
\mathbf{\hat u}_{2}=(\mathbf{G}^{T}\boldsymbol\Psi^{-1}\mathbf{G})^{-1}\mathbf{G}^{T}\boldsymbol\Psi^{-1}\mathbf{h}_{2}.
\end{align}
Then, the approximate quadratic programming with linear equality constraints based on (\ref{obj1}) can be formulated as
\begin{align}\label{obj111}
\mathop{\mathrm{minimize}}\limits_{\mathbf{u}_{2}}\quad &g(\mathbf{u}_{2})=
(\mathbf{h}_{2}-\mathbf{G}\mathbf{u}_{2})^{T}\boldsymbol \Psi^{-1}(\mathbf{h}_{2}-\mathbf{G}\mathbf{u}_{2}), \notag\\
\mathrm{subject}\ \mathrm{to} \quad& c_{1}(\mathbf{u}_{2})=(\mathbf{\hat u}_{2}^{T}\mathbf{C}_{1}+2\mathbf{q}_{1}^{T})\mathbf{u}_{2}-\rho_{1}=0, \notag\\
& c_{2}(\mathbf{u}_{2})=\mathbf{\hat u}_{2}^{T}\mathbf{C}_{2}\mathbf{u}_{2}=0,\notag\\
& c_{3}(\mathbf{u}_{2})=\mathbf{\hat u}_{2}^{T}\mathbf{C}_{3}\mathbf{u}_{2}=0.
\end{align}
The above problem (\ref{obj111}) has been proved to possess a closed-form solution \cite{Iterative}, which can be expressed as
\begin{align}\label{cfs}
\mathbf{\breve u}_{2}&=\\\nonumber
&(\mathbf{P}_{1}\mathbf{G}^{T}\boldsymbol\Psi^{-1}\mathbf{G}\mathbf{P}_{1})^{\dagger}(\mathbf{G}^{T}\boldsymbol\Psi^{-1}\mathbf{h}_{2}-\mathbf{G}^{T}\boldsymbol\Psi^{-1}\mathbf{G}\mathbf{P}_{2})+\mathbf{P}_{2},
\end{align}
where
\begin{subequations}
\begin{align}
\mathbf{P}_{1}&=\mathbf{I}-\mathbf{A}^{T}(\mathbf{A}\mathbf{A}^{T})^{-1}\mathbf{A},\\
\mathbf{P}_{2}&=\mathbf{A}^{T}(\mathbf{A}\mathbf{A}^{T})^{-1}\boldsymbol\beta_{1},\\
\mathbf{A}&=[(\mathbf{\hat u}_{2}^{T}\mathbf{C}_{1}+2\mathbf{q}_{1}^{T});\mathbf{\hat u}_{2}^{T}\mathbf{C}_{2};\mathbf{\hat u}_{2}^{T}\mathbf{C}_{3}],\\
\boldsymbol\beta_{1}&=[\rho_{1};0;0].
\end{align}
\end{subequations}
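One CWLS iteration, i.e., the closed-form solution (\ref{cfs}) of the linearly constrained problem (\ref{obj111}), can be sketched as follows, with all inputs assumed assembled as above:
\begin{verbatim}
# One CWLS iteration: closed-form solution of the linearly
# constrained problem, given the current estimate u2_hat.
import numpy as np

def cwls_step(G, h2, Psi, C1, C2, C3, q1, rho1, u2_hat):
    W = np.linalg.inv(Psi)
    A = np.vstack([u2_hat @ C1 + 2 * q1,     # linearized c_1
                   u2_hat @ C2,              # linearized c_2
                   u2_hat @ C3])             # linearized c_3
    beta1 = np.array([rho1, 0.0, 0.0])
    AAinv = np.linalg.inv(A @ A.T)
    P1 = np.eye(G.shape[1]) - A.T @ AAinv @ A
    P2 = A.T @ AAinv @ beta1
    M = np.linalg.pinv(P1 @ G.T @ W @ G @ P1)
    return M @ (G.T @ W @ h2 - G.T @ W @ G @ P2) + P2
\end{verbatim}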
Note that the quadratic penalty and CWLS methods both need to update the weighting matrix. Simulations show that updating the weighting matrix once is sufficient to provide an accurate result in practice. The penalty method requires inner iterations to solve the unconstrained problem (\ref{uncon}) with Newton's method and outer iterations to update the penalty parameter $ \mu $. Compared with the penalty method, the CWLS method derives a closed-form solution to the approximate problem (\ref{obj111}) and only requires updating $ \mathbf{u}_{2} $ iteratively by (\ref{cfs}). Hence, the computational complexity can be significantly reduced.
\subsection{Cram\'{e}r-Rao Lower Bound}
The CRLB is the minimum variance that an unbiased parameter estimator can attain \cite{funda93kay}. The constrained CRLB of an unbiased estimator has been given in \cite{Asimplederivation}. Let $ \mathbf{d}=[d_{2,1},d_{3,1},...,d_{M,1}]^{T} $, $ \dot{\mathbf{d}}=[\dot{d}_{2,1},\dot{d}_{3,1},...,\dot{d}_{M,1}]^{T} $, and $ \mathbf{u}=[\mathbf{p}^{T},\dot{\mathbf{p}}^{T}]^{T} $. Combined with the system model of this paper, the constrained CRLB for $ \mathbf{u} $ can be calculated as
\begin{align}\label{CRLB}
\mathrm{CRLB}(\mathbf{u})=\mathbf{J}^{-1}-\mathbf{J}^{-1} \mathbf{F}\left(\mathbf{F}^{T} \mathbf{J}^{-1} \mathbf{F}\right)^{-1} \mathbf{F}^{T} \mathbf{J}^{-1},
\end{align}
where
\begin{align}
\mathbf{F}=[{x},{y},(\frac{R_{a}^{2}}{R_{b}^{2}}) {z},0,0,0]^{T},
\end{align}
\begin{align}
\mathbf{J}=\left[\begin{array}{cc}{\dfrac{\partial \mathbf{d}^{T}}{\partial \mathbf{p}}} & {\dfrac{\partial \dot{\mathbf{d}}^{T}}{\partial \mathbf{p}}}\\
{\dfrac{\partial \mathbf{d}^{T}}{\partial \dot{\mathbf{p}}}} & {\dfrac{\partial \dot{\mathbf{d}}^{T}}{\partial \dot{\mathbf{p}}}}
\end{array}\right]\left[\begin{array}{cc}{\mathbf{Q}_{t}^{-1}} & \mathbf{0} \\ \mathbf{0} & {\mathbf{Q}_{f}^{-1}}\end{array}\right]\left[\begin{array}{cc}{\dfrac{\partial \mathbf{d}}{\partial \mathbf{p}^{T}}} & {\dfrac{\partial {\mathbf{d}}}{\partial \dot{\mathbf{p}}^{T}}} \\ {\dfrac{\partial \dot{\mathbf{d}}}{\partial \mathbf{p}^{T}}} & {\dfrac{\partial \dot{\mathbf{d}}}{\partial \dot{\mathbf{p}}^{T}}} \end{array}\right],
\end{align}
and
\begin{subequations}
\begin{align}
\dfrac{\partial \mathbf{d}}{\partial \mathbf{p}^{T}}&=-\left[\begin{array}{c}{\left(\mathbf{s}_{2}-\mathbf{p}\right)^{T} / d_{2}-\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} / d_{1}} \\ {\left(\mathbf{s}_{3}-\mathbf{p}\right)^{T} / d_{3}-\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} / d_{1}} \\ {\vdots} \\ {\left(\mathbf{s}_{M}-\mathbf{p}\right)^{T} / d_{M}-\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} / d_{1}}\end{array}\right],\\
\dfrac{\partial \mathbf{d}}{\partial \dot{\mathbf{p}}^{T}}&=\mathbf{0}_{(M-1)\times3},\\
\frac{\partial \dot{\mathbf{d}}}{\partial \mathbf{p}^{T}}&=\\\nonumber
&\left(\begin{array}{c}{\frac{\left(\mathbf{s}_{2}-\mathbf{p}\right)^{T} \dot{d}_{2}}{d_{2}^{2}}-\frac{\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} \dot{d}_{1}}{d_{1}^{2}}-\frac{(\dot{\mathbf{s}}_{2}-\dot{\mathbf{p}})^{T}}{d_{2}}+\frac{(\dot{\mathbf{s}}_{1}-\dot{\mathbf{p}})^{T}}{d_{1}}} \\ {\frac{\left(\mathbf{s}_{3}-\mathbf{p}\right)^{T} \dot{d}_{3}}{d_{3}^{2}}-\frac{\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} \dot{d}_{1}}{d_{1}^{2}}-\frac{(\dot{\mathbf{s}}_{3}-\dot{\mathbf{p}})^{T}}{d_{3}}+\frac{(\dot{\mathbf{s}}_{1}-\dot{\mathbf{p}})^{T}}{d_{1}}} \\ {\vdots} \\ {\frac{\left(\mathbf{s}_{M}-\mathbf{p}\right)^{T} \dot{d}_{M}}{d_{M}^{2}}-\frac{\left(\mathbf{s}_{1}-\mathbf{p}\right)^{T} \dot{d}_{1}}{d_{1}^{2}}-\frac{(\dot{\mathbf{s}}_{M}-\dot{\mathbf{p}})^{T}}{d_{M}}+\frac{(\dot{\mathbf{s}}_{1}-\dot{\mathbf{p}})^{T}}{d_{1}}}\end{array}\right),\\
\dfrac{\partial \dot{\mathbf{d}}}{\partial \dot{\mathbf{p}}^{T}}&=\frac{\partial {\mathbf{d}}}{\partial \mathbf{p}^{T}}.
\end{align}
\end{subequations}
The CRLB derived from (\ref{CRLB}) will be used in the following simulations as a benchmark for the mean-square error (MSE) of our proposed methods.
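A numerical sketch of the constrained CRLB computation in (\ref{CRLB}) is given below; the Jacobian blocks follow the partial derivatives above, and all inputs are assumed available.
\begin{verbatim}
# Sketch of the constrained CRLB: Jacobian blocks from the
# partial derivatives above, then the projection formula.
import numpy as np

def constrained_crlb(S, S_dot, p, p_dot, Qt, Qf, Ra, Rb):
    d = np.linalg.norm(S - p, axis=1)
    d_dot = np.einsum('ij,ij->i', S - p, S_dot - p_dot) / d
    U = (S - p) / d[:, None]                 # unit vectors
    Ud = ((S - p) * d_dot[:, None] / d[:, None]**2
          - (S_dot - p_dot) / d[:, None])
    dd_dp = -(U[1:] - U[0])                  # partial d / p
    ddd_dp = Ud[1:] - Ud[0]                  # partial ddot / p
    Z = np.zeros_like(dd_dp)
    Jac = np.block([[dd_dp, Z], [ddd_dp, dd_dp]])
    Winv = np.block([[np.linalg.inv(Qt), np.zeros_like(Qt)],
                     [np.zeros_like(Qf), np.linalg.inv(Qf)]])
    J = Jac.T @ Winv @ Jac
    F = np.array([p[0], p[1], (Ra**2 / Rb**2) * p[2],
                  0.0, 0.0, 0.0])[:, None]
    Ji = np.linalg.inv(J)
    return Ji - Ji @ F @ np.linalg.inv(F.T @ Ji @ F) @ F.T @ Ji
\end{verbatim}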
\section{Ephemeris Estimation Algorithms with Downlink Synchronization Signals}\label{ephemeris_esti}
For \textit{Scenario 2}, we aim to perform the ephemeris estimation with the aid of downlink synchronization signals in this section. To facilitate the formulation of the estimation problem, we first introduce two coordinate systems extensively used in satellite-to-ground links.
\subsection{Coordinates Used in the Satellite-to-Ground Links}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./opc.pdf}
\caption{Orbital plane coordinate system.}
\label{OPC}
\end{figure}
\figref{OPC} illustrates the orbital plane coordinate (OPC) system, where the orbital plane forms the X-Y reference plane, the X-axis points towards the perigee, and the Z-axis completes the right-handed system \cite{SatelliteOrbits}. The satellite location in OPC when launching the $ i $-th SSB can then be written as
\begin{align}\label{opc}
(\mathbf{s}_{i})_{opc}=[r_{i}\cos\alpha_{i},r_{i}\sin\alpha_{i},0]^{T},
\end{align}
where $ r_{i} $ denotes the distance between the satellite and the center of the earth, and $ \alpha_{i} $ is the angle between the X-axis and the position vector of the satellite as shown in \figref{OPC}. In the following, we focus on circular orbits and denote the orbital radius by $ r $.
Another common coordinate system for describing satellite orbits is the Earth Centered Inertial (ECI) coordinate system, with its origin at the center of the earth, X-axis aligned with the vernal equinox, and Z-axis directed to the north pole. The transformation between any two of the coordinate systems OPC, ECI, and ECEF can be achieved by rotations of the coordinate axes. Define the elementary matrices
\begin{align}
\mathbf{R_{x}}(\phi)&=\left(\begin{array}{ccc} {1} & {0} & {0}\\ {0} & {\cos\phi} & {\sin\phi}\\ {0} & {-\sin\phi} & {\cos\phi} \end{array}\right),\\
\mathbf{R_{z}}(\phi)&=\left(\begin{array}{ccc} {\cos\phi} & {\sin\phi} & {0}\\ {-\sin\phi} & {\cos\phi} & {0}\\ {0} & {0} & {1} \end{array}\right),
\end{align}
to describe rotations around the X and Z axes by an angle of $ \phi $, respectively. Then, the satellite location in ECI when launching the $ i $-th SSB can be obtained by \cite{SatelliteOrbits}
\begin{align}\label{sit}
(\mathbf{s}_{i})_{eci}=\mathbf{R_{z}}(-\Omega)\mathbf{R_{x}}(-\vartheta)\mathbf{R_{z}}(-\varphi)(\mathbf{s}_{i})_{opc},
\end{align}
or
\begin{align}\label{sit1}
(\mathbf{s}_{i})_{eci}=\mathbf{R_{z}}(-\theta_{g_{i}})\mathbf{s}_{i},
\end{align}
where $ \Omega $, $ \vartheta $, and $ \varphi $ denote the right ascension of ascending node, inclination, and argument of perigee of the orbit, respectively. $ \theta_{g_{i}} $ denotes the Greenwich Sidereal Time (GST) when the satellite launches the $ i $-th SSB.
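For reference, the sketch below reproduces $\mathbf{R_{x}}$, $\mathbf{R_{z}}$ and the OPC-to-ECI transformation (\ref{sit}); the orbital angles and radius are illustrative.
\begin{verbatim}
# Axis-rotation matrices R_x, R_z and the OPC -> ECI
# transformation; angles and radius are illustrative.
import numpy as np

def R_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

Omega, inc, argp = np.radians([0.0, 85.0, 0.0])
r, alpha_i = 7.448e6, np.radians(30.0)
s_opc = np.array([r * np.cos(alpha_i), r * np.sin(alpha_i), 0])
s_eci = R_z(-Omega) @ R_x(-inc) @ R_z(-argp) @ s_opc
\end{verbatim}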
With the relationships in (\ref{sit}) and (\ref{sit1}), we next investigate how to relate the TDOAs and FDOAs to the unknown positions and velocities of the satellite.
\subsection{Problem Formulation}
As the GNSS service is available for UEs, the UE location and velocity are assumed known. For notational simplicity in the following, we transform the satellite and UE coordinates from ECEF to ECI. The UE location in ECI is denoted by $ (\mathbf{p}_{i})_{eci} $, given by
\begin{align}
(\mathbf{p}_{i})_{eci}=\mathbf{R_{z}}(-\theta_{g_{i}})\mathbf{p}, \quad i=1,2,...,M.
\end{align}
Accordingly, the UE velocity in ECI, obtained from $ \dot{\mathbf{p}} $, is denoted as $ (\dot{\mathbf{p}}_{i})_{eci} $.
For a definite satellite orbit, the satellite locations corresponding to different SSBs satisfy the following equation
\begin{align}\label{si}
(\mathbf{s}_{i})_{eci}=\mathbf{A}_{i} (\mathbf{s}_{1})_{eci},\quad i=1,2,...,M,
\end{align}
where $ \mathbf{A}_{i} $ is a transformation matrix related to the time interval of adjacent SSBs and the orbital parameters, e.g., $ \Omega $, $ \vartheta $, and $ \varphi $. The derivation of $ \mathbf{A}_{i} $ is given in \appref{appendixc}.
For a satellite in a circular orbit, the satellite velocity in OPC at the transmit instant of the $ i $-th SSB is given by
\begin{align}\label{vopc}
(\dot{\mathbf{s}}_{i})_{opc}=[-v\sin\alpha_{i},v\cos\alpha_{i},0]^{T},
\end{align}
where $ v=\sqrt{\dfrac{\mu'}{r}} $ and $ \mu' $ is the geocentric gravitational constant. By employing the same derivation procedure as in \appref{appendixc}, the satellite velocities in ECI can also be described by the transformation matrix $ \mathbf{A}_{i} $, i.e.,
\begin{align}\label{dsi}
(\dot{\mathbf{s}}_{i})_{eci}=\mathbf{A}_{i}(\dot{\mathbf{s}}_{1})_{eci}.
\end{align}
In addition, from (\ref{opc}) and (\ref{vopc}), the satellite velocities can be expressed in terms of the satellite positions and the orbital parameters, i.e.,
\begin{align}\label{sidsi}
(\dot{\mathbf{s}}_{i})_{opc}=\mathbf{C}_{4}(\mathbf{s}_{i})_{opc},
\end{align}
where \begin{align}
\mathbf{C}_{4}=\left(\begin{array}{ccc}{0} & {-v/r} & {0} \\ {v/r} & {0} & {0} \\ {0} & {0} & {0}\end{array}\right).
\end{align}
Equation (\ref{sidsi}) can be further rewritten as
\begin{align}\label{dsieci}
(\dot{\mathbf{s}}_{i})_{eci}={\boldsymbol\Phi}(\mathbf{s}_{i})_{eci},
\end{align}
where
\begin{align}
{\boldsymbol\Phi}&= (\mathbf{E}^{opc}_{eci})^{-1}\mathbf{C}_{4}\mathbf{E}^{opc}_{eci},\\
\mathbf{E}^{opc}_{eci}&=\mathbf{R_{z}}(\varphi)\mathbf{R_{x}}(\vartheta)\mathbf{R_{z}}(\Omega).
\end{align}
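The matrix $\boldsymbol\Phi$ can be assembled directly, as in the self-contained sketch below; the rotation helpers duplicate the previous sketch, the angles and orbital radius are illustrative, and $\mu'$ takes its standard value.
\begin{verbatim}
# Building Phi = E^{-1} C4 E, which maps the satellite ECI
# position to its ECI velocity for a circular orbit. The
# rotation helpers duplicate the previous sketch; angles
# and radius are illustrative.
import numpy as np

def R_x(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R_z(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

Omega, inc, argp = np.radians([0.0, 85.0, 0.0])
r, mu_p = 7.448e6, 3.986004418e14   # radius, Earth's GM
v = np.sqrt(mu_p / r)
C4 = np.array([[0, -v / r, 0], [v / r, 0, 0], [0, 0, 0]])
E = R_z(argp) @ R_x(inc) @ R_z(Omega)   # ECI -> OPC
Phi = np.linalg.inv(E) @ C4 @ E
\end{verbatim}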
With (\ref{si}), (\ref{dsi}), and (\ref{dsieci}), $ (\mathbf{s}_{i})_{eci} $ and $ (\dot{\mathbf{s}}_{i})_{eci} $ can be represented by $ (\mathbf{s}_{1})_{eci} $. Equations (\ref{ri11}) and (\ref{td}) can then be written as
\begin{align}\label{dii}
d_{i,1}^{2}+2d_{i,1}d_{1}&=-2\mathbf{w}_{i}(\mathbf{s}_{1})_{eci},
\end{align}
and
\begin{equation}
\begin{aligned}\label{ddii}
&\dot{d}_{i,1} d_{i,1}+\dot{d}_{i,1}d_{1}+d_{i,1}\dot{d}_{1}=-\mathbf{v}_{i}(\mathbf{s}_{1})_{eci},
\end{aligned}
\end{equation}
where
\begin{subequations}
\begin{align}
\mathbf{w}_{i}&=(\mathbf{p}_{i})_{eci}^{T}\mathbf{A}_{i}-(\mathbf{p}_{1})_{eci}^{T},\\
\mathbf{v}_{i}&=\mathbf{w}_{i}{\boldsymbol\Phi}+(\dot{\mathbf{p}}_{i})_{eci}^{T}\mathbf{A}_{i}-(\dot{\mathbf{p}}_{1})_{eci}^{T}.
\end{align}
\end{subequations}
Note that (\ref{dii}) and (\ref{ddii}) constitute a set of linear equations with unknowns $(\mathbf{s}_{1})_{eci} $, $d_{1}$, and $\dot{d}_{1}$. Define $ \mathbf{z}_{1}=[(\mathbf{s}_{1})_{eci}^{T},d_{1},\dot{d}_{1}]^{T} $. The error vector $ \boldsymbol{\epsilon} $ in (\ref{h1}) can then be rewritten as
\begin{align}\label{epsilon}
\boldsymbol{\epsilon}=\mathbf{b}_{1}-\mathbf{G}_{1}\mathbf{z}_{1},
\end{align}
where
\begin{equation}
\mathbf{b}_{1}=\left[\begin{array}{c}{\widetilde d_{2,1}^{2}} \\ {\widetilde d_{3,1}^{2}} \\ {\vdots} \\ {\widetilde d_{M,1}^{2}} \\ {2 \widetilde d_{2, 1} \widetilde{\dot{d}}_{2, 1}} \\ {2\widetilde d_{3, 1} \widetilde{\dot{d}}_{3, 1}} \\ {\vdots} \\ {2\widetilde d_{M, 1} \widetilde{\dot{d}}_{M, 1}}\end{array}\right],
\end{equation}
\begin{equation}
\begin{aligned}
\mathbf{G}_{1}=-2\cdot\left[\begin{array}{ccc}{\mathbf{w}_{2}} & {\tilde{d}_{2,1}} & {0} \\ {\mathbf{w}_{3}} & {\tilde{d}_{3,1}} & {0} \\ {\vdots} & {\vdots} & {\vdots} \\ {\mathbf{w}_{M}} & {\tilde{d}_{M, 1}} & {0} \\ {\mathbf{v}}_{2} & {\tilde{\dot{d}}_{2, 1}} & {\tilde{d}_{2, 1}} \\ {\mathbf{v}}_{3} & {\tilde{\dot{d}}_{3, 1}} & {\tilde{d}_{3, 1}} \\ {\vdots} & {\vdots} & {\vdots} \\ {\mathbf{v}}_{M} & {\tilde{\dot{d}}_{M, 1}} & {\tilde{d}_{M,1}}\end{array}\right].
\end{aligned}
\end{equation}
The elements in $ \mathbf{z}_{1} $ are related to one another. $ d_{1} $ and $ \dot{d}_{1} $ in $ \mathbf{z}_{1} $ are related to $ (\mathbf{s}_{1})_{eci} $ through the nonlinear relationships (\ref{di}) and (\ref{ddi}) at $ i=1 $. In addition, as the satellite moves in a definite circular orbit, (\ref{opc}) is satisfied and its equivalent expression in ECI can be written as
\begin{align}
(\mathbf{s}_{1})_{eci}^{T}(\mathbf{s}_{1})_{eci}=r^{2},\quad\mathbf{g}^{T}(\mathbf{s}_{1})_{eci}=0,
\end{align}
where $ \mathbf{g}^{T}=[\mathbf{E}^{opc}_{eci}]_{3,:} $.
Like the location estimation problem in (\ref{obj1}), with the above constraints, the ephemeris estimation problem can be written as
\begin{align}\label{obj2}
\mathop{\mathrm{minimize}}\limits_{\mathbf{z}_{2}}\quad & (\mathbf{b}_{2}-\mathbf{G}_{1}\mathbf{z}_{2})^{T}\boldsymbol \Psi^{-1}(\mathbf{b}_{2}-\mathbf{G}_{1}\mathbf{z}_{2}), \nonumber\\
\mathrm{subject}\ \mathrm{to} \quad& \mathbf{z}_{2}^{T}\mathbf{C}_{5}\mathbf{z}_{2}+2\mathbf{q}_{2}^{T}\mathbf{z}_{2}-\rho_{2}=0, \nonumber\\
& \mathbf{q}_{3}^{T}\mathbf{z}_{2}-\rho_{3}=0, \nonumber\\
&\mathbf{z}_{2}^{T}\mathbf{C}_{6}\mathbf{z}_{2}=0,\nonumber\\
&\mathbf{z}_{2}^{T}\mathbf{C}_{7}\mathbf{z}_{2}+\mathbf{q}_{4}^{T}\mathbf{z}_{2}=0,
\end{align}
where
\begin{subequations}
\begin{align}
\mathbf{z}_{2}&=\mathbf{z}_{1}-\widetilde{\mathbf{r}}_{2},\\
\widetilde{\mathbf{r}}_{2}&=((\mathbf{p}_{1})_{eci}^{T},0,0)^{T},\\
\widetilde{\mathbf{r}}_{3}&=((\dot{\mathbf{p}}_{1})_{eci}^{T},0,0)^{T},\\
\mathbf{b}_{2}&=\mathbf{b}_{1}-\mathbf{G}_{1}\widetilde{\mathbf{r}}_{2},\\
\mathbf{C}_{5}&=\mathrm{diag}{[1,1,1,0,0]},\\
\mathbf{q}_{2}&=\mathbf{C}_{5}\widetilde{\mathbf{r}}_{2},\\
\mathbf{q}_{3}&=[\mathbf{g}^{T},0,0]^{T},\\
\mathbf{q}_{4}&=\mathbf{C}_{7}\widetilde{\mathbf{r}}_{2}+\widetilde{\mathbf{r}}_{3},\\
\rho_{2}&=r^{2}-(\mathbf{p}_{1})^{T}_{eci}(\mathbf{p}_{1})_{eci},\\
\rho_{3}&=-\mathbf{g}^{T}(\mathbf{p}_{1})_{eci},\\
\mathbf{C}_{6}&=\mathrm{diag}{[1,1,1,-1,0]},\\
\mathbf{C}_{7}&=\left(\begin{array}{ccc}{{\boldsymbol\Phi}} & {\mathbf{0}_{3 \times 1}} & {\mathbf{0}_{3 \times 1}} \\ {\mathbf{0}_{1 \times 3}} & {0} & {-1} \\ {\mathbf{0}_{1 \times 3}} & {0} & {0}\end{array}\right).
\end{align}
\end{subequations}
\subsection{The Ephemeris Estimation Algorithm}
The ephemeris estimation problem in (\ref{obj2}) is a quadratic programming with equality constraints. As the problems in (\ref{obj2}) and (\ref{obj1}) are essentially the same kind of optimization problem, we employ the CWLS method to solve the problem in (\ref{obj2}). The procedure of the ephemeris estimation algorithm with the downlink synchronization signals is given as follows; a compact code sketch follows the list.
\begin{itemize}
\item[1)] Initialize $ k=0 $, $ \mathbf{B}^{0}=\mathbf{I} $, $ \dot{\mathbf{B}}^{0}=\mathbf{0} $, $ \boldsymbol{\Psi}^{0} $ by (\ref{psi}), and $ \mathbf{\widehat z}_{2}^{0}=(\mathbf{G}_{1}^{T}(\boldsymbol\Psi^{0})^{-1}\mathbf{G}_{1})^{-1}\mathbf{G}_{1}^{T}(\boldsymbol\Psi^{0})^{-1}\mathbf{b}_{2} $.
\item[2)] Set $ k=k+1 $; the approximate quadratic programming with linear equality constraints based on (\ref{obj2}) can be written as
\begin{align}\label{obj3}
\mathop{\mathrm{minimize}}\limits_{\mathbf{z}_{2}}\quad & (\mathbf{b}_{2}-\mathbf{G}_{1}\mathbf{z}_{2})^{T}(\boldsymbol \Psi^{k-1})^{-1}(\mathbf{b}_{2}-\mathbf{G}_{1}\mathbf{z}_{2}), \notag\\
\mathrm{subject}\ \mathrm{to} \quad& ((\mathbf{\widehat z}_{2}^{k-1})^{T}\mathbf{C}_{5}+2\mathbf{q}_{2}^{T})\mathbf{z}_{2}=\rho_{2},\notag\\
& \mathbf{q}_{3}^{T}\mathbf{z}_{2}=\rho_{3},\notag\\
& (\mathbf{\widehat z}_{2}^{k-1})^{T}\mathbf{C}_{6}\mathbf{z}_{2}=0,\notag\\
& ((\mathbf{\widehat z}_{2}^{k-1})^{T}\mathbf{C}_{7}+\mathbf{q}_{4}^{T})\mathbf{z}_{2}=0.
\end{align}
\item[3)] Solve the problem (\ref{obj3}) by:
\begin{align}
\mathbf{\widehat z}_{2}^{k}=(\mathbf{P}_{3}^{k-1}\mathbf{G}_{1}^{T}(\boldsymbol\Psi^{k-1})^{-1}\mathbf{G}_{1}\mathbf{P}_{3}^{k-1})^{\dagger}(\mathbf{G}_{1}^{T}(\boldsymbol\Psi^{k-1})^{-1}\mathbf{b}_{2}\nonumber\\
-\mathbf{G}_{1}^{T}(\boldsymbol\Psi^{k-1})^{-1}\mathbf{G}_{1}\mathbf{P}_{4}^{k-1})+\mathbf{P}_{4}^{k-1},
\end{align}
where
\begin{subequations}
\begin{align}
\mathbf{P}_{3}^{k-1}=\mathbf{I}-(\mathbf{Y}^{k-1})^{T}(\mathbf{Y}^{k-1}(\mathbf{Y}^{k-1})^{T})^{-1}\mathbf{Y}^{k-1},
\end{align}
\begin{align}
\mathbf{P}_{4}^{k-1}=(\mathbf{Y}^{k-1})^{T}(\mathbf{Y}^{k-1}(\mathbf{Y}^{k-1})^{T})^{-1}\boldsymbol\beta_{2},
\end{align}
\begin{align}
\mathbf{Y}^{k-1}=&[((\mathbf{\widehat z}_{2}^{k-1})^{T}\mathbf{C}_{5}+2\mathbf{q}_{2}^{T});\mathbf{q}_{3}^{T};\notag\\&(\mathbf{\widehat z}_{2}^{k-1})^{T}\mathbf{C}_{6};((\mathbf{\widehat z}_{2}^{k-1})\mathbf{C}_{7}+\mathbf{q}_{4}^{T})],
\end{align}
\begin{align}
\boldsymbol\beta_{2}=[\rho_{2};\rho_{3};0;0].
\end{align}
\end{subequations}
\item[4)] Estimate the location of the satellite in the $ k $-th iteration:
\begin{align}
(\mathbf{\widehat s}_{1})_{eci}^{k}=\mathbf{\widehat z}_{2}^{k}(1:3)+(\mathbf{p}_{1})_{eci}.
\end{align}
\item[5)] Update $ \mathbf{B}^{k} $, $ \dot{\mathbf{B}}^{k} $, and $ \boldsymbol{\Psi}^{k} $ by (\ref{Bdiag}), (\ref{dBdiag}), and (\ref{psi}), respectively, and go to step 2).
\end{itemize}
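A compact sketch of the loop above is given below; \texttt{assemble\_Psi} and \texttt{assemble\_Y} are hypothetical helpers standing in for the weighting-matrix update (\ref{psi}) and the linearized constraint matrix $\mathbf{Y}^{k-1}$, respectively.
\begin{verbatim}
# Skeleton of the iterative CWLS ephemeris estimator,
# steps 1)-5). assemble_Psi and assemble_Y are hypothetical
# helpers for the weighting and constraint matrices.
import numpy as np

def ephemeris_cwls(G1, b2, assemble_Psi, assemble_Y,
                   beta2, p1_eci, n_iter=2):
    Psi = assemble_Psi(None)        # step 1): B = I, Bdot = 0
    W = np.linalg.inv(Psi)
    z2 = np.linalg.solve(G1.T @ W @ G1, G1.T @ W @ b2)
    for _ in range(n_iter):         # steps 2) and 3)
        Y = assemble_Y(z2)          # linearized constraints
        YYinv = np.linalg.inv(Y @ Y.T)
        P3 = np.eye(G1.shape[1]) - Y.T @ YYinv @ Y
        P4 = Y.T @ YYinv @ beta2
        z2 = (np.linalg.pinv(P3 @ G1.T @ W @ G1 @ P3)
              @ (G1.T @ W @ b2 - G1.T @ W @ G1 @ P4) + P4)
        s1_eci = z2[:3] + p1_eci    # step 4)
        Psi = assemble_Psi(s1_eci)  # step 5): update weights
        W = np.linalg.inv(Psi)
    return s1_eci
\end{verbatim}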
The derivation of the constrained CRLB for the satellite location $ \mathbf{s}_{1} $ follows the same procedure as that in Section \ref{location_esti}.
\section{Timing Advance Estimation for Multi-Satellite Systems}\label{multi-satellite}
The solutions in the above sections apply to TA estimation in single-satellite networks. However, with the rapid growth in the number and types of satellites, a ground UE may receive downlink synchronization signals from multiple satellites at the same time \cite{zhao19multi}. Here, we extend the problem of TA estimation to the multi-satellite case in \textit{Scenario 1}.
Consider that the UE receives downlink synchronization signals from $ G $ satellites located at $ \mathbf{s}_{i,g}=[x_{i,g},y_{i,g},z_{i,g}]^{T} $ and moving with velocities $ \dot{\mathbf{s}}_{i,g}=[\dot{x}_{i,g},\dot{y}_{i,g},\dot{z}_{i,g}]^{T} $ when launching the $ i $-th SSB, $ i=1,2,...,M $, $ g=1,2,...,G $. Taking the signal transmitted from satellite 1 corresponding to the first downlink SSB as the reference signal, the TDOA $ \widetilde t_{m,1} $ and FDOA measurements $ \widetilde f_{m,1} $, $ m=2,3,...,GM $, can be represented by (\ref{equ1_1}) and (\ref{equ1_2}), respectively. Then, we adopt the location estimation algorithms in Section \ref{location_esti} to estimate the propagation delay between the UE and the satellites, so that the TA can be calculated at the UE side.
Multi-satellite systems have several characteristics, e.g., a more extensive coverage area, more complex connection relationships, and dynamic geometry changes. \figref{geometry1} and \figref{geometry2} illustrate two different arrangements of satellites. In \figref{geometry2}, the satellites are placed almost along a single arc, which limits the length of the visibility window of the satellite links and thus results in poor Geometric Dilution of Precision (GDOP), while the satellites in \figref{geometry1} are distributed at well-spaced positions with a better geometry \cite{perf20zohair,GDOP09sh}. We compare and analyze the performance of our proposed methods for TA estimation under these typical situations in the following simulations.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./geometry1_c.pdf}
\caption{Geometry of the UE in relationship to the satellites with good GDOP.}
\label{geometry1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./geometry2_c.pdf}
\caption{Geometry of the UE in relationship to the satellites with poor GDOP.}
\label{geometry2}
\end{figure}
\section{Numerical Results}\label{numerical_resu}
The numerical results are provided to evaluate the performance of our proposed CWLS method for estimating the TA in 5G integrated LEO SatCom. In this section, the TDOA and FDOA measurements are generated by applying the time-and-frequency synchronization algorithm in \cite{Near}. Due to the downlink synchronization requirements defined for NTN UEs in \cite{3gpp.38.811}, the downlink synchronization signal-to-noise ratio (SNR) is set to $-6$ dB in this simulation. A multipath fading channel with a system bandwidth of 20 MHz and a sampling frequency of 30.72 MHz is adopted. In addition, the CP duration is assumed to be 0.67 ms, which corresponds to the standardized value for a subcarrier spacing of 15 kHz, and the delay spread is set to 250 ns, which is stated to cover 90\% of the cases \cite{3gpp.38.811}.
The positioning accuracy is evaluated in terms of the root MSE (RMSE). For \textit{Scenario 1} and \textit{Scenario 2}, the RMSEs are defined as
\begin{align}
\mathrm{RMSE}_{1}=\sqrt{\frac{\sum_{l=1}^{L}\lVert\hat{\mathbf{p}}^{l}-\mathbf{p}\rVert^{2}}{L}},
\end{align}
\begin{align}
\mathrm{RMSE}_{2}=\sqrt{\frac{\sum_{l=1}^{L}\lVert\hat{\mathbf{s}}^{l}_{1}-\mathbf{s}_{1}\rVert^{2}}{L}},
\end{align}
respectively, where $ \hat{\mathbf{p}}^{l} $ denotes the estimate of UE position of the $ l $-th run, $ \hat{\mathbf{s}}^{l}_{1} $ denotes the estimate of satellite position of the $ l $-th run at the transmit instant of the first SSB, and $ L=2000 $ is the number of independent runs.
The TA estimation error is defined as the absolute difference between the actual satellite-UE distance and its estimate. Hence, the TA estimation error for the single-satellite scenario is given by
\begin{align}
\Delta d=\lvert\hat d_{1}-d_{1} \rvert,
\end{align}
where $ \hat d_{1} $ denotes the estimate of the distance between the satellite and the UE corresponding to the first downlink SSB. For the multi-satellite scenario, satellite 1 is taken as the reference, and the distance estimate is calculated between satellite 1 and the UE.
\subsection{Single-Satellite Systems}
\begin{table}
\footnotesize
\caption{Simulation Setup Parameters}\label{tb:sim_cor_par}
\centering
\ra{1.3}
\begin{tabular}{LcR}
\toprule
Parameter & & Value\\
\midrule
\rowcolor{lightblue}
Orbital altitude && 1070 km \\
Eccentricity of the orbit && 0\\
\rowcolor{lightblue}
Inclination of the orbit && $85^{\circ}$ \\
Argument of perigee of the orbit && $ 0^{\circ} $ \\
\rowcolor{lightblue}
Right ascension of ascending node of the orbit && $ 0^{\circ} $ \\
Carrier frequency && 2.6 GHz \\
\rowcolor{lightblue}
Half viewing angle of the satellite && $57^{\circ}$ \\
Minimum elevation angle of the UE && $20^{\circ}$ \\
\bottomrule
\end{tabular}
\end{table}
In this part, we analyze the performance of our proposed methods in single-satellite systems.
The major simulation setup parameters for single-satellite systems are listed in Table \ref{tb:sim_cor_par}. We first select three representative UE locations in the satellite beam coverage area, namely the sub-satellite point and two positions (\textit{Pos1} and \textit{Pos2}) at the coverage edge with lower elevation angles. \textit{Pos1} is in the direction of the sub-satellite trajectory and \textit{Pos2} is in the direction perpendicular to this trajectory. The detailed locations of these ground terminals are given in Table \ref{tb:sim_cor_par_1}. \figref{pos3_6_8} illustrates the distribution of these UEs.
\begin{table}
\footnotesize
\caption{Location of the Ground Terminals}\label{tb:sim_cor_par_1}
\centering
\ra{1.3}
\begin{tabular}{LcRcR}
\toprule
Ground terminal & & Location & & Elevation angle\\
\midrule
\rowcolor{lightblue}
Sub-satellite point && ($ 6^{\circ}$N\ ,\ $ 0^{\circ}$E) && $90^{\circ}$\\
Pos1 && ($ 20^{\circ}$N\ ,\ $ 0^{\circ}$E) && $22^{\circ}$\\
\rowcolor{lightblue}
Pos2 && ($ 6^{\circ}$N\ ,\ $ 15^{\circ}$E) && $22^{\circ}$\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./pos3_6_8-eps-converted-to.pdf}
\caption{The distribution of three representative UE locations.}
\label{pos3_6_8}
\end{figure}
We first study the shortest possible timing-window that guarantees the required TA estimation performance.
Fixing the time interval between two adjacent SSBs at 20 ms, Fig. \ref{fig1_1117} and \figref{2s3p} show the cumulative probability distribution of the TA estimation error of different UEs in \textit{Scenario 1} and \textit{Scenario 2}, respectively. We can observe that, for a given SSB interval of 20 ms, a timing-window of 12 s can guarantee that the TA estimation offsets of all uplink frames are no more than 14 km in \textit{Scenario 1}, and the length of the timing-window can be reduced to 2 s in \textit{Scenario 2}. Considering the CP type and the delay spread value, the timing-window can ensure that the TA estimation offsets of all uplink frames fall within one CP. As the satellite location to be estimated in \textit{Scenario 2} is constrained by a definite orbit, while the possible estimates of the UE location in \textit{Scenario 1} follow the spherical equation of the earth, the performance of the TA estimation algorithm is better in \textit{Scenario 2}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig1-0205-eps-converted-to.pdf}
\caption{The cumulative probability distribution of TA estimation error in \textit{Scenario 1} (Timing-window: 12 s).}
\label{fig1_1117}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig2-0220-eps-converted-to.pdf}
\caption{The cumulative probability distribution of TA estimation error in \textit{Scenario 2} (Timing-window: 2 s).}
\label{2s3p}
\end{figure}
\begin{table}
\footnotesize
\caption{Average Runtime (in Sec.) for Sub-satellite Point in Scenario 1 (Timing-window: 10 s; SSB Interval: 20 ms) }\label{tb:sim_cor_par_2}
\centering
\ra{1.3}
\begin{tabular}{LcR}
\toprule
Method & & Average time\\
\midrule
\rowcolor{lightblue}
Quadratic penalty method && 53.10 \\
Iterative CWLS method && 6.24 \\
\bottomrule
\end{tabular}
\end{table}
\tabref{tb:sim_cor_par_2} compares the runtimes of the quadratic penalty and iterative CWLS methods. In this simulation, we set the timing-window to 10 s and the SSB interval to 20 ms. The simulation was conducted in MATLAB on a desktop computer with an Intel i7-8700 processor and 16 GB of memory. We can observe that the average computation time of the quadratic penalty method is significantly larger than that of the iterative CWLS method.
Fig. \ref{fig2_1117} and \figref{fig2_0522} compare the MSE of our proposed CWLS method and the quadratic penalty algorithm with the constrained CRLB in \textit{Scenario 1} and \textit{Scenario 2}, respectively. We observe that the proposed CWLS method approaches the CRLB effectively, and that its performance shows little degradation compared with the quadratic penalty algorithm.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig-0122-1-eps-converted-to.pdf}
\caption{Comparison of positioning accuracy with CRLB in \textit{Scenario 1} (Terminal: \textit{Pos2}; SSB interval: 20 ms).}
\label{fig2_1117}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig-0122-2-eps-converted-to.pdf}
\caption{Comparison of positioning accuracy with CRLB in \textit{Scenario 2} (Terminal: Sub-satellite point; SSB interval: 20 ms).}
\label{fig2_0522}
\end{figure}
In addition, in Sections \uppercase\expandafter{\romannumeral2} and \uppercase\expandafter{\romannumeral3}, the approximations $ \mathbf{B}\approx2d^{0}\mathbf{I} $ and $ \dot{\mathbf{B}}\approx\mathbf{0} $, as well as their iteratively updated values, are applied to calculate the weighting matrix $ \boldsymbol{\Psi} $. To verify the accuracy of these approximations, we compare the performance of our proposed method with the weighting matrix taking the exact, approximate without update, and iteratively updated values. Fig. \ref{fig1} and Fig. \ref{fig2} show the comparisons of the cumulative probability distribution of the TA estimation error in \textit{Scenario 1} and \textit{Scenario 2}, respectively. We can observe that in both scenarios, our proposed method with the weighting matrix taking the exact and iteratively updated values has almost the same performance. As the timing-window in \textit{Scenario 1} is larger than that in \textit{Scenario 2}, the iteratively updated expression provides slightly better TA estimation results than the approximated values in \textit{Scenario 1}, but no significant improvement is observed in \textit{Scenario 2}. Hence, for a short random access procedure, the approximated expressions without update are sufficient, while the iteratively updated expressions are preferred for longer initial access times.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig-0120-eps-converted-to.pdf}
\caption{The cumulative probability distribution of TA estimation error in \textit{Scenario 1} with the weighting matrix taking the exact, approximate without the update, and iteratively updated values (Timing-window: 12 s; SSB interval: 20 ms). }
\label{fig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig-s1-0120-eps-converted-to.pdf}
\caption{The cumulative probability distribution of TA estimation error \textit{in Scenario 2} with the weighting matrix taking the exact, approximate without the update, and iteratively updated values (Timing-window: 2 s; SSB interval: 20 ms).}
\label{fig2}
\end{figure}
\subsection{Multi-Satellite Systems}
The performance of our proposed CWLS method for multi-satellite systems in \textit{Scenario 1} is discussed in this part. The number of satellites is set to four, and two scenarios with different satellite geometries are presented. In the first scenario, with good GDOP, Sat1 and Sat3 are in one orbit, while Sat2 and Sat4 are in another. The right ascensions of the ascending node of these two orbits are $ 0^{\circ} $ and $ 20^{\circ} $, respectively. The satellites in the second scenario, with poor GDOP, are all in the same orbit, whose right ascension of the ascending node is $ 0^{\circ} $. The other orbital parameters are the same as those in Table \ref{tb:sim_cor_par}. \figref{fig3_0522} shows the cumulative probability distribution of the TA estimation error under these two different arrangements of satellites. We can observe that a timing-window of 1 s can guarantee that the TA estimation offsets of all uplink frames fall within one CP, and that the performance of our proposed method for TA estimation is better under the geometry with good GDOP.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig3-0220-eps-converted-to.pdf}
\caption{The cumulative probability distribution of TA estimation error for multi-satellite systems in \textit{Scenario 1} (Timing-window: 1 s).}
\label{fig3_0522}
\end{figure}
\figref{fig1_0204} compares the cumulative probability of the TA estimation error with different numbers of LEO satellites. In our simulation, the number of LEO satellites is varied over 2, 4, 6, and 8. Except for the right ascension of the ascending node, the orbital parameters are set to be the same as those in Table \ref{tb:sim_cor_par}. We can observe that a timing-window of 1 s can guarantee that the TA estimation offsets fall within one CP, and that the performance of our proposed method for TA estimation with 4, 6, and 8 satellites is significantly better than that with 2 satellites.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./fig2-0204-eps-converted-to.pdf}
\caption{The cumulative probability of TA estimation error in \textit{Scenario 1} with different number of LEO satellites (Terminal: Pos 2; SSB interval: 20 ms; timing-window: 1 s).}
\label{fig1_0204}
\end{figure}
\section{Conclusion}\label{conclusion}
In this paper, we proposed a new approach, aided by the UE geolocation, to perform uplink TA for random access in 5G integrated LEO SatCom with the TDOA and FDOA measurements acquired in the downlink timing and frequency synchronization phase, thus resolving the inapplicability of the TA scheme originally designed for the terrestrial 5G NR system. We established the system model of TA estimation in two typical scenarios separately and converted the problem into UE geolocation or satellite ephemeris estimation.
We introduced an equality-constrained quadratic optimization problem to obtain the UE geographical position or the satellite ephemeris, and then adopted a quadratic penalty algorithm to find the globally optimal solution of the problem. To reduce the computational complexity, we further proposed an iterative CWLS method as an alternative. Numerical results showed that the proposed method can effectively reach the constrained CRLB of TA estimation and thus achieve uplink frame alignment across UEs.
\begin{appendices}
\section{Proof of Proposition 1}
\label{appendixa}
Since the minimum point of problem (\ref{obj1}) must lie in the feasible set $\mathcal{D}$, the necessity is established. The proof of sufficiency is given as follows.
Suppose ${\mathbf{u}}_{2,k}\in \mathcal{D}$, and let $\mathbf{u}_{2}\in \mathcal{D}$ be arbitrary. Since $\mathbf{u}_{2,k}$ is feasible, we have
\begin{align}\label{A1}
\alpha(\mathbf{u}_{2,k})&=0,\notag\\
g(\mathbf{u}_{2,k})&=F(\mathbf{u}_{2,k};\mu_{k}).
\end{align}
Since $\mathbf{u}_{2,k}$ is the minimum point of $F$, the following inequality holds:
\begin{align}\label{A2}
F(\mathbf{u}_{2,k};\mu_{k})\leq F(\mathbf{u}_{2};\mu_{k}).
\end{align}
Since $\alpha(\mathbf{u}_{2})=0$, we then obtain
\begin{align}\label{A3}
F(\mathbf{u}_{2};\mu_{k})=g(\mathbf{u}_{2}).
\end{align}
From (\ref{A1}), (\ref{A2}) and (\ref{A3}), we can observe that
\begin{align}
g(\mathbf{u}_{2,k})\leq g(\mathbf{u}_{2}).
\end{align}
This concludes the proof.
\section{Proof of Proposition 2}
\label{appendixb}
Suppose $\bar{\mathbf{u}}_{2}$ is a global solution of (\ref{obj1}); then we have
\begin{equation}
\begin{aligned}
g(\bar{\mathbf{u}}_{2})\leq g(\mathbf{u}_{2})\quad \mathrm{for}\ \mathrm{all}\ \mathbf{u}_{2}\ \mathrm{with}\ c_{i}(\mathbf{u}_{2})=0,\\ i=1,2,3.
\end{aligned}
\end{equation}
Since $\mathbf{u}_{2,k}$ minimizes $F(\mathbf{u}_{2};\mu_{k})$, we have $F(\mathbf{u}_{2,k};\mu_{k})\leq F(\bar{\mathbf{u}}_{2};\mu_{k})$, which leads to the following inequality
\begin{equation}
\begin{aligned}\label{u2}
g(\mathbf{u}_{2,k})+& \mu_{k}\sum_{i=1}^{3} c_{i}^{2}(\mathbf{u}_{2,k})\leq
\\&g(\bar{\mathbf{u}}_{2})+\mu_{k}\sum_{i=1}^{3}c_{i}^{2}(\bar{\mathbf{u}}_{2})=g(\bar{\mathbf{u}}_{2}).
\end{aligned}
\end{equation}
By rearranging (\ref{u2}), we obtain
\begin{align}\label{A7}
\sum_{i=1}^{3} c_{i}^{2}(\mathbf{u}_{2,k})\leq \frac{1}{\mu_{k}}[g(\bar{\mathbf{u}}_{2})-g(\mathbf{u}_{2,k})].
\end{align}
Suppose that the sequence $\{\mathbf{u}_{2,k}\}$ converges to a limit point $\mathbf{u}_{2}^{*}$, i.e.,
\begin{align}
\lim\limits_{k\to \infty}\ \mathbf{u}_{2,k}=\mathbf{u}_{2}^{*}.
\end{align}
By taking the limit as $k\to \infty$ on both sides of (\ref{A7}), we obtain
\begin{equation}
\begin{aligned}
\sum_{i=1}^{3} c_{i}^{2}(\mathbf{u}_{2}^{*})=&\lim\limits_{k\to \infty}\ \sum_{i=1}^{3} c_{i}^{2}(\mathbf{u}_{2,k})\\ \leq &\lim\limits_{k\to \infty}\ \frac{1}{\mu_{k}}[g(\bar{\mathbf{u}}_{2})-g(\mathbf{u}_{2,k})]=0,
\end{aligned}
\end{equation}
thus we have that $c_{i}(\mathbf{u}_{2}^{*})=0$, $i=1,2,3$, i.e., $\mathbf{u}_{2}^{*}$ is a feasible point. By taking the limit as $k\to \infty$ in (\ref{u2}) and considering the nonnegativity of $\mu_{k}$, we obtain
\begin{align}
g(\mathbf{u}_{2}^{*})\leq g(\mathbf{u}_{2}^{*})+\lim\limits_{k\to \infty}\ \mu_{k}\sum_{i=1}^{3} c_{i}^{2}(\mathbf{u}_{2,k})\leq g(\bar{\mathbf{u}}_{2}).
\end{align}
Since $\mathbf{u}_{2}^{*}$ is feasible and its objective value is no larger than that of the global minimizer $\bar{\mathbf{u}}_{2}$, we can conclude that $\mathbf{u}_{2}^{*}$ is also a globally optimal solution. This concludes the proof.
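To illustrate how the quadratic penalty iteration of Propositions 1 and 2 behaves, the following is a minimal Python sketch; the objective $g$, the equality constraints $c_i$, and the multiplier schedule are illustrative placeholders rather than the TA-estimation expressions of this paper, and a local BFGS solver stands in for the exact minimization of $F(\cdot;\mu_k)$ assumed in the propositions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def g(u):                         # placeholder smooth objective
    return np.sum((u - np.array([1.0, 2.0, 3.0]))**2)

def c(u):                         # placeholder constraints c_i(u) = 0
    return np.array([u[0] + u[1] - 2.0,
                     u[1] - u[2] + 1.0,
                     u[0]**2 + u[1]**2 + u[2]**2 - 6.0])

def penalty_solve(u0, mu=1.0, growth=10.0, n_iter=8):
    # Minimize F(u; mu_k) = g(u) + mu_k * sum_i c_i(u)^2 for an
    # increasing sequence mu_k, warm-starting from the previous u_k.
    u = u0
    for _ in range(n_iter):
        F = lambda v, m=mu: g(v) + m * np.sum(c(v)**2)
        u = minimize(F, u, method="BFGS").x
        mu *= growth              # mu_{k+1} > mu_k
    return u

u_star = penalty_solve(np.zeros(3))
print(u_star, c(u_star))          # constraint residuals tend to zero
\end{verbatim}
As $\mu_k$ grows, the constraint violation $\sum_i c_i^2(\mathbf{u}_{2,k})$ is driven to zero, mirroring the limit argument in the proof above.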
\section{Derivation of $ \mathbf{A}_{i} $ in (\ref{si})}
\label{appendixc}
From (\ref{opc}), the transformation between $ (\mathbf{s}_{i})_{opc} $ and $ (\mathbf{s}_{1})_{opc} $ can be described as
\begin{align}
(\mathbf{s}_{i})_{opc}&=\mathbf{R_{z}}(-(\alpha_{i}-\alpha_{1}))(\mathbf{s}_{1})_{opc}\nonumber\\
&=\mathbf{R_{z}}(-(i-1)\cdot n'\cdot T)(\mathbf{s}_{1})_{opc},
\end{align}
where $ T $ denotes the time interval between adjacent SSBs, and $ n' $ denotes the mean motion satisfying
\begin{align}
n'=\sqrt{\dfrac{\mu'}{r^{3}}}.
\end{align}
Define $ \mathbf{E}^{eci}_{opc}=\mathbf{R_{z}}(-\Omega)\mathbf{R_{x}}(-\vartheta)\mathbf{R_{z}}(-\varphi)$. With the transformation matrix in (\ref{sit}), we further have
\begin{align}\label{sit2}
(\mathbf{s}_{i})_{eci}&=\mathbf{E}^{eci}_{opc}(\mathbf{s}_{i})_{opc}\nonumber\\
&=\mathbf{E}^{eci}_{opc}\mathbf{R_{z}}(-(i-1)\cdot n'\cdot T)(\mathbf{s}_{1})_{opc}.
\end{align}
For $ i=1 $, (\ref{sit2}) can be rewritten as
\begin{align}
(\mathbf{s}_{1})_{opc}=(\mathbf{E}^{eci}_{opc})^{-1}(\mathbf{s}_{1})_{eci}.
\end{align}
Thus, the transformation matrix $ \mathbf{A}_{i} $ in (\ref{si}) is given by
\begin{align}
\mathbf{A}_{i}=\mathbf{E}^{eci}_{opc}\mathbf{R_{z}}(-(i-1)\cdot n'\cdot T)(\mathbf{E}^{eci}_{opc})^{-1}.
\end{align}
\end{appendices}
|
{
"timestamp": "2021-05-11T02:18:10",
"yymm": "2105",
"arxiv_id": "2105.03858",
"language": "en",
"url": "https://arxiv.org/abs/2105.03858"
}
|
\section*{Introduction}
Hilbert $C^*$-modules (\cite{Paschke}) are a natural generalization of Hilbert spaces, in which the ``scalar'' product takes values in some $C^*$-algebra instead of the field of complex numbers. Although many properties of Hilbert $C^*$-modules are similar to those of Hilbert spaces, there are several important differences, among which are the following: not every closed submodule is orthogonally complemented, and not every bounded functional is given by the ``scalar'' product with some element (i.e., the Riesz representation theorem may fail).
If $M$ is a (right) Hilbert $C^*$-module over the $C^*$-algebra $A$, then it is natural to call a bounded $A$-linear map of $M$ into $A$ an $A$-linear functional. The set of all such mappings constitutes the dual module $M'$, which carries the structure of a right $A$-module but, in general, no $A$-valued ``scalar'' product. Moreover, the bidual module $M''$, dual to the module $M'$, is a Hilbert $C^*$-module, and there are isometric embeddings $M\subset M''\subset M'$ (\cite{Frank1}, \cite{Paschke2}).
The standard Hilbert $C^*$-module $l_2(A)$ is a (right) $A$-module of sequences $(a_n)_{n\in\mathbb N}$, where $a_n\in A$, $ n\in\mathbb N$, and $\sum_{n=1}^\infty a_n^*a_n$ converges in $A$ (in the norm). In this case, the dual module $l_2(A)'$ consists of sequences $(a_n)_{n\in\mathbb N}$, for which the partial sums $\sum_{n=1}^m a_n^*a_n$, $m\in\mathbb N$, are uniformly bounded, but for the bidual module $l_2(A)''$ there is generally no good description (but the description of $l_2(A)''$ is known in the case when $A$ is commutative \cite{FMT}).
Another important difference between Hilbert $C^*$-modules and Hilbert spaces is that a finitely generated Hilbert $C^*$-module does not have to be free, and a countably generated Hilbert $C^*$-module does not have to be standard; for example, $C_0(0,1)$ is not isomorphic to $l_2(C[0,1])$ as a module over $C[0,1]$.
The purpose of this paper is to demonstrate the variety of Hilbert $C^*$-modules using the example of modules over uniform Roe algebras. Information about Hilbert $C^*$-modules can be found in \cite{MT}, and about Roe algebras and underlying metric spaces in \cite{Novak-Yu}, \cite{Roe}.
\section{Hilbert $C^*$-modules over uniform Roe algebras}
We denote the algebra of bounded (respectively, compact) operators of the Hilbert space $H$ by $\mathbb B(H)$ (respectively, $\mathbb K(H)$).
Let $X=(X,d_X)$ be a countable discrete metric space, and let $H_X=l^2(X)$ be the Hilbert space of square summable complex-valued functions on $X$ with the standard orthonormal basis consisting of the delta functions of points, $\delta_x$, $x\in X$. A bounded operator $T$ on $H_X$ with the matrix $(T_{x,y})_{x,y\in X}$ with respect to the standard basis, i.\,e. $T_{x,y}=(\delta_x,T\delta_y)$, has a \textit{propagation} not exceeding $L$ if $d_X(x,y)\geq L$ implies that $T_{x,y}=0$. The $*$-algebra of all bounded operators of finite propagation is denoted by $\mathbb C_u[X]$, and its norm completion in $\mathbb B(H_X)$ is called the uniform Roe algebra $C^*_u(X)$.
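As a toy numerical illustration of the notion of propagation (on a finite truncation of $X$, which can only mimic the infinite setting), one may compute the propagation of a matrix as the largest distance between the indices of its nonzero entries:
\begin{verbatim}
import numpy as np

N = 50                                       # finite truncation of X
x = np.arange(N)
d = np.abs(np.subtract.outer(x, x))          # d_X(x, y) on {0, ..., N-1}

def propagation(T, tol=1e-12):
    support = np.abs(T) > tol
    return d[support].max() if support.any() else 0

# a band matrix with bandwidth 2 has propagation 2
T = np.triu(np.tril(np.random.rand(N, N), 2), -2)
print(propagation(T))                        # -> 2
\end{verbatim}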
For the set $Y$, let $d$ be the metric on $X\sqcup Y$ coinciding on $X$ with $d_X$, i.\,e. $d|_X=d_X$.
We denote by $\mathbb M_{Y,d}$ the set of all bounded operators of finite propagation $T:H_X\to H_Y$, and by $M_{Y,d}$ its norm closure in the set $\mathbb B(H_X,H_Y)$ of all bounded operators from $H_X$ to $H_Y$.
If the operators $T\in\mathbb B(H_X,H_Y)$ and $R\in\mathbb B(H_X)$ have a finite propagation with respect to the metrics $d$ and $d_X$, respectively, then their composition $TR$ obviously also has a finite propagation with respect to the metric $d$. It follows from the continuity of the composition that the action of the $C^*$-algebra $C_u^*(X)$ on $M_{Y,d}$ is well defined and provides the structure of a right $C_u^*(X)$-module.
Similarly, if $T,S\in\mathbb B(H_X,H_Y)$ are operators of finite propagation with respect to $d$, then $S^*T$ has a finite propagation on $l^2(X)$ with respect to $d_X$, so one can define $\langle S,T\rangle=S^*T\in C_u^*(X)$ and extend it by continuity to a $C_u^*(X)$-valued inner product on the module $M_{Y,d}$.
\begin{lemma}
The module $M_{Y,d}$ is a Hilbert $C^*$-module over $C^*_u(X)$.
\end{lemma}
\begin{proof}
Evidently, $\|T\|^2=\|\langle T,T\rangle\|$. The remaining properties of Hilbert $C^*$-modules follow from associativity of operator multiplication.
\end{proof}
\begin{example}
Let $Y$ be a one-point space, $Y=\{y_0\}$. Any operator $T:H_X\to H_Y=\mathbb C$ is a functional on $H_X$ and can be approximated by functionals with a finite number of nonzero coordinates; therefore, it is a limit of operators of finite propagation, i.\,e., $M_{\{y_0\},d}$ can be identified with the space of functionals on $H_X$ and, by the Riesz theorem, with $H_X$.
\end{example}
In \cite{Frank-monotone} it was shown that the structure of a Hilbert $C^*$-module $M$ extends to the dual module $M'$ (making the latter a Hilbert $C^*$-module) if and only if the $C^*$-algebra $A$ is monotone complete. Recall that monotone completeness of $A$ means that any bounded increasing set $\{a_\alpha:\alpha\in I\}$ of self-adjoint elements of the $C^*$-algebra $A$ has the least upper bound $a=\sup\{a_\alpha:\alpha\in I\}$ in $A$.
\begin{theorem}
A metric space $X$ is bounded if and only if the $C^*$-algebra $C_u^*(X)$ is monotone complete.
\end{theorem}
\begin{proof}
If the space $X$ is bounded, then any bounded operator $T:H_X\to H_X$ has finite propagation, so $C_u^*(X)=\mathbb B(H_X)$ is a von Neumann algebra, hence monotone complete.
Conversely, suppose that the space $X$ is unbounded and that the algebra $C_u^*(X)$ is monotone complete. We construct inductively a sequence of pairs of distinct points $\{(x_n,y_n)\}_{n\in\mathbb N}$ in $X$ satisfying the condition $d_X(x_n,y_n)>n$. Suppose that the pairs of points $(x_1,y_1),\ldots,(x_n,y_n)$ with $d_X(x_i,y_i)>i$, $i=1,\ldots,n$, have already been found. If the estimate $d_X(x,y)\leq n+1$ held for all $x,y\neq x_1,\ldots,x_n,y_1,\ldots,y_n$, then the diameter of $X$ would be finite. Hence, it is possible to find points $x_{n+1},y_{n+1}\in X$ that do not coincide with any of the previous ones and satisfy $d_X(x_{n+1},y_{n+1})\geq n+1$.
Set
$$
T^{(n)}_{x,y}=\left\lbrace\begin{array}{cl}1,& \mbox{if\ }(x,y)\in\{(x_i,y_i),(y_i,x_i):i=1,\ldots,n\};
\\0&\mbox{otherwise.}\end{array}\right.
$$
Then the matrix $(T^{(n)}_{x,y})$ defines a bounded self-adjoint operator $T^{(n)}$ of finite rank, hence of finite propagation, for each $n\in\mathbb N$. Let $T\in C_u^*(X)$ be the least upper bound of the set $\{T^{(n)}\}_{n\in\mathbb N}$. Then $T_{x_n,y_n}\geq 1$ for any $n\in\mathbb N$, while $d_X(x_n,y_n)>n$. This contradicts the fact that $T\in C_u^*(X)$: for a norm limit of operators of finite propagation, the matrix entries must become uniformly small as the distance between the indices grows.
\end{proof}
\begin{corollary}
A metric space $X$ is bounded if and only if the structure of a Hilbert $C^*$-module extends from any Hilbert $C^*$-module $M$ over $C_u^*(X)$ to its dual module $M'$.
\end{corollary}
Recall that two metrics $d_1,d_2$ on a space $Z$ are \textit{coarsely equivalent} \cite{Novak-Yu} if there exists a monotonically increasing function $\varphi$ on $[0,\infty)$ such that $\lim_{t\to\infty}\varphi(t)=\infty$ and one has $d_1(z_1,z_2)\leq\varphi(d_2(z_1,z_2))$ and $d_2(z_1,z_2)\leq \varphi(d_1(z_1,z_2))$ for any $z_1,z_2\in Z$.
\begin{proposition}
Let $d_1$, $d_2$ be metrics on $X\sqcup Y$ with the same restriction to $Y$. They are coarsely equivalent if and only if $M_{Y,d_1}=M_{Y,d_2}$.
\end{proposition}
\begin{proof}
If the metrics are coarsely equivalent, then having finite propagation with respect to one of them is equivalent to having finite propagation with respect to the other.
Conversely, suppose the metrics are coarsely nonequivalent. Then there is a sequence of pairs of points $(x_n,y_n)$, $x_n\in X$, $y_n\in Y$, $n\in\mathbb N$, such that for one metric the values $d_1(x_n,y_n)$ are uniformly bounded by some constant $C>0$, while the other metric satisfies the estimate $d_2(x_n,y_n)\geq n$. We claim that each point $x_k$ can occur in the sequence $\{x_n\}_{n\in\mathbb N}$ only a finite number of times. Indeed, if $x_k=x_{n_1}=x_{n_2}=\cdots$ then
$$
d_1(y_{n_i},y_{n_1})\leq d_1(x_k,y_{n_i})+d_1(x_k,y_{n_1})\leq 2C
$$
for any $i\in\mathbb N$, while
$$
d_2(y_{n_i},y_{n_1})\geq d_2(x_k,y_{n_i})-d_2(x_k,y_{n_1})\geq n_i-d_2(x_k,y_{n_1}),
$$
i.\,e. the distances $d_2(y_{n_i},y_{n_1})$ are unbounded in $i$ (the number $d_2(x_k,y_{n_1})$ being fixed), although the metrics $d_1$, $d_2$ are equal on $Y$, and this contradiction shows that the point $x_k$ can be repeated in the sequence $\{x_n\}_{n\in\mathbb N}$ only a finite number of times. Passing to a subsequence, we may assume that the sequence $\{x_n\}_{n\in\mathbb N}$ does not contain repeating points at all. The same may be assumed for the sequence $\{y_n\}_{n\in\mathbb N}$.
Set
$$
T_{x,y}=\left\lbrace\begin{array}{cl}1,& \mbox{if\ }(x,y)\in\{(x_n,y_n):n\in\mathbb N\};\\
0&\mbox{otherwise.}\end{array}\right.
$$
Then the matrix $(T_{x,y})$ defines a bounded operator $T$ from $H_X$ to $H_Y$. It has finite propagation with respect to the metric $d_1$, i.\,e. $T\in M_{Y,d_1}$. If $M_{Y,d_1}=M_{Y,d_2}$ then the operator $T$ would be a norm limit of operators of finite propagation with respect to the metric $d_2$, but this is impossible: $T_{x_n,y_n}=1$ while $d_2(x_n,y_n)\geq n$.
\end{proof}
The following statements are obvious.
\begin{proposition}
If $d_1(x,y)\leq d_2(x,y)$ for any $x,y\in X\sqcup Y$ then $M_{Y,d_2}\subset M_{Y,d_1}$.
\end{proposition}
\begin{proposition}
If $Y=Y_1\sqcup Y_2$ then $M_{Y}=M_{Y_1}\oplus M_{Y_2}$.
\end{proposition}
\section{The case $Y=X$ }
Let $Y=X$. To avoid ambiguity, we will denote the first copy of $X$ by $X_0$ and the second copy by $X_1$. Accordingly, the point $x\in X$ will be denoted by $x_0\in X_0$ if it lies in the first copy of $X$, and $x_1\in X_1$ if it lies in the second copy. We will also identify $\mathbb B(H_{X_0},H_{X_1})$ with the algebra $\mathbb B(H_X)$.
The set $S(X)$ of coarse equivalence classes of metrics on $X_0\sqcup X_1$ has the natural structure of an inverse semigroup \cite{M}, where the composition of metrics is given by the formula
$$
d_1d_2(x_0,z_1)=\inf\nolimits_{y\in X}[d_2(x_0,y_1)+d_1(y_0,z_1)],
$$
the adjoint (pseudoinverse) metric is given by the formula $d^*(x_0,y_1)=d(y_0,x_1)$, the unit element is given by the metric $d(x_0,y_1)=d_X(x,y)+1$ and the zero element is given by the metric $d(x_0,y_1)=d_X(x,u)+d_X(y,u)+1$ with a fixed point $u\in X$ (recall that for a metric on $X_0\sqcup X_1$ it suffices to define distances between points lying in different copies of the space $X$).
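These operations can be experimented with on a finite truncation of $X$; in the toy Python sketch below (an illustration only, not part of the theory), a metric on $X_0\sqcup X_1$ is stored through its cross-distances $D[x,z]=d(x_0,z_1)$, and the composition of the unit with itself is checked to differ from the unit by a bounded amount, i.e. to stay in the same coarse equivalence class.
\begin{verbatim}
import numpy as np

N = 20                                       # finite truncation of X
x = np.arange(N)
dX = np.abs(np.subtract.outer(x, x))         # d_X on {0, ..., N-1}

def compose(D1, D2):
    # (d1 d2)(x_0, z_1) = min_y [ d2(x_0, y_1) + d1(y_0, z_1) ]
    return (D2[:, :, None] + D1[None, :, :]).min(axis=1)

def adjoint(D):
    # d*(x_0, y_1) = d(y_0, x_1)
    return D.T

unit = dX + 1                                # the unit element
u = 0                                        # base point for the zero element
zero = dX[:, [u]] + dX[[u], :] + 1

# the composition differs from the unit by a bounded amount (here by 1),
# so it is coarsely equivalent to the unit, as the semigroup law requires
print(np.abs(compose(unit, unit) - unit).max())
\end{verbatim}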
It is clear that if the metrics $d_1$ and $d_2$ are coarsely equivalent then $M_{X,d_1}=M_{X,d_2}$. Thus, we have the Hilbert $C^*$-module $M_{X,d}$ for each coarse equivalence class $s=[d]\in S(X)$. The collection of these Hilbert $C^*$-modules forms a \textit{Fell bundle} in the sense of Definition 2.1 from \cite{exel} (the mapping $M_{X,d_1}\otimes M_{X,d_2}\to M_{X,d_1d_2}$ is given by composition, see \cite{M-RJMP}).
\begin{example}
Let $A\subset X$. Define the metric $d^A$ on $X_0\sqcup X_1$ by
$$
d^A(x_i,y_i)=d_X(x,y), \quad i=0,1;
$$
$$
d^A(x_0,y_1)=\inf\nolimits_{z\in A}[d_X(x,z)+d_X(y,z)+1]
$$
for any $x,y\in X$.
\end{example}
Denote the $k$-neighborhood of $A$ by $N_k(A)$, i.\,e.
$$
N_k(A)=\{x\in X:d_X(x,A)\leq k\}.
$$
For $B\subset X$, denote by $H_B=l_2(B)\subset l_2(X)=H_X$ the closed subspace in $l_2(X)$ generated by the functions $\delta_x$, $x\in B$. To simplify notation, we will identify an operator $T\in\mathbb B(H_B)$ with the operator in $\mathbb B(H_X)$ equal to $T$ on $H_B$ and equal to 0 on $H_B^\perp$.
\begin{proposition}
The module $M_{X,d^A}$ is canonically isomorphic to the norm closure of the set
$\bigcup_{k=1}^\infty C_u^*(N_k(A))$.
\end{proposition}
\begin{proof}
Let $T\in C_u^*(N_k(A))$ be an operator of propagation not exceeding $L$. If $T_{x_0,y_1}\neq 0$ then $d_X(x,y)\leq L$ and $x,y\in N_k(A)$, hence there exists a point $u\in A$ such that $d_X(x,u)\leq k+1$. Then, taking $z=u$, we obtain
\begin{eqnarray*}
d^A(x_0,y_1)&=&\inf_{z\in A}[d_X(x,z)+d_X(z,y)+1]\leq d_X(x,u)+d_X(u,y)+1\leq\\
&\leq& k+1+d_X(u,x)+d_X(x,y)+1\leq L+2k+3,
\end{eqnarray*}
i.\,e. $T$ is of finite propagation, $T\in M_{X,d^A}$.
Let now $S\in M_{X,d^A}$ be an operator of propagation not exceeding $L$. If $S_{x_0,y_1}\neq 0$ then
$$
d^A(x_0,y_1)=\inf_{z\in A}[d_X(x,z)+d_X(z,y)+1]\leq L.
$$
The triangle inequality implies that
\begin{equation}\label{treug}
d_X(x,y)\leq d_X(x,z)+d_X(z,y)
\end{equation}
for any $z\in X$, hence, passing in (\ref{treug}) to the infimum with respect to $z\in X$, we get
$$
d_X(x,y)\leq d^A(x_0,y_1)-1\leq L-1.
$$
As a metric is always non-negative, we have
$$
d_X(x,A)=\inf_{z\in A}d_X(x,z)\leq\inf_{z\in A}[d_X(x,z)+d_X(z,y)]= d^A(x_0,y_1)-1\leq L-1.
$$
Similarly, we obtain that $d_X(y,A)\leq L-1$. Thus, $S_{x_0,y_1}\neq 0$ implies that $x,y\in N_L(A)$ and $d_X(x,y)\leq L-1$, i.\,e. $S\in C_u^*(N_L(A))$.
\end{proof}
\begin{lemma}
Let $A\subset X$. If $X\setminus N_k(A)$ is not empty for any $k\in\mathbb N$ then the submodule
$M_{X,d^A}$ in the module $C_u^*(X)$ is not orthogonally complemented.
\end{lemma}
\begin{proof}
Let $B\subset X$, and let $P_B$ be the projection onto $l_2(B)$ in $l_2(X)$. Evidently, $P_{N_k(A)}\in M_{X,d^A}$ for any $k\in\mathbb N$. If $S\in C_u^*(X)$ is orthogonal to $M_{X,d^A}$ then $P_{N_k(A)}S=0$ for any $k\in\mathbb N$; but, as $\bigcup_{k=1}^\infty N_k(A)=X$, the sequence $P_{N_k(A)}$ of projections converges to $1=P_X$ in the strong topology, whence $S=0$.
On the other hand, since $X\setminus N_k(A)$ is non-empty for every $k$, we have $M_{X,d^A}\neq C_u^*(X)$. Indeed, if the unit $1\in C_u^*(X)$ belonged to $M_{X,d^A}$ then there would exist a sequence $T^{(k)}\in C_u^*(N_k(A))$ converging to the unit in the norm topology. But if $x\notin N_k(A)$ then $T^{(k)}\delta_x=0$, so the convergence may hold only in the strong topology, not in norm.
\end{proof}
Two extreme examples are the cases when $A=X$ and when $A$ consists of a single point. In the first case $M_{X,d^X}=C_u^*(X)$, and the second case is given by the following statement.
\begin{proposition}
Let $x_0\in X$. If $X$ is proper, i.\,e. if each ball contains only a finite number of points, then $M_{X,d^{\{x_0\}}}=\mathbb K(H_X)$.
\end{proposition}
\begin{proof}
If $X$ is proper then an operator of finite propagation with respect to $d^{\{x_0\}}$ is supported in a ball around $x_0$, hence has finite rank; conversely, every operator with finite support has finite propagation. Passing to the norm closure, we obtain the required statement.
\end{proof}
\section{The case $Y=X\times\mathbb N$}
Consider the case $Y=X\times\mathbb N$. Let us introduce the notation $\overline{\mathbb N}=\mathbb N\cup\{0\}$. For convenience, we write $X_n$ instead of $X\times\{n\}\subset Y$, and $X_0=X$. For the point $x\in X$, we denote the point $(x,n)\in X\times\mathbb N$, $n\in\overline{\mathbb N}$, by $x_n\in X_n$.
In this case $H_{X\times\mathbb N}=\oplus_{n=1}^\infty H_{X_n}$. By $Q_n:H_{X\times\mathbb N}\to H_{X_n}$ we denote the projection onto the $n$-th direct summand.
For $T:H_X\to H_{X\times\mathbb N}$, put $T_n=Q_nT:H_X\to H_{X_n}$. In what follows, we identify $T$ with the sequence $(T_n)_{n\in\mathbb N}$.
Consider first the metric $d_1$ on $X\sqcup X\times\mathbb N=X\times\overline{\mathbb N}$ given by the formula $d_1(x_n,y_m)=d_X(x,y)+|n-m|$ for any $x,y\in X$, $n,m\in\overline{\mathbb N}$, i.\,e. coinciding on the factors $X$ and $\overline{\mathbb N}$ with the metric $d_X$ and with the standard metric on $\overline{\mathbb N}$, respectively.
\begin{proposition}
The module $M_{X\times\mathbb N,d_1}$ coincides with the standard Hilbert $C^*$-module $l_2(C_u^*(X))$.
\end{proposition}
\begin{proof}
Let the operator $T=(T_n)_{n\in\mathbb N}$ have propagation not exceeding $L$. Then $T_n=0$ for any $n>L$. On the other hand, any sequence $(T_1,T_2,\ldots,T_n,0,0,\ldots)$, where $T_i\in C_u^*(X)$, $i=1,\ldots,n$, lies in $M_{X\times\mathbb N,d_1}$. Thus, $M_{X\times\mathbb N,d_1}$ is the completion of the set of finitely supported sequences with entries in $C_u^*(X)$, and therefore coincides with $l_2(C_u^*(X))$.
\end{proof}
As a second example, let us consider the metric $d_0$ on $X\sqcup X\times\mathbb N$ determined by the formulas $d_0(x_n,y_m)=d_X(x,y)+1$ for $m\neq n$ and $d_0(x_n,y_n)=d_X(x,y)$ for any $x,y\in X$.
\begin{theorem}
If the space $X$ is bounded then the module $M_{X\times\mathbb N,d_0}$ coincides with $l_2(\mathbb B(H_X))'=l_2(C_u^*(X))'$. If the space $X$ is unbounded then there exists an element $T\in M_{X\times\mathbb N,d_0}$ such that $T\notin l_2(C_u^*(X))''$.
\end{theorem}
\begin{proof}
The first statement is obvious. Suppose that $X$ is unbounded, then there is a sequence of pairs of distinct points $(x^k,y^k)_{k\in\mathbb N}$ such that $d_X(x^k,y^k)>k$. Put $(T_n)_{x^k,y^k_n}=(T_n)_{y^k,x^k_n}=1$, and all other matrix elements of the matrix of $T_n$ are equal to zero; $(S_n)_{x^k,x^k_n}=(S_n)_{y^k,y^k_n}=1$, and all other matrix elements of the matrix of $S_n$ are equal to zero. Let $T=(T_1,T_2,\ldots)$, $S=(S_1,S_2,\ldots)$. Note that $T_nS_n=S_n$ for any $n\in\mathbb N$, and that the series $\sum_{n=1}^\infty T_n$ and $\sum_{n=1}^\infty S_n$ are convergent with respect to the strong operator topology in $\mathbb B (H_X)$.
The sequence $(T_n)_{n\in\mathbb N}$ defines an element of the dual module $l_2(C_u^*(X))'$. Indeed, each operator $T_n$ has propagation $d_X(x^n,y^n)$, therefore lies in $C_u^*(X)$, and the partial sums $\sum_{n=1}^N T_n^* T_n$ are uniformly bounded in $N$. However, this sequence does not lie in $M_{X\times\mathbb N,d_0}$ because the strong limit $\sum_{n=1}^\infty T_n^* T_n$ does not lie in $C_u^*(X)$.
The sequence $(S_n)_{n\in\mathbb N}$ lies in $M_{X\times\mathbb N,d_0}$. Indeed, the propagation of each $S_n$, $n\in\mathbb N$, equals one, hence the propagation of $S$ equals one. Suppose that $S$ lies in the second dual module $l_2(C_u^*(X))''$. There is a natural action of the first dual module on the second dual one. Let $T(S)\in C_u^*(X)$ be the result of this action of $T$ on $S$. This action extends the standard inner product $\langle T,S\rangle=\sum_{i=1}^\infty T_i^* S_i$ when $T,S\in l_2(C_u^*(X))$, but, in general, the value of $T(S)$ is not related to the series $\sum_{i=1}^\infty T_i^* S_i$, even if this series converges in some topology. Let $L_N\cong C_u^*(X)^N\subset l_2(C_u^*(X))$ be the free submodule of sequences supported in the first $N$ entries. Then $l_2(C_u^*(X))=L_N\oplus L_N^\perp$, and similar decompositions into direct sums hold for the first and second dual modules of $l_2(C_u^*(X))$. Let $T=T_N+T'_N$, $S=S_N+S'_N$ be the corresponding decompositions of $T$ and $S$, respectively. Let $K_M\subset H_X$ be the $2M$-dimensional linear subspace generated by the functions $\delta_{x^n}$ and $\delta_{y^n}$, $n\leq M$. Let us fix $M\in\mathbb N$. Since the series $\sum_{n=1}^\infty S_n^* S_n$ converges in the strong topology, for any $\varepsilon>0 $ one can find $N\in\mathbb N$ such that
$$
\left|\left( \xi,\sum\nolimits_{n=N+1}^\infty S_n^*S_n\eta\right)\right|<\varepsilon
$$
for any unit vectors $\xi,\eta\in K_M$. Let $P_M$ be the projection in $H_X$ onto $K_M$. Evidently, $P_M\in C_u^*(X)$. Then
$$
\|S'_N P_M\|^2=\sup_{\xi,\eta\in K_M,\|\xi\|=\|\eta\|=1}\left|\left(\xi,\sum\nolimits_{n=N+1}^\infty P_MS_n^*S_nP_M\eta\right)\right|\leq\varepsilon,
$$
hence
$$
|(\xi,S'_N(T)\eta)|=|(\xi,P_MS'_N(T)\eta)|\leq\|S'_NP_M\|\|T\|\leq\sqrt{\varepsilon}\,\|T\|.
$$
Let $\xi=\delta_{x^n}$, $\eta=\delta_{y^n}$, $M\geq n$. Then
$$
(\delta_{x^n},S_N(T)\delta_{y^n})=\Bigl(\sum\nolimits_{i=1}^N S^*_iT_i\Bigr)_{x^n,y^n}=1
$$
and
$$
(\delta_{x^n},S(T)\delta_{y^n})=(\delta_{x^n},(S_N(T)+S'_N(T))\delta_{y^n}),
$$
therefore, $|(\delta_{x^n},S(T)\delta_{y^n})-1|\leq \sqrt{\varepsilon}\,\|T\|$. Thus, if $\varepsilon$ is small enough then $\bigl|\bigl(S(T)\bigr)_{x^n,y^n}\bigr|\geq\dfrac{1}{2}$. As $n$ was arbitrary, we have $S(T)\notin C_u^*(X)$, which gives a contradiction.
\end{proof}
We do not know whether the inclusion $l_2(C_u^*(X))''\subset M_{X\times\mathbb N,d_0}$ holds.
\section{The case $X=\mathbb N^2$}
One of the simplest unbounded spaces is the space $X=\mathbb N^2=\{k^2: k\in\mathbb N\}$ of squares of natural numbers with the standard metric. In particular, its asymptotic dimension \cite{gromov} is zero. Here we consider some examples of Hilbert $C^*$-modules over the uniform Roe algebra of this space. This algebra has a simple description: it is the sum (but not a direct sum) of two $*$-subalgebras, the subalgebra of compact operators and the subalgebra of diagonal operators: $C_u^*(X)=\mathbb K(H_X)+\mathbb D(H_X)$. Denote by $l_1(\overline{\mathbb N})$ the Banach space of absolutely summable sequences $(t_0,t_1,t_2,\ldots)$, $t_i\in\mathbb R$, $i\in\overline{\mathbb N}$, with the standard $l_1$-norm.
For convenience, we shall denote the point $k^2\in X$ by $x^k$.
Let
$$
x^k=x_0^k=(k^2,0,0,\ldots),\quad X=X_0=\{x_0^k:k\in\mathbb N\},
$$
and let $X_n=\{x_n^k:k\in\mathbb N\}$ be the $n$-th copy of the space $X$.
Consider several examples of metrics induced by various embeddings $Y=\bigsqcup_{n=1}^\infty X_n\subset l_1(\overline{\mathbb N})$.
\begin{example}
Let $x^k_n=(k^2,0,\ldots,0,1,0,0,\ldots)$ for $k\geq n$, where 1 is the $n$-th coordinate.
For $k<n$, we place the points $x^k_n$ on the ray
$$
t_0=\cdots=t_{n-1}=t_{n+1}=t_{n+2}=\cdots=0,\quad t_n\geq 0,
$$
in such a way that the distances between them coincide with the metric $d_X$, i.\,e. $d(x^k_n,x^l_n)=d_X(x^k,x^l)$, $k,l\in\mathbb N$, where $d$ is the metric on $\bigsqcup_{n=0}^\infty X_n$ induced by the metric of $l_1(\overline{\mathbb N})$. Then
$\lim_{n\to\infty}d(x^k_0,x^k_n)=\infty$, and $d(x^k_0,x^k_n)=1$ for $k\geq n$.
\end{example}
Denote by $l_2(\mathbb D(H_X))'_0\subset l_2(\mathbb D(H_X))'$ the submodule consisting of sequences $(D_n)_{n\in\mathbb N}$,
$$
D_n=\operatorname{diag}(d_n^1,d_n^2,\ldots)\in\mathbb D(H_X),\qquad d_n^i\in\mathbb C,\quad i,n\in\mathbb N,
$$
such that $d_n^i=0$ for $i<n$.
\begin{proposition}\label{Lemma-primer}
$M_{X\times\mathbb N,d}= l_2(\mathbb K(H_X))+l_2(\mathbb D(H_X))'_0$.
\end{proposition}
\begin{proof}
If $T=(T_n)_{n\in\mathbb N}\in l_2(\mathbb K(H_X))$ then for any $\varepsilon>0$ there exist $n\in\mathbb N$ and $K_1,\ldots,K_n\in\mathbb K(H_X)$ such that $\|T-K\|<\varepsilon$, where $K=(K_1,\ldots,K_n,0,0,\ldots)$. As $K$ is a norm limit of operators of finite propagation, we have $K\in M_{X\times\mathbb N,d}$, hence $T\in M_{X\times\mathbb N,d}$. If $T=(T_n)_{n\in\mathbb N}\in l_2(\mathbb D(H_X))'_0$ then the propagation of $T$ equals one, hence $T\in M_{X\times\mathbb N,d}$.
Now let the propagation of $T\in M_{X\times\mathbb N,d}$ not exceed $L$. It follows from $T_{x^k_0,x_n^l}\neq 0$ that $d(x^k_0,x^l_n)\leq L$; hence, if $n\geq L$ then $k=l\geq n$. Denote by $P_L$ the projection onto the linear span of the functions $\delta_{x^1},\ldots,\delta_{x^L}$. Let $D_n$ be the diagonal part of $T_n$, i.\,e. the operator given by $D_n\delta_x=(T_n)_{x_0,x_n}\delta_{x_n}$, $x\in X$, and let $K_n=P_L(T_n-D_n)P_L$. Then the rank of $K_n$ does not exceed $L$ for $n\in\mathbb N$, and $K_n=0$ for $n\geq L$. Set
$$
K=(K_1,K_2,\ldots),\quad D=(D_1,D_2,\ldots),\quad T=K+D.
$$
Now let $\{T^{(L)}\}_{L\in\mathbb N}$ be norm convergent to $T$, where the propagation of $T^{(L)}$ does not exceed $L$. Then the diagonal parts $D^{(L)}$ of the operators $T^{(L)}$ are norm convergent to the diagonal part $D$ of the operator $T$, and therefore the same holds for their compact parts: $K^{(L)}\to K$ as $L\to\infty$. Since $K_n^{(L)}=0$ for $n> L$, the limit $K$ of these finitely supported sequences lies in $l_2(\mathbb K(H_X))$.
Let us show that the partial sums $\|\sum_{n=1}^N D^*_nD_n\|$ are uniformly bounded in $N$. If this were false then for any $m>0$ there would exist $N_m$ such that $\|\sum_{n=1}^{N_m} D^*_nD_n\|>m$. As $D^{(L)}$ is norm convergent to $D$, there exists $L_0>0$ such that $\|D^{(L)}\|\leq \|D\|+1$ for any $L\geq L_0$. But
$$
\bigl\|\sum\nolimits_{n=1}^{N_m}(D^{(L)}_n)^*D^{(L)}_n\bigr\|\leq \|D^{(L)}\|^2\leq (\|D\|+1)^2.
$$
Taking here $m>(\|D\|+1)^2$, we obtain a contradiction. Thus, $D\in l_2(\mathbb D(H_X))'$. It is easy to see that $D$ lies in the submodule $l_2(\mathbb D(H_X))'_0$.
\end{proof}
\begin{example}
Let
$$
x^k_n=\left\lbrace\begin{array}{ll}(k^2,0,\ldots,0,1,0,0,\ldots)& \mbox{for\ } k<n;\\ (k^2-n,0,\ldots,0,n+1,0,0,\ldots)& \mbox{for\ }k\geq n, \end{array}\right.
$$
where the non-zero entries are at the 0-th and $n$-th place.
Let $\rho$ be the metric on $\bigsqcup_{n=0}^\infty X_n$ induced by the metric on $l_1(\overline{\mathbb N})$. Then
$\lim_{n\to\infty}\rho(x^k_0,x^k_n)=\infty$, and $\rho(x^k_0,x^k_n)=1$ for $k\geq n$.
\end{example}
Let $l_2(\mathbb D(H_X))'_1\subset l_2(\mathbb D(H_X))'$ be a submodule consisting of sequences $(D_n)_{n\in\mathbb N}$,
$$
D_n=\operatorname{diag}(d_n^1,d_n^2,\ldots)\in\mathbb D(H_X),\qquad d_n^i\in\mathbb C,\quad i,n\in\mathbb N,
$$
such that $d_n^i=0$ for $i> n$.
\begin{proposition}
$M_{X\times\mathbb N,\rho}= l_2(\mathbb K(H_X))+l_2(\mathbb D(H_X))'_1$.
\end{proposition}
\begin{proof}[Proof\nopunct] is similar to that of Proposition \ref{Lemma-primer}.
\end{proof}
Let a mapping $\varphi:\mathbb N\to\mathbb N$ satisfy the following conditions:
\begin{enumerate}
\item[1)]
$\varphi$ takes each value infinitely many times,
\item[2)]
$\varphi(k)\leq k$ for any $k\in\mathbb N$.
\end{enumerate}
\begin{example}
Set
$$
x_n^k=(k^2-\varphi(k),0,\ldots,0,\varphi(k),0,0,\ldots),
$$
where $\varphi(k)$ is the $(n+1)$-th coordinate. The metric on $X_0=\{x_0^k:k\in\mathbb N\}$ coincides with the standard metric on $X=\mathbb N^2$. Let $b$ be the metric on $\bigsqcup_{n=0}^\infty X_n$ induced by the metric on $l_1(\overline{\mathbb N})$. Let $\{k_i\}_{i\in\mathbb N}$ be a sequence such that $\varphi(k_i)=1$ for any $i\in\mathbb N$. Then $b(x_0^{k_i},x_n^{k_i})=2$ for any $i\in\mathbb N$. Put
$$
(T_n)_{x_0^k,x_n^l}=\left\lbrace\begin{array}{cl}1,&\mbox{if\ }k=l=k_n;\\0&\mbox{otherwise.}\end{array}\right.
$$
Then $T=(T_n)_{n\in\mathbb N}\in M_{X\times\mathbb N,b}$.
\end{example}
|
{
"timestamp": "2021-05-11T02:14:26",
"yymm": "2105",
"arxiv_id": "2105.03764",
"language": "en",
"url": "https://arxiv.org/abs/2105.03764"
}
|
\section{Experimental Results}
\label{sec::dataAnalysis}
In this section, we first present and analyze the measured metrics of the serving cell signal: RSRP, RSRQ, RSSI, SINR, downlink throughput, and uplink throughput. We then compare the RSRP and RSRQ of the serving cell signal with those of the neighboring cells' signals.
Reference signal received power (RSRP) is the first metric we measure and analyze for the serving cell signal. According to 3GPP, RSRP is defined as the "linear average over the power contributions of the resource elements that carry cell-specific reference signals within the considered measurement frequency bandwidth" \cite{3GPP}. In simpler terms, RSRP represents the power of the reference signal at the receiver in an LTE network, excluding the noise and interference from neighboring cells. It is measured in dBm, and the signal is considered excellent if $RSRP\ge -80$ dBm, good if $-90~\text{dBm}\le RSRP< -80~\text{dBm}$, fair to poor if $-100~\text{dBm}\le RSRP< -90~\text{dBm}$, and no signal if $RSRP< -100$ dBm. Fig. (\ref{fig::RSRP}) shows the RSRP for combinations of different speeds and elevations, for both directions: going from the starting point to 500 m away and returning back to the starting point. The highlighted intervals indicate that a handover occurred during that interval.
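For reference, the quality bands quoted above can be encoded as a small helper; the band names and thresholds follow the text, while the function itself is merely an illustrative sketch.
\begin{verbatim}
def rsrp_band(rsrp_dbm):
    # 3GPP-style RSRP quality bands as quoted in the text
    if rsrp_dbm >= -80:
        return "excellent"
    if rsrp_dbm >= -90:
        return "good"
    if rsrp_dbm >= -100:
        return "fair to poor"
    return "no signal"

print(rsrp_band(-85.0))   # -> "good"
\end{verbatim}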
Figures (\ref{fig::RSRP}a and b) show the RSRP for the fixed speed of 60 kmph and three different elevations of 40 m, 80 m, and 120 m, for both directions. As is clear from the figure, a higher elevation helps the aerial node receive a signal with higher power, since a higher elevation offers more areas with a dominant line-of-sight signal. The number of handovers also decreases noticeably at the elevation of 120 m. Figures (\ref{fig::RSRP}c and d) show the corresponding results for the elevation of 120 m and the two speeds of 30 kmph and 60 kmph. While the two speeds yield an almost identical pattern, the lower speed performs slightly better. Since the RSRP shows almost the same pattern in the forward and return directions, for the other metrics we present the figures for one direction only.
\begin{figure}[h!]
\centering
\subfloat[Different elev. (speed=60kmph)]{\includegraphics[width=.5\linewidth]{figs/RSRPelv}}
\subfloat[Different elev. opposite direction (speed=60kmph)]{\includegraphics[width=.5\linewidth]{figs/RSRPelvReverse}}\\
\subfloat[Different speeds (elev=120m)]{\includegraphics[width=.5\linewidth]{figs/RSRPspeed}}
\subfloat[Different speeds opposite direction(elev=120m)]{\includegraphics[width=.5\linewidth]{figs/RSRPspeedReverse}}\\
\caption{A comparison of RSRP for different elevations and different UAV speeds.}
\label{fig::RSRP}
\end{figure}
Reference signal received quality (RSRQ) is the next investigated metric. RSRQ represents the quality of the received signal at the user equipment and is measured in dB. While RSRP is the main metric for decision-making on handover and cell reselection, RSRQ provides additional information when RSRP alone is insufficient. The signal is considered excellent if $RSRQ\ge -10$ dB, good if $-15~\text{dB}\le RSRQ< -10~\text{dB}$, fair to poor if $-20~\text{dB}\le RSRQ< -15~\text{dB}$, and no signal if $RSRQ< -20$ dB. Fig. (\ref{fig::RSRQ}) shows the serving signal RSRQ for combinations of different elevations and speeds. Looking at the range of variation of this metric, we find no significant improvement among the different settings; however, the higher elevation and the lower speed show slightly better RSRQ.
\begin{figure}[h!]
\centering
\subfloat[Different elev. (speed=60 kmph)]{\includegraphics[width=.5\linewidth]{figs/RSRQelv}}
\subfloat[Different speeds (elev=120 m)]{\includegraphics[width=.5\linewidth]{figs/RSRQspeed}}
\caption{A comparison of RSRQ for different elevations and different UAV speeds.}
\label{fig::RSRQ}
\end{figure}
The received signal strength indicator (RSSI) is the next measured metric. RSSI represents the strength of the received signal, including noise and interference, and is measured in dBm. Fig. (\ref{fig::RSSI}) shows the RSSI for different elevations and speeds. Again, the superiority of the higher elevations is significant for this metric. As discussed earlier, the test was done in a rural area covered by trees; this is why the signal strength remains almost the same once the flight altitude is much larger than the trees' height, whereas it suffers in the low-altitude tests. We also note that, for the same elevation, the speed has no obvious effect on this metric.
\begin{figure}[h!]
\centering
\subfloat[Different elev. (speed=60 kmph)]{\includegraphics[width=.5\linewidth]{figs/RSSIelv}}
\subfloat[Different speeds (elev=120 m)]{\includegraphics[width=.5\linewidth]{figs/RSSIspeed}}
\caption{A comparison of RSSI for different elevations and different UAV speeds.}
\label{fig::RSSI}
\end{figure}
Signal to interference and noise ratio (SINR), measured in dB, is the next metric. It is defined as the RSRP divided by the sum of the interference power from the neighboring cells and the noise power. SINR is important because it quantifies the relationship between the radio-frequency conditions and the achievable throughput. Fig. (\ref{fig::SINR}) shows the SINR results for different elevations and UAV speeds. While the higher speed, as well as the lowest elevation, in most cases leads to lower SINR, there is no significant difference between the elevations of 80 m and 120 m.
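This definition can be made concrete with a short sketch that converts dBm powers to the linear scale before forming the ratio; the example power levels below are made up for illustration.
\begin{verbatim}
import numpy as np

def sinr_db(rsrp_dbm, interferers_dbm, noise_dbm):
    # SINR = serving RSRP / (sum of interference + noise), in linear scale
    s = 10 ** (rsrp_dbm / 10)
    i = sum(10 ** (p / 10) for p in interferers_dbm)
    n = 10 ** (noise_dbm / 10)
    return 10 * np.log10(s / (i + n))

print(sinr_db(-85.0, [-95.0, -98.0], -110.0))   # roughly 8 dB
\end{verbatim}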
\begin{figure}[h!]
\centering
\subfloat[Different elev. (speed=60 kmph)]{\includegraphics[width=.5\linewidth]{figs/SINRelv}}
\subfloat[Different speeds (elev=120 m)]{\includegraphics[width=.5\linewidth]{figs/SINRspeed}}
\caption{A comparison of SINR for different elevations and different UAV speeds.}
\label{fig::SINR}
\end{figure}
The last investigated metrics are the achievable Transmission Control Protocol (TCP) downlink and uplink throughput. To measure the throughput, the script running on the smartphone uploads a large file to a server whose bandwidth is much higher than that of the cellular network, and simultaneously downloads a large enough file from another such server. In this case, the limiting factor is the maximum achievable throughput at the user equipment side, i.e., the network throughput.
Figs. (\ref{fig::dlThroughput}) and (\ref{fig::ulThroughput}) show the throughput for combinations of different elevations and drone speeds for the downlink and the uplink, respectively. In Fig. (\ref{fig::dlThroughput}), varying the elevation does not significantly change the achievable downlink throughput, whereas the lower speed shows a clear advantage. The elevation of 80 m yields slightly better downlink throughput at most points; one possible reason is the higher interference at the higher elevation and the absence of line-of-sight signals at the lower elevation. In Fig. (\ref{fig::ulThroughput}), the uplink shows an obvious advantage at the higher elevation and the lower speed.
\begin{figure}[t!]
\centering
\subfloat[Different elev. (speed=60 kmph)]{\includegraphics[width=.5\linewidth]{figs/throughputDLelv}}
\subfloat[Different speeds (elev=120 m)]{\includegraphics[width=.5\linewidth]{figs/throughputDLspeed}}
\caption{A comparison of downlink throughput for different elevations and different UAV speeds.}
\label{fig::dlThroughput}
\end{figure}
\begin{figure}[t!]
\centering
\subfloat[Different elev. (speed=60 kmph)]{\includegraphics[width=.5\linewidth]{figs/throughputULelv}}
\subfloat[Different speeds (elev=120 m)]{\includegraphics[width=.5\linewidth]{figs/throughputULspeed}}
\caption{A comparison of uplink throughput for different elevations and different UAV speeds.}
\label{fig::ulThroughput}
\end{figure}
Now, we compare the RSRP and RSRQ of the best available signals, including the serving cell's signal and the signals of three of its neighboring cells. Fig. (\ref{fig::RSRPcdfElv}) shows the empirical cumulative distribution function (CDF) of the RSRP of the signals with the highest RSRP for three different elevations and a fixed speed of 60 kmph. In Fig. (\ref{fig::RSRPcdfElv}a), signal 'a' always has an RSRP close to that of signals 'b' and 'c' but was never chosen as the serving cell. In Fig. (\ref{fig::RSRPcdfElv}b), signals 'b', 'c', and 'd' serve as the main signal, while signal 'a' always belongs to a neighboring cell. In Fig. (\ref{fig::RSRPcdfElv}c), signal 'd' was the serving cell signal almost all of the time.
\begin{figure*}[t]
\centering
\subfloat[Elevation= 40 m ]{\includegraphics[width=.33\linewidth]{figs/RSRPcdf40elv60}}
\subfloat[Elevation= 80m]{\includegraphics[width=.33\linewidth]{figs/RSRPcdf80elv60}}
\subfloat[Elevation= 120 m]{\includegraphics[width=.33\linewidth]{figs/RSRPcdf120elv60}}
\caption{A comparison of RSRP CDF for signals with the highest RSRP in different elevations and a fixed speed of 60 kmph. }
\label{fig::RSRPcdfElv}
\end{figure*}
Fig. (\ref{fig::RSRPcdfSpeed}) shows the empirical CDF of the RSRP of the signals with the highest RSRP for two different speeds and an elevation of 120 m. In both Figs. (\ref{fig::RSRPcdfSpeed}a) and (\ref{fig::RSRPcdfSpeed}b), signal 'd' is chosen as the serving cell signal. While in Fig. (\ref{fig::RSRPcdfSpeed}a) it is always the signal with the highest RSRP, in Fig. (\ref{fig::RSRPcdfSpeed}b) signals 'b' and 'c' sometimes perform better but were never chosen as the serving cell. It is worth mentioning that we did not experience any call drop during the tests. As is clear from Figs. (\ref{fig::RSRPcdfSpeed}) and (\ref{fig::RSRQcdfElv}), most of the time there are at least a couple of signals with enough power to keep the call alive. However, we see a different number of handovers in different test settings. Table (\ref{tbl::HO}) compares the number of handovers for the different test settings: at the elevation of 120 m we experience fewer handovers than at the other elevations, while the elevation of 80 m leads to the highest number of handovers. Generally, the higher speed leads to slightly fewer handovers. It seems that at the elevation of 40 m a couple of handovers are caused by environmental obstacles, and at the elevation of 80 m the interference of other signals has its strongest effect. At the elevation of 120 m, the interference from other cells' signals has its weakest effect on the serving signal, which leads to less frequent handovers.
\begin{figure*}[t]
\centering
\subfloat[Speed= 30 kmph ]{\includegraphics[width=.33\linewidth]{figs/RSRPcdf120elv30}}
\subfloat[Speed= 60 kmph]{\includegraphics[width=.33\linewidth]{figs/RSRPcdf120elv60}}
\caption{A comparison of RSRP CDF for signals with the highest RSRP in different speeds and a fixed elevation of 120m.}
\label{fig::RSRPcdfSpeed}
\end{figure*}
\begin{table}[t!]
\caption{Number of handovers}
\begin{center}
\begin{tabular}{|c|c|c||c|c|}
\hline
\multicolumn{5}{|c|}{Number of Handovers}\\\hline
\multicolumn{3}{|c||}{Elevation (m)}&\multicolumn{2}{|c|}{Speed (kmph)}\\\hline
40&80&120&30&60\\\hline
2&4&1&1&0\\
\hline
\end{tabular}
\end{center}
\label{tbl::HO}
\end{table}
Finally, Figs. (\ref{fig::RSRQcdfElv}) and (\ref{fig::RSRQcdfSpeed}) show the empirical CDF of the RSRQ of the serving cell's signal and its neighboring cells' signals for different elevations and different speeds, respectively. We use the same signal labels as in Figs. (\ref{fig::RSRPcdfElv}) and (\ref{fig::RSRPcdfSpeed}); thus, the serving signals are the same as in the previous figures.
\begin{figure*}[t]
\centering
\subfloat[Elevation= 40 m ]{\includegraphics[width=.33\linewidth]{figs/RSRQcdf40elv60}}
\subfloat[Elevation= 80 m]{\includegraphics[width=.33\linewidth]{figs/RSRQcdf80elv60}}
\subfloat[Elevation= 120 m]{\includegraphics[width=.33\linewidth]{figs/RSRQcdf120elv60}}
\caption{A comparison of RSRQ CDF for signals with the highest RSRQ in different elevations and a fixed speed of 60 kmph.}
\label{fig::RSRQcdfElv}
\end{figure*}
\begin{figure*}[t]
\centering
\subfloat[Speed= 30 kmph ]{\includegraphics[width=.33\linewidth]{figs/RSRQcdf120elv30}}
\subfloat[Speed= 60 kmph]{\includegraphics[width=.33\linewidth]{figs/RSRQcdf120elv60}}
\caption{A comparison of RSRQ CDF for signals with the highest RSRQ in different speeds and a fixed elevation of 120m.}
\label{fig::RSRQcdfSpeed}
\end{figure*}
\section{Data Collection and Processing}
\label{sec::dataGathering}
In this section, we describe the field flight tests, as well as the data processing procedure to extract communication parameters from the collected log files. We use a commercial drone, a DJI Matrice 200, in a rural area near Flagstaff, Arizona, US. Flagstaff lies at approximately 2100 m elevation above sea level. We performed our tests in the Arboretum Garden, one of the Southwest Experimental Garden Array (SEGA) sites. The test location is covered by Ponderosa pine trees and surrounded by several base transceiver stations (BTSs). We performed our tests using the commercial Verizon LTE network on its 1700 MHz band. Fig. (\ref{fig::area}) shows the area map: Fig. (\ref{fig::area}a) shows the area with all of the base stations, while Fig. (\ref{fig::area}b) shows only the base stations whose signals we received in our tests; the oval marks the exact test location. Fig. (\ref{fig::area}b) also shows the two-dimensional coverage area of the network. This 2D coverage map has been created using multiple drive tests. As shown in this map, the terrestrial user can, in the best case, receive a strong signal from only one base station (the eNB with identification number 22158), which confirms the weak coverage in this rural area. We mounted a Samsung S20 phone, with the TEMS Pocket app \cite{tems} installed, on the drone. We flew the drone at three different elevations of 40 m, 80 m, and 120 m, and at two different speeds of 30 kmph and 60 kmph. While the elevation is limited by the Federal Aviation Administration (FAA) to not exceed 400 feet, i.e., 120 m, the maximum speed is limited by our drone. In each flight instance, the drone takes off from the starting point, flies for 500 m in a straight line, and returns the same way to the take-off point.
\begin{figure*}[h!]
\centering
\subfloat[Overall view]{\fbox{\includegraphics[width=0.31\linewidth]{figs/overalBTSs}}}
\subfloat[The captured signals]{\fbox{\includegraphics[width=0.31\linewidth]{figs/DetectedBTSs}}}
\subfloat[The test map]{\fbox{\includegraphics[width=0.33\linewidth]{figs/Arboretum}}}
\caption{A map of the test area.}
\label{fig::area}
\end{figure*}
For the measurement process, as mentioned earlier, we use the TEMS Pocket application, version 22.1.2 \cite{tems}, installed on the attached smartphone. To investigate the communication quality and the uplink and downlink data transmission performance, we designed a script that calls another phone, uploads a file to a server, and downloads a file from another server, all at the same time. We record RSRP, RSRQ, RSSI, SINR, and the uplink and downlink throughput as the performance evaluation metrics. We record these metrics for the signals of the neighboring cells as well, and we also keep the data for all handover processes. We use TEMS Discovery to process the log files and extract the desired information.
The collected data, i.e., the output of TEMS Discovery, contains some unnecessary records, such as the take-off and landing data when the drone increases or decreases its elevation. Before using the extracted information, we carefully clean the data of this unnecessary information. Furthermore, the granularity of TEMS Discovery in managing the data is 2 seconds, while we present the data based on the location of the drone, starting from the starting point and going in a straight line for 500 m, in steps of 50 m. To compute the metrics at the exact locations of interest, we interpolate the required data points using Lagrange interpolation.
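As an illustration of this resampling step, the sketch below evaluates a Lagrange interpolating polynomial at a 50 m grid point; the distance stamps and RSRP samples are hypothetical and not taken from our logs.
\begin{verbatim}
import numpy as np

def lagrange(x_pts, y_pts, x):
    # evaluate the Lagrange interpolating polynomial at x
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_pts, y_pts)):
        term = yi
        for j, xj in enumerate(x_pts):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

dist = np.array([12.0, 37.0, 58.0, 81.0])       # distance stamps [m]
rsrp = np.array([-84.0, -86.5, -83.2, -88.1])   # logged RSRP [dBm]
print(lagrange(dist, rsrp, 50.0))               # estimate at the 50 m point
\end{verbatim}
In practice, only a few samples neighboring each grid point should be used, since high-degree Lagrange interpolation tends to oscillate.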
\nocite{*}
\bibliographystyle{IEEEtran}
\section{Conclusion}
\label{sec::conclusion}
Reusing the already deployed cellular networks for low-altitude aerial communication is promising due to their cost efficiency, wide coverage, high data transmission rates, and secure data transmission. However, the feasibility of such communication, as well as its performance, needs comprehensive investigation. In this paper, we gathered measurement data for an aerial LTE user in a rural area. We showed that, for low-altitude aerial users, the higher altitude leads to higher signal power, while the moderate altitude leads to slightly higher throughput. We did not experience any call drop during our tests, which is a good sign for the feasibility of reusing the pre-deployed cellular network for aerial users. The number of handovers is lowest at the highest altitude, and increasing the speed slightly decreases this number. While we tried to study the reuse of the existing LTE network for aerial users exhaustively, our study covered only a rural area. As future work, we aim to perform a similar study for suburban and urban areas and to compare the results for aerial users with those of terrestrial users via drive tests.
\section{Acknowledgment}
The authors acknowledge the financial and technical support provided by Infovista in using TEMS Pocket and TEMS Discovery in this project. The authors also acknowledge the cooperation of the SEGA garden management.
\section{Introduction}
Drones have been increasingly used in both military and civilian applications. Ease of deployment, highly dynamic 3D movement, low price, and wide availability help drones find their way into various applications. Border surveillance and target tracking and strike are some military applications, while traffic management, package delivery, search and rescue, disaster relief, post-disaster imagery, and area monitoring are some civilian applications \cite{fireMonitoring1,fireMonitoring2}. Reliable communication is of paramount importance for command and control, to guarantee the safety of drones, people, and infrastructure. It is also crucial for guaranteeing reliable data transfer in applications such as monitoring and surveillance.
Exploiting the existing cellular networks for UAV communication is a cost-effective solution for providing reliable, wide-coverage, and secure communication for drones. However, cellular networks are designed to cover near-ground areas and might not be able to offer optimal service to aerial users, due to several concerns. The first concern is that current cellular networks are managed to minimize the interference among different cells' signals: obstacles such as trees, houses, and buildings block the signals to a certain extent, and the signal powers are managed based on the current on-ground physical conditions. At higher elevations there are no such obstacles, and most of the antennas are in the line of sight of the user. While a line-of-sight link might provide a stronger signal, it may cause considerable interference and make it infeasible to use such networks for drone communication.
The second concern is about the coverage area at higher elevations. The tilt angle of the cellular network antennas is such that the antennas face the ground to best cover the terrestrial users; thus, the coverage area is typically investigated only on two-dimensional maps. In order to utilize cellular towers for drone communication, we need to investigate the coverage above the ground at different elevations to obtain a three-dimensional coverage map. The third concern is the effect of aerial users on the communication quality of the primary terrestrial users. To the best of our knowledge, there is not enough information on the interference caused by aerial cellular users on terrestrial users across different geographical distributions and degrees of urbanization.
In this paper, we present the results of field measurements on using LTE for drone communication in a rural area. We gather data for different low-altitude flight elevations of $40$, $80$, and $120$ m, and for different speeds of $30$ and $60$ kmph. In our measurements, we consider the call quality and the maximum achievable uplink and downlink throughput altogether.
The experimental measurements are performed using a commercial drone in a rural area, with a smartphone attached to the drone. The TEMS Pocket \cite{tems} application is installed on the smartphone, supported by a dedicated script to gather the required data. The script makes a voice call, downloads a large file, and uploads a stream of data to a server, using the commercial Verizon LTE network in its 1700 MHz band. All tests are done in a rural forest area close to the city of Flagstaff, Arizona, US. The collected log files are processed by TEMS Discovery \cite{tems}. We then extract two types of information: the metrics of the serving cell signal, and comparisons of the best signals with one another, including the serving cell signal and the neighboring cells' signals.
In our analysis, we measured the reference signal received power (RSRP), reference signal received quality (RSRQ), received signal strength indicator (RSSI), signal to interference and noise ratio (SINR), uplink throughput, downlink throughput, and the number of handovers. We show that, in low-altitude flights over rural areas, the lowest elevation results in the worst performance, since the signal is attenuated by obstacles and multi-path fading. The highest elevation is, in most cases, the best choice for voice calls and small-size data communication; however, the moderate elevation, 80 m in our measurements, reaches the highest throughput in uploading and downloading. We further show that, although the speed has a negligible impact on the signal quality, in most cases the lower-speed flights perform slightly better. Furthermore, the moderate elevation leads to the largest number of handovers, whereas at the highest elevation we see the lowest number of handovers; the higher-speed flights also show slightly fewer handovers in all tested scenarios. Generally, we find that there are always a couple of signals strong enough to keep the call alive, and the interference caused by line-of-sight signals does not have a significant effect on the serving cell signal quality in the studied settings.
The rest of this paper is organized as follows. We review the related work in Section (\ref{sec::relatedWork}). Then, we present the processes of data collection, cleanup, and information extraction in Section (\ref{sec::dataGathering}). We analyze the data and present the results in Section (\ref{sec::dataAnalysis}). Finally, we conclude the paper and mention future directions in Section (\ref{sec::conclusion}).
\section{Related Work}
\label{sec::relatedWork}
Using the existing cellular networks for drone communication can facilitate the wide deployment of drone technology in a secure way, without the need for substantial investments to establish new communication networks; however, the aerial coverage of cellular networks and the interference caused by aerial users on terrestrial cellular users need to be thoroughly investigated. The third generation partnership project (3GPP) made a valuable effort in its Release 15 \cite{3gpp15} to explore the support of enhanced long term evolution (LTE) for aerial vehicles, and in its Release 17 \cite{3gpp17} to support 5G enhancements for UAVs. Van der Bergh et al. \cite{Vanderbergh} studied the impact of interference and path loss on drones connected via the LTE network. They found that the signal strength decreases rapidly with increasing altitude until line-of-sight propagation is established; concurrently, the signal quality decreases because of the decrease in SINR.
Lin et al. \cite{ericsson} shared some of their measurement data, gathered in Finland, for low-altitude drones connected to a commercial LTE network. They found that the already deployed LTE networks can support low-altitude aerial communication, but interference and mobility may cause challenges. Amorim et al. \cite{AmorimChannel} made several measurements to model the radio channel for aerial use of the LTE network in Denmark. Their results showed better radio clearance as the aerial vehicle increases its elevation in low-altitude flights. However, they did not measure the SINR and left it for their future work.
In \cite{amorimUrban}, the authors performed measurements for aerial users of the LTE network at elevations not exceeding 40 m. They targeted an urban area with a maximum building height of 15 m and compared their results to those of 3GPP \cite{3gpp15}. They found that the measured metrics for the three frequencies of 800, 1800, and 2600 MHz lead to similar results.
Khawaja et al. \cite{Khawaja} found in their measurements that the signal strength mostly follows a two-ray propagation model at higher altitudes. Al-Hourani et al. \cite{Hourani} provided a cellular-to-aerial channel model in terms of path loss and shadowing, based on real experiments performed in a suburban environment.
Hayat et al. \cite{hayat} experimentally evaluated the LTE network to measure the signal to interference ratio and the downlink throughput at different elevations in a suburban area. They showed that the throughput at 150 m altitude outperforms that of all lower elevations, although the throughput at 50 m is much higher than that at 100 m. Kovacs et al. \cite{bellLabs} analyzed their measurements of aerial connectivity with an LTE network in a rural area. They characterized the radio channel behavior in terms of downlink and uplink metrics and estimated the gains of interference mitigation techniques.
Marques et al. \cite{marques} performed an experimental study on using LTE as the communication network for aerial vehicles in rural areas. They measured the uplink and downlink throughput for different low-altitude elevations and found that the 25 m elevation outperforms all other elevations in terms of uplink and downlink throughput. Muzaffar et al. \cite{raheeb} performed an experimental test connecting an aerial user to a 5G network and compared the results with those of a 4G network. They found that the 5G network generally outperforms the 4G network. Overall, there is a significant need for exhaustive studies on the usability of already deployed cellular networks for aerial communication. Most of the mentioned works performed valuable measurements and analyses. However, none of them presented a sound and complete study covering all area types, different elevations, and all the metrics studied in this paper altogether. Hence, we aim to study this problem for more metrics and different elevations as well as different UAV speeds.
\section{Introduction}
Since the perturbative QCD (PQCD) framework for three-body $B$ meson decays was proposed in~\cite{Chen:2002th},
there have been extensive applications to various
channels~\cite{epjc77199,prd99093007,epjc79792,cpc43073103,epjc80394,epjc80517,epjc8191,prd103013005,
prd95056008,epjc7937,prd103016002,epjc80815,prd101111901,jpg46095001}, and rich phenomenology has been
explored. This formalism is based on the $k_T$ factorization theorem for leading-power regions of
a Dalitz plot, where two final-state hadrons are roughly collimated to each other.
The dominant nonperturbative dynamics responsible for the production of the hadron pair,
including final-state interactions between the two hadrons, is absorbed into two-hadron
distribution amplitudes (DAs)~\cite{G,G1,DM,Diehl:1998dk,Diehl:1998dk1,Diehl:1998dk2,MP}.
It is similar to the absorption of collinear divergences associated with a hadron,
which participates in a high-energy QCD exclusive process, into its hadron DAs.
The remaining contributions, being calculable at the parton level in perturbation
theory, go into hard kernels. The analysis of three-body $B$ meson decays is then simplified
to that of two-body decays, where a Feynman diagram for hard kernels at leading order (LO)
of the strong coupling $\alpha_s$ contains a single virtual gluon exchange. The same idea has been
extended to four-body charmless hadronic $B$ meson decays recently~\cite{Rui:2021kbn}: they are
assumed to proceed dominantly via two intermediate resonances, which then strongly decay into two
light meson pairs. Various asymmetries in final-state angular distributions from the
$B_{(s)} \to (K\pi) (K\pi)$ decays were predicted based on the universality of the two-meson DAs
for the $K\pi$ pair.
A two-hadron DA, being the time-like version of a generalized parton distribution
function, depends on the parton momentum fraction $x$, the meson momentum fraction $\zeta$,
which describes the relative motion between the two hadrons in the pair, and the hadron-pair
invariant mass squared $\omega^2$. It can be expanded in terms of a series of Gegenbauer polynomials
$C_n^{3/2}(2x-1)$ in $x$ and a series of Legendre polynomials $P_l(2\zeta-1)$
in $\zeta$ simultaneously with the $\omega^2$-dependent coefficients $B_{nl}(\omega^2)$~\cite{MP},
\begin{eqnarray}\label{2mda}
\Phi(x,\zeta,\omega^2)=\frac{6}{2\sqrt{2N_c}}x(1-x)\sum_{n=0}^\infty\sum_{l=0}^{n+1}
B_{nl}(\omega^2)C_n^{3/2}(2x-1)P_l(2\zeta-1),
\end{eqnarray}
where $N_c=3$ is the number of colors, and
$l=0,1,2,\ldots$ denote the $S$-wave, $P$-wave, $D$-wave, $\ldots$ components, respectively.
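For readers who wish to evaluate the truncated expansion numerically, a minimal
Python sketch (an illustration only, with hypothetical constant coefficients
$B_{nl}$ in place of the $\omega^2$-dependent ones) is
\begin{verbatim}
# Minimal sketch: evaluate the truncated two-hadron DA expansion
# for hypothetical constant coefficients B_nl (the physical ones
# depend on omega^2).
import numpy as np
from scipy.special import eval_gegenbauer, eval_legendre

def two_hadron_da(x, zeta, B, Nc=3):
    """Truncated Phi(x, zeta) for the moments in the dict B[(n, l)]."""
    total = sum(Bnl * eval_gegenbauer(n, 1.5, 2*x - 1)
                    * eval_legendre(l, 2*zeta - 1)
                for (n, l), Bnl in B.items())
    return 6.0/(2.0*np.sqrt(2.0*Nc)) * x*(1.0 - x) * total

# Example: keep only the lowest S-wave (l=0) and P-wave (l=1) terms.
B_sample = {(0, 0): 1.0, (0, 1): 0.5}   # hypothetical values
print(two_hadron_da(0.3, 0.7, B_sample))
\end{verbatim}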
The time-like form factor $B_{0l}(\omega^2)$, which normalizes each partial-wave component,
contains both resonant and nonresonant contributions. Some form factors, such as the time-like
pion form factor that receives contributions from the series of $\rho$ resonances, have been
constrained stringently by experimental data~\cite{prd86-032013}. The other coefficients
$B_{nl}(\omega^2)$, referred to as the Gegenbauer moments, are still quite uncertain
due to a lack of systematic nonperturbative studies. Note that these Gegenbauer moments differ
from those in the DA for a specific resonance which strongly decays into
the hadron pair, because, as stated above, a two-hadron
DA collects contributions from a series of resonances as well as nonresonant contributions.
Moreover, they are $\omega^2$-dependent, a feature dramatically distinct from the
Gegenbauer moments for a hadron DA. It has been observed~\cite{plb763-29} that
the Gegenbauer moments of a $P$-wave di-pion DA differ from those of the $\rho(770)$ meson DA.
Therefore, it is essential to determine the Gegenbauer moments for two-hadron DAs in order to
improve the precision of theoretical predictions for multi-body $B$ meson decays
in factorization frameworks.
We will perform a global fit of the Gegenbauer moments in two-meson DAs to measured
branching ratios and direct $CP$ asymmetries in three-body charmless hadronic $B$ meson decays
$B\to VP_3\to P_1P_2P_3$ in the PQCD approach, where $V$ stands for an intermediate vector resonance,
and $P_i$, $i=1,2,3$, stand for final-state pseudoscalar mesons. As a first attempt at a global
determination of two-meson DAs, we focus on the $P$-wave components, and employ the LO PQCD
factorization formulas for decay amplitudes.
We establish a Gegenbauer-moment-independent database, by means of which each decay amplitude
is expressed as a combination of the relevant Gegenbauer moments in the two-meson DAs. The Gegenbauer
moments in the DAs for the mesons $P_3=\pi,K$ are input from the global analysis of two-body $B$
meson decays in Ref.~\cite{2012-15074}. The leading-twist (twist-2) and next-to-leading-twist
(twist-3) DAs for the pairs $P_1P_2=\pi\pi, K\pi$ and $KK$ with the intermediate vector mesons
$V=\rho, K^*$ and $\phi$, respectively, are then fixed in the global fit. Because
the current data for three-body $B$ meson decays are not yet precise enough to determine
the $\omega^2$ dependence of the Gegenbauer moments, we first treat them as constant parameters
defined at the initial scale 1 GeV. One or two Gegenbauer moments for each of the above
two-meson DAs are obtained with satisfactory fit quality, depending on the abundance of available
data. It is noticed that the results and the precision of the extracted
two-meson DAs depend on the number of Gegenbauer moments considered in the fit:
when more Gegenbauer moments are introduced into the $K\pi$ DAs, the quality of the fit is improved
at the cost of amplified uncertainties for fit outcomes.
The determined Gegenbauer
moments are then employed to make predictions for those observables, whose data are excluded in the
fit due to larger experimental errors. A general consistency between our predictions and
data for various modes is achieved, except those which suffer significant subleading corrections
according to previous PQCD analyses,
such as the $B^0 \to \pi^0(\rho^0\to)\pi \pi$ decay~\cite{prd74-094020,Epjc72-1923}.
The consistency hints at the validity of the PQCD formalism for
three-body hadronic $B$ meson decays and the universality of the nonperturbative two-meson DAs.
The $\pi\pi, K\pi$ and $KK$ twist-2 and twist-3 DAs presented in this work are
ready for applications to PQCD studies of other multi-body $B$ meson decays involving the same
meson pairs. Our formalism can be extended to global fits for other two-meson DAs of various
partial waves straightforwardly. It can also be generalized to include
higher-order and/or higher-power corrections to PQCD factorization formulas~\cite{Li:2012nk},
when they are available, so that more accurate two-meson DAs are attainable in a systematic way.
As a more ambitious attempt, we investigate the dependence of the Gegenbauer moments in
the di-pion DAs on the pion-pair invariant mass squared $\omega^2$.
Since the exact $\omega^2$ dependence is unlikely to be determined
from current data, we simply parametrize the Gegenbauer moments up to the first power in $\omega^2$,
following their series expansion derived in Ref.~\cite{MP}. The global fit indicates
that at least the linear term in one of the twist-3 di-pion DAs can be constrained
effectively. It implies that the determination of the $\omega^2$-dependent Gegenbauer moments
in two-meson DAs is promising, when data become more precise in the future.
The rest of the paper is organized as follows. The kinematic variables for three-body hadronic
$B$ meson decays are defined in Sec.~II, where the dependence on final-state meson masses is
included to describe the phase space accurately. The considered two-meson $P$-wave DAs are also
parametrized, whose normalization form factors are assumed to take the relativistic Breit-Wigner
(RBW) model~\cite{epjc78-1019} or the Gounaris-Sakurai (GS) model~\cite{prl21-244}.
We explain how to perform the global fit, present and discuss the numerical results,
and try to extract the $\omega^2$ dependence of the Gegenbauer moments in Sec.~III,
which is followed by the Conclusion. We collect the involved PQCD factorization formulas for
decay amplitudes in the Appendix.
\section{FRAMEWORK}\label{sec:2}
\subsection{Kinematics}
Consider the charmless $B$ meson decay into three pseudoscalar mesons via
a vector intermediate resonance,
$B(p_B)\rightarrow V(p)P_3(p_3)\rightarrow P_1(p_1)P_2(p_2)P_3(p_3)$,
with the meson momenta $p_B=p+p_3$ and $p=p_1+p_2$.
We work in the $B$ meson rest frame and parametrize the relevant momenta in the light-cone coordinates as
\begin{eqnarray}
p_{B}&=&\frac{m_{B}}{\sqrt 2}(1,1,\textbf{0}_{\rm T}), ~\quad k_{B}=\left(0,x_B \frac{m_{B}}{\sqrt2} ,\textbf{k}_{B \rm T}\right),\nonumber\\
p&=&\frac{m_{B}}{\sqrt2}(f_{+},f_{-},\textbf{0}_{\rm T}), ~\quad k= \left( z f_{+}\frac{m_{B}}{\sqrt2},0,\textbf{k}_{\rm T}\right),\nonumber\\
p_3&=&\frac{m_{B}}{\sqrt 2}(g_{-},g_{+},\textbf{0}_{\rm T}), ~\quad k_3=\left(0,x_3 g_{+} \frac{m_B}{\sqrt{2}},\textbf{k}_{3{\rm T}}\right),\label{mom-B-k}
\end{eqnarray}
where $m_{B}$ is the $B$ meson mass, and $k_{B}, k$ and $k_3$ are the valence quark momenta in the $B$ meson,
the meson pair, and the bachelor meson with the parton momentum fractions (transverse momenta)
$x_B, z$ and $x_3$ (${k}_{B \rm T}, {k}_{\rm T}$ and ${k}_{3{\rm T}}$), respectively. That is, we have
chosen the frame such that the meson pair and the bachelor meson move in the directions $n=(1,0,0_{\rm T})$
and $v=(0,1,0_{\rm T})$, respectively. Since the parton momentum $k$ ($k_3$) is aligned with the meson pair
(bachelor meson), its small minus (plus) component has been neglected. We have also dropped the plus component $k_B^+$,
because it does not appear in the hard kernels for dominant factorizable contributions.
In the above expressions, the functions $f_{\pm}$ and $g_{\pm}$ are written as
\begin{eqnarray}
f_{\pm}&=&\frac{1}{2}\left(1+\eta-r_3\pm\sqrt{(1-\eta)^2-2r_3(1+\eta)+r_3^2}\right),\nonumber\\
g_{\pm}&=&\frac{1}{2}\left(1-\eta+r_3\pm\sqrt{(1-\eta)^2-2r_3(1+\eta)+r_3^2}\right),\label{fg}
\end{eqnarray}
with the ratio $r_3=m_{P_3}^2/m^2_{B_{(s)}}$ and $\eta=\omega^2/m^2_{B_{(s)}}$, $m_{P_3}$ being the bachelor meson mass
and $\omega^2=p^2$ being the invariant mass squared of the meson pair.
For a $P$-wave meson pair, we introduce the longitudinal polarization vector
\begin{eqnarray}\label{eq:pq1}
\epsilon=\frac{1}{\sqrt{2\eta}}(f_{+},-f_{-},\textbf{0}_{T}).
\end{eqnarray}
We derive the meson momenta $p_1$ and $p_2$,
\begin{eqnarray}\label{eq:p1p2}
p_1&=&\left((\zeta+\frac{r_1-r_2}{2\eta})f_{+}\frac{m_B}{\sqrt{2}},
(1-\zeta+\frac{r_1-r_2}{2\eta})f_{-}\frac{m_B}{\sqrt{2}}, \textbf{p}_{\rm T}\right), \nonumber\\
p_2&=&\left((1-\zeta-\frac{r_1-r_2}{2\eta})f_{+}\frac{m_B}{\sqrt{2}},
(\zeta-\frac{r_1-r_2}{2\eta})f_{-}\frac{m_B}{\sqrt{2}}, -\textbf{p}_{\rm T}\right),\nonumber\\
p_{\rm T}^2&=&\zeta(1-\zeta)\omega^2+\frac{(m_{P_1}^2-m_{P_2}^2)^2}{4\omega^2}-\frac{m^2_{P_1}+m^2_{P_2}}{2},
\end{eqnarray}
from the relation $p=p_1+p_2$ and the on-shell conditions $p_i^{2}=m_{P_i}^{2}$, $i=1,2$,
with the mass ratios $r_{1,2}=m_{P_1,P_2}^2/m^2_B$. The variable
$\zeta+(r_1-r_2)/(2\eta)=p_1^+/p^+$ bears the meaning of the meson momentum fraction up to corrections
from the final-state meson masses.
Alternatively, one can define the polar angle $\theta$ of the meson $P_{1}$ in the $P_1P_2$ pair rest frame.
The transformation between the $B$ meson rest frame
and the meson pair rest frame leads to the relation between the meson momentum fraction $\zeta$ and
the polar angle $\theta$,
\begin{eqnarray}\label{eq:cos}
2\zeta-1=\sqrt{1-2\frac{r_1+r_2}{\eta}+\frac{(r_1-r_2)^2}{\eta^2}}\cos\theta,
\end{eqnarray}
with the bounds
\begin{eqnarray}
\zeta_{\text{max,min}}=\frac{1}{2}\left[1\pm\sqrt{1-2\frac{r_1+r_2}{\eta}+\frac{(r_1-r_2)^2}{\eta^2}}\right].
\end{eqnarray}
We emphasize that the parametrization with the exact dependence on the final-state meson masses in
Eq.~(\ref{eq:p1p2}) is crucial for establishing Eq.~(\ref{eq:cos}), such that the Legendre
polynomials in Eq.~(\ref{2mda}) correspond to the partial waves of the meson pair exactly.
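The kinematics above is straightforward to implement; the following minimal
Python sketch (with sample masses for a $B\to K(\rho\to)\pi\pi$-like configuration,
chosen here only for illustration) evaluates $f_\pm$ and $g_\pm$ in Eq.~(\ref{fg})
and the $\zeta$-$\theta$ relation in Eq.~(\ref{eq:cos}).
\begin{verbatim}
# Minimal sketch of the three-body kinematics; masses in GeV are
# illustrative inputs for a B -> K (rho -> pi pi) configuration.
import numpy as np

mB, m3, m1, m2 = 5.280, 0.494, 0.140, 0.140   # B, bachelor K, pi, pi
omega2 = 0.775**2                             # pair mass^2 near rho(770)

eta, r3 = omega2/mB**2, (m3/mB)**2
root = np.sqrt((1 - eta)**2 - 2*r3*(1 + eta) + r3**2)
f_p, f_m = 0.5*(1 + eta - r3 + root), 0.5*(1 + eta - r3 - root)
g_p, g_m = 0.5*(1 - eta + r3 + root), 0.5*(1 - eta + r3 - root)

# zeta from the polar angle theta, and its kinematic bounds
r1, r2 = (m1/mB)**2, (m2/mB)**2
beta = np.sqrt(1 - 2*(r1 + r2)/eta + (r1 - r2)**2/eta**2)
theta = np.pi/3
zeta = 0.5*(1 + beta*np.cos(theta))
zeta_min, zeta_max = 0.5*(1 - beta), 0.5*(1 + beta)
print(f_p, f_m, g_p, g_m, zeta, (zeta_min, zeta_max))
\end{verbatim}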
\begin{figure}[tbp]
\centerline{\epsfxsize=14cm \epsffile{fig1.eps}}
\caption{LO diagrams for the three-body decays $B \to V P_3\to P_1P_2P_3$
with the light quarks $q=u,d,s$, where the symbol $\bullet$ represents the weak vertex.}
\label{fig:fig1}
\end{figure}
\begin{figure}[tbp]
\centerline{\epsfxsize=14cm \epsffile{fig2.eps}}
\caption{More LO diagrams for the three-body decays $B \to V P_3\to P_1P_2P_3$.}
\label{fig:fig2}
\end{figure}
The branching ratio for a three-body $B$ meson decay is given by~\cite{pdg2020}
\begin{eqnarray}\label{eq:br}
\int d\mathcal{B}=\frac{\tau_B m_B}{256\pi^3} \int^{(1-\sqrt{r_3})^2}_{(\sqrt{r_1}+\sqrt{r_2})^2} d\eta \sqrt{(1-\eta)^2-2r_3(1+\eta)+r^2_3}\int^{\zeta_{\text{max}}}_{\zeta_{\text{min}}}d\zeta|\mathcal{A}|^2,
\end{eqnarray}
with the $B$ meson lifetime $\tau_B$. The decay amplitude $\mathcal{A}$, according to the factorization
theorem stated in the Introduction, is expressed as
\begin{eqnarray}
\mathcal{A}= \Phi_B \otimes H\otimes \Phi_{P_1P_2} \otimes \Phi_{P_3},
\end{eqnarray}
where $\Phi_B$ ($\Phi_{P_3}$) is the $B$ (bachelor) meson DA, and the two-meson DA $\Phi_{P_1P_2}$
absorbs the nonperturbative dynamics in the production of the meson pair $P_1P_2$.
The symbol $\otimes$ denotes the convolution of the above factors in parton momenta.
The LO diagrams for the hard kernel $H$
are displayed in Figs.~\ref{fig:fig1} and \ref{fig:fig2}, where Figs.~\ref{fig:fig1}(a)-(d)
(Figs.~\ref{fig:fig2}(a)-(d)) are associated with the $B\to P_1P_2$ ($B\to P_3$) transition,
and Figs.~\ref{fig:fig1}(e)-(h) and Figs.~\ref{fig:fig2}(e)-(h) with the annihilation contributions.
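As a rough numerical illustration of Eq.~(\ref{eq:br}), the sketch below integrates
the phase space with a toy constant $|\mathcal{A}|^2$ (a hypothetical placeholder;
the physical amplitude follows from the PQCD convolution above). The $\eta$
integration stops at $(1-\sqrt{r_3})^2$, where the phase-space factor vanishes.
\begin{verbatim}
# Minimal sketch of the phase-space integration for the branching
# ratio, with a toy constant |A|^2 (hypothetical placeholder).
import numpy as np
from scipy.integrate import dblquad

mB = 5.280
tauB = 1.519e-12 / 6.582119569e-25   # lifetime in GeV^-1
r1 = r2 = (0.140/mB)**2              # two pions
r3 = (0.494/mB)**2                   # bachelor kaon

def zeta_bounds(eta):
    b = np.sqrt(1 - 2*(r1 + r2)/eta + (r1 - r2)**2/eta**2)
    return 0.5*(1 - b), 0.5*(1 + b)

def integrand(zeta, eta, A2=1e-14):  # toy |A|^2, gives BR ~ 1e-5
    return np.sqrt((1 - eta)**2 - 2*r3*(1 + eta) + r3**2) * A2

eta_min = (np.sqrt(r1) + np.sqrt(r2))**2
eta_max = (1 - np.sqrt(r3))**2       # phase-space factor vanishes here
BR, _ = dblquad(integrand, eta_min, eta_max,
                lambda eta: zeta_bounds(eta)[0],
                lambda eta: zeta_bounds(eta)[1])
print(BR * tauB*mB/(256*np.pi**3))
\end{verbatim}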
\subsection{Distribution Amplitudes}\label{sec:22}
The light-cone hadronic matrix element for a $B$ meson is parametrized
as~\cite{prd63-054008,prd65-014007,epjc28-515,ppnp51-85,Prd85-094003}
\begin{eqnarray}
\Phi_B= \frac{i}{\sqrt{2N_c}} ({ p \hspace{-2.0truemm}/ }_B +m_B) \gamma_5 \phi_B (x,b), \label{bmeson}
\end{eqnarray}
with the impact parameter $b$ being conjugate to the parton transverse momentum $k_{B \rm T}$.
The $B$ meson DA $\phi_B (x,b)$ is chosen as the model form widely adopted in
the PQCD approach~\cite{prd63-054008,prd65-014007,epjc28-515,ppnp51-85,Prd85-094003,Li:2012md},
\begin{eqnarray}
\phi_B(x,b)&=& N_B x^2(1-x)^2\mathrm{exp} \left [ -\frac{m_B^2 x^2}{2 \omega_{B}^2} -\frac{1}{2} (\omega_{B} b)^2\right] ,
\label{phib}
\end{eqnarray}
where the constant $N_B$ is related to the $B$ meson decay constant $f_B$
through the normalization condition $\int_0^1dx \; \phi_B(x,b=0)=f_B/(2\sqrt{2N_c})$.
The shape parameter takes the values $\omega_B = 0.40$ GeV for $B^+,B^0$ mesons and $\omega_{B_s}=0.48$
GeV~\cite{prd63-054008,plb504-6,prd63-074009,2012-15074} for a $B^0_s$ meson with 10\% variation in the
numerical study below.
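The constant $N_B$ can be fixed numerically from this condition; a minimal sketch,
assuming the quoted $f_B=0.21$~GeV and $\omega_B=0.40$~GeV, reads
\begin{verbatim}
# Minimal sketch: fix N_B from the normalization condition
# int_0^1 dx phi_B(x, b=0) = f_B / (2 sqrt(2 N_c)).
import numpy as np
from scipy.integrate import quad

mB, fB, omegaB, Nc = 5.280, 0.21, 0.40, 3

def phiB_unnorm(x, b=0.0):
    return x**2*(1 - x)**2*np.exp(-mB**2*x**2/(2*omegaB**2)
                                  - 0.5*(omegaB*b)**2)

integral, _ = quad(phiB_unnorm, 0.0, 1.0)
NB = fB/(2*np.sqrt(2*Nc))/integral
print(NB)
\end{verbatim}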
The light-cone matrix element for a pseudoscalar meson is decomposed, up to twist 3, into~\cite{prd65-014007,epjc28-515}
\begin{eqnarray}
\Phi_{P}\equiv \frac{i}{\sqrt{2N_c}}\gamma_5
\left [{ p \hspace{-2.0truemm}/ }_3 \phi_{P}^{A}(x_3)+m_{03} \phi_{P}^{P}(x_3)
+ m_{03} ({ n \hspace{-2.2truemm}/ }
{ v \hspace{-2.2truemm}/ } - 1)\phi_{P}^{T}(x_3)\right ],
\end{eqnarray}
with $P=\pi, K$ and the chiral scale $m_{03}$. The pion and kaon DAs
have been determined at the scale 1 GeV in a recent global analysis~\cite{2012-15074}
based on LO PQCD factorization formulas, which is at the same level of accuracy as the present work.
The results are quoted as
\begin{eqnarray}
\phi_{\pi}^A(x) &=& \frac{3f_{\pi}}{\sqrt{6}} x(1-x)[ 1 +0.644C_2^{3/2}(2x-1)-0.41C_4^{3/2}(2x-1)], \nonumber\\
\phi_{\pi}^P(x) &=& \frac{f_{\pi}}{2\sqrt{6}}[1 +1.08C_2^{1/2}(2x-1)], \nonumber\\
\phi_{\pi}^T(x) &=& \frac{f_{\pi}}{2\sqrt{6}}(1-2x)[1-0.48(10x^2-10x+1)],\nonumber\\
\phi_{K}^A(x) &=& \frac{3f_{K}}{\sqrt{6}}x(1-x)[1+0.331C_1^{3/2}(2x-1)+0.28C_2^{3/2}(2x-1)-0.398C_4^{3/2}(2x-1)],\nonumber \\
\phi_{K}^P(x) &=& \frac{f_{K}}{2\sqrt{6}} [1+0.24C_2^{1/2}(2x-1)],\nonumber \\
\phi_{K}^T(x) &=&-\frac{f_{K}}{2\sqrt{6}}[C_1^{1/2} (2x-1)+0.35 C_3^{1/2} (2x-1)],
\end{eqnarray}
where the Gegenbauer polynomials are defined as
\begin{eqnarray}
C_1^{3/2}(t)=3t, \quad C_2^{3/2}(t)=\frac{3}{2}(5t^2-1), \quad C_4^{3/2}(t)=\frac{15}{8}(1-14t^2+21t^4),\nonumber\\
C_1^{1/2}(t)=t, \quad C_2^{1/2}(t)=\frac{1}{2}(3t^2-1), \quad C_3^{1/2}(t)=\frac{1}{2}(5t^3-3t).
\end{eqnarray}
Note that the twist-3 DAs $\phi_K^P$ and $\phi_K^T$, which were not obtained in Ref.~\cite{2012-15074},
come from sum-rule calculations~\cite{prd76-074018}.
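For illustration, the fitted pion twist-2 DA can be evaluated directly from the
quoted moments; the sketch below assumes $f_\pi=0.131$~GeV, a value not quoted in
the text above.
\begin{verbatim}
# Minimal sketch: evaluate the quoted pion twist-2 DA phi_pi^A(x)
# at the 1 GeV scale; f_pi = 0.131 GeV is an assumed input.
import numpy as np
from scipy.special import eval_gegenbauer

f_pi = 0.131  # GeV (assumption)

def phi_piA(x):
    series = (1 + 0.644*eval_gegenbauer(2, 1.5, 2*x - 1)
                - 0.41*eval_gegenbauer(4, 1.5, 2*x - 1))
    return 3*f_pi/np.sqrt(6) * x*(1 - x) * series

for x in (0.1, 0.3, 0.5):
    print(x, phi_piA(x))
\end{verbatim}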
As stated before, we focus on the $P$-wave components in Eq.~(\ref{2mda}) proportional to $P_{l=1}(2\zeta-1)=2\zeta-1$.
The corresponding light-cone matrix element for a longitudinal meson pair is decomposed, up to twist 3, into~\cite{plb763-29}
\begin{eqnarray}
\Phi_{P_1P_2}(x,\zeta,\omega^2)=\frac{1}{\sqrt{2N_c}}\left[\omega{ \epsilon \hspace{-1.5truemm}/ }
\phi_{P_1P_2}^0(x,\omega^2)+\omega\phi_{P_1P_2}^{s}(x,\omega^2)
+\frac{{p\hspace{-1.5truemm}/}_1{p\hspace{-1.5truemm}/}_2
-{p\hspace{-1.5truemm}/}_2{p\hspace{-1.5truemm}/}_1}{\omega(2\zeta-1)}\phi_{P_1P_2}^t(x,\omega^2)
\right](2\zeta-1),
\label{eq:phifunc}
\end{eqnarray}
where the two-meson DAs for the $\pi\pi, KK$ and $K\pi$ pairs are parametrized as
\begin{eqnarray}
\phi_{\pi\pi}^0(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\parallel}(\omega^2)}{\sqrt{2N_c}}x(1-x)\left[1
+a^0_{2\rho}C_2^{3/2}(2x-1)\right] ,\nonumber\\
\phi_{\pi\pi}^{s}(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)\left[1
+a^s_{2\rho}(10x^2-10x+1)\right] ,\nonumber\\%\label{eq:pilds}
\phi_{\pi\pi}^t(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)^2\left[1
+a^t_{2\rho}C_2^{3/2}(2x-1)\right] ,\nonumber\\
\phi_{K\pi}^0(x,\omega^2)&=&\frac{3F_{K\pi}^{\parallel}(\omega^2)}{\sqrt{2N_c}} x(1-x)
\left[1+a_{1K^*}^{0}C_1^{3/2}(2x-1)+a_{2K^*}^{0}C_2^{3/2}(2x-1)+a_{4K^*}^{0}C_4^{3/2}(2x-1)\right],\nonumber\\%\label{eq:pikld0}
\phi_{K\pi}^s(x,\omega^2)&=&\frac{3F_{K\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x) ,\nonumber\\%\label{eq:piklds}
\phi_{K\pi}^t(x,\omega^2)&=&\frac{3F_{K\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)^2,\nonumber\\
\phi_{KK}^0(x,\omega^2)&=&\frac{3F_{KK}^{\parallel}(\omega^2)}{\sqrt{2N_c}}x(1-x)\left[1
+a^0_{2\phi}C_2^{3/2}(2x-1)\right] ,\nonumber\\
\phi_{KK}^{s}(x,\omega^2)&=&\frac{3F_{KK}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x) ,\nonumber\\
\phi_{KK}^t(x,\omega^2)&=&\frac{3F_{KK}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)^2.
\label{eq:pikldt}
\end{eqnarray}
The Gegenbauer moments $a^{0,s,t}_{2\rho}$, $a_{1K^*,2K^*,4K^*}^{0}$,
and $a^0_{2\phi}$ will be determined in a global analysis in the
next section. Since the current data are not yet precise enough for fixing the Gegenbauer moments in
the twist-3 DAs $\phi_{K\pi}^{s,t}$ and $\phi_{KK}^{s,t}$, they have been
set to the asymptotic forms.
The elastic rescattering effects in a final-state meson pair can be absorbed into the
time-like form factors $F^{\parallel,\perp}(\omega^2)$, namely, the leading Gegenbauer moments $B_{0l}(\omega^2)$
in a two-meson DA according to the Watson theorem~\cite{pr88-1163}.
The resonant contribution from a $\rho$ meson with a broad width is usually
parameterized by the GS model~\cite{prl21-244} based on the Breit-Wigner (BW)
function~\cite{BW-model} in experimental investigations of
three-body hadronic $B$ meson decays, which interprets observed structures beyond the $\rho(770)$
resonance in terms of heavier isovector vector mesons.
Taking the $\rho$-$\omega$ interference and excited state contributions into account, we have
the form factor~\cite{prd86-032013}
\begin{eqnarray}
F^\parallel_{\pi\pi}(\omega^2)= \left [ {\rm GS}_\rho(\omega^2,m_{\rho},\Gamma_{\rho})
\frac{1+c_{\omega} {\rm BW}_{\omega}(\omega^2,m_{\omega},\Gamma_{\omega})}{1+c_{\omega}}
+\sum_j c_j {\rm GS}_j(\omega^2,m_j,\Gamma_j)\right] \left( 1+\sum_j c_j\right)^{-1},
\label{GS}
\end{eqnarray}
where $m_{\rho,\omega,j}$ ($\Gamma_{\rho,\omega,j}$),
$j=\rho^{\prime}(1450), \rho^{\prime \prime}(1700)$ and $\rho^{\prime \prime \prime}(2254)$,
are the masses (decay widths) of the series of resonances, and $c_{\omega,j}$ are the
weights associated with the corresponding resonances.
The function ${\rm GS}_\rho(\omega^2,m_{\rho},\Gamma_{\rho})$ is given by
\begin{equation}
{\rm GS}_\rho(\omega^2, m_\rho, \Gamma_\rho) =
\frac{m_\rho^2 [ 1 + d(m_\rho) \Gamma_\rho/m_\rho ] }{m_\rho^2 - \omega^2 + f(\omega^2, m_\rho, \Gamma_\rho)
- i m_\rho \Gamma (\omega^2, m_\rho, \Gamma_\rho)},
\end{equation}
with the factors
\begin{eqnarray}
d(m_\rho) &=& \frac{3}{\pi} \frac{m_\pi^2}{k^2(m^2_\rho)} \ln \left( \frac{m_\rho+2 k(m^2_\rho)}{2 m_\pi} \right)
+ \frac{m_\rho}{2\pi k(m_\rho^2)}
- \frac{m_\pi^2 m_\rho}{\pi k^3(m^2_\rho)},\nonumber\\
f(\omega^2, m_\rho, \Gamma_\rho) &=& \frac{\Gamma_\rho m^2_\rho}{k^3(m^2_\rho)} \left\{ k^2(\omega^2) [ h(\omega^2)-h(m^2_\rho) ]
+ (m^2_\rho-\omega^2) k^2(m^2_\rho) h'(m^2_\rho)\right\},\nonumber\\
\Gamma (\omega^2, m_\rho, \Gamma_\rho) &=& \Gamma_\rho \frac{\omega^2}{m^2_\rho}
\left[ \frac{\beta_\pi (\omega^2) }{ \beta_\pi (m^2_\rho) } \right]^3,
\end{eqnarray}
where the functions $k(\omega^2)$, $h(\omega^2)$ and $\beta_\pi (\omega^2)$ are expressed as
\begin{eqnarray}
k(\omega^2) &=& \frac{1}{2} \sqrt{\omega^2} \beta_\pi (\omega^2),\nonumber\\
h(\omega^2) &=& \frac{2}{\pi} \frac{k(\omega^2)}{\sqrt{\omega^2}}
\ln \left( \frac{\sqrt{\omega^2}+2 k(\omega^2)}{2 m_\pi} \right),\nonumber\\
\beta_\pi (\omega^2) &=& \sqrt{1 - \frac{4m_\pi^2}{\omega^2}}.
\end{eqnarray}
The function ${\rm BW}_{\omega}(\omega^2,m_{\omega},\Gamma_{\omega})$ for the $\omega$ resonance
takes the standard BW form~\cite{BW-model}.
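A minimal implementation of the GS line shape, keeping only the $\rho(770)$ term
of Eq.~(\ref{GS}) with PDG-like mass and width as assumed inputs, is sketched
below; the derivative $h'(m_\rho^2)$ is taken numerically.
\begin{verbatim}
# Minimal sketch of the Gounaris-Sakurai line shape, rho(770) only;
# mass and width (GeV) are assumed PDG-like inputs.
import numpy as np

m_pi, m_rho, G_rho = 0.140, 0.775, 0.149

def beta_pi(w2): return np.sqrt(1 - 4*m_pi**2/w2)
def k(w2):       return 0.5*np.sqrt(w2)*beta_pi(w2)
def h(w2):
    return 2/np.pi*k(w2)/np.sqrt(w2)*np.log((np.sqrt(w2) + 2*k(w2))/(2*m_pi))
def hp(w2, eps=1e-6):                 # numerical derivative h'(w2)
    return (h(w2 + eps) - h(w2 - eps))/(2*eps)

def GS_rho(w2):
    m2, km = m_rho**2, k(m_rho**2)
    d = (3/np.pi*m_pi**2/km**2*np.log((m_rho + 2*km)/(2*m_pi))
         + m_rho/(2*np.pi*km) - m_pi**2*m_rho/(np.pi*km**3))
    f = G_rho*m2/km**3*(k(w2)**2*(h(w2) - h(m2)) + (m2 - w2)*km**2*hp(m2))
    Gam = G_rho*(w2/m2)*(beta_pi(w2)/beta_pi(m2))**3
    return m2*(1 + d*G_rho/m_rho)/(m2 - w2 + f - 1j*m_rho*Gam)

print(abs(GS_rho(0.775**2))**2)
\end{verbatim}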
We employ the RBW line shape for contributions from the intermediate resonances
$K^*$ and $\phi$ of narrow widths to the form factors~\cite{epjc78-1019},
\begin{eqnarray}
\label{BRW}
F^\parallel_{K\pi,KK}(\omega^2)&=&\frac{m_{K^*,\phi}^2}{m^2_{K^*,\phi} -\omega^2-im_{K^*,\phi}\Gamma_{K^*,\phi}(\omega^2)},
\end{eqnarray}
with the mass-dependent widths
\begin{eqnarray}
\label{BRWl}
\Gamma_{K^*,\phi}(\omega^2)&=&\Gamma_{K^*,\phi}\left(\frac{m_{K^*,\phi}}{\omega}\right)
\left(\frac{|\vec{p}_1|}{|\vec{p}_0|}\right)^{(2L_R+1)},
\end{eqnarray}
where the masses $m_{K^*,\phi}$ and the widths $\Gamma_{K^*,\phi}$ of the $K^*$ and $\phi$
resonances, respectively, take the values in~\cite{pdg2020}. The magnitude of the spatial momentum of the meson $P_1$,
\begin{eqnarray}
|\vec{p}_1|=\frac{\sqrt{\lambda(\omega^2,m_{P_1}^2,m_{P_2}^2)}}{2\omega},
\end{eqnarray}
with the K\"all\'en function $\lambda(a,b,c)= a^2+b^2+c^2-2(ab+ac+bc)$,
is measured in the rest frame of the resonance, and $|\vec{p}_0|$ is its value at the resonance mass.
The orbital angular momentum $L_R$ in the two-meson system is set to $L_R=1$ for a $P$-wave state.
Due to the limited knowledge on the form factors $F^{\perp}(\omega^2)$, we assume the ratio
$F^{\perp}_i(\omega^2)/F^{\parallel}_i(\omega^2)
\approx (f^T_i/f_i)$~\cite{plb763-29}, $i=\rho, K^*$ and $\phi$,
with $f_i^T$ ($f_i$) being the tensor (vector) decay constants of the intermediate resonances.
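A minimal sketch of the RBW line shape in Eqs.~(\ref{BRW}) and (\ref{BRWl}), with
PDG-like $K^*(892)$ parameters as assumed inputs, reads
\begin{verbatim}
# Minimal sketch of the RBW line shape for K*(892) -> K pi with a
# P-wave (L_R = 1) mass-dependent width; inputs are PDG-like values.
import numpy as np

m_Kst, G_Kst = 0.892, 0.050   # GeV
m1, m2 = 0.494, 0.140         # K, pi

def kallen(a, b, c):
    return a**2 + b**2 + c**2 - 2*(a*b + a*c + b*c)

def p1(w2):
    """|p_1| of a daughter in the pair rest frame."""
    return np.sqrt(kallen(w2, m1**2, m2**2))/(2*np.sqrt(w2))

def F_RBW(w2, LR=1):
    width = G_Kst*(m_Kst/np.sqrt(w2))*(p1(w2)/p1(m_Kst**2))**(2*LR + 1)
    return m_Kst**2/(m_Kst**2 - w2 - 1j*m_Kst*width)

print(abs(F_RBW(0.892**2)))
\end{verbatim}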
\section{Numerical Analysis}
\subsection{Global Fit}
We specify the parameters adopted in the numerical analysis below, including the masses (in units of GeV)~\cite{pdg2020}
\begin{eqnarray}
m_{B}&=&5.280, \quad m_{B_s}=5.367, \quad m_b=4.8, \quad m_{K^\pm}=0.494,\nonumber\\
m_{K^0}&=&0.498, \quad m_{\pi^{\pm}}=0.140, \quad m_{\pi^0}=0.135,
\end{eqnarray}
and the decay constants (in units of GeV) and the $B$ meson lifetimes (in units of ps)~\cite{prd76-074018,prd95056008}
\begin{eqnarray}
f_B&=&0.21, \quad f_{B_s}=0.23, \quad f_{\rho}=0.216 , \quad f^T_{\rho}=0.184,\nonumber\\
f_{\phi(1020)}&=&0.215, \quad f_{\phi(1020)}^T=0.186, \quad f_{K^*}=0.217, \quad f^T_{K^*}=0.185,\nonumber\\
\tau_{B^0}&=&1.519,\quad \tau_{B^{\pm}}=1.638, \quad \tau_{B_{s}}=1.512.
\end{eqnarray}
The Wolfenstein parameters in the Cabibbo-Kobayashi-Maskawa (CKM) matrix take the values in Ref.~\cite{pdg2018}:
$A=0.836\pm0.015, \lambda=0.22453\pm 0.00044$, $\bar{\rho} = 0.122^{+0.018}_{-0.017}$ and $\bar{\eta}= 0.355^{+0.012}_{-0.011}$.
Equation~(\ref{eq:pikldt}) suggests that the total amplitudes $\cal{A}$ for the
$B_{(s)} \to P(\pi \pi,\pi K, KK)$ decays with $P=\pi, K$
can be expanded in terms of the Gegenbauer moments of the two-meson DAs.
As a result, we decompose the squared amplitudes
\begin{eqnarray}
|{\cal A}_{\pi\pi}|^2 &= & M_{0\rho}+a^{0}_{2\rho}M_{1\rho}+(a^{0}_{2\rho})^2M_{2\rho}+a^{s}_{2\rho}M_{3\rho}+(a^{s}_{2\rho})^2M_{4\rho}\nonumber\\
&+& a^{t}_{2\rho}M_{5\rho}+(a^{t}_{2\rho})^2 M_{6\rho}+a^{0}_{2\rho}a^{s}_{2\rho}M_{7\rho}+
a^{0}_{2\rho}a^{t}_{2\rho}M_{8\rho}+a^{s}_{2\rho}a^{t}_{2\rho}M_{9\rho},\nonumber\\
|{\cal A}_{K\pi}|^2 &= & M_{0K^*}+(a^{0}_{1K^*})M_{1K^*}+(a^{0}_{1K^*})^2M_{2K^*}+a^{0}_{2K^*}M_{3K^*}\nonumber\\
&+& (a^{0}_{2K^*})^2M_{4K^*}+a^{0}_{4K^*} M_{5K^*}+(a^{0}_{4K^*})^2M_{6K^*}\nonumber\\
&+& a^{0}_{1K^*}a^{0}_{2K^*}M_{7K^*}+a^{0}_{1K^*}a^{0}_{4K^*} M_{8K^*}
+a^{0}_{2K^*}a^{0}_{4K^*}M_{9K^*},\nonumber\\
|{\cal A}_{KK}|^2 &= & M_{0\phi}+a^0_{2\phi}M_{1\phi}+(a^0_{2\phi})^2 M_{2\phi},\label{a}
\end{eqnarray}
into the linear combinations of the Gegenbauer moments $a^{0,s,t}_{2\rho}$, $a^{0}_{1K^*,2K^*,4K^*}$ and $a^0_{2\phi}$,
and their products. We then compute the coefficients $M$, which involve only the Gegenbauer polynomials,
to establish the database for our global fit.
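Once the coefficients $M$ are tabulated, evaluating $|{\cal A}|^2$ during the fit
reduces to a cheap polynomial in the moments; a minimal sketch with a hypothetical
database entry is
\begin{verbatim}
# Minimal sketch of the database idea: |A_{pi pi}|^2 as a polynomial
# in the moments; the coefficients M below are hypothetical numbers.
import numpy as np

def A2_pipi(a0, a_s, a_t, M):
    return (M[0] + a0*M[1] + a0**2*M[2] + a_s*M[3] + a_s**2*M[4]
            + a_t*M[5] + a_t**2*M[6] + a0*a_s*M[7]
            + a0*a_t*M[8] + a_s*a_t*M[9])

M_demo = np.array([1.0, 0.2, 0.05, -0.1, 0.04,
                   0.3, 0.08, 0.01, -0.02, 0.03])  # placeholder entry
print(A2_pipi(0.08, -0.23, -0.35, M_demo))         # central fitted moments
\end{verbatim}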
\begin{table}[!htbp]
\caption{Fitted Gegenbauer moments for the twist-2 and twist-3 two-meson DAs.}
\begin{center}
\setlength{\tabcolsep}{3mm}{
\begin{tabular}{lcccccccccccccc}
\hline
&$a^0_{2\rho}$ &$a^s_{2\rho}$ &$a^t_{2\rho}$ &$a^0_{2\phi}$ & \\ \hline
\ fit \ &$0.08\pm0.13$ &$-0.23\pm0.24$ &$-0.35\pm0.06$ &$-0.31\pm0.19$ & \\ \hline
&$a_{1K^*}^{0}(\text{Scenario I})$ &$a_{2K^*}^{0}(\text{Scenario I})$ &$a_{1K^*}^{0}(\text{Scenario II})$ &$a_{2K^*}^{0}(\text{Scenario II})$ &$a_{4K^*}^{0}(\text{Scenario II})$ \\ \hline
\ fit \ &$0.31\pm0.16$ & $1.19\pm0.10$ &$0.57\pm0.20$ &$1.13\pm0.32$ &$-0.85\pm0.16$ \\ \hline
\end{tabular}}
\label{tab:gen}
\end{center}
\end{table}
Similar to the proposal in Ref.~\cite{2012-15074}, we determine the Gegenbauer moments of
the two-meson DAs by fitting the formulas in Eq.~(\ref{a}) with the
Gegenbauer-moment-independent database to the
measured branching ratios ${\cal B}$ and direct $CP$ asymmetries ${\cal A}_{CP}$ of the
$B_{(s)} \to P(\rho\to)\pi\pi$, $B_{(s)} \to P(K^*\to)K\pi$ and $B_{(s)} \to P(\phi\to)KK$ decays.
We adopt the nonlinear least-$\chi^2$ (lsq) method~\cite{Peter:2020}, in which
the $\chi^2$ function is defined for $n$ pieces of experimental data
$v_i\pm \delta v_i$ with the errors $\delta v_i$ and the corresponding theoretical
values $v^{\text{th}}_i$ as
\begin{eqnarray} \label{eq:chi}
\chi^2= \sum_{i=1}^{n} \Big(\frac {v_i - v^{\text{th}}_i}{\delta v_i}\Big)^2.
\end{eqnarray}
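Schematically, the lsq fit minimizes Eq.~(\ref{eq:chi}) over the moments, with the
theory values built from the database; the sketch below uses hypothetical data and
coefficients and a single moment for brevity.
\begin{verbatim}
# Minimal sketch of the least-chi^2 fit for a single moment a; the
# data, errors and database coefficients are hypothetical numbers.
import numpy as np
from scipy.optimize import minimize

data   = np.array([3.7, 7.0, 7.3])   # e.g. branching ratios (10^-6)
errors = np.array([0.5, 0.9, 1.0])
M = np.array([[3.0, 1.0, 0.4],       # one row per observable:
              [6.0, 1.5, 0.6],       # M0 + a*M1 + a^2*M2
              [6.5, 1.2, 0.5]])

def theory(a):
    return M[:, 0] + a*M[:, 1] + a**2*M[:, 2]

def chi2(params):
    return np.sum(((data - theory(params[0]))/errors)**2)

res = minimize(chi2, x0=[0.0])
print(res.x, chi2(res.x)/(len(data) - 1))   # moment, chi^2/d.o.f.
\end{verbatim}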
To minimize statistical uncertainties, we should include the maximal amount of
data in the fit. On the other hand, those measurements with significance lower than 3$\sigma$
do not impose stringent constraints, and need not be taken into account in principle.
The data of those modes, which are affected by subleading contributions
manifestly based on the previous PQCD studies~\cite{prd74-094020,Epjc72-1923}, are also excluded, even
though they may have higher precision. The $B^0 \to \pi^0(\rho^0\to)\pi\pi$ decay, dominated by
the color-suppressed tree amplitude that is expected to receive substantial higher-order corrections~\cite{Li:2005kt},
is a typical example.
\subsection{Results}
The Gegenbauer moments $a^0_{2\rho}$, $a^s_{2\rho}$ and $a^t_{2\rho}$ for the twist-2 and twist-3
$\pi\pi$ DAs in Table~\ref{tab:gen} are obtained from the fit to eight pieces of $B\to P(\rho\to)\pi\pi$
data marked by ``$\dagger$'' in Tables~\ref{krho} and \ref{pirho} with $\chi^2 / d.o.f.=2.6$, whose errors
mainly arise from experimental uncertainties. We point out that the measured
$B^+ \to \pi^+(\rho^0\to)\pi\pi$ branching ratio, imposing a strong constraint on the
Gegenbauer moment $a^s_{2\rho}$, is considered in our fit, but the corresponding $B^+ \to \pi^+\rho^0$ data
were excluded in the global analysis of two-body hadronic $B$ meson decays~\cite{2012-15074}. It is seen
that our Gegenbauer moments differ from the corresponding ones of the $\rho(770)$ meson
DAs derived in QCD sum rules~\cite{ball98} as mentioned before: the $\pi\pi$ DAs contain the $\rho$-$\omega$
mixing effect and the contributions from higher $\rho$ resonances with finite widths via Eq.~(\ref{GS}),
so they need not be the same as the $\rho(770)$ meson DAs. Our Gegenbauer moments also differ from
$a^0_{2\rho}=0.25$, $a^s_{2\rho}=0.75$ and $a^t_{2\rho}=-0.60$ chosen in Ref.~\cite{plb763-29}
for two reasons at least. First, only the $B \to K(\rho \to)\pi\pi$ data were employed to constrain the
$\pi\pi$ DAs in~\cite{plb763-29}, while the additional $B \to \pi(\rho \to)\pi\pi$ data are included
in our global analysis. Second, some $B \to K(\rho \to)\pi\pi$ data have been updated in the
present work.
A single Gegenbauer moment $a^0_{2\phi}$ is introduced into the $KK$ twist-2 DA, and the twist-3 ones
have been set to their asymptotic forms,
since only two pieces of data from the $B\to K(\phi \to)KK$ decays in Table~\ref{kk} meet the required precision.
The value of $a^0_{2\phi}$, determined with $\chi^2 / d.o.f.=0.35$,
is distinct from, but still consistent with that of the $\phi$ meson DA in QCD sum rules~\cite{ball98}
within theoretical errors. Note that our $a^0_{2\phi}$ deviates from the value
$-0.50\pm 0.10$ adopted in Ref.~\cite{epjc79792},
where $B_s$ meson decays into charmonia plus a kaon pair were investigated.
The deviation is understandable, because the choice of $a^0_{2\phi}$ depends on models
for the uncertain charmonium DAs, when the relevant data were accommodated.
The $K\pi$ DAs are determined in a fit to six pieces of $B_{(s)}\to P(K^*\to)K\pi$ data in Tables~\ref{kkstar}
and \ref{pikstar}. We first work on Scenario I, in which the two Gegenbauer moments $a_{1K^*}^{0}$ and
$a_{2K^*}^{0}$ of the twist-2 two-meson DA are fitted with $\chi^2 / d.o.f.=1.5$, and observe that $a_{2K^*}^{0}$
is slightly larger than unity as shown in Table~\ref{tab:gen}. A larger moment is not favored in view of the
convergence of the Gegenbauer expansion. Therefore, one more Gegenbauer moment $a_{4K^*}^{0}$ is
added in Scenario II, and a fit with $\chi^2 / d.o.f.=1.4$ is attained. The resultant $a_{2K^*}^{0}$
decreases a bit but with amplified uncertainty, and $a_{4K^*}^{0}$ is smaller than unity.
The measured $B^0_s \to K^\pm(K^{*\mp} \to)K\pi$, $B^0_s \to K^0(\bar{K}^{*0}\to)K\pi$
and $B^0_s \to \bar{K}^0(K^{*0}\to)K\pi$ branching
ratios cannot give an effective constraint due to their larger experimental errors, such that
the uncertainties of the Gegenbauer moments increase dramatically in Scenario II.
For a similar reason, the obtained Gegenbauer moments differ from
those of the $K^*$ meson DA in QCD sum rules~\cite{ball98}, and from
$a_{1K^*}^{0}=0.2$ and $a_{2K^*}^{0}=0.5$ chosen
in the PQCD study on the $B_{(s)}\to\psi K\pi$ decays~\cite{Li:2019pzx}.
\begin{table}[thp]
\caption{$CP$ averaged branching ratios ${\cal B}$ and direct $CP$ asymmetries ${\cal A}_{CP}$
of the $B_{(s)}\to K (\rho \to) \pi \pi$ decays in the PQCD approach. The experimental data for
comparison are quoted from Ref.~\cite{pdg2020}. Those data marked by $\dagger$ are included in the fit.
The theoretical errors are attributed to the variations of the shape parameters $\omega_{B_{(s)}}$ in the
$B_{(s)}$ meson DA and the decay constant $f_{B_{(s)}}$, of the Gegenbauer moments in the two-pion DAs, and
of the hard scale $t$ and the QCD scale $\Lambda_{\rm QCD}$. }
\label{krho}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{c c c c}
\hline
{Modes} &\qquad & Results & Data \\
\hline
$B^+ \to K^+(\rho^0\to)\pi \pi$ &~~~${\cal B} (10^{-6})$~~~ &$2.91^{+0.68+0.77+1.43}_{-0.60-0.68-0.82}$ &$3.7\pm{0.5}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$53.5^{+0.4+4.5+11.9}_{-1.4-4.3-15.0}$ &$37\pm{10}$~\tnote{$\dagger$}~~\\
$B^0 \to K^+(\rho^-\to)\pi \pi$ &${\cal B} (10^{-6})$ &$8.48^{+2.20+1.63+3.87}_{-1.95-1.48-2.51}$ &$7.0\pm{0.9}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$33.0^{+1.1+5.2+8.9}_{-1.5-4.9-12.1}$ &$20\pm{11}$\\
$B_s^0 \to K^-(\rho^+\to)\pi \pi$ &${\cal B} (10^{-6})$ &$16.41^{+7.59+0.16+1.10}_{-5.30-0.15-1.31}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$19.4^{+3.6+3.3+3.1}_{-3.2-3.3-2.9}$ &$-$\\
$B^+ \to K^0(\rho^+\to)\pi \pi$ &${\cal B} (10^{-6})$ &$7.86^{+2.07+1.51+3.68}_{-1.82-1.50-2.31}$ &$7.3^{+1.0}_{-1.2}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$13.1^{+1.2+1.8+1.5}_{-0.5-2.5-3.6}$ &$-3\pm{15}$\\
$B^0 \to K^0(\rho^0\to)\pi \pi$ &${\cal B} (10^{-6})$ &$3.76^{+0.95+0.57+0.92}_{-0.81-0.52-0.81}$ &$3.4\pm{1.1}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$1.4^{+0.6+0.5+2.1}_{-0.5-0.6-3.1}$ &$-4\pm20$\\
$B_s^0 \to \bar K^0(\rho^0\to)\pi \pi$ &${\cal B} (10^{-6})$ &$0.17^{+0.04+0.02+0.01}_{-0.04-0.02-0.02}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-51.0^{+1.1+11.7+26.6}_{-0.6-10.6-13.4}$ &$-$\\
\hline
\end{tabular}}
\end{threeparttable}
\end{center}
\end{table}
\begin{table}[]
\caption{Same as Table~\ref{krho} but for the $B_{(s)}\to \pi (\rho \to) \pi \pi$ decays.}
\label{pirho}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{c c c c}
\hline
{Modes} &\qquad & Results & Data \\
\hline
$B^+ \to \pi^+(\rho^0\to)\pi \pi$ &~~~${\cal B} (10^{-6})$~~~ &$5.98^{+1.56+1.46+0.45}_{-1.37-1.31-0.37}$ &$8.3\pm{1.2}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-34.9^{+2.0+5.3+7.3}_{-0.7-4.4-9.6}$ &$0.9\pm{1.9}$\\
$B^0 \to \pi^+(\rho^-\to)\pi \pi$ &${\cal B} (10^{-6})$ &$5.28^{+2.08+1.56+0.42}_{-1.58-1.44-0.52}$ &$23.0\pm{2.3}$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-30.6^{+3.4+4.1+4.5}_{-3.5-4.1-5.4}$ &$-8\pm{8}$\\
$B^0 \to \pi^-(\rho^+\to)\pi \pi$ &${\cal B} (10^{-6})$ &$20.20^{+8.90+0.48+1.30}_{-6.62-0.54-1.04}$ &$~23.0\pm{2.3}$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$9.3^{+1.9+1.7+1.9}_{-1.6-1.7-1.9}$ &$13\pm6$\\
$B_s^0 \to \pi^+(\rho^-\to)\pi \pi$ &${\cal B} (10^{-6})$ &$0.23^{+0.04+0.03+0.03}_{-0.04-0.05-0.04}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-24.3^{+2.0+4.5+8.8}_{-3.8-14.3-6.1}$ &$-$\\
$B_s^0 \to \pi^-(\rho^+ \to)\pi \pi$ &${\cal B} (10^{-6})$ &$0.12^{+0.01+0.01+0.00}_{-0.05-0.06-0.06}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-71.7^{+2.1+12.0+4.8}_{-1.8-5.6-0.7}$&$-$\\
$B^+ \to \pi^0(\rho^+\to)\pi \pi$ &${\cal B} (10^{-6})$ &$8.50^{+4.25+1.05+0.24}_{-3.04-0.98-0.55}$ &$10.9\pm{1.4}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$20.4^{+5.0+4.6+4.7}_{-4.1-4.4-6.4}$ &$2\pm{11}$\\
$B^0 \to \pi^0(\rho^0\to)\pi \pi$ &${\cal B} (10^{-6})$ &$0.08^{+0.01+0.02+0.05}_{-0.02-0.03-0.05}$ &$2.0\pm{0.5}$ \\
&${\cal A}_{CP} (\%)$ &$20.8^{+6.0+17.0+11.7}_{-4.4-16.5-40.1}$ &$27\pm24$\\
$B_s^0 \to \pi^0(\rho^0 \to)\pi \pi$ &${\cal B} (10^{-6})$ &$0.14^{+0.03+0.04+0.04}_{-0.03-0.01-0.04}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-47.9^{+5.5+4.8+4.5}_{-3.0-6.6-7.5}$&$-$\\
\hline
\end{tabular}}
\begin{tablenotes}
\item $\tnote{1}$ Sum of two branching ratios, ${\cal B}(B\to f) + {\cal B}(B\to \bar{f})$.
\end{tablenotes}
\end{threeparttable}
\end{center}
\end{table}
With the fitted Gegenbauer moments in Table~\ref{tab:gen}, we calculate the $CP$ averaged branching
ratios ${\cal B}$ and the direct $CP$ asymmetries ${\cal A}_{CP}$ in the LO PQCD formalism, and present the
results in the central columns of Tables~\ref{krho}-\ref{pikstar}.
The first theoretical uncertainty originates from
the shape parameter $\omega_B=0.40$~GeV or $\omega_{B_s}=0.48$~GeV with 10\% variation,
and the decay constant $f_{B_{(s)}}$. The second one is from the Gegenbauer moments
in the two-meson DAs. The last one is caused by the variations of the hard scale $t$ from
$0.75t$ to $1.25t$, which characterizes the effect of next-to-leading-order QCD corrections,
and of the QCD scale $\Lambda_{\rm QCD}=0.25\pm0.05$~GeV.
The errors attributed to the CKM matrix elements are tiny and can be ignored safely.
Note that the data for the $B^0 \to \pi^+(\rho^-\to)\pi \pi$ and $B^0 \to \pi^-(\rho^+\to)\pi \pi$
branching ratios in Table~\ref{pirho} represent the sum over these two modes.
It is also the case for the measured $B_s^0 \to K^+(K^{*-}\to)K\pi$ and $B^+ \to \pi^0(K^{*+}\to)K \pi$
branching ratios, and for the measured $B^0 \to \pi^0(K^{*0}\to)K \pi$ and $B_s^0 \to \bar{K}^0(K^{*0}\to)K \pi$
branching ratios in Table~\ref{kkstar}.
It is found that most of the considered data in Tables~\ref{krho} and \ref{pirho}
are well reproduced, in particular those with higher precision.
Larger deviation from the data is observed in the $B^+ \to \pi^+(\rho^0\to)\pi \pi$ and
$B^+ \to \pi^0(\rho^+\to)\pi \pi$ branching ratios. It is ascribed to the
involved color-suppressed tree contributions, which receive sizable
next-to-leading-order corrections. The observables removed from the fit
are also predicted in the LO PQCD formalism, and compared with the data in Tables~\ref{krho} and \ref{pirho}.
Our prediction for the $B^0 \to \pi^0(\rho^0\to)\pi \pi$ branching ratio, which suffers significant
subleading corrections as stated before, is still below the data, similar to that derived in the framework for
two-body decays. Most of the ${\cal A}_{CP}$ data for the $B_{(s)} \to P(\rho\to)\pi \pi$ decays with $P=\pi,K$
are not yet precise enough. We mention that ${\cal A}_{CP}$ in the $B^+ \to \pi^+\rho^0$ mode has been
predicted to be large and negative in most QCD approaches~\cite{Cheng:2020hyj,prd95056008}, including the
current analysis on three-body decays as shown in Table~\ref{pirho}. However, its data are as small as
$0.009\pm0.019$~\cite{pdg2020}. Both the theoretical and experimental errors need to be reduced
greatly in order to tell whether the discrepancy is really a puzzle.
\begin{table}[thb]
\caption{Same as Table~\ref{krho} but for the $B_{(s)}\to P (\phi \to) KK$ decays with $P=\pi, K$.}
\label{kk}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{l c c c}
\hline
{Modes} &\qquad & Results & Data \\
\hline
$B^+ \to K^+(\phi \to)KK$ &~~~${\cal B} (10^{-6})$~~~ &$8.46^{+3.57+0.41+2.65}_{-2.70-0.45-1.95}$ &$8.8^{+0.7}_{-0.6}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$1.4^{+0.8+0.1+0.0}_{-0.3-1.7-0.8}$ &$2.4\pm2.8$\\
$B^0 \to K^0(\phi \to)K K$ &~~~${\cal B} (10^{-6})$~~~ &$7.82^{+3.18+0.40+2.40}_{-2.50-0.19-1.71}$ &$7.3\pm0.7$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$0$ &$1\pm14$\\
$B_s^0 \to \bar K^0(\phi \to)K K$ &~~~${\cal B} (10^{-8})$~~~ &$3.52^{+1.30+1.50+2.30}_{-0.64-0.02-1.27}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$0$ &$-$\\
$B^+ \to \pi^+(\phi \to)K K$ &~~~${\cal B} (10^{-8})$~~~ &$1.15^{+0.46+0.02+0.34}_{-0.33-0.20-0.28}$ &$3.2\pm1.5$\\
&${\cal A}_{CP} (\%)$ &$0$ &$10\pm50$\\
$B^0 \to \pi^0(\phi \to)K K$ &~~~${\cal B} (10^{-9})$~~~ &$5.32^{+2.21+0.14+1.61}_{-1.53-0.91-1.27}$ &$~~<15~~$\\
&${\cal A}_{CP} (\%)$ &$0$ &$-$\\
$B_s^0 \to \pi^0(\phi \to)K K$ &~~~${\cal B} (10^{-7})$~~~ &$1.06^{+0.41+0.15+0.07}_{-0.34-0.20-0.14}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$27.3^{+1.1+3.2+3.5}_{-1.0-1.4-5.8}$ &$-$\\
\hline
\end{tabular}}
\end{threeparttable}
\end{center}
\end{table}
Both the $B\to K(\phi \to)KK$ data considered in the fit are well reproduced with a single
Gegenbauer moment $a^0_{2\phi}$ as indicated in Table~\ref{kk}. Our predictions for the branching
ratios and direct $CP$ asymmetries excluded in the fit, mainly associated with $B_s$ meson decays,
can be confronted by more precise data in the future. All the available ${\cal A}_{CP}$ data for the
$B \to P(\phi\to) KK$ decays with $P=\pi,K$ have large errors. There is a minor difference between the central
values of the prediction and the data for the $B^+ \to \pi^+(\phi \to)K K$ branching ratio, but they still
agree with each other within uncertainties.
\begin{table}[]
\caption{Same as Table~\ref{krho} but for the $B_{(s)}\to K (K^* \to) K \pi$ decays.}
\label{kkstar}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{l c c c c}
\hline
{Modes} &\qquad & Results (Scenario I) & Results (Scenario II) & Data \\
\hline
$B^+ \to K^+(\bar{K}^{*0} \to)K\pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.55^{+0.14+0.04+0.20}_{-0.13-0.06-0.14}$ &$0.56^{+0.17+0.10+0.15}_{-0.13-0.06-0.13}$ &$0.59\pm0.08$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$46.3^{+1.0+10.9+2.8}_{-0.3-4.7-3.9}$ &$63.8^{+1.1+2.0+3.4}_{-3.2-8.5-23.5}$ &$12\pm10$\\
$B^0 \to K^+(K^{*-} \to)K\pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.27^{+0.05+0.05+0.04}_{-0.05-0.06-0.03}$ &$0.25^{+0.01+0.09+0.01}_{-0.01-0.03-0.01}$ &$<0.4$~\tnote{1}\\
&${\cal A}_{CP} (\%)$ &$19.8^{+0.5+2.1+13.4}_{-3.6-2.1-7.5}$ &$20.2^{+7.1+10.6+16.9}_{-0.0-1.6-0.0}$ &$-$\\
$B^0 \to K^-(K^{*+} \to)K\pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.09^{+0.01+0.01+0.04}_{-0.02-0.01-0.03}$ &$0.11^{+0.02+0.01+0.03}_{-0.06-0.02-0.02}$ &$<0.4$~\tnote{1}\\
&${\cal A}_{CP} (\%)$ &$-5.2^{+12.3+9.4+30.4}_{-15.5-11.4-0.0}$ &$33.8^{+13.4+16.4+9.4}_{-0.0-14.4-0.0}$ &$-$\\
$B_s^0 \to K^+(K^{*-}\to)K\pi$ &~~~${\cal B} (10^{-6})$~~~ &$15.15^{+2.78+1.90+7.29}_{-2.53-1.72-4.61}$ &$9.89^{+1.92+2.93+5.64}_{-1.66-1.90-4.16}$ &$(19\pm5)$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$42.1^{+4.5+2.4+5.5}_{-5.3-3.6-6.9}$ &$6.1^{+0.4+8.8+7.0}_{-1.3-11.0-10.4}$ &$-$\\
$B_s^0 \to K^-(K^{*+}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$10.22^{+1.97+1.27+4.51}_{-1.73-1.24-2.72}$ &$7.72^{+1.88+1.82+3.24}_{-1.59-1.49-2.69}$ &$(19\pm5)$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-34.8^{+3.0+1.5+7.5}_{-2.3-0.6-6.6}$ &$-24.0^{+1.5+6.1+11.4}_{-0.3-4.1-6.1}$ &$-$\\
$B^+ \to \bar{K}^0(K^{*+}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.31^{+0.06+0.07+0.16}_{-0.05-0.04-0.09}$ &$0.19^{+0.06+0.06+0.11}_{-0.05-0.07-0.05}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-13.6^{+2.5+2.0+5.7}_{-1.0-3.5-7.9}$ &$-22.7^{+13.3+20.7+7.5}_{-0.0-18.4-7.3}$ &$-$\\
$B^0 \to K^0(\bar{K}^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.44^{+0.14+0.04+0.15}_{-0.11+0.03+0.11}$ &$0.38^{+0.13+0.05+0.11}_{-0.11-0.04-0.11}$ &$<0.96$~\tnote{1}\\
&${\cal A}_{CP} (\%)$ &$0$ &$0$ &$-$\\
$B^0 \to \bar{K}^0(K^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.44^{+0.08+0.06+0.22}_{-0.08+0.07+0.15}$ &$0.30^{+0.07+0.08+0.16}_{-0.05-0.02-0.12}$ &$<0.96$~\tnote{1}\\
&${\cal A}_{CP} (\%)$ &$0$ &$0$ &$-$\\
$B_s^0 \to K^0(\bar{K}^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$14.06^{+2.54+1.89+6.88}_{-2.30-1.70-4.48}$ &$8.84^{+1.66+2.77+5.31}_{-1.46-1.98-3.54}$ &$(20\pm6)$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$0$ &$0$ &$-$\\
$B_s^0 \to \bar{K}^0(K^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$10.39^{+2.01+1.18+5.58}_{-1.78-1.17-2.86}$ &$7.92^{+1.95+1.63+3.46}_{-1.64-1.36-2.85}$ &$(20\pm6)$~\tnote{1}~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$0$ &$0$ &$-$\\
\hline
\end{tabular}}
\end{threeparttable}
\end{center}
\end{table}
\begin{table}[thb]
\caption{Same as Table~\ref{krho} but for the $B_{(s)}\to \pi (K^* \to) K \pi$ decays.}
\label{pikstar}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{ l c c c c}
\hline
{Modes} &\qquad & Results (Scenario I) & Results (Scenario II) & Data \\
\hline
$B^+ \to \pi^+(K^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$7.17^{+1.56+0.64+3.46}_{-1.37-0.62-2.23}$ &$8.19^{+2.14+0.94+2.74}_{-1.77-0.66-1.93}$ &$~~10.1\pm{0.8}~~$\\
&${\cal A}_{CP} (\%)$ &$-5.4^{+0.5+0.8+2.1}_{-0.2-0.3-0.8}$ &$-4.5^{+0.5+1.1+2.7}_{-0.6-1.4-1.2}$ &$-4\pm9$\\
$B^0 \to \pi^-(K^{*+}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$7.47^{+1.60+0.72+3.29}_{-1.41-0.71-2.06}$ &$7.61^{+1.83+0.92+2.40}_{-1.61-0.65-1.78}$ &$7.5\pm{0.4}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-52.9^{+3.1+0.7+9.3}_{-1.8-1.0-7.0}$ &$-32.3^{+0.7+10.3+7.9}_{-0.2-8.4-6.3}$ &$-27\pm{4}$\\
$B_s^0 \to \pi^+(K^{*-}\to)K\pi$ &~~~${\cal B} (10^{-6})$~~~ &$12.13^{+4.66+1.36+0.92}_{-3.55-1.29-0.75}$ &$5.52^{+2.22+2.09+0.41}_{-1.66-1.85-0.41}$ &$~~2.9\pm{1.1}~~$\\
&${\cal A}_{CP} (\%)$ &$-32.8^{+4.1+2.7+4.2}_{-4.8-3.2-5.5}$ &$-30.6^{+4.1+6.5+8.2}_{-4.5-6.9-8.9}$ &$-$\\
$B^+ \to \pi^0(K^{*+}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$4.71^{+1.18+0.39+1.92}_{-0.98-0.38-1.30}$ &$5.62^{+1.54+0.62+1.55}_{-1.28-0.49-1.11}$ &$6.8\pm{0.9}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-36.2^{+1.6+0.1+7.4}_{-1.0-0.4-8.2}$ &$-19.1^{+2.4+6.6+4.6}_{-1.9-5.4-6.0}$ &$-39\pm{21}$\\
$B^0 \to \pi^0(K^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$2.99^{+0.59+0.33+1.49}_{-0.55-0.33-0.89}$ &$2.55^{+0.57+0.36+1.06}_{-0.49-0.19-0.74}$ &$3.3\pm{0.6}$~\tnote{$\dagger$}~~\\
&${\cal A}_{CP} (\%)$ &$-11.6^{+1.0+0.49+5.0}_{-1.2-0.2-1.0}$ &$-11.8^{+1.2+4.3+4.3}_{-1.2-1.7-0.2}$ &$-15\pm{13}$\\
$B_s^0 \to \pi^0(\bar{K}^{*0}\to)K \pi$ &~~~${\cal B} (10^{-6})$~~~ &$0.20^{+0.03+0.01+0.06}_{-0.04-0.02-0.05}$ &$0.12^{+0.02+0.02+0.03}_{-0.04-0.03-0.03}$ &$-$\\
&${\cal A}_{CP} (\%)$ &$-70.6^{+6.7+13.2+23.5}_{-6.7-2.8-15.1}$ &$-50.4^{+3.1+22.7+15.1}_{-2.6-12.4-14.1}$ &$-$\\
\hline
\end{tabular}}
\end{threeparttable}
\end{center}
\end{table}
Overall, Scenario II reproduces the considered $B_{(s)} \to P(K^{*}\to)K\pi$
data with $P=\pi,K$ better than Scenario I does as seen
in Tables~\ref{kkstar} and \ref{pikstar}. The $B_s \to P(K^{*}\to)K\pi$
branching ratios differ between the two scenarios more than the $B \to P(K^{*} \to)K\pi$ branching
ratios do. This feature is understandable, because the former involve the $B_s \to (K^{*}\to) K\pi$
transition form factors, which are more sensitive to the variation of the Gegenbauer moments in the
$K\pi$ DA. Hence, more precise $B_s \to P(K^{*}\to)K\pi$ data are crucial for fixing the $K\pi$ DAs.
The direct $CP$ asymmetries ${\cal A}_{CP}$ in some $B_{(s)} \to P(K^{*}\to)K\pi$ modes depend on
the chosen scenarios strongly, implying that more accurate $K\pi$ DAs are necessary for
predicting these observables unambiguously. The central value of the
predicted $B_s^0 \to \pi^+(K^{*-}\to)K\pi$ branching ratio in Scenario II, which is already much
lower than in Scenario I, remains above the data. It deserves more thorough theoretical and
experimental investigations. Similarly, most of the ${\cal A}_{CP}$ data for the $B_{(s)} \to P(K^{*}\to)K\pi$
decays have substantial uncertainties so far, so it is still too early to make a meaningful
comparison with our results.
It is noticed that the parametrization of the parton momenta in Eqs.~(\ref{mom-B-k}) and (\ref{fg})
introduces the dependence on the light meson mass $m_3$ into the hard kernels and the
Sudakov exponents, as explicitly shown in the Appendix. Since both these factors are perturbative pieces
in a PQCD factorization formula, they should be insensitive to a light scale. Therefore,
we test the sensitivity of our numerical results to this light scale by setting it to zero in the
hard kernels and the Sudakov exponents. The corresponding branching ratios and
direct $CP$ asymmetries for two typical modes, $B^+ \to K^+(\rho^0\to)\pi \pi$ and
$B^0 \to \pi^-(K^{*+}\to)K\pi$ in Scenario II, are presented in Table~\ref{krho1}.
The neglect of the kaon mass for the former mode causes about 10\% variation
in the branching ratio and the direct $CP$ asymmetry. The quantities associated with the
latter mode are relatively stable with respect to the neglect of the
pion mass as expected. The insensitivity to the light scale confirms that our parametrization for
kinematic variables in three-body $B$ meson decays is reasonable.
\begin{table}[thp]
\caption{$CP$ averaged branching ratios and direct $CP$ asymmetries
of the $B^+ \to K^+(\rho^0\to)\pi \pi$ decay and the $B^0 \to \pi^-(K^{*+}\to)K\pi$ decay in
Scenario II with and without the light meson mass in the hard kernels and the Sudakov exponents. The experimental data
are quoted from~\cite{pdg2020}. The sources of the theoretical errors are the same as in Table~\ref{krho}.}
\label{krho1}
\begin{center}
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{lllll}
\hline
{Modes} &\qquad & Results (with light mass) & Results (without light mass) & Data \\
\hline
$B^+ \to K^+(\rho^0\to)\pi \pi$ &${\cal B} (10^{-6})$
&$2.91^{+0.68+0.77+1.43}_{-0.60-0.68-0.82}$ &$2.51^{+0.56+0.71+1.34}_{-0.52-0.53-0.80}$ &$3.7\pm{0.5}$\\
&${\cal A}_{CP} (\%)$ &$53.5^{+0.4+4.5+11.9}_{-1.4-4.3-15.0}$
&$58.5^{+0.0+4.4+11.9}_{-1.9-6.6-17.1}$ &$37\pm{10}$\\
$B^0 \to \pi^-(K^{*+}\to)K\pi$ &${\cal B} (10^{-6})$ &$7.61^{+1.83+0.92+2.40}_{-1.61-0.65-1.78}$
&$7.66^{+1.84+0.95+2.43}_{-1.60-0.64-2.06}$ &$7.5\pm{0.4}$\\
&${\cal A}_{CP} (\%)$ &$-32.3^{+0.7+10.3+7.9}_{-0.2-8.4-6.3}$
&$-32.7^{+0.6+10.4+7.9}_{-0.1-8.4-6.1}$ &$-27\pm{4}$\\
\hline
\end{tabular}}
\end{center}
\end{table}
\subsection{$\omega^2$-dependent Gegenbauer Moments}
We make a more aggressive attempt in this subsection to determine the
dependence of the Gegenbauer moments in the two-meson DAs on the meson
pair invariant mass. As stated in the Introduction, the exact dependence is unlikely to be extracted
from current data, so we simply expand the Gegenbauer moments up to the first power in
$\omega^2$, and examine whether the additional linear terms can be constrained effectively in the
global fit. Consider the parametrizations of the di-pion DAs,
\begin{eqnarray}
\phi_{\pi\pi}^0(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\parallel}(\omega^2)}{\sqrt{2N_c}}x(1-x)\left[1
+a^0_{2\rho}(1+c_\rho^0\omega^2)C_2^{3/2}(2x-1)\right] ,\nonumber\\
\phi_{\pi\pi}^{s}(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)\left[1
+a^s_{2\rho}(1+c_\rho^s\omega^2)(10x^2-10x+1)\right] ,\nonumber\\%\label{eq:pilds}
\phi_{\pi\pi}^t(x,\omega^2)&=&\frac{3F_{\pi\pi}^{\perp}(\omega^2)}{2\sqrt{2N_c}}(1-2x)^2\left[1
+a^t_{2\rho}(1+c_\rho^t\omega^2)C_2^{3/2}(2x-1)\right] ,
\end{eqnarray}
with the free parameters $a^{0,s,t}_{2\rho}$ and $c_\rho^{0,s,t}$.
The above parametrization follows the power series for the $\omega^2$-dependent
Gegenbauer moments derived in Ref.~\cite{MP}.
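The effective moments implied by this ansatz are easily scanned across the $\rho$
width window; a minimal sketch with the central values of Table~\ref{tab:genc0st}
reads
\begin{verbatim}
# Minimal sketch: effective linear-in-omega^2 moments, using the
# central fit values of the omega^2-dependent parametrization.
a0, a_s, a_t = -0.45, 1.12, -0.43   # dimensionless
c0, cs, ct   = -0.44, -1.42, -0.03  # GeV^-2

def moments(omega2):
    """Effective moments at pair invariant mass squared (GeV^2)."""
    return (a0*(1 + c0*omega2), a_s*(1 + cs*omega2), a_t*(1 + ct*omega2))

for w in (0.700, 0.775, 0.850):     # scan across the rho(770) window
    print(w, moments(w**2))
\end{verbatim}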
\begin{table}
\caption{Fitted parameters for the $\omega^2$-dependent Gegenbauer moments in the twist-2 and twist-3 $\pi\pi$ DAs.}
\begin{center}
\setlength{\tabcolsep}{4mm}{
\begin{tabular}{lcccccc}
\hline
&$a^0_{2\rho}$ &$a^s_{2\rho}$ &$a^t_{2\rho}$ &$c^0_{\rho}$ (GeV$^{-2}$)
&$c^s_{\rho}$ (GeV$^{-2}$) &$c^t_{\rho}$ (GeV$^{-2}$) \\ \hline
\ fit \ &$-0.45\pm0.29$ &$1.12\pm0.33$ &$-0.43\pm0.11$ &$-0.44\pm0.93$
& $-1.42\pm0.42$ &$-0.03\pm0.32$ \\ \hline
\end{tabular}}
\label{tab:genc0st}
\end{center}
\end{table}
The global fit to the same set of $B_{(s)}\to P(\rho\to)\pi\pi$ data with $P=\pi,K$
leads to the outcomes in Table~\ref{tab:genc0st} with a smaller $\chi^2 / d.o.f.=0.51$, which
are not difficult to understand: varying $\omega^2$ around the $\rho$ resonance
in its width window, we find that the values of $a^{0,s,t}_{2\rho}(1+c_\rho^{0,s,t}\omega^2)$
are in fact consistent with the corresponding ones in Table~\ref{tab:gen}. The consistency
is particularly obvious for $a^{t}_{2\rho}(1+c_\rho^{t}\omega^2)$ with the tiny coefficient $c_\rho^{t}$.
It is observed from Table~\ref{tab:genc0st} that the parameters for the twist-3 DA $\phi_{\pi\pi}^{s}$,
which gives sizable contributions to branching ratios,
can be constrained effectively by the current data. It suggests that the determination of
the $\omega^2$-dependent Gegenbauer moments is promising, when more precise data are available in the
future. Because our purpose is to demonstrate the potential to extract the $\omega^2$ dependence
of the Gegenbauer moments, we will not work on the $K\pi$ and $KK$ DAs.
The effect of including the $\omega^2$ dependence of the Gegenbauer moments is
similar to that of introducing more parameters. That is, the fit quality is improved with a lower
$\chi^2 / d.o.f.$ at the cost of larger uncertainties for fit results as shown in Table~\ref{tab:fitproces}.
For example, the reproduced branching ratios for the $B^+\to K^+(\rho^0\to)\pi\pi$ and
$B^+\to \pi^+(\rho^0\to)\pi\pi$ decays get closer to the data, which have relatively higher
precision. However, the uncertainty caused by the variation of the di-pion DAs is amplified
compared to the second source of errors in Table~\ref{krho}.
\begin{table}[htbp!]
\caption{
$CP$ averaged branching ratios and direct $CP$ asymmetries derived
from the fitted Gegenbauer moments in Table~\ref{tab:genc0st}, and compared with data~\cite{pdg2020}.
For simplicity, only the theoretical errors from the Gegenbauer moments are
presented. }
\label{tab:fitproces}
\begin{center}
\begin{threeparttable}
\setlength{\tabcolsep}{8mm}{
\begin{tabular}{lccc}
\hline
{Modes} &\qquad &Results &Data\\
\hline
$B^+\to K^+(\rho^0\to)\pi\pi$ &${\cal B} (10^{-6})$ &$3.12^{+1.81}_{-1.14}$ &$3.7\pm0.5$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$37.9^{+10.6}_{-11.9}$ &$37\pm 10$~\tnote{$\dagger$}~~\\
$B^+\to K^0(\rho^+\to)\pi\pi$ &${\cal B} (10^{-6})$ &$8.66^{+3.24}_{-1.99}$ &$7.3\pm1.2$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$17.8^{+2.6}_{-1.1}$ &$-3\pm15$\\
$B^0\to K^+(\rho^-\to)\pi\pi$ &${\cal B} (10^{-6})$ &$8.22^{+3.21}_{-1.93}$ &$7.0\pm0.9$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$18.9^{+7.8}_{-8.7}$ &$20\pm11$\\
$B^0\to K^0(\rho^0\to)\pi\pi$ &${\cal B} (10^{-6})$ &$2.88^{+1.19}_{-0.72}$ &$3.4\pm1.1$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$1.9^{+1.8}_{-0.8}$ &$-4\pm20$\\
$B^+\to \pi^+(\rho^0\to)\pi\pi$ &${\cal B} (10^{-6})$ &$7.69^{+2.67}_{-1.65}$ &$8.3\pm1.2$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$-17.2^{+9.2}_{-4.2}$ &$0.9\pm1.9$\\
$B^+\to \pi^0(\rho^+\to)\pi\pi$ &${\cal B} (10^{-6})$ &$10.14^{+5.18}_{-3.89}$ &$10.9\pm1.4$~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$5.6^{+13.4}_{-14.8}$ &$2\pm11$\\
$B^0\to \pi^-(\rho^+\to)\pi\pi$ &${\cal B} (10^{-6})$ &$24.49^{+2.29}_{-1.72}$~\tnote{1} &$23.0\pm2.3$~\tnote{1}~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$3.8^{+5.1}_{-5.3}$ &$13\pm6$\\
$B^0\to\pi^+(\rho^-\to)\pi\pi$ &${\cal B} (10^{-6})$ &$24.49^{+2.29}_{-1.72}$~\tnote{1} &$23.0\pm2.3$~\tnote{1}~\tnote{$\dagger$}~~ \\
&${\cal A}_{CP} (\%)$ &$-16.4^{+11.8}_{-10.1}$ &$-8\pm8$\\
\hline
\end{tabular}}
\end{threeparttable}
\end{center}
\end{table}
\section{CONCLUSION}
In this work we have performed a global fit of the Gegenbauer moments in two-meson DAs to measured
branching ratios and direct $CP$ asymmetries in three-body hadronic $B$ meson decays
$B\to VP_3\to P_1P_2P_3$ with $V=\rho,\phi, K^*$ and $P_3=\pi,K$ in the LO PQCD approach.
Two-meson DAs, collecting both nonresonant and multi-resonance contributions, serve as
crucial nonperturbative ingredients of factorization theorems for the above decays.
The Gegenbauer moments of the pion and kaon DAs determined in the LO global analysis
of two-body hadronic $B$ meson decays have been input
for theoretical consistency. To facilitate the numerical study,
we have constructed a Gegenbauer-moment-independent database, via which a decay amplitude
is decomposed into a linear combination of the relevant Gegenbauer moments in the two-meson
DAs. It was found that the fitted Gegenbauer moments differ from those associated with
an intermediate resonance that decays into the meson pair, and from those adopted in previous PQCD
investigations on specific modes. This observation indicates that
the Gegenbauer moments of a two-meson DA cannot be inferred from sum-rule results
for an intermediate resonance, and their global determination is essential.
We have examined two scenarios for the determination of the $K\pi$ DAs in order to
check the convergence of the Gegenbauer expansion, and the sensitivity of the fitted
observables to our setup. It was noticed that the fit quality is improved by increasing
the number of Gegenbauer moments at the cost of larger uncertainties for fit outcomes,
and that the branching ratios of $B_s$ meson decays and direct $CP$ asymmetries in
some modes are sensitive to the chosen scenarios. Hence, more accurate $K\pi$ DAs
are necessary for predicting these quantities unambiguously.
We have also explored the potential to fix the
dependence of the Gegenbauer moments on the meson-pair invariant mass, and shown that
the parameters for the twist-3 DA $\phi_{\pi\pi}^{s}$ can be constrained effectively by the
current data. Therefore, the determination of the dependence on the meson-pair invariant mass
is promising, when data become more precise.
It has been demonstrated that most of the data considered in the fit are well reproduced,
i.e., the fit quality is satisfactory. This implies that the two-meson DAs presented in
this paper are ready for applications to other multi-body hadronic $B$ meson decays
involving the same meson pairs. With the obtained Gegenbauer moments of two-meson DAs, we have made
predictions for those observables, whose data were excluded in the fit because of
their substantial experimental errors or significant subleading contributions to the
corresponding factorization formulas. Except for the $B_s^0 \to \pi^+(K^{*-}\to)K\pi$ branching
ratio, our predictions agree with the data within uncertainties in the former case.
Since our results were still derived in the LO PQCD approach, the data in the latter
case remain unexplained and deserve more thorough analyses. As pointed out
before, the precision of the extracted two-meson DAs can be improved systematically,
when higher-order and/or higher-power corrections to three-body hadronic $B$ meson decays
are taken into account in our formalism. At the same time, more precise measurements are urged,
especially those of $CP$ asymmetries. These efforts will
strengthen the constraint on the Gegenbauer moments and sharpen the confrontation
between theoretical predictions and experimental data.
\begin{acknowledgments}
We thank W.F. Wang for helpful discussions.
This work is supported in part by ``Fundamental Research Funds for Central Universities''
under Grant No.~KJQN202144 and the National Natural Science Foundation of China under
Grant No.~12005103, No.~12075086 and No.~11947013, and by MOST of R.O.C. under Grant No. MOST-107-2119-M-001-035-MY3.
YL is also supported by the Natural Science Foundation of Jiangsu Province under
Grant No.~BK20190508 and the Research Start-up Funds of Nanjing Agricultural University.
DCY is supported by the Natural Science Foundation of Jiangsu Province under Grant No.~BK20200980.
ZR is supported in part by the Natural Science Foundation of Hebei Province under Grant No.~A2019209449.
\end{acknowledgments}
\section{Introduction}
Image super-resolution (SR) is a low-level vision problem that reconstructs a high-resolution (HR) image from a single low-resolution (LR) image. This problem is ill-posed since multiple HR images can degrade to the same LR image.
Many deep-learning-based methods have been proposed to address this problem \cite{dong2015image,Kim_2016_CVPR,Lim_2017_CVPR_Workshops,ledig2017photo,zhang2018image,zhang2018residual,guo2020closed, ma2020structure, liu2020residual} and have achieved great success.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{scatterv6}
\caption{Performance comparison of existing lightweight methods on Set5 \cite{bevilacqua2012low} (4$\times$). The size of the dot denotes the Multi-Adds of the method. Our method achieves state-of-the-art performance with fewer parameters or fewer Multi-Adds.}
\label{fig:scatter}
\vspace{-0.2cm}
\end{figure}
While the SR performance is boosted by the deep learning approach, the model complexity is also increased.
For example, RDN \cite{zhang2018residual} had 22M parameters and EDSR \cite{Lim_2017_CVPR_Workshops} reached up to 43M parameters.
It is difficult to deploy these models to the equipment with low computing power.
For real-world applications, lightweight and efficient SR models have also been designed in recent years, including handcrafted SR neural networks \cite{kim2016deeply,ahn2018fast} and neural architecture search (NAS) based SR methods \cite{chu2019fast,song2020efficient,lee2020journey, pan2020real}.
Although great improvements have been achieved by existing lightweight SR methods, they still suffer from several limitations.
First, hand-crafted lightweight SR models like IMDN \cite{Hui-IMDN-2019} and RFDN \cite{liu2020rfdn} adopted several $3 \times 3$ convolution layers with a large number of parameters and Multi-Adds.
The building blocks of these methods, each constructed from the same three $3 \times 3$ convolution layers, can also be suboptimal and lack flexibility for SISR tasks.
Second, the network-level architecture of these methods only considered concatenating the output features of the blocks at the end of the model while omitting intermediate information flows among the blocks, which have been demonstrated to enlarge the receptive field \cite{huang2017densely} and could be useful for improving SR performance \cite{tong2017image, seif2018large, zhang2018residual, shang2020perceptual}.
However, to obtain an efficient lightweight SR model, the network cannot be connected too densely.
Therefore, it is important to identify the connections that benefit each cell most, so as to improve the performance of lightweight SR models while keeping the model complexity low.
Finally, most neural architecture search (NAS) based methods for SR tasks were based on reinforcement learning or evolutionary methods, which are time-consuming and require substantial computing resources to search for appropriate models. Furthermore, they failed to achieve better peak signal-to-noise ratio (PSNR) or structural similarity index measure (SSIM) \cite{wang2004image} results with searched lightweight SR models compared with the existing state-of-the-art (SOTA) hand-crafted SR models.
To address these problems, we propose a lightweight image super-resolution method with a fully differentiable neural architecture search (DLSR) which is composed of cell-level and network-level search techniques.
For the cell-level search, we design a large search space (see Table \ref{tab:Operations}) that contains more lightweight convolution operations to increase the probability of finding more lightweight models.
As opposed to existing work \cite{liu2018darts, song2020efficient} that searched for arbitrary combinations and connections of basic operations or searched for handcrafted blocks, we search for operation combinations based on the information distillation structure, which provides prior knowledge of effective lightweight SR structures.
Owing to the flexibility of the lightweight operation combinations, our search space not only contains the handcrafted RFDB \cite{liu2020rfdn} structure but also explores better cells for efficient SR.
To utilize the intermediate information flow between the cells, we design a network-level search space that contains all possible connections among the cells to further boost the performance.
As opposed to FALSR \cite{chu2019fast} which uses an evolutionary algorithm to search for block connections with discrete encoding, we first densely connect the blocks to build a super-net, then utilize the continuous relaxed architecture parameters to weigh the connections and optimize the parameters with the stochastic gradient descent method.
During searching, the network automatically identifies the most important intermediate information flow connections.
In addition, we design a loss function composed of three parts: L1 loss, High Frequency Error Norm (HFEN) loss \cite{ravishankar2010mr}, and the number of parameters of the operations. HFEN is an image comparison metric from medical imaging and uses a Laplacian of Gaussian kernel for edge detection. Thus, the HFEN loss can help to minimize the reconstruction error of high-frequency image details.
Moreover, we treat the number of parameters of the operations as a regularization term to push the searching direction into a more lightweight space. Experimental results show that our DLSR method surpasses other SOTA lightweight SR methods in terms of PSNR and SSIM with fewer parameters and Multi-Adds on benchmark datasets: Set5 \cite{bevilacqua2012low}, Set14 \cite{yang2010image}, B100 \cite{martin2001database}, and
Urban100 \cite{huang2015single} in $\times2, \times 3, \times 4 $ super-resolution tasks.
In the end, our main contributions are summarized as follows:
\begin{itemize}
\item We propose a differentiable NAS strategy for searching a lightweight SR model, which incorporates both cell-level and network-level search spaces to strengthen the SR performance. The proposed approach significantly reduces the searching cost compared to existing RL-based NAS methods.
\item We design a loss function that considers distortion, high-frequency reconstruction, and lightweight regularization, which pushes the searching direction to explore a better lightweight SR model.
\item We conduct extensive experiments to evaluate the efficacy of our method, which achieves state-of-the-art performance on the benchmark datasets in terms of PSNR, SSIM, and model complexity.
\end{itemize}
\section{Related work}
\subsection{CNN-based Image Super-Resolution}
SR performance has been greatly improved by CNN-based methods \cite{dong2015image, Kim_2016_CVPR, tai2017memnet, Lim_2017_CVPR_Workshops, ledig2017photo, zhang2018image, zhang2018residual, wang2018esrgan, zhang2018learning, zhang2020deep, liu2020residual}. Dong \etal\cite{dong2015image} propose SRCNN, a shallow three-layer network that maps interpolated LR images to HR images.
Kim \etal\cite{Kim_2016_CVPR} propose the VDSR network, which stacks 20 layers with a global skip-connection to improve the performance.
In addition, Dong \etal\cite{dong2016accelerating} design the transposed convolution layer, and Shi \etal\cite{shi2016real} propose the sub-pixel convolution layer for SR tasks; both perform the upsampling operation at the end of the CNN and hence largely save computation in the feature extraction phase due to the reduced spatial dimension. Lim \etal \cite{Lim_2017_CVPR_Workshops} propose EDSR and MDSR, which remove Batch Normalization layers in SRResnet \cite{ledig2017photo} and greatly improve the performance.
Zhang \etal\cite{zhang2018residual} propose the RDN network by introducing dense connections into EDSR residual blocks. RCAN \cite{zhang2018image} introduces channel attention to achieve better SR performance. However, most of these CNN-based methods contain large numbers of parameters and require large amounts of computation, which limits their real-world applications.
\begin{figure*}[t]
\centering
\subfigure[The architecture of cell]{
\includegraphics[width=0.45\textwidth]{block}
}
\hspace{1cm}
\subfigure[The architecture of MRB]{
\includegraphics[width=0.25\textwidth]{mix_layer}
}
\caption{The cell-level search space. The cell is composed of 3 mixed residual blocks with an information distillation mechanism and an ESA block. The `Conv' in (a) denotes the $1\times 1$ convolution layer that cuts the channel number by half. (b) shows the architecture of the mixed residual block, which is composed of multiple operations weighted by the parameter $\alpha$, a residual skip connection, and a ReLU layer.}
\label{fig:cell}
\vspace{-0.2cm}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\linewidth]{network_level_space}
\end{center}
\vspace{-0.2cm}
\caption{The network-level search space. Each cell is connected with all of its prior cells. And each connection is weighted by architecture parameter $\beta$. Each cell's input feature is composed of weighted features from the prior cells through concatenation and $1 \times 1$ convolution. The connections from each cell to the last convolution layer are omitted for clarity.}
\label{fig:network-level}
\vspace{-0.3cm}
\end{figure*}
\subsection{Lightweight Image Super-Resolution}
Lightweight and efficient CNN for SR tasks has been widely explored to suit mobile devices with an extremely small amount of parameters and computation \cite{tai2017image, ahn2018fast, hui2018fast, Hui-IMDN-2019, zhu2019efficient, 10.1007/978-3-030-58542-6_17, zhang2020aim, liu2020rfdn}.
In order to reduce the parameters, Tai \etal\cite{tai2017image} introduce recursive layers combined with residual schemes in the feature extraction stage.
Ahn \etal\cite{ahn2018fast} propose the CARN-M model, which utilizes group convolution and a cascade network architecture that significantly reduces the parameters.
Hui \etal\cite{hui2018fast,Hui-IMDN-2019} propose an information distillation mechanism (IDM) that utilizes a channel splitting strategy to distill and compress local short-path feature information.
RFDN \cite{liu2020rfdn} rethinks the channel splitting strategy and decouples the convolution layer and the channel splitting layer.
Furthermore, they apply the skip-connection on the $3 \times 3$ convolution that makes up the shallow residual block (SRB), which significantly improves the SR performance; with this design, they won first place in the AIM 2020 efficient super-resolution challenge.
\subsection{SISR with Neural Architecture Search}
As NAS techniques have achieved great success in image classification \cite{liu2018darts,he2020milenas} and other tasks, recent works have started to adopt NAS to search efficient SR networks.
FALSR \cite{chu2019fast} utilizes reinforcement learning and evolutionary methods to search for lightweight SR models, formulating SR as a constrained multi-objective optimization problem.
Song \etal \cite{song2020efficient} propose to search for multiple handcrafted efficient residual dense blocks to stack the SR model using evolutionary methods.
Guo \etal \cite{guo2020hierarchical} propose to search for cell structures and upsampling positions with reinforcement learning.
Recently, TPSR \cite{lee2020journey} has adopted reinforcement learning to find an efficient GAN-based SR model, resulting in a tiny SR model that performs well on both perceptual and distortion metrics.
However, most of the prior SR methods with NAS utilized reinforcement learning or evolutionary methods, which are time-consuming. In this work, we explore fully differentiable NAS to search for an efficient, accurate, and lightweight SR model with a single GPU.
\section{Method}
In this section, we introduce our {\bf D}ifferentiable NAS method for {\bf L}ightweight {\bf S}uper-{\bf R}esolution model, dubbed DLSR.
Below, we first describe the search space of cell-level and network-level. Then, we discuss the search strategy and loss function of our proposed DLSR.
\subsection{Search Space}
\paragraph{Cell-level search space} The cell-level topology structure is based on residual feature distillation block (RFDB) \cite{liu2020rfdn}, which is comprised of three shallow residual blocks (SRB) with an information distillation mechanism and a contrast-aware channel attention (CCA) layer \cite{Hui-IMDN-2019}.
The smallest building block SRB is composed of a $3 \times 3$ convolution layer and a residual connection.
However, we argue that the $3 \times 3$ convolution in RFDB could be suboptimal and would thus not always be the best choice for lightweight super-resolution. In order to improve the flexibility of the RFDB and search for a more lightweight structure, we replace the SRB with {\bf M}ixed {\bf R}esidual {\bf B}lock (MRB) in Figure \ref{fig:cell}.
The MRB is composed of a mixed layer, a residual connection, and a ReLU layer, in which the mixed layer is made up of multiple operations including separable convolution, dilated convolution, and normal convolution, as shown in Table \ref{tab:Operations}.
For mixed layer $k$, we denote the input feature as {\small $x_k$}, and the operation space as $O$, where each element represents a candidate function $o(\cdot)$ weighted by the cell architecture parameters {\small $\alpha _o^k$}, as illustrated in Figure \ref{fig:cell}. We use softmax to perform the continuous relaxation of the operation space as done in Darts \cite{liu2018darts}. Thus, the output of mixed layer $k$ denoted by {\small${f_k}({x_k})$} is given as:
\begin{small}
\begin{equation}\label{eq:mix_layer}
{f_k}({x_k}) = \sum\limits_{o \in {\rm O}} {\frac{{\exp (\alpha _o^k)}}{{\sum\limits_{o' \in {\rm O}} {\exp (\alpha _{o'}^k)} }}o({x_k})}.
\end{equation}
\end{small}
\!\!During searching, the operation with the largest {\small$\alpha _o^k$} is reserved as the genotype of the layer.
The structure of each cell is composed of three MRBs with a feature distillation mechanism and an enhanced spatial attention (ESA) block as shown in Figure \ref{fig:cell}. Hence, the number of possible combinations for each cell is $9 \times 9 \times 9$.
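To make the continuous relaxation of Eq.~\eqref{eq:mix_layer} concrete, a minimal PyTorch sketch of such a mixed layer follows (our illustration, not the released implementation; the candidate operation list is abbreviated):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedLayer(nn.Module):
    """Softmax-weighted sum over candidate operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sequential(  # separable 3x3: depthwise + pointwise
                nn.Conv2d(channels, channels, 3, padding=1,
                          groups=channels),
                nn.Conv2d(channels, channels, 1)),
        ])
        # one architecture parameter alpha per candidate operation
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
\end{verbatim}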
\begin{table}[t]
\begin{small}
\begin{center}
\caption{Operations and their complexities in the mixed layer. Dilated convolution \cite{yu2017dilated} is combined with group convolution. Multi-Adds are calculated for the $\times 2$ SR task with 50 channels on a $1280\times720$ image.}
\begin{tabular}{|c|c|c|c|}
\hline
\multirow{1}{5em}{Operation}& Kernel Size & Params (K) & Multi-Adds (G) \\
\hline
\hline
\multirow{4}{5em}{Convolution}&$1 \times 1$&2.5&0.576 \\
& $3 \times 3$&22.5&5.184\\
& $5 \times 5$&62.5&14.400\\
& $7 \times 7$&122.5&28.224\\
\hline
\multirow{3}{5em}{Separable convolution}\!\!& $3 \times 3$&5.9&1.359\\
&$5 \times 5$&7.5&1.728\\
&$7 \times 7$&9.9&2.281\\
\hline
\multirow{2}{5em}{Dilated convolution}\!\!& $3 \times 3$&2.95&0.680\\
&$5 \times 5$&3.75&0.864\\
\hline
\end{tabular}
\end{center}
\label{tab:Operations}
\end{small}
\vspace{-0.8cm}
\end{table}
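The plain-convolution rows of Table \ref{tab:Operations} can be reproduced directly, since for the $\times 2$ task the features are processed at the LR resolution $640\times360$; a short check of ours:
\begin{verbatim}
c_in = c_out = 50
h, w = 720 // 2, 1280 // 2       # feature maps at LR resolution
for k in (1, 3, 5, 7):
    params = k * k * c_in * c_out       # 22.5K for k = 3
    madds = params * h * w              # 5.184G for k = 3
    print(f"{k}x{k}: {params/1e3:.1f}K params, "
          f"{madds/1e9:.3f}G Multi-Adds")
\end{verbatim}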
\smallskip
\noindent
{\bf Network-level search space}\ Different from HNAS \cite{guo2020hierarchical} which designs the network-level search space to search for the upsampling positions or HiNAS \cite{zhang2020memory} that is designed to search for the network width, we design the network-level search space to search for the shortcut connections among the cells to explore the intermediate information, as shown in Figure \ref{fig:network-level}.
The whole network is stacked with $6$ cells, and each cell is connected with all of its predecessors. The output features of the prior cells are concatenated and passed into a $1 \times 1$ convolution layer to aggregate the information. In addition, each cell's output feature is connected to the last convolution layer. The feature passed from cell $i$ to cell $j$, i.e., the output feature map of cell $i$, is denoted {\small ${x^{i}}$} and weighted by the network-level architecture parameters {\small ${\beta ^{(i,j)}}$}. We also utilize the softmax function as a continuous relaxation for the parameters {\small ${\beta ^{(i,j)}}$}, as in Eq.~\eqref{eq:mix_layer}. Then, the input of cell $j$, denoted by {\small ${I_j}$}, is formulated as:
\begin{small}
\begin{equation}
{I_j} = g\left(\left\{\frac{{\exp ({\beta ^{(i,j)}})}}{{\sum\limits_{i' < j} {\exp ({\beta ^{(i',j)}})} }}\,{x^i}\right\}_{i<j}\right),
\label{eq:connection}
\end{equation}
\end{small}
\!\!where {\small$g(\cdot)$} denotes the concatenation of the weighted features followed by a $1 \times 1$ convolution.
Thus, we build a continuous and dense super-network search space considering all the intermediate information among the cells.
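A minimal PyTorch sketch of the aggregation in Eq.~\eqref{eq:connection} (ours; module and variable names are illustrative):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CellInput(nn.Module):
    """Softmax-weighted concatenation of all prior cell outputs,
    followed by a 1x1 convolution (the function g above)."""
    def __init__(self, num_prior, channels):
        super().__init__()
        self.beta = nn.Parameter(1e-3 * torch.randn(num_prior))
        self.fuse = nn.Conv2d(num_prior * channels, channels, 1)

    def forward(self, prior_feats):  # list of [N, C, H, W] tensors
        w = F.softmax(self.beta, dim=0)
        weighted = [wi * f for wi, f in zip(w, prior_feats)]
        return self.fuse(torch.cat(weighted, dim=1))
\end{verbatim}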
\smallskip
\noindent
{\bf Search complexity}\
Based on the above illustration, the proposed DLSR method includes both the cell-level and network-level search spaces. Thus, the overall search complexity of our method is estimated as:
\begin{equation}\label{eq:search_space}
9 \times 9 \times 9 \times 5 \times 4 \times 3 \times2 = 87480.
\end{equation}
It is nontrivial and requires a large computational cost to explore such a large search space for a lightweight and accurate super-resolution model via reinforcement learning \cite{guo2020hierarchical} or evolutionary algorithm \cite{chu2019fast,song2020efficient} based neural architecture search approaches.
In this work, we solve this problem via a fully differentiable neural architecture search approach.
\subsection{Search Strategy}
We extend the popular differentiable NAS methods, including DARTS \cite{liu2018darts} and its improved version MiLeNAS \cite{he2020milenas}, to the low-level computer vision task of SISR.
These two methods were originally proposed for image classification, which is a high-level computer vision task.
Motivated by MiLeNAS, the objective function of our DLSR model is defined in the following regularized form:
\begin{small}
\begin{equation}
{\min _{\theta,\alpha ,\beta }}\left[{L_{tr}}({\theta^*}(\alpha ,\beta );\alpha ,\beta ) + \lambda {L_{val}}({\theta^*}(\alpha ,\beta );\alpha ,\beta )\right],
\label{eq:object_function}
\end{equation}
\end{small}
\!\!where {\small$\theta$} denotes the weight parameters of the network and {\small$\lambda $} is a non-negative regularization parameter that balances the importance of the training loss and the validation loss. Because the architecture parameters {\small$\alpha$} and {\small$\beta$} are both continuous, we directly apply Adam \cite{kingma2014adam} to solve problem \eqref{eq:object_function}. We define the architecture parameters {\small$A = [ \alpha, \beta ]$}; the parameters {\small$\theta$}, {\small$\alpha$}, and {\small$\beta$} are updated via the following iterations:
\begin{small}
\begin{gather}
\theta = \theta - {\eta _\theta }{\nabla _\theta}{L_{tr}}(\theta,A); \label{eq:theta_update}\\
A = A - {\eta _A }{\nabla _A}{L_{tr}}(\theta,A)+\lambda{\nabla _A}{L_{val}}(\theta,A). \label{eq:A_update}
\end{gather}
\end{small}
\!\!During the searching process, we preserve the operation that has the maximal value of {\small$\alpha$} as the searched operation of the layer. The two connections that have the largest and the second-largest values of {\small$\beta$} are preserved as the searched input connections of the cell. Our searching and training procedure is summarized in Algorithm \ref{alg:overall}.
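For clarity, one search iteration realizing Eqs.~\eqref{eq:theta_update} and \eqref{eq:A_update} can be sketched as follows (our simplification; optimizer construction and data loading are omitted, and \texttt{loss\_fn} stands for the loss function introduced in the next subsection):
\begin{verbatim}
def search_step(model, opt_w, opt_a, train_batch, valid_batch,
                loss_fn, lam=1.0):
    # update the network weights theta on a training batch
    # (first update rule above)
    x, y = train_batch
    opt_w.zero_grad()
    loss_fn(model(x), y).backward()
    opt_w.step()

    # mixed-level update of A = [alpha, beta] (second rule above):
    # gradient of the training loss plus lam times the validation
    # loss, applied to the architecture parameters only
    xv, yv = valid_batch
    opt_a.zero_grad()
    (loss_fn(model(x), y) + lam * loss_fn(model(xv), yv)).backward()
    opt_a.step()  # opt_a holds only alpha and beta
\end{verbatim}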
\subsection{Loss Function}
To achieve lightweight and accurate SR models, the loss function is composed of three parts, which include L1 loss as distortion loss, HFEN loss \cite{ravishankar2010mr} for reconstruction, and parameters of the operations as a lightweight limitation.
\begin{small}
\begin{gather}
L_1 = \frac{1}{N}\sum_{i=1}^{N}\left|{H_\theta }({I_i^{LR}}) - {I_i^{HR}}\right| \label{eq:l1_loss}\\
L_{HFEN} = \frac{1}{N}\sum_{i=1}^{N}\left|\nabla H_\theta(I_i^{LR})-\nabla I_i^{HR}\right| \label{eq: HFEN_loss}\\
L_P=\sum_{o\in O}\frac{p_o}{\sum_{c\in O}p_c}\,\mathrm{softmax}(\alpha_o) \label{eq: latency_loss} \\
L(\theta ) = {L_1} + \mu {L_{HFEN}} + \gamma {L_{P}} \label{eq:loss_function}.
\end{gather}
\end{small}
\!\!Specifically, {\small$L_1$} loss is popularly used for SR tasks \cite{Lim_2017_CVPR_Workshops,chu2019fast,liu2020rfdn,zhao2020efficient} to minimize the distortion between the reconstructed SR image and ground truth HR image;
{\small$L_{HFEN}$} \cite{chaitanya2017interactive} is a gradient-domain L1 loss, and each gradient {\small$\nabla(\cdot)$} is computed using the High Frequency Error Norm (HFEN) \cite{ravishankar2010mr}, an image comparison metric from medical imaging that uses a Laplacian of Gaussian kernel for edge detection on Gaussian-pre-smoothed images. {\small$L_{HFEN}$} is adopted to strengthen the reconstruction of image details such as edges and stripes.
{\small$L_P$} is a regularization item based on the parameters of operations. {\small$p_o$} denotes the number of parameters of operation $o$. {\small$L_P$} utilizes the number of the parameters to weigh the architecture parameter $\alpha$, so as to reduce the {\small$\alpha$} of the operations which have a large number of parameters and push the algorithm to search for lightweight operations.
The {\small$\mu$} and {\small$\gamma$} are weighting parameters for balancing the reconstruction performance and model complexity, respectively.
When retraining the searched networks, the last term {\small$L_{P}$} in the total loss function \eqref{eq:loss_function} is removed by setting {\small$\gamma = 0$}.
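A compact sketch of the two auxiliary terms above (ours; the kernel size and $\sigma$ of the Laplacian-of-Gaussian filter are illustrative choices, and a recent PyTorch version is assumed for \texttt{torch.meshgrid}):
\begin{verbatim}
import torch
import torch.nn.functional as F

def hfen_loss(sr, hr, sigma=1.5, ksize=7):
    """L1 distance between Laplacian-of-Gaussian responses of the
    reconstruction and the ground truth (gradient-domain L1)."""
    ax = torch.arange(ksize, dtype=torch.float32) - ksize // 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx**2 + yy**2
    log = (r2 - 2*sigma**2) / sigma**4 * torch.exp(-r2/(2*sigma**2))
    log = (log - log.mean()).view(1, 1, ksize, ksize)
    c = sr.shape[1]
    k = log.repeat(c, 1, 1, 1)          # depthwise filtering
    pad = ksize // 2
    return F.l1_loss(F.conv2d(sr, k, padding=pad, groups=c),
                     F.conv2d(hr, k, padding=pad, groups=c))

def param_reg(alpha, op_params):
    """Architecture weights scaled by normalized parameter counts,
    i.e. the lightweight regularizer L_P."""
    p = op_params / op_params.sum()
    return (p * F.softmax(alpha, dim=0)).sum()
\end{verbatim}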
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\begin{algorithm}[ht]
\caption{Searching and Training Algorithm}
\label{alg:overall}
\KwInput{Training set {\small$\mathbb{D}$}}
Initialize the super-network {\small$\mathcal{T}$} with architecture parameters {\small$\alpha$} and {\small$\beta$}.\\
Split training set {\small$\mathbb{D}$} into {\small$\mathbb{D}_{train}$} and {\small$\mathbb{D}_{valid}$}.\\
Train the super-network {\small$\mathcal{T}$} on {\small$\mathbb{D}_{train}$} for several steps to warm up.\\
\For{{\small$t = 1,2,\ldots, T$}}
{Sample train batch {\small$\mathbb{B}_{t} = \left \{ {(x_{i}, y_{i})} \right \}_{i=1}^{batch}$} from {\small$\mathbb{D}_{train}$}\\
Optimize {\small$\theta$} on the {\small$\mathbb{B}_{t}$} by Eq. \eqref{eq:theta_update} \\
Sample valid batch {\small$\mathbb{B}_{v} = \left \{ {(x_{i}, y_{i})} \right \}_{i=1}^{batch}$} from {\small$\mathbb{D}_{valid}$}\\
Optimize {\small$\alpha$} and {\small$\beta$} on the {\small$\mathbb{B}_{v}$} by Eq. \eqref{eq:A_update} \\
Save the genotypes of the searched networks
}
Train the searched networks from scratch \\
Pick the best-performing network {\small$\mathcal{S}$} \\
\KwOutput{A lightweight SR network {\small$\mathcal{S}$}}
\end{algorithm}
\section{Experiments}
\subsection{Datasets}
We use the high-quality DIV2K \cite{agustsson2017ntire} and Flickr2K \cite{timofte2017ntire} datasets as training datasets. The DIV2K dataset consists of 800 training images and the Flickr2K dataset consists of 2650 training images. The LR images are obtained by bicubic downsampling of the HR images. In addition, we use the standard benchmark datasets Set5 \cite{bevilacqua2012low}, Set14 \cite{yang2010image}, B100 \cite{martin2001database}, and Urban100 \cite{huang2015single} as test datasets.
\begin{table*}
\begin{small}
\caption{Image super-resolution results with scale factors of 2, 3, 4 on benchmark datasets.}
\label{tab:benchmark_result}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{7em}{Method} & \multirow{2}{2em}{Scale} & Params & Multi-Adds& Set5 & Set14 & B100 & Urban100 \\
\cline{5-8}
& &(K) &(G) & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\
\hline\hline
Bicubic & \multirow{9}{2em}{$\times 2$} & - & - & 33.66/0.9299&30.24/0.8688&29.56/0.8403&26.88/0.8403\\
DRRN \cite{tai2017image} & & 297 & 6,796.9&37.74/0.9591&33.23/0.9136&32.05/0.8973&31.23/0.9188\\
CARN-M \cite{ahn2018fast} & & 412 & 91.2& 37.53/0.9583&33.26/0.9141&31.92/0.8960&31.23/0.9194\\
FALSR-B \cite{chu2019fast} & & 326 & 74.7 &37.61/0.9585&33.29/0.9143&31.97/0.8967&31.28/0.9191\\
ESRN-V \cite{song2020efficient} & & 324 & 73.4 &37.85/0.9600&33.42/0.9161&32.10/0.8987&31.79/0.9248\\
IMDN \cite{Hui-IMDN-2019} & & 694 & - & 38.00/0.9605&33.63/0.9177&32.19/0.8996&32.17/0.9283\\
PAN \cite{zhao2020efficient} & & \bf{261} & 70.5 &38.00/0.9605&33.59/0.9181&32.18/0.8997&32.01/0.9273\\
RFDN \cite{liu2020residual} & & 534 & 123.0 & \bf{38.05/0.9606}&\bf{33.68/0.9184}&32.16/0.8994&32.12/0.9278\\
\bf{DLSR(Ours)} & & 322 & \bf{68.1} &38.04/\bf{0.9606}&33.67/0.9183&\bf{32.21/0.9002}&\bf{32.26/0.9297}\\
\hline
\hline
Bicubic & \multirow{8}{2em}{$\times 3$} & - & - &30.39/0.8682&27.55/0.7742&27.21/0.7385&24.46/0.7349\\
DRRN \cite{tai2017image} & & 297 & 6,796.9 &34.03/0.9244&29.96/0.8349&28.95/0.8004&27.53/0.8378\\
CARN-M \cite{ahn2018fast}& & 412 & 46.1 &33.99/0.9236&30.08/0.8367&28.91/0.8000&27.55/0.8385\\
ESRN-V \cite{song2020efficient} & & 324 & 36.2 &34.23/0.9262&30.27/0.8400&29.03/0.8039&27.95/0.8481\\
IMDN \cite{Hui-IMDN-2019} & & 703 & - &34.36/0.9270&30.32/0.8417&29.09/0.8046&28.17/0.8519\\
PAN \cite{zhao2020efficient}& & \bf{261} & 39.0 &34.40/0.9271&30.36/0.8423&29.11/0.8050&28.11/0.8511\\
RFDN \cite{liu2020residual}& & 541 & 55.4 &34.41/0.9273&30.34/0.8420&29.09/0.8050&28.21/0.8525\\
\bf{DLSR(Ours)} & & 329 & \bf{30.9} &\bf{34.49/0.9279}&\bf{30.39/0.8428}&\bf{29.13/0.8061}&\bf{28.26/0.8548}\\
\hline
\hline
Bicubic & \multirow{8}{2em}{$\times 4$} & - & - &28.42/0.8104&26.00/0.7027&25.96/0.6675&23.14/0.6577\\
DRRN \cite{tai2017image} & & 297 & 6,796.9 &31.68/0.8888&28.21/0.7720&27.38/0.7284&25.44/0.7638\\
CARN-M \cite{ahn2018fast}& & 412 & 32.5 &31.92/0.8903&28.42/0.7762&27.44/0.7304&25.62/0.7694\\
ESRN-V \cite{song2020efficient}& & 324 & 20.7 &31.99/0.8919&28.49/0.7779&27.50/0.7331&25.87/0.7782\\
IMDN \cite{Hui-IMDN-2019}& & 715 & - &32.21/0.8948&28.58/0.7811&27.56/0.7353&26.04/0.7838\\
PAN \cite{zhao2020efficient}& & \bf{272} & 28.2 &32.13/0.8948&28.61/0.7822&27.59/0.7363&26.11/0.7854\\
RFDN \cite{liu2020residual}& & 550 & 31.6 &32.24/0.8952&28.61/0.7819&27.57/0.7360&26.11/0.7858\\
\bf{DLSR(Ours)} & & 338 & \bf{17.9} &\bf{32.33/0.8963}&\bf{28.68/0.7832}&\bf{27.61/0.7374}&\bf{26.19/0.7892}\\
\hline
\end{tabular}
\end{center}
\end{small}
\vspace{-0.7cm}
\end{table*}
\begin{table*}[ht]
\begin{small}
\begin{center}
\caption{Comparison results with TPSR-NOGAN on benchmark datasets.}
\label{tab:TPSR_comparison}
\begin{tabular}{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{7em}{Method} & \multirow{2}{2em}{Scale} & Params & Multi-Adds& Set5 & Set14 & B100 & Urban100 \\
\cline{5-8}
& &(K) &(G) & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\
\hline\hline
TPSR-NOGAN & $\times 2$ & 60 & 14.0 &37.38/0.9583&33.00/0.9123&31.75/0.8942&30.61/0.9119\\
\bf{DLSR-S(Ours)} &$\times 2$ & \bf{56} & \bf{12.4} & \bf{37.71/0.9595}&\bf{33.33/0.9150}&\bf{31.96/0.8973}&\bf{31.26/0.9196}\\
\hline
\hline
TPSR-NOGAN & $\times 4$ & \bf{61} & 3.6 &31.10/0.8779&27.95/0.7663&27.15/0.7214&24.97/0.7456\\
\bf{DLSR-S(Ours)} &$\times 4$ & 62 & \bf{3.4} & \bf{31.75/0.8885}&\bf{28.31/0.7745}&\bf{27.38/0.7298}&\bf{25.47/0.7663}\\
\hline
\end{tabular}
\end{center}
\end{small}
\vspace{-0.6cm}
\end{table*}
\begin{figure*}[ht]
\centering
\subfigure[HR]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_hr}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/HR}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/HR}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/HR}
\end{minipage}
}
\subfigure[Bicubic]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_bic}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/Bicubic}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/Bicubic}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/Bicubic}
\end{minipage}
}
\subfigure[CARN-M]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_carn-m}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/CARN-M}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/CARN-M}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/CARN-M}
\end{minipage}
}
\subfigure[FALSR-B]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_FALSR-B}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/FALSR-B}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/FALSR-B}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/FALSR-B}
\end{minipage}
}
\subfigure[PAN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_PAN}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/PAN}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/PAN}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/PAN}
\end{minipage}
}
\subfigure[Ours]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x2_set14_OURS}
\includegraphics[width=1.1in]{comparison_result/x2/Set14/Ours}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_072/OURS}\\
\includegraphics[width=1.1in]{comparison_result/x2/Urban100/img_092/OURS}
\end{minipage}
}
\caption{Visual comparisons among SOTA lightweight models in $\times 2$ image super-resolution. The test image patches are from Set14 and Urban100. Note that the results of FALSR-B are based on our test with the pre-trained model which is released by the authors. The results of CARN-M and PAN are directly taken from the authors' release. Our method has better reconstruction performance on image details, such as thin stripes on the clothes and edges of windows.}
\label{Fig.main}
\vspace{-0.2cm}
\subfigure[HR]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_hr}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/HR}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/HR}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/HR}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/HR}
\end{minipage}
}
\subfigure[Bicubic]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_bic}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/Bicubic}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/Bicubic}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/Bicubic}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/Bicubic}
\end{minipage}
}
\subfigure[CARN-M]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_carn-m}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/CARN-M}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/CARN-M}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/CARN-M}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/CARN-M}
\end{minipage}
}
\subfigure[RFDN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_RFDN}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/RFDN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/RFDN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/RFDN}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/RFDN}
\end{minipage}
}
\subfigure[PAN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_PAN}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/PAN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/PAN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/PAN}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/PAN}
\end{minipage}
}
\subfigure[Ours]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.x4_OURS}
\includegraphics[width=1.1in]{comparison_result/x4/Set14/OURS}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_024/OURS}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/img_076/OURS}\\
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/img_096/OURS}
\end{minipage}
}
\caption{Visual comparisons among SOTA lightweight models in $\times 4$ image super-resolution. The test image patches are from Set14 and Urban100. Note that the results of RFDN are based on our test with the pre-trained model which is officially released by the authors. Our method shows better reconstruction performance and less deformation on image details such as texts and stripes.}
\label{Fig.main_X4}
\vspace{-0.6cm}
\end{figure*}
\subsection{Implementation Details}
We merge the DIV2K and Flickr2K datasets and denote the combined set as the DF2K dataset, with a total of 3450 images.
During the searching stage, we split the dataset into 3000 images as the training dataset {\small$\mathbb{D}_{train}$} and the remaining 450 images as the validation dataset {\small$\mathbb{D}_{valid}$}. We augment the datasets by random rotations of $90^\circ$, $180^\circ$, $270^\circ$, and horizontal flips.
We perform $\times 2$ SR for searching the neural network architectures and apply the searched models to $\times2, \times 3, \times 4$ SR tasks.
Both the searching and training stages are performed on a single NVIDIA Tesla V100 GPU.
More implementation details can be found in {\bf supplementary material}.
\subsection{Searched Results}
The searched network structure and cell structure are shown in Figure \ref{fig:beta21}. For clarity, we omit the connections between each block and the end of the model in the figure. The searched cell is made up of a $1\! \times\! 1$ convolution layer, a $7\! \times\! 7$ separable convolution layer, a $5 \times 5$ separable convolution layer, an ESA block, and residual connections with an information distillation mechanism. Since the parameters and FLOPs of the $1 \!\times\! 1$ convolution, $5 \!\times\! 5$ separable convolution, and $7\!\times\!7$ separable convolution are all fewer than those of the original $3\!\times\!3$ convolution, we obtain a much smaller (nearly half the original size) model compared with the vanilla RFDN \cite{liu2020rfdn}.
\subsection{Comparison with State-of-the-art Methods}
We compare the DLSR model with state-of-the-art lightweight SR methods on two commonly-used metrics: peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) \cite{wang2004image} on the Y channel of the transformed YCbCr space. We also present the number of the parameters and number of the operations (Multi-Adds) to show the model complexity. Multi-Adds is calculated on 720p ($1280 \times 720$) HR images. The $\times2, \times 3, \times 4 $ super-resolution results are shown in Table \ref{tab:benchmark_result}, and the best results are highlighted.
The visual results of $\times2$ and $\times4$ super-resolution are shown in Figures \ref{Fig.main} and \ref{Fig.main_X4}. Compared with DRRN \cite{tai2017image}, our method only takes $1\%$ of the Multi-Adds, while achieving a 1dB PSNR improvement on the Urban100 dataset in the $\times 2, \times 3$ SR tasks, a 0.7dB PSNR improvement on the Set5 dataset in the $\times 4$ SR task, and 0.3-0.7dB PSNR improvements on the other tasks. Our method surpasses other NAS-based methods like FALSR-B \cite{chu2019fast} and ESRN-V \cite{song2020efficient} by a large margin, with 0.2-0.4dB PSNR improvements with fewer parameters and Multi-Adds in most of the SR tasks (Table \ref{tab:benchmark_result}).
As shown in Table \ref{tab:GPU_days}, the search cost of our method is significantly less than NAS-based SR methods. FALSR-B \cite{chu2019fast} takes less than 3 days on 8 GPUs to execute their pipeline once. ESRN-V \cite{song2020efficient} takes around one day on 8 GPUs to execute their evolution procedure. Our method only takes around 2 days on one GPU.
Compared with hand-crafted lightweight models like IMDN \cite{Hui-IMDN-2019} and RFDN \cite{liu2020rfdn}, the DLSR method only takes about half the parameters while still outperforming them. Compared with PAN \cite{zhao2020efficient}, the most lightweight deep SR model in the AIM 2020 Efficient Super-Resolution challenge, our method is still able to outperform it with fewer Multi-Adds.
\begin{table}[H]
\vspace{-0.1cm}
\begin{small}
\caption{Searching cost of NAS-based SR methods.}
\begin{center}
\begin{tabular}{|c|c|}
\hline
NAS-based SR method & GPU days \\
\hline
FALSR \cite{chu2019fast} & 24 \\
ESRN \cite{song2020efficient} & 8 \\
\bf{DLSR(ours)}& \bf{2} \\
\hline
\end{tabular}
\label{tab:GPU_days}
\end{center}
\end{small}
\vspace{-0.7cm}
\end{table}
{
\begin{table*}
\begin{small}
\begin{center}
\caption{Comparison between the models with (DLSR) and without (DLSR-B) network-level connections.}
\label{tab:DLSR-B_comparison}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multirow{2}{2em}{Method} & \multirow{2}{2em}{Scale}&Set5&Set14&B100&Urban100\\
\cline{3-6}
& & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\
\hline
DLSR-B & $\times 2$ & 38.04/0.9606 & 33.63/0.9177 &32.20/0.9000 & 32.20/0.9293\\
DLSR & $\times 2$ & 38.04/0.9606&\bf{33.67}/0.9183&\bf{32.21/0.9002}&\bf{32.26}/0.9297\\
\hline
\hline
DLSR-B & $\times 4$ & 32.27/0.8959 & 28.67/0.7832 &27.60/0.7372 & 26.16/0.7885\\
DLSR & $\times 4$ & \bf{32.33}/0.8963&\bf{28.68}/0.7832&\bf{27.61}/0.7374&\bf{26.19}/0.7892\\
\hline
\end{tabular}
\end{center}
\end{small}
\vspace{-0.5cm}
\begin{small}
\begin{center}
\caption{Comparison results with different loss function configurations on benchmark datasets.}
\label{tab:loss_comparison}
\begin{tabular}[h]{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{7em}{Method} & \multirow{2}{2em}{Scale} & Params & Multi-Adds& Set5 & Set14 & B100 & Urban100 \\
\cline{5-8}
& &(K) &(G) & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\
\hline\hline
DLSR-L1 & $\times 2$ & 323 & \bf{68.1} &\bf{38.04/0.9606}&33.69/0.9185&32.20/0.9002&32.27/0.9297\\
DLSR-HFEN &$\times 2$ & 365 & 77.8 & 38.02/0.9605&\bf{33.78/0.9200}&\bf{32.21/0.9003}&\bf{32.34/0.9305}\\
DLSR & $\times 2$ & \bf{322} & \bf{68.1} & \bf{38.04/0.9606}&33.67/0.9183&{\bf32.21}/0.9002&32.26/0.9297\\
\hline
\end{tabular}
\end{center}
\end{small}
\vspace{-0.65cm}
\end{table*}
}
\begin{figure*}[ht]
\centering
\subfigure[HR]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\label{Fig.x2_HR}
\includegraphics[width=1.2in]{comparison_result/x2/Set14/HR}\\
\includegraphics[width=1.2in]{comparison_result/x2/Urban100/img_092/HR}\\
\end{minipage}
}
\subfigure[DLSR-L1]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\label{Fig.x2_l1}
\includegraphics[width=1.2in]{comparison_result/x2/Set14/l1}\\
\includegraphics[width=1.2in]{comparison_result/x2/Urban100/img_092/l1}\\
\end{minipage}
}
\subfigure[DLSR-HFEN]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\label{Fig.x2_HFEN}
\includegraphics[width=1.2in]{comparison_result/x2/Set14/beta23}\\
\includegraphics[width=1.2in]{comparison_result/x2/Urban100/img_092/beta23}\\
\end{minipage}
}
\subfigure[DLSR]{
\begin{minipage}[t]{0.18\linewidth}
\centering
\label{Fig.x2_DLSR}
\includegraphics[width=1.2in]{comparison_result/x2/Set14/Ours}\\
\includegraphics[width=1.2in]{comparison_result/x2/Urban100/img_092/OURS}\\
\end{minipage}
}
\vspace{-0.05cm}
\caption{Visual comparisons among the models trained with different loss configurations in $\times 2$ image super-resolution. DLSR-L1 model is searched and retrained only with L1 loss. DLSR-HFEN model is searched and retrained with L1 loss and HFEN loss. DLSR model is searched and retrained with L1 loss, HFEN loss, and parameter regularization.}
\label{Fig.comparision_loss}
\vspace{-0.5cm}
\end{figure*}
The visual comparison results show that our method achieves better performance in reconstructing image details such as thin stripes and the edges of text. In Figure \ref{Fig.main}, our DLSR method reconstructs the correct direction of the thin stripes on the clothes with high visual quality and successfully reconstructs the edge of the window, while other methods cannot. In Figure \ref{Fig.main_X4}, our method reconstructs the round edge of the character `O' and other stripe-like image details without distortion. In summary, the quantitative and visual results both demonstrate that our models outperform the state-of-the-art SR models on multiple datasets and scales with fewer parameters and Multi-Adds.
In addition, to compare our method with the Tiny Perceptual Super Resolution (TPSR) model \cite{lee2020journey}, a super lightweight SR model with 60K parameters, we cut the channel number of our DLSR model to 18 and denote the smaller model as DLSR-S. The $\times 2, \times 4$ SR comparison results are shown in Table \ref{tab:TPSR_comparison}. As our method is not based on generative adversarial networks (GAN), we compare it with the baseline model of the TPSR method, called TPSR-NOGAN. The results indicate that even with fewer Multi-Adds, our DLSR-S model still surpasses TPSR-NOGAN by a large margin in PSNR and SSIM.
\subsection{Ablation Studies}
First, we discuss the effectiveness of the network-level connections. To compare with DLSR, we apply our method to search for a baseline model on the cell-level search space only. After searching and retraining, we name this model DLSR-B. Coincidentally, DLSR-B has the same number of parameters and Multi-Adds as DLSR. The comparison results are presented in Table \ref{tab:DLSR-B_comparison}. The results show that the network-level connections improve the performance over DLSR-B. Thus, the proposed network-level connection search space is effective.
Second, we discuss the effectiveness of the designed loss function, which comprises three parts: the L1 loss, the HFEN loss, and the parameter loss. We conduct experiments on three different models: DLSR-L1, DLSR-HFEN, and DLSR. The DLSR-L1 model is searched and retrained only with the L1 loss. The DLSR-HFEN model is searched and retrained with the L1 loss and the HFEN loss. The DLSR model is searched and retrained with all three parts of the loss. The comparison results are presented in Table \ref{tab:loss_comparison} and Figure \ref{Fig.comparision_loss}. The results show that the DLSR model achieves a better trade-off between SR performance and model complexity, and that the HFEN loss contributes to a better visual effect with sharper image details.
In addition, more visual comparisons and ablation studies on verifying the search stability of our DLSR model can be found in {\bf supplementary material}.
\section{Conclusions}
In this work, we propose a novel {\bf D}ifferentiable neural architecture search approach to search for {\bf L}ightweight single image {\bf S}uper-{\bf R}esolution models on both the cell-level and the network-level, dubbed DLSR.
In addition, we design a novel loss function that considers distortion, high-frequency reconstruction, and lightweight regularization that jointly pushes the searching direction to explore a better lightweight SR model.
Experimental results show that our DLSR method can surpass both the hand-crafted and NAS-based SOTA lightweight SR methods in terms of PSNR and SSIM with fewer parameters and Multi-Adds.
\section{Experiment Details}
During the search stage, the high-resolution (HR) patch size is set to 64 and the minibatch size is set to 64. We optimize the $\theta$, $\alpha$, and $\beta$ parameters with the ADAM optimizer \cite{kingma2014adam} for $2 \times 10^5$ iterations.
For the parameter $\theta$, the learning rate is set to $3 \times 10^{-4}$, the momentum parameter and the exponential moving average parameter are set to (0.9, 0.999), and the weight decay is set to $10^{-8}$.
For the parameters $\alpha$ and $\beta$, the learning rate is set to $3 \times 10^{-4}$, the momentum parameter and the exponential moving average parameter are set to (0.5, 0.999), and the weight decay is set to $10^{-8}$.
The warm-up process takes $2 \times 10^4$ steps, during which only the parameter $\theta$ is updated. The learning rates of the warm-up process and the searching process are both set to $3 \times 10^{-4}$.
We save the genotypes of the searched models at about the $5 \times 10^4$th, $10^5$th, and $1.5 \times 10^5$th steps, when the distribution of the architecture parameters turns stable during searching.
The number of channels is set to 48 and the number of cells is set to 6. The hyper-parameter $\lambda$ is set to 1.0, $\mu$ to 0.2, and $\gamma$ to 0.2.
For retraining the searched networks, we use the whole DF2K dataset with the same data augmentation as in the searching stage. For $\times 2, \times 3, \times 4 $ super-resolution, the HR patch size is set to 128, 192, and 256, respectively.
We train our searched DLSR model with the ADAM optimizer \cite{kingma2014adam} using the same settings as for the optimization of the parameter $\theta$ during the searching stage. We train the model for $2 \times 10^6$ steps and set the minibatch size to 32.
The learning rate is initialized with $3 \times 10^{-4}$ and halved every $4 \times 10^5$ steps.
The weights of both the $\times 3$ and $\times 4$ super-resolution models are warm-started with the weights of the pre-trained $\times 2$ SR model. All the experiments are conducted in PyTorch 1.2 and Python 3.7.
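For reference, the stated retraining schedule maps directly onto the standard PyTorch scheduler; a minimal sketch (ours; the one-layer model is a stand-in for the searched network):
\begin{verbatim}
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4,
                             betas=(0.9, 0.999), weight_decay=1e-8)
# halve the learning rate every 4e5 steps
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=400000,
                                            gamma=0.5)
\end{verbatim}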
\section{Ablation Study}
In this section, we discuss the search stability of our method.
To demonstrate the effectiveness of our proposed DLSR method, we repeat the searching algorithm several times with different random-seed initializations and retrain the searched models, denoted as DLSR-a, DLSR-b, and DLSR-c. Table \ref{tab:stablity_comparison} shows the SR results of the lightweight SR models we have obtained. The results show that our proposed DLSR method is stable and effective.
\begin{table*}[ht]
\begin{small}
\begin{center}
\begin{tabular}[h]{|l|c|c|c|c|c|c|c|}
\hline
\multirow{2}{7em}{Method} & \multirow{2}{2em}{Scale} & Params & Multi-Adds& Set5 & Set14 & B100 & Urban100 \\
\cline{5-8}
& &(K) &(G) & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM & PSNR/SSIM \\
\hline\hline
DLSR-a & $\times 2$ & {\bf 309} & \bf{65} &38.04/{\bf 0.9606}&{\bf 33.68}/0.9181&32.20/0.9000&{\bf 32.21}/0.9290\\
DLSR-b &$\times 2$ & 328 & 69.3 & 38.04/{\bf 0.9606}&33.65/0.9180&{\bf 32.21/0.9002}&32.20/0.9290\\
DLSR-c & $\times 2$ & 341 & 72.5 & {\bf 38.05}/0.9605&{\bf 33.68/0.9183}&32.19/0.9000&{\bf 32.21/0.9292}\\
\hline
\end{tabular}
\end{center}
\caption{The results of the lightweight SR models searched with different random-seed initializations.}\label{tab:stablity_comparison}
\end{small}
\end{table*}
\section{More Visual Comparison Results}
More visual comparison results show that our DLSR model has superior performance compared with other lightweight SR models on image details. In Figure \ref{Fig.more_X4} and Figure \ref{Fig.more_X42}, our method reconstructs the stripe-like and lattice-like image details on the buildings without distortion.
\begin{figure*}[ht]
\centering
\subfigure[HR]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_hr}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/HR}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/HR}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/HR}\\
\end{minipage}
}
\subfigure[Bicubic]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_bic}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/Bicubic}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/Bicubic}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/Bicubic}\\
\end{minipage}
}
\subfigure[CARN-M]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_carn-m}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/CARN-M}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/CARN-M}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/CARN-M}\\
\end{minipage}
}
\subfigure[RFDN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_RFDN}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/RFDN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/RFDN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/RFDN}\\
\end{minipage}
}
\subfigure[PAN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_PAN}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/PAN}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/PAN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/PAN}\\
\end{minipage}
}
\subfigure[Ours]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_OURS}
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_042/Ours}\\
\includegraphics[width=1.1in]{comparison_result/x4/Urban100/nimg_061/Ours}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_062/Ours}\\
\end{minipage}
}
\caption{Visual comparisons among SOTA lightweight models in $\times 4$ image super-resolution. The test image patches are from Urban100. Note that the results of RFDN are based on our test with the pre-trained model which is officially released by the authors. Our method shows better reconstruction performance and less deformation on image details.}
\label{Fig.more_X4}
\centering
\subfigure[HR]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_hr2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/HR}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/HR}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/HR}
\end{minipage}
}
\subfigure[Bicubic]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_bic2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/Bicubic}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/Bicubic}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/Bicubic}
\end{minipage}
}
\subfigure[CARN-M]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_carn-m2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/CARN-M}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/CARN-M}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/CARN-M}
\end{minipage}
}
\subfigure[RFDN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_RFDN2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/RFDN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/RFDN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/RFDN}
\end{minipage}
}
\subfigure[PAN]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_PAN2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/PAN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/PAN}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/PAN}\\
\end{minipage}
}
\subfigure[Ours]{
\begin{minipage}[t]{0.15\linewidth}
\centering
\label{Fig.more_x4_OURS2}
\includegraphics[width=1.1in,height=1.2in]{comparison_result/x4/Urban100/nimg_089/Ours}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_093/Ours}\\
\includegraphics[width=1.1in,height=0.8in]{comparison_result/x4/Urban100/nimg_099/Ours}
\end{minipage}
}
\caption{Visual comparisons among SOTA lightweight models in $\times 4$ image super-resolution. The test image patches are from Urban100. Note that the results of RFDN are based on our test with the pre-trained model which is officially released by the authors. Our method shows better reconstruction performance and less deformation on image details.}
\label{Fig.more_X42}
\end{figure*}
\section{Introduction}
\label{sec:intro}
\vspace{-0.3cm}
Over the past decade, neural networks have shown remarkable performance in various fields~\cite{he2016deep, simonyan2014very}. In particular, this structure transforms high-dimensional data into a low-dimensional latent space by taking advantage of non-linear transformations and vast numbers of parameters. However, alongside its popularity, this capability also multiplies the computational load. The increasing computational and storage demands therefore inevitably reduce its applicability on portable devices.
In recent years, several important studies have aimed to alleviate these limitations at the cost of a slight performance decrease relative to full-precision counterparts. In general, compressing full-precision parameters into binarized ones plays a prominent role because of its simplicity and efficiency~\cite{soudry2014expectation, courbariaux2015binaryconnect, courbariaux2016binarized, rastegari2016xnor, hubara2017quantized, zhou2016dorefa}. Mathematically, this type of method minimizes the approximation error between full and binary weights as follows:
\begin{eqnarray}
\label{eqn:bc1}
J(\alpha, \mathbf{W}^b) = {|| \mathbf{W} - \alpha \mathbf{W}^b ||}, \qquad \alpha^{*}, \mathbf{W}^{b*} = \underset{\alpha, \mathbf{W}^b}{\mathrm{argmin}}\; J(\alpha, \mathbf{W}^b)
\end{eqnarray}
\noindent Here, $\mathbf{W}$ and $\mathbf{W}^b$ indicate the full- and binary-precision trainable parameters, respectively. Moreover, $\alpha$ denotes a scale factor whose optimal solution equals the mean of the absolute values of the full-precision weights, $\mathbb{E}[|\mathbf{W}|]$. In the end, this approximation leads to $\mathbf{W}\approx\alpha \mathbf{W}^b$ with a significant quantization error. Observe that this quantization error is even worse for shallow networks, since the allowable range for the parameters is large. This issue eventually weakens the stability of the solutions. Additionally, error propagation for binary-precision parameters can be severe due to the limited operations in the back-propagation steps (i.e., multiplication with binarized weights instead of double-precision ones). This limitation causes additional problems for converging to optimal solutions.
In this work, the quantization error in Eq.~\ref{eqn:bc1} is reduced by an iterative quantization step. In particular, full-precision weights are iteratively quantized in several steps, and a binary representation is computed with a negligible increase in the computational load. As noticed, this scheme intuitively shares an assumption similar to hierarchical clustering, which tries to create a hierarchy of clusters. Similarly, this step helps improve the stability of the binarized weights because of the better partitions. Indeed, our bitwise operations induce fast inference, similar to other quantized neural networks, by consuming less power and memory (approx. $16\times$) on the devices. However, performance can degrade slightly due to the limited bit-length compared to full-precision counterparts.
In addition, a regularization term is introduced that is applicable to all threshold-based binarized networks. This term penalizes parameters that are far from the thresholds at which binary transitions occur, so that the parameters are concentrated close to the binary transitions. It favors quickly changing weights in binary precision in order to observe their effects on the performance and to avoid overfitting the parameters.
The rest of the paper is organized as follows. First, the related work on binarized weight networks is summarized. Later, the concept of binarized weight networks and our contributions are described. Lastly, we present the test results and final remarks.
\vspace{-0.3cm}
\section{Related Work}
\label{sec:related}
\vspace{-0.3cm}
Although there is a large body of studies related to the compression of computation and memory requirements for deep neural networks~\cite{iandola2016squeezenet}, binary-precision networks play a crucial role due to their efficiency and simplicity~\cite{soudry2014expectation, courbariaux2015binaryconnect}.
In baseline methods, all trainable parameters are binarized with a quantizer. Later, the backpropagation step is also realized on these binary-precision weights. \cite{rastegari2016xnor} shows that the performance for binarized-parameters can be further improved by adding a single scale factor. Moreover, activation and parameters are also converted to binary-precision to achieve a binarized inference~\cite{rastegari2016xnor, hubara2017quantized, zhou2016dorefa}.
Note that these methods limit weights to \{-1, 1\} (with some scale factor) and decrease the sparsity property of neural networks. \cite{li2016ternary, zhu2016trained} explain that the use of zero weights, i.e., \{-1, 0, 1\}, contributes to performance by promoting sparsity in the solutions. In this way, more robust representations can be computed. \cite{zhu2016trained} also shows that multiple scale factors obtain better results than a single scale factor. Recently, SBNN prunes some of the NN layers in a binarized network and refines the network with a softsign function~\cite{wu2020sbnn}. The works most similar to ours~\cite{yang2019quantization, zhang2018lq} quantize continuous parameters to a set of integer numbers with multiple binary quantizers. However, this can lead to overly quantized representations where instability occurs. We refer the reader to~\cite{qin2020binary, roth2020resource} for surveys of binary neural networks.
Lastly, applications of this methodology have proliferated across different tasks~\cite{ma2019efficient, chiang2020deploying}.
\vspace{-0.3cm}
\section{Binarized Weight Error Networks}
\label{sec:proposal}
\vspace{-0.3cm}
\begin{figure}
\centering
\includegraphics[scale=0.13]{flow.png}
\vspace{-0.3cm}
\caption{The flow of the conventional method (first) and our method (second) for binarized networks. Unlike the conventional flow, our method quantizes the full-precision parameters $W$ by accounting for the approximation error $E$.}
\vspace{-0.4cm}
\label{fig:f0}
\end{figure}
The motivation of binarized neural networks is to minimize the error between full-precision $\mathbf{W}$ and binary-precision $\mathbf{W}^b$ weights as formulated in Eq.~\ref{eqn:bc1}. Generally, two binarized network structures come into prominence in the literature.
In binary-weight networks, an approximation problem is solved by assigning a binary weight as:
\begin{eqnarray}
\label{eqn:bc2}
\mathbf{W}^b = \begin{cases}
+1, & \text{if } \mathbf{W}\geq 0 \\
-1, & \text{if } \mathbf{W}<0 \end{cases}
\end{eqnarray}
Implicitly, this overall solution corresponds to the operation $\mathbf{W}^b=\text{sign}(\mathbf{W})$. Note that $\mathbf{W}^b$ takes values only in $\{-1,1\}$, and the responses of the weights are always active since there is no zero operation. This assumption conflicts with the conclusions reached about filter characteristics in~\cite{krizhevsky2012imagenet} (i.e., Gabor-type filtering and sparsity).
On the contrary, ternary-weight networks compute a representation with the threshold function in Eq.~\ref{eqn:bc2t}, since there is no deterministic solution for binary weights, as claimed in~\cite{hwang2014fixed}:
\begin{eqnarray}
\label{eqn:bc2t}
\mathbf{W}^b = \begin{cases}
+1, & \text{if } \mathbf{W}>\Delta \\
0, & \text{if } |\mathbf{W}|<\Delta \\
-1, & \text{if } \mathbf{W}<-\Delta \end{cases}
\end{eqnarray}
\noindent Here, this assumption is exploited to approximate the threshold value $\Delta$ as $0.66\cdot\mathbb{E}[|\mathbf{W}|]$. This model provides more representation capacity because of the sparsity, while the stability is also increased.
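For concreteness, both quantizers can be written in a few lines of PyTorch. The following is a minimal sketch under the scale and threshold conventions above; the function names are ours and are not taken from the officially released code.
\begin{verbatim}
import torch

def binary_quantize(w):
    # Binary rule: w >= 0 -> +1, w < 0 -> -1, with optimal scale E[|W|].
    alpha = w.abs().mean()
    w_b = torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))
    return alpha, w_b

def ternary_quantize(w):
    # Ternary rule: entries inside (-Delta, Delta) are zeroed,
    # with threshold Delta = 0.66 * E[|W|].
    delta = 0.66 * w.abs().mean()
    w_b = torch.zeros_like(w)
    w_b[w > delta] = 1.0
    w_b[w < -delta] = -1.0
    return delta, w_b
\end{verbatim}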
\vspace{-0.3cm}
\subsection{Binarized Weight Error Networks}
\label{ssec:weighterr}
\vspace{-0.1cm}
As expected, quantization errors for binary weights are moderately high compared to ternary weights. However, we believe there is room for additional steps to reduce this error.
In particular, the idea emphasized in our work is that an iterative approach to the binarization process can increase the capacity of the representations more than quantizing the full-precision parameters in a single step. For this purpose, the problem definition in Eq.~\ref{eqn:bc1} is rewritten as:
\begin{eqnarray}
\label{eqn:bc3}
J_i(\alpha_i, \mathbf{W}_i^b) = {|| \mathbf{E}_i - \alpha_i \mathbf{W}_i^b ||}^2, \qquad \alpha_i^{*}, \mathbf{W}_i^{b*} = \underset{\alpha_i, \mathbf{W}_i^b}{\mathrm{argmin}} \; J_i(\alpha_i, \mathbf{W}_i^b)
\end{eqnarray}
\noindent Here, $\mathbf{E}_i$ indicates the quantization error left over from the previous iteration step $i-1$, i.e., $\mathbf{E}_i = \mathbf{E}_{i-1} - \alpha_{i-1} \mathbf{W}_{i-1}^b$, with $\mathbf{E}_0=\mathbf{W}$ in the first iteration. When the total number of iterations is set to 2, the final representation can be approximated as:
\begin{eqnarray}
\label{eqn:bc4}
\mathbf{W}^b = \alpha_{opt} \cdot (\mathbf{W}_0^b + \mathbf{W}_1^b)
\end{eqnarray}
\noindent Here, $\alpha_{opt}$ indicates the optimum scale value for this conversion, which needs to be recalculated. The flows of the conventional method and our method for binarized networks are illustrated in Fig.~\ref{fig:f0}.
An observation for Eq.~\ref{eqn:bc4} is that using two binary-weight models for $\mathbf{W}_0^b$ and $\mathbf{W}_1^b$ implicitly reproduces a ternary-weight model. This result is critical because it confirms the mathematical consistency of our method. Note that this step also shows that the binary representation can be improved further with simple replacements in the formulation.
Second, we compute the optimum scale value $\alpha_{opt}$ as:
\begin{eqnarray}
\label{eqn:scale}
\alpha_{opt} = 0.75\cdot \alpha_1 - 0.25\cdot \alpha_2
\end{eqnarray}
\noindent where $\alpha_1$ and $\alpha_2$ are equal to $\mathbb{E}[|\mathbf{E}_0|]$ and $\mathbb{E}[|\mathbf{E}_1|]$, respectively. If the binary and ternary models are sequentially applied by using Eq.~\ref{eqn:bc3} (a scheme we call BT), the representation takes one of the values $\{-2, -1, 0, 1, 2\}$. Hence, an ideal scale value must lie between the lower bound $(\alpha_1 - \alpha_2)$ and the upper bound $0.5\cdot(\alpha_1 + \alpha_2)$ (the factor $0.5$ stems from the maximum binary range, which is $2$). Intuitively, the mean of these two bounds yields the optimum scale value given in Eq.~\ref{eqn:scale}.
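Reusing the two quantizers sketched earlier, a minimal sketch of the resulting BT scheme reads as follows; applying the ternary threshold to the residual error (rather than to $\mathbf{W}$ itself) is our reading of the text.
\begin{verbatim}
def bt_quantize(w):
    # Step 0: binary quantization of the full-precision weights (E_0 = W).
    alpha1, w0_b = binary_quantize(w)      # alpha1 = E[|E_0|]
    # Step 1: ternary quantization of the residual error E_1.
    e1 = w - alpha1 * w0_b
    alpha2 = e1.abs().mean()               # alpha2 = E[|E_1|]
    _, w1_b = ternary_quantize(e1)
    # w0_b + w1_b takes values in {-2,-1,0,1,2}; a single scale, the mean
    # of the bounds (alpha1 - alpha2) and 0.5*(alpha1 + alpha2), is used.
    alpha_opt = 0.75 * alpha1 - 0.25 * alpha2
    return alpha_opt * (w0_b + w1_b)
\end{verbatim}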
\vspace{-0.3cm}
\subsection{Transition Regularization}
\label{ssec:weightreg}
\vspace{-0.1cm}
As explained in~\cite{li2016ternary}, even if the full-precision parameters are sampled from either a normal or a uniform distribution, the allowable parameter range can still be large, so a high approximation error is expected. Furthermore, parameter updates in binary precision are problematic, since accurate error propagation is nearly impossible and the parameters easily overfit.
We propose a novel regularization term that forces parameters to concentrate around the binary transitions and penalizes those that are far from these points. For clarity, the transition points correspond to $0$ for binary-weight models, $\{-\Delta, \Delta\}$ for ternary-weight models, and the $\alpha_{opt}$ values for our models. Hence, the effects of parameter changes in binary precision can be learned more quickly with this regularization term at train time.
To favor these changes, a simple and effective trick is used. Mathematically, the full-precision parameters $\mathbf{W}$ are corrupted with a random noise $\mathbf{\tilde{W}}=\mathbf{W} + 0.1 \cdot \mathcal{N}(0, \alpha)$. Later, these noisy weights are quantized to $\mathbf{\tilde{W}}^b$ in the same way as in Eq.~\ref{eqn:bc1}. Note that near the transition points, the binarized representations of the corrupted parameters differ from the original versions:
\begin{eqnarray}
\label{eqn:bc5}
\mathcal{L} = \mathcal{L}_{pb} -\lambda ||\mathbf{W}^b - \mathbf{\tilde{W}}^b||_1
\end{eqnarray}
\begin{figure}
\centering
\vspace{-0.3cm}
\includegraphics[scale=1]{cifarsvhn.png}
\vspace{-0.3cm}
\caption{Visual samples from Fashion (Upper-Left), ImageNet2012 (Upper-Right), Cifar10 (Lower-Left) and SVHN (Lower-Right) datasets.}
\vspace{-0.3cm}
\label{fig:f1}
\end{figure}
\noindent Here, $\lambda$ is a coefficient that scales the contribution of the regularization term to the overall loss function together with a task-specific loss function $\mathcal{L}_{pb}$. This coefficient is empirically set to $0.1$. From this equation, the absolute distance between the actual and corrupted binary-precision weights is expected to be maximized. Hence, the full-precision parameters are pushed close to these transition points. This notion is similar to Dropout~\cite{srivastava2014dropout} regularization, where the parameter space is intentionally corrupted.
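As a sketch (variable names are ours, and we read $\mathcal{N}(0, \alpha)$ as Gaussian noise with standard deviation $\alpha$, which is an assumption), the penalty term can be computed as:
\begin{verbatim}
def transition_penalty(w, quantize):
    # Quantize the clean weights and a noise-corrupted copy with the same
    # threshold-based quantizer (e.g., binary_quantize or ternary_quantize).
    _, w_b = quantize(w)
    alpha = w.abs().mean()
    w_tilde = w + 0.1 * alpha * torch.randn_like(w)
    _, wb_tilde = quantize(w_tilde)
    # Weights near a transition point flip under the noise, so the L1 term
    # is large there; the total loss L = L_pb - lambda * penalty (lambda=0.1)
    # then pushes full-precision weights toward the transitions.
    return (w_b - wb_tilde).abs().sum()
\end{verbatim}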
In the experiments, we will show that this term is more useful for classification tasks than for inverse problems. The main reason, as explained in~\cite{erin2015deep}, is that regression tasks can lead to holes due to their continuous solution spaces, so that sparsity cannot be maintained correctly in such problems.
\vspace{-0.3cm}
\subsection{Implementation Detail}
\label{ssec:impl}
\vspace{-0.1cm}
For the implementation details\footnote{https://github.com/savasozkan/BofT}, each model is trained with the Adam stochastic optimizer. The mini-batch size is set to 128, and learning rate scaling is adopted. In the classification task, VGG-6 (K) with batch normalization is used for acceleration and parameter regularization (i.e., 2×(K-C3) + MP2 + 2×(2×K-C3) + MP2 + 2×(4×K-C3) + MP2 + 8×K-FC + Softmax, where K is the number of filters). Here, ``K-C3'' is a convolution layer with K filters of a $3\times3$ kernel. Also, ``MP2'' and ``K-FC'' denote a max pooling with stride $2$ and a fully-connected layer with K filters, respectively. Similarly, an ESPCN-style architecture with a skip-connection is utilized for the super-resolution and denoising tasks.
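To make the compact layer notation concrete, the classifier can be assembled as in the following hedged sketch (the input resolution, and hence the flattened feature size, is our assumption):
\begin{verbatim}
import torch.nn as nn

def vgg6(k, num_classes=10):
    def conv(cin, cout):  # "K-C3" block: 3x3 conv + batch norm + ReLU
        return [nn.Conv2d(cin, cout, 3, padding=1),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
    layers = (conv(3, k) + conv(k, k) + [nn.MaxPool2d(2)]
              + conv(k, 2*k) + conv(2*k, 2*k) + [nn.MaxPool2d(2)]
              + conv(2*k, 4*k) + conv(4*k, 4*k) + [nn.MaxPool2d(2)])
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(4*k * 4*4, 8*k),   # 32x32 input assumed
                         nn.Linear(8*k, num_classes)) # softmax via the loss
\end{verbatim}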
\vspace{-0.3cm}
\section{Experiments}
\label{sec:experiments}
\vspace{-0.3cm}
Experiments are conducted on two sets of tasks, visual classification and visual inverse problems, to show the superiority of our method. The performance of our model (BT) is compared with the binary-weight (BWN)~\cite{rastegari2016xnor}, ternary-weight (TWN)~\cite{li2016ternary}, trained ternary quantization (TTQ)~\cite{zhu2016trained} and LQ-Nets (LQ)~\cite{zhang2018lq} models, which are selected as baselines. Note that all results are obtained by running their publicly available code ourselves. For classification tasks, the mean average precision (MAP)@1 metric is used, while PSNR scores are calculated for the super-resolution and denoising tasks. For clarity, the two problem sets are discussed in two individual sections.
\vspace{-0.3cm}
\subsection{Visual Classification}
\label{ssec:expclass}
\begin{table}[t]
\begin{center}
\vspace{-0.1cm}
\caption{Comparisons with baselines on ImageNet2012 (INet), Fashion, Cifar10 and SVHN datasets. MAP@1 scores are reported. Best results for binary precision are bold.}
\scalebox{0.92}{
\begin{tabular}{c|c|c*{4}{c}|c}
\hline \hline
& Arch. & Full & BWN & TWN & TTQ & LQ & BT(ours) \\
\hline
\parbox[t]{1mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{INet}}}
& K=64 & 42.81 & 35.30 & 38.25 & 37.76 & 39.01 & \textbf{40.21} \\
& K=128 & 44.13 & 40.08 & 42.59 & 41.18 & 42.65 & \textbf{42.80} \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Fashion}}}
& K=16 & 93.48 & 92.51 & 93.20 & 93.32 & 93.34 & \textbf{93.41} \\
& K=32 & 94.16 & 93.28 & 93.97 & 93.91 & \textbf{94.04} & 94.03 \\
& K=64 & 94.56 & 94.16 & 94.30 & 94.18 & 94.29 & \textbf{94.55} \\
& K=128 & 94.58 & 94.39 & 94.32 & 94.34 & 94.45 & \textbf{94.51} \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Cifar10}}}
& K=16 & 87.62 & 78.70 & 82.94 & 83.83 & \textbf{84.67} & 84.61 \\
& K=32 & 90.64 & 86.45 & 88.37 & 88.53 & 88.93 & \textbf{89.49} \\
& K=64 & 92.89 & 90.35 & 91.67 & 91.02 & 91.90 & \textbf{92.01} \\
& K=128 & 93.58 & 92.06 & 92.39 & 92.23 & 92.85 & \textbf{92.92} \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{SVHN}}}
& K=16 & 94.14 & 93.15 & 93.44 & 93.32 & 94.04 & \textbf{94.09} \\
& K=32 & 94.93 & 94.32 & 94.84 & 94.69 & 94.83 & \textbf{94.88} \\
& K=64 & 95.43 & 95.10 & 95.22 & 95.29 & 95.31 & \textbf{95.37} \\
& K=128 & 95.76 & 95.17 & 95.28 & 95.65 & \textbf{95.84} & 95.74 \\
\hline \hline
\end{tabular}}
\label{tab:t2}
\vspace{-0.7cm}
\end{center}
\end{table}
Experimental results are reported on the Fashion, ImageNet2012 (64×64), Cifar10 and SVHN datasets. The Fashion, Cifar10 and SVHN datasets have 10 object/character classes, while ImageNet2012 has 1000 object classes. Visual samples from these datasets are illustrated in Fig.~\ref{fig:f1}. Random crop, padding and cutout policies are exploited for data augmentation. Moreover, the softmax cross-entropy loss is used as the task-specific loss function $\mathcal{L}_{pb}$. Note that all trainable parameters, excluding the fully-connected layers, are binarized in our experiments.
First, we compare our method with the baseline models; the results are presented in Table~\ref{tab:t2}. As observed, our method outperforms all of its binary-precision counterparts. This result validates that it improves the quantization performance by considering the approximation error. Especially for ImageNet2012 (INet), the increase is significant, and the proposed model yields results close to the full-precision architecture. Moreover, as expected, the performance saturates for all binarized models when the number of parameters increases.
\begin{table}
\begin{center}
\caption{Impact of transition regularization term on ImageNet2012 (INet), Fashion, Cifar10 and SVHN datasets. MAP@1 scores are reported.}
\scalebox{0.92}{%
\begin{tabular}{c|c|c|c*{2}{c}}
\hline \hline
& Arch. & Full & BWN & TWN & BT(ours) \\
\hline
\parbox[t]{1mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{INet}}}
& K=64 & 42.81 & 40.29 & 38.45 & 40.97 \\
& K=128 & 44.13 & 35.60 & 42.82 & 43.22 \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Fashion}}}
& K=16 & 93.48 & 93.11 & 93.36 & 93.45 \\
& K=32 & 94.16 & 93.76 & 94.08 & 94.12 \\
& K=64 & 94.56 & 94.34 & 94.42 & 94.55 \\
& K=128 & 94.58 & 94.45 & 94.50 & 94.55 \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Cifar10}}}
& K=16 & 87.62 & 79.42 & 83.42 & 85.03 \\
& K=32 & 90.64 & 86.78 & 88.52 & 90.12 \\
& K=64 & 92.89 & 90.57 & 91.92 & 92.12 \\
& K=128 & 93.58 & 92.26 & 92.57 & 92.99 \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{SVHN}}}
& K=16 & 94.14 & 93.67 & 93.82 & 94.12 \\
& K=32 & 94.93 & 94.52 & 94.90 & 94.88 \\
& K=64 & 95.43 & 95.28 & 95.34 & 95.40 \\
& K=128 & 95.76 & 95.48 & 95.54 & 95.72 \\
\hline \hline
\end{tabular}}
\label{tab:t3}
\vspace{-0.7cm}
\end{center}
\end{table}
Finally, the impact of the transition regularization term is reported in Table~\ref{tab:t3}. The results confirm that this term improves all threshold-based binarized weight models. In particular, the improvements reach up to $1\%$ for the shallower networks (i.e., lower K) compared to the results in Table~\ref{tab:t2}. The reason is that when the number of parameters increases, the parameters are automatically concentrated in a narrow range, which has a positive impact on the quantization.
\vspace{-0.3cm}
\subsection{Visual Inverse Problems}
\label{ssec:expres}
\vspace{-0.1cm}
Experiments are carried out on two well-known inverse problem tasks: visual super-resolution and visual denoising. At train time, images from the COCO val2017 dataset are used for both tasks. For super-resolution, these images are downscaled by factors of 2, 3, and 4 to simulate changes in resolution. At test time, the Set5, Set14, Urban, and BSD100 datasets are used. For the denoising task, additive zero-mean Gaussian noise with different sigma values is applied to the images, and tests are performed on the Set5 and Set14 datasets.
\begin{table}
\begin{center}
\caption{Super-resolution results on different datasets with various scale factors. PSNR scores are reported.}
\scalebox{0.92}{%
\begin{tabular}{c|c|c*{3}{c}|c}
\hline \hline
& Dataset & Full & Bicubic & BWN & TWN & BT(ours) \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Scale 2}}}
& Set5 & 36.61 & 33.68 & 36.00 & 36.03 & 36.63 \\
& Set14 & 32.58 & 30.24 & 32.14 & 32.13 & 32.42 \\
& Urban & 29.51 & 26.61 & 28.91 & 28.83 & 29.31 \\
& BSD & 31.48 & 29.48 & 31.08 & 31.13 & 31.28 \\
\hline
\parbox[t]{1mm}{\multirow{3}{*}{\rotatebox[origin=c]{90}{Scale 3}}}
& Set5 & 33.11 & 30.43 & 32.61 & 32.67 & 32.95 \\
& Set14 & 29.46 & 27.54 & 29.13 & 29.12 & 29.32 \\
& BSD & 28.38 & 27.14 & 28.21 & 28.25 & 28.34 \\
\hline
\parbox[t]{1mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Scale 4}}}
& Set5 & 30.82 & 28.45 & 30.35 & 30.41 & 30.72 \\
& Set14 & 27.70 & 26.01 & 27.38 & 27.43 & 27.66 \\
& Urban & 24.81 & 23.12 & 24.38 & 24.49 & 26.69 \\
& BSD & 27.01 & 25.91 & 26.73 & 26.81 & 26.97 \\
\hline \hline
\end{tabular}}
\label{tab:t4}
\vspace{-0.7cm}
\end{center}
\end{table}
First, an input image is converted to the YUV color space for both tasks, and the Y channel is used in both the train and test phases. Our NN architecture is fully convolutional: it takes an input image and applies consecutive convolution layers with residual-based learning, as explained in~\cite{zhang2017beyond}. At the output, the features are upsampled with a subpixel layer~\cite{shi2016real} for super-resolution (i.e., (64-C3) + 2×(64-C3) + 2×(64-C3) + SubPixel + (1-C3)). Note that the first and last layers are full-precision, while the rest is binarized.
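A hedged sketch of this head is given below; the channel expansion before the pixel shuffle (needed so the shuffle works for any scale factor) and the exact placement of the skip-connection are implementation assumptions on our part.
\begin{verbatim}
import torch.nn as nn

class SRNet(nn.Module):
    def __init__(self, scale):
        super().__init__()
        self.head = nn.Conv2d(1, 64, 3, padding=1)   # full precision
        body = []
        for _ in range(4):                           # 2x(64-C3) + 2x(64-C3)
            body += [nn.Conv2d(64, 64, 3, padding=1),
                     nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)             # binarized in the paper
        self.up = nn.Sequential(                     # subpixel upsampling
            nn.Conv2d(64, 64 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))
        self.tail = nn.Conv2d(64, 1, 3, padding=1)   # full precision

    def forward(self, y):                            # y: the Y channel
        f = self.head(y)
        f = f + self.body(f)                         # residual-style learning
        return self.tail(self.up(f))
\end{verbatim}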
Experimental results for super-resolution are reported in Table~\ref{tab:t4}. Our method outperforms all other baselines. In particular, for small scale factors, only negligible performance drops are observed compared to the full-precision networks. The reason is that the allowable parameter range is expected to be small for small scale factors, and the sparsity is decreased.
\begin{table}
\begin{center}
\caption{Denoising results on different datasets with various sigma values. PSNR scores are reported.}
\scalebox{0.92}{%
\begin{tabular}{c|c|c*{3}{c}|c}
\hline \hline
& Dataset & Noisy & Full & BWN & TWN & BT(ours) \\
\hline
\parbox[t]{1mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{0.05}}}
& Set5 & 26.20 & 33.58 & 33.06 & 33.03 & 33.59 \\
& Set14 & 26.14 & 32.41 & 31.62 & 31.64 & 32.37 \\
\hline
\parbox[t]{1mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{0.1}}}
& Set5 & 20.26 & 29.73 & 29.63 & 29.42 & 29.71 \\
& Set14 & 20.17 & 28.89 & 28.01 & 28.07 & 28.91 \\
\hline
\parbox[t]{1mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{0.2}}}
& Set5 & 14.65 & 26.41 & 26.05 & 25.53 & 26.55 \\
& Set14 & 14.37 & 25.19 & 24.60 & 24.29 & 25.04 \\
\hline \hline
\end{tabular}}
\label{tab:t5}
\vspace{-0.7cm}
\end{center}
\end{table}
Denoising results are shown in Table~\ref{tab:t5}. From the results, the proposed method achieves slightly better performance even compared to full-precision networks. Residual-based learning eventually promotes robustness in binarized networks, since the parameter range (i.e., the additive noise varies in $[0.05, 0.2]$) is heavily reduced. Hence, binarized weights can converge to better solutions than in the other tasks.
As indicated, the proposed regularization term yields similar or slightly worse performance for inverse problems. The main reason is that the learning process may lead to holes in continuous feature space partitioning.
\vspace{-0.3cm}
\section{Conclusion}
\label{sec:conc}
\vspace{-0.3cm}
This paper proposes a novel model for binarized weight networks that achieves state-of-the-art performance compared to its counterparts. The model estimates a binary representation by taking the approximation error into account with an additional term. In addition, a novel regularization term is presented, which is useful for threshold-based binarized models in particular. This term favors parameters that concentrate near the binary transitions, so that overfitting of the binarized weights is avoided. In the experiments, we show that this term is significant for classification-related tasks and enhances the performance of binarized methods. Experiments on different benchmarks for different tasks are conducted to confirm the superiority of our contributions.
\vspace{-0.3cm}
\bibliographystyle{IEEEbib}
\section{Introduction and preliminaries}
The cosmic censorship hypothesis of Roger Penrose is one of the most outstanding conjectures in gravitational theory concerning the final fate of the gravitational collapse of a massive star.
This conjecture asserts that there can be no naked singularity in spacetime; in other words, there exists no family of future directed causal curves that, in the past, terminate at the singularity according to a given observer \cite{Joshi}.\\
The global hyperbolicity condition on a spacetime implies the existence of maximal length geodesics
joining causally related points and so, it is generally believed that a mathematical statement of the cosmic
censorship hypothesis is that spacetime should be globally hyperbolic. However, it seems that there are weaker
conditions that may play the role of global hyperbolicity \cite{Beem(1987)}. One such condition is pseudoconvexity.
The concepts of causal or null (or maximally null) pseudoconvexity are defined, by restricting the condition of
pseudoconvexity to causal or null (or maximal null) geodesics, respectively. Various implications of causal and
null pseudoconvexity on the geodesic structure of a Lorentzian manifold have been studied in several classical and recent papers
by Beem, Parker, Krolak, and Low \cite{Beem(1987), Beem(1992), Beem(1996), Low(1989), Low(1990)}.
In Ref. \cite{Low(1990)}, Low states the equivalence of the null pseudoconvexity of $M$ and the Hausdorffness of the space of null
geodesics, $\mathcal{N}$,
for a strongly causal spacetime $M$. A sufficient condition to ensure that $\mathcal{N}$ is Hausdorff is the
absence of naked singularities \cite[Proposition 2.2]{Low(1989)}. But, we can see in \cite[Example 2.2.12]{Bautista(2016)}
that it is not a necessary condition.
Recently, Borjian and Bahrampour introduced two types of naked
singularities called nakedly singular future boundary and nakedly singular past boundary \cite{Borjian}.\\
In this paper, by an example, we establish that a proposition proved by Beem and Krolak (see \cite[Proposition 1]{Beem(1992)})
is not valid as stated and requires a minor modification. Then, by the corrected version of this proposition,
we show that the existence of (at least) one of these naked singularities implies the failure of the Hausdorff property for
the space of null geodesics of $M$, which is Conjecture 3.1 of \cite{Borjian} for causally continuous spacetimes.
The converse of this conjecture is also addressed.\\
The causal ladder is a diagram that illustrates the strengths of the causality conditions (see \cite[Page 73, Fig. 3.3]{Beem(1996)} and
\cite{Min(2008)}).
As these various applications of pseudoconvexity show, it is useful to place this property within the causal ladder.
Recently, we proved that a strongly causal spacetime is causally simple if and only if it is maximally
null pseudoconvex \cite{Vatan}. On the other hand, there is a conjecture which says that strongly causal null
pseudoconvex spacetimes are causally simple \cite{Bautista(2017)}. We also prove this conjecture
for $n$-dimensional spacetimes ($n\geq 3$), prove its converse for
two-dimensional spacetimes, and then give a new hierarchy in the causal ladder (see Fig \ref{Fig1}).\\
Here, we briefly review some basic definitions and concepts from the topic of Lorentzian causality theory needed for the next sections.\\
In general relativity, a \textit{spacetime} is a pair $(M,g)$ where $M$ is a real, connected, $C^{\infty}$ Hausdorff manifold of dimension two
or more, and $g$ is a globally defined $C^{\infty}$ Lorentzian metric on $M$ of signature $(+,-,...,-)$. When there is no ambiguity, we use $M$
to refer to the spacetime $(M,g)$.\\
We say that a vector $v\in T_{p}M$ is \textit
{timelike} if $g_{p}(v,v)>0$, \textit {causal} if
$g_{p}(v,v)\geq0$, \textit {null}
if $g_{p}(v,v)=0$ and \textit {spacelike} if
$g_{p}(v,v)<0$.
A smooth curve is called a \textit{future directed timelike curve} if its tangent vector is everywhere a future pointing timelike vector; future directed (or past directed) spacelike, causal, and null curves are defined similarly.
If $p, q \in M$, then $q$ is in the \textit
{chronological future of $p$}, written $q\in I^{+}(p)$ or $p \prec q$,
if there is a timelike future pointing curve $\gamma:
[0, 1]\rightarrow M$ with $\gamma(0) = p$, and
$\gamma(1) = q$;
similarly, $q$ is in the \textit{causal future of $p,$}
written $q\in J^{+}(p)$ or $p \preceq q$, if there is a future pointing
causal curve from $p$ to $q$. For any point, $p$, the set $I^{+}(p)$ is open;
but $J^{+}(p)$ need not, in general, be closed. $J^{+}(p)$
is always a subset of the closure of $I^{+}(p)$.
To be more careful, it is useful to recall that \textit {the causal ladder} is a set of conditions
on spacetimes, where each of these implies its previous one \cite{Beem(1996)}:
\begin{enumerate}
\item A spacetime $M$ which has no point $p$ with
a non-degenerate causal curve that starts and ends at $p$
is said to satisfy \textit {the causal condition}.
\item A spacetime $M$ is said to be \textit{distinguishing} if, for all points $p$ and $q$ in $M$,
$I^{+}(p) = I^{+}(q)$ or $I^{-}(p) = I^{-}(q)$ implies $p = q$.
\item If each point $p$ has arbitrarily small neighborhoods
in which any causal curve intersects in a single component,
$M$ satisfies the condition of \textit {strong causality}.
\item A distinguishing spacetime $M$ is said to be \textit{causally continuous} at $p$ if the set-valued functions
$I^{+}$ and $I^{-}$ are both inner continuous and outer continuous at $p$.
The set-valued function $I^{\pm}$ is said to be \textit{inner continuous} at $p\in M$ if for each compact set
$K \subseteq I^{\pm}(p)$, there exists a neighborhood $U(p)$ of $p$ such that $K \subseteq I^{\pm}(q)$
for each $q \in U(p)$. The set-valued function $I^{\pm}$ is \textit{outer continuous} at $p$ if for each compact set $K$
in the exterior of $\overline{I^{\pm}(p)}$ there exists some neighborhood $U(p)$ of $p$ such that for each $q \in U(p)$, $K$
is in the exterior of $\overline{I^{\pm}(q)}$. We recall that $I^{\pm}$ is always inner continuous (see \cite[Proposition 4.3]{Min(2019)}).
\item If $M$ is distinguishing and $J^{\pm}(p)$ is closed for all $p\in M$,
then $M$ is \textit{causally simple}.
\item A spacetime $M$ is said to be \textit{globally
hyperbolic} if $M$ is strongly causal and $J^{+}(p)
\cap J^{-}(q)$ is compact for all $p$ and $q$ in $M$.
\end{enumerate}
\begin{definition}\label{def-maximal}
\cite{Beem(1992)} A future null geodesic ray $\gamma: [0,b) \rightarrow
M$ is said to be \textit{maximal} if $\gamma (t) \not \in I^{+}
(\gamma(0))$ for all $t\in(0,b)$. In other words, if
$t_{1}\neq t_{2}$ and $t_{1},t_{2}\in [0,b)$,
then $\gamma( t_{1})$ and $\gamma( t_{2})$ are
not chronologically related. Also, the cases $\gamma: [a,b] \rightarrow M$ and $\gamma:(a,b) \rightarrow M$ are
defined as maximal null geodesic segment and maximal null geodesic in the same manner.
\end{definition}
A spacetime $(M, g)$ is said to be \textit{pseudoconvex} if and only if given any compact set $K$ in $M$,
there exists always a larger compact set $K^{\ast}$ such that all ``geodesic segments'' joining points of
$K$ lie entirely in $K^{\ast}$.
There are different types of pseudoconvexity which correspond to different classes of geodesics in spacetimes.
For example, causal, null, and maximal null pseudoconvexity can be defined by restricting the condition ``geodesic segments''
to causal, null, and maximal null geodesic segments, respectively \cite{Beem(1987)}.
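In symbols, the definition of pseudoconvexity reads
$$\forall \, K\subseteq M \ \textrm{compact}, \ \exists \, K^{\ast}\subseteq M \ \textrm{compact}: \quad \gamma([a,b])\subseteq K^{\ast} \ \textrm{for every geodesic segment} \ \gamma:[a,b]\rightarrow M \ \textrm{with} \ \gamma(a),\gamma(b)\in K.$$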
\begin{remark}\label{remark1}
Since every maximal null geodesic is a null geodesic and every null geodesic is a causal geodesic, it is clear from
the definitions that causal pseudoconvexity implies null pseudoconvexity and
null pseudoconvexity implies maximal null pseudoconvexity.
\end{remark}
The limit curve theorems are surely one of the most fundamental tools of Lorentzian geometry. Their importance is certainly
superior to that of analogous results in Riemannian geometry because in Lorentzian manifolds the curves may have a causal
character and, hence, it is particularly important to establish whether two points can be connected by a causal, a timelike, or a lightlike curve.
There are different forms of convergence for a sequence of
nonspacelike curves $\{\gamma_{n}\}$ in Lorentzian geometry and general relativity. For example, the limit curve convergence,
the $C^{0}$ convergence, and the uniform convergence. For arbitrary spacetimes, neither
the limit curve convergence nor the $C^{0}$ convergence is stronger than the other. But in strongly causal spacetimes,
these two types of convergence are almost equivalent for sequences of causal curves (see \cite[Proposition 3.34]{Beem(1996)}).
Recently, Minguzzi introduced a discussion of the history of limit curve theorems
results in Lorentzian geometry and proved a strong version of limit curve theorems by a generalized version of uniform convergence \cite{Min(2008-1)}.
\begin{definition}\label{def-uniform}
\cite[Definition 2.1]{Min(2008-1)} (In this definition $a_{n}, b_{n}, a,$ and $b$ may take an infinite value.)
Let $h$ be a Riemannian metric on $M$ and let $d_{0}$ be the associated Riemannian distance. The sequence of curves
$\gamma_{n} : [a_{n},b_{n}] \rightarrow M$ converges $h$-uniformly to $\gamma : [a,b] \rightarrow M$ if
$a_{n} \rightarrow a, b_{n} \rightarrow b,$ and for every $\epsilon > 0$ there
is $N > 0$, such that for $n > N$, and for every $t \in [a,b] \cap [a_{n},b_{n}], d_{0}(\gamma (t), \gamma_{n}(t)) <\epsilon $.
\end{definition}
The sequence of curves $\gamma_{n} : [a_{n},b_{n}] \rightarrow M$ converges $h$-uniformly on compact subsets to
$\gamma : [a,b] \rightarrow M$ if for every compact interval $[a^{\prime},b^{\prime}] \subseteq [a,b]$, there is a choice of sequences
$a_{n}^{\prime},b_{n}^{\prime} \in [a_{n},b_{n}]$, $a_{n}^{\prime} < b_{n}^{\prime}$, such that $a_{n}^{\prime} \rightarrow a^{\prime}$, $b_{n}^{\prime} \rightarrow b^{\prime}$, and for any such choice $\gamma_{n} \vert _{[a_{n}^{\prime},b_{n}^{\prime}]}$ converges $h$-uniformly to $\gamma \vert _{[a^{\prime},b^{\prime}]}$.
Also, Minguzzi proved that the $h$-uniform convergence implies the $C^{0}$ convergence and on compact subsets,
it is independent of the Riemannian metric $h$ chosen (see \cite[Theorem 2.4]{Min(2008-1)}). In this paper, by the
limit curve or the limit geodesic segment, we mean that the $h$-uniform convergence on compact subsets is applied.\\
In Ref. \cite{Vatan}, we found an equivalent property to the pseudoconvexity, called the \textit{LGS property}:
\begin{definition}\label{def-LGS}
Assume $p_{n} \rightarrow p$ and $q_{n} \rightarrow q$ for distinct points $p$ and $q$ in a
spacetime $M$. We say that the spacetime $M$ has the limit geodesic segment (LGS) property if,
whenever each pair $p_{n}$ and $q_{n}$ can be joined by a ``geodesic segment'', there is a limit geodesic
segment from $p$ to $q$. Namely, for every sequence of geodesics $\gamma_{n}$ from $p_{n}$ to $q_{n}$,
where $p_{n} \rightarrow p$ and $q_{n} \rightarrow q$, there are a subsequence $\gamma_{k}$ and a geodesic
segment $\gamma$ from $p$ to $q$ such that $\gamma_{k}$ converges $h$-uniformly to $\gamma$.
Similarly, causal, null, and maximal null LGS property can be defined by restricting the condition ``geodesic segment''
to causal, null, and maximal null geodesics, respectively.
\end{definition}
\begin{proposition}\label{causal LGS}
\cite[Proposition 4]{Vatan} Let $ (M, g)$ be a strongly causal spacetime. Then, it is (null or maximal null) causal pseudoconvex
if and only if it has the (null or maximal null) causal LGS property.
\end{proposition}
\section{Main results} %
The paper builds on the literature on pseudoconvexity and causality theory \cite{Beem(1987),Beem(1992), Beem(1996), Borjian, HOWKING(1973),
Min(2019), Penrose, Vatan}.
\subsection{Pseudoconvexity and causal simplicity conditions} %
Null pseudoconvexity is a property weaker than causal pseudoconvexity.
There are some examples of spacetimes which are null pseudoconvex but not causally pseudoconvex (see \cite[Example 1]{Vatan}).
\begin{figure}
\center
\vspace{0.1cm}
\hspace{2cm}
\includegraphics [width=10cm,height=10cm]{Fig1}
\caption{A new hierarchy in the causal ladder by incorporating our new result on the three types of pseudoconvexity conditions.}
\label{Fig1}
\end{figure}
During the construction of a boundary for spacetimes, Bautista et al. studied in detail the space of light rays
and conjectured that strongly causal null pseudoconvex spacetimes are causally simple \cite{Bautista(2017)}.
Beem and Krolak in \cite[Theorem 1]{Beem(1992)} proved that causal simplicity implies maximal null pseudoconvexity.
In Ref. \cite{Vatan}, we recently proved the converse of this fact in the case of strongly causal spacetime, that
is a refined version of the conjecture.
\begin{theorem}\label{causally simple-null pseudo}
\cite[Theorem 2]{Vatan} Let $ (M, g)$ be a strongly causal spacetime. $(M,g)$ is causally simple if and only if it is maximally null pseudoconvex.
\end{theorem}
Also, Beem and Parker proved that global hyperbolicity implies causal pseudoconvexity (see \cite[Lemma 3.2]{Beem(1987)}).
In Ref. \cite{Hedicke}, Hedicke and Suhr showed that the space of null geodesics of an $n$-dimensional causally simple spacetime $M$, for $n>2$,
is Hausdorff if $M$ admits an open conformal embedding into a globally hyperbolic spacetime; by Theorem \ref{Low}, such an $M$ is null pseudoconvex.
Now, only one step remains to establish the ladder of Fig \ref{Fig1}: in every two-dimensional strongly causal spacetime,
maximal null pseudoconvexity implies null pseudoconvexity. Before this, we need the following two lemmas,
whose proofs are straightforward.
\begin{lemma}\label{lemma seqen}
Let $M$ be a causally simple spacetime, let $\{p_{n} \}$ and $\{q_{n}\}$ be sequences in $M$ converging to $p$
and $q$, respectively, and suppose there are causal geodesic segments (or even causal curves) $\gamma_{n}$ from $p_{n}$ to $q_{n}$ for all $n$.
Then $q \in J^{+}(p)$.
\end{lemma}
Every point $p$ in spacetime $M$ admits a local basis $\lbrace V_{k}, k \geq 1\rbrace,$ for the
topology such that for every $k$, the open set $V_{k}$ is a relatively compact subset of $M$, a
strictly convex normal neighborhood of $p$, and a globally hyperbolic spacetime itself
(see \cite[Theorems 1.35 and 2.7]{Min(2019)}).
Let $\gamma$ be a future directed null geodesic with past (future) endpoint $p$. In what follows, by a local extension of $\gamma$ at $p$,
we mean the maximal extension of this geodesic into the past (future) in a strictly convex normal neighborhood of $p$.
\begin{lemma}\label{lemma order}
Let $(M, g)$ be a two-dimensional causally simple spacetime and let $p^{\prime}_{n} \rightarrow p$, $q^{\prime}_{n} \rightarrow q$,
and $p^{\prime}_{n}$ can be joined to $q^{\prime}_{n}$ by a future directed null geodesic $\gamma^{\prime}_{n}$ for all $n$ and let
$\gamma_{n}$ be a local extension of $\gamma^{\prime}_{n}$ at $p^{\prime}_{n}$ and at $q^{\prime}_{n}$.
Then it is possible to choose a subsequence $\lbrace\gamma_{k}\rbrace$ of $\lbrace\gamma_{n}\rbrace$ and
distinct points $p_{k}$ and $q_{k}$ on $\gamma_{k}$ for all $k$ such that $p_{k} \rightarrow p$, $q_{k} \rightarrow q$, and
one of the following conditions is satisfied:
\begin{itemize}
\item[(1)]
$p \prec . . . \prec p_{2}\prec p_{1} $ and $q \prec . . . \prec q_{2}\prec q_{1}$.
Obviously in this case, if $n>m$ then $p_{n}\not \in J^{+}(p_{m})$.
\item[(2)]
$p_{1}\prec p_{2}\prec . . . \prec p$ and $q_{1}\prec q_{2}\prec . . . \prec q$.
Obviously in this case, if $n>m$ then $q_{n}\not \in J^{-}(q_{m})$.
\item[(3)]
The sequence $\lbrace\gamma_{k}\rbrace$ is constant.
\end{itemize}
Moreover, it is possible to choose the monotone sequences $\lbrace p_{k} \rbrace$ on a null geodesic passing through $p$
and $\lbrace q_{k} \rbrace$ on a null geodesic passing through $q$ (in the Cases (1) and (2) replace $\preceq$ with $\prec$).
\end{lemma}
\begin{remark}\label{remark2}
We note that any limit curve of a sequence of maximal null geodesics is a maximal null geodesic.
For this, let $\gamma$ be a future directed causal curve from $p$ to $q$ as a limit curve of a sequence of
future directed maximal null geodesics $\gamma_{n}$ from $p_{n}$ to $q_{n}$ such that $p_{k} \rightarrow p$ and $q_{k} \rightarrow q$.
By the maximality of $\gamma_{n}$, the Lorentzian arc length of $\gamma_{n}$ is equal to the Lorentzian distance from $p_{n}$ to
$q_{n}$, namely $L(\gamma_{n})=d(p_{n},q_{n})$ (see \cite[Definitions 4.1 and 4.10]{Beem(1996)};
note that \cite[Definition 4.10]{Beem(1996)} is equivalent to Definition \ref{def-maximal}). Now, by using \cite[Lemma 4.4]{Beem(1996)},
we have $L(\gamma) \leq d(p,q) \leq \liminf d(p_{n},q_{n})=0$. Hence $L(\gamma) = d(p,q)=0$ and $\gamma$
may be reparametrized to a maximal null geodesic segment from $p$ to $q$ by \cite[Theorem 4.13]{Beem(1996)}.
Because every null geodesic is locally maximal, then it immediately implies that any limit curve of a sequence of
null geodesics is a null geodesic. For this, we can cover $\gamma$ with a finite number of strictly convex normal neighborhoods
$U_{1}, ..., U_{m}$ such that $p\in U_{1}$, $q\in U_{m}$, and ${\gamma_{n}\vert}_{U_{i}}$ is a maximal null geodesic segment for all $i$.
So, ${\gamma\vert }_{U_{i}}$ is a maximal null geodesic segment for all $i$.
\end{remark}
\begin{theorem}\label{main}
Let $M$ be a strongly causal two-dimensional spacetime. $M$ is maximal null pseudoconvex if and only if
it is null pseudoconvex.
\end{theorem}
\begin{proof}
$(\Leftarrow)$ See Remark \ref{remark1}.
$(\Rightarrow)$ Suppose, on the contrary, $M$ is maximal null pseudoconvex but not null pseudoconvex.
Therefore, Theorem \ref{causally simple-null pseudo} implies $M$ is causally simple and Proposition \ref{causal LGS}
implies the null LGS property is not satisfied. So,
there are sequences $p_{n}$ and $q_{n}$ with $p_{n}\prec q_{n}$ joined by a future directed
null geodesic $\gamma_{n}$, for any natural value of $n$ without any limit curve from $p$ to $q$
(see Definitions \ref{def-uniform}, \ref{def-LGS}).
Let $h$ be a complete Riemannian metric on $M$. By \cite[Theorem 3.1, part (2), case (ii)]{Min(2008-1)},
there is a subsequence parametrized with respect to $h$-length denoted as $\gamma_{k}:[0,b_{k}]\rightarrow M$,
$\gamma_{k}(0)=p_{k}\rightarrow p$, $\gamma_{k}(b_{k})=q_{k}\rightarrow q$, (an analogous reparametrized sequence
$\gamma_{k}^{\prime}:[-b_{k},0]\rightarrow M$, $\gamma_{k}^{\prime}(0)=q_{k}\rightarrow q$,
$\gamma_{k}^{\prime}(-b_{k})=p_{k}\rightarrow p$) and there are a future directed inextendible causal curve
$\eta_{1}:[0,+\infty)\rightarrow M$, $\eta_{1}(0)=p$, and a past directed inextendible causal curve
$\eta_{2}:(-\infty,0]\rightarrow M$, $\eta_{2}(0)=q$ such that $\gamma_{k}$ and $\gamma_{k}^{\prime}$
converges $h$-uniformly on compact subsets to $\eta_{1}$ and $\eta_{2}$, respectively.
Now, Remark \ref{remark2} implies that $\eta_{1}$ and $\eta_{2}$ are null geodesics.
Also, we have $p\not\in \eta_{2}$ and $q\not\in \eta_{1}$; otherwise, $\eta_{1}$ and $\eta_{2}$ would each be a reparametrization
of the other from $p$ to $q$, which leads to a contradiction (see \cite[Theorem 3.1, part (2), case (i)]{Min(2008-1)}).
Lemma \ref{lemma seqen} implies that $\overline{q} \in J^{+}(\overline{p})$ for
all $\overline{p} \in \eta_{1}$ and $\overline{q} \in \eta_{2}$.
We remark that if $\overline{q_0}=\eta_{2}(s_{0}) \in \partial J^{+}(p)$, then by \cite[Lemma 3]{Vatan},
$\eta_{2}(s)\in \partial J^{+}(p)$ for all real values $s\in (-\infty ,s_{0}]$. Now, one of the two following cases occurs.
Either there exists $\overline{p_{0}} \in \eta_{1}$ and $\overline{q_{0}} \in \eta_{2}$ such that
$\overline{q_{0}} \in \partial J^{+}(\overline{p_{0}})$ and so $\eta_{2}((-\infty ,s_{0}])\subseteq \partial J^{+}(\overline{p_{0}})$,
or for every $\overline{p} \in \eta_{1}$ and $\overline{q} \in \eta_{2}$, $\overline{q} \in I^{+}(\overline{p})$
and so $\eta_{2}\subset I^{+}(\overline{p})$ for all $\overline{p} \in \eta_{1}$.
\begin{itemize}
\item[\textbf{Case 1)}]
$\eta_{2}((-\infty ,s_{0}])\subseteq \partial J^{+}(\eta_{1}(t_{0}))$, for some $\overline{p_{0}} = \eta_{1}(t_{0})$
and $\overline{q_{0}} = \eta_{2}(s_{0})$. In this case,
we consider
$r= \eta_{2}(s_{1}) \in \partial J^{+}(\overline{p_{0}})$ for some $s_{1} \in (-\infty ,s_{0})$ and so by \cite[Corollary 4.14]{Beem(1996)},
there is a maximal null geodesic from $\overline{p_{0}}$ to $r$.
By \cite[Proposition 2.19]{Penrose}, this yields one of the following two possibilities:\\
1-1) $\eta_{2}(s_{0})=\overline{q_{0}} \in I^{+}(\overline{p_{0}})$ which contradicts
$ \eta_{2}(s_{0}) \in \partial J^{+}(\overline{p_{0}})$.\\
1-2) The union of segments $\overline{p_{0}}r $ and $r\overline{q_{0}} \subseteq \eta_{2}$ constitutes a single null geodesic from
$\overline{p_{0}}$ to $\overline{q_{0}}$. Namely, $\overline{p_{0}}r \subseteq \eta_{2}$ and so $\eta_{1}$ and $\eta_{2}$
intersect each other at $\overline{p_{0}}$.
By consideration of the two limit curves $\eta_{1}$ and $\eta_{2}$ of the sequence $\gamma_{k}$ in a strictly convex normal
neighborhood $U(\overline{p_{0}})$ of $\overline{p_{0}}$, we conclude that $\eta_{1}$ must coincide with $\eta_{2}$
as a unique null geodesic limit curve of $\gamma_{k}$ in $U$. So, $\eta_{1}$ is a null geodesic from $p$ to $q$
and Case 1 leads to a contradiction.
\item[\textbf{Case 2)}]\label{case2}
$\eta_{2}\subset I^{+}(\eta_{1}(s))$, for all real values of $s\in [0,+\infty)$.\\
Consider the sequence $\lbrace p_{m} \rbrace$ provided in Part (1) of Lemma \ref{lemma order},
with $p\preceq . . .\preceq p_{2}\preceq p_{1}$, such that $p$ and the $p_{n}$ lie on a null geodesic passing through $p$
(Part (3) of Lemma \ref{lemma order} leads to a constant sequence, which never happens here, and for Part (2)
the proof is similar, replacing ``$+$" with ``$-$", ``$p$" with ``$q$" and ``$\eta_{1}$" with ``$\eta_{2}$").
From the hypothesis of this case, we immediately conclude $p\in I^{-}(q)$
and so there is a strictly convex normal neighborhood $U$ of $p$ such that $U\subseteq I^{-}(q)$. On the other hand, all but
finitely many elements of $\lbrace p_{m} \rbrace$ are in $I^{-}(q)$, by the fact that $\lbrace p_{m} \rbrace$ converges
to $p$. If necessary, by selecting a suitable subsequence of $\lbrace p_{m} \rbrace$, there exists an $N_{0} \in \mathbb{N}$ such that:
\begin{equation}\label{1}
\forall m > N_{0}\qquad p_{m} \not\in J^{+}(p_{N_{0}}),\quad q_{m-1}, q \in I^{+}(p_{N_{0}}).
\end{equation}
\begin{figure}
\vspace{0.1cm}
\hspace{3cm}
\includegraphics[width=10cm,height=11cm]{Fig2}
\caption{Diagram for the proof of Theorem \ref{main}}
\label{Fig2}
\end{figure}
Thus, every $\gamma_{m}$ intersects $\partial J^{+}(p_{N_{0}})$ for all $m \geq N_{0}$ by \cite[Lemma 3]{Vatan}.
Let $c_{m}$ be $\gamma_{m}(t)$ for the minimum value of $t$ at which $\gamma_{m}$
intersects $\partial J^{+}(p_{N_{0}})$. The sequence $\lbrace c_{m}\rbrace$ is a subset of
$\partial J^{+}(p_{N_{0}})$ and by the causal simplicity condition $c_{m}\in \partial J^{+}(p_{N_{0}})\subseteq J^{+}(p_{N_{0}})$.
So, Corollary 4.14 in Ref. \cite{Beem(1996)} implies that there is a sequence of maximal
null geodesic segments $\lbrace \theta_{m} \rbrace$ from $p_{N_{0}}$ to any elements of $\lbrace c_{m}\rbrace$.\\
\textbf{Claim 1:} $\lbrace c_{m}\rbrace$ has an accumulation point.
\begin{itemize}
\item[] Proof of Claim 1: Let $U(p)$ be a strictly convex normal neighborhood of $p$ such that $\overline{U}$ is compact.
If infinitely many of $\lbrace c_{m}\rbrace$ are in $\overline{U}$, then an accumulation point $c$ is achieved.
Otherwise, because $\partial U$ is compact, we can conclude the sequence $\lbrace \theta_{m} \rbrace$
has a maximal null geodesic $\theta$ with endpoint $p_{N_{0}}$ as a limit curve.
We must prove $\lbrace c_{m}\rbrace$ has an accumulation point $c$ on $\theta$.
In two-dimensional spacetimes, there are only two null geodesics passing through
any point and so any $\theta_{m}$ coincides with $\theta$ and
the sequence $\lbrace c_{m}\rbrace ^{\infty}_{m=N_{0}}$ is on the null geodesic
segment $p_{N_{0}}c_{N_{0}}\subseteq \theta$
as a compact set (see Fig \ref{Fig2}). Therefore, the sequence
$\lbrace c_{m}\rbrace ^{\infty}_{m=N_{0}}$ has an accumulation point $c$ such that
$c_{N_{0}} \succeq c_{N_{0}+1} \succeq c_{N_{0}+2}\succeq ... \succeq c_{N_{0}+n}\succeq...\succeq c$.
\end{itemize}
\begin{figure}
\vspace{0.1cm}
\hspace{0.5cm}
\includegraphics[width=8cm,height=10cm]{Fig3}
\caption{This spacetime $M$ is causally simple but not null pseudoconvex.}
\label{Fig3}
\end{figure}
Let $l_{1}$ be the accumulation point of $\lbrace c_{m}\rbrace$ (i.e. $l_{1}=c$). Therefore, $l_{1}$ is on $\theta$ and $\theta$ intersects $\eta_{1}$
and then enters $I^{+}(p)$ for the first time at $l_{1}$, because $c_{N_{0}} \in I^{+}(p)$ and $\partial J^{+}(p) \subseteq (\theta \cap \eta_{1})$.\\
\textbf{Claim 2:} $q \in I^{+}(c_{N_{0}})$.
\begin{itemize}
\item[] Proof of Claim 2: Since $q \in I^{+}(l_{1})$, there is $k_{0}\geq 0$ such that $q \in I^{+}(c_{n})$ for $n \geq N_{0}+k_{0}$. So,
$\theta$ intersects $\gamma _{n}$ at $c_{n}^{\prime}$ for $n \geq N_{0} + k_{0}$ and the sequence
$c_{N_{0}+k_{0}}^{\prime} \succeq c_{N_{0}+k_{0}+1}^{\prime} \succeq ...$ converges to $l_{2}$ on $\theta \cap \eta_{1}$.
Now, $c_{N_{0}} \preceq l_{2}$ and $l_{2} \prec q$ and so $c_{N_{0}} \prec q$. This means $q \in I^{+}(c_{N_{0}})$.
\end{itemize}
Now, we start the same process. In the first step,
label $c_{N_{0}}=c_{N_{0}}^{l_{1}}, c_{N_{0}+1}=c_{N_{0}+1}^{l_{1}},...$. Replace
$\lbrace c_{m}^{l_{1}}\rbrace ^{\infty}_{m=N_{0}}$ and $l_{1}$ with $\lbrace p_{m} \rbrace ^{\infty}_{m=N_{0}}$
and $p$, respectively (introduced at the beginning of this case) and find $l_{2}, c_{N_{0}}^{l_{2}}, c_{N_{0}+1}^{l_{2}},...$ and repeat the process.
At step $n$, we can similarly show that $q \in I^{+}(c_{N_{0}+n})$, and therefore these steps do not stop.
So, there are infinitely many points $p=l_{0} \preceq l_{1} \preceq l_{2} \preceq ... \preceq l_{n} \preceq ...$ on $\eta_{1}$ and
infinitely many points $c_{N_{0}}^{l_{1}} \preceq c_{N_{0}}^{l_{2}} \preceq c_{N_{0}}^{l_{3}} \preceq ... \preceq c_{N_{0}}^{l_{n}} \preceq ...$
on $\gamma_{N_{0}}$ such that at each step, $\eta_{1}$ and $\theta$ cut each other in $l_{n}$ and also
$\gamma_{N_{0}}$ and $\theta$ cut each other in $c_{N_{0}}^{l_{n}}$. The segments $l_{0}l_{1}, l_{1}l_{2}, l_{2}l_{3},..., l_{n-1}l_{n},... $ and
$c_{N_{0}}^{l_{1}}c_{N_{0}}^{l_{2}}, c_{N_{0}}^{l_{2}}c_{N_{0}}^{l_{3}}, ... ,c_{N_{0}}^{l_{n-1}}c_{N_{0}}^{l_{n}} , ...$
are maximal null geodesics. But this is impossible, because $\gamma_{N_{0}}$ is compact and does not admit infinitely many cut points. Therefore,
Case 2 leads to a contradiction.\\
\end{itemize}
\end{proof}
\begin{remark}
We note that Theorem \ref{main} is not true for three-dimensional spacetimes, because null pseudoconvexity and strong causality lift to
Lorentzian covers but there are three-dimensional spacetimes which show that causal simplicity does not lift \cite{Costa, Hedicke, Schinner}.
So, we immediately conclude that these causally simple spacetimes, \cite[Example 2.3]{Costa} and \cite[Theorem 2.7]{Hedicke}, are not null pseudoconvex; we illustrate Example 2.3 of Ref. \cite{Costa} in Figure \ref{Fig3}.
Therefore, causal simplicity does not imply null pseudoconvexity in three-dimensional spacetimes.
\end{remark}
The study of the topology and the geometry of the space of null geodesics $\mathcal{N}$ gives a new approach to
considerations of causal structures of a spacetime $M$. Assuming that $M$ is strongly causal, we ensure that $\mathcal{N}$ possesses
a differentiable structure \cite[Theorem 1]{Low(2001)}. In order for $\mathcal{N}$ to be a manifold, it is
necessary that it be Hausdorff. For more general spacetimes, $\mathcal{N}$ may fail to be Hausdorff, as
the interesting example of the plane wave spacetime considered by Penrose \cite{Low(2001)} shows. Low shows by
the following theorem in Ref. \cite{Low(1990)} that the null pseudoconvexity condition of $M$ corresponds to the Hausdorffness of $\mathcal{N}$.
He describes an open problem in Ref. \cite{Low(2001)} to find necessary and sufficient causality conditions of $M$ to ensure
that $\mathcal{N}$ is Hausdorff. Recently, Bautista, Ibort, Lafuente, and Low studied in detail the space of light rays
and discussed where this necessary and sufficient property sits within the causal ladder, and finally
conjectured that strongly causal null pseudoconvex spacetimes are causally simple \cite{Bautista(2017)};
Theorem \ref{causally simple-null pseudo} implies this conjecture.
Here, Theorems \ref{causally simple-null pseudo}, \ref{main}, and \ref{Low}
can be applied to solve this problem in two-dimensional spacetimes, see Corollary \ref{cor1}.
\begin{theorem}\label{Low}
\cite{Low(1990)} Let $M$ be a strongly causal spacetime. Then the following conditions are equivalent:
\begin{itemize}
\item[1)] $M$ is null pseudoconvex.
\item[2)] The space of null geodesics, $\mathcal{N}$, is Hausdorff.
\end{itemize}
\end{theorem}
\begin{corollary}\label{cor1}
Let $\mathcal{N}$ be the space of null geodesics of a two-dimensional strongly causal spacetime $M$. Then $M$ is causally simple
if and only if $\mathcal{N}$ is Hausdorff.
\end{corollary}
\subsection{Naked singularities} %
Recently, two types of naked singularities have been introduced by Borjian and Bahrampour in Ref. \cite{Borjian}
and the relationships between the presence of each of these naked singularities in $M$, and failure of the Hausdorff
property for $\mathcal{N}$ have been investigated.
A spacetime $M$ is said to be a \textit{nakedly singular future boundary} if it contains some point $p$ and some future endless null geodesic
$\Gamma$ such that $\Gamma \subseteq \partial I^{-}(p)$ and for each $q\in I^{-}(p)$, $\Gamma \cap \partial I^{-}(q)=\emptyset $.
Also, a \textit{nakedly singular past boundary} is defined similarly by replacing $``+"$ with $``-"$.\\
In fact, Borjian and Bahrampour state a conjecture that says ``if a strongly causal spacetime
$M$ is a nakedly singular future boundary or a nakedly singular past boundary, then $\mathcal{N}$ is non-Hausdorff".
In this section, we prove the conjecture and show that the presence of one of these types of naked singularities implies
the failure of the Hausdorff condition. Also, the results of the previous section solve the converse of this conjecture.
\begin{figure}
\vspace{0.1cm}
\hspace{3cm}
\includegraphics[width=10cm,height=8cm]{Fig4}
\caption{In this non-causally continuous spacetime $B^{+}(\Gamma)$ is not closed. In fact, $c\in \partial B^{+}(\Gamma)$ but
$c \not\in B^{+}(\Gamma)$.}
\label{Fig4}
\end{figure}
\begin{proposition}\label{property}
Let $\Gamma$ be a past (future) endless null geodesic and $B^{+}(\Gamma)=\lbrace q\in M \vert \Gamma \subseteq \partial I^{+}(q) \rbrace$
$(\: B^{-}(\Gamma)=\lbrace q\in M \vert \Gamma \subseteq \partial I^{-}(q) \rbrace \:).$
Then $B^{+}(\Gamma)$ $(\: B^{-}(\Gamma) \:)$ is a causally convex set. Moreover,
$B^{+}(\Gamma)$ $(\: B^{-}(\Gamma) \:)$ is closed, if $M$ is causally continuous.
\end{proposition}
\begin{proof}
Let $\Gamma$ be a past endless null geodesic, $q \in M$ and $r, s \in B^{+}(\Gamma)$ such that $r \preceq q \preceq s$.
We show that $q \in B^{+}(\Gamma)$. $q \preceq s$ implies that
$\Gamma \subseteq \partial I^{+}(s) \subseteq \overline{ I^{+}(q)}$. It is sufficient to show that $\Gamma \cap \partial I^{+}(q)=\emptyset$.
On the contrary, assume that $q_{0}\in (\Gamma \cap \partial I^{+}(q))$. So, $r \preceq q \prec q_{0}$ and it implies $r \prec q_{0}$
(see \cite[Theorem 2.24]{Min(2019)}). Therefore, $q_{0}\in (I^{+}(r) \cap \partial I^{+}(r))$ and this is a contradiction. Thus, $q \in B^{+}(\Gamma)$.\\
Now, let $M$ be causally continuous, and let $c_{n}\in B^{+}(\Gamma)$ with $ c_{n} \longrightarrow c$. We show that $c \in B^{+}(\Gamma)$.
On the contrary, assume that $\Gamma \nsubseteq \partial I^{+}(c)$. There are two cases:\\
Case 1: $\exists$ $q_{0}\in (\Gamma \cap I^{+}(c))$.\\
In this case, $c \in I^{-}(q_{0})$ and since $I^{-}$ is inner continuous, there is an open neighborhood $U(c)$ of $c$ such that
$U(c) \subseteq I^{-}(q_{0})$. So, there exists $N_{0}$ such that $c_{n}\in U(c) \subseteq I^{-}(q_{0})$ for any $n\geq N_{0}$. Therefore,
$q_{0} \in I^{+}(c_{N_{0}})$ but $c_{n}\in B^{+}(\Gamma)$ and this is a contradiction.\\
Case 2: $\exists$ $q_{0}\in (\Gamma \cap (M \setminus \overline{I^{+}(c)}))$.\\
By assumption, since $I^{+}$ is outer continuous, there is an open neighborhood $U(c)$ of $c$ such that
$q_{0}\in (M \setminus \overline{I^{+}(q)})$ for each $q \in U(c)$, especially for some $c_{N_{1}}\in U(c)$ but $c_{N_{1}}\in B^{+}(\Gamma)$ and this is a contradiction. Similarly, one can prove that $B^{-}(\Gamma)$ is causally convex and closed.
\end{proof}
Figure \ref{Fig4} shows that the causal continuity condition in Proposition \ref{property} is necessary.
The following proposition is proved by Beem and Krolak (see \cite [Proposition 1]{Beem(1992)}).
\begin{proposition}\label{Beem}
Assume $(M,g)$ is distinguishing, but not causally simple. Then $\exists$ points $x_{1}$ and $x_{2}$ in $M$ such that $x_{1}$ has a future
inextendible maximal null geodesic ray in $\partial I^{-}(x_{1})$ and $x_{2}$ has a past inextendible maximal null geodesic ray in $\partial I^{+}(x_{2})$.\\
Conversely, assume $(M,g)$ has an $x_{1}$ (resp. $x_{2}$) such that $\partial I^{-}(x_{1})$ has a future inextendible maximal null geodesic ray
[resp. $\partial I^{+}(x_{2})$ has a past inextendible maximal null geodesic ray]; then $(M,g)$ is not causally simple.
\end{proposition}
\begin{figure}
\vspace{0.1cm}
\hspace{3cm}
\includegraphics[width=10cm,height=9cm]{Fig5}
\caption{This spacetime is the two-dimensional Minkowski space with a half-line of a null geodesic removed.
All points in the hatched region can be chosen to be $x_{2}$ but no point in the spacetime can be $x_{1}$ in Proposition \ref{Beem}.
The maximal past inextendible null geodesic ray $\Gamma$ is a subset of $\partial J^{+}(x_{2})$ and $J^{+}(x_{2})$ is not closed for all
points $x_{2}$ in the hatched region but there is no maximal future inextendible null geodesic ray in $\partial J^{-}(p)$ and
$J^{-}(p)$ is closed for all points $p\in M$. In the proof of Proposition \ref{Beem}, the invalid argument
``we may take $q$ to be the $x_{1}$" is used.}
\label{Fig5}
\end{figure}
Figure \ref{Fig5} shows that this proposition is not valid and it is required to have a minor modification.
Now, we provide a corrected version of this proposition as follows:
\begin{proposition}\label{Vatan-Beem}
Assume $(M,g)$ is distinguishing. Then $M$ is not causally simple if and only if there exists a point $x_{1} \in M$ such that $x_{1}$ has a future
inextendible maximal null geodesic ray in $\partial I^{-}(x_{1})$ or a point $x_{2} \in M$ such that $x_{2}$ has a
past inextendible maximal null geodesic ray in $\partial I^{+}(x_{2})$.
\end{proposition}
\begin{proof}
It is sufficient to remove the following sentences in the proof of \cite[Part (I) of Proposition 1]{Beem(1992)}:\\
``The same type of argument shows that $\exists$ a maximal future inextendible null geodesic ray that starts at $x_{2}$ and
fails to reach $q$. Thus we may take $q$ to be the $x_{1}$."\\
Instead, replace it with the following sentence:\\
``Also, if there is a point $x_{1}$ such that $J^{-}(x_{1})$ is not closed, then a similar proof shows the existence of a maximal future inextendible
null geodesic ray in $\partial I^{-}(x_{1})$."
\end{proof}
A spacetime $M$ is said to be past (future) reflecting at $q$ in $M$ if for all $p$ in $M$
$$I^{+}(q)\subseteq I^{+}(p) \Rightarrow I^{-}(p)\subseteq I^{-}(q),$$
$$ ( I^{-}(q)\subseteq I^{-}(p) \Rightarrow I^{+}(p)\subseteq I^{+}(q) ) $$
and is said to be reflecting at $q$ if it satisfies both conditions. The spacetime is said to be reflecting if it is reflecting at all points.
It is known that the reflectivity and causal continuity conditions are equivalent (see \cite[Definition 4.9]{Min(2019)}).
\begin{proposition}\label{f-pnb}
Let $(M,g)$ be a reflecting spacetime and suppose there exists a past (future) inextendible maximal null geodesic ray $\Gamma$
such that $\Gamma \subseteq \partial I^{+}(q)$ ($\Gamma \subseteq \partial I^{-}(q)$), for some $q \in M$.
Then the following statements are true:
\begin{itemize}
\item[(I)] $M$ is a nakedly singular past (future) boundary at $p$, for all points $p \in B^{+}(\Gamma)$ ($p \in B^{-}(\Gamma)$).
\item[(II)] $int(B^{+}(\Gamma))=\emptyset$ and $\partial B^{+}(\Gamma)=B^{+}(\Gamma)$ ($int(B^{-}(\Gamma))=\emptyset$ and $\partial B^{-}(\Gamma)=B^{-}(\Gamma)$).
\end{itemize}
\end{proposition}
\begin{proof}
By hypothesis, $q \in B^{+}(\Gamma)$, so $B^{+}(\Gamma)\not = \emptyset$. Assume, to the contrary, that $M$ is not a nakedly singular
past (future) boundary at some point $p \in B^{+}(\Gamma)$.
This means that $\exists w \in I^{+}(p)$ and $v \in \Gamma \cap \partial I^{+}(w)$. Now, there are two cases:
\begin{itemize}
\item[] Case 1: $w \in \overline{I^{-}(v)}$. Since $I^{+}(p)$ is an open set containing $w$, it intersects $I^{-}(v)$, so
$ I^{-}(v) \cap I^{+}(p)\not = \emptyset$, which implies $v \in I^{+}(p)$. This is a contradiction, since $v \in \Gamma $
and $\Gamma \subseteq \partial I^{+}(p)$.
\item[] Case 2: $w \not\in \overline{I^{-}(v)}$. By reflectivity of $M$, this case
implies $v \not\in \overline{I^{+}(w)}$ but we have $v \in \partial I^{+}(w) \subseteq\overline{I^{+}(w)}$,
which is a contradiction.
\end{itemize}
Therefore, $M$ is a nakedly singular past (future) boundary at all points of $ B^{+}(\Gamma)$.\\
For the proof of the second statement, suppose $int(B^{+}(\Gamma))\not = \emptyset$ and let $p\in int(B^{+}(\Gamma))$. Then $I^{+}(p) \cap B^{+}(\Gamma)$ is a non-empty set,
and we may choose $w \in I^{+}(p) \cap B^{+}(\Gamma)$. Therefore, we have $w \in I^{+}(p)$ and
$\exists v \in \Gamma \cap \partial I^{+}(w)$. Now precisely the same two cases as above arise, each leading to a contradiction; hence $int(B^{+}(\Gamma))=\emptyset$.
\end{proof}
Propositions \ref{Vatan-Beem} and \ref{f-pnb} immediately imply the following results:
\begin{corollary}\label{cor2}
Let $(M,g)$ be a causally continuous spacetime. Then $M$ is not causally simple if and only if $M$ is a nakedly singular future boundary or
nakedly singular past boundary spacetime.
\end{corollary}
Now, we are ready to conclude the conjecture introduced in Ref. \cite {Borjian} by using Theorems \ref{causally simple-null pseudo}, \ref{Low} and Corollary \ref{cor2}.
\begin{corollary}\label{cor3}
Let $\mathcal{N}$ be the space of null geodesics of a causally continuous spacetime $M$.
If $M$ is a nakedly singular future boundary or a nakedly singular past boundary spacetime, then $\mathcal{N}$ is non-Hausdorff.
\end{corollary}
Also, Theorem \ref{main} implies that the converse of Corollary \ref{cor3} is true for two-dimensional spacetimes, and Figure \ref{Fig3} refutes
the converse of Corollary \ref{cor3} for three-dimensional spacetimes.
\section{Conclusion}
The geometry of the space of null geodesics $\mathcal{N}$ of a strongly causal spacetime $M$ has provided insights into many aspects
of spacetime geometry. Although $\mathcal{N}$ is guaranteed to have a differentiable structure, it need not be Hausdorff.
In this paper, we find a necessary and sufficient condition through spacetime causality conditions as a solution to Problem 3
suggested in Ref. \cite{Low(2001)}: A two-dimensional spacetime $M$ is causally simple if and only if $\mathcal{N}$ is Hausdorff. This condition is weaker than
the causal pseudoconvexity and the global hyperbolicity conditions which are equivalent to the Hausdorffness of the space
of causal geodesics $C$ and the space of smooth endless causal curves $\mathcal{C}$ of $M$, respectively \cite{Low(1990)}.\\
The failure of the Hausdorff condition corresponds to the presence of a particular type of naked singularity. It is shown that
$M$ is nakedly singular if and only if $\mathcal{C}$ is non-Hausdorff. Finally, by an example, we show that a proposition
by Beem and Krolak (see \cite[Proposition 1]{Beem(1992)}) requires a minor modification. Then, as a result, we prove that
$\mathcal{N}$ is non-Hausdorff if $M$ is a nakedly singular future boundary or nakedly singular past boundary spacetime
(as a solution of \cite[Conjecture 3.1]{Borjian}) and the converse of the conjecture is true only in two-dimensional spacetimes.\\
Recently, in Ref. \cite[Theorem 2.7]{Hedicke}, Hedicke and Suhr refuted Chernov's conjecture, which states that every causally simple
spacetime can be conformally embedded as an open subset into some globally hyperbolic spacetime \cite{Chernov}.
On the other hand, any two-dimensional simply connected and causally simple spacetime can be causally isomorphically
embedded into the two-dimensional Minkowski spacetime (see \cite[Theorem 3]{Vatan-Bahram}). These results motivate us to propose the following conjecture:
\begin{conjecture}
Every null pseudoconvex strongly causal spacetime can be conformally embedded as an open subset into some globally hyperbolic spacetime.
\end{conjecture}
\section{Introduction}
\hspace{0.25in} A triangle with rational sides and rational area is
called a rational triangle. Diophantine problems concerning rational triangles have attracted considerable attention.
For instance, several mathematicians have considered the problem of finding two rational triangles with a common perimeter and a common area (see \cite{Aa}, \cite{Br}, \cite{HM}, \cite{Lu}, \cite{Yi}). In fact, Choudhry \cite{Ch} has described a method of generating an arbitrarily large number of scalene rational triangles with a common perimeter and a common area. Skalba and Ulas have considered the problem of finding pairs of Pythagorean triangles with given ratios between catheti. Regarding problems concerning rational triangles with a common circumradius, it has been shown by Lehmer \cite[Theorem XI, p. 101]{Le} that there exist infinitely many rational triangles with a common circumradius. Further, Andrica and \c{T}urca\c{s} \cite{AT} have recently proved that there are no pairs consisting of a rational right triangle and a rational isosceles triangle which have the same circumradius and the same inradius or which have the same circumradius and the same perimeter.
This paper is concerned with three diophantine problems pertaining to a pair of scalene rational triangles that have the same circumradius. The three problems require that we find pairs of rational triangles with:
\noindent (i) a common circumradius and a common perimeter;
\noindent (ii) a common circumradius and a common inradius;
\noindent (iii) a common circumradius and a common area.
We note that none of the above three problems has been considered earlier in the literature. We obtain parametric solutions of each of the above problems. We also show how more parametric solutions of these problems may be obtained. We note that rational values of the parameters may yield two triangles with rational sides as a solution to any of our three diophantine problems. In each case, we may, after appropriate scaling, readily obtain two triangles whose sides and areas are given by integers and which have the desired properties.
\section{Some basic formulae regarding rational triangles}\label{formulae}
In this section we will give basic formulae for the sides, the area, the circumradius and the inradius of a general triangle whose sides and area are rational.
We note that Brahmagupta \cite[p. 191]{Di} and Euler \cite[p. 193]{Di} have independently given two sets of formulae for the sides of a general rational triangle. The problem of determining all rational triangles has also been considered by Carmichael \cite[pp. 11--13]{Ca} and by Lehmer \cite{Le}. We could try to use these well-known formulae about rational triangles to find pairs of such triangles with a common circumradius as well as a common perimeter or a common inradius. The resulting diophantine equations are, however, difficult to solve.
We will now derive a new set of formulae for the sides and area of an arbitrary rational triangle. It is interesting to observe that using the new formulae given below we can neatly resolve the problems of finding pairs of rational triangles with a common circumradius as well as a common perimeter or a common inradius.
Let $a, b , c$, be the sides of an arbitrary rational triangle. The area, circumradius and inradius of the triangle, denoted by $A, R$ and $r$, respectively, are given by the following well-known formulae:
\begin{align}
A&=\sqrt{(a+b+c)(a+b-c)(b+c-a)(c+a-b)}/4, \label{defA}\\
R&=abc/\sqrt{(a+b+c)(a+b-c)(b+c-a)(c+a-b)}, \label{defR}\\
r&=\sqrt{(a+b+c)(a+b-c)(b+c-a)(c+a-b)}/\{2(a+b+c)\}. \label{defr}
\end{align}
On making the invertible linear transformation defined by,
\begin{equation}
a=y+z,\quad b=z+x, \quad c=x+y, \label{ltt1}
\end{equation}
the above formulae may be written as,
\begin{align}
A&=\sqrt{(x + y + z)xyz}, \label{defA2}\\
R&=(x + y)(y + z)(x + z)/\{4\sqrt{(x + y + z)xyz}\}, \label{defR2}\\
r&=\sqrt{(x + y + z)xyz}/(x + y + z), \label{defr2}
\end{align}
where we note that $x, y$ and $z$ are necessarily nonzero rational numbers.
It follows from \eqref{defA2} that the area of the triangle will be rational if and only if there is a nonzero rational number $t$ such that
\begin{equation}
(x + y + z)x=t^2yz, \label{relt}
\end{equation}
so that
\begin{equation}
z= (x + y)x/(t^2y - x), \label{valz}
\end{equation}
and now the values of $A, R$ and $r$ may be written, in terms of nonzero rational numbers $x, y$ and $t$, as follows:
\begin{align}
A&=txy(x + y)/(t^2y - x), \label{defA3}\\
R&=(x^2+t^2y^2)(t^2 + 1)/(4(t^2y - x)t), \label{defR3}\\
r&=x/t. \label{defr3}
\end{align}
We note that, using the relations \eqref{ltt1} and the value of $z$ given by \eqref{valz}, the sides $a, b, c$ of our triangle may be written, in terms of three arbitrary nonzero parameters $x, y$ and $t$, as follows:
\begin{equation}
a = (x^2+t^2y^2)/(t^2y - x),\quad b = xy(t^2 + 1)/(t^2y - x),\quad c = x + y. \label{valabcgen}
\end{equation}
The perimeter $P$ of the triangle is now given, in terms of $x, y$ and $t$, by the formula,
\begin{equation}
P=2t^2y(x + y)/(t^2y - x). \label{defP}
\end{equation}
To find pairs of triangles with the desired properties, we will begin with two triangles whose sides may be written, using the formulae \eqref{valabcgen}, in terms of arbitrary parameters $x_i, y_i, t_i,\,i=1,2$. We will then impose the desired conditions on the two triangles, and solve the resulting diophantine equations. We will follow this approach in Sections \ref{eqRs} and \ref{eqRr} to obtain pairs of rational triangles with a common circumradius and a common perimeter or a common inradius.
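As a quick sanity check of this parametrization, the following Python sketch (our own illustration, not part of the derivation; the helper names are ours) evaluates \eqref{valabcgen} in exact rational arithmetic and confirms \eqref{defA3}--\eqref{defr3} and \eqref{defP} against the classical formulae \eqref{defA}--\eqref{defr}:
\begin{verbatim}
from fractions import Fraction

def sides(x, y, t):
    # Eq. (valabcgen): sides a, b, c in terms of x, y, t
    d = t**2 * y - x
    return (x**2 + t**2 * y**2) / d, x * y * (t**2 + 1) / d, x + y

def heron16(a, b, c):
    # 16 A^2 by Heron's formula, cf. Eq. (defA)
    return (a + b + c) * (a + b - c) * (b + c - a) * (c + a - b)

x, y, t = Fraction(1), Fraction(1), Fraction(2)
a, b, c = sides(x, y, t)
A = t * x * y * (x + y) / (t**2 * y - x)                          # Eq. (defA3)
R = (x**2 + t**2 * y**2) * (t**2 + 1) / (4 * (t**2 * y - x) * t)  # Eq. (defR3)
r = x / t                                                         # Eq. (defr3)

assert heron16(a, b, c) == 16 * A**2                         # rational area
assert R == a * b * c / (4 * A)                              # R = abc/(4A)
assert r == 2 * A / (a + b + c)                              # r = A/s
assert a + b + c == 2 * t**2 * y * (x + y) / (t**2 * y - x)  # Eq. (defP)
\end{verbatim}
For instance, $x=y=1$, $t=2$ gives the triangle $(5/3, 5/3, 2)$ with $A=4/3$, $R=25/24$ and $r=1/2$.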
\section{Three diophantine problems concerning rational triangles with the same circumradius}\label{diophprob}
\subsection{Pairs of rational triangles with a common circumradius and a common perimeter}\label{eqRs}
We will now obtain examples of pairs of triangles with a common circumradius and a common perimeter.
Let the sides $a_1, b_1, c_1$ and $a_2, b_2, c_2$ of the two triangles, respectively, be expressed, using the formulae \eqref{valabcgen}, in terms of two sets of arbitrary nonzero rational parameters $x_i, y_i, t_i, \;i=1,2$, applicable to the two triangles, respectively.
It follows from the formula \eqref{defR3} that if the two triangles have a common circumradius, the parameters $x_i, y_i, t_i, \; i=1, 2$, must satisfy the following condition:
\begin{equation}
(x_1^2+t_1^2y_1^2)(t_1^2 + 1)/(4(t_1^2y_1 - x_1)t_1)=(x_2^2+t_2^2y_2^2)(t_2^2 + 1)/(4(t_2^2y_2 - x_2)t_2). \label{condR}
\end{equation}
Further, on using the formula \eqref{defP}, the condition that the two triangles also have the same perimeter may be written as follows:
\begin{equation}
2t_1^2y_1(x_1 + y_1)/(t_1^2y_1 - x_1) = 2t_2^2y_2(x_2 + y_2)/(t_2^2y_2 - x_2). \label{condP}
\end{equation}
We will now solve the simultaneous equations \eqref{condR} and \eqref{condP}. On equating each side of Eq.~\eqref{condP} to $m$ and solving for $x_1$ and $x_2$, we get,
\begin{equation}
x_1 = t_1^2y_1(m - 2y_1)/(2t_1^2y_1 + m),\quad x_2 = t_2^2y_2(m - 2y_2)/(2t_2^2y_2 + m). \label{valx12Rp}
\end{equation}
On substituting the values of $x_1$ and $x_2$ given by \eqref{valx12Rp}, Eq.~\eqref{condR} may be written as follows:
\begin{equation}
(t_1^2 + 1)(4y_1^2t_1^2 + m^2)(2y_2t_2^2 + m)t_2 = (t_2^2 + 1)(4y_2^2t_2^2 + m^2)(2y_1t_1^2 + m)t_1. \label{condRp1}
\end{equation}
Eq.~\eqref{condRp1} may be considered as a cubic curve in $y_1$ and $y_2$, and a rational point on it, obtained by equating each side of \eqref{condRp1} to 0, is $(y_1, y_2)=(-m/(2t_1^2), -m/(2t_2^2))$. By drawing a tangent to the curve at this point, and taking its intersection with the curve \eqref{condRp1}, we get a rational solution of \eqref{condRp1}.
We can now obtain a solution of the simultaneous equations \eqref{condR} and \eqref{condP}, and using the relations \eqref{valabcgen}, we get the sides of the two triangles which have a common circumradius and common perimeter. We omit the tedious details, and simply give below the sides $a_1, b_1, c_1$, and $a_2, b_2, c_2$ of the two triangles, obtained after appropriate scaling:
\begin{equation}
\begin{aligned}
a_1& =t_1(t_2^2 + 1)(t_1^4t_2^4 + 3t_1^4t_2^2 + 4t_1^3t_2^3 + 3t_1^2t_2^4 + t_1^4+ 2t_1^3t_2 + 3t_1^2t_2^2\\
& \quad \quad + 2t_1t_2^3 + t_2^4)(4t_1^6t_2^6 + 5t_1^6t_2^4 - 2t_1^5t_2^5+ 5t_1^4t_2^6 + 3t_1^6t_2^2\\
& \quad \quad - 2t_1^5t_2^3 + 2t_1^4t_2^4 - 2t_1^3t_2^5 + 3t_1^2t_2^6 + t_1^6 - 2t_1^3t_2^3 + t_2^6),\\
b_1& = 2t_1t_2^3(t_1^2 + 1)^2(t_2^2 + 1)(t_1^2t_2^2 + t_1^2 + t_1t_2 + t_2^2)\\
& \quad \quad \times (3t_1^5t_2^4 + t_1^4t_2^5 + 3t_1^5t_2^2 + t_1^4t_2^3 + t_1^3t_2^4 - t_1^2t_2^5\\
& \quad \quad+ t_1^5 + t_1^4t_2 + t_1^3t_2^2 - t_1^2t_2^3 - t_1t_2^4 - t_2^5), \\
c_1 & = t_1(t_1 + t_2)(t_2^2 + 1)(3t_1^4t_2^4 + 3t_1^4t_2^2 + 3t_1^2t_2^4 + t_1^4+ t_1^2t_2^2 + t_2^4)\\
& \quad \quad \times(2t_1^6t_2^5 + 2t_1^6t_2^3 - t_1^5t_2^4 + 3t_1^4t_2^5 - 3t_1^5t_2^2+ t_1^4t_2^3 \\
& \quad \quad + t_1^3t_2^4 + 3t_1^2t_2^5 - t_1^5 - t_1^4t_2 - t_1^3t_2^2 + t_1^2t_2^3 + t_1t_2^4 + t_2^5),
\end{aligned}
\label{sidestrg1eqRp}
\end{equation}
\begin{equation}
\begin{aligned}
a_2& =t_2(t_1^2 + 1)(t_1^4t_2^4 + 3t_1^4t_2^2 + 4t_1^3t_2^3 + 3t_1^2t_2^4 + t_1^4 + 2t_1^3t_2\\
& \quad \quad + 3t_1^2t_2^2 + 2t_1t_2^3 + t_2^4)(4t_1^6t_2^6 + 5t_1^6t_2^4 - 2t_1^5t_2^5 + 5t_1^4t_2^6 \\
& \quad \quad + 3t_1^6t_2^2 - 2t_1^5t_2^3 + 2t_1^4t_2^4 - 2t_1^3t_2^5 + 3t_1^2t_2^6 + t_1^6 - 2t_1^3t_2^3 + t_2^6),\\
b_2 &= 2t_1^3t_2(t_1^2 + 1)(t_2^2 + 1)^2(t_1^2t_2^2 + t_1^2 + t_1t_2 + t_2^2)\\
& \quad \quad \times (t_1^5t_2^4 + 3t_1^4t_2^5 - t_1^5t_2^2 + t_1^4t_2^3 + t_1^3t_2^4 + 3t_1^2t_2^5\\
& \quad \quad - t_1^5 - t_1^4t_2 - t_1^3t_2^2 + t_1^2t_2^3 + t_1t_2^4 + t_2^5),\\
c_2& =t_2(t_1 + t_2)(t_1^2 + 1)(3t_1^4t_2^4 + 3t_1^4t_2^2 + 3t_1^2t_2^4 + t_1^4 + t_1^2t_2^2 + t_2^4)\\
& \quad \quad \times (2t_1^5t_2^6 + 3t_1^5t_2^4 - t_1^4t_2^5 + 2t_1^3t_2^6 + 3t_1^5t_2^2 + t_1^4t_2^3 \\
& \quad \quad + t_1^3t_2^4 - 3t_1^2t_2^5 + t_1^5 + t_1^4t_2 + t_1^3t_2^2 - t_1^2t_2^3 - t_1t_2^4 - t_2^5).
\end{aligned}
\label{sidestrg2eqRp}
\end{equation}
The common circumradius of the above two triangles is
\begin{multline}
\{(t_1^2 + 1)(t_2^2 + 1)(t_1^4t_2^4 + 3t_1^4t_2^2 + 4t_1^3t_2^3 + 3t_1^2t_2^4 + t_1^4 + 2t_1^3t_2 \\
+ 3t_1^2t_2^2 + 2t_1t_2^3 + t_2^4)(4t_1^6t_2^6 + 5t_1^6t_2^4 - 2t_1^5t_2^5 + 5t_1^4t_2^6 + 3t_1^6t_2^2\\
- 2t_1^5t_2^3 + 2t_1^4t_2^4 - 2t_1^3t_2^5 + 3t_1^2t_2^6 + t_1^6 - 2t_1^3t_2^3 + t_2^6)\}/4,
\end{multline}
and the common perimeter is
\begin{multline}
4t_1^3t_2^3(t_1 + t_2)(t_1^2 + 1)(t_2^2 + 1)(t_1^2t_2^2 + t_1^2 + t_1t_2 + t_2^2)\\
\times (3t_1^4t_2^4 + 3t_1^4t_2^2 + 3t_1^2t_2^4 + t_1^4 + t_1^2t_2^2 + t_2^4). \quad \quad \quad \quad
\end{multline}
As a numerical example, when $t_1=2, t_2=3$, we get, after appropriate scaling, two triangles with sides $ 1321940, 1166616, 1636180$ and $991455$, $1548096, 1585185 $, having a common circumradius $1652425/2$ and a common perimeter $4124736$.
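This example may be verified independently with exact arithmetic; the short script below (ours) recomputes the perimeter and the squared circumradius $R^2=(abc)^2/(16A^2)$ of both triangles:
\begin{verbatim}
from fractions import Fraction as F

def perimeter_and_R2(a, b, c):
    h = (a + b + c) * (a + b - c) * (b + c - a) * (c + a - b)  # 16 A^2 (Heron)
    return a + b + c, F((a * b * c)**2, h)       # R^2 = (abc)^2 / (16 A^2)

p1, R2_1 = perimeter_and_R2(1321940, 1166616, 1636180)
p2, R2_2 = perimeter_and_R2(991455, 1548096, 1585185)
assert p1 == p2 == 4124736
assert R2_1 == R2_2 == F(1652425, 2)**2
\end{verbatim}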
It is interesting to observe that when we take $t_1=1$, the first triangle becomes a right triangle while taking $t_2=1$ makes the second triangle a right triangle. We give below the sides of the two triangles with $t_2=1$:
\begin{equation}
\begin{aligned}
a_1& = 2t_1(5t_1^4 + 6t_1^3 + 6t_1^2 + 2t_1 + 1)(13t_1^6 - 4t_1^5 + 7t_1^4 - 4t_1^3 + 3t_1^2 + 1),\\
b_1& = 4t_1(t_1^2 + 1)^2(2t_1^2 + t_1 + 1)(7t_1^5 + 3t_1^4 + 2t_1^3 - 2t_1^2 - t_1 - 1),\\
c_1& = 2t_1(t_1 + 1)(7t_1^4 + 4t_1^2 + 1)(4t_1^6 - 5t_1^5 + 3t_1^4 + 4t_1^2 + t_1 + 1),
\end{aligned}
\label{sidestrg1eqRpspl}
\end{equation}
\begin{equation}
\begin{aligned}
a_2& = (t_1^2 + 1)(5t_1^4 + 6t_1^3 + 6t_1^2 + 2t_1 + 1)(13t_1^6 - 4t_1^5 + 7t_1^4 \\
& \quad \quad - 4t_1^3 + 3t_1^2 + 1), \\
b_2& = -8t_1^3(t_1^2 + 1)(2t_1^2 + t_1 + 1)(t_1^5 - 3t_1^4 - 4t_1^2 - t_1 - 1),\\
c_2& = (t_1 + 1)(t_1^2 + 1)(7t_1^4 + 4t_1^2 + 1)(9t_1^5 + t_1^4 + 4t_1^3 - 4t_1^2 - t_1 - 1).
\end{aligned}
\label{sidestrg2eqRpspl}
\end{equation}
We now have a scalene triangle whose sides are given by \eqref{sidestrg1eqRpspl} and a right triangle whose sides are given by \eqref{sidestrg2eqRpspl} such that the two triangles have a common circumradius
\[(t_1^2 + 1)(5t_1^4 + 6t_1^3 + 6t_1^2 + 2t_1 + 1)(13t_1^6 - 4t_1^5 + 7t_1^4 - 4t_1^3 + 3t_1^2 + 1)/2\]
and a common perimeter
\[ 8t_1^3(t_1 + 1)(t_1^2 + 1)(2t_1^2 + t_1 + 1)(7t_1^4 + 4t_1^2 + 1).\]
As a numerical example, when $t_1=2$, we get two triangles with sides $ 500516, 609400, 252324$ and $625645, 123200, 613395$ having common circumradius $625645/2$ and common perimeter $1362240$.
We note that the cubic curve in $y_1$ and $y_2$ defined by Eq.~\eqref{condRp1} may be considered as an elliptic curve, and more rational points on the curve \eqref{condRp1} may be found by the well-known tangent and chord process or equivalently, by using the group law. These rational points will yield additional parametric solutions of the problem of finding pairs of rational triangles with a common circumradius and a common perimeter.
\subsection{Pairs of rational triangles with a common circumradius and a common inradius}\label{eqRr}
We will now obtain two triangles with a common circumradius and a common inradius. As in Section \ref{eqRs}, let the sides $a_i, b_i, c_i, \, i=1, 2$, of the two triangles be expressed, using the formulae \eqref{valabcgen}, in terms of arbitrary parameters $x_i, y_i, t_i, \, i=1, 2$.
The condition for the two triangles to have a common circumradius is given by \eqref{condR} while, on using the formula \eqref{defr3}, the condition that they have a common inradius may be written as follows:
\begin{equation}
x_1/t_1=x_2/t_2. \label{condr}
\end{equation}
We may therefore write $x_1=mt_1, x_2=mt_2$, where $m$ is an arbitrary parameter, and now the condition \eqref{condR} reduces to
\begin{multline}
t_2(t_1^2 + 1)y_1^2y_2 - t_1(t_2^2 + 1)y_1y_2^2 - m(t_1^2 + 1)y_1^2 + m(t_2^2 + 1)y_2^2\\
- m^2t_1(t_2^2 + 1)y_1 + m^2t_2(t_1^2 + 1)y_2 - m^3(t_1 - t_2)(t_1 + t_2)=0. \label{condRr1}
\end{multline}
This is a cubic equation in $y_1$ and $y_2$, and it is easily observed that a rational point on the cubic curve defined by \eqref{condRr1} is given by $(y_1, y_2)=(t_2m, t_1m)$. We now draw a tangent to the cubic curve at the point $(t_2m, t_1m)$, and take its intersection with the curve, and thus obtain a new rational point on the curve \eqref{condRr1}. Using this new rational point, we readily obtain a rational solution of the simultaneous diophantine equations \eqref{condR} and \eqref{condr}. Now on using the relations \eqref{valabcgen}, we obtain the sides of two triangles with a common circumradius and a common inradius. On appropriate scaling, the sides $a_1, b_1, c_1$, and $a_2, b_2, c_2$ of the two triangles may be written as follows:
\begin{equation}
\begin{aligned}
a_1& = t_1(t_2^2 + 1)(t_1^2t_2^2 + t_1^2 - 8t_1t_2 + t_2^2 + 9),\\
b_1 &= -(t_1t_2^2 - t_1 - 2t_2)(t_1^2 + 1)(2t_1t_2 - t_2^2 - 3), \\
c_1& = -2(t_1^2t_2^2 - t_1^2 - 4t_1t_2 + t_2^2 + 3)(t_1^2t_2 - 2t_1 - t_2),\\
\end{aligned}
\label{sidestrg1eqRr}
\end{equation}
\begin{equation}
\begin{aligned}
a_2& = t_2(t_1^2 + 1)(t_1^2t_2^2 + t_1^2 - 8t_1t_2 + t_2^2 + 9),\\
b_2& = (t_1^2t_2 - 2t_1 - t_2)(t_2^2 + 1)(t_1^2 - 2t_1t_2 + 3),\\
c_2& = -2(t_1^2t_2^2 + t_1^2 - 4t_1t_2 - t_2^2 + 3)(t_1t_2^2 - t_1 - 2t_2),
\end{aligned}
\label{sidestrg2eqRr}
\end{equation}
where $t_1$ and $t_2$ are arbitrary parameters.
The common circumradius of the above two triangles is
\[(t_1^2 + 1)(t_2^2 + 1)(t_1^2t_2^2 + t_1^2 - 8t_1t_2 + t_2^2 + 9)/4,\]
while their common inradius is $2(t_1t_2^2 - t_1 - 2t_2)(t_1^2t_2 - 2t_1 - t_2)$.
As a numerical example, when $t_1=9/2$ and $t_2=7/6$,
we get, after appropriate scaling, two triangles with sides $2055, 1105, 3002$, and $ 4795, 4845, 482$, with their common circumradius and common inradius being $58225/24$ and $228$, respectively.
As in the case of the two triangles in Section \ref{eqRs} given by \eqref{sidestrg1eqRp} and \eqref{sidestrg2eqRp}, we note that if we take $t_1=1$, the first triangle given by \eqref{sidestrg1eqRr} becomes a right triangle while on taking $t_2=1$, the second triangle given by \eqref{sidestrg2eqRr} becomes a right triangle. We give below the sides of the two triangles with $t_2=1$:
\begin{equation}
\begin{aligned}
a_1& = 2t_1(t_1^2 - 4t_1 + 5),\\
b_1 &= 2(t_1^2 + 1)(t_1 - 2),\\
c_1& = 4(t_1 - 1)(t_1^2 - 2t_1 - 1)
\end{aligned}
\label{sidestrg1eqRrspl}
\end{equation}
\begin{equation}
\begin{aligned}
a_2& = (t_1^2 + 1)(t_1^2 - 4t_1 + 5),\\
b_2& = (t_1^2 - 2t_1 - 1)(t_1^2 - 2t_1 + 3), \\
c_2& = 4(t_1 - 1)^2.
\end{aligned}
\label{sidestrg2eqRrspl}
\end{equation}
We now have a scalene triangle whose sides are given by \eqref{sidestrg1eqRrspl} and a right triangle whose sides are given by \eqref{sidestrg2eqRrspl} such that the two triangles have a common circumradius
$(t_1^2 + 1)(t_1^2 - 4t_1 + 5)/2$ and a common inradius $2(t_1^2 - 2t_1 - 1)$.
As a numerical example, when $t_1=4$,
we get two triangles with sides $40, 68, 84$, and $85, 77, 36$, with the common circumradius and common inradius being $85/2$ and $14$, respectively.
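This example, too, can be checked directly; the sketch below (ours) recomputes both radii exactly and also confirms that the second triangle is right-angled:
\begin{verbatim}
from fractions import Fraction as F
from math import isqrt

def R_and_r(a, b, c):
    # 16 A^2 by Heron; then R = abc/(4A) and r = A/s = 4A/(2(a+b+c))
    h = (a + b + c) * (a + b - c) * (b + c - a) * (c + a - b)
    fourA = isqrt(h)
    assert fourA * fourA == h                 # the area is rational
    return F(a * b * c, fourA), F(fourA, 2 * (a + b + c))

assert R_and_r(40, 68, 84) == R_and_r(85, 77, 36) == (F(85, 2), F(14))
assert 36**2 + 77**2 == 85**2                 # right triangle, as expected for t_2 = 1
\end{verbatim}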
As in Section \ref{eqRs}, the cubic curve defined by Eq.~\eqref{condRr1} may be considered as an elliptic curve, and additional rational points on this curve, found using the group law, will lead to additional parametric solutions of our problem.
\subsection{Pairs of rational triangles with a common circumradius and a common area}\label{eqRA}
We will now find pairs of rational triangles with a common circumradius and a common area. It follows from formulae \eqref{defA} and \eqref{defR} that two triangles, with rational sides $a_1, b_1, c_1$, and $a_2, b_2, c_2$, respectively, will have a common circumradius and a common area if the following conditions are satisfied:
\begin{equation}
a_1b_1c_1=a_2b_2c_2, \label{cond1}
\end{equation}
and
\begin{multline}
(a_1+b_1+c_1)(a_1+b_1-c_1)(b_1+c_1-a_1)(c_1+a_1-b_1)\\
=(a_2+b_2+c_2)(a_2+b_2-c_2)(b_2+c_2-a_2)(c_2+a_2-b_2). \label{cond2}
\end{multline}
Further, the common area of the two triangles will be rational if and only if each side of Eq.~\eqref{cond2} is a perfect square.
To solve the simultaneous diophantine equations \eqref{cond1} and \eqref{cond2}, we write,
\begin{equation}
a_1=pu,\quad b_1=qv,\quad a_2=pv, \quad b_2=qu, \quad c_2=c_1, \label{subsabc}
\end{equation}
where $p, q, u $ and $ v$ are arbitrary parameters. Now Eq.~\eqref{cond1} is identically satisfied while Eq.~\eqref{cond2} reduces to
\begin{equation}
(u - v)(u + v)(p - q)(p + q)\{2c_1^2 - (u^2 + v^2)(p^2 + q^2)\}=0. \label{cond2a}
\end{equation}
To obtain a nontrivial solution of Eqs.~\eqref{cond1} and \eqref{cond2}, the last factor on the left-hand side of Eq.~\eqref{cond2a} must be equated to 0. We now have a quadratic equation in $u, v$ and $c_1$, and accordingly, we readily obtain the following solution of Eq.~\eqref{cond2a}:
\begin{equation}
\begin{aligned}
u& = (m^2 + 2mn - n^2)p - (m^2 - 2mn - n^2)q,\\
v& = (m^2 - 2mn - n^2)p + (m^2 + 2mn - n^2)q,\\
c_1&=(p^2 + q^2)(m^2 + n^2),
\end{aligned}
\label{valuvc1}
\end{equation}
where $m$ and $n$ are arbitrary parameters.
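That this choice works can be confirmed symbolically; the following sympy sketch (ours) checks that \eqref{valuvc1} annihilates the last factor of Eq.~\eqref{cond2a}, that is, $2c_1^2=(u^2+v^2)(p^2+q^2)$ identically:
\begin{verbatim}
import sympy as sp

m, n, p, q = sp.symbols('m n p q')
u  = (m**2 + 2*m*n - n**2)*p - (m**2 - 2*m*n - n**2)*q
v  = (m**2 - 2*m*n - n**2)*p + (m**2 + 2*m*n - n**2)*q
c1 = (p**2 + q**2)*(m**2 + n**2)

# u^2 + v^2 = 2 (m^2 + n^2)^2 (p^2 + q^2), so the factor vanishes:
assert sp.expand(2*c1**2 - (u**2 + v**2)*(p**2 + q**2)) == 0
\end{verbatim}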
On substituting the values of $u, v$ and $c_1$ in the relations \eqref{subsabc}, we obtain the following solution of the simultaneous equations \eqref{cond1} and \eqref{cond2}:
\begin{equation}
\begin{aligned}
a_1 &= (m^2 + 2mn - n^2)p^2 - (m^2 - 2mn - n^2)pq,\\
a_2& = (m^2 - 2mn - n^2)p^2 + (m^2 + 2mn - n^2)pq, \\
b_1& = (m^2 - 2mn - n^2)pq + (m^2 + 2mn - n^2)q^2,\\
b_2& = (m^2 + 2mn - n^2)pq - (m^2 - 2mn - n^2)q^2,\\
c_1&= c_2 = (m^2 + n^2)p^2 + (m^2 + n^2)q^2,
\end{aligned}
\label{solabc}
\end{equation}
where $m, n, p$ and $q$ are arbitrary parameters.
We now have two triangles with a common circumradius and a common area. For the common area to be rational, the values of $a_1, b_1, c_1$ given by \eqref{solabc} must satisfy the condition,
\begin{equation}
(a_1+b_1+c_1)(a_1+b_1-c_1)(b_1+c_1-a_1)(c_1+a_1-b_1)=h^2, \label{areasq}
\end{equation}
where $h$ is some nonzero rational number.
On using the relations \eqref{solabc}, the condition \eqref{areasq} may be written as follows:
\begin{multline}
16mn(p^2 + q^2)^2(m + n)(m - n)(mp + mq - np + nq)(mq - np)\\
\times (mp + nq)(mp - mq + np + nq) = h^2. \label{areasq2}
\end{multline}
Now on writing,
\begin{equation}
m=tn,\quad p=uq, \quad h=4v(u^2 + 1)n^4q^4, \label{areatr1}
\end{equation}
where $t$ is an arbitrary rational parameter, the condition \eqref{areasq2} reduces to
\begin{multline}
-(t + 1)^2(t - 1)^2t^2u^4 + (t^2 + 2t - 1)(t^2 - 2t - 1)(t - 1)(t + 1)tu^3\\
+ 6(t + 1)^2(t - 1)^2t^2u^2 - (t^2 + 2t - 1)(t^2 - 2t - 1)(t - 1)(t + 1)tu\\
- (t + 1)^2(t - 1)^2t^2 = v^2. \label{areasq3}
\end{multline}
It is readily seen that when $u=1$, the left-hand side of \eqref{areasq3} is a perfect square, namely $4(t + 1)^2(t - 1)^2t^2$. We note that the left-hand side of \eqref{areasq3} is a quartic function of $u$, and we know one value of $u$ that makes this quartic function a perfect square. Now on applying a method described by Fermat (as quoted by Dickson \cite[p. 639]{Di}), we obtain the following value of $u$ that makes the left-hand side of \eqref{areasq3} a perfect square:
\begin{multline}
u=(t^8 + 8t^7 + 20t^6 - 56t^5 - 26t^4 + 56t^3 + 20t^2 - 8t + 1)\\
\times (t^8 - 8t^7 + 20t^6 + 56t^5 - 26t^4 - 56t^3 + 20t^2 + 8t + 1)^{-1}. \label{valu}
\end{multline}
With the value of $u$ given by \eqref{valu}, and using the relations \eqref{solabc} and \eqref{areatr1}, we get the sides of two rational triangles which have a common circumradius and a common area. On appropriate scaling, the sides $a_1, b_1, c_1$, and $a_2, b_2, c_2$ of the two triangles may be written as follows:
\begin{equation}
\begin{aligned}
a_1 &= 2t(t^4 - 2t^2 + 5)(5t^4 - 2t^2 + 1)(t^8 + 8t^7 + 20t^6 \\
& \quad \quad - 56t^5 - 26t^4 + 56t^3 + 20t^2 - 8t + 1),\\
b_1 &= (t - 1)(t + 1)(t^4 - 4t^3 + 10t^2 - 4t + 1)(t^4 + 4t^3 \\
& \quad \quad + 10t^2 + 4t + 1)(t^8 - 8t^7 + 20t^6 + 56t^5 - 26t^4\\
& \quad \quad - 56t^3 + 20t^2 + 8t + 1),\\
c_1 &= (t^2 + 1)(t^{16} + 104t^{14} - 548t^{12} + 3032t^{10} - 4922t^8\\
& \quad \quad+ 3032t^6 - 548t^4 + 104t^2 + 1),
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
a_2 &= (t - 1)(t + 1)(t^4 - 4t^3 + 10t^2 - 4t + 1)(t^4 + 4t^3\\
& \quad \quad + 10t^2 + 4t + 1) (t^8 + 8t^7 + 20t^6 - 56t^5 - 26t^4\\
& \quad \quad + 56t^3 + 20t^2 - 8t + 1),\\
b_2 &= 2t(t^4 - 2t^2 + 5)(5t^4 - 2t^2 + 1)(t^8 - 8t^7 + 20t^6 + 56t^5\\
& \quad \quad - 26t^4 - 56t^3 + 20t^2 + 8t + 1), \\
c_2 &= (t^2 + 1)(t^{16} + 104t^{14} - 548t^{12} + 3032t^{10} - 4922t^8 \\
& \quad \quad + 3032t^6 - 548t^4 + 104t^2 + 1),
\end{aligned}
\end{equation}
where $t$ is an arbitrary parameter.
The common circumradius of the two triangles is
\begin{multline*}
(t^2+1)(t^4 - 2t^2 + 5)(5t^4 - 2t^2 + 1)(t^4 - 4t^3 + 10t^2 - 4t + 1)(t^4 + 4t^3 \\
+ 10t^2+ 4t + 1)(t^8 + 8t^7 + 20t^6 - 56t^5 - 26t^4 + 56t^3 + 20t^2 - 8t + 1)\\
\times (t^8 - 8t^7 + 20t^6 + 56t^5 - 26t^4 - 56t^3 + 20t^2 + 8t + 1)\{2(3t^4 - 6t^2 - 1)\\
\times (t^4 + 6t^2 - 3)(t^4 - 4t^3 - 6t^2 - 4t + 1)(t^4 + 4t^3 - 6t^2 + 4t + 1)\}^{-1},
\end{multline*}
while their common area is
\begin{multline*}
t(t - 1)(t + 1)(3t^4 - 6t^2 - 1)(t^4 + 6t^2 - 3)(t^4 - 4t^3 - 6t^2 - 4t + 1)(t^4 + 4t^3 \\
- 6t^2 + 4t + 1)(t^{16} + 104t^{14} - 548t^{12} + 3032t^{10} - 4922t^8 + 3032t^6 - 548t^4 + 104t^2 + 1).
\end{multline*}
As a numerical example when $t=2$, we get two triangles with sides
\[3283540, 7603539, 7776485, \quad {\rm and} \quad 4279155, 5834452, 7776485,\]
having common circumradius $10402718520025/2639802$ and common area $12317028393582$.
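The internal consistency of this example can be checked exactly; the script below (ours) verifies that the two triangles have equal products $abc$ and equal Heron products, hence a common rational area and, via $R=abc/(4A)$, a common circumradius (printing them should reproduce the values quoted above):
\begin{verbatim}
from fractions import Fraction as F
from math import isqrt

T1 = (3283540, 7603539, 7776485)
T2 = (4279155, 5834452, 7776485)

def heron16(a, b, c):                        # 16 A^2
    return (a + b + c) * (a + b - c) * (b + c - a) * (c + a - b)

h1, h2 = heron16(*T1), heron16(*T2)
abc1, abc2 = T1[0]*T1[1]*T1[2], T2[0]*T2[1]*T2[2]
assert h1 == h2 and abc1 == abc2             # equal areas and equal circumradii
fourA = isqrt(h1)
assert fourA * fourA == h1                   # the common area is rational
print(F(fourA, 4), F(abc1, fourA))           # common area and circumradius
\end{verbatim}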
We note that additional solutions of Eq.~\eqref{areasq3} may be found by repeated application of the aforementioned method of Fermat. Alternatively, we may consider Eq.~\eqref{areasq3} as a quartic model of an elliptic curve, reduce it by a birational transformation to the cubic Weierstrass model using the known rational point on the quartic curve \eqref{areasq3}, and find additional rational points on the cubic elliptic curve using the group law. These rational points may then be used to obtain additional parametric solutions of our problem.
\section{Some open problems}
We have obtained pairs of rational triangles with a common circumradius and also having either a common perimeter or a common inradius or a common area. It would be of interest to determine whether there are three or more rational triangles with a common circumradius and also having a common perimeter or a common inradius or a common area.
\section{Introduction}
\label{sec:intro}
Black holes in the Einstein-Skyrme model are counterexamples to the no-hair conjecture, which states that a black hole can be characterized only by its mass and its electric and magnetic charges. Up to now, these objects have been studied mostly in four dimensions. In particular, several models admitting asymptotically flat spacetimes exist in the literature, see for example, \cite{Volkov:2016ehx}. There are only a few models with asymptotically anti-de Sitter spacetimes \cite{Shiiki:2005aq, Shiiki:2005xn, Perapechka:2016cof}, while there is only one model with a de Sitter background \cite{Brihaye:2005an}.
Some higher dimensional models can be mentioned as follows. A five dimensional Einstein-Skyrme model has been considered which can be thought of as an $O(5)$ sigma model coupled to gravity \cite{Brihaye:2017wqa}. The model has some universality properties that can be adapted to any dimension higher than five; for example, a topological charge is globally defined on the spacetime. The second model is a seven dimensional Skyrme brane \cite{BlancoPillado:2008cp}, which provides a brane world scenario. The authors take the skyrmion field to be defined on a warped spherically symmetric three dimensional submanifold of a seven-dimensional spacetime which is conformal to $\mathrm{I\hspace{-0.7mm}R}^{3+1} \times {\mathcal S}^3$, where $\mathrm{I\hspace{-0.7mm}R}^{3+1}$ and $ {\mathcal S}^3$ are the four dimensional Minkowski spacetime and a conformally flat three geometry, respectively. Another example of black holes in higher dimensional Einstein-Skyrme theories is studied in \cite{Gunara:2018lma}. In this latter model, the authors study the Einstein-Skyrme theory with the cosmological constant $\Lambda \le 0$ on a $(d+1)$-dimensional nontrivial static spacetime $\mathcal{M}^{d+1}$ which is conformal to $ \mathcal{M}^{3+1} \times \mathcal{N}^{d-3}$, where $\mathcal{M}^{3+1}$ and $\mathcal{N}^{d-3}$ are the four dimensional spacetime and a compact $(d-3)$-dimensional submanifold, respectively. They then construct a family of static black holes and prove the existence of such solutions with finite energy if $\Lambda = 0$.
The purpose of this paper is to provide an analysis of static black hole solutions of higher dimensional Einstein-Skyrme models with general couplings, including a scalar potential $V(\phi)$ and the cosmological constant $\Lambda \le 0$. In particular, we set the spacetime $\mathcal{M}^{d+1}$ with $d \ge 3$ to be static and conformal to $ \mathcal{M}^{1+1} \times S^{d-1}$, where $\mathcal{M}^{1+1}$ and $S^{d-1}$ are a two dimensional spacetime and the compact $(d-1)$-dimensional sphere, respectively, with the metric functions $\delta(r)$ and $m(r)$. We also have an $SU(2)$ valued Skyrme field which is locally defined on the submanifold $\mathcal{S}^{d} \subseteq \mathcal{M}^{d+1} $, where $\mathcal{S}^{d}$ is conformal to $\mathrm{I\hspace{-0.7mm}R}^+ \times S^{d-1} $ with $\mathrm{I\hspace{-0.7mm}R}^+ $ being the set of positive real numbers. This Skyrme field can further be simplified into a form that can be expressed in terms of a profile function $\xi \equiv \xi(r)$, where $r$ is the radial coordinate \cite{Date:1986be}.
We write down some consequences of the above as follows. First, the covariant topological current, such as the baryon number, lives locally on the submanifold $\mathcal{S}^{d}$. Second, to obtain a physical black hole, we have to specify the behavior of the functions $\delta(r), ~ m(r)$, and $\xi(r)$ on the boundaries, namely, at the (event) horizon and the outer boundary, together with their local-global existence and their linear stability. Finally, we want to mention that since our model consists of general couplings in diverse dimensions ($d \ge 3$), which is a complicated structure, it is not necessary to use the notion of branches related to the value of $\xi(r)$ on the horizon. Instead, we just require the value of $\xi(r)$ on the horizon to be regular (or finite), which will be useful in establishing the local-global existence of solutions.
In order to have a well-defined model, we first have to analyze the functions $\delta(r)$, $m(r)$, and $\xi(r)$ near the boundaries. Near the (event) horizon, all the functions can be linearly expanded such that they become fixed on the horizon. In this region, the spacetime $\mathcal{M}^{d+1}$ breaks into ${\mathcal T}^{1+1} \times S^{d-1} $, where the 2-surface ${\mathcal T}^{1+1}$ could be either a flat Minkowski surface $\mathrm{I\hspace{-0.7mm}R}^{1+1}$ or an anti-de Sitter surface $AdS_2$ \cite{Kunduri:2007vf}. Analysis of the Ricci scalar implies that the value of $\xi(r)$ on the horizon has to satisfy an inequality in order to have a consistent solution. In the asymptotic limit, we set the decay rates of the functions $\delta(r), ~ m(r)$, and $\xi(r)$ to be of the form $O(r^{-n})$ with $n \ge 1$, such that the lowest order terms of these functions are constants, implying that the black hole spacetime converges to an Einstein geometry.
Next, we establish local-global existence and uniqueness of this skyrmionic black hole solution. By employing Picard's iteration and the contraction mapping properties, we first show local existence and uniqueness of the solutions. Then, using the uniqueness property, we argue that the local solution can be extended to the maximal solution by gluing some of the local solutions. Since the functions $\delta(r), ~ m(r)$, and $\xi(r)$ decay as $O(r^{-n})$ with $n \ge 1$ in the asymptotic region, a family of global solutions with finite energy can be established by requiring both the field $\xi(r)$ and the scalar potential to vanish in this region for arbitrary $\Lambda \le 0$.
Finally, we discuss linear stability of the black holes using a perturbation method that leads to a linear equation of Sturm-Liouville type; in other words, we have an eigenvalue problem. Using the behavior of the functions $\delta(r), ~ m(r)$, and $\xi(r)$ in the asymptotic region and applying the so-called fixed point theorem \cite{Krasnosel_1964}, the existence of both stable and unstable solutions can be shown for any $\Lambda \le 0$.
We organize this paper as follows. In Section \ref{sec:Skyrmemod} we briefly discuss the Skyrme model in diverse dimensions. The static solutions of the theory are considered in Section \ref{sec:EinsteinSkyrmemod}. We perform the analysis of the solutions near the boundaries, namely, the (event) horizon and the asymptotic region, in Section \ref{sec:boundaryprop}. In Section \ref{sec:ExisSolFinitE} we prove that there exists a family of unique global solutions with finite energy in the model. In Section \ref{sec:linearstableanal} we consider linear stability of solutions and show the existence of stable and unstable solutions.
\section{Skyrme Model in Diverse Dimension}
\label{sec:Skyrmemod}
In this section we briefly discuss the Skyrme model in $d \ge 3$ spatial dimensions. The starting point is the standard Skyrme model in \(3+1\) dimensions, whose action is of the form
\begin{equation}\label{eq:SkyrmeL4d}
\mathcal{S}_4= \int dx^4\sqrt{-g}\left( \gamma_1g^{\mu\nu}~Tr(L_\mu L_\nu)+\gamma_2g^{\mu\nu}g^{\alpha\beta}~Tr([L_\mu, L_\alpha][L_\nu,L_\beta])\right) ~ ,
\end{equation}
with \(L_\mu=U^{\dagger}\partial_\mu U\), where \(U=\phi^0I_{2}+i\vec{\phi}\cdot\vec{\sigma}\) is an \(SU(2)\) valued chiral field originally proposed by T. Skyrme \cite{Skyrme:1961vq, Skyrme:1962vh}. The components of \(\vec{\sigma}=(\sigma_1,\sigma_2,\sigma_3)\) are the Pauli matrices, while $(\phi^0,\vec{\phi})$ are real scalar fields satisfying the \(O(4)\) model condition $ \phi^a\phi^a=1$; see also, for example, \cite{Brihaye:2017wqa, Manton:2004tk}.
In fact, one can construct such a functional in another way by introducing a so-called strain tensor \(D=JJ^T\), where \(J\) is the Jacobian matrix of the map \cite{manton1987}. The energy functional of the static Skyrme model can be constructed from the invariants of the strain tensor $D$, which are combinations of its eigenvalues. The Lagrangian in \eqref{eq:SkyrmeL4d} can then be expressed as the sum of the two terms
\begin{eqnarray}
\phi^a_i\phi^a_jg^{ij} ~ , \\
\phi^a_{[i}\phi^b_{j]}\phi^a_{[k}\phi^b_{l]}g^{ij}g^{kl} ~ ,
\end{eqnarray}
where \(\phi^a_i \equiv \frac{\partial\phi^a}{\partial x^i}\) and \(g^{ij}\) are the components of the inverse metric tensor. This construction of the Skyrme model can be employed to generalize \eqref{eq:SkyrmeL4d} either by including higher order terms \cite{Gudnason_2017} or by extending the Skyrme model to five dimensions \cite{Brihaye:2017wqa}.
Now, let us consider the eigenvalues of \(D_{ij}\) from a \(d+1\) dimensional Skyrme model, that is, \(\lambda_1^2,\lambda_2^2,\dots,\lambda_d^2\). The most general Lagrangian can be written as
\begin{equation} \label{eq:SkyrmeLdgen}
{\mathcal L} = - \gamma_0 V+\sum_{n=1}^d \gamma_nL_n ~ ,
\end{equation}
where $\gamma_p \ge 0$, $p = 0,...,d$, are the coupling constants, \(V\equiv V(\phi)\) is a scalar potential, and \(L_n\) have the form
\begin{eqnarray}
L_1&\propto& \lambda_1^2+\dots+\lambda_d^2 ~ , \nonumber\\
L_2&\propto& \lambda_1^2\lambda_2^2+\lambda_1^2\lambda_3^2+\dots+\lambda_{d-1}^2\lambda_d^2 ~ , \nonumber\\
\vdots \nonumber\\
L_d&\propto& \lambda_1^2\lambda_2^2\dots\lambda_d^2 ~ .
\end{eqnarray}
In the four dimensional case (\(d=3\)), we have the standard Skyrme model with the additional BPS-Skyrme term (\(L_3\propto \lambda_1^2\lambda_2^2\lambda_3^2\)) proposed in \cite{Adam:2010zz}.
Using the above prescription and identifying
\begin{eqnarray}\label{eq:SkyrmeLdgenterm}
L_n = \frac{1}{\left(n!\right)^2} \phi^{a_1}_{[i_1}\dots\phi^{a_n}_{i_n]}\phi^{a_1}_{[j_1}\dots\phi^{a_n}_{j_n]}g^{i_1j_1}\dots g^{i_nj_n} ~ ,
\end{eqnarray}
the Lagrangian of the model has the form
\begin{equation}\label{eq:SkyrmeLdgenform}
{\mathcal L} = - \gamma_0 V+\sum_{n=1}^d \frac{\gamma_n}{\left(n!\right)^2}\phi^{a_1}_{[i_1}\dots\phi^{a_n}_{i_n]}\phi^{a_1}_{[j_1}\dots\phi^{a_n}_{j_n]}g^{i_1j_1}\dots g^{i_nj_n} ~ ,
\end{equation}
which is just an \(O(d+1)\) model proposed in \cite{Brihaye:2017wqa}.
Let us turn to a special case where we use the hedgehog ansatz for the scalars $\phi$, whose form is given by \cite{Date:1986be}
\begin{equation}\label{eq:hedgehodans}
\phi = \left(\cos\xi,\frac{\textbf{x}}{r}\sin\xi\right) ~ ,
\end{equation}
where \(\textbf{x}\) is a position vector on \(\mathbb{R}^d\) satisfying \(\textbf{x}^T\textbf{x}=r^2\) and \(\xi\equiv\xi(r)\).
Using this ansatz, the corresponding strain tensor is
\begin{equation}
D_{ij} = \phi^a_i\phi^a_j=\delta_{ij}\frac{\sin^2\xi}{r^2}+x_ix_j\left( \frac{(\xi')^2}{r^2}-\frac{\sin^2\xi}{r^4}\right) ~ .
\end{equation}
Then, using the eigenvalue equation $det(D-\lambda^2I_d)=0$, the terms of the Lagrangian \eqref{eq:SkyrmeLdgenterm} simplify to
\begin{equation}\label{eq:SkyrmeLdgentermansatz}
L_n = \frac{\sin^{2(n-1)}\xi}{r^{2(n-1)}}\left( C^{d-1}_{n-1}(\xi')^2+\left(C^{d}_{n} - C^{d-1}_{n-1}\right)\frac{\sin^2\xi}{r^2}\right) ~ ,
\end{equation}
where $C^{d}_{n} \equiv \frac{d!}{n! (d-n)!}$ and $\xi' \equiv \frac{d\xi}{dr}$ showing that our case admits spherical symmetry.
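The reduction \eqref{eq:SkyrmeLdgentermansatz} follows because the hedgehog strain tensor has the eigenvalue $(\xi')^2$ once (along $\textbf{x}$) and $\sin^2\xi/r^2$ with multiplicity $d-1$, combined with the Pascal identity $C^{d}_{n}-C^{d-1}_{n-1}=C^{d-1}_{n}$. The following sympy sketch (our own check, assuming $L_n$ is identified with the $n$-th elementary symmetric polynomial of the eigenvalues $\lambda_i^2$) verifies \eqref{eq:SkyrmeLdgentermansatz} for $3 \le d \le 6$:
\begin{verbatim}
import sympy as sp
from itertools import combinations
from functools import reduce
from operator import mul
from math import comb

r, xi, xip = sp.symbols('r xi xip', positive=True)   # xip stands for xi'

for d in range(3, 7):
    # eigenvalues of D: xip**2 once, sin(xi)**2/r**2 with multiplicity d - 1
    lam2 = [xip**2] + [sp.sin(xi)**2 / r**2] * (d - 1)
    for n in range(1, d + 1):
        # L_n as the n-th elementary symmetric polynomial of the eigenvalues
        Ln = sum(reduce(mul, c) for c in combinations(lam2, n))
        rhs = sp.sin(xi)**(2*(n-1)) / r**(2*(n-1)) * (
            comb(d-1, n-1) * xip**2
            + (comb(d, n) - comb(d-1, n-1)) * sp.sin(xi)**2 / r**2)
        assert sp.simplify(Ln - rhs) == 0
\end{verbatim}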
\section{Static Solutions in Einstein-Skyrme Model}
\label{sec:EinsteinSkyrmemod}
We can now couple the above Skyrme model to gravity via the Einstein-Hilbert Lagrangian
\begin{equation}\label{eq:EinsteinSkyrmeLdgenform}
{\mathcal L}_g = \frac{1}{2} (R - 2 \Lambda ) - \gamma_0 V+\sum_{n=1}^d \frac{\gamma_n}{\left(n!\right)^2}\phi^{a_1}_{[i_1}\dots\phi^{a_n}_{i_n]}\phi^{a_1}_{[j_1}\dots\phi^{a_n}_{j_n]}g^{i_1j_1}\dots g^{i_nj_n} ~ ,
\end{equation}
where $R$ and $\Lambda$ are the Ricci scalar and the cosmological constant, respectively, defined on the $(d+1)$-dimensional spacetime ${\mathcal M}^{d+1}$. Throughout the paper, we take $\Lambda \le 0$. In order to be compatible with the discussion in the preceding section, we take the metric ansatz to be static and spherically symmetric,
\begin{equation}\label{eq:metricans}
ds^2=-e^{2\delta}f ~ dt^2+\frac{dr^2}{f} + r^2 R_0^2 ~ d\Omega^2_{d-1} ~ ,
\end{equation}
where \(\delta\equiv\delta(r)\), \(f\equiv f(r)\), and \(d\Omega^2_{d-1}\) is the metric of the $(d-1)$-dimensional sphere $S^{d-1}$ of radius $R_0 > 0$.
On the metric \eqref{eq:metricans}, the effective static energy functional for the Skyrme model has the form
\begin{equation}\label{eq:EeffSkyrme}
E=\int d^{d}x \sqrt{-g^{(d+1)}} \left( u f (\xi')^2+v \right) ~ ,
\end{equation}
with
\begin{eqnarray}\label{eq:uv}
u &\equiv& \sum_{n=1}^d\gamma_n C^{d-1}_{n-1} \frac{\sin^{2(n-1)}\xi}{(rR_0)^{2(n-1)}} ~ , \nonumber\\
v &\equiv& \gamma_0V+\sum_{n=1}^d \gamma_n C^{d}_{n} \left(1-\frac{n}{d}\right) \frac{ \sin^{2n}\xi}{(rR_0)^{2n}} ~ ,
\end{eqnarray}
and we assume $V = V(\xi)$ for the rest of this paper. Varying the energy \eqref{eq:EeffSkyrme} with respect to $\xi$ leads to the equation of motion for $\xi$ given by
\begin{equation}\label{eq:eomSkyrme}
\xi''+\left(\frac{d-1}{r}+\delta'+\frac{f'}{f}+\frac{1}{u}\partial_r u \right)\xi'+ \frac{(\xi')^2}{2u}\frac{\partial u}{\partial \xi} =\frac{1}{2uf} \partial_\xi v ~ ,
\end{equation}
where $\partial_r u \equiv \frac{\partial u}{\partial r}$ and $\partial_\xi v \equiv \frac{\partial v}{\partial\xi}$.
Next, the dynamics of the metric functions $\delta(r)$ and $f(r)$ is governed by the Einstein field equations coming from the variation of the action associated with the Lagrangian \eqref{eq:EinsteinSkyrmeLdgenform}. The components of the Einstein field equations can be simplified into the following equations
\begin{equation}
(d-1)\frac{f}{2}\left( \frac{f'}{rf}+\frac{d-2}{r^2}\right)-\frac{(d-1)(d-2)}{2 r^2 R^2_0} + \Lambda = - u f (\xi')^2 - v ~ ,
\label{eq:Einsteineq}
\end{equation}
\begin{equation}
(d-1)\frac{f}{2}\left( \frac{2\delta'f+f'}{rf}+\frac{d-2}{r^2}\right) -\frac{(d-1)(d-2)}{2 r^2 R^2_0} + \Lambda = u f (\xi')^2 - v ~ ,
\label{eq:Einsteineq1}
\end{equation}
\begin{eqnarray}
&& \frac{f}{2}\left( 2\delta''+2(\delta')^2 + \frac{f''+3\delta'f'}{f}\right) + \frac{d-2}{r}\left(f'+\delta'f\right)-\frac{(d-2)(d-3)}{2 r^2 R^2_0}\left(1-fR_0^2\right) + \Lambda \nonumber\\
&& \quad = ~ \frac{1}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} ~ n \left(2n-d-1\right) \frac{\sin^{2(n-1)}\xi}{(rR_0)^{2(n-1)}} f(\xi')^2 \nonumber\\
&& \quad \quad - \frac{1}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} \left( 2n^2-3nd+n-d+d^2\right) \frac{\sin^{2n}\xi}{(rR_0)^{2n}} - \gamma_0 V ~ .
\label{eq:Einsteineq2}
\end{eqnarray}
Additionally, we write down the Ricci scalar of the metric \eqref{eq:metricans}
\begin{eqnarray} \label{eq:Ricciscalar}
R &=&\frac{1}{r^2 R^2_0} (d-2)(d-1) -\frac{2 }{r} (d-1)(f'+\delta' f) \nonumber\\
&&-\left[ 2f(\delta''+\delta'^2) + 3\delta' f'+f'' \right] - (d-2)(d-1) \frac{f}{r^2} ~ ,
\end{eqnarray}
which will be useful in the discussion of the next section.
Now, we take a particular form of the metric function $f(r)$ as
\begin{equation}\label{eq:metform}
f= \frac{1}{R_0^2}-\frac{2m}{(d-2)r^{d-2}}-\frac{2\Lambda}{d(d-1)}r^2 ~ ,
\end{equation}
where $m \equiv m(r)$. Using \eqref{eq:metform}, \eqref{eq:Einsteineq} and \eqref{eq:Einsteineq1} become simply
\begin{eqnarray}
m' &=& \frac{(d-2)}{(d-1)}r^{d-1} \left(u f (\xi')^2+v \right) ~ , \nonumber\\
\delta' &=& \frac{2}{(d-1)}u(\xi')^2r ~ .\label{eq:constraints}
\end{eqnarray}
We will discuss the properties of $m(r)$, $\delta(r)$, and $\xi(r)$ near the boundaries, namely, near the horizon ($r \to r_h$) and around the asymptotic region ($r \to +\infty$) in the next section.
\section{Near Boundary Properties}
\label{sec:boundaryprop}
In this section we discuss the properties of solutions of the Einstein field equations \eqref{eq:Einsteineq2}, \eqref{eq:constraints}, and the equation of motion \eqref{eq:eomSkyrme} near the boundaries. We first consider the properties of $m(r)$, $\delta(r)$, and $\xi(r)$ near the horizon and then continue to the asymptotic region as $r \to +\infty$.
Let us first consider the behavior of $m(r)$, $\delta(r)$, and $\xi(r)$ in the horizon limit. Suppose there exists a horizon at the radius $r = r_h$ such that $f(r_h) = 0$ implying
\begin{equation} \label{eq:massBH}
M = m(r_h) = \frac{(d-2)r_h^{d-2}}{2R_0^2}-\frac{(d-2)}{d(d-1)}\Lambda r_h^d ~ ,
\end{equation}
where $M > 0$ is the ADM mass of the black hole.
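As a minimal symbolic check (ours, not part of the original derivation), the horizon condition $f(r_h)=0$ applied to \eqref{eq:metform} can be solved for $M$, reproducing \eqref{eq:massBH}:
\begin{verbatim}
import sympy as sp

d, r_h, R0, Lam, M = sp.symbols('d r_h R_0 Lambda M')
f_h = 1/R0**2 - 2*M/((d - 2)*r_h**(d - 2)) - 2*Lam*r_h**2/(d*(d - 1))
Msol = sp.solve(sp.Eq(f_h, 0), M)[0]          # Eq. (eq:metform) at r = r_h
target = (d - 2)*r_h**(d - 2)/(2*R0**2) - (d - 2)*Lam*r_h**d/(d*(d - 1))
assert sp.simplify(Msol - target) == 0        # Eq. (eq:massBH)
\end{verbatim}
Near this region, we can expand the functions as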
\begin{eqnarray}
\xi&=& \xi_h + \xi_1(r-r_h)+O((r-r_h)^2) ~ , \nonumber\\
\delta&=& \delta_h + \delta_1(r-r_h)+O((r-r_h)^2) ~ , \nonumber\\
m&=& M + m_1(r-r_h)+O((r-r_h)^2) ~ , \label{eq:horizonexpand}
\end{eqnarray}
where $\xi_h$ and $\delta_h$ are positive constants, while $\xi_1$, $\delta_1$, and $m_1$ are real constants. Inserting \eqref{eq:horizonexpand} into \eqref{eq:constraints} and \eqref{eq:eomSkyrme} then yields
\begin{eqnarray}
\delta_1 &=& \frac{2}{(d-1)}\left(\sum_{n=1}^d\gamma_n C^{d-1}_{n-1}\frac{\sin^{2(n-1)}\xi_h}{(r_hR_0)^{2(n-1)}} \right)\xi_1^2r_h ~ ,\nonumber\\
m_1 &=& \frac{(d-2)}{(d-1)}r_h^{d-1}\left( \gamma_0V(\xi_h) +\sum_{n=1}^d \gamma_n C^{d}_{n} \left(1-\frac{n}{d}\right) \frac{\sin^{2n}\xi_h}{(r_hR_0)^{2n}} \right) ~ , \nonumber\\
\xi_1 &=& \frac{\gamma_0 ~ \partial_\xi V(\xi_h)+\frac{\sin(2\xi_h)}{(r_hR_0)^2}\sum_{n=1}^d \gamma_n C^{d-1}_{n-1} (d-n)\frac{\sin^{2(n-1)}\xi_h}{(r_hR_0)^{2(n-1)}} }{2\left(\frac{d-2}{R_0^2r_h}-\frac{2\Lambda r_h}{(d-1)}-\frac{2m_1}{(d-2)r_h^{d-2}}\right)\left(\sum_{m=1}^d\gamma_m C^{d-1}_{m-1} \frac{\sin^{2(m-1)}\xi_h}{(r_hR_0)^{2(m-1)}} \right)} ~ , \label{eq:horizonexpandres}
\end{eqnarray}
which show that we have a $(d+4)$-dimensional parameter space spanned by $\xi_h$, $\delta_h$, $M$, and $\gamma_p$ with $p = 0,...,d$. From the last equation in \eqref{eq:horizonexpandres}, it is straightforward to see that at least one of the $\gamma_p$ must be non-zero.
The second expression in \eqref{eq:horizonexpandres} together with \eqref{eq:Einsteineq2} can be used to find the Ricci scalar on the horizon, which takes the form
\begin{equation}\label{eq:ricciscalarhorizon}
R|_{r_h} = \frac{(d-1)(d-2)}{r^2_hR_0^2} + \Lambda_e + 2 \sum_{n=1}^d \gamma_n C^{d}_{n} \left(\frac{2n^2 -3nd -n+d +d^2}{d(d-1)}\right) \frac{\sin^{2n}\xi_h}{(R_0r_h)^{2n}} ~ ,
\end{equation}
with
\begin{equation}
\Lambda_e = \frac{2(d+1)}{(d-1)} \left(\Lambda + \gamma_0 V(\xi_h) \right) - \frac{(d-1)(d-2)}{r^2_hR_0^2} ~ .
\end{equation}
Around this region, the spacetime topology changes to ${\mathcal T}^2 \times S^{d-1}$, where ${\mathcal T}^2$ is a 2-surface which could be either ${\mathcal T} ^2 \simeq \mathrm{I\hspace{-0.7mm}R}^2$ or ${\mathcal T} ^2 \simeq AdS_2$ \cite{Kunduri:2007vf}. Thus, the sum of the second and third terms on the right hand side of \eqref{eq:ricciscalarhorizon} should satisfy
\begin{equation}\label{eq:horizongeomcon}
\Lambda_e + 2 \sum_{n=1}^d \gamma_n C^{d}_{n} \left(\frac{2n^2 -3nd -n+d +d^2}{d(d-1)}\right) \frac{\sin^{2n}\xi_h}{(R_0r_h)^{2n}} \le 0 ~ .
\end{equation}
Next, we consider the behavior of $m(r)$, $\delta(r)$, and $\xi(r)$ in the asymptotic region. In order to have a finite and regular solution, the metric functions $\delta(r)$ and $m(r)$ should be decreasing functions whose forms are
\begin{eqnarray}
\delta(r) &=& \frac{\tilde{\delta}_1}{r^{n_1}} + O\left(r^{-(n_1 +1)} \right) \ , \nonumber\\
m(r) &=& M + \frac{\tilde{m}_1}{r^{n_2}} +O\left(r^{-(n_2 +1)} \right) \ , \label{eq:Expandasymp}
\end{eqnarray}
where $M > 0$ is the ADM mass given in \eqref{eq:massBH}, whereas $\tilde{\delta}_1 , \tilde{m}_1 \in \mathrm{I\hspace{-0.7mm}R}$, $n_1 \ge 1$, and $n_2 \ge 1$. Moreover, the skyrmionic scalar $\xi(r)$ has the form
\begin{equation}
\xi(r) = \xi_\infty + \frac{\tilde{\xi}_1}{r^{n_3}}+ O\left(r^{-(n_3 + 1)} \right) ~ , \label{eq:fasympcon}
\end{equation}
where $n_3 \ge 1$ and $\xi_\infty, \tilde{\xi}_1 \in \mathrm{I\hspace{-0.7mm}R}$, showing that $\xi(r)$ is frozen as $r \to +\infty$. Inserting \eqref{eq:Expandasymp} and \eqref{eq:fasympcon} into the constraints \eqref{eq:constraints}, we obtain
\begin{equation}
n_1 \ge d+1 ~ , \quad n_2 \ge 1 ~ , \quad n_3 \ge \frac{1}{2} (d+1) ~ , \label{eq:nsympconres}
\end{equation}
with either $\gamma_0 = 0$ or $\gamma_0 > 0$ and
\begin{equation}
V( \xi_\infty ) = 0 ~ . \label{eq:Vasympconres}
\end{equation}
We also have
\begin{eqnarray}
\tilde{\delta}_1 &=& - \frac{n_3^2}{n_1} \gamma_1 \tilde{\xi}_1^2 ~ ,\nonumber\\
\tilde{m}_1 &=& \frac{n_3^2 ~ \Lambda}{d (d-1)n_2} \gamma_1 \tilde{\xi}_1^2 ~ .
\label{eq:constraintsasymp}
\end{eqnarray}
The equation of motion \eqref{eq:eomSkyrme} gives the condition
\begin{equation}
\gamma_0 ~ \partial_\xi V (\xi_\infty) = \frac{4 \gamma_2 \Lambda}{d(d-1) R_0^2} \sin \xi_\infty \cos \xi_\infty ~ , \label{eq:eomSkyrmeasymcon}
\end{equation}
where we have used the ansatze \eqref{eq:Expandasymp} and \eqref{eq:fasympcon}.
Now, we write down the trace of the Einstein field equation
\begin{eqnarray}
R &=& \frac{2 (d+1)}{(d-1)} \left( \Lambda + \gamma_0 V\right) + \frac{2}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} \left( 2n^2 -3nd -n+d +d^2\right)\frac{\sin^{2n}\xi}{(rR_0)^{2n}} \nonumber\\
&& - \frac{2}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} n \left(2n-d -1\right) \frac{\sin^{2(n-1)}\xi}{(rR_0)^{2(n-1)}} f(\xi')^2 ~ . \nonumber\\
\label{eq:traceEinsteineq}
\end{eqnarray}
In this region the geometry converges to an Einstein geometry, so that \eqref{eq:traceEinsteineq} should simplify to
\begin{equation}
R = \frac{2 (d+1) \Lambda}{(d-1)} + O\left(r^{-n} \right) ~ , \label{eq:Ricciscalarasym}
\end{equation}
with cosmological constant $2 \Lambda / (d-1)$ (or Ricci-flat with $\Lambda = 0$). Substituting \eqref{eq:Expandasymp} and \eqref{eq:fasympcon} into \eqref{eq:traceEinsteineq}, we find either the case without the scalar potential, that is, $\gamma_0 = 0$, or \eqref{eq:Vasympconres} with $\gamma_0 > 0$. As we will see in the next section, the finiteness of the energy functional \eqref{eq:EeffSkyrme} constrains the value of $\xi_\infty $ and the bound on $n_3$.
Finally, the topological charge, called the baryon number $B$, for such a system is
\begin{equation}
B = \frac{\Gamma\left(\frac{d+1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{d}{2}\right)} \int_{\xi_\infty}^{\xi_h} \sin^{d-1}\xi ~d\xi = {\mathcal B}(\xi_h) - {\mathcal B}(\xi_\infty) ~ ,
\label{eq:topocharge}
\end{equation}
where
\begin{equation}
\begin{aligned}[b]
{\mathcal B}(\xi) = \frac{\Gamma\left(\frac{d+1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{d}{2}\right)2^{d-2}}
&
\begin{cases}
\sum\limits_{r=0}^{\frac{d-3}{2}} C^{d-1}_r (-1)^{\frac{1}{2} (d-1-2r)} \frac{\sin \left( (d-1-2r) \xi\right)}{d-1-2r} + \frac{1}{2} C^{d-1}_{\frac{d-1}{2}} \xi & ; ~ d=3,5,7,... \\
\sum\limits_{r=0}^{\frac{d-2}{2}} C^{d-1}_r (-1)^{\frac{1}{2} (d-2r)} \frac{\cos \left( (d-1-2r) \xi\right)}{d-1-2r} & ; ~ d=4,6,8,...
\end{cases}
\end{aligned}
\end{equation}
We have used the definition of topological charge given in \cite{Brihaye:2017wqa}. A vacuum solution can be obtained if we take $\xi_h = \xi_\infty$.
Let us consider an example as follows. Suppose $\xi_h = \pi$ and $\xi_\infty = 0$; then, with $\Lambda_e \le 0$, \eqref{eq:horizonexpandres} becomes
\begin{eqnarray}
\delta_1 &=&\frac{2\gamma_1 }{(d-1)} \xi_1^2 r_h ~ ,\nonumber\\
m_1 &=& \frac{(d-2)}{(d-1)}r_h^{d-1} \gamma_0 V(\pi) ~ , \nonumber\\
\xi_1 &=& \frac{\gamma_0 ~ \partial_\xi V(\pi) }{2 \gamma_1 \left(\frac{d-2}{R_0^2r_h}-\frac{2 r_h}{(d-1)} \left(\Lambda -\gamma_0 V(\pi) \right) \right) } ~ , \label{eq:horizonexpandsol}
\end{eqnarray}
with $\gamma_1 > 0$, and the topological charge \eqref{eq:topocharge} simplifies to
\begin{equation} \label{eq:topochargeexam}
\begin{aligned}[b]
B = \frac{\Gamma\left(\frac{d+1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{d}{2}\right)2^{d-2}}
&
\begin{cases}
\frac{\pi}{2} C^{d-1}_{\frac{d-1}{2}} & ; ~ d=3,5,7,... \\
\sum\limits_{r=0}^{\frac{d-2}{2}} C^{d-1}_r (-1)^{\frac{1}{2} (d-2r)} \frac{(-1)^{ (d-1-2r)} - 1}{d-1-2r} & ; ~ d=4,6,8,...
\end{cases}
~ .
\end{aligned}
\end{equation}
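As a quick numerical cross-check of \eqref{eq:topocharge} (ours, assuming SciPy is available): since $\int_0^\pi \sin^{d-1}\xi ~ d\xi = \sqrt{\pi} ~ \Gamma(d/2)/\Gamma((d+1)/2)$, the choice $\xi_h = \pi$, $\xi_\infty = 0$ gives the unit baryon number $B=1$ for every $d$, in agreement with \eqref{eq:topochargeexam}:
\begin{verbatim}
from math import gamma, pi, sin, sqrt
from scipy.integrate import quad

def B(d, xi_h, xi_inf):
    # topological charge, Eq. (eq:topocharge), by numerical quadrature
    pref = gamma((d + 1) / 2) / (sqrt(pi) * gamma(d / 2))
    val, _err = quad(lambda x: sin(x)**(d - 1), xi_inf, xi_h)
    return pref * val

for d in range(3, 9):
    assert abs(B(d, pi, 0) - 1.0) < 1e-8     # unit baryon number
\end{verbatim}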
It is easy to see that we have black hole solutions if $\gamma_0 > 0$ with either $ V(\pi) \ne 0$ or $\partial_\xi V(\pi) \ne 0$. In other words, the presence of the scalar potential term $\gamma_0 V$ plays an important role in determining the existence of black hole solutions in this theory. For example, in a model with generalized pion-mass scalar potential
\begin{equation}
V(\xi) = (1 - \cos\xi)^n ~ , \quad n \ge 1 ~ ,
\label{eq:pionpotensial}
\end{equation}
we may have a black hole solution since $V(\pi) = 2^n$ and $\partial_\xi V(\pi) = 0$. Moreover, in the asymptotic region this model admits \eqref{eq:Vasympconres} and $\partial_\xi V (0) = 0$ which shows that these correspond to finite energy solutions as we shall see in the next section.
In the case $\gamma_0 = 0$ (or $ V(\phi) \equiv 0$), we might have a smooth regular solution which is not a black hole. This latter situation has been observed in the four dimensional Einstein-Skyrme theory \cite{Shiiki:2005aq}.
\section{Local-Global Existence of Finite Energy Solutions}
\label{sec:ExisSolFinitE}
In this section we establish local-global existence and uniqueness of black hole solutions of the theory. By employing Picard's iteration and the contraction mapping properties, we prove local existence and uniqueness. Then, using the maximal solution technique, we establish global existence. We use the expansions \eqref{eq:Expandasymp} and \eqref{eq:fasympcon} to show the finiteness of solutions. Finally, the finite energy solutions are discussed.
\subsection{Local Existence and Smoothness}
\label{subsec:LocExisSmooth}
Let us define a set of dynamical variables ${\bf{w}} \equiv ( \xi , p_\xi)$ where $p_\xi \equiv \xi'$. We write down the constraints \eqref{eq:constraints} in the integral form
\begin{eqnarray}
m &=& m_h + \frac{(d-2)}{(d-1)} r^{-\frac{2}{d-1}} \int r^{d-1 + \frac{2}{d-1}} \left( \left( \frac{1}{R_0^2} - \frac{2\Lambda}{d(d-1)}r^2\right) u p_\xi^2+v \right) dr ~ , \nonumber\\
\delta &=& \delta_h + \frac{2}{(d-1)} \int r u p_\xi^2 ~ dr ~ , \label{eq:constraintsinteg}
\end{eqnarray}
where $u \equiv u(\xi, r)$ and $v \equiv v(\xi, r)$ are given in \eqref{eq:uv}. Let $I \equiv [r, r+ \varepsilon]$, where $r \ne r_h$ and $\varepsilon > 0$, and let $U\subset \mathrm{I\hspace{-0.7mm}R}^2$ be an open set.
\begin{lemma}
\label{lemmalocalLipshitzconstraint}
Suppose the scalar potential $V$ in \eqref{eq:EinsteinSkyrmeLdgenform} is at least a $C^2$-real function. Then, the metric functions $\delta({\bf{w}},r )$ and $m({\bf{w}},r )$ in \eqref{eq:constraintsinteg} are locally Lipschitz with respect to $\bf{w}$.
\end{lemma}
\begin{proof}
First, we have
%
\begin{eqnarray}
\left| m \right|_U &\le& \frac{(d-2)}{(d-1)} r^{-\frac{2}{d-1}} \int_r^{r+ \varepsilon} r^{d-1 + \frac{2}{d-1}} \left| \left( \frac{1}{R_0^2} - \frac{2\Lambda}{d(d-1)}r^2\right) u p_\xi^2+v \right| dr \nonumber\\
&\le& \frac{(d-2)}{(d-1)} C r^{d-1 } \left| \left( \frac{1}{R_0^2} - \frac{2\Lambda}{d(d-1)}r^2\right) u p_\xi^2+v \right| ~ , \label{boundedm}
\end{eqnarray}
%
for some $C > \varepsilon$, which is bounded since the function $\xi(r)$ is at least a $C^2$-real function. Using a similar argument, we can show that $\delta$ is also bounded. Then, for ${\bf{w}}, \tilde{\bf{w}} \in U$, we have
\begin{eqnarray}
\left| \delta ({\bf w}, r) - \delta( \tilde{{\bf w}}, r) \right|_U &\le& \frac{2r}{(d-1)} C \left| u(\xi,r) p_\xi^2 - u( \tilde{\xi }, r) \tilde{p}_\xi^2 \right| ~ , \nonumber\\
\left| m ({\bf w}, r) - m( \tilde{{\bf w}}, r) \right|_U &\le& \frac{(d-2)}{(d-1)} C r^{d-1} \left| \left( \frac{1}{R_0^2} - \frac{2\Lambda}{d(d-1)}r^2\right) \left( u(\xi,r) p_\xi^2 - u( \tilde{\xi }, r) \tilde{p}_\xi^2\right)\right| \nonumber\\
&& + \frac{(d-2)}{(d-1)} C r^{d-1} \left| v(\xi,r) - v( \tilde{\xi }, r) \right| ~ .
\end{eqnarray}
Using the fact that for any smooth function ${\mathcal F}(f)$, we have locally
\begin{equation}
{\mathcal F}( f) - {\mathcal F}(\tilde{f} ) \leq \sup_{s\in[0,1]}\left[ {\mathcal F}'( f + s(\tilde{f} - f)) \right] (f - \tilde{f}) ~ , \label{anyFlocal}
\end{equation}
on $U$, we get that $\delta$ and $m$ indeed satisfy the local Lipschitz condition
\begin{eqnarray}
\left| \delta ({\bf w}, r) - \delta ( \tilde{{\bf w}}, r) \right|_U &\le& C_{ \delta}(|\bf{w}|, |\tilde{\bf{w}}|) | \bf{w} - \tilde{\bf{w}}| ~ , \nonumber\\
\left| m ({\bf w}, r) - m( \tilde{{\bf w}}, r) \right|_U &\le& C_m (|\bf{w}|, |\tilde{\bf{w}}|) | \bf{w} - \tilde{\bf{w}}| ~ , \label{localLipshitzconstraints}
\end{eqnarray}
on an open set $U\subset \mathrm{I\hspace{-0.7mm}R}^2$ where $C_{ \delta}(|\bf{w}|, |\tilde{\bf{w}}|)$ and $C_m(|\bf{w}|, |\tilde{\bf{w}}|)$ are bounded positive-valued functions.
\end{proof}
Next, we rewrite \eqref{eq:eomSkyrme} into
\begin{equation}
\frac{d \bf{w}}{dr} = \mathcal{J}({\bf w}, r) ~ , \label{ReducedEinsteineqSkyrmionEq1}
\end{equation}
where
\begin{equation}
\mathcal{J}({\bf{w}}, r) \equiv \left( \begin{array}{c}
p_\xi \\
J_\xi
\end{array} \right) ~ , \label{fungsiJ}
\end{equation}
with
\begin{eqnarray}
J_\xi \equiv - \left(\frac{d-1}{r}+\delta'+\frac{f'}{f}+\frac{1}{u}\frac{\partial u}{\partial r}\right) p_\xi - \left(\frac{1}{2u}\frac{\partial u}{\partial \xi}\right) p_\xi^2 + \frac{1}{2uf}\frac{\partial v}{\partial\xi} ~ ,
\end{eqnarray}
with the constraint \eqref{eq:constraintsinteg}. We can now state the result of the local existence and the uniqueness of \eqref{ReducedEinsteineqSkyrmionEq1} as follows.
\begin{lemma}
\label{localLipshitz}
The operator $\mathcal{J}$ defined in \eqref{fungsiJ} is locally Lipschitz with respect to $\bf{w}$.
\end{lemma}
\begin{proof}
From \eqref{fungsiJ}, we obtain the following estimate
\begin{eqnarray}
\left| J_\xi \right|_U \le \left| \frac{d-1}{r}+\delta'+\frac{f'}{f}+\frac{1}{u}\frac{\partial u}{\partial r}\right| |p_\xi | + \left| \frac{1}{2u}\frac{\partial u}{\partial \xi}\right| |p_\xi |^2 + \left| \frac{1}{2uf}\frac{\partial v}{\partial\xi} \right| ~ .
\end{eqnarray}
Since $\xi(r)$ belongs at least to the class of $C^2$-real functions, its values are bounded on any closed interval $I$. Thus, $\left| \mathcal{J}( {\bf w}, r) \right|_U$ is bounded on $U$.
Moreover, for ${\bf{w}}, \tilde{\bf{w}} \in U$, we also have
\begin{eqnarray}
\left| \mathcal{J}( {\bf w}, r) - \mathcal{J}( \tilde{ {\bf w}}, r) \right|_U & \le & \frac{d-1}{r} |p_\xi - \tilde{p}_\xi | + \left| \delta' ( {\bf w}, r) p_\xi - \delta' ( \tilde{ {\bf w}}, r) \tilde{p}_\xi \right| \nonumber\\
&& + \left| \frac{f'}{f} ( {\bf w}, r) p_\xi - \frac{f'}{f} ( \tilde{ {\bf w}}, r) \tilde{p}_\xi \right| + \left| \frac{1}{u}\frac{\partial u}{\partial r}( {\bf w}, r) p_\xi - \frac{1}{u}\frac{\partial u}{\partial r}( \tilde{ {\bf w}}, r) \tilde{p}_\xi \right| \nonumber\\
&& + \left| \frac{1}{2u}\frac{\partial u}{\partial \xi} ( {\bf w}, r) p_\xi^2 - \frac{1}{2u}\frac{\partial u}{\partial \xi}( \tilde{ {\bf w}}, r) \tilde{p}_\xi^2 \right| + \left| \frac{1}{2uf}\frac{\partial v}{\partial\xi}( {\bf w}, r) - \frac{1}{2uf}\frac{\partial v}{\partial\xi}( \tilde{ {\bf w}}, r) \right| ~ . \nonumber\\
\end{eqnarray}
Employing some computations using the result in Lemma \ref{lemmalocalLipshitzconstraint} and the local property \eqref{anyFlocal} on $U$, it can be shown that $ \mathcal{J}$ is locally Lipschitz with respect to $\bf{w}$, satisfying
\begin{equation}
\left| \mathcal{J}({\bf w}, r) - \mathcal{J}( \tilde{{\bf w}}, r) \right|_U \le C_{ \mathcal{J}}(|\bf{w}|, |\tilde{\bf{w}}|) | \bf{w} - \tilde{\bf{w}}| ~ . \label{localLipshitzcon}
\end{equation}
\end{proof}
It is useful for the next analysis to write down \eqref{ReducedEinsteineqSkyrmionEq1} in the integral form
\begin{equation}
{\bf{w} }(r) = {\bf{w} }(r_h) + \int_{r_h}^{r}\:\mathcal{J}\left( {\bf{w} }(s), s \right)\:ds ~ . \label{IntegralEquation}
\end{equation}
By introducing a Banach space
\begin{equation}
{\mathfrak X} \equiv \{ {\bf{w} } \in C(I,\mathrm{I\hspace{-0.7mm}R}^2) : \: {\bf{w} }(r_h) = {\bf{w} }_{0}, \: \sup_{r\in I}| {\bf{w} }(r)|\leq L_0 \} ~ ,
\end{equation}
endowed with the norm
\begin{equation}
|{\bf{w} }|_{\mathfrak X} = \sup_{r\in I}\:|\mathbf{w}(r)| ~ ,
\end{equation}
where $L_0$ is a positive constant, we define an operator $\mathcal{K}$
\begin{equation}
\mathcal{K}(\mathbf{w}(r)) = \mathbf{w}_{0} + \int_{r_h}^{r} \mathcal{J}\left(\mathbf{w}(s), s\right) ds\:. \label{OpKdefinition}
\end{equation}
Then, Lemma \ref{localLipshitz} implies the following uniqueness result, which proves that the differential equation (\ref{ReducedEinsteineqSkyrmionEq1}) has a unique local solution.
\begin{corollary}{\textnormal{\cite{Akbar_2015}}}
\label{unigueness}
The operator $\mathcal{K}$ defined in (\ref{OpKdefinition}) is a mapping from ${\mathfrak X} $ to itself and $\mathcal{K}$ is a contraction mapping on $I = [r,r + \varepsilon ]$ with $r \ne r_h$, $\varepsilon > 0$, and
\begin{equation}
\varepsilon \leq \min\left(\frac{1}{C_{L_0}},\frac{1}{C_{L_0} L_0 + \|\mathcal{J}(r)\|}\right) ~ .
\end{equation}
Then, the operator $\mathcal{K}$ is a contraction mapping on ${\mathfrak X}$.
\end{corollary}
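To illustrate the contraction-mapping construction behind Corollary \ref{unigueness}, the following minimal Python sketch iterates the operator $\mathcal{K}$ on a uniform grid until successive iterates coincide. The right-hand side \texttt{J}, the initial data, and the interval below are illustrative placeholders, not the actual Einstein-Skyrme system.
\begin{verbatim}
import numpy as np

# Placeholder right-hand side J(w, r) for w = (xi, p_xi);
# the true J of the paper involves the metric functions as well.
def J(w, r):
    xi, p_xi = w
    return np.array([p_xi, -2.0 * p_xi / r - np.sin(2.0 * xi) / r**2])

def picard(w0, r0, eps, n_grid=200, n_iter=50):
    # Iterate K(w)(r) = w0 + int_{r0}^{r} J(w(s), s) ds (Picard iteration).
    r = np.linspace(r0, r0 + eps, n_grid)
    w = np.tile(w0, (n_grid, 1))              # initial guess: constant w0
    for _ in range(n_iter):
        f = np.array([J(wk, rk) for wk, rk in zip(w, r)])
        # cumulative trapezoidal integral of J along the grid
        integral = np.concatenate(
            [np.zeros((1, 2)),
             np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r)[:, None], axis=0)])
        w_new = w0 + integral
        if np.max(np.abs(w_new - w)) < 1e-12:  # contraction: iterates converge
            break
        w = w_new
    return r, w

r, w = picard(np.array([np.pi, -0.1]), r0=1.0, eps=0.05)
\end{verbatim}
For a sufficiently small interval length \texttt{eps}, as in the corollary, the iteration converges geometrically to the unique fixed point of $\mathcal{K}$.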
Finally, a maximal solution can be constructed as follows. Let ${\bf w}(r)$ be a solution defined on the interval $( r_h, r_m)$ with $r_m > r_h$. Then, we repeat the above local existence arguments with the initial condition ${\bf w}(r-r_0)$ for some $r_h < r_0 < r$ and use the uniqueness condition to glue the solutions, obtaining the maximal solution. It is then straightforward to obtain a global solution by taking $r_m \to +\infty$.
\subsection{Global Existence}
\label{subsec:GlobExis}
In the second part of this section, we show that a regular global solution of \eqref{ReducedEinsteineqSkyrmionEq1} on $I_{\infty}\equiv [r_h, +\infty)$ does exist, satisfying the expansions \eqref{eq:Expandasymp} and \eqref{eq:fasympcon}. As expected, these establish the finiteness of \eqref{IntegralEquation} on $I_{\infty}$.
First of all, we introduce two intervals, namely, $I_L \equiv [r_h + \varepsilon, L]$ with $\varepsilon > 0$ and $I_A \equiv (L, +\infty)$ for finite and large $L \gg r_h$. On $I_A$, all the functions $\delta(r)$, $m(r)$, and $f(r)$ can be expanded as in \eqref{eq:Expandasymp}, and \eqref{eq:fasympcon}. Equation \eqref{IntegralEquation} can be written down as
\begin{equation}
{\bf{w} }(L) = {\bf{w} }(r_h) + \int_{r_h + \varepsilon}^{L}\:\mathcal{J}\left( {\bf{w} }(s), s \right) ~ ds ~ + \int_{L}^{+\infty}\:\mathcal{J}\left( {\bf{w} }(s), s \right) ~ ds . \label{IntegralEquation1}
\end{equation}
In order to suppress the third term on the right-hand side of \eqref{IntegralEquation1}, we should require: 1. $\gamma_1 > 0$, which means that the kinetic term must be non-zero; 2. $\partial_\xi V(\xi_\infty)$ must be well-defined. The latter condition follows from the fact that the scalar potential $V$ should be at least a $C^2$-real function, as stated in Lemma \ref{localLipshitz}, and it has the value given in \eqref{eq:eomSkyrmeasymcon}. Then, we have a finite and globally well-defined solution of \eqref{IntegralEquation} since the function $\xi(r)$ is at least a $C^2$-function.
Next, we want to prove that the solutions of \eqref{IntegralEquation} have finite energy by estimating the energy functional \eqref{eq:EeffSkyrme}. Using the expansions \eqref{eq:Expandasymp} and \eqref{eq:fasympcon}, and taking first $\Lambda < 0$, we obtain the inequality
\begin{eqnarray}
E & \le & A(S^{d-1}) R_0^{d-1} \sup_{r \in I_L} \left| \int_{r_h}^L e^\delta r^{d-1} \left( u f(\xi')^2+v \right) dr\right| \nonumber \\
&& + ~ \frac{A(S^{d-1})}{d(d-1)} 2 \gamma_1 |\Lambda | n_3^2 \tilde{\xi}^2_1 R_0^{d-1} \left| \int_{L}^{+\infty} \frac{ dr }{r^{2n_3 - d+1}} \right| \nonumber \\
&& + A(S^{d-1}) R_0^{2d-2} \left| \int_{L}^{+\infty} r^{d-1} \left( - \gamma_0 V \left(\xi_{+\infty} \right) + \sum_{n=1}^d \gamma_n C^{d}_{n} \left(1-\frac{n}{d}\right) \frac{ \sin^{2n}\xi_{+\infty}}{(rR_0)^{2n}} \right) dr\right| ~ . \nonumber \\
\label{Estaticineq}
\end{eqnarray}
The first term on the right-hand side of (\ref{Estaticineq}) is finite since the $C^2$-functions $f(r)$ and $\xi(r)$ are bounded on the closed interval $I_L$. In order to control the second and the third terms in (\ref{Estaticineq}) on the open interval $I_A$, one has to set $\xi_{+\infty}$, the scalar potential $V \left(\xi_{+\infty} \right)$, and the order $n_3$ in \eqref{eq:fasympcon} to be $\xi_{\infty} = 0$, $V \left(\xi_{\infty} \right) = 0$, and $n_3 > \frac{d}{2} $ for finite $\tilde{\xi}_1$, respectively. In the case of $\Lambda = 0$, we just replace $\Lambda$ by $1/R_0^2$ in \eqref{Estaticineq} and regain the same results. Comparing this energy estimate with the result \eqref{eq:nsympconres}, we conclude
\begin{eqnarray}
\xi_{\infty} = 0 ~ , \quad V \left(\xi_{\infty} \right) = 0 ~ , \quad n_3 \ge \frac{1}{2} (d+1) ~ . \label{eq:fasympcon0}
\end{eqnarray}
Moreover, in order to have a consistent result, since we have Einstein's field equations, one has to check the estimate
\begin{eqnarray}
\label{G00}
&& \int_{r_h}^{+\infty} e^\delta r^{d-1} \left( R^0_{~ 0} - \frac{1}{2} \delta^0_{~ 0} R + \delta^0_{~ 0} \Lambda \right) dr \nonumber\\
&& =\frac{1}{2} \int_{r_h}^{+\infty} e^\delta r^{d-1} \left( (d-1)\left( \frac{f'}{r}+\frac{(d-2)}{r^2} f \right)-\frac{(d-1)(d-2)}{ r^2 R^2_0} + 2 \Lambda \right) dr ~ . \nonumber\\
\end{eqnarray}
Applying the expansions \eqref{eq:Expandasymp} and \eqref{eq:fasympcon}, and repeating similar steps as in \eqref{Estaticineq}, we obtain that the integral \eqref{G00} is finite. In other words, the finite energy black holes do exist.
\indent Thus, we could state
\begin{theorem}
\label{MainresultExistence}
Let ${\bf{w} }(r)$ be a solution of \eqref{ReducedEinsteineqSkyrmionEq1} with the initial value $\mathbf{w}_h$ and $\Lambda \le 0$. Then, we have a family of global solutions with finite energy satisfying \eqref{eq:horizonexpand}, \eqref{eq:Expandasymp}, \eqref{eq:fasympcon}, \eqref{eq:nsympconres}, and \eqref{eq:fasympcon0} that connect two boundaries, namely the horizon and the asymptotic region.
\end{theorem}
\section{Linear Stability Analysis}
\label{sec:linearstableanal}
In this section, we discuss the linear stability analysis of the models using a method similar to that of \cite{Shiiki:2005aq}. First, we promote the metric functions of the preceding sections to be time dependent, namely, $\delta_t \equiv \delta(r, t)$, $m_t \equiv m(r, t)$, and $\xi_t \equiv \xi(r, t)$. Then, we expand them around $\delta(r)$, $m(r)$, and $\xi(r)$, the static solutions of \eqref{eq:eomSkyrme}, \eqref{eq:Einsteineq2}, and \eqref{eq:constraints}.
Let us write down the energy functional of the models in which we have $\delta(r, t)$, $m(r, t)$, and $\xi(r, t)$
\begin{equation}\label{eq:EeffSkyrmetdep}
E(t) =\int d^{d}x \sqrt{-g^{(d+1)}} \left( u_t \left(\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t} + f_t (\xi'_t)^2 \right) +v_t \right) ~ ,
\end{equation}
where $u_t$ and $v_t$ have the same form as defined in \eqref{eq:uv} with $\xi$ replaced by $\xi_t$. The variation of \eqref{eq:EeffSkyrmetdep} with respect to $\xi_t$ gives the Skyrmion equation of motion
\begin{eqnarray}
\left(\xi''_t -\frac{\ddot{\xi}_t}{e^{2\delta_t}f_t^2}\right)+\left(\frac{\delta'_t f_t + f'_t}{f_t}+\frac{d-1}{r}+\frac{1}{u_t}\frac{\partial u_t}{\partial r}\right)\xi'_t +\left(\frac{\dot{\delta}_t f_t +\dot{f}_t}{e^{2\delta_t}f_t^3}\right)\dot{\xi}_t \nonumber\\
+\left(\frac{1}{2u_t}\frac{\partial u_t}{\partial \xi_t}\right)\left( (\xi'_t)^2-\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t^2}\right) = \frac{1}{2u_t f_t}\frac{\partial v_t}{\partial\xi_t} ~. \label{eq:eomSkyrmet}
\end{eqnarray}
In this case, the components of the Einstein field equations in \eqref{eq:Einsteineq}-\eqref{eq:Einsteineq2} are modified, that is,
\begin{equation}\label{eq:Einsteineqt}
(d-1)\frac{f_t}{2}\left( \frac{f'_t}{rf_t}+\frac{d-2}{r^2}\right) -\frac{(d-1)(d-2)}{2(rR_0)^2}+\Lambda = - u_t \left(\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t}+ f_t (\xi'_t)^2\right) - v_t ~ ,
\end{equation}
\begin{equation}\label{eq:Einsteineq1t}
(d-1)\frac{f_t}{2}\left[\frac{2\delta'_t f_t +f'_t}{rf_t}+\frac{d-2}{r^2}\right]-\frac{(d-1)(d-2)}{2(rR_0)^2}+\Lambda = u_t \left(\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t}+ f_t (\xi'_t)^2\right) - v_t ~ ,
\end{equation}
\begin{eqnarray}
&&\frac{e^{-2\delta_t}}{2f_t^3}\left( f_t \ddot{f}_t - 2(\dot{f}_t)^2- f_t \dot{f}_t \dot{\delta}_t \right) +\frac{f_t}{2} \left( 2\delta''_t +2(\delta'_t)^2+\frac{f''_t +3\delta'_t f'_t}{f_t}\right) \nonumber\\
&& +\frac{d-2}{r}\left(f'_t+\delta'_t f_t \right)-\frac{(d-2)(d-3)}{2(rR_0)^2}\left(1-f_t R_0^2\right)+\Lambda\nonumber\\
&& \quad = ~ \frac{1}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} ~ n \left(2n-d-1\right) \frac{\sin^{2(n-1)}\xi_t}{(rR_0)^{2(n-1)}} \left(-\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t}+ f_t (\xi'_t)^2\right) \nonumber\\
&& \quad \quad - \frac{1}{d (d-1)} \sum_{n=1}^d\gamma_n C^{d}_{n} \left( 2n^2-3nd+n-d+d^2\right) \frac{\sin^{2n}\xi_t}{(rR_0)^{2n}} - \gamma_0 V ~ . \label{eq:Einsteineq2t}
\end{eqnarray}
Equations \eqref{eq:Einsteineqt} and \eqref{eq:Einsteineq1t} can further be simplified into
\begin{eqnarray}
m'_t &=& \frac{(d-2)}{(d-1)}r^{d-1}\left( u_t \left(\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t}+f_t (\xi'_t)^2\right)+ v_t \right) ~ , \nonumber\\
\delta'_t &=& \frac{2u_t r}{(d-1)}\left(\frac{(\dot{\xi}_t)^2}{e^{2\delta_t}f_t^2}+(\xi'_t)^2\right) ~ .\label{eq:constraintst}
\end{eqnarray}
Then, there are small fluctuations of $\delta_t(r, t)$, $m_t(r, t)$, and $\xi_t(r, t)$ around the static classical solutions such that we have the expansions
\begin{eqnarray}
m_t(r,t) &=& m(r)+\varepsilon ~ m_l(r,t) ~ , \nonumber\\
\delta_t(r,t) &=& \delta(r)+\varepsilon ~ \delta_l(r,t) ~ , \nonumber\\
\xi_t(r,t) &=& \xi(r)+\varepsilon ~ \xi_l(r,t) ~ , \label{eq:expandstatsol}
\end{eqnarray}
where \(\varepsilon > 0\) is a small parameter. First, substituting the first and the second equations in \eqref{eq:expandstatsol} into \eqref{eq:constraintst} and keeping the first order in \(\varepsilon\), we obtain
\begin{eqnarray}
m_l &=& \frac{d-2}{d-1}2r^{d-1}f e^{\delta}\xi' u ~ \xi_l ~ , \nonumber\\
\delta'_l &=& \frac{2r}{d-1}\left( (\xi')^2 \frac{\partial u}{\partial \xi} \xi_l + 2 u \xi' ~ \xi'_l \right) ~ , \label{eq:expandstatsolsol}
\end{eqnarray}
where $u \equiv u_t(\xi)$, $ \frac{\partial u}{\partial \xi} \equiv \frac{\partial u_t}{\partial \xi}(\xi)$, and we have used \eqref{eq:eomSkyrme} in the computation to get the first equation in \eqref{eq:expandstatsolsol}.
Again, substituting the last equation in \eqref{eq:expandstatsol} into \eqref{eq:eomSkyrmet} and expanding up to first order in \(\varepsilon\), we get
\begin{equation}
\frac{ u}{e^{\delta}f}\ddot{\xi}_l =\frac{1}{r^{d-1}}\left( u f r^{d-1}e^{\delta}\xi_l'\right)'+K(\xi, m,\delta, r)\xi_l ~ , \label{eq:eomSkyrmetorder1}
\end{equation}
with
\begin{eqnarray}
K(\xi,m,\delta,r) &\equiv& \frac{4}{d-1}r e^{\delta} (\xi')^3 \frac{\partial u}{\partial\xi} u f +\frac{1}{r^{d-1}}\left(r^{d-1}e^{\delta}f \xi' \left(\frac{\partial u}{\partial\xi} -\frac{4}{d-1} u^2 \right)\right)'\nonumber\\
&&-\frac{e^{\delta}}{2}\left(f \frac{\partial^2 u}{\partial\xi^2} ( \xi')^2 + \frac{\partial^2 v}{\partial\xi^2} \right) ~ .
\end{eqnarray}
Taking
\begin{equation}
\xi_l (r, t) = \left(u f e^{\delta} r^{d-1} \right)^{-1/2} \psi(r) e^{i\omega t}~ ,
\end{equation}
we can cast \eqref{eq:eomSkyrmetorder1} into the Sturm-Liouville equation
\begin{equation}
\psi'' + \left(-\frac{1}{2} \left(u f e^{\delta} r^{d-1} \right)''+\frac{ \left( (u f e^{\delta} r^{d-1})' \right)^2}{4 u f e^{\delta} r^{d-1} } + r^{d-1} \frac{ e^{\delta}f }{u} K(\xi,m,\delta,r) + \omega^2\right) \psi = 0 ~ . \label{eq:eomSkyrmetorder1lg}
\end{equation}
The solution $\psi(r)$ is said to be linearly stable if the eigenvalue $\omega^2 > 0$ and $\psi(r) > 0$ for $r_h < r < +\infty$. However, to have unstable solutions, it is sufficient to show that there exists an eigenvalue with $\omega^2<0$.
It is straightforward to show that there exists a unique local solution of \eqref{eq:eomSkyrmetorder1lg} in an arbitrary interval $I \subset I_\infty$ using the same procedure as in subsection \ref{subsec:LocExisSmooth}. Then, we apply the uniqueness condition to get a global solution in $ I_\infty$ by gluing all of these local solutions.
Let us first discuss the behavior of \eqref{eq:eomSkyrmetorder1lg} in the asymptotic region where $r \in I_A$. First of all, we take the case of $\Lambda < 0$. Employing the expansions \eqref{eq:Expandasymp} and \eqref{eq:fasympcon}, eq. \eqref{eq:eomSkyrmetorder1lg} can be simplified to
\begin{equation}
\psi'' + \frac{\Lambda \gamma_0}{d(d-1) \gamma_1} \frac{\partial^2 V}{\partial \xi^2}(\xi_\infty) r^{d+1} \psi = 0 ~ , \label{eq:eomSkyrmetorder1lgasymp}
\end{equation}
where we have set $n_3 = d-1$ implying $d \ge 3$. At the lowest order of the expansion, we could have either $\omega^2 \in \mathrm{I\hspace{-0.7mm}R}$ for $\tilde{\xi}_1 \in \mathrm{I\hspace{-0.7mm}R}$, or
\begin{equation}
\omega^2 = - \frac{16 \Lambda^2 \tilde{\xi}_1 \gamma_1}{d^2 (d-1)^2} ~ , \label{eq:omega}
\end{equation}
for $\tilde{\xi}_1 \in \mathrm{I\hspace{-0.7mm}R}$. Assuming
\begin{equation}
\frac{\partial^2 V}{\partial \xi^2}(\xi_\infty) > 0 ~ , \label{eq:locminVasymp}
\end{equation}
the solution of \eqref{eq:eomSkyrmetorder1lgasymp} has the form
\begin{equation}
\psi(r, \Lambda) = A_0 \left(\frac{d+3}{k_\Lambda^{1/2} r^{\frac{d+1}{2}} } \right)^{1/2} {\mathrm{exp}}\left(-\frac{2 k_\Lambda^{1/2}}{d+3} r^{\frac{d+3}{2}} \right) \left(1 + O\left( r^{-\frac{d+3}{2}} \right) \right) ~ , \label{eq:soleomSkyrmetorder1lgasymp}
\end{equation}
which is the asymptotic form of the modified Bessel function of the second kind, where
\begin{equation}
k_\Lambda \equiv \frac{|\Lambda | \gamma_0}{d(d-1) \gamma_1} \frac{\partial^2 V}{\partial \xi^2}(\xi_\infty) ~ ,
\end{equation}
and $A_0 > 0$. It is easy to see that $\psi(r)$ in \eqref{eq:soleomSkyrmetorder1lgasymp} is a positive definite function on $I_A$. In the $\Lambda = 0$ case, we have a similar equation as \eqref{eq:eomSkyrmetorder1lgasymp} with $\omega^2 \in \mathrm{I\hspace{-0.7mm}R}$ whose solution is given by
\begin{equation}
\psi(r,0) = A_0 \left(\frac{d+1}{k_0^{1/2} r^{\frac{d-1}{2}} } \right)^{1/2} {\mathrm{exp}}\left(-\frac{2 k_0^{1/2}}{d+1} r^{\frac{d+1}{2}} \right) \left(1 + O\left( r^{-\frac{d+1}{2}} \right) \right) ~ , \label{eq:soleomSkyrmetorder1lgasymp1}
\end{equation}
which has positive value on $I_A$ with
\begin{equation}
k_0 \equiv \frac{ \gamma_0}{2 \gamma_1 R_0^2} \frac{\partial^2 V}{\partial \xi^2}(\xi_\infty) ~ ,
\end{equation}
satisfying \eqref{eq:locminVasymp}.
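The form of \eqref{eq:soleomSkyrmetorder1lgasymp} can also be understood from a standard WKB estimate; this is a consistency check rather than part of the derivation. For an equation of the type $\psi'' - k\, r^{m}\psi = 0$ with $k > 0$, the decaying WKB mode reads
\begin{equation}
\psi \sim \left(k r^{m}\right)^{-1/4} {\mathrm{exp}}\left( - \int^r \sqrt{k\, s^{m}} ~ ds \right) = k^{-1/4}\, r^{-\frac{m}{4}}\, {\mathrm{exp}}\left( - \frac{2 k^{1/2}}{m+2}\, r^{\frac{m+2}{2}} \right) ~ ,
\end{equation}
which reproduces the $r$-dependence of \eqref{eq:soleomSkyrmetorder1lgasymp} for $m = d+1$ with $k = k_\Lambda$, and of \eqref{eq:soleomSkyrmetorder1lgasymp1} for $m = d-1$ with $k = k_0$.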
Next, we show the existence of positive definite solutions of \eqref{eq:eomSkyrmetorder1lg} on $I_L$. In our proof, we use the following fixed point theorem:
\begin{theorem}{\textnormal{\cite{Krasnosel_1964}}}
\label{FixedPoint}
Let ${\mathcal E}$ be a Banach space in which we have a cone ${\mathcal K} \subset {\mathcal E}$. Assume that $\Omega_1$ and $\Omega_2$ are open subsets of ${\mathcal E}$ with $0 \in \Omega_1$ and $\bar{\Omega}_1 \subset \Omega_2$, and let $\hat{H}$ be a completely continuous operator satisfying
\begin{equation}
\hat{H} : {\mathcal K} \cap \left( \bar{\Omega}_2 \diagdown \Omega_1 \right) \to {\mathcal K} ~ ,
\end{equation}
such that either
\begin{itemize}
\item[i)] $\lVert \hat{H}\psi \rVert \le \lVert \psi \rVert$, $\psi \in {\mathcal K} \cap \Omega_1 $, and $\lVert \hat{H}\psi \rVert \ge \lVert \psi \rVert$, $\psi \in {\mathcal K} \cap \Omega_2 $;
\item[] or
\item[ii)] $\lVert \hat{H}\psi \rVert \ge \lVert \psi \rVert$, $\psi \in {\mathcal K} \cap \Omega_1 $, and $\lVert \hat{H}\psi \rVert \le \lVert \psi \rVert$, $\psi \in {\mathcal K} \cap \Omega_2 $.
\end{itemize}
Then, we have a fixed point of $\hat{H}$ in ${\mathcal K} \cap \left( \bar{\Omega}_2 \diagdown \Omega_1 \right)$.
\end{theorem}
In addition, some steps of the proof closely follow \cite{Wang_1994}. The first step is to consider $\psi(r)$ in \eqref{eq:eomSkyrmetorder1lg} on the interval $[r_1, r_2]$ with the boundary conditions
\begin{eqnarray}
a_0 \psi(r_1) - a_1 \psi'(r_1) &=& 0 ~ , \nonumber\\
b_0 \psi(r_2) - b_1 \psi'(r_2) &=& 0 ~ , \label{eq:boundcond}
\end{eqnarray}
where $a_0, a_1, b_0, b_1 \ge 0$. Suppose we have a solution of \eqref{eq:eomSkyrmetorder1lg}
\begin{equation}
\psi(r) = \int_{r_1}^{r_2} G(r,s) F(s)\psi(s) ds \equiv \hat{H}\psi(r) ~ ,\quad \psi \in C[r_1, r_2] ~ , \label{eq:soleomSkyrmetorder1lg}
\end{equation}
where
\begin{equation}
F(r) \equiv -\frac{1}{2} \left(u f e^{\delta} r^{d-1} \right)''+\frac{ \left( (u f e^{\delta} r^{d-1})' \right)^2}{4 u f e^{\delta} r^{d-1} } + r^{d-1} \frac{ e^{\delta}f }{u} K(\xi,m,\delta,r) + \omega^2 ~ ,
\end{equation}
and $ G(r,s)$ is the Green's function of
\begin{equation}
\psi'' = 0 ~ , \label{eq:soleomSkyrmetordertriv}
\end{equation}
with boundary conditions \eqref{eq:boundcond} whose form is given by
\begin{equation} \label{eq:Greenfunc}
\begin{aligned}[b]
G(r,s) =
&
\begin{cases}
\frac{1}{\rho} X(r) Y(s) & ; ~ r_1 \le s \le r \le r_2 ~ , \\
\frac{1}{\rho} X(s) Y(r) & ; ~ r_1 \le r \le s \le r_2 ~ ,
\end{cases}
\end{aligned}
\end{equation}
where $\rho > 0$,
\begin{equation}
X(r) \equiv b_0 \left(r_2 -r \right) + b_1 ~ , \quad Y(r) \equiv a_0 r + a_1 ~ , \quad r \in [r_1, r_2] ~ .
\end{equation}
Suppose we have a cone ${\mathcal K}$ in $C[r_1, r_2]$ given by
\begin{equation}
{\mathcal K} \equiv \left\{ \psi \in C[r_1, r_2] : \psi(r) \ge 0, \quad \min\limits_{[r_{5/4}, r_{7/4}] } \psi(r) \ge C_G \lVert \psi \rVert \right\} ~ ,
\end{equation}
where $r_{5/4} > r_1$, $r_{7/4} < r_2$, $\lVert \psi \rVert \equiv \sup\limits_{[r_1, r_2]} | \psi(r)| $,
\begin{equation}
C_G \equiv \min \left\{ \frac{(r_2-r_{7/4}) b_0 + b_1}{ r_2 (b_0 + b_1)}, \frac{r_1 a_0 + a_1}{ r_2 (a_0 + a_1)}\right\} ~ .
\end{equation}
Since $G(r,s) \le G(s,s)$ for $r_1 \le r, ~ s \le r_2$, if $\psi \in {\mathcal K}$, then we have
\begin{equation}
\hat{H}\psi(r) = \int_{r_1}^{r_2} G(r,s) F(s)\psi(s) ds \le \int_{r_1}^{r_2} G(s,s) F(s)\psi(s) ds ~ , \label{eq:ineq1}
\end{equation}
implying
\begin{equation}
\lVert \hat{H}\psi(r) \rVert \le \int_{r_1}^{r_2} G(s,s) F(s)\psi(s) ds ~ .
\end{equation}
Moreover, since
\begin{equation}
G(r,s) \ge C_G ~ G(s,s) ~ , \quad r \in [r_{5/4}, r_{7/4}] ~ ,
\end{equation}
it follows
\begin{equation}
\min\limits_{ [r_{5/4}, r_{7/4}] } \hat{H}\psi(r) \ge C_G \lVert \hat{H}\psi \rVert ~ .
\end{equation}
Thus, $\hat{H}{\mathcal K} \subset {\mathcal K}$ which implies that the mapping $\hat{H}: {\mathcal K} \to {\mathcal K}$ is completely continuous.
Suppose there is a constant $C_1 > 0$ such that for $0 < \psi \le C_1$ we have
\begin{equation}
\int_{r_1}^{r_2} G(s,s) F(s) ds \le 1 ~ . \label{eq:ineq2}
\end{equation}
If $\psi \in {\mathcal K}$ and $ \lVert \psi \rVert = C_1$, then from \eqref{eq:ineq1} and \eqref{eq:ineq2} it follows
\begin{equation}
\hat{H}\psi(r) \le \int_{r_1}^{r_2} G(s,s) F(s)\psi(s) ds \le \lVert \psi \rVert ~ . \label{eq:ineq3}
\end{equation}
Defining
\begin{equation}
\Omega_1 \equiv \left\{ \psi \in{\mathcal E} : \lVert \psi \rVert < C_1\right\} ~ ,
\end{equation}
\eqref{eq:ineq3} implies
\begin{equation}
\lVert \hat{H}\psi \rVert \le \lVert \psi \rVert ~ , \quad \psi \in {\mathcal K}\cap \partial\Omega_1 ~ .
\end{equation}
Next, suppose we also have $C_2 > 0$ such that for $\psi \ge C_2$
\begin{equation}
C_G \int_{r_{5/4}}^{r_{7/4}} G(r_{3/2},s) F(s) ds \ge 1 ~ , \label{eq:ineq4}
\end{equation}
where $r_{5/4}< r_{3/2} < r_{7/4}$. Introducing $C_3 \equiv \max\left\{\frac{C_1}{r_{7/4} -r_{5/4}}, \frac{C_2}{C_G} \right\} $ and
\begin{equation}
\Omega_2 \equiv \left\{ \psi \in{\mathcal E} : \lVert \psi \rVert < C_3\right\} ~ ,
\end{equation}
if $\psi \in {\mathcal K}$ and $ \lVert \psi \rVert = C_3$, then
\begin{equation}
\min\limits_{ [r_{5/4}, r_{7/4}] } \psi(r) \ge C_G \lVert \psi \rVert \ge C_2 ~ ,
\end{equation}
such that
\begin{eqnarray}
\hat{H}\psi(r_{3/2}) = \int_{r_1}^{r_2} G(r_{3/2},s) F(s)\psi(s) ds \ge C_G \lVert \psi \rVert \int_{r_{5/4}}^{r_{7/4}} G(r_{3/2},s) F(s) ds \ge \lVert \psi \rVert ~ . \label{eq:ineq5}
\end{eqnarray}
So, we obtain
\begin{equation}
\lVert \hat{H}\psi \rVert \ge \lVert \psi \rVert ~ , \quad \psi \in {\mathcal K}\cap \partial\Omega_2 ~ .
\end{equation}
To conclude, using i) of Theorem \ref{FixedPoint}, the operator $ \hat{H}$ has a fixed point in ${\mathcal K} \cap \left( \bar{\Omega}_2 \diagdown \Omega_1 \right)$ with $C_1 \le \lVert \psi \rVert \le C_3$. Moreover, the fact that $G(r,s) > 0$ implies $\psi(r) > 0$ on the interval $[r_1, r_2]$. It is worth mentioning that by interchanging $\Omega_1 \to \Omega_2$ and $\Omega_2 \to \Omega_1$ in the above computation, and using ii) of Theorem \ref{FixedPoint}, we obtain the same results.
Since we have proved that $\psi(r) > 0$ on an arbitrary interval $[r_1, r_2] \subset I_L$, and this solution is unique, we can extend the proof by gluing all of these solutions to obtain $\psi(r) > 0$ on $I_L$. Again, by gluing $\psi(r) > 0$ on $I_L$ with \eqref{eq:soleomSkyrmetorder1lgasymp}, we finally obtain $\psi(r) > 0$ on $I_\infty$.
So, we have
\begin{theorem}
\label{MainresultStability}
There exist stable and unstable static spherically symmetric solutions of the higher dimensional Einstein-Skyrme system with general couplings \eqref{eq:EinsteinSkyrmeLdgenform} and $\Lambda \le 0$.
\end{theorem}
\section*{Acknowledgments}
The work in this paper is partly supported by Riset ITB 2021 and PDUPT Kemenristek 2021.
\section{Introduction}
Quantitative investment analysis \cite{guida2019big} is the use of mathematical and statistical methods to assist investors in making profitable investment decisions. With the help of high-performance computer technology \cite{High-Frequency}, we can effectively analyze huge amounts of financial data in a short time, and automatically execute orders according to the pre-programmed instructions. Recently, quantitative investment methods have attracted more and more attention from investors and researchers.
Fundamental analysis and technical analysis are the two primary methods used to analyze financial data and make investment decisions \cite{schwager1984complete}. Fundamental analysis determines the fair value of a business by analyzing the company's financial statements, while technical analysis \cite{murphy1999technical} is a methodology for forecasting the direction of prices by analyzing past market data, primarily price and volume. For instance, Moving Average Convergence / Divergence (MACD) \cite{appel2008understanding} employs moving averages to reveal changes in the strength, direction, momentum, and duration of a trend in a contract price; the Dual Thrust strategy \cite{pruitt2012building} first generates two lines, i.e., the BuyLine and the SellLine, according to the mean and variance values during a period of time, and then generates a trading signal when the current price breaks through the BuyLine or SellLine. The above-mentioned models are relatively simple, and thus they are widely used as baselines in quantitative analysis. However, their generalization ability \cite{DingWSGG20} is relatively weak.
The generalization ability of quantitative methods can be improved to a certain extent by combining them with advanced supervised learning techniques. For instance, the method in \cite{chen2015lstm} feeds historic price data, technical analysis data and economic fundamentals into LSTM (Long Short-Term Memory) \cite{schmidhuber1997long} to predict price fluctuations at daily frequency; the method in \cite{FengC0DSC19} uses an attentive LSTM model to predict the movement of contract prices in the near future and employs adversarial training to improve the generalization of the model; the method in \cite{0101BHCXS20} uses an LSTM model with a graph convolutional network to process overnight news, and then predicts the overnight stock movement between the previous close price and the open price. However, forecasting accuracy reflects only part of the model performance \cite{li2019deep}; long-term goals and delayed rewards should be considered in real futures trading. In addition, several important factors (e.g., transaction cost) in real transactions are not considered at all in these methods.
Recently, some quantitative methods based on Reinforcement Learning (RL) \cite{sutton2018reinforcement} have been proposed to solve sequential trading decision-making problems, and aim to maximize the expectation of cumulative reward. For instance,
the method in \cite{deng2016deep} first employs a fuzzy network to reduce the uncertainty of the input data and an auto-encoder to reduce the dimension of the features, and then uses a recurrent neural network to map the state of the environment to the agent's action directly; the methods in \cite{tan2011stock,zhang2020deep} use the deep Q-network (DQN) \cite{tan2011stock} to approximate the action-value function in order to estimate the value of the agent's trading decision; the method in \cite{si2017multi} sets multiple targets according to the mean and variance of returns in the financial market to balance profits and risks.
Note that the above reinforcement learning based methods do not depend on any prior knowledge; thus they often require many experiences for learning, preventing them from being practical in most real situations \cite{BrysHSCTN15}.
Imitation learning is a learning pattern in RL, which introduces prior knowledge in the training stage.
In this pattern, an expert demonstrates how to solve the task, and the agent imitates these demonstrations to select the corresponding actions. Behavior cloning (BC) \cite{BC1,BC2} is a typical method in imitation learning.
Until now, there are just a few related works in the financial field. For instance, the method \cite{liu2020adaptive} firstly constructs an actor-critic network to map the state to the action, and then uses behavior cloning method to pre-train this network.
However, BC based methods use supervised learning to imitate the expert's demonstrations greedily instead of reasoning about the consequences of actions. Thus, when the agent drifts and encounters out-of-distribution states, it does not know how to return to the demonstrated states \cite{reddy2019sqil}.
In this paper, we first model the futures trading problem as a Markov decision process, and then propose an RL based method with expert trajectory for quantitative trading. Unlike behavior cloning, which sets the expert actions as demonstrations and uses supervised learning to imitate the expert's actions, we design the temporal-difference error (TD-error) derived from both the agent-environment interaction and the expert-environment interaction. In this way, the agent is more adaptable to the inevitable noise in financial data. In addition, in order to express the current situation of the market more comprehensively, we introduce more than 100 short period alpha factors as the state instead of the several technical indicators used in related work. Experimental results evaluated on share price index futures in China, including IF (CSI 300) and IC (CSI 500), show the advantages of the proposed method compared with three typical technical analysis methods, including Buy \& Hold, MACD and Dual Thrust, and two RL based methods, including DQN and BC.
The rest of this paper is arranged as follows.
Section \ref{sec:Preliminaries} shows the preliminaries of RL used in the proposed method.
Section \ref{sec:Proposed_work} describes the proposed method.
Section \ref{sec:Experiments} shows the comparative results and discussions.
Finally, the concluding remarks of this paper and future works are given in Section \ref{sec:Conclusion}.
\section {Preliminaries}
\label{sec:Preliminaries}
In this section, we describe some preliminaries of RL \cite{sutton2018reinforcement} used in the proposed method, namely the MDP, Q-learning, $\epsilon$-greedy exploration and experience replay.
\paragraph{MDP:} In a finite MDP, the agent and the environment interact at each discrete time step. At time step $t$, the agent first receives some representation $s_t \in \mathcal{S}$ of the environment's state, and then selects an action $a_t \in \mathcal{A}$ according to $s_t$. The agent then gets a numerical reward $r_t \in \mathcal{R}$ from the environment and moves to a new state $s_{t+1}$.
The interactions between the agent and the environment finally produce a trajectory
$\tau = [s_0, a_0, r_0, s_1, a_1, r_1, \ldots ]$.
\paragraph{Q-learning:} RL aims to find an optimal policy $\pi:\mathcal{S}\rightarrow\mathcal{A}$ to maximize the cumulative reward $G_t$ of agent:
\begin{equation}
G_t = \sum_{k = 0}^{T - t} \gamma ^k r_{t + k}
\end{equation}
where $\gamma \in [0, 1]$ is the discount factor; $T$ is a final time step in the current episode.
The value function describes the expectation of the cumulative reward, and the optimal policies share the same optimal value function.
Q-table expresses the maximum of value function which is calculated from each state and each action.
Q-learning \cite{watkins1992q} updates the Q-table based on the following formula until convergence,
\begin{equation}
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha [r_t + \gamma \max_a Q(s_{t+1}, a) - Q(s_t, a_t)]
\end{equation}
where $ \alpha $ is the learning rate.
A Q-network is used to approximate the Q-table when the latter is too large, and it outputs $Q(\cdot, \cdot; \theta)$, where $\theta$ denotes the parameters of the Q-network.
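As a minimal illustration of the tabular update above (the state and action space sizes below are hypothetical, not those of the trading environment):
\begin{verbatim}
import numpy as np

n_states, n_actions = 10, 3
alpha, gamma = 0.1, 0.99
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
\end{verbatim}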
\paragraph{$\epsilon$-greedy:} The $\epsilon$-greedy rule \cite{DBLP:TokicP11} aims to increase the agent's exploration ability.
At each time step $t$ in the training stage, the agent chooses an action randomly with probability $\epsilon$.
With probability $1-\epsilon$, the agent selects the action with the largest action value, denoted as
\begin{equation}
a_{t} = \left\{
\begin{aligned}
&\max_{a}Q(s_t, a;\theta), \ \text{with\ probability\ }(1 - \epsilon) \\
&\text{a\ random\ action}, \ \text{with\ probability\ }\epsilon
\end{aligned}
\right.
\end{equation}
Note that $\epsilon$ decreases gradually during the training stage.
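A direct translation of this rule might look as follows, assuming a vector \texttt{q\_values} produced by the Q-network:
\begin{verbatim}
import numpy as np

def epsilon_greedy(q_values, epsilon, rng=None):
    # explore with probability epsilon, otherwise act greedily
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
\end{verbatim}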
\paragraph{Experience replay:} Experience replay \cite{DBLP:Lin92} aims to break the temporal correlation among MDP samples and reduce the amount of experience required for learning. After each interaction between the agent and the environment, the current sample is saved into the replay buffer. When the buffer is full, we randomly select a mini-batch of samples from the buffer to calculate the loss function.
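A minimal replay buffer along these lines might be sketched as follows; the capacity and batch size are illustrative:
\begin{verbatim}
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, sample):
        # a sample is one transition tuple from the environment
        self.buffer.append(sample)

    def is_full(self):
        return len(self.buffer) == self.buffer.maxlen

    def sample(self, batch_size=64):
        # uniform random mini-batch breaks temporal correlation
        return random.sample(self.buffer, batch_size)
\end{verbatim}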
\section {Proposed Method}
\label{sec:Proposed_work}
In this section, we first give the problem definition of the MDP in futures trading, and then show the details of the proposed method based on RL with expert trajectory.
\subsection{Problem Definition}
\label{sec:Problem_definition}
In the proposed method, we model the trading decision-making problem as a finite MDP. Thus, the discrete probability distributions of the state $s_t$ and the reward $r_t$ depend only on the preceding state and action. In the following, we describe the state set $\mathcal{S}$, the action set $\mathcal{A}$ and the reward design.
\begin{itemize}
\item \textbf{State Set $\mathcal{S}$:} At each time step $t$, the representations $s_t \in \mathcal{S}$ of the environment's state include historical market indicators, such as OHLC (open, high, low, close) prices, volume and amount, and over 100 factors \footnote{https://www.joinquant.com/data/dict/alpha191} calculated from those fundamental data such as volatility, momentum and other statistics over a period of time.
\item \textbf{Action Set $\mathcal{A}$:} For a fair comparison with other trading methods, we assume that the transaction at each time step is one unit. Therefore, the action set is defined as $\mathcal{A} = \{-1, 0, 1\}$, where $-1$ denotes short position; $0$ denotes no holding; $1$ denotes long position.
\item \textbf{Reward Design in Training and Testing Stages:} Since the expert trajectory is introduced in the proposed method, there is no need for the agent to learn from the reward signal. Like \cite{reddy2019sqil}, in the training stage, the reward of the expert is fixed as $r = 1$ and the reward of the agent is fixed as $r = 0$.
In the testing stage, however, the real reward of the agent is used to measure the performance of the proposed method \cite{deng2016deep}. At time step $t$, the agent gains the reward (see the sketch after this list):
\begin{equation}
r_t = a_{t-1}(p_t - p_{t-1}) - c|a_t - a_{t-1}|
\end{equation}
where $a_t$ is the current action, $a_{t-1}$ is the previous action, and $c$ is the transaction cost rate. Note that when the action is unchanged, i.e., $|a_t - a_{t-1}| =0$, the agent maintains the current position without any transaction cost. When the action changes, i.e., $|a_t - a_{t-1}| = 1$ or 2, the corresponding transaction cost is $c$ or $2c$, respectively.
\end{itemize}
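A direct implementation of this testing-stage reward might read as follows; the cost rate \texttt{c} is a parameter:
\begin{verbatim}
def test_reward(a_prev, a_curr, p_prev, p_curr, c):
    # r_t = a_{t-1} * (p_t - p_{t-1}) - c * |a_t - a_{t-1}|
    # actions in {-1, 0, 1}: short, no holding, long
    return a_prev * (p_curr - p_prev) - c * abs(a_curr - a_prev)
\end{verbatim}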
\begin{figure*}
\centering
\includegraphics[scale=0.45]{figs/framework.pdf}
\caption{The framework of the proposed method}
\label{fig:framework}
\end{figure*}
\subsection{The Proposed RL with expert trajectory}
\label{subsec:proposed}
As illustrated in Fig. \ref{fig:framework}, there are four main components in the proposed framework, namely the Expert, the Q-network, the Environment and the Replay Buffer. In the following, we first describe the four components and then describe the training stage of the proposed method.
\begin{itemize}
\item \textbf{Expert:} In the training stage, the expert can always take correct trading actions and generate the corresponding expert trajectory as the demonstration. According to the demonstration, the agent optimizes its policy by imitating the expert's actions. Usually, the expert actions can shorten the inefficient random exploration stage.
As described in section \ref{sec:Problem_definition}, we assume that at each time step $t$, the investor takes an action from the action set, i.e., short, no holding, or long, and the transaction is one unit.
Thus, at each time step $t$, the investment profit only depends on the close price at the next time step $t+1$.
If the investor makes the correct action at every time step, the accumulated profit is the highest.
In other words, we can obtain the optimal policy with a greedy algorithm if we can always predict the close price at the next time step correctly. In the investigated problem, therefore, the expert trajectory is simple and is composed of the correct trading actions during training: taking a long position when the close price increases from time step $t$ to $t+1$; taking a short position when the close price decreases; and no holding when the close price does not change (see the sketch after this list).
\item \textbf{Q-network:} The Q-network is composed of an LSTM model and a fully-connected (FC) layer, and it aims to approximate the state-action value function. The input of the Q-network is a given state that consists of a sequence of historical market indicators and factors, and the output is the action value for each possible action.
\item \textbf{Environment:} The environment is a simulated financial market. The agent acts as an investor interacting with the environment: it takes a trading action according to a given state in the environment, and then obtains the reward from the environment and observes the next state.
\item \textbf{Replay Buffer:} The replay buffer is used to achieve experience replay, which aims to reduce the temporal correlation among MDP samples. At each time step, the experiences of the agent and the expert are saved to the replay buffer. When the buffer is full, a mini-batch of samples is randomly selected to update the weights of the Q-network.
\end{itemize}
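Since the expert action reduces to the sign of the next close-price move, the expert trajectory can be generated in a few lines; \texttt{close} below is a placeholder price series:
\begin{verbatim}
import numpy as np

def expert_actions(close):
    # a_t = sign(p_{t+1} - p_t): long (+1), short (-1), no holding (0)
    diff = np.sign(np.diff(np.asarray(close, dtype=float)))
    return diff.astype(int)  # length len(close) - 1
\end{verbatim}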
At each time step in the training process, the agent first observes the state $s_t$, and then takes an action $a_t$ according to the output of the Q-network and the $\epsilon$-greedy rule. The agent then obtains the reward $r_t$ from the environment and observes the next state $s_{t+1}$. At the same time, the agent obtains the demonstrative action $a_t^e$ and the expert reward $r_t^e$ from the expert trajectory. To achieve experience replay, the sample $(s_t, a_t, a_t^e, r_t, r_t^e, s_{t+1})$ is saved into the replay buffer.
The above process repeats until the replay buffer is full.
When the replay buffer is full, we randomly select a mini-batch of samples $(s, a, a^e, r, r^e, s')$ from the buffer, where $s$ denotes the current state and $s'$ denotes the next state, while $a$ ($a^e$) and $r$ ($r^e$) denote the agent's (expert's) action and reward, respectively.
Then the agent loss $L^{a}$ and the expert loss $L^{e}$ are defined as the TD-errors generated from the agent-environment and expert-environment interactions, respectively, that is,
\begin{equation}
L^{a} = E[(r + \gamma \max_{a}Q(s', a; \theta) - Q(s, a; \theta))^2]
\end{equation}
\begin{equation}
L^{e} = E[(r^e + \gamma \max_{a}Q(s', a; \theta) - Q(s, a^e; \theta))^2]
\end{equation}
\vspace{0.5em}Finally, the loss function $L$ is defined as the average value of $L^{a}$ and $L^{e}$:
\begin{equation}
L = \frac{L^{a} + L^{e}}{2}
\end{equation}
We use $L$ to calculate the gradients of the weights in the Q-network via back propagation.
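A PyTorch-style sketch of the loss definitions above is given below; the tensor names and the \texttt{q\_network} interface are assumptions for illustration, not the authors' released code:
\begin{verbatim}
import torch

def combined_loss(q_network, s, a, a_exp, r, r_exp, s_next, gamma=0.992):
    # a, a_exp: long tensors of action indices in {0, 1, 2}
    # shared TD target term: gamma * max_a' Q(s', a')
    q_next = q_network(s_next).max(dim=1).values.detach()
    q = q_network(s)
    q_a  = q.gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a)   (agent)
    q_ae = q.gather(1, a_exp.unsqueeze(1)).squeeze(1)  # Q(s, a^e) (expert)
    loss_agent  = ((r     + gamma * q_next - q_a )**2).mean()
    loss_expert = ((r_exp + gamma * q_next - q_ae)**2).mean()
    return 0.5 * (loss_agent + loss_expert)
\end{verbatim}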
The pseudo-code of the training process is shown in Algorithm \ref{alg:algorithm}.
\begin{algorithm}[tb]
\caption{Reinforcement learning with expert trajectory}
\label{alg:algorithm}
\textbf{Input}:\\
Episode number $N$; Time step number $T$ in an episode;\\
Buffer capacity $N_c$; Batch size $N_s$;\\
Actions in expert trajectory $[a_{0}^{e}, a_{1}^{e}, a_{2}^{e},\dots]$.\\
\textbf{Parameter}: \\ Q-Network parameters $\theta$;
\begin{algorithmic}[1]
\STATE $episode\leftarrow 0$;
\WHILE{$episode < N$}
\STATE $t \leftarrow 0$;
\STATE Observe initial state $s_0$;
\WHILE {$t < T$}
\STATE $b_c \leftarrow 0$;
\WHILE{$b_c < N_c$ \& $t < T$ }
\STATE Select a random action with probability $\epsilon$ or select an action with max action-value:\\ $a_t \leftarrow max_{a}Q(s_t, a; \theta)$ from Q-network;
\STATE Interact with the environment, obtain the reward $r_t$ and observe the next state $s_{t+1}$;
\STATE Take the expert action $a_t^{e}$ to interact with the environment, obtain the reward $r^{e}_t$ and observe the next state $s_{t+1}$;
\STATE Save the resulting sample $(s_t, a_t, a_t^e, r_t, r_t^e, s_{t+1})$ into replay buffer;
\STATE $t \leftarrow t+1$;
\STATE $b_c \leftarrow b_c+1$;
\ENDWHILE
\STATE Select a mini-batch of sample $(s, a, a^e, r, r^e, s')$ with the size $N_s$ randomly from the buffer;
\STATE Calculate loss function: \\$L = (L^{a} + L^{e}) / 2$;
\STATE Use $L$ to update $\theta$ in Q-network;
\STATE Clear replay buffer;
\ENDWHILE
\STATE $episode \leftarrow episode+1$;
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Experimental Results and Discussions}
\label{sec:Experiments}
In this section, we first describe the experimental setup used in our experiments, and then give a brief description on metrics and the baseline methods. Finally, we show the comparative results and discussions.
\subsection{Experimental Setup}
\label{subsec:set}
Two major futures indexes in China, i.e., IF (CSI 300 index) and IC (CSI 500 index), are included in our experiments.
We obtain tick level data including the price, volume and transaction amount from Oct. 1st, 2015 to Oct. 15th, 2020. The data from Oct. 1st, 2015 to Oct. 1st, 2019 is used in the training stage, while the remaining data is used for testing. The original tick level data is aggregated into 5-min level data for calculating the factors in our experiments.
Note that all experimental data is from Tinysoft \footnote{\url{http://www.tinysoft.com.cn/}}, a professional financial data platform in China. On this platform, the contract with the largest volume on the previous day is considered the dominant contract of the commodity on the next day. A day trading strategy is used for all test methods, meaning that all positions are closed before the market closes in order to avoid unmanageable risks and negative price gaps between two different trading days.
In addition, we set the discount factor $\gamma=0.992$ and the transaction cost $c=0.023$ \textperthousand \hspace{0.2em}according to the regulations of the financial futures exchange; the margin system is not considered.
In the testing stage, a stop-loss strategy is used in the proposed method. For each time step $t$, the strategy first calculates the mean $\mu_{k}$ and standard deviation $\delta_{k}$ of the price fluctuations over the previous $k$ time steps.
If the price breaks through the upper bound, i.e., $\mu_{k}+\delta_{k}$, while we hold a short position, or the price breaks through the lower bound, i.e., $\mu_{k}-\delta_{k}$, while we hold a long position, then the position is closed. According to our experiments (refer to section \ref{sec:comparative} for details), we fixed $k=25$.
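A sketch of this stop-loss rule is given below; \texttt{prices} is a placeholder array whose last entry is the current price:
\begin{verbatim}
import numpy as np

def stop_loss_signal(prices, position, k=25):
    # position: +1 long, -1 short; close the position when the price
    # breaks the k-step band [mu_k - sigma_k, mu_k + sigma_k] against us
    window = np.asarray(prices[-(k + 1):-1], dtype=float)  # previous k steps
    mu, sigma = window.mean(), window.std()
    p = prices[-1]
    if position == -1 and p > mu + sigma:
        return True   # short position stopped out
    if position == 1 and p < mu - sigma:
        return True   # long position stopped out
    return False
\end{verbatim}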
\subsection{Evaluation Criteria}
To evaluate the profit and risk aversion ability of the proposed model, three commonly used criteria are employed; a sketch implementing them follows this list.
\begin{itemize}
\item \textbf{Accumulated profits:} The accumulated profit of the model is expressed as the sum of the rewards over all time steps in the testing stage, i.e., $R = \sum_{t = 0}^{T} r_t$, where $T$ is the number of time steps in the test set.
\item \textbf{Sharpe ratio:} The Sharpe ratio is used to measure the expected profit relative to its risk.
It can be expressed as $Sharpe = \frac{E(r)}{\sigma(r)}$, where $E(r)$ is the expectation of the profits on the test set and $\sigma(r)$ is their standard deviation.
\item \textbf{Sortino ratio:} The Sortino ratio is similar to the Sharpe ratio, but it uses the downside standard deviation $\sigma(r_d)$, where $r_d$ represents the downside returns. The Sortino ratio can be expressed as $Sortino = \frac{E(r)}{\sigma(r_d)}$.
\end{itemize}
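The three criteria translate directly into code; the following minimal sketch follows the per-step definitions above (no annualization or risk-free rate), with \texttt{rewards} a placeholder array of per-step profits:
\begin{verbatim}
import numpy as np

def evaluate(rewards):
    r = np.asarray(rewards, dtype=float)
    profits = r.sum()                     # accumulated profits
    sharpe = r.mean() / r.std()           # Sharpe = E(r) / sigma(r)
    downside = r[r < 0]                   # downside returns only
    sortino = (r.mean() / downside.std()  # Sortino = E(r) / sigma(r_d)
               if downside.size > 1 else float("nan"))
    return profits, sharpe, sortino
\end{verbatim}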
\subsection{Baseline Methods}
\label{subsec:baselines}
Three typical technical analysis methods, i.e., Buy \& Hold, MACD \cite{appel2008understanding}, and Dual Thrust \cite{pruitt2012building}, and two RL based methods, i.e., DQN and BC, are included for comparative studies in our experiments.
\begin{figure*}[!t]
\centering
\subfloat[Compared with three technical analysis methods] {\includegraphics[width =8.7cm]{{figs/IF_compare-eps-converted-to.pdf}}} \quad
\subfloat[Compared with two RL based methods] {\includegraphics[width=8.7cm]{figs/IF_compare2-eps-converted-to.pdf}}
\caption{The profit curves of various methods during the testing period from Oct. 2, 2019 to
Oct. 15, 2020. (a) Compared with three technical analysis methods, including Buy \& Hold, MACD, and Dual Thrust; (b) Compared with two RL based methods, including DQN and BC.}
\label{IF}
\end{figure*}
\begin{itemize}
\item \textbf{Buy \& Hold:} The investor buys a contract at the beginning of the trading day and holds it for the whole day. The position is then closed at the end of the trading day.
\item \textbf{MACD:} This method includes the DIF line and the DEA line. The DIF line is the difference between a short period exponential moving average (EMA) and a longer period EMA of the price series.
The DEA line is the weighted moving average of the DIF line.
When the DIF line breaks through the DEA line upwards and the DIF value is bigger than 0, a long signal is generated; if the DIF line breaks through the DEA line downwards and the DIF value is smaller than 0, a short signal is generated.
\item \textbf{Dual Thrust:} Let $R = \max (HH - LC, HC - LL) $, where
$LC$ is the minimum of the close price, $HC$ is the maximum of the close price, $LL$ is the minimum of the lowest price, and $HH$ is the maximum of the highest price during a period of time. Using the sum and the difference between $R$ and the open price, we obtain the BuyLine and the SellLine, respectively. When the price breaks through the BuyLine/SellLine, the long/short signal is generated in Dual Thrust (see the sketch after this list).
\item \textbf{DQN:} Compared with the proposed method, the DQN based method trains the agent through the reward, which is designed as the actual profit, instead of the expert trajectory. The loss is defined as the TD-error between two adjacent time steps.
The other modules remain unchanged, and the stop-loss strategy is attached.
\item \textbf{BC:} Behavior cloning is another form of imitation learning.
In BC, the actions in the expert trajectory are used as labels and the Q-network is retained.
During training, the cross entropy is used to measure the gap between the actions generated by the Q-network and the labels, with an Adam optimizer to reduce the loss, and the stop-loss strategy is attached.
\end{itemize}
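As an example of how a baseline generates signals, a minimal Dual Thrust sketch follows; the coefficients \texttt{k1} and \texttt{k2} are assumptions (the description above effectively takes them equal to one):
\begin{verbatim}
import numpy as np

def dual_thrust_lines(high, low, close, open_price, k1=1.0, k2=1.0):
    # R = max(HH - LC, HC - LL) over a lookback window
    HH, LL = np.max(high), np.min(low)
    HC, LC = np.max(close), np.min(close)
    R = max(HH - LC, HC - LL)
    buy_line  = open_price + k1 * R   # long signal on an upward breakout
    sell_line = open_price - k2 * R   # short signal on a downward breakout
    return buy_line, sell_line
\end{verbatim}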
\begin{table}
\caption{The overall performances of various approaches in the IF and IC markets. Values with an asterisk (*) denote the best results in the corresponding cases. }
\label{tab:performance}
\begin{center}
\begin{tabular}{c | c | c c c}
\hline
\multirow{2}{*}{\textbf{Market}} & \multirow{2}{*}{\textbf{Approach}} & \multicolumn{3}{c}{\textbf{Performance}}\\
\textbf{}&\textbf{}&\textbf{Profits}&\textbf{Sharpe}&\textbf{Sortino}\\
\hline
\multirow{2}{*}{{IF}} & {Buy \& Hold}&153.8 & 0.2 &0.28\\
\textbf{} & {MACD}&426.6&0.59&0.55\\
\textbf{} & {Dual Thrust}&107.2&0.57&0.29\\
\textbf{} & {DQN}&796.4&1.25&1.84\\
\textbf{} & {BC}&473.2&0.74&1.08\\
\textbf{} & {Proposed}&\textbf{1064.0 *} &\textbf{2.09 *} & \textbf{2.47 *}\\
\hline
\multirow{2}{*}{{IC}} & {Buy \& Hold}&545.5&0.45&0.62\\
\textbf{} & {MACD}&55.6&0.05&0.04\\
\textbf{} & {Dual Thrust}&179.0&0.4&0.16\\
\textbf{} & {DQN}&1317.3&1.27&1.85\\
\textbf{} & {BC}&-113.8&-0.11&-0.15\\
\textbf{} & {Proposed}& \textbf{1934.3 *} &\textbf{1.86 *}& \textbf{2.86 *}\\
\hline
\end{tabular}
\label{comparation}
\end{center}
\end{table}
\subsection{Comparative Studies}
\label{sec:comparative}
In this section, we compare the proposed method with the five related methods described in section \ref{subsec:baselines}. For the IF market, the accumulated profits of the various methods during the testing period (i.e., from Oct. 2, 2019 to Oct. 15, 2020) are shown in Fig. \ref{IF}. From Fig. \ref{IF}, we observe that the curve of the proposed method is higher than those of the other methods in most cases, and is always the highest at the end of the testing period, which means that the proposed method gains more profits stably.
The overall performances over the testing period for both the IF and IC markets are shown in Table \ref{tab:performance}. From Table \ref{tab:performance}, we make the following two observations:
\begin{itemize}
\item First of all, the proposed method always achieves the best performance on the three criteria for both the IF and IC markets. Taking IF for instance, we achieve profits as high as 1064, and both the Sharpe and Sortino ratios are higher than 2.0. Except for DQN, the profits of the other methods are less than 474, and their Sharpe and Sortino ratios are less than 1.10.
\item The three typical technical analysis methods always achieve positive profits, although their profits are not very high. DQN achieves much better results than the three typical technical analysis methods and BC. For BC, however, the performance seems quite unstable: the profit is positive in the IF market while it becomes negative in the IC market.
\vspace{0.5em} In addition, the following two experiments, on the parameter $k$ in the stop-loss strategy and on the trading frequency of the proposed method, are also considered.
\begin{figure}
\centering
\includegraphics[scale=0.58]{figs/IF_diff_window-eps-converted-to.pdf}
\caption{The profit curves of the stop-loss strategy with different values of the parameter $k$, and without the stop-loss strategy. }
\label{fig:IF_k}
\end{figure}
\end{itemize}
\vspace{0.5em} \paragraph{About the parameter $k$ in the stop-loss strategy: } As described in section \ref{subsec:set}, there is a parameter $k$ in the stop-loss strategy. In this section, we compare the profit curves of the stop-loss strategy with the parameter $k$ ranging from 5 to 35 with a step of 10, and without the stop-loss strategy. The results are shown in Fig. \ref{fig:IF_k}. From Fig. \ref{fig:IF_k}, we observe that the stop-loss strategy with the parameter $k=25, 35$ achieves better profits than the others in most cases, and the strategy with $k=25$ finally gains the highest profit. That is why we set $k=25$ in our experiments. We also observe that the stop-loss strategy with smaller parameters, e.g., $k=5, 15$, achieves even poorer results than no stop-loss strategy in most cases. Thus, we conclude that the stop-loss strategy is necessary and that its parameter $k$ should be carefully selected.
\vspace{0.5em} \paragraph{About the trading frequency: } In the previous experiments, we set the trading frequency of the proposed method to 5 minutes. In this section, we compare the performance at two other trading frequencies, 3-min and 30-min. The comparative results are shown in Table \ref{tab:frequency}. From Table \ref{tab:frequency}, we observe that in both the IF and IC markets, the proposed method with relatively higher frequencies (i.e., 3-min and 5-min) works better than with the lower frequency, i.e., 30-min. Taking IC for instance, the proposed method with a 5-min trading frequency gains 1934.3 profits, while the profit drops to 405.2 when the frequency becomes 30-min. Thus, we conclude that the trading frequency is one of the important factors affecting the proposed method.
\begin{table}
\caption{The overall performances of the proposed method with different trading frequencies in the IF and IC markets. Values with an asterisk (*) denote the best results in the corresponding cases.}
\begin{center}
\begin{tabular}{c | c | c c c}
\hline
\multirow{2}{*}{\textbf{Market}} & \multirow{2}{*}{\textbf{Frequency}} & \multicolumn{3}{c}{\textbf{Performance}}\\
\textbf{}&\textbf{}&\textbf{Profits}&\textbf{Sharpe}&\textbf{Sortino}\\
\hline
\multirow{2}{*}{{IF}} &{3-min}&892.1&1.76&2.08\\
\textbf{} & {5-min}& \textbf{1064.0 *}& \textbf{2.09 *}& \textbf{2.47 *}\\
\textbf{} & {30-min}&763.8&0.37&0.55\\
\hline
\multirow{2}{*}{{IC}} &{3-min}&1243.1&1.19&1.81\\
\textbf{} & {5-min}& \textbf{1934.3 *}& \textbf{1.86 *}& \textbf{2.86 *}\\
\textbf{} & {30-min}&405.2&0.21&0.29\\
\hline
\end{tabular}
\label{tab:frequency}
\end{center}
\end{table}
\section{Conclusion}
\label{sec:Conclusion}
In this paper, we propose a novel quantitative trading method based on reinforcement learning with expert trajectory. The main contributions of this paper are as follows:
\begin{itemize}
\item According to the price trend at the next time step, we design a simple yet very effective expert trajectory. The agent can effectively learn the optimal policy by utilizing such prior experience to balance exploration and exploitation in the training process.
\item Unlike behavior cloning, which tries to learn the expert's policy using supervised learning, we introduce the TD-error generated from both the expert-environment interaction and the agent-environment interaction to optimize the Q-network for financial applications.
\item Compared with three typical technical analysis methods and two RL based methods, experimental results evaluated on two futures markets show that the proposed method is very promising for quantitative trading.
\end{itemize}
This is our first attempt to apply RL with expert trajectory to quantitative trading. There are many issues worth further study. For instance, we empirically believe that the expert-environment interaction and the agent-environment interaction are equally important for the TD-error in the proposed method; different weights for the two interactions should be further considered. In addition, we use a stop-loss strategy to limit the investor's loss on a position. Some risk measures, such as the Sharpe ratio and the Sortino ratio, should be considered in the reward function and/or the loss function in our future work.
\bibliographystyle{splncs04}